📥 Hello, and greetings from the Central Office
We value transparency and want you to see the real work happening behind the scenes each week.
Security & Extensions
• PDA AR Extension received a full round of testing including IP restriction checks.
• Code updates prepared for release after final verification.
• Folder protection analysis confirmed limitations under wp-content, with further research planned.
Support & Team Collaboration
• Support team delivered tailored solutions including unique download links and shortcode handling.
• Collaboration continued with Development, Onfolio PayPal, and Onfolio to resolve edge cases.
Looking Ahead to December
• PPWP Free lead magnet redesign and integration.
• Migration of 404 page and lead magnet flow to live sites.
• Support for Analytics Dashboard launch.
• More improvements to PPWP shortcode systems and opt-in logic.
Thank you for following our work. More updates coming soon.
You Can Block OpenAI From Scraping. But Not Google.

Cloudflare published its Year in Review analyzing crawler activity across 2025. The data reveals which AI companies crawl most aggressively, how little traffic they send back, and which bots creators are blocking in robots.txt.
Here's the problem: Googlebot crawls for both search indexing and AI model training. You cannot separate them. Block Googlebot's AI training and you lose Google search visibility. Other AI bots you can block individually. Google forces an impossible choice.
The Extraction Economics
Cloudflare tracked crawler activity in October and November 2025. Googlebot reached 11.6% of unique web pages in the sample. That's more than 3 times the pages seen by OpenAI's GPTBot at 3.6%, and nearly 200 times more than PerplexityBot at 0.06%.
The crawl-to-refer ratios show the real cost. Anthropic hit 100,000:1, meaning 100,000 page crawls for every 1 visitor sent back. OpenAI reached 3,700:1. Perplexity stayed under 400:1. Google's search maintained 3-30:1, but that combines both search indexing and AI training. The AI training portion extracts without returning traffic.
Run the math on your WordPress site. Say you have 1,000 posts and Anthropic's crawlers, at 100,000:1, generate 100 million page loads across them. That returns roughly 1,000 visitors total. Your server pays for the bandwidth. Anthropic trains its models. And future readers may never arrive at all, because the AI answered their question before they reached your site.
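As a back-of-envelope sketch, the arithmetic above generalizes to any of the reported ratios (the numbers come straight from the Cloudflare figures cited in this piece; Perplexity's "under 400:1" is treated as exactly 400 here):

```python
# Visitors referred back per volume of crawler page loads,
# using the crawl-to-refer ratios reported by Cloudflare.
ratios = {
    "Anthropic": 100_000,     # 100,000 crawls per visitor referred
    "OpenAI": 3_700,
    "Perplexity": 400,        # "under 400:1" -- rounded up to 400
    "Google search": 30,      # upper end of the 3-30:1 range
}

crawls = 100_000_000  # 100 million page loads, as in the example above

for bot, ratio in ratios.items():
    visitors = crawls // ratio
    print(f"{bot}: {visitors:,} visitors per {crawls:,} crawls")
```

At the same crawl volume, Anthropic's ratio returns 1,000 visitors where Google's search crawling would return over 3 million.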
Here's the question site owners face: does blocking ClaudeBot hurt search rankings? No. ClaudeBot serves only AI training. Block it and you lose nothing except inclusion in Claude's training data. But Googlebot? Block it and your site disappears from Google Search results—which still drives 40-60% of traffic for most content sites.
What You Can Actually Control
WordPress site owners can add this to robots.txt:
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

That blocks OpenAI, Anthropic, and Common Crawl from AI training while keeping Google search visibility intact. Cloudflare's analysis of 3,900 top domains shows creators ARE blocking these bots: GPTBot, ClaudeBot, and CCBot had the highest number of full disallow directives.
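One way to sanity-check directives like these before deploying is Python's standard-library robots.txt parser. This is a sketch with the rules inlined; on a live site you would point `set_url()` at your actual robots.txt instead, and `example.com` is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# The directives from the article, inlined for a quick check.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

url = "https://example.com/some-post/"
for bot in ("GPTBot", "ClaudeBot", "CCBot", "Googlebot"):
    allowed = parser.can_fetch(bot, url)
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

Googlebot comes back allowed because no rule names it and there is no `User-agent: *` fallback, which is exactly the posture described above: AI training bots out, search indexing in.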
But add User-agent: Googlebot with Disallow: / and you trade away organic search traffic along with AI training. Most sites can't afford to lose Google referrals. Google's dual-purpose crawler creates a competitive advantage: creators must allow AI training to maintain search visibility.
Platform Dependency Eliminates All Leverage
The same content published in two places faces different extraction rules:
An owned WordPress site gives you partial leverage. You can block GPTBot, ClaudeBot, and CCBot individually while keeping search visibility. You control the robots.txt file and decide which bots to allow.
Medium creators? Zero leverage. The platform allows all major AI crawlers by default, with no robots.txt access. Substack's the same. Social platforms? They all opted in. Creators on these platforms have no way to block AI training on their content. They depend entirely on platform decisions.
User-action crawling grew 15x in 2025. When someone asks ChatGPT a question, OpenAI's ChatGPT-User bot visits pages in real-time. Traditional crawling scrapes content for model training. User-action crawling happens live during queries. Both extract content. Neither requires compensation. Both depend on whether you control robots.txt.
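If you control your own server, you can see the two crawl types side by side in your access logs. A minimal sketch of classifying hits by user agent (the sample user-agent strings below are illustrative, not verbatim; the bot name tokens themselves come from each vendor's published documentation):

```python
# Classify crawler hits as training crawls vs. live user-action fetches.
# Bot name tokens are the published identifiers; the sample UA strings
# are simplified stand-ins for real log entries.
TRAINING_BOTS = ("GPTBot", "ClaudeBot", "CCBot")
USER_ACTION_BOTS = ("ChatGPT-User",)

def classify(user_agent: str) -> str:
    # Check user-action bots first so overlapping tokens can't misfire.
    if any(tok in user_agent for tok in USER_ACTION_BOTS):
        return "user-action"
    if any(tok in user_agent for tok in TRAINING_BOTS):
        return "training"
    return "other"

sample_agents = [
    "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)",
    "Mozilla/5.0 (compatible; ChatGPT-User/1.0; +https://openai.com/bot)",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
]
for ua in sample_agents:
    print(classify(ua))
```

Training crawls cluster in bursts regardless of your audience; user-action fetches track real queries in real time. Separating them shows which kind of extraction your server is actually paying for.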
So Ask Yourself
Does your content live on infrastructure you control? Can you edit robots.txt? Do you own the domain hosting your 1,000 posts, or does it live on Medium, Substack, or LinkedIn?
AI companies crawl at vastly different ratios. Anthropic extracts 100,000 pages per visitor returned. Google? Just 3-30 pages per visitor. But Google's the only one forcing the binary choice between AI training and search visibility.
A WordPress site with 5,000 posts? The owner blocks ClaudeBot and GPTBot. Keeps Googlebot because losing Google traffic would cut revenue 50%. A Medium publication with 5,000 posts can't block anything. Same content. Different leverage entirely.
Own your infrastructure or accept platform extraction rules. No middle ground exists.

YouTube adds comments to Shorts ads and lets creators link to brand sites — Advertisers can now turn on comments for eligible Shorts ads. Creators posting branded content can link directly to brand websites. The constraint: YouTube controls which features creators access and when. Platform infrastructure determines what you can build.
Optimize for traditional search, AI search, and social simultaneously — E-E-A-T principles apply to AI search as much as traditional SEO. Multi-platform approach spans YouTube, TikTok, Reddit, ChatGPT, Perplexity. Brand mentions are leading factors in AI visibility. The shift: from blue-link SERPs to authoritative structured data AI models can quote.
Creators need repeatable formats to build IP beyond trend-chasing — Hot Ones, Chicken Shop Date, Call Her Daddy built around formats unmistakably theirs. Perfectly Imperfect hit 150K subscribers with simple format: Hot Recs, Cool People, twice weekly. Now operates 90K-user network. Owning format leads to owning IP, which creates business flywheel. Platform trend-chasing builds no lasting value.
AI-generated content floods platforms while scams steal creator likenesses — AI TikTok compilations hitting 5M views generate $1K from Creator Fund. Scam accounts steal female creator videos, replace with AI avatars for OnlyFans. Platforms failing to enforce labeling rules. Meta, Amazon, DirecTV offering generative AI ad services. The trajectory: platforms sell ads directly to clients, cutting out creators entirely.
SEO experts recommend brand mentions and multi-platform presence for 2026 — Kevin Indig: "Visibility is the result of having the right content, engaging on the right channels, and being mentioned in the right places." Andrea Volpini: "Visibility will depend on agentic readiness: clean structured data, stable identifiers, precise ontologies, knowledge graphs." Search fragmented across Google, ChatGPT, Reddit, LinkedIn, niche forums. Single-platform optimization no longer sufficient.

The Verge's creator economy series revealed MrBeast's YouTube operation spent three straight years in the red, including negative $110 million in 2024. All viral videos function as marketing for the real business: chocolate bars at Walmart. Jake and Logan Paul sell bottled water now. Every major creator eventually pivots to selling products because it's more lucrative than chasing platform views and brand deals.
Michael
Operator @WP Folio - now WP Defense Lab. Same Plugins. Different Name.