Neil Vogel, CEO of People, Inc., recently accused Google of abusing its web crawling tools by indexing publisher content for search while simultaneously harvesting that same content for its own AI products. Vogel argues that Google uses a single crawler for both tasks and refuses to split them, effectively preventing publishers like People, Inc. from opting out of AI harvesting without also sacrificing search traffic, which once accounted for roughly 65% of the company’s traffic but has dropped into the “high-20s” percent range over recent years. Vogel claims that Google is an intentional “bad actor” in this regard: blocking the crawler that feeds its AI also blocks indexing in its search engine, leaving publishers caught between maintaining traffic and preserving content rights.
Sources: TechCrunch, AInvest, HyperAI
Key Takeaways
– Because Google uses a single crawler, publishers can’t block its AI content harvesting without also losing visibility in Google Search.
– The share of People, Inc.’s traffic driven by Google Search has fallen from about 65% three years ago to the high 20s today, underscoring how dependent digital publishers are on search referrals.
– Vogel frames the issue as ethical, not just technical or policy-related: he labels Google “an intentional bad actor,” alleging unfair competition because it uses publisher content in its AI tools without a separate crawler or any compensatory arrangement.
In-Depth
In the current debate between publishers and technology giants, Neil Vogel’s remarks represent a forceful pushback from legacy media. As CEO of People, Inc., Vogel has thrown down the gauntlet: Google is abusing its dominance by using the same crawler (or web bot) both to index content for search and to harvest that same content for its own artificial intelligence products.
To Vogel, this dual use isn’t just inefficient design; it is strategic leverage that gives Google an unfair advantage over publishers who produce original content. If a publisher blocks the crawler to protect its content, that same action keeps the content out of Google Search. The consequence: either protect your content and lose search traffic, or allow the crawler and lose control over your work. Vogel suggests this is no accident: according to him, Google “knows this, and they’re not splitting their crawler. So they are an intentional bad actor here.”
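To make the all-or-nothing mechanics concrete, here is a minimal sketch of how robots.txt rules bind to a crawler’s user-agent name rather than to the purpose of the fetch. The robots.txt contents and URLs are illustrative, not People, Inc.’s actual configuration; the sketch only assumes standard robots.txt semantics as implemented in Python’s urllib.robotparser.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt, not any publisher's real file: a single Disallow
# rule bound to one user-agent token.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# robots.txt only sees the crawler's name, not what the fetched page is later
# used for, so one rule covers search indexing and AI harvesting alike.
print(parser.can_fetch("Googlebot", "https://example-publisher.com/article"))  # False
print(parser.can_fetch("Googlebot", "https://example-publisher.com/"))         # False
```

Under this standard mechanism, the only way to refuse one use of a page is to refuse the crawler entirely, which is exactly the trade-off Vogel describes.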
The financial stakes are high. Vogel says that Google Search’s share of People, Inc.’s traffic has fallen from about 65% three years ago to the high 20s now. The decline matters because search traffic remains a major source of audience, advertising revenue, and discoverability for digital publishers. As traffic falls, publishers face pressure to adapt: negotiate content deals, pursue legal or regulatory action, or accept a diminished role in a Google-dominated ecosystem. Vogel’s accusations also feed into a broader concern within the media industry: that AI tools trained (explicitly or implicitly) on publishers’ content can compete with or devalue that content without compensating creators or giving them real control.
From a technical standpoint, the central question is whether Google can or should split its crawling into two distinct operations, one strictly for indexing and search, the other for AI ingestion. Splitting them would let publishers block one while allowing the other, preserving control. But such a move presents engineering, legal, and business complexity. It also raises difficult questions about what counts as legitimate use of publicly posted content, what rights publishers retain, and how compensation should work when companies build downstream products on that content. Vogel appears to want leverage in negotiation, whether through contracts, regulation, or public pressure. Whether he will get that leverage remains uncertain, since Google holds vast infrastructure, reach, and influence over how the open web gets discovered by end users.
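For illustration, the split Vogel is asking for could, in principle, be expressed through robots.txt alone if crawling were exposed under two separate user-agent tokens. The names “SearchCrawler” and “AICrawler” below are invented for the sketch, not real Google crawlers, and the example is an assumption about how such an arrangement might look rather than a description of anything Google offers today.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical two-token arrangement; "SearchCrawler" and "AICrawler" are
# made-up user-agent names used only to illustrate the idea of a split.
SPLIT_ROBOTS_TXT = """\
User-agent: SearchCrawler
Allow: /

User-agent: AICrawler
Disallow: /
"""

parser = RobotFileParser()
parser.parse(SPLIT_ROBOTS_TXT.splitlines())

# With separate tokens, a publisher could stay visible in search results...
print(parser.can_fetch("SearchCrawler", "https://example-publisher.com/article"))  # True
# ...while declining to feed the AI pipeline.
print(parser.can_fetch("AICrawler", "https://example-publisher.com/article"))      # False
```

The mechanics are trivial; the dispute is over whether Google is willing to offer such a split and on what commercial terms.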
Ethically, Vogel’s framing hinges on fairness and on the consequences, intended or not, of dominant platforms’ control over discovery. His warning isn’t just about traffic metrics; it’s about how the incentives digital publishers face have shifted from creating value for readers to creating content for clicks and appeasing search algorithms, because until recently search visibility was the main path to an audience. If that path narrows or changes shape (say, via AI overviews or answers that reduce clickthroughs), then the model many publishers rely on is threatened. Vogel’s critique echoes others in the industry: media alliances, for instance, have accused Google of rolling out features such as “AI Mode” and AI-generated summaries that cut user traffic to original publisher sites.
In short, Vogel’s challenge to Google underscores a conflict over who profits from content in the machine learning era and who controls visibility in the information ecosystem. The outcome of this dispute could affect not only large publishers like People, Inc. but also smaller content creators and news organizations that depend heavily on search referrals. Whether through regulation, legal challenges, contractual terms, or new business models, publishers will likely press for transparency, compensation, or structural change. Whether Google adjusts its practices, or whether economic and regulatory pressure grows strong enough to force a change, remains to be seen.

