A recent wave of unexplained automated traffic has swept across the internet, with websites of all sizes reporting significant spikes in visits traced to IP addresses in Lanzhou, China, and routed through Singapore. The surge, highlighted in reports from Semafor, Wired, and other outlets, has accounted for a surprising share of visits even to U.S. government websites in recent months, often with near-zero engagement that points to non-human actors. Experts are debating the intent behind the bots; many suspect companies are harvesting data to train artificial intelligence models and improve machine learning systems, but the exact origin and purpose remain unclear. While the bot visits don’t appear to cause direct harm, they skew analytics and impose operational costs on site operators. Several sites have tried blocking ranges of IP addresses to alleviate the issue, and the broader trend reflects a rapidly evolving internet in which automated traffic increasingly dominates online activity.
Sources
https://www.semafor.com/article/02/15/2026/mysterious-bot-traffic-sweeps-web
https://www.wired.com/story/made-in-china-niche-websites-are-seeing-a-surge-of-mysterious-traffic-from-china/
https://www.inc.com/ava-levinson/a-surge-of-strange-bot-traffic-from-china-has-website-owners-alarmed-heres-what-it-means-for-your-data
Key Takeaways
- A notable surge of bot traffic traced to China and Singapore has hit a wide range of websites, distorting analytics and raising questions about its purpose.
- The bots often show characteristics inconsistent with human browsing and are suspected to be part of automated scraping activity potentially linked to AI training.
- Website operators are scrambling to mitigate the impact, but the broader trend underscores how automated traffic continues to shape the internet’s landscape.
In-Depth
The global digital landscape is experiencing an unusual and significant uptick in automated internet traffic that experts and site owners are struggling to fully understand. Over the past few months, sites ranging from small personal blogs to large government portals have observed dramatic increases in visits that are not generated by real human users but by automated systems often referred to as bots. According to reporting from Semafor and corroborating accounts in Wired and Inc, many of these visits have been traced to IP addresses geographically associated with Lanzhou, China — a city known more for manufacturing than tech infrastructure — with connections routed through Singapore. What makes this influx particularly perplexing is not just the volume but the behavior of the traffic: it shows virtually no engagement on sites and typically doesn’t follow patterns expected of real users, such as clicking links or spending time on pages.
Website operators puzzled by these anomalies initially assumed they had unexpectedly tapped into new international audiences, only to discover through analytics tools that virtually all of the traffic came from non-human sources. For some sites, these bot visits now account for a significant portion of overall traffic, muddying analytics reports and complicating efforts to understand genuine user behavior. The issue extends beyond anecdotal cases: data compiled by analytics platforms show that a notable share of visits to even U.S. government websites in recent months can be attributed to this mysterious traffic pattern.
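The engagement signature described above (single page view, no interaction, near-instant exit) can be turned into a simple filtering heuristic. The sketch below is illustrative only: the field names (`pages`, `events`, `duration_s`) and thresholds are assumptions, not taken from any particular analytics product or from the reporting.

```python
# Hypothetical heuristic for flagging likely bot sessions in exported
# analytics data. Field names and thresholds are illustrative assumptions.

def looks_automated(session: dict) -> bool:
    """Return True if a session matches a bot-like engagement profile:
    at most one page view, no interaction events, near-instant exit."""
    return (
        session.get("pages", 0) <= 1
        and session.get("events", 0) == 0
        and session.get("duration_s", 0.0) < 1.0
    )

sessions = [
    {"pages": 1, "events": 0, "duration_s": 0.2},   # bot-like
    {"pages": 4, "events": 7, "duration_s": 95.0},  # human-like
]
flagged = [looks_automated(s) for s in sessions]
print(flagged)  # [True, False]
```

A real deployment would combine several such signals, since individually each one also matches some legitimate visits (e.g. a human bouncing off a page).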
While the motives behind the bot surge are not definitively known, there is growing speculation among technologists that companies engaged in artificial intelligence and machine learning research may be deploying these automated systems to harvest large amounts of publicly available data. The logic behind this theory is that training advanced AI models — particularly those used for natural language generation — often requires access to massive datasets drawn from the web. If that is the case, the bots may be operating as large-scale web crawlers, systematically scanning sites to build or update training repositories. However, reputable AI companies typically identify their bots so site owners can manage access, and many of the bots involved in this surge appear to be disguising themselves to evade traditional detection and blocking methods.
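The self-identification convention mentioned above is usually just a token in the User-Agent header, which site owners can check against before deciding how to treat a request. The tokens below (e.g. "Googlebot", "GPTBot") are real examples of crawlers that declare themselves; the list and the check itself are a minimal sketch, not a complete or reliable detection method, since the point of the reporting is that the surge's bots omit any such token.

```python
# Minimal sketch: check whether a request's User-Agent declares a known
# crawler. The token list is a small illustrative sample, not exhaustive.

KNOWN_CRAWLER_TOKENS = ("Googlebot", "Bingbot", "GPTBot", "CCBot")

def identifies_as_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in KNOWN_CRAWLER_TOKENS)

print(identifies_as_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True
print(identifies_as_crawler("Mozilla/5.0 (Windows NT 10.0)"))            # False
```

Because the header is freely spoofable, a declared token only tells you a bot is cooperative, and its absence tells you almost nothing, which is exactly why disguised crawlers are hard to block.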
The surge of bot traffic presents practical challenges for site owners. In addition to skewed traffic figures, the increased load on servers can consume resources, drive up hosting costs, and interfere with the performance of online services. Some administrators have responded by blocking traffic from specific IP ranges or geographic regions, though such measures are imperfect and can risk excluding legitimate users. Moreover, the trend highlights a broader issue in the modern internet era: a growing portion of online activity is driven by automated systems, not humans. As bots become more sophisticated and indistinguishable from human users, the tools and strategies for managing them must evolve accordingly.
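The IP-range blocking that some administrators have turned to can be sketched with Python's standard `ipaddress` module. The CIDR ranges below are documentation placeholders (TEST-NET blocks), not the actual ranges involved in the surge, which have not been published in the reporting summarized here.

```python
# Sketch of IP-range blocking using the standard library. The CIDR
# ranges are placeholder TEST-NET blocks, purely for illustration.
import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder (TEST-NET-3)
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder (TEST-NET-2)
]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.42"))  # True
print(is_blocked("192.0.2.1"))     # False
```

This illustrates the measure's weakness noted above: any legitimate visitor whose address happens to fall inside a blocked range is excluded along with the bots, and bots that rotate to new ranges slip through.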
The implications of this trend extend to data privacy, internet governance, and the economics of web publishing. If companies are indeed collecting data at scale for AI purposes without transparent consent, that raises questions about fair use, copyright, and the value exchange between content creators and technology platforms. Meanwhile, the rise of bot traffic underlines the ongoing transformation of the internet into a space where automated agents operate alongside — and often at the expense of — genuine human engagement. Further investigation, both by independent researchers and industry stakeholders, will be necessary to untangle the motivations behind this traffic spike and to develop frameworks that protect the interests of website owners, internet users and the broader digital ecosystem.