In August 2025, more than 200 contractors employed to improve Google’s AI tools—including Gemini and the AI Overviews search summary feature—were abruptly laid off. These workers, many with advanced degrees and working under GlobalLogic (a Hitachi-owned outsourcing firm), contend that the layoffs followed their complaints about low pay, job insecurity, and onerous performance expectations. Google says the cuts were part of a “ramp-down” in certain projects rather than a response to worker organizing. Companies that contract out these roles are being scrutinized for how they manage compensation and oversight, especially amid accusations of union-busting and concerns that contractors were doing critical work while lacking the benefits and security of full employment.
Sources: Wired, San Francisco Chronicle
Key Takeaways
– Contract Workers Doing Critical AI Work Still Lack Stability: The contractors laid off were part of what Google considers its “super rater” program—evaluating, refining, and rewriting AI outputs. Yet despite the complexity and importance of their work, many lacked traditional employment benefits and job security.
– Labor Disputes and Organizing Viewed as Catalyst: Many of the affected workers say the layoffs followed their efforts to raise concerns about compensation and to organize. Workers also allege subtler forms of retaliation: being met with silence, having internal communication suppressed, or simply being excluded.
– Automation and Cost Management vs. Ethics and Fair Pay: While Google frames the layoffs as part of project ramp-downs, broader tensions remain around the ethics of depending on lower-paid contractor work, potential replacement by automation, fairness in compensation (especially compared to in-house full-time AI raters), and responsibility for working conditions when work is routed through third-party firms.
In-Depth
Google’s latest round of layoffs has drawn sharp attention to the fault line between artificial intelligence ambition and human labor costs. In August 2025, over 200 contractors tasked with helping shape Google’s AI features were let go without warning. These weren’t casual roles: they were “super raters,” many holding master’s degrees or PhDs, hired through the Hitachi-owned outsourcing firm GlobalLogic to evaluate, refine, and improve products like Google’s Gemini chatbot and the AI Overviews summaries that sit atop search results.
What turned this layoff news from a routine cost-cutting story into a labor controversy is the pattern workers describe: pay rates that lag behind those of in-house counterparts, performance metrics that feel punishing, low job security, and a sense that speaking up (or organizing) could increase the risk of being cut. Contractors say that when they raised issues around compensation and working conditions, the response was not dialogue but disruption and, ultimately, dismissal. The opacity surrounding why projects are “ramped down,” and how decisions about who stays or goes are made, hasn’t helped either.
Google, for its part, says these layoffs flowed from project needs rather than union activity. Regardless, the episode raises hard questions about the supply chain of AI development: how much of the labor behind large-scale AI systems is carried out by people in precarious roles, how they’re compensated, and how much oversight or recourse they have. In a climate where AI is billed as the future, this story shows that the present still depends heavily on human effort—and that when that effort is outsourced and under-protected, tensions are likely to erupt.
For Google, balancing speed, cost, and ethical labor practices may be one of the defining tests in making its AI ambitions sustainable—not just technologically, but socially.