Researchers from BetterUp Labs in collaboration with Stanford’s Social Media Lab have coined the term “workslop” to describe AI-generated content that looks polished but lacks the real substance needed to move a task forward. Their study, published in Harvard Business Review, finds that 40 percent of U.S. desk workers surveyed reported receiving workslop in the past month, with each incident taking nearly two hours to fix. The researchers argue that this trend helps explain why 95 percent of organizations say they see zero return on their AI investments: workslop shifts effort downstream to coworkers who must correct or redo the output rather than saving time. The study urges leaders to model thoughtful AI use, enact clear guardrails, and promote norms that ensure AI is used as a complement to — not a substitute for — human judgment.
Sources: Harvard Business Review, BetterUp
Key Takeaways
– Workslop is AI output that appears competent but fails to contribute meaningful content, creating hidden burdens on recipients who must interpret, correct, or redo work.
– Despite widespread adoption, 95 percent of organizations report zero measurable ROI on AI — workslop could be a driving explanation for that gap.
– Leaders must set deliberate norms, train teams in good AI-use practices, and require human oversight rather than assuming AI output is finished work.
In-Depth
In an age when generative AI is being integrated into offices and workflows everywhere, there's a growing tension between expectation and reality. The promise has long been that AI would accelerate work, reduce drudgery, and free humans to tackle higher-value tasks. But in practice, many organizations aren't seeing those gains, and the concept of "workslop" may offer a stark explanation.
Workslop is more than a dismissive jab. The term, coined by the researchers and introduced in Harvard Business Review, labels a phenomenon where AI-generated work (reports, memos, slides, even code) looks clean, complete, and professional at first glance but lacks meaningful depth, context, insight, or reliability. In short: it masquerades as value but often fails to deliver it. According to the study, among 1,150 full-time U.S. employees surveyed, 40 percent said they had received workslop in the prior month. Each incident typically demands nearly two hours of cleanup or correction by whoever receives it. At scale, that's a substantial drag on productivity.
One of the more striking data points: despite the surge of AI adoption in enterprises, 95 percent of organizations report seeing no return on those investments. Generative AI hasn't been the across-the-board productivity booster many hoped for; instead, workslop helps explain the disappointment. When AI outputs require human rework, the promised time savings evaporate, and teams waste time in cycles of review, revision, and correction.
What fuels workslop? It often emerges when employees treat AI tools like magic wands rather than assistants: passing along AI output as if it were final, without human judgment or critical review. In many cases, users are undertrained in prompt design, lack domain context, or don't apply sufficient oversight. That gap between tool and expertise leads to superficial results, and the cumulative effect is that downstream coworkers inherit the burden.
To counter workslop, the authors stress the importance of leadership modeling how to use AI with intention rather than abandon. Guardrails matter: teams need guidelines about when AI output is acceptable, what quality checkpoints should exist, and where humans must intervene. Cultivating a mindset that treats AI as a collaborator rather than a replacement helps maintain accountability and context. Teams should encourage iterative review, push back on lazy adoption, and embed training so that staff internalize what constitutes "good" AI-augmented work.
In short: AI holds enormous potential, but its value doesn’t come automatically. Without guardrails, human oversight, and a clear view of quality, adoption can backfire — yielding more rework than reward. Workslop is a warning signal: organizations that don’t pay attention will find themselves spending more time sorting AI’s output than enjoying its promises.