A recent joint study by Stanford University and BetterUp Labs, reported in the Harvard Business Review, finds that many workers are drowning in what researchers call “workslop”: AI-generated content that looks polished but lacks the substance needed to advance a task, forcing colleagues to spend extra time decoding, correcting, or redoing it. A striking companion finding from MIT’s Project NANDA (The GenAI Divide: State of AI in Business 2025) reveals that despite the $30–40 billion enterprises have invested in generative AI tools, 95% of organizations report zero measurable return on those investments. These problems stem not only from poor tools but from how AI is deployed: unclear use cases, a lack of guardrails, weak feedback loops, and the shifting of cognitive labor onto workers.
Sources: Gizmodo, 404 Media, Fortune
Key Takeaways
– “Workslop” costs more than it seems: Polished AI outputs that lack substance aren’t just minor annoyances; they impose real productivity losses and rework time, and they erode trust and collaboration among employees.
– Massive AI investment, minimal returns: Roughly 95% of enterprise generative AI pilots fail to deliver measurable profit or productivity impact, often because of deployment failures rather than technical limitations.
– Alignment & execution matter more than hype or tool sophistication: Success tends to belong to organizations that define clear use cases, provide proper guidance, build feedback mechanisms, integrate AI into workflows, and avoid over-mandating AI usage without structure.
In-Depth
Over the past year, generative AI has become pervasive in workplaces: writing emails, creating presentations, summarizing reports, automating mundane tasks. But a growing body of research suggests that the reality of AI adoption often falls far short of its promise. The Harvard Business Review’s research with Stanford and BetterUp uncovered a phenomenon now dubbed “workslop”: AI-generated work that looks serviceable but fails to meaningfully advance a task. For many employees, this means spending extra time untangling or fixing outputs: identifying missing context, correcting errors, and clarifying false or vague claims. The costs are both cognitive and interpersonal: workers view colleagues who submit workslop as less trustworthy, creative, and capable, which undermines team morale and mutual respect.
In parallel, MIT’s GenAI Divide report paints a sobering picture: businesses are pouring tens of billions of dollars into generative AI, yet only about 5% of projects reach a stage where they deliver measurable value in terms of profit or operational impact. Key barriers include a mismatch between the generic nature of many AI tools and the specific needs of particular workflows, weak or nonexistent feedback loops, a lack of adaptation over time, and organizational structures unprepared to integrate AI beyond the experimental or pilot phase.
All of this suggests that success with workplace AI isn’t merely a matter of buying the latest large language model or investing heavily; it requires strategy, governance, and human oversight. Companies that define narrow, realistic use cases; provide training and oversight; build in feedback and review mechanisms; and ensure that AI tools amplify human effort rather than shifting hidden costs downstream are far more likely to succeed. For those that don’t, the combination of workslop and failed pilots can degrade productivity, employee satisfaction, and ultimately the long-run return on AI investments.

