A journalist has filed a class-action lawsuit against Grammarly and its parent company, Superhuman, accusing the companies of using the names and identities of real writers to power an artificial-intelligence editing feature without their consent. The lawsuit centers on Grammarly’s now-disabled “Expert Review” tool, which generated writing suggestions presented as if they came from prominent journalists, authors, and academics, even though those individuals had no involvement in the process. The complaint argues that the company effectively appropriated the reputations and professional credibility of real writers to lend authority to AI-generated advice, potentially violating privacy and publicity rights under state law. Critics say the feature illustrates a troubling trend in the artificial-intelligence industry: companies racing to commercialize generative tools while treating the intellectual labor and identities of writers as raw material. After public backlash and legal threats, Grammarly shut down the feature and acknowledged that the rollout “missed the mark,” though the company has indicated it intends to challenge the legal claims.
Sources
https://techcrunch.com/2026/03/12/a-writer-is-suing-grammarly-for-turning-her-and-other-authors-into-ai-editors-without-consent/
https://www.wired.com/story/grammarly-is-facing-a-class-action-lawsuit-over-its-ai-expert-review-feature/
https://www.theverge.com/ai-artificial-intelligence/893451/grammarly-ai-lawsuit-julia-angwin
https://futurism.com/artificial-intelligence/grammarly-pulls-down-expert-review-feature
Key Takeaways
- A class-action lawsuit alleges that Grammarly used the identities of journalists, authors, and academics to generate AI editing advice without first obtaining their permission.
- The disputed “Expert Review” feature presented suggestions as being influenced by specific writers, giving the appearance that real professionals were involved when they were not.
- After mounting criticism and legal pressure, the company disabled the feature, highlighting a broader debate over whether AI firms can replicate or monetize a person’s voice, reputation, or expertise without consent.
In-Depth
The rapid expansion of artificial intelligence across the digital economy has triggered a growing clash between the technology sector and the people whose work and reputations fuel many of these systems. The legal challenge now facing Grammarly illustrates one such emerging fault line. At issue is whether a technology company can attach the names of real writers to AI-generated output without permission simply because their work or public profiles exist online.
The controversy centers on Grammarly’s “Expert Review” tool, an experimental feature designed to give users writing feedback “inspired” by well-known journalists, academics, and authors. When users submitted text for revision, the system could present editing suggestions attributed to a named professional figure. In practice, however, the individuals being cited had never agreed to participate and often had no knowledge the feature existed. Critics say the approach essentially converted respected writers into involuntary AI avatars, allowing the company to leverage their credibility as a marketing and product feature.
The lawsuit argues that this practice crosses a legal line. In many states, long-standing publicity and privacy laws prohibit companies from commercially exploiting a person’s name or likeness without consent. The complaint contends that Grammarly’s tool did exactly that, using the reputations of dozens—possibly hundreds—of writers to increase the perceived authority of its AI system. For journalists and authors who build careers on trust and professional reputation, the idea that their names could be attached to machine-generated advice they never wrote raises serious ethical concerns.
The backlash was swift once the feature became widely known. Writers and media figures publicly objected to seeing their identities embedded in software they had never endorsed. Some critics argued that the system not only misused names but also risked damaging professional credibility if the AI produced weak or misleading suggestions while appearing to speak in their voice.
Faced with mounting criticism and the threat of litigation, Grammarly ultimately disabled the feature and issued statements acknowledging that the rollout had fallen short of expectations. Company leadership said the tool was intended to help users explore influential perspectives and ideas but conceded that the execution failed to adequately address consent and control. The firm has suggested it may redesign the concept so experts can voluntarily participate and determine how their knowledge is represented.
The dispute reflects a broader cultural and legal struggle now unfolding around artificial intelligence. As AI models grow more capable of mimicking human language and style, companies are increasingly tempted to package expertise itself as a digital product. But the Grammarly case underscores a key principle that many writers and creators believe must remain intact: a person’s voice, identity, and reputation are not simply public data points that corporations can appropriate at will.
Whether the courts ultimately side with the plaintiffs or the technology company, the outcome could shape how AI firms build products in the future. If the legal system concludes that identities and professional voices cannot be commercially repurposed without consent, companies may need to rethink how they train and market AI systems built on the work of others. In the meantime, the lawsuit stands as another reminder that the race to deploy powerful AI tools is colliding with longstanding expectations about ownership, attribution, and respect for individual creators.