A newly introduced “Expert Review” feature in the widely used AI writing platform Grammarly is drawing backlash after it emerged that the system generates feedback allegedly “inspired by” well-known journalists, authors, and academics, often without their knowledge or consent. The tool analyzes a user’s document and presents AI-generated suggestions framed as commentary from recognizable figures whose published work is widely cited online. Critics argue that the feature risks misleading users by presenting advice that appears to come from real individuals, particularly because the interface can resemble the comment threads human editors use in collaborative platforms like Google Docs. Several experts named in the system reportedly learned only after launch that their identities were being used, and some have objected, saying the practice exploits their reputations and intellectual labor while potentially misrepresenting their editorial voice. Further concerns center on whether the AI actually grounds its suggestions in the cited expert’s work or simply generates generic feedback and attaches a recognizable name. The dispute feeds broader debates about AI companies mining publicly available writing to build commercial tools without explicit permission, and it raises questions about transparency, copyright, and the integrity of digital authorship as algorithmic systems increasingly attempt to replicate human expertise.
Sources
https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews
https://www.wired.com/story/grammarly-is-offering-expert-ai-reviews-from-your-favorite-authors-dead-or-alive
https://www.techradar.com/ai-platforms-assistants/grammarly-has-rebranded-as-superhuman-launching-a-new-ai-assistant-that-works-across-100-apps
Key Takeaways
- AI writing platforms are increasingly attempting to replicate or simulate the voice and editorial judgment of real experts using publicly available work as training data.
- Critics argue that attaching real names to algorithmic suggestions—without permission—risks misleading users into believing they are receiving guidance from the actual individuals.
- The controversy underscores larger concerns about intellectual property, transparency, and the ethics of AI systems commercializing human knowledge and reputation.
In-Depth
Artificial intelligence is rapidly transforming the way people write, edit, and communicate, but the latest controversy surrounding Grammarly’s “Expert Review” feature demonstrates how quickly technological innovation can collide with questions of ethics, ownership, and credibility. At the center of the debate is a relatively simple premise: the platform analyzes a document and produces AI-generated feedback supposedly inspired by the writing style or perspective of well-known thinkers, journalists, and academics. The feature aims to give users the sense that their work is being evaluated through the lens of recognized authorities.
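To make that premise concrete, the sketch below shows one common way “expert-style” feedback is produced: persona prompting, where a generic language model is given the document plus a named persona and a few style notes. This is an assumption about how such features typically work, not a description of Grammarly’s actual implementation; the `ExpertPersona`, `build_review_prompt`, and `call_llm` names are purely illustrative. Note that nothing in the prompt consults the named expert’s real judgment.

```python
# Hypothetical sketch of persona-prompted review generation. NOT Grammarly's
# implementation; it only illustrates how "expert-style" feedback can be
# produced by prepending a persona instruction to a generic model call.

from dataclasses import dataclass


@dataclass
class ExpertPersona:
    name: str          # public figure whose style is being imitated
    style_notes: str   # description distilled from publicly available writing


def build_review_prompt(document: str, persona: ExpertPersona) -> str:
    """Assemble a prompt asking a model to critique text 'in the voice of' a persona.

    The named expert is never consulted; the name and style notes simply
    steer a generic model's output.
    """
    return (
        f"You are giving editorial feedback in the style of {persona.name}.\n"
        f"Style notes: {persona.style_notes}\n\n"
        f"Review the following document and suggest improvements:\n{document}"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a call to any large language model API."""
    return "[model-generated feedback would appear here]"


if __name__ == "__main__":
    persona = ExpertPersona(
        name="A Well-Known Essayist",
        style_notes="favors short sentences, concrete verbs, minimal jargon",
    )
    draft = "Our product leverages synergies to empower stakeholders."
    print(call_llm(build_review_prompt(draft, persona)))
```

The gap this sketch exposes, between steering a model with someone’s name and actually reproducing that person’s editorial reasoning, is the gap critics describe in the paragraphs that follow.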
On paper, the concept sounds useful. Many writers, whether students, professionals, or researchers, want feedback from subject-matter experts. AI promises to democratize that process by synthesizing insights from large volumes of published material and delivering suggestions instantly. In theory, that could make expert-level editing accessible to millions of people who would otherwise never have access to such guidance.
The problem, critics say, is how the feature presents itself. Some individuals discovered that their names were being used as “experts” in the system without their knowledge or approval. Because the suggestions appear in familiar editing formats—similar to comments from a real editor—the experience can blur the line between human feedback and machine-generated analysis. Observers argue that the result risks giving users the impression that real experts are endorsing or participating in the tool when they are not.
Beyond the issue of permission lies a deeper concern: whether the AI can truly replicate the judgment of a real editor. Writing is not merely a collection of sentences; it reflects a person’s reasoning, priorities, and editorial instincts. Critics contend that a model trained on published text may mimic superficial stylistic elements but cannot replicate the nuanced decision-making that experienced editors apply to a document. When an AI suggestion is attributed to a recognizable figure, it can therefore misrepresent that person’s approach to writing.
The debate also touches on intellectual property. AI companies frequently train models using publicly available data, arguing that such material is part of the digital commons. But authors and scholars increasingly question whether corporations should be able to build commercial products on top of their work without compensation or consent. Some experts worry that allowing this practice could normalize the idea that human creativity can be freely harvested by algorithms and repackaged into subscription services.
The broader context is a technology industry racing to embed AI into everyday productivity tools. Writing assistants that once focused on correcting spelling or grammar now generate entire paragraphs, rewrite documents, and even simulate specialized reviewers. As companies compete to offer increasingly sophisticated features, the temptation to lean on recognizable names and reputations as marketing tools grows stronger.
For users, the lesson may be straightforward: AI tools can be helpful, but they should be treated as assistants rather than authorities. While automated suggestions can speed up drafting and editing, they cannot replace the judgment of real experts—or the responsibility of writers to think critically about the advice they receive.

