Grammarly’s latest artificial intelligence feature, designed to simulate feedback from well-known experts, has triggered criticism from academics, journalists, and technology observers. Critics say the system risks misleading users by attaching the names of real scholars and writers to AI-generated commentary even though those individuals were never involved in producing the feedback.
The controversy centers on Expert Review, one of several AI agents Grammarly introduced as part of its expanded generative AI platform in 2025. The feature analyzes a user’s writing and produces critique framed as if it came from prominent figures in the relevant field. While the company says the tool draws inspiration from publicly available works rather than direct expert participation, observers argue that the way it presents these voices may create an impression of authority that does not exist.
Expert Review is integrated directly into Grammarly’s writing interface. Users drafting a document such as a university essay, research paper, or marketing proposal can click an “expert” button in the sidebar to activate the tool.
Once activated, the system analyzes the text and produces feedback framed in the voice of a relevant authority. For example, a scientific essay may receive commentary attributed to well-known scientists, while a technology piece may generate responses styled after prominent technology journalists or analysts.
According to Grammarly, the feature runs on its core large language model and draws inspiration from publicly available scholarship and writing. The company says the AI highlights ideas associated with influential thinkers rather than producing literal quotes or representing real-time participation by those individuals.
Company documentation states that references to experts are informational. Grammarly notes that the names appearing in the feedback do not indicate endorsement, affiliation, or direct involvement with the product.
The company describes the tool as a way to help users explore perspectives from influential voices whose work they may want to study further.
The controversy began when historians and journalists examined how the feature presented expert feedback.
German historian Verena Krebs was among the first to question the system publicly after discovering that the AI generated comments attributed to scholars who had died years earlier. Krebs described the experience as unsettling and said it felt as if the software was metaphorically bringing deceased academics back to critique modern writing.
Additional scrutiny came when reporters at The Verge tested the feature. In one example, the tool generated feedback labeled with the names of Verge journalists, including editor-in-chief Nilay Patel and reporters David Pierce, Sean Hollister, and Tom Warren.
None of those journalists had participated in the feature or granted permission for their names to appear in the feedback.
Coverage from TechCrunch later highlighted the same issue and reported that none of the individuals whose names were used appeared to be involved in the creation of the Expert Review system.
These discoveries quickly sparked questions about how AI systems should represent real people’s identities and intellectual work.

Grammarly and its parent company Superhuman say the feature does not claim to involve real experts directly.
Company representatives explain that the system generates insights inspired by the public writings of well-known thinkers. According to the company, the AI highlights perspectives associated with those figures because their work is widely cited and publicly available.
Alex Gay, a marketing executive at Superhuman, said the feature references these individuals because their scholarship has shaped the fields users are writing about.
Grammarly also points to its documentation, which clarifies that the references are intended to provide context and inspiration rather than represent actual expert participation.
Another spokesperson described the system as surfacing influential voices that users may want to explore through their original publications.
From Grammarly’s perspective, the tool functions more like a recommendation layer that points users toward influential thinkers than a literal panel of experts reviewing a document.
Despite these explanations, critics argue that the product’s name and interface create a misleading impression.
Historian C.E. Aubin told WIRED that labeling the system as “Expert Review” suggests real expert involvement when none actually exists.
“These are not expert reviews,” Aubin said. “There are no experts involved in producing them.”
Researchers and ethicists highlight several concerns.
By attaching comments to recognizable names, the feature may give AI-generated feedback the appearance of expert validation. In reality, the commentary is produced by a predictive model rather than a human specialist.
Using the names and reputations of scholars and journalists without permission raises concerns about name, image, and likeness rights. Critics argue that people’s identities are being used to enhance the credibility of a commercial product without consent.
The use of deceased scholars in the system has been particularly controversial. Some academics argue that invoking historical figures as if they were providing contemporary advice is disrespectful, especially when families or estates have not approved such use.
Cybernews described the practice as AI “dabbling in black magic,” suggesting that the system metaphorically resurrects scholars to act as reviewers.
The Grammarly controversy has also become part of a larger discussion about AI persona design.
Many generative AI products simulate recognizable styles or personalities. Chatbots frequently imitate historical thinkers, celebrities, or fictional characters to make interactions feel more engaging.
Critics warn that when these personas are tied to real individuals, especially living people, the distinction between inspiration and impersonation can become blurred.
In the case of Grammarly’s Expert Review feature, the AI does not simply mimic writing style. Instead, it attaches the names of specific experts directly to the feedback generated for a user’s document.
Critics say that design choice creates the strongest impression that real experts are participating in the review.
The dispute also raises emerging legal questions around AI training and identity use.
Because generative AI models are trained on massive collections of public data, including books, articles, and academic papers, companies face increasing scrutiny over whether these systems rely on individuals’ intellectual work without compensation.
When those models also generate output tied to real names, the issue becomes even more complex.
Some legal scholars say the practice may fall into the broader category of name, image, and likeness rights, which regulate how a person’s identity can be used commercially.
While Grammarly argues that the system references widely cited work, critics say the interface may cross ethical boundaries by presenting the feedback as if it were authored by those individuals.
The feature has attracted significant attention across technology media.
WIRED framed the issue as an example of how far AI tools are pushing persona-based interactions and questioned whether attaching expert names to machine-generated commentary is inherently deceptive.
The Verge’s coverage focused on the use of journalists’ identities without permission, while TechCrunch criticized the feature for presenting the appearance of expert involvement without actual experts.
Other outlets note that disclaimers buried in documentation may not be enough to prevent users from interpreting the results as authoritative commentary.
The debate reflects a broader shift in how AI companies design their products, as they increasingly add personality layers that make generative systems appear more human and authoritative.
So far, Grammarly has not announced plans to remove the Expert Review feature. Instead, the company continues to emphasize that the system does not claim real expert participation and that references to scholars are informational.
Critics argue that the core issue remains unresolved.
The name “Expert Review,” they say, suggests human participation that does not actually exist. At the same time, the use of real scholars’ and journalists’ names raises questions about consent, attribution, and intellectual ownership.
As AI writing tools become more integrated into education and professional work, the debate surrounding features like Expert Review may influence how companies design future systems and how they represent authority in AI-generated feedback.
For now, the controversy highlights a growing challenge in the AI era: companies must find ways to draw on influential thinkers without creating the impression that those thinkers are actually speaking.