Grammarly Accused of Identity Theft, Faces Lawsuit

Grammarly faces a class-action lawsuit and backlash after allegedly using journalists' identities to train its AI without consent. The feature has been rolled back.


Grammarly’s AI Feature Sparks Outrage and Lawsuit Over Identity Misuse

In a move that has sent shockwaves through the tech and journalism communities, Grammarly, the popular writing assistant, is facing serious accusations of identity theft and misuse of personal data. A controversial new feature, intended to enhance its AI writing capabilities, allegedly used the likenesses and voices of prominent journalists without their explicit consent, leading to a class-action lawsuit and a swift rollback of the feature.

The Genesis of the Controversy

Grammarly’s latest innovation aimed to provide users with a more personalized and sophisticated AI writing review experience. The idea was to train its AI models on the writing styles and potentially the voices of well-known figures, including journalists like Casey Newton and Julia Angwin, and even members of The Vergecast. The goal, it seems, was to imbue the AI with the perceived authority and nuance of these respected personalities, offering users a seemingly expert-level critique of their work.

Journalists’ Identities Hijacked

The core of the controversy lies in Grammarly’s alleged execution of this feature. Reports emerged that the company began incorporating the identities of journalists into its AI, presenting it to users as if their writing were being reviewed by these real individuals. This was reportedly done without the prior knowledge or consent of the journalists involved. When confronted, Grammarly’s initial response was described as a “nothing burger statement” that failed to adequately address the severity of the accusations. Furthermore, the company offered only an opt-out by email, a process widely criticized as insufficient and out of step with standard data privacy practices: affected individuals had to actively request removal rather than being excluded by default.

Legal Repercussions and Public Backlash

The situation quickly escalated. Journalists at other major publications began inquiring about the feature, and legal counsel reportedly started investigating the implications. The gravity of the situation was underscored when Julia Angwin, a reporter respected for her investigative work, filed a class-action lawsuit against Grammarly. This legal action brought significant pressure to bear on the company, signaling a clear stance from those whose identities were allegedly misused.

Grammarly’s Retreat and Apology

Faced with mounting backlash and a formal lawsuit, Grammarly eventually capitulated. The company issued a full rollback of the controversial feature, accompanied by an apology. Its initial statement was dismissed by critics as perfunctory, but it was followed by a more comprehensive retraction acknowledging the error in judgment and the impact on the individuals involved. This swift reversal highlights the significant reputational damage and legal risk that come with such data privacy and identity misuse practices.

Who Should Care and Why?

This incident is a critical wake-up call for both consumers and technology companies. For consumers, it underscores the importance of understanding how their data, and potentially the data of others, is being used by AI services. Features that leverage personal identities, even for seemingly benign purposes, carry significant ethical and legal implications. Users should remain vigilant about privacy policies and opt-out options.

For technology companies, particularly those developing AI, this serves as a stark reminder of the ethical boundaries that must be respected. The rush to innovate and create more compelling AI experiences cannot come at the expense of individual rights and consent. Transparent data usage, robust consent mechanisms, and a proactive approach to privacy are paramount. Failure to do so can lead to severe legal consequences, loss of user trust, and significant brand damage.

Journalists and public figures are particularly vulnerable in this new AI landscape. Their established reputations and public personas can be exploited without their permission, potentially diluting their brand and even being used to lend false credibility to AI-generated content. This case emphasizes the need for stronger legal protections and industry-wide ethical guidelines regarding the use of public figures’ identities in AI training and deployment.

The Road Ahead for Grammarly

Grammarly’s quick retraction suggests they understand the severity of their misstep. However, the incident leaves a lingering question about the internal processes that allowed such a feature to be developed and deployed in the first place. The company will need to rebuild trust with its user base and the wider public by demonstrating a renewed commitment to ethical data handling and transparent AI development. The outcome of the class-action lawsuit will also be a significant factor in shaping Grammarly’s future and setting precedents for AI development in the industry.


Source: Grammarly used our identities without permission. #Vergecast (YouTube)

Written by

Joshua D. Ovidiu
