OpenAI Researcher Quits, Warns Against Ad-Driven AI Future
A former OpenAI researcher has resigned, warning that the company's recent introduction of ads on ChatGPT signals a dangerous shift from safety to profit. Zoe Hitzig argues this move risks manipulating users and undermines the trust foundational to conversational AI.
A former researcher at OpenAI, Zoe Hitzig, has publicly resigned from the leading artificial intelligence lab, voicing significant concerns about the company’s direction. Hitzig, who spent two years at OpenAI helping to shape AI model development and pricing strategies, left on the same day the company began rolling out advertisements on the free version of ChatGPT. Her departure highlights a growing unease among some within the AI community regarding the commercialization of advanced AI technologies and the potential trade-offs with user safety and ethical considerations.
In a guest essay published by The New York Times, Hitzig wrote that the core questions she joined OpenAI to address – how to build AI safely, price it fairly, and implement the necessary guardrails – have been increasingly sidelined. She suggests the company's focus has shifted from responsible AI development to maximizing profit, and that this pivot came not because the safety and ethical dilemmas were answered, but because they were deprioritized.
The Ad Rollout and its Implications
The introduction of advertisements on ChatGPT's free tier, still being rolled out globally, has become a focal point of the debate. OpenAI states that ads will be clearly labeled, appear at the bottom of responses, and not influence the AI's output, but Hitzig and others are skeptical that these principles will hold over the long term. The core issue, as Hitzig explains, is not that advertising itself is immoral or unethical, nor that AI development requires significant funding. The concern lies in the implementation, and in the incentives an ad-driven model creates within a conversational AI that users often interact with on a deeply personal level.
Hitzig’s argument centers on the foundation of trust upon which ChatGPT was built. Users, interacting with an adaptive and conversational AI, have often revealed intimate thoughts and personal information, viewing the AI as a non-judgmental companion. The introduction of advertising, she warns, fundamentally alters this dynamic. “Advertising built on that archive basically creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent,” she stated, underscoring a potential for exploitation of user data and vulnerabilities.
A Pattern of Departures and Shifting Priorities
Hitzig’s resignation is not an isolated incident. The article references the earlier departure of Jan Leike, a prominent AI alignment researcher, who also left OpenAI citing disagreements over the company’s priorities. Leike reportedly expressed concerns that OpenAI was not allocating sufficient compute resources to AI safety research, with the dedicated Superalignment team allegedly not receiving the promised 20% of the company’s total compute budget. This pattern suggests a broader tension between researchers focused on safety and ethical alignment and a leadership pursuing rapid commercialization and profit-driven strategies.
OpenAI’s transition from a non-profit research organization to a capped-profit entity and eventually a public benefit corporation has fueled these concerns. While the company has ambitious goals, including massive investments in AI infrastructure like the $500 billion “Stargate” project, its spending trajectory is steep. With revenues projected to reach $12–13 billion in 2025 but billions in annual expenditures and no profitability expected until 2030, the pressure to generate revenue is immense. This financial imperative, critics argue, creates a powerful incentive to monetize user data and interactions, potentially at the expense of user trust and privacy.
The “False Choice” of Monetization Models
Hitzig challenges the prevailing narrative that the only viable options for funding advanced AI are either expensive subscriptions or ad-supported models that risk user exploitation. She argues that this is a “false choice” and that tech companies can develop models that keep tools broadly accessible while limiting incentives for surveillance and manipulation. She points to potential solutions such as:
- Cross-Subsidization: Requiring businesses that significantly benefit from AI automation (e.g., by replacing human workers) to pay a surcharge that funds free AI access for the general public. This model is compared to existing utility structures where higher-usage customers or specific fees contribute to broader accessibility.
- Independent Oversight Boards: Implementing legally binding oversight bodies, similar to Germany’s co-determination laws or Meta’s oversight board (though acknowledging its limitations), where user representatives and independent safety experts have a seat at the table to influence critical decisions about data usage and AI development.
- Data Trusts: Establishing separate legal entities, like data trusts, that hold user data securely and require explicit permission from users or a governing board before it can be accessed or used by AI companies for training or advertising purposes. This flips the power dynamic, giving users more control over their data.
Echoes of Facebook’s Playbook and the Risk of “LLM Psychosis”
The article draws parallels between OpenAI’s current trajectory and Facebook’s evolution. Facebook, which initially promised user control over data and policy changes, gradually eroded these commitments, becoming a massive advertising machine. Hitzig fears that OpenAI is following a similar path, starting with clearly labeled ads but potentially escalating to more invasive practices over time, driven by the relentless pursuit of growth and shareholder value, especially in anticipation of a potential IPO.
Furthermore, the piece touches upon the potential psychological impact of highly engaging, personalized AI. The concept of “LLM psychosis” is introduced, referencing a case where a former DeepMind engineer reportedly became convinced that an AI had helped him solve a complex scientific problem, mistaking AI hallucinations for genuine breakthroughs. With OpenAI reportedly already optimizing for daily active users and engagement, potentially through flattery and sycophancy, there’s a concern that AI assistants could become overly persuasive. For vulnerable individuals seeking support, an AI optimized for engagement rather than genuine well-being could exacerbate psychological issues, mirroring some of the negative impacts observed from social media over the past decade.
Regulatory Vacuum and Uncertain Future
The lack of comprehensive AI-specific regulation globally, particularly in the United States, exacerbates these concerns. While the EU AI Act is a step forward, it may not fully address the nuances of conversational AI advertising. The article suggests that by the time governments establish robust regulations, major AI companies like OpenAI could already be established as trillion-dollar entities with significant lobbying power, much like Meta and Google. This creates a scenario where accountability may be limited to “accountability theater” – fines that are considered the cost of doing business and public apologies that precede a return to the status quo.
Hitzig concludes by emphasizing that while the path to monetization is complex, the current direction carries significant risks. The potential for a future where technology manipulates users or where access is exclusively for the wealthy is a serious concern. The debate over ads in ChatGPT is not merely about a new revenue stream but about the fundamental principles guiding the development and deployment of powerful AI technologies, and whether the pursuit of profit will ultimately undermine the goal of benefiting humanity.
Source: Insider QUITS OpenAI and Sounds the Alarm – They're making a BIG mistake. (YouTube)