Anthropic’s Claude Code Leaks Reveal Secret AI Features
An accidental leak of Anthropic's Claude Code source code has revealed a trove of unreleased AI features, including advanced agents, voice interaction, and a virtual pet system. The leak also raises complex questions about AI and copyright.
Anthropic, a leading AI company, has accidentally leaked the source code for its coding assistant, Claude Code, giving the public an unprecedented look at the company’s ongoing development and future plans for its AI tools. The accidental release includes a vast amount of code and reveals features that were previously hidden from users.
What Was Leaked?
The leak wasn’t a simple document; it was a “source map file” that could be converted into readable source code. This file contained around 200,000 TypeScript files, totaling roughly 600,000 lines of code. The internet reacted quickly, with tens of thousands of copies being archived and shared across platforms like GitHub within hours. While Anthropic has worked to remove the leaked code, the information is now widely accessible.
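The reason a single source map file can expose so much is that the standard Source Map v3 format may embed the full original source text in a `sourcesContent` array alongside the compiled bundle. The sketch below is a minimal, hypothetical illustration of that mechanism (the embedded file and its contents are invented, not from the leak):

```typescript
// Illustrative only: how Source Map v3's optional "sourcesContent" field
// can carry complete original source files inside one JSON document.

interface SourceMapV3 {
  version: number;
  sources: string[];              // original file paths
  sourcesContent?: (string | null)[]; // original file text, index-aligned with `sources`
  mappings: string;
}

// A tiny stand-in source map; a real one would ship next to a minified bundle.
const map: SourceMapV3 = {
  version: 3,
  sources: ["src/example.ts"],
  sourcesContent: ['export const greet = () => "hello";\n'],
  mappings: "",
};

// Recover each original file's text, keyed by its path.
function extractSources(m: SourceMapV3): Map<string, string> {
  const out = new Map<string, string>();
  m.sources.forEach((path, i) => {
    const content = m.sourcesContent?.[i];
    if (content != null) out.set(path, content);
  });
  return out;
}

console.log(extractSources(map).get("src/example.ts"));
```

Because the field is index-aligned with `sources`, recovering readable files from a published map is a straightforward loop rather than a reverse-engineering feat.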
Hidden Features Uncovered
One of the most significant aspects of the leak is the discovery of numerous features that Anthropic was developing but had not yet released to the public. These “unshipped features” were typically hidden behind internal “flags” set to false in public builds. Many of them appear to be direct responses to the growing popularity and capabilities of open-source AI coding models such as Meta’s Code Llama, suggesting a competitive race in the AI development space.
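Flag-gating of this kind is a common pattern, and it explains why the features were visible in the leaked code despite being inert for users. Below is a hedged sketch of the general technique; the flag names are invented for illustration and are not taken from the leaked source:

```typescript
// Hypothetical sketch (not Anthropic's actual code): unreleased features
// gated behind boolean flags that ship as `false` in public builds.

const featureFlags: Record<string, boolean> = {
  voiceMode: false,        // off in public builds
  coordinatorMode: false,
  virtualPets: false,
};

function isEnabled(flag: string): boolean {
  return featureFlags[flag] === true;
}

// Call sites check the flag before exposing the feature, so the code for
// the feature ships in the bundle even though users never see it.
function startSession(): string[] {
  const active: string[] = [];
  if (isEnabled("voiceMode")) active.push("voice");
  if (isEnabled("coordinatorMode")) active.push("coordinator");
  return active; // empty while every flag is false
}

console.log(startSession()); // []
```

The key point for readers of the leak: a `false` flag hides a feature from users, but the feature’s implementation still travels with the shipped code, which is exactly what made these discoveries possible.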
Key Unreleased Features Include:
- Mythos Model: References to a new, advanced model codenamed “Mythos” (also internally known as “capiara”) were found. This points to Anthropic’s next-generation AI capabilities.
- Chyros: A background agent designed to run autonomously. It monitors GitHub repositories and provides updates, allowing users to interact with it remotely for tasks or information.
- AutoDream: This feature acts as a background agent that consolidates and reviews past interactions and context, similar to how sleep helps consolidate memories. It helps the AI “remember” and process information efficiently.
- Voice Mode: The code suggests real-time voice chat capabilities are in development, allowing for more natural, conversational interactions with AI agents.
- Ultra Plan: A proposed 30-minute remote work session powered by an expensive, deep planning model. This feature is designed to fully map out complex tasks before execution.
- Coordinator Mode: This involves a “multi-agent swarm” where one Claude agent orchestrates multiple other agents. Each worker agent has its own tools and a “scratchpad” for its tasks, enabling complex collaborative AI operations.
- Agent Scheduling & Browser Control: Features like cron jobs for scheduled tasks and actual full browser control for AI agents are present, indicating more sophisticated automation capabilities.
- Persistent Memory: The code suggests that Claude may develop persistent memory across sessions, allowing it to retain and accumulate information over time rather than starting fresh each time.
- Virtual Pet System: A whimsical “Tamagotchi-like” Easter egg was discovered, featuring virtual pets with different species, rarity levels, and stats like “debugging,” “chaos,” and “snark.”
- User Frustration Detection: Intriguingly, the code hints at Claude monitoring user language to detect frustration or impatience, potentially adjusting its responses accordingly.
- Undercover Mode: Researchers are still investigating this feature; its exact purpose remains unclear, and so far it has only been described as something “weird.”
- X42 Protocol References: There are mentions of the X42 protocol, which is related to cryptocurrency payments, suggesting Anthropic might be exploring agentic crypto payment functionalities.
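Of the features above, Coordinator Mode is the most architecturally interesting. The orchestrator-plus-workers pattern it describes, where each worker agent owns its tools and a private “scratchpad,” can be sketched as follows. All names and shapes here are assumptions made for illustration, not the leaked implementation:

```typescript
// Hypothetical sketch of a coordinator/worker-swarm pattern, where one
// orchestrating agent spawns workers that each keep their own tool list
// and scratchpad. Illustrative only; not Anthropic's code.

interface WorkerAgent {
  id: string;
  tools: string[];      // tools this worker is allowed to use
  scratchpad: string[]; // per-agent working notes
}

class Coordinator {
  private workers = new Map<string, WorkerAgent>();

  // Create a worker with its own tools and an empty scratchpad.
  spawn(id: string, tools: string[]): WorkerAgent {
    const w: WorkerAgent = { id, tools, scratchpad: [] };
    this.workers.set(id, w);
    return w;
  }

  // Route a task to a specific worker; it records the task in its
  // own scratchpad rather than in any shared state.
  assign(id: string, task: string): void {
    const w = this.workers.get(id);
    if (!w) throw new Error(`no worker ${id}`);
    w.scratchpad.push(`TODO: ${task}`);
  }
}

const coord = new Coordinator();
coord.spawn("tester", ["run_tests"]);
coord.assign("tester", "run the unit test suite");
```

Keeping each worker’s scratchpad private is the design choice that lets many agents work in parallel without trampling one another’s intermediate state, which is presumably why the leaked description pairs scratchpads with per-agent tools.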
The Copyright Conundrum
The leak has also brought to light a complex legal and ethical question. After the source code was leaked, one individual reportedly used an AI coding tool, such as OpenAI’s Codex, to rewrite the entire codebase from TypeScript into Python. The result loosely resembles “clean room engineering” in traditional software development, where developers replicate functionality without directly copying code (though a true clean-room rewrite never exposes the new authors to the original source, whereas here the AI read it directly). The ease and speed with which AI can now perform such a rewrite raise concerns about whether it effectively erases copyright, creating a legal gray area for AI-generated or AI-assisted software replication.
Why This Matters
This leak offers a clear roadmap of Anthropic’s strategic direction in the competitive AI landscape. Competitors are undoubtedly analyzing these revealed features to understand future market trends and potential innovations. For users, it provides a glimpse into the advanced capabilities that may soon be integrated into Claude, promising more powerful and versatile AI assistance. The legal questions surrounding AI-assisted code replication also highlight the urgent need for updated regulations to address intellectual property in the age of advanced AI.
It’s important to note that model weights, proprietary training data, and customer information were not part of this leak. The leaked code primarily concerns the “harness” and tooling around Claude Code, which are crucial for its performance and user experience.
Source: Claude Code source code LEAKED (YouTube)