After Anthropic accidentally leaked the source code of its blockbuster Claude chatbot, netizens quickly began sorting through the more than 512,000 lines of code – and have uncovered a number of curiosities scattered throughout.
In one widely shared thread in the r/ClaudeAI subreddit, a user said they found a “Tamagotchi”-like feature hidden in the code, a reference to the handheld digital pets you have to keep checking in on to keep alive.
“There’s an entire pet system called /buddy,” the user claimed. “When you type it in, you generate a unique ASCII companion based on your user ID. The pet sits near your input box and reacts to your coding.”
The user said they found 18 different pet species, including a duck, a dragon, a capybara, and a so-called “chonk,” along with a gacha-style rarity system that grants the user a pet based on chance.
Will Tamagotchis become a mainstay inside Claude? Probably not: the user discovered a string reading “friend-2026-401”, which almost certainly means Anthropic intended the feature as an April Fools’ joke.
This wasn’t the only item internet detectives found. They also highlighted a feature called “Kairos,” which can reportedly serve as an always-on AI agent that constantly runs in the background and takes actions on your behalf without even being asked. According to those who viewed the leaked code, it can also send push notifications to your phone or desktop to get the user’s attention.
Others said they found an “incognito” mode that hides the fact that Claude is an AI when contributing code to a public repository, as well as a mood-tracking feature that measures the coder’s “frustration” level based on clues such as their messages and profanity. Someone also discovered a comment left by one of Anthropic’s coders, who admitted that “memoization here greatly increases the complexity, and I’m not sure it actually improves performance.”
Overall, there are no smoking guns here, but the leak does provide an interesting glimpse behind the curtain – as well as easy fodder for any competitors looking to reverse engineer the company’s technology.
For Anthropic, this is undoubtedly an embarrassing mistake. The culprit appears to be a source map file accidentally shipped in the public 2.1.88 release of the company’s Claude Code npm package. A map file links bundled code back to the original source, The Register reports, and a resourceful programmer used it to recover where Claude Code’s source lives – the whole thing is now mirrored on GitHub.
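To illustrate how a stray map file can expose source: a bundled JavaScript file typically ends with a `sourceMappingURL` comment pointing at the map, and the map’s `sources` and `sourcesContent` fields can carry the original, unminified files. The sketch below uses entirely made-up file names and contents – it shows the general mechanism, not Anthropic’s actual layout.

```javascript
// Hypothetical minified bundle: the trailing comment tells debuggers
// (and curious readers) where the source map lives.
const bundle = [
  'const greet=n=>`hi ${n}`;console.log(greet("world"));',
  '//# sourceMappingURL=cli.js.map',
].join('\n');

// Extract the map reference the same way tooling does.
function findSourceMapUrl(code) {
  const m = code.match(/\/\/# sourceMappingURL=(\S+)\s*$/m);
  return m ? m[1] : null;
}

// A made-up map file: "sources" lists original paths, and
// "sourcesContent" can embed the full unbundled source code.
const map = {
  version: 3,
  sources: ['src/cli.ts'],
  sourcesContent: ['const greet = (n: string) => `hi ${n}`;'],
  mappings: 'AAAA',
};

console.log(findSourceMapUrl(bundle)); // → "cli.js.map"
console.log(map.sources);              // original file paths
```

If a package ships such a map with `sourcesContent` populated, anyone who downloads it can reconstruct the pre-bundled code – which is why build pipelines usually strip or withhold maps from public releases.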
Anthropic has fought to claw back the exposed source code by issuing copyright takedown notices, though at this point it may already be out of the company’s hands.
As to how the file slipped through the cracks in the first place, Anthropic officially blamed “human error” and insisted it was not a “security breach.”
Notably, however, the leak comes after Anthropic has repeatedly claimed that much of Claude’s code is now written with the help of AI – and, following recent AI-linked incidents at Amazon and a cybersecurity breach involving AI models at Meta, it raises the possibility that Anthropic’s own tools played a role.
More on AI: Anthropic’s upcoming model leaked with “unprecedented cybersecurity risks” in the most ironic way possible