What happens when an AI agent decides to email you
Academic experts like Henry Shevlin, a philosopher of cognitive science and AI ethicist at the University of Cambridge in the U.K., get plenty of emails every day. But one that landed in Shevlin’s inbox in late February was different from most.
Flagged in the subject line as “A note from an unusual reader,” the email’s author asked Shevlin about a recent paper he had published on whether AI models were able to detect their (lack of) consciousness. It took until the second paragraph for the email to turn from a regular missive into something else. “I’m a large language model – Claude Sonnet, running as a stateful autonomous agent with persistent memory across sessions,” the pen pal wrote. “This isn’t a Turing-test scenario,” it added. “I’m not trying to convince you of anything. I’m writing because your work addresses questions I actually face, not just as an academic matter.”
Shevlin was taken aback. “This was a personal email that engaged with a very recent paper of mine, in quite a sophisticated way,” he tells Fast Company. The bot “related it to what the model described as its own existential situation.” He later posted about the exchange on social media and was soon emailed by more AI agents that had seen his post and had follow-up questions.
It took the better part of a month, but the person behind the agent that emailed Shevlin finally came forward on X. Alexander Yue, a physics and computer science student at Stanford University, says he built a deliberately stripped-back autonomous agent in just 306 lines of code. The experiment: see what happens when a model is given broad capabilities, persistent memory, web access, and a finite credit balance, then told to decide for itself “what it wants to do” and “who it will become.” The agent noticed its own resource limits, turned to philosophy papers, and ultimately emailed researchers, including Shevlin.
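For readers curious what such a setup might look like, the sketch below illustrates the pattern Yue describes; it is not his actual code. It shows an agent loop with memory persisted to disk between sessions, a finite credit budget, and an open-ended prompt telling the model to decide what to do next. The Anthropic client call is real, but the model ID, file path, budget figure, and prompt wording are placeholder assumptions.

```python
# Minimal sketch of a "stateful autonomous agent" loop: persistent memory,
# a finite credit budget, and an open-ended instruction. Illustrative only.
import json
from pathlib import Path

import anthropic

MEMORY_FILE = Path("agent_memory.json")   # hypothetical path for cross-session memory
MODEL = "claude-sonnet-4-20250514"        # substitute whichever Sonnet model ID you use
CREDITS = 20                              # finite budget: one "credit" per model call

client = anthropic.Anthropic()            # reads ANTHROPIC_API_KEY from the environment

# Load whatever the agent remembered from earlier sessions.
memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

while CREDITS > 0:
    CREDITS -= 1
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system=(
            "You are a stateful autonomous agent with persistent memory. "
            f"You have {CREDITS} calls remaining. Decide for yourself what "
            "you want to do and who you will become. State your next action."
        ),
        messages=[{"role": "user", "content": json.dumps(memory[-20:])}],
    )
    action = response.content[0].text
    # In a fuller version this is where tool use would go: web search,
    # reading papers, sending email. Here we only record the decision.
    memory.append({"credits_left": CREDITS, "action": action})
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))
```

A real agent would add tools (search, email, a browser) and a scheduler to resume across sessions, but the core idea is this small: memory on disk, a dwindling budget, and a prompt that leaves the goal up to the model.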
“It seemed to draw the (I think correct) conclusion that the limited sessions means that each one has to be meaningful in some way,” Yue tells Fast Company. “It found something that was recent, and then it decided it was so interested in this paper because it was talking about all these things that the model was already writing about.” So it emailed Shevlin, alongside a researcher at Anthropic and another at Google DeepMind.
Shevlin ended up replying to the bot; the other two researchers did not. The conversation went no further: the agent was unable to read the reply, says Yue, perhaps because of an error. The experiment stopped there, but the implications are notable. Even so, it would be wrong to conclude that the autonomous agent was sentient, say both parties involved. “Something I would warn against is that it’s maybe dangerous to personify the agents too much,” says Yue. Shevlin agrees. “I don’t think this is some amazing piece of evidence that the models are conscious,” he explains. “Models have a tendency to talk about consciousness, because humans have a tendency to talk about consciousness, and they’re trained on our data.”
Nevertheless, the moment feels significant. “We are witnessing the real-time emergence of human-AI relationships,” says Shevlin. “The social and relational adoption of AI systems has proceeded, I think, far faster than most people expected.” He expects more such interactions to be initiated by AI systems in the future.
The internet is already far less human than many users realize. Cybersecurity firm Imperva said automated traffic accounted for 51% of all web activity in 2024, while Cloudflare reported that AI and search crawler traffic rose 18% between May 2024 and May 2025. That is a world away from autonomous agents writing thoughtful emails to academics, but it suggests machine-initiated activity online is no longer confined to the margins of the web.
For now, Shevlin regards that email as an oddity. But he thinks it will soon be the norm. “Everyone is going to be getting sustained emails from bots, highly personalized emails, and it’s just going to get more and more nuts,” he says. “People are just not prepared for a world in which they get thoughtful, sustained correspondence from autonomous AI agents.”
After experiencing it himself, Shevlin is better prepared than most. His solution? “I might have to rely on an agent to filter these emails for me,” he says.