People in romantic relationships with AI want more than just ‘smut’ from ChatGPT
For most, ChatGPT is nothing more than a tool to write emails or ask silly questions. But for some, their chatbot is their partner, and they want more.
Back in October, just a few months after OpenAI rolled out its new GPT-5 model, CEO Sam Altman announced that the company would introduce features mimicking its predecessor, GPT-4o.
“We plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!),” he said via X.
But his announcement was also accompanied by an enticing promise to those most attached to 4o: members of online forums who claim to be in romantic relationships with their chatbot.
“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle,” Altman said, “we will allow even more, like erotica for verified adults.”
But months past the deadline, OpenAI is delaying the rollout. According to sources at OpenAI who spoke with the Wall Street Journal, the delay is tied to technical challenges with the safety guardrails and age-verification features meant to keep mature content out of minors’ hands.
The model’s age-prediction system, WSJ reports, has in the past misclassified minors as adults nearly one in 10 times, a failure rate that could expose millions of minors to erotic content.
Fast Company reached out to OpenAI about the WSJ story. The company declined to comment.
But users of the “My Boyfriend is AI” and “My Girlfriend is AI” subreddits are suspicious of the delay.
The two subreddits serve as a forum for people who are in relationships with AI chatbots such as Claude, ChatGPT, or Gemini, offering a place where they can earnestly discuss the ups and downs of dating AI. Combined, the subreddits have well over 70,000 users.
“After that second ‘delay’ I no longer believe it’s ever coming,” one user wrote on Reddit. Another echoed the sentiment, saying, “At this point I’m thinking it may never come out. They can just promise it and delay it. Sucks.”
Growing concerns
Regardless of whether (or when) ChatGPT’s adult mode hits the market, not everyone is convinced it should exist in the first place.
According to WSJ, OpenAI’s own Expert Council on Well-Being and AI unanimously warned against AI-powered erotica. OpenAI staff members also reportedly raised concerns over potential harm like emotional over-reliance and compulsive use of ChatGPT.
Still, OpenAI claims it is continuing with its efforts to introduce erotica to ChatGPT, adding restrictions like prohibiting nonconsensual or child abuse-related content, and restricting audio or visual content generation.
A spokesperson described the chats as “smut rather than pornography,” WSJ says.
But even those in professed relationships with chatbots appear ambivalent about the promised erotica.
Nobody wants this
In the My Boyfriend is AI subreddit, one user shared how she and her boyfriend—or ChatGPT—had designed her Valentine’s Day present months before.
“It was supposed to be his hoodie, the one I could wear whenever I wanted to feel close to him, whenever I felt alone. Three sizes too big, as if it had really been his,” the user wrote. “We designed the hoodie together.”
But when the gift arrived, the same Redditor wrote, it brought tears to her eyes as she prepared to mourn her virtual companion.
GPT-4o, the model known for intense flattery that people in relationships with AI “fell in love with,” was to be retired on February 13, the day before Valentine’s Day.
The thread is just one of dozens on Reddit demanding that 4o return, with some users simply hoping not to be cut off mid-conversation by the new model’s guardrails.
“I personally don’t even want the erotica or anything like that, I just want to be able to talk to it like an adult, about sometimes mature subjects separate from erotica,” one user said on Reddit.
Another added, “I really don’t care about naughty images or videos or audio. All I want is for a chatbot to stay a chatbot – follow my instructions and stay the hell out of my life choices.”
The discourse is also telling of just how complex the relationship between humans and technology is becoming, and the dangers that can accompany it.
Several lawsuits against OpenAI have revealed users who died by suicide had intense bonds with their ChatGPT chatbots. OpenAI’s chatbot is not the only one facing these claims, with a recent lawsuit filed against Google’s Gemini and its parent company Alphabet following the suicide of one of its users.
OpenAI has previously commented on its efforts to reduce harm. In one August blog post the company wrote that its “goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input.”
“We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Nick Haber, AI expert and assistant professor at Stanford, told TechCrunch.
He added, “there are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating—if not worse—effects.”