Humanizing AI Is a Trap
Summary: LLMs humanize by design. Adding personality/emotion amplifies risk. Design real tools, not fake friends.
In March 2025, a man collapsed in a parking lot, seriously injured, after being misled by Meta’s AI chatbot into thinking that he would meet a real person. Later that year, OpenAI’s CEO, Sam Altman, said that ChatGPT should be able to act human-like if users want it, as OpenAI is not the world’s moral authority.
Modern large language models are uniquely potent humanizing technologies. Unlike earlier systems such as Clippy or Siri, LLMs are trained to produce fluent, contextually appropriate responses and are aligned to follow social norms. These qualities make them especially prone to anthropomorphization. When organizations add humanizing design choices on top, such as personality modes, emotional language, and conversational pleasantries, they amplify these risks.
Anthropomorphization vs. Humanization
Anthropomorphization is the human tendency to attribute human characteristics, behaviors, intentions, or emotions to nonhuman entities.