Holographic AI is going to happen. But this bizarre ‘friend in a jar’ is not it
Gaming peripheral company Razer is betting that people want AI holograms. So much so that it introduced a perplexing new product at CES 2026 that early critics have dubbed a “friend in a jar.” Project AVA is a small glass cylinder housing a 5.5-inch animated desk buddy that can interact with you, coach you, or offer gaming advice on demand—all powered by xAI’s Grok.
Project AVA uses a technology Razer calls “PC Vision Mode” that watches your screen, allowing its 3D animated inhabitant to offer real-time commentary on your gameplay, track your mood, or simply hang out. It attempts to sell the illusion of presence—a companion that isn’t just an app you close, but a physical object that lives in your room.
It’s not a bad idea, in theory. Giving AI a face is not just a marketing ploy but a biological inevitability. Yet Project AVA marks a strange new milestone in our march toward AI companions.
The inevitability of holographic AI
When OpenAI introduced GPT-4o voice chats in the summer of 2024, humanity entered a new form of computer interaction. Suddenly, we could talk with AI voices smart and natural enough to sustain a real conversation. Since then, other voice AIs like Gemini Live have added pauses, breathing, and other touches that cross the uncanny valley, allowing many people to suspend disbelief and even form a bond with these assistants.
Research has shown that for deep emotional venting, users currently prefer voice-only interfaces because they feel safer and less judgmental. Without a face to scrutinize, we avoid the social anxiety of being watched. However, some neuroscientists argue that this preference may just be a temporary work-around for bad technology.
Our brains are evolutionarily hardwired for face-to-face interaction. The “Mirror Neuron System” in our brains—which allows us to feel empathy by watching others—remains largely dormant during voice-only chats. A 2024 study on “Generation WhatsApp” confirmed that neural synchrony between two brains is significantly weaker during audio-only exchanges compared to face-to-face ones. To feel truly “heard,” we need to see the listener.
Behavioral science also tells us that up to 93% of communication is nonverbal. Trust is encoded in micro-signals: a pupil dilating, a rapid blink, an open posture. A voice assistant transmits 0% of these signals, forcing users to operate on blind faith. Humans still find voice AIs deeply engaging because our brains fill in the gaps, conjuring faces the way we do when reading a book. Furthermore, according to a 2025 brain-scan study, familiar AI voices activate the brain’s emotional-regulation areas, suggesting that neural familiarity builds with repeated interaction.
The human connection runs deeper the younger you go, which happens to be Razer’s demographic. While older generations largely use AI for productivity, Gen Z frequently reports using advanced voice modes for emotional regulation and companionship. One report noted that 72% of teenagers have engaged with AI companions, with a significant subset treating them as confidants.
This comes with a dark side too: Since the prefrontal cortex responsible for impulse control and reality testing is not fully mature in teens, the empathy simulated by current voice AIs can feel dangerously real, leading to isolation from human peers or emotional manipulation by the model.
But while voice AIs make for a powerful user experience, they will never match direct facial interaction. Studies suggest that once the “uncanny valley” is solved—that eerie feeling of revulsion we get when something looks almost, but not quite, human—our brains will naturally gravitate toward AI faces. At that point, holographic personas could become a common user experience in businesses and in intimate relationships.
Project AVA is not it
We are far from that point, though. We will need hyper-realistic AI faces that are algorithmically optimized to look exactly like humans, with every muscle, tic, and imperfection.
To avoid the uncanny valley in Project AVA, Razer has opted for stylized avatars. There is “Kira,” an anime-style character; “Zane,” a strategic gaming coach; and even a licensed avatar of esports legend “Faker.” The goal is to lean into the parasocial dynamics already common in gaming culture, where digital characters often feel more like friends than software.
Early hands-on experiences suggest the illusion is fragile. Reviews describe the interaction as “wonky,” with the hologram frequently going off on irrelevant tangents. Critics have been quick to label it a “friend in a jar,” questioning whether it fulfills a genuine social need or simply exploits loneliness with a visual gimmick. It’s also, and this is important, not a real hologram. These are just 3D animated characters projected onto a flat transparent film. Move to the side, and your friend collapses into a barely visible 2D image.
The ethical concerns deepen when you look under the hood. While Razer claims the hardware is model-agnostic, the current iteration is powered by xAI’s Grok—Elon Musk’s ethically challenged AI model. This instantly tethers the device to the darker side of AI development.
Just this week, Grok was embroiled in a global scandal involving its new “edit” feature, which users have exploited to generate non-consensual sexual imagery at a rate of one image per minute. While Project AVA’s avatars are stylized and locked down by Razer, the association with an engine currently under investigation by EU and U.K. regulators for facilitating “mass digital undressing” casts a long shadow over the product. There is no guarantee that users won’t find ways to jailbreak AVA into similarly controversial behavior, turning a gaming coach into something far more unsavory.
Still, it’s a first step. The market is already betting on this shift. The holographic AI assistant sector is projected to grow from roughly $600 million in 2024 to nearly $4 billion by 2031. We are already seeing successful applications in healthcare, where users rate AI avatars dressed as doctors as more competent than voice-only agents, as well as in customer service kiosks where a digital face helps maintain user attention.
AVA is hinting at this future, but it hasn’t solved the fundamental problems yet. To become natural, a holographic AI must be more than a gimmick in a glass. It must be a lot smarter than Grok, and it will need to possess the subtle social cues that signal safety to our nervous system. Until then, AVA will be just a crude toy.
I’m not dismissing it, though. Given our biological imperative, it’s only logical to think that humans will gravitate toward holographic manifestations of AI in future interfaces. But while holographic AI is inevitable, AVA doesn’t seem to be it. With luck, it will be remembered as a proto-form of what’s to come: something truly advanced that, hopefully, will not be powered by the descendants of a techno-fascist AI.