Qwen3.5-Omni Debuts as Alibaba’s Most Advanced Multimodal AI Model Yet
Alibaba has unveiled Qwen3.5-Omni, the latest model in its Qwen family.
Released this week, the model can process text, images, audio, and video simultaneously and respond in kind. Qwen3.5-Omni is what the industry calls a “fully omnimodal” large language model. That’s a fancy way of saying it doesn’t need four different AI systems to handle a video call, a document, a voice note, and a photo; it handles all of them, together, in one go.
It comes in three sizes: Plus, Flash, and Light, catering to different needs and computing budgets. The flagship Plus version supports a massive 256,000-token context window, enough to process over ten hours of audio or more than 400 seconds of 720p video at one frame per second.
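For a sense of what those figures imply, here is a quick back-of-envelope calculation using only the numbers quoted above. The per-second and per-frame token rates are inferred from those figures, not published Qwen3.5-Omni specifications.

```python
# Rough implied token rates, derived only from the figures quoted in the article.
# These are back-of-envelope estimates, not official Qwen3.5-Omni specs.

CONTEXT_TOKENS = 256_000   # stated context window of the Plus model
AUDIO_HOURS = 10           # "over ten hours of audio"
VIDEO_SECONDS = 400        # "more than 400 seconds of 720p video at one frame per second"

audio_tokens_per_second = CONTEXT_TOKENS / (AUDIO_HOURS * 3600)
video_tokens_per_frame = CONTEXT_TOKENS / VIDEO_SECONDS  # one frame per second

print(f"Implied audio rate: ~{audio_tokens_per_second:.1f} tokens per second of audio")  # ~7.1
print(f"Implied video rate: ~{video_tokens_per_frame:.0f} tokens per 720p frame")        # ~640
```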
The model was pre-trained from the ground up on more than 100 million hours of audiovisual data. That makes it natively multimodal, rather than a text model with audio bolted on as an afterthought.
The benchmarks: Taking shots at Gemini
Alibaba is making bold claims, and the published numbers, at least, back them up.
According to Alibaba, the Plus version set new state-of-the-art results across 215 audio and audiovisual benchmarks, covering everything from general audio comprehension and speech recognition to translation tasks spanning 156 language pairs.
On the MMAU audio comprehension benchmark, Qwen3.5-Omni-Plus scored 82.2 against Google’s Gemini 3.1 Pro at 81.1. The gap widens in music comprehension, with Qwen scoring 72.4 and Gemini 59.6 on the RUL-MuchoMusic benchmark.
On voice dialogue, the model scored 93.1 on VoiceBench compared to Gemini’s 88.9.
Speech generation is another highlight. On the notoriously difficult “seed-hard” test, which measures how accurately a model reads tricky text aloud, Qwen3.5-Omni-Plus achieved a word error rate of just 6.24, compared with GPT-Audio’s 8.19, MiniMax’s 8.62, and ElevenLabs’ 27.70. In voice cloning across 20 languages, it hit a word error rate of 1.87 and a cosine similarity score of 0.79, both leading figures in the comparison.
That said, it’s not a clean sweep. Gemini 3.1 Pro still holds advantages on certain audiovisual benchmarks, including WorldSense, VideoMME with audio, and tool-use tasks.
The language leap
One of the most striking upgrades over its predecessor, Qwen3-Omni, is the multilingual jump.
The previous model handled 11 languages and eight Chinese dialects for speech recognition. Qwen3.5-Omni now covers 74 languages and 39 Chinese dialects, for a total of 113 languages and dialects. Speech output supports 36 languages, with 50 available speakers, including user-defined, dialectal, and multilingual options.
A new trick nobody programmed: ‘Audio-visual vibe coding’
The standout feature of the newly released Qwen3.5-Omni isn’t any single benchmark score; it’s a surprise skill the developers didn’t specifically train it for. Alibaba’s Qwen team observed an “emergent capability” in which the model writes functional code based solely on watching a video and listening to spoken instructions. They’ve dubbed this “Audio-Visual Vibe Coding.”
In real-world tests, the model could take a rough, hand-drawn sketch held up to a camera and turn it into a working React webpage. As the user spoke to it, asking for larger buttons or a different layout, the AI updated the code in real time.
According to Qwen, this allows for “directly performing coding based on audio-visual instructions, which we call Audio-Visual Vibe Coding; all of the above features are available through the Offline API.”
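Alibaba hasn’t published a code sample alongside the announcement, but Qwen models are typically exposed through OpenAI-compatible endpoints on Alibaba Cloud Model Studio. The sketch below shows roughly how an audio-visual coding request might look under that assumption; the model name, base URL, and the video and audio content types are illustrative guesses, not confirmed details of the new Offline API.

```python
# Hypothetical sketch of an audio-visual "vibe coding" request against an
# OpenAI-compatible endpoint. The model name, base URL, and the video/audio
# content types are assumptions, not confirmed Qwen3.5-Omni API details.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3.5-omni-plus",  # hypothetical model identifier
    messages=[
        {
            "role": "user",
            "content": [
                # Video of a hand-drawn sketch held up to the camera (assumed content type)
                {"type": "video_url", "video_url": {"url": "https://example.com/sketch.mp4"}},
                # Spoken instructions, e.g. "turn this into a page with bigger buttons"
                {"type": "input_audio", "input_audio": {"data": "<base64 wav>", "format": "wav"}},
                {"type": "text", "text": "Generate the React code described in the video and audio."},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```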
Under the hood: Thinkers and talkers
To handle text, images, audio, and video all at once, Alibaba uses a two-part “Thinker-Talker” architecture. The Thinker is the brain that processes what is being seen and heard, while the Talker handles how the AI speaks back to you.
One of the biggest technical hurdles in voice AI is “stuttering,” or making mistakes when reading numbers and complex text aloud. Alibaba claims to have solved this with a new technique called ARIA (Adaptive Rate Interleave Alignment), which keeps text and speech units in sync so the AI doesn’t trip over its words during a live conversation.
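To make the idea concrete, here is a toy illustration of interleaved text and speech-unit generation. It is not Alibaba’s implementation, and the function names are made up; it only shows why pairing each text chunk with its own speech units keeps the two streams from drifting apart during live generation.

```python
# Toy illustration of interleaved text/speech-unit alignment, in the spirit of
# what ARIA is described as doing. NOT Alibaba's actual implementation.

def thinker(prompt: str):
    """Stand-in for the 'Thinker': yields text chunks as they are decided."""
    for chunk in prompt.split():
        yield chunk

def talker(text_chunk: str):
    """Stand-in for the 'Talker': converts one text chunk into discrete speech units."""
    # Pretend each character maps to one speech unit.
    return [f"unit({c})" for c in text_chunk]

def interleaved_stream(prompt: str):
    """Emit (text, speech_units) pairs so the audio never runs ahead of the text."""
    for chunk in thinker(prompt):
        yield chunk, talker(chunk)

for text, units in interleaved_stream("Order number 3042 ships today"):
    print(text, "->", units)
```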
The model also supports semantic interruption. This means if you cough or say “um,” the AI is smart enough to keep talking, but if you actually start a new sentence to correct it, the model stops and listens.
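The logic behind that behavior can be sketched in a few lines. The filler list and the two-word threshold below are illustrative assumptions, not Qwen’s actual interruption policy; the point is simply that the decision is based on what the user says, not on whether they made a sound.

```python
# Toy sketch of "semantic interruption": ignore fillers, stop for real corrections.
# The filler list and rule are illustrative assumptions, not Qwen's actual logic.

FILLERS = {"um", "uh", "hmm", "er"}

def should_interrupt(user_utterance: str) -> bool:
    """Return True only if the utterance carries real content worth stopping for."""
    words = [w.strip(".,!?").lower() for w in user_utterance.split()]
    meaningful = [w for w in words if w and w not in FILLERS]
    # Keep talking through coughs and fillers; stop if the user says something substantive.
    return len(meaningful) >= 2

print(should_interrupt("um"))                      # False: keep talking
print(should_interrupt("no, wait, make it blue"))  # True: stop and listen
```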
For more on Alibaba’s AI shake-up, check out our coverage of the Qwen tech lead’s unexpected departure.