
Qwen3.5-Omni Debuts as Alibaba’s Most Advanced Multimodal AI Model Yet

Alibaba has unveiled Qwen3.5-Omni, the latest model in its Qwen family.

Released this week, the model can process text, images, audio, and video simultaneously and respond in kind. Qwen3.5-Omni is what the industry calls a “fully omnimodal” large language model. That’s a fancy way of saying it doesn’t need four different AI systems to handle a video call, a document, a voice note, and a photo; it handles all of them, together, in one go.
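
If this release is served the way earlier Qwen models are, a single request could carry several modalities at once. A minimal sketch, assuming an OpenAI-compatible endpoint: the base URL follows DashScope’s existing compatible mode, and the model identifier is a guess, not a confirmed name for this release.

```python
# Hypothetical sketch: one request carrying text, an image, and audio.
# Endpoint follows DashScope's OpenAI-compatible mode for earlier Qwen
# models; the model name below is an assumption, not an official ID.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen3.5-omni-plus",  # assumed identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize what you see and hear."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "input_audio",
             "input_audio": {"data": "<base64-encoded wav>", "format": "wav"}},
        ],
    }],
)
print(response.choices[0].message.content)
```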

It comes in three sizes: Plus, Flash, and Light, catering to different needs and computing budgets. The flagship Plus version supports a massive 256,000-token context window, enough to process over ten hours of audio or more than 400 seconds of 720p video at one frame per second. 
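
Those figures imply an aggressive token rate per modality. A rough back-of-the-envelope check, derived purely from the numbers above (Alibaba has not published per-modality token rates):

```python
# Back-of-the-envelope check of the claimed context budget.
# All rates below are derived from the article's figures, not official specs.
context_tokens = 256_000

audio_seconds = 10 * 3600                      # "over ten hours of audio"
print(f"~{context_tokens / audio_seconds:.1f} tokens per second of audio")  # ~7.1

video_seconds = 400                            # 720p at 1 fps -> 400 frames
print(f"~{context_tokens / video_seconds:.0f} tokens per 720p frame")       # ~640
```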

The model was pre-trained from the ground up on more than 100 million hours of audiovisual data, making it natively multimodal rather than a text model with audio bolted on as an afterthought.

The benchmarks: Taking shots at Gemini

Alibaba is making bold claims, and, at least on paper, the numbers back them up.

According to Alibaba, the Plus version set new state-of-the-art results across 215 audio and audiovisual benchmarks, covering everything from general audio comprehension and speech recognition to translation tasks spanning 156 language pairs.

On the MMAU audio comprehension benchmark, Qwen3.5-Omni-Plus scored 82.2 against Google’s Gemini 3.1 Pro at 81.1. The gap widens in music comprehension, with Qwen scoring 72.4 and Gemini 59.6 on the RUL-MuchoMusic benchmark.

On voice dialogue, the model scored 93.1 on VoiceBench compared to Gemini’s 88.9.

Speech generation is another headline moment. On the notoriously difficult “seed-hard” test, which evaluates how naturally a model reads aloud under pressure, Qwen3.5-Omni-Plus achieved a word error rate of just 6.24, compared to GPT-Audio’s 8.19, MiniMax’s 8.62, and ElevenLabs’ 27.70. In voice cloning across 20 languages, it hit a word error rate of 1.87 and a cosine similarity score of 0.79, both leading figures in the comparison.
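
For context on the metric: word error rate is the number of word-level substitutions, deletions, and insertions needed to turn a transcript of the generated speech into the reference text, divided by the reference length. A standard reference implementation of WER (not Alibaba’s evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# A reported WER of 6.24 presumably means 6.24%, i.e. 0.0624 on this
# 0-1 scale: roughly one word error per sixteen reference words.
```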

That said, it’s not a clean sweep. Gemini 3.1 Pro still holds advantages on certain audiovisual benchmarks, including WorldSense, VideoMME with audio, and tool-use tasks.

The language leap

One of the most striking upgrades over its predecessor, Qwen3-Omni, is the multilingual jump. 

The previous model handled eleven languages and eight Chinese dialects for speech recognition. Qwen3.5-Omni now covers 74 languages and 39 Chinese dialects, for a total of 113 languages and dialects. Speech output supports 36 languages, with 50 available speakers, including user-defined, dialectal, and multilingual options.

A new trick nobody programmed: ‘Audio-visual vibe coding’

The standout headline for the newly released Qwen3.5-Omni isn’t just its speed; it’s a surprise skill the developers never specifically trained it for. Alibaba’s Qwen team observed an “emergent capability”: the model writes functional code based solely on watching a video and listening to spoken instructions. They’ve dubbed this “Audio-Visual Vibe Coding.”

In real-world tests, the model could take a rough, hand-drawn sketch held up to a camera and turn it into a working React webpage. As the user spoke to it, asking for larger buttons or a different layout, the AI updated the code in real time. 

According to Qwen, this allows for “directly performing coding based on audio-visual instructions, which we call Audio-Visual Vibe Coding; all of the above features are available through the Offline API.”

Under the hood: Thinkers and talkers

To handle text, images, audio, and video all at once, Alibaba uses a unique “Thinker-Talker” setup. The Thinker is the brain that processes what is being seen and heard, while the Talker handles how the AI speaks back to you.
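
Alibaba has not published this release’s internals, so the division of labor is best shown schematically. A toy sketch of the two-module idea, with every class and method name hypothetical:

```python
# Illustrative Thinker-Talker split; all names here are hypothetical,
# not Alibaba's actual implementation.
from typing import Iterator

class Thinker:
    """Toy stand-in for the 'brain': considers all incoming modalities
    and streams its textual reply one word at a time."""
    def respond(self, text: str, **other_modalities) -> Iterator[str]:
        # A real Thinker would fuse per-modality encoders here.
        reply = f"I considered {1 + len(other_modalities)} input streams: {text}"
        yield from reply.split()

class Talker:
    """Toy stand-in for the speech module: converts each text chunk into
    a 'speech token' immediately, so playback can begin before the full
    reply is finished."""
    def speak(self, text_stream: Iterator[str]) -> Iterator[str]:
        for word in text_stream:
            yield f"<audio:{word}>"

thinker, talker = Thinker(), Talker()
for token in talker.speak(thinker.respond("hello", image=b"...", audio=b"...")):
    print(token, end=" ")
```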

One of the biggest technical hurdles in voice AI is “stuttering,” or making mistakes when reading numbers and complex text. Alibaba claims to have solved this with a new technology called ARIA (Adaptive Rate Interleave Alignment), which syncs text and speech units so the AI doesn’t trip over its words during a live conversation.
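
Alibaba hasn’t detailed ARIA’s mechanics, but interleaved alignment schemes generally emit text and speech units in a fixed ratio so neither stream drifts ahead of the other. A generic sketch of that idea, not ARIA itself (the 1:4 ratio below is arbitrary):

```python
# Generic interleaving sketch: emit speech tokens in lockstep with text
# tokens so the audio never drifts from the words being spoken.
# The 1:4 text-to-speech ratio is illustrative, not a published ARIA detail.
def interleave(text_tokens, speech_tokens, speech_per_text=4):
    speech = iter(speech_tokens)
    for t in text_tokens:
        yield ("text", t)
        for _ in range(speech_per_text):
            s = next(speech, None)
            if s is None:
                return
            yield ("speech", s)

stream = interleave(["twenty", "twenty-five"], [f"s{i}" for i in range(8)])
print(list(stream))
```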

The model also supports semantic interruption. This means if you cough or say “um,” the AI is smart enough to keep talking, but if you actually start a new sentence to correct it, the model stops and listens.
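
In practice, that means the interruption handler classifies incoming speech by intent rather than by volume. A toy version of the decision logic; a production system would rely on the model’s own learned understanding of the audio, not a keyword list:

```python
# Toy decision logic for semantic interruption. The filler list is
# purely illustrative; the real model presumably judges intent from
# learned representations of the incoming audio.
FILLERS = {"um", "uh", "hmm", "[cough]", "[laugh]"}

def should_stop_speaking(user_utterance: str) -> bool:
    """Keep talking through backchannel noise; stop for real speech."""
    words = user_utterance.lower().split()
    meaningful = [w for w in words if w not in FILLERS]
    return len(meaningful) > 0  # any substantive word = real interruption

assert should_stop_speaking("um") is False
assert should_stop_speaking("wait, that's wrong") is True
```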

For more on Alibaba’s AI shake-up, check out our coverage of the Qwen tech lead’s unexpected departure.

The post Qwen3.5-Omni Debuts as Alibaba’s Most Advanced Multimodal AI Model Yet appeared first on eWEEK.
