
Alibaba's new open source Qwen3.5-Medium models offer Sonnet 4.5 performance on local computers

Alibaba's now famed Qwen AI development team has done it again: a little more than a day ago, it released the Qwen3.5 Medium model series, four new large language models (LLMs) with support for agentic tool calling. Three of them are available for commercial use by enterprises and indie developers under the standard open-source Apache 2.0 license:

  • Qwen3.5-35B-A3B

  • Qwen3.5-122B-A10B

  • Qwen3.5-27B

Developers can download them now on Hugging Face and ModelScope. A fourth model, Qwen3.5-Flash, appears to be proprietary and available only through the Alibaba Cloud Model Studio API, but it still offers a strong cost advantage over comparable Western models (see the pricing comparison table below).
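For developers grabbing the open weights, loading them should look like any other Hugging Face checkpoint. Below is a minimal sketch using the transformers library; note that the repo id "Qwen/Qwen3.5-35B-A3B" is an assumption inferred from the model names above, so check the Qwen organization page for the exact identifier.

```python
# Minimal loading sketch with Hugging Face transformers.
# The repo id below is hypothetical, based on the names in this article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-35B-A3B"  # assumed repo id; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available GPUs/CPU
)

messages = [{"role": "user", "content": "Summarize this release in one line."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```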

But the big twist is that the open-source models match similarly sized proprietary models from major U.S. labs like OpenAI and Anthropic on third-party benchmark tests, actually beating OpenAI's GPT-5-mini and Anthropic's Claude Sonnet 4.5, the latter released just five months ago.

And the Qwen team says it has engineered these models to remain highly accurate even when "quantized," a process that shrinks a model's memory footprint by storing its parameters at lower numerical precision, with far fewer possible values per number.

Crucially, this release brings "frontier-level" context windows to the desktop PC. The flagship Qwen3.5-35B-A3B can now exceed a 1-million-token context length on consumer-grade GPUs with 32GB of VRAM. While that is not hardware everyone has, it is far less compute than most comparably performant options require.

This leap is made possible by near-lossless accuracy under 4-bit weight and KV cache quantization, allowing developers to process massive datasets without server-grade infrastructure.
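In practice, 4-bit weight loading is something developers can try today with standard tooling. The sketch below shows generic 4-bit NF4 loading via transformers and bitsandbytes; this is not Qwen's own quantization recipe (which the team has not detailed here), and the repo id remains the assumption noted earlier.

```python
# Sketch of 4-bit weight quantization at load time (transformers + bitsandbytes).
# This illustrates generic 4-bit loading, not Qwen's specific method.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",              # normalized-float 4-bit data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-35B-A3B",                 # hypothetical repo id, as above
    quantization_config=bnb_config,
    device_map="auto",
)
```

Recent versions of transformers also expose quantized KV caches during generation (for example via the `cache_implementation="quantized"` option), which covers the second half of the memory savings described above.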

Technology: Delta force

At the heart of Qwen3.5's performance is a sophisticated hybrid architecture. While many models rely solely on standard Transformer blocks, Qwen3.5 integrates Gated Delta Networks combined with a sparse Mixture-of-Experts (MoE) system. The technical specifications for Qwen3.5-35B-A3B reveal a highly efficient design (a minimal routing sketch follows the list):

  • Parameter Efficiency: While the model houses 35 billion parameters in total, it only activates 3 billion for any given token.

  • Expert Diversity: The MoE layer utilizes 256 experts, with 8 routed experts and 1 shared expert helping to maintain performance while slashing inference latency.

  • Near-Lossless Quantization: The series maintains high accuracy even when compressed to 4-bit weights, significantly reducing the memory footprint for local deployment.

  • Base Model Release: In a move to support the research community, Alibaba has open-sourced the Qwen3.5-35B-A3B-Base model alongside the instruct-tuned versions.
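To see why only ~3B of the 35B parameters fire per token, consider what top-k expert routing does: each token is sent to a handful of small feed-forward "experts" rather than one monolithic block. The toy layer below is an illustration of that pattern with the numbers from the spec sheet (256 routed experts, top-8 per token, one shared expert); it is not Qwen's actual code, and all dimensions and names are for illustration only.

```python
# Toy sparse MoE layer: top-k routing over many experts plus one shared
# expert that always runs. Illustrative only; not Qwen's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts=256, top_k=8):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)

        def make_ffn():
            return nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(),
                                 nn.Linear(d_ff, d_model))

        self.experts = nn.ModuleList(make_ffn() for _ in range(n_experts))
        self.shared = make_ffn()  # the single always-active expert

    def forward(self, x):                      # x: (n_tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen 8
        routed = torch.zeros_like(x)
        for t in range(x.size(0)):             # naive per-token dispatch
            for j in range(self.top_k):
                expert = self.experts[idx[t, j].item()]
                routed[t] += weights[t, j] * expert(x[t])
        return self.shared(x) + routed         # only top_k + 1 experts ran

moe = SparseMoE()
print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```

Because the unchosen 248 experts never execute for a given token, compute per token scales with the active parameters, not the total parameter count.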

Product: Intelligence that 'thinks' first

Qwen3.5 introduces a native "Thinking Mode" as its default state. Before providing a final answer, the model generates an internal reasoning chain, delimited by <think> tags, to work through complex logic (a small parsing sketch follows the product list below). The product lineup is tailored for varying hardware environments:

  • Qwen3.5-27B: Optimized for high efficiency, supporting a context length of over 800K tokens.

  • Qwen3.5-Flash: The production-grade hosted version, featuring a default 1 million token context length and built-in official tools.

  • Qwen3.5-122B-A10B: Designed for server-grade GPUs (80GB VRAM), this model supports 1M+ context lengths while narrowing the gap with the world's largest frontier models.
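For applications that want to show or log the reasoning separately from the answer, the delimited format is easy to split. This sketch assumes the raw completion wraps its reasoning in literal <think>...</think> tags as described above; real chat templates may strip these for you.

```python
# Minimal sketch: separate the reasoning chain from the final answer,
# assuming literal <think> tags in the raw completion text.
import re

def split_thinking(completion: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a raw completion string."""
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        return "", completion.strip()          # model skipped thinking mode
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()  # text after the closing tag
    return reasoning, answer

raw = "<think>User wants a sum. 2+2=4.</think>The answer is 4."
reasoning, answer = split_thinking(raw)
print(answer)  # -> The answer is 4.
```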

Benchmark results validate this architectural shift. The 35B-A3B model notably surpasses much larger predecessors, such as Qwen3-235B, as well as the aforementioned proprietary GPT-5 mini and Sonnet 4.5 in categories including knowledge (MMMLU) and visual reasoning (MMMU-Pro).

Pricing and API integration

For those not hosting their own weights, Alibaba Cloud Model Studio provides a competitive API for Qwen3.5-Flash.

  • Input: $0.1 per 1M tokens

  • Output: $0.4 per 1M tokens

  • Cache Creation: $0.125 per 1M tokens

  • Cache Read: $0.01 per 1M tokens

The API also features a granular Tool Calling pricing model, with Web Search at $10 per 1,000 calls and Code Interpreter currently offered for a limited time at no cost.
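To make these rates concrete, here is a small per-request cost estimator using only the numbers listed above. It is a back-of-the-envelope sketch; the exact way Alibaba Cloud meters cached tokens and tool calls may differ.

```python
# Back-of-the-envelope cost estimator for Qwen3.5-Flash at the listed rates.
# All token prices are USD per 1M tokens; Web Search is $10 per 1,000 calls.
PRICES = {
    "input": 0.10,
    "output": 0.40,
    "cache_write": 0.125,
    "cache_read": 0.01,
    "web_search_per_call": 10 / 1000,
}

def request_cost(input_toks, output_toks, cached_toks=0,
                 cache_write_toks=0, web_searches=0):
    """Estimate one request's cost in USD. Cached tokens are billed at the
    cheap cache-read rate instead of the full input rate."""
    fresh = input_toks - cached_toks
    token_cost = (
        fresh * PRICES["input"]
        + cached_toks * PRICES["cache_read"]
        + output_toks * PRICES["output"]
        + cache_write_toks * PRICES["cache_write"]
    ) / 1_000_000
    return token_cost + web_searches * PRICES["web_search_per_call"]

# e.g. a 1M-token context call with 900K tokens served from cache and a
# 2K-token answer costs roughly two cents:
print(f"${request_cost(1_000_000, 2_000, cached_toks=900_000):.4f}")  # $0.0198
```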

This makes Qwen3.5-Flash among the most affordable major LLMs in the world to run over an API. The table below compares per-million-token pricing:

(All prices in USD per 1M tokens; Total Cost = input rate + output rate.)

Model                          | Input  | Output  | Total Cost | Source
------------------------------ | ------ | ------- | ---------- | -------------
Qwen 3 Turbo                   | $0.05  | $0.20   | $0.25      | Alibaba Cloud
Qwen3.5-Flash                  | $0.10  | $0.40   | $0.50      | Alibaba Cloud
deepseek-chat (V3.2-Exp)       | $0.28  | $0.42   | $0.70      | DeepSeek
deepseek-reasoner (V3.2-Exp)   | $0.28  | $0.42   | $0.70      | DeepSeek
Grok 4.1 Fast (reasoning)      | $0.20  | $0.50   | $0.70      | xAI
Grok 4.1 Fast (non-reasoning)  | $0.20  | $0.50   | $0.70      | xAI
MiniMax M2.5                   | $0.15  | $1.20   | $1.35      | MiniMax
MiniMax M2.5-Lightning         | $0.30  | $2.40   | $2.70      | MiniMax
Gemini 3 Flash Preview         | $0.50  | $3.00   | $3.50      | Google
Kimi-k2.5                      | $0.60  | $3.00   | $3.60      | Moonshot
GLM-5                          | $1.00  | $3.20   | $4.20      | Z.ai
ERNIE 5.0                      | $0.85  | $3.40   | $4.25      | Baidu
Claude Haiku 4.5               | $1.00  | $5.00   | $6.00      | Anthropic
Qwen3-Max (2026-01-23)         | $1.20  | $6.00   | $7.20      | Alibaba Cloud
Gemini 3 Pro (≤200K)           | $2.00  | $12.00  | $14.00     | Google
GPT-5.2                        | $1.75  | $14.00  | $15.75     | OpenAI
Claude Sonnet 4.5              | $3.00  | $15.00  | $18.00     | Anthropic
Gemini 3 Pro (>200K)           | $4.00  | $18.00  | $22.00     | Google
Claude Opus 4.6                | $5.00  | $25.00  | $30.00     | Anthropic
GPT-5.2 Pro                    | $21.00 | $168.00 | $189.00    | OpenAI

What it means for enterprise technical leaders and decision-makers

With the launch of the Qwen3.5 Medium models, the rapid iteration and fine-tuning once reserved for well-funded labs are now accessible for on-premise development at many non-technical firms, effectively decoupling sophisticated AI from massive capital expenditure.

Across the organization, this architecture transforms how data is handled and secured. The ability to ingest massive document repositories or hour-scale videos locally allows for deep institutional analysis without the privacy risks of third-party APIs.

By running these specialized "Mixture-of-Experts" models within a private firewall, organizations can maintain sovereign control over their data while utilizing native "thinking" modes and official tool-calling capabilities to build more reliable, autonomous agents.

Early adopters on Hugging Face have specifically lauded the model’s ability to "narrow the gap" in agentic scenarios where previously only the largest closed models could compete.

This shift toward architectural efficiency over raw scale ensures that AI integration remains cost-conscious, secure, and agile enough to keep pace with evolving operational needs.
