Chinese tech company Alibaba on Monday released Qwen3, a family of AI models that the company claims can match and, in some cases, outperform the best models available from Google and OpenAI.
Most of the models are — or soon will be — available for download under an “open” license on the AI dev platforms Hugging Face and GitHub. They range in size from 0.6 billion parameters to 235 billion parameters. (Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.)
The rise of China-originated model series like Qwen has increased the pressure on American labs such as OpenAI to deliver more capable AI technologies. It has also led policymakers to implement restrictions aimed at limiting the ability of Chinese AI companies to obtain the chips necessary to train models.
According to Alibaba, the Qwen3 models are “hybrid” models — they can take time to “reason” through complex problems, or answer simpler requests quickly. Reasoning enables the models to effectively fact-check themselves, similar to models like OpenAI’s o3, but at the cost of higher latency.
“We have seamlessly integrated thinking and non-thinking modes, offering users the flexibility to control the thinking budget,” the Qwen team wrote in a blog post. “This design enables users to configure task-specific budgets with greater ease.”
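As an illustration of what configuring task-specific thinking budgets might look like from a caller's side, here is a minimal Python sketch. The task names, token counts, and `model` interface are all hypothetical stand-ins, not Qwen3's actual API.

```python
# Illustrative per-task "thinking budgets" (the numbers are made up).
BUDGETS = {
    "math_proof": 8192,   # hard problem: allow a long reasoning pass
    "code_review": 2048,
    "chitchat": 0,        # simple request: skip reasoning, answer fast
}

def generate(task, prompt, model):
    """Dispatch a request with a task-specific reasoning budget.

    `model` is a stand-in callable; reason_tokens=0 selects the
    low-latency non-thinking path, while a positive value caps how
    many tokens the model may spend "thinking" before it answers.
    """
    budget = BUDGETS.get(task, 1024)  # default budget for unknown tasks
    return model(prompt, reason_tokens=budget)
```

The tradeoff the Qwen team describes falls out of the budget: more reasoning tokens mean better self-checking on hard problems, at the cost of higher latency.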
Some of the models also adopt a mixture of experts (MoE) architecture, which can be more computationally efficient for answering queries. MoE breaks down tasks into subtasks and delegates them to smaller, specialized “expert” models.
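The routing idea behind MoE can be sketched in a few lines of Python: a gate scores every expert for a given input, but only the top-k actually run, which is where the compute savings come from. The expert functions and gate weights below are toy stand-ins, not Qwen3's architecture.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(x, experts, gate_weights, top_k=2):
    """Route input vector `x` through the top_k highest-scoring experts.

    `experts` are callables mapping a vector to a vector; `gate_weights`
    holds one weight vector per expert for the gating dot product.
    """
    # Score every expert for this input...
    scores = softmax([sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights])
    # ...but only run the top_k of them (the "sparse" part of MoE).
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    norm = sum(scores[i] for i in top)
    outputs = {i: experts[i](x) for i in top}
    # Mix the chosen experts' outputs, weighted by renormalized gate scores.
    return [sum(scores[i] / norm * outputs[i][j] for i in top) for j in range(len(x))]
```

Because only the selected experts execute, a model can carry far more total parameters than it activates for any single token — the pattern suggested by names like Qwen-3-235B-A22B, where the "A22B" denotes the active-parameter count.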
The Qwen3 models support 119 languages, Alibaba said, and were trained on a dataset of over 36 trillion tokens. (Tokens are the raw bits of data that a model processes; 1 million tokens is equivalent to about 750,000 words.) The company said Qwen3 was trained on a combination of textbooks, “question-answer pairs,” code snippets, AI-generated data, and more.
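The token-to-word ratio quoted above works out to about 0.75 words per token, which makes for a quick back-of-the-envelope helper. (The ratio is the article's approximation; real tokenizers vary by language and content.)

```python
# The article's rule of thumb: 1,000,000 tokens ~ 750,000 words,
# i.e. about 0.75 words per token on average.
WORDS_PER_TOKEN = 0.75

def approx_tokens(n_words):
    """Rough token count for a given word count, using the article's ratio."""
    return round(n_words / WORDS_PER_TOKEN)
```

By that ratio, Qwen3's 36-trillion-token training set corresponds to roughly 27 trillion words.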
These improvements, along with others, greatly boosted Qwen3’s capabilities compared to its predecessor, Qwen2, Alibaba said. None of the Qwen3 models is head and shoulders above recent top-of-the-line models like OpenAI’s o3 and o4-mini, but they’re strong performers nonetheless.
On Codeforces, a platform for programming contests, the largest Qwen3 model — Qwen-3-235B-A22B — just beats OpenAI’s o3-mini and Google’s Gemini 2.5 Pro. Qwen-3-235B-A22B also bests o3-mini on the latest version of AIME, a challenging math benchmark, and on BFCL, a benchmark for assessing a model’s function- and tool-calling abilities.
But Qwen-3-235B-A22B isn’t publicly available — at least not yet.

Alibaba’s internal benchmark results for Qwen3. Image Credits: Alibaba
The largest public Qwen3 model, Qwen3-32B, is still competitive with a number of proprietary and open AI models, including Chinese AI lab DeepSeek’s R1. Qwen3-32B surpasses OpenAI’s o1 model on several tests, including the coding benchmark LiveCodeBench.
Alibaba said Qwen3 “excels” in tool-calling capabilities as well as following instructions and copying specific data formats. In addition to the models for download, Qwen3 is available from cloud providers, including Fireworks AI and Hyperbolic.
Tuhin Srivastava, co-founder and CEO of AI cloud host Baseten, said Qwen3 is another point in the trend line of open models keeping pace with closed-source systems such as OpenAI’s.
“The U.S. is doubling down on restricting sales of chips to China and purchases from China, but models like Qwen 3 that are state-of-the-art and open […] will undoubtedly be used domestically,” he told TechCrunch. “It reflects the reality that businesses are both building their own tools [as well as] buying off the shelf via closed-model companies like Anthropic and OpenAI.”