Key Takeaways
Alibaba has launched its latest generation of AI, Qwen2.5, with an expanded array of models tailored to different applications.
Alongside improvements to its flagship proprietary model Qwen-Plus, Alibaba has released over 100 open-source Qwen2.5, Qwen2.5-Coder, and Qwen2.5-Math models, with each family consisting of a range of different model sizes.
With the open versus closed AI debate increasingly dividing technology firms into rival camps, Alibaba seems to be leaning more toward open-source development, mirroring Meta’s approach.
However, with Qwen-Plus, the company hasn’t completely abandoned the API-centric approach pursued by OpenAI and Chinese peer Baidu. (Only older versions of OpenAI’s GPT and Baidu’s Ernie ranges are freely available under permissive licenses.)
Alibaba’s plan appears to be to cover all its bases.
On the one hand, Qwen-Plus could provide an important revenue stream for the company, especially in the Chinese-language market, where US-made models underperform. On the other, the new Qwen2.5 open-source range will help foster developer interest and cultivate a wider AI ecosystem with Alibaba at its center.
In general, the company’s increasingly diverse model library points to a broad AI strategy that incorporates various different-sized models and specialized options.
The smallest Qwen2.5 models have just 0.5 billion parameters. By comparison, the smallest version of Gemini has 1.8 billion parameters. OpenAI has not revealed how big GPT-4o mini is, but there has been speculation that it could have tens of billions of parameters.
Baidu shares this emphasis on lightweight AI: its smallest Ernie Tiny models range from 100 million to 1 billion parameters.
Acknowledging the shift toward small language models (SLMs), Alibaba noted that “although SLMs have historically trailed behind their larger counterparts (LLMs), the performance gap is rapidly diminishing.” For instance, it pointed out that among the latest additions, Qwen2.5-3B is one of the smallest models ever to score above 65 on the Massive Multitask Language Understanding (MMLU) benchmark.
Progress in SLM development is crucial for on-device AI, as even high-end modern smartphones can’t support anything much larger than 7 billion parameters. With multiple options in the sub-7B range, Alibaba is well-positioned in the burgeoning AI smartphone space.
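To see why roughly 7 billion parameters is the practical ceiling for phones, note that a model's weight memory scales linearly with parameter count and numeric precision. The sketch below is an illustrative back-of-the-envelope calculation (the function and the simple bytes-per-parameter model are ours, not from Alibaba); it ignores activation memory and runtime overhead:

```python
def model_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight-storage footprint of a model in decimal GB,
    ignoring activation memory and runtime overhead."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A 7B model needs ~14 GB of weights at 16-bit precision,
# but only ~3.5 GB when quantized to 4 bits per parameter.
print(model_memory_gb(7, 16))    # 14.0
print(model_memory_gb(7, 4))     # 3.5
# A 0.5B model like the smallest Qwen2.5 fits in ~1 GB even unquantized.
print(model_memory_gb(0.5, 16))  # 1.0
```

Even aggressively quantized, a 7B model consumes a few gigabytes of a phone's RAM, which is why sub-7B models are the realistic target for on-device deployment.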
Alongside its latest language models, Alibaba also unveiled a new addition to its computer vision offering. Just two months after it released the Tongyi Wanxiang image generator, Alibaba has added a text-to-video model to its product suite.
While it falls short of the realism showcased by OpenAI’s Sora and Adobe Firefly, the Tongyi Wanxiang video model has a unique selling point: it is explicitly oriented toward the Chinese market.
Not only does it support Mandarin prompts, but the video generator is also “optimized for […] Chinese aesthetics”.
This regional focus is in line with Alibaba’s wider AI game plan. Rather than trying to compete with large American and European AI labs at the cutting edge of research and development, the Chinese Big Tech firm seems intent on replicating their successes more quickly and effectively than local rivals like Baidu and Tencent.