
Introduction
Qwen, developed by Alibaba Cloud, is a globally leading family of open-source large language models. Anchored in a comprehensive "full-scale, full-modality" strategy and sustained technological breakthroughs, Qwen has garnered extensive adoption and collaborative contributions from developers worldwide. With over 170,000 derivative models and more than 600 million downloads, it has emerged as the most active and influential large model in the global open-source ecosystem.
Establishing a New Paradigm for Efficient and Adaptive Large Model Training
To overcome the critical bottlenecks in data quality, training algorithms, and training efficiency, the Qwen team pioneered an integrated "Semantics-Driven, Evaluation-Guided, Resource-Coordinated" training framework.
In terms of data, the team built a self-evolving data engine that leverages intelligent curation, value-aware analysis, and automated generation to transform raw data from a passive input into an active driver of model evolution—boosting data utilization efficiency by over 40%.
In terms of algorithms, an evaluation-driven, adaptive training flywheel was developed to dynamically identify model weaknesses, intelligently reconfigure internal module responsibilities, and refine optimization strategies for key components. This has yielded performance improvements of over 12% on specialized tasks such as mathematical reasoning and code generation.
In terms of computing infrastructure, a resource-aware, high-efficiency training architecture was engineered to enable intelligent coordination across the full stack—from model parallelization and data scheduling to hardware adaptation. This architecture has increased resource utilization by 22% in thousand-GPU clusters and significantly reduced overall training costs.
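Although the framework above is described only at a high level, the core loop of an evaluation-guided data flywheel can be sketched in a few lines. The Python below is a minimal, hypothetical illustration of the general idea (score the model per domain, then upweight sampling of the weakest domains); the domain names, scores, and function names are invented for this sketch and do not represent Qwen's actual training pipeline.

```python
import random

# Hypothetical per-domain evaluation scores (accuracy in [0, 1]).
# In a real flywheel these would come from automated benchmark runs.
eval_scores = {"math": 0.62, "code": 0.71, "dialogue": 0.88, "translation": 0.83}

def weakness_weights(scores, floor=0.05):
    """Turn evaluation scores into sampling weights: weaker domains get more data."""
    raw = {task: max(1.0 - acc, floor) for task, acc in scores.items()}
    total = sum(raw.values())
    return {task: w / total for task, w in raw.items()}

def sample_training_domain(weights):
    """Pick the domain from which to draw the next training batch."""
    tasks, probs = zip(*weights.items())
    return random.choices(tasks, weights=probs, k=1)[0]

weights = weakness_weights(eval_scores)
print(weights)                         # math and code receive the largest share
print(sample_training_domain(weights))
```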
Accelerating AI Adoption Across Industries at Scale
The Qwen series has achieved mature, large-scale industrial deployment, serving over 1.05 million enterprise customers across key sectors including scientific research, finance, automotive, consumer electronics, government services, and media—generating substantial economic and societal impact.
In scientific research, Qwen empowers institutions such as the National Astronomical Observatories and the Institute of Tibetan Plateau Research, enabling breakthroughs in frontier domains like solar flare prediction and ecological monitoring, thereby driving an intelligent transformation of scientific methodologies.
In industry, "Tongyi Lingma" has generated over 3 billion lines of code, establishing itself as a widely used AI programming assistant. Use cases such as China Merchants Bank and Chongqing's government AI assistant "Yu Xiao Zhi" have demonstrably enhanced both operational efficiency and service quality. Leading automakers—including BYD, BMW, and FAW—are leveraging Qwen to build intelligent cockpits and AI engines, redefining human-vehicle interaction.
In consumer electronics, brands like RayNeo and Honor have deeply integrated Qwen into their devices, elevating intelligent interaction for hundreds of millions of users.
In media and culture, applications such as AI-powered visual generation for CCTV's Spring Festival Gala and intelligent editing platforms for major media outlets highlight Qwen's capacity to empower cultural innovation and creative expression.
By significantly lowering the barrier to AI adoption, Qwen has become a foundational platform for industrial intelligence. Its open-source model accelerates technology democratization, creating broad social value and expansive market opportunities.
Global Recognition Through Continuous Innovation and Open Collaboration
Guided by a strategy of continuous innovation and open-source empowerment, Alibaba Cloud has established Qwen as a globally leading large model, fostering a vibrant developer ecosystem with extensive international adoption. Its "full-scale, full-modality" open-source strategy is reshaping global collaboration in large model development. To date, Alibaba Cloud has released over 300 open-source models, amassed more than 600 million downloads, and inspired over 170,000 derivative models, solidifying Qwen's position as the world's leading open-source base model and the preferred choice for developers globally.
This open approach has dramatically lowered R&D barriers and catalyzed global innovation: top-tier institutions such as Stanford and UC Berkeley have built low-compute, high-performance models on top of Qwen, and these Qwen-based models have been cited thousands of times in the academic literature and serve as key baselines in multimodal and long-context research.
The release of Qwen3 garnered 17,000 GitHub stars within three hours, a record-breaking milestone that underscores its strong endorsement by the global developer community.
On the ecosystem front, Qwen has achieved deep hardware-software co-optimization with global leaders including NVIDIA, Qualcomm, and Apple. Support for Apple's MLX framework enables efficient on-device inference on iPhones and Macs, accelerating the deployment of large models in consumer electronics and edge devices. Furthermore, Qwen3 supports 119 languages and dialects, unlocking new possibilities for international applications and ensuring users worldwide can benefit from its advanced capabilities.
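As a concrete illustration of the MLX integration mentioned above, the snippet below sketches what on-device inference with the open-source mlx-lm package might look like on an Apple-silicon Mac. The model repository id is only an example of a community-converted, quantized Qwen checkpoint and should be replaced with whichever build is actually deployed; this is a sketch under those assumptions, not an official integration path.

```python
# Requires an Apple-silicon Mac and:  pip install mlx-lm
from mlx_lm import load, generate

# Example community-converted, 4-bit quantized Qwen checkpoint (illustrative id;
# substitute the Qwen build you actually intend to run on-device).
model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

prompt = "Summarize, in one sentence, why on-device inference matters."
text = generate(model, tokenizer, prompt=prompt, max_tokens=64)
print(text)
```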
International research firms such as Gartner have recognized Alibaba Cloud as a global leader in generative AI—highlighting the significant contribution of China's open-source initiatives to the advancement of global AI.
The World Internet Conference (WIC) was established as an international organization on July 12, 2022, headquartered in Beijing, China. It was jointly initiated by the Global System for Mobile Communications Association (GSMA), the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT), the China Internet Network Information Center (CNNIC), Alibaba Group, Tencent, and Zhijiang Lab.