QWEN CHAT | GITHUB | HUGGING FACE | MODELSCOPE | DISCORD

We release Qwen2.5-VL, the new flagship vision-language model of Qwen and a significant leap from the previous Qwen2-VL. To try the latest model, visit Qwen Chat and choose Qwen2.5-VL-72B-Instruct. We also open-source both base and instruct models in three sizes (3B, 7B, and 72B) on both Hugging Face and ModelScope.
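For those who want to run the open models directly, here is a minimal inference sketch, assuming a transformers version that includes Qwen2.5-VL support and the qwen-vl-utils package; the image path is a placeholder, and the model card on Hugging Face remains the authoritative reference:

# Minimal inference sketch (not an official quickstart): assumes a transformers release
# with Qwen2.5-VL support and the qwen-vl-utils package; see the model card for details.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/your_image.jpg"},  # placeholder: local path or URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat prompt and gather the vision inputs referenced in the messages.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding the reply.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])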

The key features include:

  • Understand things visually: Qwen2.5-VL is not only proficient at recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.

  • Being agentic: Qwen2.5-VL acts directly as a visual agent that can reason and dynamically direct tools, making it capable of computer use and phone use.

  • Understanding long videos and capturing events: Qwen2.5-VL can comprehend videos of over 1 hour, and it now has the new ability to capture events by pinpointing the relevant video segments.

  • Capable of visual localization in different formats: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.

  • Generating structured outputs: for data such as scanned invoices, forms, and tables, Qwen2.5-VL supports structured outputs of their contents, benefiting applications in finance, commerce, and beyond.

Performance

We evaluate our models against the SOTA models as well as the best models of similar size. The flagship Qwen2.5-VL-72B-Instruct achieves competitive performance on a series of benchmarks covering a range of domains and tasks, including college-level problems, math, document understanding, general question answering, video understanding, and visual agents. Notably, Qwen2.5-VL shows significant advantages in understanding documents and diagrams, and it can act as a visual agent without task-specific fine-tuning.

Among the smaller models, Qwen2.5-VL-7B-Instruct outperforms GPT-4o-mini on a number of tasks, and Qwen2.5-VL-3B, a solution for edge AI, even outperforms the 7B model of our previous version, Qwen2-VL.

Model Capabilities

1. World-wide Image Recognition

Qwen2.5-VL has significantly enhanced its general image recognition capabilities, expanding the range of recognizable categories to an ultra-large number. It covers not only plants, animals, and landmarks of famous mountains and rivers, but also IPs from film and TV series, as well as a wide variety of products.

Example: Attractions Identification
User
What are these attractions? Please give their names in Chinese and English. [image]
Qwen2.5-VL

These attractions are:

  1. Top left: The Great Pyramids of Giza (埃及的吉萨大金字塔)
  2. Top right: The Great Wall of China (中国的长城)
  3. Bottom left: The Statue of Liberty (美国的自由女神像)
  4. Bottom right: The Terracotta Army (中国的秦始皇兵马俑)

2. Precise Object Grounding

Qwen2.5-VL utilizes bounding boxes and point-based representations for grounding, enabling hierarchical positioning and standardized JSON output. This enhanced localization capability serves as a foundation for visual reasoning.

Example: Precise Object Grounding
User
Detect all motorcyclists in the image and return their locations in the form of coordinates. The format of output should be like {"bbox_2d": [x1, y1, x2, y2], "label": "motorcyclist", "sub_label": "wearing helmet" # or "not wearing helmet"}. [image]
Qwen2.5-VL
[
	{"bbox_2d": [341, 258, 397, 360], "label": "motorcyclist", "sub_label": "not wearing helmat "},
	{"bbox_2d": [212, 332, 274, 448], "label": "motorcyclist", "sub_label": "not wearing helmat "},
	{"bbox_2d": [66, 124, 112, 198], "label": "motorcyclist", "sub_label": "not wearing helmat "},
	{"bbox_2d": [5, 235, 63, 320], "label": "motorcyclist", "sub_label": "wearing helmat "}
]
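Since the coordinates are returned in the image's own pixel scale, the JSON above can be consumed directly. The following is a small illustrative sketch for drawing such detections with Pillow; the image path is a placeholder, and the detections are copied from the reply above:

# Illustrative sketch: draw Qwen2.5-VL grounding output on the source image.
# The image path is a placeholder; "detections" mirrors the JSON reply above.
import json
from PIL import Image, ImageDraw

raw_reply = """[
    {"bbox_2d": [341, 258, 397, 360], "label": "motorcyclist", "sub_label": "not wearing helmet"},
    {"bbox_2d": [5, 235, 63, 320], "label": "motorcyclist", "sub_label": "wearing helmet"}
]"""
detections = json.loads(raw_reply)

image = Image.open("motorcyclists.jpg").convert("RGB")  # placeholder path
draw = ImageDraw.Draw(image)
for det in detections:
    x1, y1, x2, y2 = det["bbox_2d"]  # absolute pixel coordinates: top-left and bottom-right corners
    color = "green" if det["sub_label"] == "wearing helmet" else "red"
    draw.rectangle([x1, y1, x2, y2], outline=color, width=3)
    draw.text((x1, max(0, y1 - 12)), det["sub_label"], fill=color)
image.save("motorcyclists_annotated.jpg")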

3. Enhanced Text Recognition and Understanding

Qwen2.5-VL upgrades its OCR capabilities to a new level, with enhanced multi-scenario, multi-language, and multi-orientation text recognition and text localization. Furthermore, its information extraction has been significantly strengthened to meet the growing demand for digitalization and intelligent processing in areas such as qualification review and financial business.
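As one illustration of this kind of information extraction, a common pattern is to ask for a fixed set of fields as JSON and then parse the reply; the prompt, field names, and helper below are hypothetical and not part of any official API:

# Hypothetical prompt-and-parse pattern for information extraction from a document image.
# The field names and the stand-in reply are illustrative only.
import json
import re

prompt = (
    "Extract the following fields from the invoice image and return a single JSON object: "
    "invoice_number, issue_date, vendor_name, total_amount, currency. "
    "Use null for any field that is not present."
)

def parse_json_reply(reply: str) -> dict:
    """Pull the first JSON object out of a model reply, tolerating surrounding text or a code fence."""
    match = re.search(r"\{.*\}", reply, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))

# Stand-in reply, in the shape the prompt asks for:
reply = '{"invoice_number": "INV-0042", "issue_date": "2025-01-28", "vendor_name": null, "total_amount": 1280.5, "currency": "CNY"}'
print(parse_json_reply(reply))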

Example: Vertical Text Recognition
User
Read all the text in the image, output in lines. [image]
Qwen2.5-VL
平安喜樂 八方來財 順風順水順財神 暴富 大吉 諸事皆順 福 時來運轉 發財 如意 有錢有勢有前程 好運來

4. Powerful Document Parsing

For Qwen2.5-VL, we designed a unique document parsing format called QwenVL HTML, which extracts layout information based on HTML. QwenVL HTML can parse documents in various scenarios, such as magazines, research papers, web pages, and even mobile screenshots.

Example: QwenVL HTML
User
QwenVL HTML [image]
Qwen2.5-VL
<html>
<body>
<h1 data-bbox="879 283 1605 348">Qwen2.5 Technical Report</h1>
<p data-bbox="1124 406 1360 456"></p>
<address class="author"><p>Qwen Team</p></address>
<div class="image" data-bbox="778 491 850 541"><img data-bbox="778 491 850 541"/></div>
<p data-bbox="885 491 1460 541">https://huggingface.co/Qwen</p>
<div class="image" data-bbox="778 541 850 589"><img data-bbox="778 541 850 589"/></div>
<p data-bbox="885 541 1708 589">https://modelscope.cn/organization/qwen</p>
<div class="image" data-bbox="792 589 850 639"><img data-bbox="792 589 850 639"/></div>
<p data-bbox="885 589 1584 639">https://github.com/QwenLM/Qwen2.5</p>
<h2 data-bbox="1143 681 1344 733">Abstract</h2>
<p data-bbox="434 785 2050 1252">In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised finetuning with over 1 million samples, as well as multistage reinforcement learning, including offline learning DPO and online learning GRPO. Post-training techniques significantly enhance human preference, and notably improve long text generation, structural data analysis, and instruction following.</p>
<p data-bbox="434 1262 2050 1587">To handle diverse and varied use cases effectively, we present Qwen2.5 LLM series in rich configurations. The open-weight offerings include base models and instruction-tuned models in sizes of $0.5 \mathrm{~B}, 1.5 \mathrm{~B}, 3 \mathrm{~B}, 7 \mathrm{~B}, 14 \mathrm{~B}, 32 \mathrm{~B}$, and $72 \mathrm{~B}$ parameters. Quantized versions of the instruction-tuned models are also provided. Over 100 models can be accessed from Hugging Face Hub, ModelScope, and Kaggle. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio.</p>
<p data-bbox="434 1587 2050 2052">Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates competitive performance to the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o respectively. Additionally, as the foundation, Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math (Yang et al., 2024b), Qwen2.5-Coder (Hui et al., 2024), QwQ (Qwen Team, 2024d), and multimodal models.</p>
<div class="image" data-bbox="408 2275 2086 2800"><img data-bbox="408 2275 2086 2800"/></div>
<p data-bbox="289 2864 2202 3058">Figure 1: In the iterative development of the Qwen series, data scaling has played a crucial role. Qwen 2.5, which leverages 18 trillion tokens for pre-training, has demonstrated the most advanced capabilities within the Qwen series, especially in terms of domain expertise, underscoring the importance of scale together with mixture in enhancing the model’s capabilities.</p>
</body>
</html>
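Because the layout information lives in the data-bbox attributes, downstream code can recover each block of text together with its position using only the standard library. The following is a rough, unofficial sketch that handles the flat structure shown above:

# Rough, unofficial sketch: extract text blocks and their data-bbox coordinates from QwenVL HTML.
from html.parser import HTMLParser

class QwenVLHTMLParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.blocks = []      # list of (tag, (x1, y1, x2, y2), text)
        self._tag = None
        self._bbox = None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-bbox" in attrs:
            self._flush()     # close any block still in progress
            self._tag = tag
            self._bbox = tuple(int(v) for v in attrs["data-bbox"].split())

    def handle_data(self, data):
        if self._bbox is not None and data.strip():
            self._buffer.append(data.strip())

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._flush()

    def _flush(self):
        if self._bbox is not None:
            self.blocks.append((self._tag, self._bbox, " ".join(self._buffer)))
        self._tag, self._bbox, self._buffer = None, None, []

# Tiny sample taken from the output above; in practice, feed the full model reply.
qwenvl_html = (
    '<h1 data-bbox="879 283 1605 348">Qwen2.5 Technical Report</h1>'
    '<p data-bbox="885 491 1460 541">https://huggingface.co/Qwen</p>'
)
parser = QwenVLHTMLParser()
parser.feed(qwenvl_html)
for tag, bbox, text in parser.blocks:
    print(tag, bbox, text[:60])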

5. Enhanced Video Comprehension Ability

Qwen2.5-VL’s video comprehension capabilities have been comprehensively upgraded. In terms of temporal processing, we have introduced dynamic frame rate (FPS) training and absolute time encoding. As a result, the model not only supports the understanding of ultra-long, hour-scale videos but also achieves second-level event localization. It can accurately comprehend content from long videos spanning hours, search for specific events within a video, and summarize key points from different time segments, allowing users to quickly and efficiently extract the crucial information embedded in videos.
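To give a feel for the idea behind absolute time encoding, the toy sketch below maps frame timestamps, rather than frame indices, onto temporal position ids; the seconds-per-id granularity is an assumption for illustration and not the model's actual parameterization:

# Toy illustration of absolute time encoding: temporal position ids are derived from
# real timestamps rather than frame indices, so different sampling rates stay aligned.
# SECONDS_PER_ID is an assumed granularity for illustration only.
SECONDS_PER_ID = 0.5

def temporal_ids(timestamps_s):
    """Map frame timestamps (in seconds) to integer temporal position ids."""
    return [round(t / SECONDS_PER_ID) for t in timestamps_s]

# The same 4-second clip sampled at 2 fps and at 0.5 fps:
frames_2fps = [i * 0.5 for i in range(9)]   # 0.0, 0.5, ..., 4.0 s
frames_05fps = [i * 2.0 for i in range(3)]  # 0.0, 2.0, 4.0 s

print(temporal_ids(frames_2fps))   # [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(temporal_ids(frames_05fps))  # [0, 4, 8]: the gaps reflect real elapsed time

With index-based encoding, both clips would receive the ids 0, 1, 2, ..., giving the model no direct signal about how much real time separates two frames; tying the ids to timestamps is what enables second-level event localization regardless of the sampling rate.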

Example: Information Extraction from Videos
User
Watch the video and list the paper titles in a table.
Qwen2.5-VL

Here is a table listing the paper titles from the video:

Paper Title
---------------------------------------------------------------------------
A New Sequential Prediction Framework with Spatial-temporal Embedding
NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation
Deep Unified Representation for Heterogeneous Recommendation
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
KoMen: Domain Knowledge Guided Interaction Recommendation for Emerging Scenarios
Vision-Language Pre-Training for Boosting Scene Text Detectors

6. Superior Computer and Mobile Agent

Example: Mobile Use
User
Please help me book a one-way ticket in the bookings app. The starting point is Chongqing Jiangbei Airport and the destination is Beijing Capital Airport, on January 28th.

Model Updates

Compared to Qwen2-VL, Qwen2.5-VL enhances the model’s perception of temporal and spatial scales and further simplifies the network structure to improve efficiency.

  • Perception of Time and Image Size

In the spatial dimension, Qwen2.5-VL not only dynamically converts images of different sizes into tokens of varying lengths but also represents coordinates such as detection boxes and points directly in the actual pixel scale of the image, without the traditional coordinate normalization. This allows the model to learn the scale of images directly. In the temporal dimension, dynamic FPS (frames per second) training and absolute time encoding have been introduced, aligning mRoPE ids directly with the progression of time. This enables the model to learn the pace of time through the intervals between temporal ids.

  • More Concise and Efficient Visual Encoder

The visual encoder plays a crucial role in multimodal large models. We trained a native dynamic-resolution ViT from scratch, including CLIP pre-training, vision-language alignment, and end-to-end training stages. To address the load imbalance the ViT causes during the training and inference of multimodal large models, we introduced window attention to effectively reduce the computational load on the ViT side. In our ViT setup, only four layers are full-attention layers, while the rest use window attention. The maximum window size is 8x8, and regions smaller than 8x8 do not require padding; instead, they retain their original scale, ensuring that the model maintains native resolution. Additionally, to simplify the overall network structure, we made the ViT architecture more consistent with LLMs by adopting RMSNorm and SwiGLU.
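As a rough illustration of the windowing scheme (a simplified sketch, not the actual ViT code), the helper below groups the patches of a variable-resolution grid into windows of at most 8x8 patches, leaving edge regions at their original size instead of padding them:

# Simplified illustration of window partitioning for a variable-resolution patch grid.
# Not the actual Qwen2.5-VL ViT code: windows are at most 8x8 patches, and edge
# regions smaller than a full window keep their original size rather than being padded.
WINDOW = 8

def window_partition(grid_h, grid_w, window=WINDOW):
    """Return a list of windows; each window is a list of (row, col) patch indices."""
    windows = []
    for top in range(0, grid_h, window):
        for left in range(0, grid_w, window):
            h = min(window, grid_h - top)   # shrink at the bottom edge, no padding
            w = min(window, grid_w - left)  # shrink at the right edge, no padding
            windows.append([(top + r, left + c) for r in range(h) for c in range(w)])
    return windows

# A 20x13 patch grid yields 8x8, 8x5, 8x8, 8x5, 4x8, and 4x5 windows.
print([len(win) for win in window_partition(20, 13)])  # [64, 40, 64, 40, 32, 20]

In the window-attention layers, self-attention is restricted to the patches inside each window, while the four full-attention layers still attend across the whole grid; this is the usual motivation for window attention, since per-layer cost then grows roughly linearly with the number of patches instead of quadratically.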

What’s Next

In the near future, we will further enhance the model’s problem-solving and reasoning capabilities, while incorporating more modalities. This will make the model smarter and move us towards an integrated omni-model that can handle multiple types of input and tasks.