
Introduction

In early April, we introduced CodeQwen1.5, which garnered significant attention from the community. Since then, we have been working to further enhance the coding model. Today, we are excited to announce the release of the next generation of open-source coding models, Qwen2.5-Coder, and to officially rename CodeQwen to Qwen-Coder. We believe "Coder" is more human-like and agile, reflecting our vision of it becoming a true coding partner in the future. Qwen2.5-Coder is part of the Qwen2.5 series and comes in three model sizes: 1.5B, 7B, and a 32B version (coming soon).

This update focuses on two main improvements: scaling up the code training data and enhancing coding capabilities while maintaining strong performance in other core areas like math and general tasks.

💻 Code More: Qwen2.5-Coder builds on the strong Qwen2.5 base and is further trained on a larger scale of code data, including source code, text-code grounding data, and synthetic data, totaling 5.5 trillion tokens. This leads to significant improvements in code-related tasks.

📚 Learn More: While enhancing coding abilities, we also aimed to retain the base model's strengths in math and general capabilities. Therefore, Qwen2.5-Coder incorporates additional data on mathematics and general abilities, providing a comprehensive foundation for real-world applications such as Code Agents.

Qwen2.5-Coder: Base Models

Qwen2.5-Coder supports up to 128K tokens of context, covers 92 programming languages, and achieves remarkable improvements across a range of code-related evaluation tasks, including code generation, multi-language code generation, code completion, and code repair. Notably, the open-source 7B version of Qwen2.5-Coder outperforms even larger models such as DeepSeek-Coder-V2-Lite and Codestral, making it one of the most powerful base code models available. Beyond code tasks, Qwen2.5-Coder also demonstrates competitive math capabilities on evaluations such as GSM8K and MATH. On general tasks, evaluations on MMLU and ARC show that Qwen2.5-Coder retains the general-ability performance of Qwen2.5.
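
To make the completion ability concrete, here is a minimal sketch of fill-in-the-middle (FIM) infilling with the base model via Hugging Face transformers. The model ID corresponds to the released 7B checkpoint; the FIM special tokens (<|fim_prefix|>, <|fim_suffix|>, <|fim_middle|>) follow the format documented on the model card, and the snippet being completed is our own illustrative example, so check the model card before relying on this exact layout.

# Minimal FIM sketch: ask the base model to fill in the missing middle of a function.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Code before and after the gap we want the model to fill (illustrative example).
prefix = "def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n"
suffix = "\n    return quicksort(left) + [pivot] + quicksort(right)\n"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, i.e. the filled-in middle.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))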

Qwen2.5-Coder-Instruct: Instruction-Tuned Models

Building on the Qwen2.5-Coder base, we fine-tuned it with instruction data to create Qwen2.5-Coder-Instruct. This instruction-tuned model not only further improves task performance but also demonstrates exceptional generalization across various benchmarks.
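
For instance, the instruct model can be queried through the standard transformers chat interface. The sketch below assumes the released 7B-Instruct checkpoint on Hugging Face and relies on the chat template shipped with its tokenizer; the system and user prompts are our own illustrative choices.

# Minimal chat sketch with the instruction-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
# The tokenizer's chat template converts the message list into the model's prompt format.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the generated reply, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))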

Qwen2.5-Coder-Instruct excels in several key areas:

  1. Outstanding Multi-programming Expert: We expanded the multi-language evaluations using McEval, covering more than 40 programming languages. The results show that Qwen2.5-Coder-Instruct performs remarkably well across many languages, including niche ones.
  2. Code Reasoning: We believe code reasoning is closely tied to general reasoning skills. Using CRUXEval as a benchmark, the results show that Qwen2.5-Coder-Instruct excels at code reasoning tasks. Interestingly, as code reasoning improves, the model's ability to follow complex instructions also improves, encouraging us to further explore how code can enhance general skills.
  3. Math Reasoning: Math and code are often discussed together: math is the foundation of code, and code is a key tool for math. Qwen2.5-Coder-Instruct shines in both code and math tasks, proving itself a true "science student", as the table below shows.

     Model                             Math   GSM8K   GaoKao2023en   OlympiadBench   CollegeMath   AIME24
     DeepSeek-Coder-V2-Lite-Instruct   61.0   87.6    56.1           26.4            39.8           6.7
     Qwen2.5-Coder-7B-Instruct         66.8   86.7    60.5           29.8            43.5          10.0
  4. Basic Capabilities: We also assessed general capabilities, and the results indicate that Qwen2.5-Coder-Instruct maintains the advantages of Qwen2.5 in terms of general abilities.

     Model                             AMC23   MMLU   MMLU-Pro   IFEval   CEval   GPQA
     DeepSeek-Coder-V2-Lite-Instruct   40.4    42.5   60.6       38.6     60.1    27.6
     Qwen2.5-Coder-7B-Instruct         42.5    45.6   68.7       58.6     61.4    35.6

License

Qwen2.5-Coder is released under the Apache 2.0 license. We hope this increased openness will accelerate its application in code intelligence.

What’s Next for Qwen2.5-Coder?

We are preparing the 32B version of Qwen2.5-Coder, aiming to challenge proprietary models. Stay tuned—it’s coming soon! Additionally, we’re exploring powerful code-centric reasoning models to push the boundaries of code intelligence.

Citation

@article{qwen2,
  title={Qwen2 Technical Report},
  journal={arXiv preprint arXiv:2407.10671},
  year={2024}
}