CodeQwen1.5

CodeQwen1.5 is the code version of Qwen, the large language model series developed by Qwen team, Alibaba Cloud.

🤗 Hugging Face   |   🤖 ModelScope   |    📑 Blog    |   📖 Documentation
🖥️ Demo   |   💬 WeChat (微信)   |   🫨 Discord  

Visit our Hugging Face or ModelScope organization (click the links above) and search for checkpoints whose names start with CodeQwen1.5-, and you will find all you need. Enjoy!
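If you prefer to browse programmatically, here is a minimal sketch using the huggingface_hub client (this assumes the checkpoints are published under the Qwen organization on the Hugging Face Hub):

from huggingface_hub import list_models

# List every model in the Qwen organization whose name matches "CodeQwen1.5".
for model in list_models(author="Qwen", search="CodeQwen1.5"):
    print(model.id)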

Introduction

CodeQwen1.5 is the code-specific version of Qwen1.5. It is a transformer-based, decoder-only language model pretrained on a large amount of code data. Its key features are:

  1. Strong code generation capabilities and competitive performance across a series of benchmarks;
  2. Support for long-context understanding and generation, with a context length of 64K tokens;
  3. Support for 92 coding languages:
['ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bluespec', 'c', 'c#', 'c++', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir', 'elm', 'emacs-lisp', 'erlang', 'f#', 'fortran', 'glsl', 'go', 'groovy', 'haskell', 'html', 'idris', 'isabelle', 'java', 'java-server-pages', 'javascript', 'json', 'julia', 'jupyter-notebook', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell', 'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'objectc++', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog', 'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme', 'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'swift', 'systemverilog', 'tcl', 'tcsh', 'tex', 'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'vue', 'xslt', 'yacc', 'yaml', 'zig']
  4. Excellent performance on tasks such as text-to-SQL and bug fixing.

Detailed performance figures and a full introduction are presented in this 📑 blog.

Performance

EvalPlus (HumanEval, MBPP)

| Model | Size | HumanEval (0-shot) | HumanEval+ (0-shot) | MBPP (0-shot) | MBPP+ (0-shot) | MBPP (3-shot) |
|---|---|---|---|---|---|---|
| **Base Model** |  |  |  |  |  |  |
| CodeLlama-Base | 7B | 33.5 | 25.6 | 52.1 | 41.6 | 38.6 |
| StarCoder2 | 7B | 35.4 | 29.9 | 54.4 | 45.6 | 51.0 |
| DeepSeek-Coder-Base | 6.7B | 47.6 | 39.6 | 70.2 | 56.6 | 60.6 |
| CodeQwen1.5 | 7B | 51.8 | 45.7 | 72.2 | 60.2 | 61.8 |
| **Chat Model** |  |  |  |  |  |  |
| GPT-3.5-Turbo | - | 76.8 | 70.7 | 82.5 | 69.7 | 70.8 |
| GPT-4-Turbo (Nov 2023) | - | 85.4 | 81.7 | 83.5 | 70.7 | 80.0 |
| DeepSeek-Coder-Instruct | 6.7B | 73.8 | 70.1 | 73.2 | 63.4 | 65.4 |
| CodeQwen1.5-Chat | 7B | 83.5 | 78.7 | 77.7 | 67.2 | 70.6 |

LiveCodeBench

| Model | Size | Code Generation, All Time (Pass@1) | Code Generation, 2023/9/1 ~ 2024/4/1 (Pass@1) |
|---|---|---|---|
| **Base Model** |  |  |  |
| CodeLlama-Base | 7B | 6.5 | 7.6 |
| StarCoder2 | 7B | 11.3 | 12.7 |
| DeepSeek-Coder-Base | 6.7B | 19.1 | 13.7 |
| CodeQwen1.5 | 7B | 21.8 | 19.3 |
| **Chat Model** |  |  |  |
| CodeLlama-Instruct | 7B | 10.6 | 12.4 |
| DeepSeek-Coder-Instruct | 6.7B | 21.6 | 19.2 |
| CodeQwen1.5-Chat | 7B | 25.0 | 23.2 |

MultiPL-E

| Model | Size | Python | C++ | Java | PHP | TS | C# | Bash | JS | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| **Base Model** |  |  |  |  |  |  |  |  |  |  |
| CodeLlama-Base | 7B | 31.7 | 29.8 | 34.2 | 23.6 | 36.5 | 36.7 | 12.0 | 29.2 | 29.2 |
| StarCoder2-Base | 7B | 35.3 | 40.9 | 37.3 | 29.2 | 37.7 | 40.5 | 9.4 | 36.0 | 33.3 |
| DeepSeek-Coder-Base | 6.7B | 49.4 | 50.3 | 43.0 | 38.5 | 49.7 | 50.0 | 28.5 | 48.4 | 44.7 |
| CodeQwen1.5 | 7B | 52.4 | 52.2 | 42.4 | 46.6 | 52.2 | 55.7 | 36.7 | 49.7 | 48.5 |
| **Chat Model** |  |  |  |  |  |  |  |  |  |  |
| GPT-3.5-Turbo | - | 76.2 | 63.4 | 69.2 | 60.9 | 69.1 | 70.8 | 42.4 | 67.1 | 64.9 |
| GPT-4 | - | 84.1 | 76.4 | 81.6 | 77.2 | 77.4 | 79.1 | 58.2 | 78.0 | 76.5 |
| DeepSeek-Coder-Instruct | 6.7B | 78.6 | 63.4 | 68.4 | 68.9 | 67.2 | 72.8 | 36.7 | 72.7 | 66.1 |
| CodeQwen1.5-Chat | 7B | 83.2 | 71.2 | 70.1 | 73.5 | 75.4 | 75.9 | 41.1 | 78.2 | 71.1 |

Text-to-SQL

| Model | Size | Spider, Execution Accuracy (Dev Set) | BIRD, Execution Accuracy (Dev Set) |
|---|---|---|---|
| GPT-3.5-Turbo | - | 70.1 | 37.2 |
| GPT-4 | - | 85.3 | 50.7 |
| CodeLlama-Instruct | 7B | 59.5 | 22.4 |
| DeepSeek-Coder-Instruct | 6.7B | 70.1 | 39.4 |
| CodeQwen1.5-Chat | 7B | 77.9 | 42.0 |

Requirements

  • transformers>=4.37.0 for Qwen1.5 dense models.

Warning

🚨 This is required because `transformers` has included the Qwen2 model code since version `4.37.0`.

You can install the required packages with the following command:

pip install -r requirements.txt
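If you want to confirm the requirement is met, here is a minimal sketch that checks the installed transformers version (packaging ships as a dependency of transformers):

import transformers
from packaging import version

# Qwen2 model code is only available in transformers >= 4.37.0.
installed = version.parse(transformers.__version__)
assert installed >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; "
    "run: pip install -U 'transformers>=4.37.0'"
)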

Quick Start

You can chat with CodeQwen1.5-7B-Chat in just a few lines of code using transformers. Essentially, we build the tokenizer and the model with the from_pretrained method, and we use the generate method to chat with the help of the chat template provided by the tokenizer. Below is an example:

from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"  # the device to move the model inputs onto

# You no longer need to pass "trust_remote_code=True"
tokenizer = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B-Chat")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/CodeQwen1.5-7B-Chat",
    torch_dtype="auto",  # load weights in the checkpoint's native precision
    device_map="auto",
).eval()

# Instead of using model.chat(), we directly use model.generate()
# But you need to use tokenizer.apply_chat_template() to format your inputs as shown below
prompt = "write a quick sort algorithm."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Directly use generate() and tokenizer.decode() to get the output.
# Use `max_new_tokens` to control the maximum output length.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

The apply_chat_template() function converts the messages into a format that the model can understand. The add_generation_prompt argument appends a generation prompt, namely <|im_start|>assistant\n, to the input. Notably, we apply the ChatML template for chat models, following our previous practice. The max_new_tokens argument sets the maximum length of the response, and tokenizer.batch_decode() decodes the generated token IDs back into text. As for the input, the messages list above shows how to format your dialog history and system prompt.
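If you want to see tokens printed as they are generated rather than collected all at once, here is a minimal sketch using the TextStreamer utility from transformers (it reuses the model, tokenizer, and model_inputs built above):

from transformers import TextStreamer

# Stream decoded text to stdout as tokens are generated,
# skipping the prompt and special tokens in the printed output.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    streamer=streamer,
)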

Citation

If you find our work helpful, feel free to cite us:

@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}

Contact Us

If you would like to leave a message for our research or product team, join our Discord or WeChat group!
