LLaMA2-Accessory: An Open-source Toolkit for LLM Development 🚀


🤗 HF Repo • 👋 Join our WeChat • 🚀 Demo

🚀 LLaMA2-Accessory is an open-source toolkit for the pretraining, finetuning, and deployment of Large Language Models (LLMs) and multimodal LLMs. This repo is mainly inherited from LLaMA-Adapter, with more advanced features. 🧠

✨ Within this toolkit, we present SPHINX, a versatile multimodal large language model (MLLM) that combines a diverse array of training tasks, data domains, and visual embeddings.

News

  • [2023.12.08] We release OneLLM, which aligns eight modalities to language using a unified framework! 🔥🔥🔥
  • [2023.11.17] We release SPHINX-V2, featuring the same architecture but with enhanced and broader capabilities! 🔥🔥🔥
  • [2023.10.17] We release the demo, code, and model of SPHINX! 🔥🔥🔥
  • [2023.09.15] We now support Falcon 180B! 🔥🔥🔥
  • [2023.09.14] WeMix-LLaMA2-70B shows excellent performance on the OpenCompass benchmark! 🔥🔥🔥
  • [2023.09.02] We now support InternLM! 🔥🔥🔥
  • [2023.08.28] We release quantized LLMs with OmniQuant, an efficient and accurate quantization algorithm that supports even extremely low-bit settings. A multimodal version is coming soon! 🔥🔥
  • [2023.08.27] We now support CodeLLaMA and instruction finetuning on evol-code-alpaca! 🔥🔥
  • [2023.08.27] We release our documentation in a web book format. 🔗 Check it out here!
  • [2023.08.21] We release the quantization code and evaluation results! 🔥
  • [2023.08.05] We release the multimodal finetuning code and checkpoints! 🔥
  • [2023.07.23] Initial release 📌

Features

Setup

โš™๏ธ For environment installation, please refer to Environment Setup.

Model Usage

🤖 Instructions for model pretraining, finetuning, inference, and other related topics are all available in the documentation.

Frequently Asked Questions (FAQ)

โ“ Encountering issues or have further questions? Find answers to common inquiries here. We're here to assist you!

Demos

💡 Our model SPHINX now supports generating high-quality bounding boxes and then presenting masks created by SAM for all objects within an image, driven by input prompts. Give it a try here! 🚀
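The two-stage flow described above (the MLLM grounds the prompt to boxes, then each box is passed to a SAM-style segmenter as a box prompt) can be sketched roughly as follows. This is an illustrative outline only, not the repo's actual API: `predict_boxes` and `segment_with_box` are hypothetical stand-ins for the SPHINX grounding call and for a `SamPredictor.predict(box=...)`-style segmenter.

```python
import numpy as np

def predict_boxes(prompt: str) -> list[np.ndarray]:
    """Hypothetical stand-in for the MLLM's grounded box prediction.

    Returns boxes in xyxy pixel coordinates for objects matching the prompt.
    """
    return [np.array([10, 10, 50, 50])]

def segment_with_box(image: np.ndarray, box: np.ndarray) -> np.ndarray:
    """Stand-in for a SAM-style box-prompted segmenter.

    A real segmenter predicts a tight object mask inside the box;
    here we simply fill the box region to show the data flow.
    """
    mask = np.zeros(image.shape[:2], dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

# Stage 1: prompt -> boxes; Stage 2: each box -> mask.
image = np.zeros((64, 64, 3), dtype=np.uint8)
boxes = predict_boxes("the cat")
masks = [segment_with_box(image, b) for b in boxes]
```

In the real pipeline, stage 2 would be SAM's box-prompted prediction on the original image, so each mask is one per detected box.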

Core Contributors

Chris Liu, Ziyi Lin, Guian Fang, Jiaming Han, Yijiang Liu, Renrui Zhang

Project Leader

Peng Gao, Wenqi Shao, Shanghang Zhang

Hiring Announcement

🔥 We are hiring interns, postdocs, and full-time researchers at the General Vision Group, Shanghai AI Lab, with a focus on multi-modality and vision foundation models. If you are interested, please contact gaopengcuhk@gmail.com.

Citation

If you find our code and paper useful, please kindly cite:

@article{zhang2023llamaadapter,
  title={LLaMA-Adapter: Efficient Finetuning of Language Models with Zero-init Attention},
  author={Zhang, Renrui and Han, Jiaming and Liu, Chris and Gao, Peng and Zhou, Aojun and Hu, Xiangfei and Yan, Shilin and Lu, Pan and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2303.16199},
  year={2023}
}
@article{gao2023llamaadapterv2,
  title={LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model},
  author={Gao, Peng and Han, Jiaming and Zhang, Renrui and Lin, Ziyi and Geng, Shijie and Zhou, Aojun and Zhang, Wei and Lu, Pan and He, Conghui and Yue, Xiangyu and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2304.15010},
  year={2023}
}

Acknowledgement


License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

About

An Open-source Toolkit for LLM Development

https://llama2-accessory.readthedocs.io/
