FlagEmbedding

Dense Retrieval and Retrieval-augmented LLMs

English | 中文

Hiring: We are seeking experienced NLP researchers and student interns focusing on dense retrieval and retrieval-augmented LLMs. If you are interested, please feel free to reach out to us by email at zhengliu1026@gmail.com.

FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:

News

  • 10/12/2023: Released LLM-Embedder, a unified embedding model that supports the diverse retrieval augmentation needs of LLMs. Paper 🔥
  • 09/15/2023: The technical report of BGE has been released.
  • 09/15/2023: The massive training data of BGE has been released.
  • 09/12/2023: New models:
    • New reranker models: released the cross-encoder models BAAI/bge-reranker-base and BAAI/bge-reranker-large, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
    • Updated embedding models: released the bge-*-v1.5 embedding models to alleviate the issue with the similarity distribution and enhance retrieval ability without an instruction.
More
  • 09/07/2023: Updated the fine-tuning code: added a script to mine hard negatives and support for adding an instruction during fine-tuning.
  • 08/09/2023: BGE models are integrated into LangChain; you can use them like this. The C-MTEB leaderboard is available.
  • 08/05/2023: Released base-scale and small-scale models with the best performance among models of the same size 🤗
  • 08/02/2023: Released bge-large-* models (bge is short for BAAI General Embedding), which rank 1st on the MTEB and C-MTEB benchmarks! 🎉 🎉
  • 08/01/2023: We released the Chinese Massive Text Embedding Benchmark (C-MTEB), consisting of 31 test datasets.

Projects

BGE embedding is a general-purpose embedding model. We pre-train the models with RetroMAE and then train them on large-scale pair data using contrastive learning. You can fine-tune the embedding model on your own data following our examples; we also provide a pre-training example. Note that the goal of pre-training is to reconstruct the text, so the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first. For more training details for bge, see baai_general_embedding.
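
For a quick start, here is a minimal sketch of encoding queries and passages with the FlagModel helper from this repository and scoring them by inner product. The model name and instruction are taken from the Model List below; the exact constructor arguments may differ between versions.

```python
from FlagEmbedding import FlagModel

# Load a fine-tuned BGE embedding model; the query instruction matches the Model List.
model = FlagModel(
    "BAAI/bge-large-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)

queries = ["What is dense retrieval?"]
passages = [
    "Dense retrieval encodes queries and passages into vectors and matches them by similarity.",
    "The weather is nice today.",
]

# encode_queries prepends the instruction to each query; passages are encoded as-is.
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)

# The embeddings are normalized, so the inner product equals cosine similarity.
scores = q_embeddings @ p_embeddings.T
print(scores)
```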

BGE-v2 is in progress; it will support multilingual and long-text scenarios.

LLM Embedder is fine-tuned based on feedback from LLMs. It supports the retrieval augmentation needs of large language models, including knowledge retrieval, memory retrieval, exemplar retrieval, and tool retrieval. It is fine-tuned over 6 tasks: Question Answering, Conversational Search, Long Conversation, Long-Range Language Modeling, In-Context Learning, and Tool Learning. For more details, please refer to ./FlagEmbedding/llm_embedder/README.md.
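
For illustration, a hedged sketch of encoding with plain Hugging Face transformers. The instruction strings below are placeholders (each of the six tasks defines its own query/key instruction in ./FlagEmbedding/llm_embedder/README.md), and CLS pooling with L2 normalization is an assumption here, following the usual BGE recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder instructions: every task (QA, conversational search, tool learning, ...)
# defines its own query/key prefix -- see ./FlagEmbedding/llm_embedder/README.md.
QUERY_INSTRUCTION = "<task-specific query instruction> "
KEY_INSTRUCTION = "<task-specific key instruction> "

tokenizer = AutoTokenizer.from_pretrained("BAAI/llm-embedder")
model = AutoModel.from_pretrained("BAAI/llm-embedder")
model.eval()

def embed(texts, instruction):
    """Encode texts with an instruction prefix, CLS pooling (assumed), and L2 normalization."""
    inputs = tokenizer(
        [instruction + t for t in texts],
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        outputs = model(**inputs)
    embeddings = outputs.last_hidden_state[:, 0]  # CLS token
    return torch.nn.functional.normalize(embeddings, dim=-1)

query_emb = embed(["Which document answers my question?"], QUERY_INSTRUCTION)
key_emb = embed(["A candidate document retrieved from the corpus."], KEY_INSTRUCTION)
print((query_emb @ key_emb.T).item())
```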

The cross-encoder performs full attention over the input pair, which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by the embedding model. We train the cross-encoder on multilingual pair data; the data format is the same as for the embedding model, so you can fine-tune it easily following our example. For more details, please refer to ./FlagEmbedding/reranker/README.md.
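
A minimal sketch of re-ranking with the FlagReranker helper from this repository; argument names follow the current code and may change between versions.

```python
from FlagEmbedding import FlagReranker

# Load the cross-encoder reranker; use_fp16 speeds up inference with a small accuracy cost.
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

query = "What is dense retrieval?"
# In practice these would be the top-k passages returned by the embedding model.
candidates = [
    "Dense retrieval encodes queries and passages into vectors.",
    "The weather is nice today.",
]

# Each (query, passage) pair gets a relevance score; higher means more relevant.
scores = reranker.compute_score([[query, passage] for passage in candidates])

# Sort the candidates by reranker score.
reranked = sorted(zip(candidates, scores), key=lambda item: item[1], reverse=True)
print(reranked)
```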

Model List

bge is short for BAAI general embedding.

| Model | Language | | Description | Query instruction for retrieval |
|---|---|---|---|---|
| BAAI/llm-embedder | English | Inference Fine-tune | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See README |
| BAAI/bge-reranker-large | Chinese and English | Inference Fine-tune | a cross-encoder model which is more accurate but less efficient | |
| BAAI/bge-reranker-base | Chinese and English | Inference Fine-tune | a cross-encoder model which is more accurate but less efficient | |
| BAAI/bge-large-en-v1.5 | English | Inference Fine-tune | version 1.5 with a more reasonable similarity distribution | Represent this sentence for searching relevant passages: |
| BAAI/bge-base-en-v1.5 | English | Inference Fine-tune | version 1.5 with a more reasonable similarity distribution | Represent this sentence for searching relevant passages: |
| BAAI/bge-small-en-v1.5 | English | Inference Fine-tune | version 1.5 with a more reasonable similarity distribution | Represent this sentence for searching relevant passages: |
| BAAI/bge-large-zh-v1.5 | Chinese | Inference Fine-tune | version 1.5 with a more reasonable similarity distribution | 为这个句子生成表示以用于检索相关文章: |
| BAAI/bge-base-zh-v1.5 | Chinese | Inference Fine-tune | version 1.5 with a more reasonable similarity distribution | 为这个句子生成表示以用于检索相关文章: |
| BAAI/bge-small-zh-v1.5 | Chinese | Inference Fine-tune | version 1.5 with a more reasonable similarity distribution | 为这个句子生成表示以用于检索相关文章: |
| BAAI/bge-large-en | English | Inference Fine-tune | 🏆 rank 1st in the MTEB leaderboard | Represent this sentence for searching relevant passages: |
| BAAI/bge-base-en | English | Inference Fine-tune | a base-scale model with ability similar to bge-large-en | Represent this sentence for searching relevant passages: |
| BAAI/bge-small-en | English | Inference Fine-tune | a small-scale model with competitive performance | Represent this sentence for searching relevant passages: |
| BAAI/bge-large-zh | Chinese | Inference Fine-tune | 🏆 rank 1st in the C-MTEB benchmark | 为这个句子生成表示以用于检索相关文章: |
| BAAI/bge-base-zh | Chinese | Inference Fine-tune | a base-scale model with ability similar to bge-large-zh | 为这个句子生成表示以用于检索相关文章: |
| BAAI/bge-small-zh | Chinese | Inference Fine-tune | a small-scale model with competitive performance | 为这个句子生成表示以用于检索相关文章: |
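
The BGE embedding models can also be loaded with sentence-transformers. The sketch below shows how the query instruction from the table is used: it is prepended to queries only, while passages are encoded without it (assumes the sentence-transformers package is installed).

```python
from sentence_transformers import SentenceTransformer

# Query instruction from the table above; it is prepended to queries only.
instruction = "Represent this sentence for searching relevant passages: "

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

queries = ["how are BGE models trained"]
passages = [
    "BGE models are pre-trained with RetroMAE and then trained on large-scale pair data with contrastive learning."
]

q_emb = model.encode([instruction + q for q in queries], normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)

# With normalized embeddings, the inner product is the cosine similarity.
print(q_emb @ p_emb.T)
```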

Contributors:

Citation

If you find this repository useful, please consider giving it a star ⭐ and a citation.

@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, 
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{llm_embedder,
      title={Retrieve Anything To Augment Large Language Models}, 
      author={Peitian Zhang and Shitao Xiao and Zheng Liu and Zhicheng Dou and Jian-Yun Nie},
      year={2023},
      eprint={2310.07554},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}

License

FlagEmbedding is licensed under the MIT License. The released models can be used for commercial purposes free of charge.
