SeanLee97 / AnglE

Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard

Home Page: https://arxiv.org/abs/2309.12871


Fine-tune an LLM for WhereIsAI/UAE-Large-V1 embeddings first?

sergiosolorzano opened this issue · comments

Hi,

To use the embeddings generated by WhereIsAI/UAE-Large-V1 in an LLM, do I first need to fine-tune a pre-trained LLM with AnglE so that the WhereIsAI/UAE-Large-V1 embeddings are compatible with it? e.g.

from angle_emb import AnglE

angle = AnglE.from_pretrained('NousResearch/Llama-2-7b-hf', pretrained_lora_path='SeanLee97/angle-llama-7b-nli-v2')

Thank you!
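For context, a minimal sketch of using WhereIsAI/UAE-Large-V1 on its own and comparing the resulting vectors with cosine similarity. This assumes the `angle_emb` package is installed; the `pooling_strategy='cls'` argument follows the model's published usage notes but is an assumption here, and running the guarded block downloads the model weights:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    # Requires `pip install angle_emb`; downloads the UAE-Large-V1 weights.
    from angle_emb import AnglE

    angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1',
                                  pooling_strategy='cls')  # assumed setting
    vecs = angle.encode(['a sunny day', 'bright sunshine'], to_numpy=True)
    print(cosine_sim(vecs[0], vecs[1]))
```

The `cosine_sim` helper works on any pair of equal-length embedding vectors, independent of which model produced them.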