🦙 LLAMA embeddings
jobergum opened this issue · comments
Will you add support for LLAMA? 🦙 embeddings are not normalized to unit length like the OpenAI embeddings, meaning they can represent both direction and magnitude.
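For context, here is a minimal numpy sketch (with made-up 2-D vectors; real embedding dimensions are much larger) of why this matters: cosine similarity discards magnitude, while a plain dot product keeps it, so unnormalized 🦙 vectors carry extra information unless they are normalized to unit length first.

```python
import numpy as np

# Hypothetical toy embeddings for illustration only.
llama_vec = np.array([3.0, 4.0])    # not unit length (norm = 5)
query = np.array([1.0, 0.0])

def cosine_similarity(a, b):
    # Compares direction only; magnitude is divided out.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def dot_similarity(a, b):
    # Keeps magnitude information as well as direction.
    return np.dot(a, b)

print(cosine_similarity(llama_vec, query))  # 0.6  (direction only)
print(dot_similarity(llama_vec, query))     # 3.0  (direction and magnitude)

# To treat these vectors like OpenAI's, normalize to unit length first:
unit = llama_vec / np.linalg.norm(llama_vec)
print(np.linalg.norm(unit))                 # 1.0
```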
I like the direction this is headed, but I like alpaca's a magnitude more.
I may be mistaken, but it seems like there are no examples using this with ChatGPT?
https://github.com/search?q=hyperdb+chatgpt&type=code&p=4