PotatoSpudowski / fastLLaMa

fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backend.

Home Page: https://potatospudowski.github.io/fastLLaMa/

Are Alpaca 13B and 30B tested?

imranraad07 opened this issue · comments

commented

I tried running it with the model path set from Hugging Face; it worked for the 7B version but didn't work for the 13B and 30B versions.

I will try and get it integrated tonight ;)

Hi,
I was able to get the 30B param model working.
13B should work fine too, and so should 65B (if someone releases it xD)

You can look at this branch
https://github.com/PotatoSpudowski/fastLLaMa/tree/alpaca-lora

You will have to follow the build steps and convert the model again.

The issue with LoRA models is their embedding size. Based on how the LoRA method works (it creates low-rank decomposition matrices and freezes the pretrained weights), I suspect that is why we get different embedding sizes compared to non-LoRA models.
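To make that concrete, here is a purely illustrative NumPy sketch of the LoRA idea (this is not fastLLaMa code; the shapes and rank are made up): the pretrained weight stays frozen while two small low-rank factors are trained and stored alongside it, so a LoRA checkpoint carries extra tensors beyond the base weights.

```python
import numpy as np

# Purely illustrative LoRA sketch (not fastLLaMa code; dims are hypothetical).
# The pretrained weight W is frozen; LoRA trains two small factors B and A
# whose product is a low-rank update with the same shape as W.
d_out, d_in, r = 4096, 4096, 8      # hypothetical layer dims and LoRA rank

W = np.zeros((d_out, d_in))         # frozen pretrained weight
B = np.zeros((d_out, r))            # trainable low-rank factor
A = np.zeros((r, d_in))             # trainable low-rank factor

W_adapted = W + B @ A               # effective weight used at inference
print(W_adapted.shape)              # (4096, 4096)
```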

Will need to sort out a few things before merging to main but feel free to use this and let me know if you face any issues :)

Merged to main.

The structure of fastLlama.Model() has been updated. Please change your code accordingly!

Hi @PotatoSpudowski. I was curious how Alpaca models are handled differently. For example, llama.cpp requires Alpaca models to be run with the n_parts and ins flags. Are those accounted for?
My C/C++ skills are not good enough to navigate your code.

Yup, that's why we require users to specify the ModelIdentifier when initialising the model.
Based on the identifier, we choose the config from the backend (which tells us about n_parts, vocab size, etc.). It is an underrated feature of fastLLaMa which, imo, is the right way to go about it.
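For reference, initialisation looks roughly like the sketch below. This is a hedged example based on the repo's example scripts; the exact argument names (id, path, num_threads) and the available identifier strings are assumptions and may have changed, so check the examples on main for the current signature.

```python
import fastLlama

# Sketch only: argument names and the identifier string are assumptions taken
# from the repo's example scripts and may differ on the current main branch.
MODEL_PATH = "./models/ALPACA-LORA-7B/ggml-model-q4_0.bin"

model = fastLlama.Model(
    id="ALPACA-LORA-7B",   # ModelIdentifier: picks n_parts, vocab size, etc. from the backend config
    path=MODEL_PATH,       # path to the converted, 4-bit quantized weights
    num_threads=8,         # threads used by the C/C++ backend
)
```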

The ins flag, if I am not mistaken, is supposed to specify that the model runs in instruction mode, right? Either way, we have example files for Alpaca and LLaMA models which show how to use them for either text completion or QnA tasks.
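As a rough idea of what those example files do, an instruction-style interaction might look like the sketch below (continuing from the model initialised above). The method and parameter names (ingest, generate, streaming_fn, etc.) mirror the example scripts but should be treated as assumptions, not a stable API.

```python
# `model` is the fastLlama.Model instance created in the earlier sketch.
# Method and parameter names below are assumptions, not a guaranteed API.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain 4-bit quantization in one sentence.\n\n"
    "### Response:\n"
)

model.ingest(prompt)                      # feed the prompt through the backend

def stream_token(token: str) -> None:     # called for each generated token
    print(token, end="", flush=True)

model.generate(
    num_tokens=128,
    top_p=0.95,
    temp=0.8,
    repeat_penalty=1.3,
    streaming_fn=stream_token,
)
```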

Finally, we are also working on redesigning our save and load feature and optimising it for latency and size in the feature/save_load branch. Extremely GOATED implementation!
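For what that workflow could look like from Python, here is a hedged sketch; save_state and load_state are hypothetical method names used purely for illustration, since the feature/save_load branch had not settled on a final API.

```python
# Hypothetical sketch of the save/load workflow (method names are assumptions,
# not the final API from the feature/save_load branch).
STATE_PATH = "./states/alpaca_session.bin"

model.save_state(STATE_PATH)    # persist the evaluated context to disk

# ...in a later session, restore instead of re-ingesting the whole prompt:
model.load_state(STATE_PATH)
```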

Developers should be able to implement their own workflows using features built from first-principles thinking, rather than us deciding workflows for them. Will document everything extensively so it is easier for everyone!!!