manyoso / haltt4llm

This project is an attempt to create a common metric for testing LLMs' progress in eliminating hallucinations, currently the most serious obstacle to widespread adoption of LLMs for many real purposes.


Issues running with gpt4-x-alpaca-native

MarkSchmidty opened this issue

This is a 13B full finetune, not a PEFT, using a large GPT-4 dataset on top of a previous full finetune of Alpaca (cleaned) 13B.

It can be found in GPTQ 4bit .pt format here: https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g/tree/main
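(For anyone else hitting the same wall: one way to load a GPTQ 4-bit .pt checkpoint like this is the AutoGPTQ library. This is only a sketch under assumptions the thread doesn't confirm: AutoGPTQ isn't what haltt4llm uses, the `model_basename` must match the actual .pt filename in the repo, and the quantization parameters are inferred from the repo name.)

```python
# Sketch: loading a GPTQ 4-bit .pt checkpoint with AutoGPTQ.
# Assumptions: model_basename matches the actual .pt file in the repo,
# and the quantization settings (4-bit, group size 128) come from the
# repo name, since the repo predates quantize_config.json.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

repo = "anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g"
quantize_config = BaseQuantizeConfig(bits=4, group_size=128)

model = AutoGPTQForCausalLM.from_quantized(
    repo,
    quantize_config=quantize_config,
    model_basename="gpt-x-alpaca-13b-native-4bit-128g",  # assumed; check the repo
    use_safetensors=False,  # the checkpoint is a .pt file
    device="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```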

I ran into a lot of issues trying to get it to work, but I figure @manyoso can probably easily swap it in for the Alpaca PEFT for a quick test and post the results. I'm curious to see how this one performs.

(Original gpt4-x-alpaca in 16bit can be found here: https://huggingface.co/chavinlo/gpt4-x-alpaca on the creator's HuggingFace)
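For context on why a full finetune and a PEFT need different handling: a full finetune is a complete checkpoint loaded directly, while the PEFT path loads a base model and then layers a LoRA adapter on top. A rough sketch with the transformers and peft libraries; the paths in the PEFT case are placeholders, not the harness's actual values:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Full finetune: the checkpoint is the whole model; no adapter step.
model = AutoModelForCausalLM.from_pretrained(
    "chavinlo/gpt4-x-alpaca",
    torch_dtype=torch.float16,
    device_map="auto",
)

# PEFT: load the base model first, then apply the LoRA adapter on top.
# Both paths below are placeholders for illustration only.
base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-13b-base",
    torch_dtype=torch.float16,
    device_map="auto",
)
peft_model = PeftModel.from_pretrained(base, "path/to/alpaca-lora-adapter")
```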

commented

What issues did you find? I can try to run it in the next week or so.

Nothing major. I tried renaming it to llama7b and launching it as llama7b, but this model needs to be prompted like Alpaca, and the Alpaca prompting path currently expects a PEFT. I'm not quite sure how to do that.
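For reference, the Alpaca instruction template (the no-input variant from the Stanford Alpaca repo) can be applied to any model directly, without going through a PEFT adapter. A rough sketch, reusing the `model` and `tokenizer` loaded in the earlier snippet:

```python
# The Alpaca instruction template (no-input variant). gpt4-x-alpaca was
# trained on this format, so prompting it as plain llama gives poor results.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Name the capital of France.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)

# Strip the prompt tokens before decoding so only the answer is shown.
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```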