Unable to reproduce the comparison between LLaMA-Adapter V1 and Alpaca
vicissitude1999 opened this issue · comments
vicissitude1999 commented
I used the provided trained weights of LLaMA-Adapter V1 and compared its performance with Alpaca. I wasn't able to get the same result as in Figure 6 of the LLaMA-Adapter V1 paper. As shown in the image below, my comparison produces a lot of ties.
For the Alpaca weights, I followed the official guide at https://huggingface.co/tatsu-lab/alpaca-7b-wdiff. Could you please detail the exact steps to reproduce Figure 6?
Jiaming Han commented
The generation parameters have a large impact on the results. In our setting, LLaMA-Adapter uses top_p=0.1 and temperature=0.75 for generation.
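For anyone unsure how these two parameters interact: temperature rescales the logits before softmax, and top-p (nucleus) sampling then keeps only the smallest set of tokens whose cumulative probability reaches `top_p`. A very low `top_p` such as 0.1 usually keeps only the single most likely token, making generation nearly greedy. Below is a minimal standalone sketch of that sampling step (not the repository's actual implementation; the function name and toy logits are illustrative):

```python
import math
import random


def sample_top_p(logits, temperature=0.75, top_p=0.1, rng=None):
    """Sample a token index using temperature scaling + nucleus (top-p) sampling."""
    rng = rng or random.Random(0)
    # Temperature-scaled softmax (subtract max for numerical stability).
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the highest-probability tokens until their cumulative mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the kept tokens and sample from them.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]


# With top_p=0.1, the dominant token's probability alone exceeds the
# threshold, so sampling is effectively deterministic here.
token = sample_top_p([5.0, 1.0, 0.0])
```

This is why the comparison is so sensitive to these settings: with `top_p=0.1` the outputs are close to greedy decoding, while the defaults in many generation scripts are much more stochastic.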