OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

Unable to reproduce the comparison result between LLaMA-Adapter V1 and Alpaca

vicissitude1999 opened this issue

I used the provided trained weights of LLaMA-Adapter V1 and compared its performance with Alpaca. I was unable to obtain the same result as in Figure 6 of the LLaMA-Adapter V1 paper. As shown in the image below, there are a lot of ties.

For the Alpaca weights, I followed the official guide from https://huggingface.co/tatsu-lab/alpaca-7b-wdiff. Could you please detail the exact steps to reproduce Figure 6?

[Screenshot: head-to-head comparison between LLaMA-Adapter V1 and Alpaca, showing a large number of ties]

The generation parameters have a large impact on the results. In our setting, LLaMA-Adapter uses top_p=0.1 and temperature=0.75 for generation.
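
In case it helps reproduction, here is a minimal sketch of how those values plug into the generation call. It assumes a `generator` already built by the `load(...)` helper in this repo's example.py (a `llama.LLaMA` instance from the original LLaMA codebase); the `max_gen_len` value is an assumption, not something stated in the reply above.

```python
from typing import List

def generate_for_comparison(generator, prompts: List[str]) -> List[str]:
    """Generate responses with the sampling settings quoted above.

    `generator` is assumed to be the llama.LLaMA object returned by
    load(...) in example.py; only temperature and top_p come from the
    maintainers' reply.
    """
    return generator.generate(
        prompts,
        max_gen_len=512,   # assumption: a typical length cap, not stated in the reply
        temperature=0.75,  # value stated by the maintainers
        top_p=0.1,         # value stated by the maintainers; makes sampling near-greedy
    )
```

With top_p this low, nucleus sampling keeps only the highest-probability tokens, so outputs are close to deterministic; using different sampling settings for Alpaca and LLaMA-Adapter could plausibly explain the large number of ties in the comparison.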