CStanKonrad / long_llama

LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.

Code for zero-shot arXiv evaluation

bronyayang opened this issue

Hi,

Can you provide the code, or more detail on how you zero-shot evaluate the arXiv dataset?
I cannot get good results when trying arXiv summarization. I guess it is because I don't know the right prompt, or because the model size is not 7B?

Hi,

Thanks for your interest in our work! In our paper, the only results we report on arXiv are language modeling perplexity numbers for small models; we do not evaluate LongLLaMA on the arXiv summarization downstream task. Note that our model is not instruction-tuned, which means it cannot really do zero-shot summarization. You could try few-shot summarization (though I am not sure a 3B model can really do that), or prompt engineering to match the format of your target documents. Also, please stay tuned for the upcoming instruction-tuned models, which will definitely be able to do some summarization!
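
For reference, here is a minimal sketch of the few-shot approach suggested above, loading the checkpoint named in this repo's README (`syzymon/long_llama_3b`). The prompt template, the `<...>` placeholder texts, and the generation parameters are illustrative assumptions on my part, not an evaluated recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint name taken from this repo's README.
MODEL_PATH = "syzymon/long_llama_3b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float32, trust_remote_code=True
)

# Hypothetical few-shot template: two (article, summary) pairs followed by
# the target paper. Replace the <...> placeholders with real text; the exact
# format and number of shots are assumptions, not a tested recipe.
prompt = (
    "Article: <example article 1>\n"
    "Summary: <reference summary 1>\n\n"
    "Article: <example article 2>\n"
    "Summary: <reference summary 2>\n\n"
    "Article: <target arXiv paper>\n"
    "Summary:"
)

inputs = tokenizer(prompt, return_tensors="pt")
generated = model.generate(
    inputs.input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens, i.e. the model's summary attempt.
new_tokens = generated[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

With only a few shots, a 3B base model may still drift or copy from the prompt, so treat the output as a feasibility check rather than a benchmark number.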