Which model is used in the paper to compute self-information?
XiaoFengbing opened this issue
Hi, thanks for your very interesting work on context compression!
I would like to know which model is used in the paper to compute self-information. For the LLaMA family, does the paper use meta-llama/Llama-2-7b (https://huggingface.co/meta-llama/Llama-2-7b)?
Another simple question: in this paper, do we only compress the context/demonstration/document (and the instruction, if present), while leaving the question/query uncompressed? In other words, do we feed everything in the prompt into the compressor except its question/query?
Thank you!
- For the LLaMA experiments in the paper, I think huggyllama/llama-7b is the right one (see the sketch below for how self-information can be computed with it).
- You're correct. In the paper, I only send the context/passage for compression. But you're free to try including the query in your experiments.
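
For reference, here is a minimal sketch of how per-token self-information can be computed with huggyllama/llama-7b via Hugging Face transformers. This is an illustrative reconstruction, not the paper's exact script; the helper name `self_information` and the example text are my own.

```python
# Minimal sketch: per-token self-information -log p(x_t | x_<t) with a causal LM.
# Assumes huggyllama/llama-7b as the scoring model (per the answer above); the
# helper name `self_information` is illustrative, not taken from the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

@torch.no_grad()
def self_information(text: str):
    # The LLaMA tokenizer prepends BOS, so the first real token is conditioned on it.
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    logits = model(ids).logits  # (1, seq_len, vocab)
    # Logits at position t predict token t+1, so shift targets by one.
    log_probs = torch.log_softmax(logits[:, :-1].float(), dim=-1)
    targets = ids[:, 1:]
    token_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    surprisal = -token_logp[0]  # self-information in nats, one per token
    tokens = tokenizer.convert_ids_to_tokens(targets[0].tolist())
    return list(zip(tokens, surprisal.tolist()))

for tok, si in self_information("Context compression keeps the informative tokens."):
    print(f"{tok}\t{si:.3f}")
```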
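
And a hypothetical sketch of the second point: only the context is compressed, while the query is passed through untouched. If I recall correctly, the paper filters lexical units (phrases/sentences) rather than raw tokens, so the token-level percentile filter here is a simplification; `compress_prompt` and `keep_ratio` are made-up names for illustration.

```python
# Hypothetical sketch: compress only the context, keep the query as-is.
# Token-level filtering is a simplification of the paper's lexical-unit approach;
# `compress_prompt` and `keep_ratio` are illustrative names, not from the repo.
def compress_prompt(context: str, query: str, keep_ratio: float = 0.5) -> str:
    scored = self_information(context)  # reuses the helper above
    ranked = sorted(si for _, si in scored)
    # Keep roughly the top keep_ratio fraction of tokens by self-information.
    cutoff = ranked[min(int(len(ranked) * (1.0 - keep_ratio)), len(ranked) - 1)]
    kept = [tok for tok, si in scored if si >= cutoff]
    compressed = tokenizer.convert_tokens_to_string(kept)
    return compressed + "\n\n" + query  # query stays uncompressed
```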