liyucheng09 / Selective_Context

Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40% memory and GPU time.


Loading data when reproducing the results?

rabi-fei opened this issue · comments

commented

Thank you for the useful work. However, I got confused when trying to reproduce the experiments: running the provided code with `python main.py` raises many errors. Could you give a more detailed explanation of how to load the data provided on the Hugging Face Hub into your code?
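For context, this is roughly how I am trying to load the data with the `datasets` library; the dataset identifier and split below are placeholders, since I am not sure which ones the code actually expects:

```python
from datasets import load_dataset

# Placeholder identifier and split: substitute the actual dataset name and
# split published for Selective_Context on the Hugging Face Hub.
dataset = load_dataset("username/dataset-name", split="test")

# Inspect the first example to see which fields main.py would need to consume.
print(dataset[0])
```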

Thank you for your time and support. I appreciate your attention to this matter and eagerly await your response.

Could you provide your error messages?

Closing as there has been no further interaction.