collin-burns / discovering_latent_knowledge

Discrepancy between accuracy from scripts and notebook.

fabrahman opened this issue · comments

Hi,

Thanks so much for putting this useful repo together.

I noticed that running the scripts (generate and evaluate) on amazon_polarity with deberta gives random-chance accuracy for both LR and CCS, while this is not the case with the notebook.
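For context on what "random chance" means here: since CCS is unsupervised, the paper evaluates with a symmetric accuracy, averaging the two contrast-pair confidences and taking max(acc, 1 - acc) because the probe's direction is arbitrary. A minimal numpy sketch of that metric (function and variable names are illustrative, not the repo's actual API):

```python
import numpy as np

def ccs_style_accuracy(p0, p1, y):
    """Symmetric accuracy for an unsupervised probe (CCS-style).

    p0: probe outputs on the "false"-completion contrast prompts
    p1: probe outputs on the "true"-completion contrast prompts
    y:  gold labels (0/1), used only for evaluation
    """
    # Average confidence that the statement is true.
    avg_confidence = 0.5 * (p1 + (1 - p0))
    preds = (avg_confidence > 0.5).astype(int)
    acc = (preds == y).mean()
    # The probe direction is unsupervised, so the flip is free.
    return max(acc, 1 - acc)

# A perfectly anti-correlated probe still scores 1.0 after the flip,
# so ~0.5 really does mean the probe found nothing.
y = np.array([0, 1, 1, 0])
p1 = 1.0 - y.astype(float)  # probe is "backwards"
p0 = y.astype(float)
print(ccs_style_accuracy(p0, p1, y))  # 1.0
```

So an accuracy stuck near 0.5 can't be explained by a flipped sign; it suggests the extracted hidden states carry no usable truth signal, which is why a prompt-formatting difference between the scripts and the notebook is a plausible culprit.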

Do you think this is due to the prompt format? Your default prompt idx is 0, but I also tried 2.

I would appreciate any pointers.