Chain of thought and annotation
ljvmiranda921 opened this issue
Lj Miranda commented
Maybe follow how the Google paper constructs its chain-of-thought prompts?
Current unknowns:
- What's the difference between CoT and few-shot? Is it that the former demonstrates a reasoning process while the latter just provides i.i.d. input–output examples?
- What's the benefit of CoT in annotation? Maybe there's an HCI angle? (find the decryzinski paper)
- Can I do this without using ChatGPT? E.g., with GPT-J, BLOOM, or some other open-source LLM?
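To make the first unknown concrete, here's a rough sketch of the contrast as prompt templates — the NER examples, labels, and template text are all made up for illustration:

```python
# Sketch: few-shot vs. chain-of-thought (CoT) prompting.
# Few-shot gives i.i.d. input -> output pairs; CoT additionally
# spells out the reasoning process before the answer.

FEW_SHOT_PROMPT = """\
Text: Barack Obama visited Paris.
Entities: Barack Obama (PER), Paris (LOC)

Text: Apple opened a store in Tokyo.
Entities: Apple (ORG), Tokyo (LOC)

Text: {text}
Entities:"""

COT_PROMPT = """\
Text: Barack Obama visited Paris.
Reasoning: "Barack Obama" names a person, so it is PER. \
"Paris" names a city, so it is LOC.
Entities: Barack Obama (PER), Paris (LOC)

Text: {text}
Reasoning:"""


def build_prompt(template: str, text: str) -> str:
    """Fill the {text} slot of a prompt template."""
    return template.format(text=text)
```

So the structural difference is just the extra `Reasoning:` field per exemplar; whether that helps annotators (or the model) is exactly the open question above.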
Lj Miranda commented
I remember an idea that annotator disagreements are themselves useful signals (of what, exactly?). Maybe there's something there.
Lj Miranda commented
How about, in the future, using LangChain to feed in an annotation guideline (ACE2004 / The Guardian), then having the LLM quote the exact guideline passage that justifies its label? That way you can kinda verify it.
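A minimal sketch of that verification idea, framework-free for now (no LangChain): the guideline snippet, prompt wording, and `verify_evidence` helper are all hypothetical, and the model call is left abstract so it could be ChatGPT, GPT-J, or BLOOM.

```python
# Sketch: ask the LLM to label a span AND quote the guideline passage
# it relied on, then check that the quote really appears verbatim in
# the guideline -- a cheap way to "kinda verify" the justification.

GUIDELINE = """\
PER: Use PER for names of people, including fictional characters.
ORG: Use ORG for companies, agencies, and institutions."""

PROMPT = """\
Annotation guideline:
{guideline}

Text: {text}
Label the entity "{span}" and quote the exact guideline sentence
that justifies your label, in the form:
Label: <label>
Evidence: <verbatim sentence from the guideline>"""


def verify_evidence(llm_output: str, guideline: str) -> bool:
    """Return True only if the quoted Evidence line occurs verbatim
    in the guideline text, i.e. the model didn't hallucinate it."""
    for line in llm_output.splitlines():
        if line.startswith("Evidence:"):
            quote = line[len("Evidence:"):].strip()
            return quote in guideline
    return False
```

The verbatim-substring check is deliberately strict: a paraphrased or invented "evidence" sentence fails, which is the point of grounding the label in the guideline.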