Repositories under the llm-as-evaluator topic:
Repository for the survey of Bias and Fairness in Information Retrieval (IR) with LLMs.
Code and data for ACL ARR 2024 paper "Benchmarking Cognitive Biases in Large Language Models as Evaluators"