DPR evaluation in Evaluator
Timoeller opened this issue.
Currently, DPR is evaluated only with in-batch metrics.
Building on PR #715, we want to create a DPR evaluation that embeds a whole set of documents and retrieves n candidates based on their embedding similarity. We want to put this evaluation inside our Evaluator class, so that it is called every `evaluate_every` steps during training.
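As a rough illustration of the proposed evaluation (not the actual Evaluator implementation), the sketch below scores a query against a full document set by cosine similarity and computes recall@n over gold documents. The function names, the use of NumPy arrays as stand-ins for DPR embeddings, and the recall metric are all assumptions for this sketch:

```python
import numpy as np

def retrieve_top_n(query_emb, doc_embs, n=3):
    """Return indices of the n documents most similar to the query.

    Similarity is the dot product of L2-normalized embeddings
    (cosine similarity), ranked over the whole document set rather
    than only the in-batch negatives.
    """
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores)[:n]

def recall_at_n(query_embs, doc_embs, gold_indices, n=3):
    """Fraction of queries whose gold document appears in the top-n candidates."""
    hits = 0
    for q, gold in zip(query_embs, gold_indices):
        if gold in retrieve_top_n(q, doc_embs, n=n):
            hits += 1
    return hits / len(gold_indices)
```

Inside an Evaluator, a hook like this could be triggered every `evaluate_every` training steps, re-embedding the document set with the current model weights before computing the metric.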