openai / gpt-2

Code for the paper "Language Models are Unsupervised Multitask Learners"

Home Page: https://openai.com/blog/better-language-models/

How to reproduce the reported F-score for the CoQA benchmark?

eirinistamatiou opened this issue

I have a question about how you evaluated GPT-2 on the CoQA dataset.
We are struggling to reproduce the result reported in the paper (55 F1): evaluating gpt2-xl from HuggingFace on CoQA, we only reach 28.7 F1.
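For reference, this is roughly the setup we used. It is a simplified sketch: the prompt format, greedy decoding, truncation length, and conditioning on the gold answer history are our own assumptions, not necessarily what the paper did.

```python
# Simplified sketch of a zero-shot CoQA run with gpt2-xl (HuggingFace).
# Prompt format, decoding, and gold-history conditioning are assumptions.
import json
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device).eval()

def answer(passage, history, question, max_new_tokens=15):
    # Condition on the passage plus the Q/A history and let the model
    # complete after a final "A:".
    prompt = passage + "\n"
    for q, a in history:
        prompt += f"Q: {q}\nA: {a}\n"
    prompt += f"Q: {question}\nA:"
    # Keep the tail of the prompt so prompt + generation fits in 1024 tokens.
    ids = tokenizer(prompt, return_tensors="pt").input_ids[:, -900:].to(device)
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=max_new_tokens,
                             do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    completion = tokenizer.decode(out[0, ids.shape[1]:])
    # Take the first line of the completion as the predicted answer.
    return completion.split("\n")[0].strip()

predictions = []
data = json.load(open("coqa-dev-v1.0.json"))["data"]
for doc in data:
    history = []
    for turn in doc["questions"]:
        q = turn["input_text"]
        gold = doc["answers"][turn["turn_id"] - 1]["input_text"]
        pred = answer(doc["story"], history, q)
        predictions.append({"id": doc["id"], "turn_id": turn["turn_id"],
                            "answer": pred})
        history.append((q, gold))  # assumption: gold answers in the history

json.dump(predictions, open("predictions.json", "w"))
```

The resulting `predictions.json` is then scored with the official evaluation script against the dev set.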

We used the official dev set and evaluation script, which we downloaded from here. Although the model's answers look reasonable, they receive low scores because of how the official CoQA evaluator matches predictions against the references. Did you evaluate it differently?
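To illustrate what we mean, here is a simplified version of the token-level F1 that SQuAD/CoQA-style evaluators compute. The real evaluate-v1.0.py also averages over multiple human references per turn and handles some special cases; this snippet only sketches the normalization and F1 to show how a verbose but correct answer is penalized on precision.

```python
# Simplified token-level F1 in the style of the CoQA/SQuAD evaluators.
import re
import string
from collections import Counter

def normalize(s):
    # Lowercase, strip punctuation and articles, collapse whitespace.
    s = s.lower()
    s = "".join(c for c in s if c not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def f1(prediction, reference):
    pred_toks = normalize(prediction).split()
    ref_toks = normalize(reference).split()
    common = Counter(pred_toks) & Counter(ref_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(ref_toks)
    return 2 * precision * recall / (precision + recall)

# A verbose answer that contains the reference still scores low (~0.29),
# because every extra token hurts precision.
print(f1("he went to the store yesterday afternoon", "the store"))
```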