salesforce / factCC

Resources for the "Evaluating the Factual Consistency of Abstractive Text Summarization" paper

Home Page: https://arxiv.org/abs/1910.12840

Usage for consistency prediction

michelbotros opened this issue · comments

Hi,

I'm looking to use factCC to evaluate summaries produced by a summarization system.
I've seen the evaluation script (factCC-eval.sh), which reports eval_results (bacc, f1, and loss).

But is there a way to get the model's output for each input: a "consistent" or "inconsistent" label, together with the span that supports the claim (if the claim is correct) or the span that contradicts it (if the claim is incorrect)?

I could not find this in the Usage part of the readme.
I'd be happy to hear how to do this.

Regards,

Michel Botros

Hi Michel,
You have to edit the evaluation code to collect the model outputs. Then you can process them as you wish.

  • Wojciech

@muggin
I suppose that was the question you answered by suggesting to edit the evaluation script's code. Could you kindly point out what change needs to be made?
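
For anyone landing on this thread, here is a minimal sketch of one way to get per-example labels without editing the evaluation script: load a downloaded factCC checkpoint with the transformers library and classify (document, claim) pairs directly. The checkpoint path and the index-to-label mapping below are assumptions for illustration, not values taken from the repo, and this only produces the label, not the supporting or contradicting spans.

```python
# Minimal sketch (not the repository's own code): classify (document, claim)
# pairs with a downloaded factCC checkpoint via the transformers library.
# MODEL_DIR and the index-to-label mapping are assumptions for illustration.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_DIR = "factcc-checkpoint"          # hypothetical path to the unpacked checkpoint
LABELS = {0: "CORRECT", 1: "INCORRECT"}  # assumed mapping of class index to label

# factCC fine-tunes BERT-base uncased; adjust if your checkpoint bundles its own vocab.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def predict(document: str, claim: str) -> str:
    """Return the predicted consistency label for one (document, claim) pair."""
    inputs = tokenizer(document, claim, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict("The cat sat on the mat.", "A cat was on the mat."))
```

If you prefer to stay inside the repo's evaluation flow, the same idea applies: wherever the script turns logits into predictions before computing bacc/f1, write those predictions (and the corresponding example ids) to a file instead of discarding them.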