peteanderson80 / coco-caption

Adds the SPICE metric to the COCO caption evaluation server code

Home Page: http://panderson.me/spice


Microsoft COCO Caption Evaluation

Evaluation code for MS COCO caption generation.

No longer maintained: the SPICE metric has been incorporated into the official COCO caption evaluation code.

Requirements

  • Java 1.8.0
  • Python 2.7

Files

./

  • cocoEvalCapDemo.py (demo script)

./annotation

  • captions_val2014.json (MS COCO 2014 caption validation set)
  • Visit the MS COCO download page for more details.

./results

  • captions_val2014_fakecap_results.json (fake results for running the demo; the expected format is sketched below)
  • Visit the MS COCO format page for more details.
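Each entry in a results file pairs an image_id from the annotation set with a single generated caption. A minimal sketch of writing such a file (the ids, captions, and output filename below are placeholders):

    import json

    # One object per image: the COCO image id and the generated caption.
    results = [
        {'image_id': 404464, 'caption': 'a man riding a wave on a surfboard'},
        {'image_id': 380932, 'caption': 'a group of people standing on a beach'},
    ]

    # Write the results in the format the evaluation code expects.
    with open('results/my_fake_results.json', 'w') as f:
        json.dump(results, f)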

./pycocoevalcap: The folder where all evaluation code is stored.

  • eval.py: Includes the COCOEvalCap class used to evaluate results on COCO (a usage sketch follows this list).
  • tokenizer: Python wrapper for the Stanford CoreNLP PTBTokenizer
  • bleu: BLEU evaluation code
  • meteor: METEOR evaluation code
  • rouge: ROUGE-L evaluation code
  • cider: CIDEr evaluation code
  • spice: SPICE evaluation code
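For reference, a minimal sketch of how these pieces fit together, mirroring the flow of cocoEvalCapDemo.py (file paths follow the layout above; the pycocotools COCO loader is assumed to be available alongside this repo):

    from pycocotools.coco import COCO
    from pycocoevalcap.eval import COCOEvalCap

    # Load the ground-truth annotations and the generated captions.
    coco = COCO('annotation/captions_val2014.json')
    cocoRes = coco.loadRes('results/captions_val2014_fakecap_results.json')

    # Restrict evaluation to the images that have results.
    cocoEval = COCOEvalCap(coco, cocoRes)
    cocoEval.params['image_id'] = cocoRes.getImgIds()

    # Runs tokenization plus BLEU, METEOR, ROUGE-L, CIDEr, and SPICE.
    cocoEval.evaluate()

    # Print the overall score for each metric.
    for metric, score in cocoEval.eval.items():
        print('%s: %.3f' % (metric, score))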

Setup

  • You will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE. To do this, run: ./get_stanford_models.sh
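After the download, the CoreNLP jars should be on disk before SPICE is run. A small sanity check, assuming the script places the jars under pycocoevalcap/spice/lib (this location is an assumption, not stated in this README):

    import os

    # Assumed download location of the CoreNLP jars fetched by
    # get_stanford_models.sh; adjust if the script uses another path.
    SPICE_LIB = os.path.join('pycocoevalcap', 'spice', 'lib')
    for jar in ('stanford-corenlp-3.6.0.jar',
                'stanford-corenlp-3.6.0-models.jar'):
        path = os.path.join(SPICE_LIB, jar)
        print('%s: %s' % (jar, 'found' if os.path.exists(path) else 'MISSING'))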

Developers

  • Xinlei Chen (CMU)
  • Hao Fang (University of Washington)
  • Tsung-Yi Lin (Cornell)
  • Ramakrishna Vedantam (Virginia Tech)

Acknowledgement

  • David Chiang (University of Notre Dame)
  • Michael Denkowski (CMU)
  • Alexander Rush (Harvard University)
