CIDEr
dr-costas opened this issue · comments
Hi,
Is the CIDEr metric calculated with this package CIDEr or CIDEr-D?
Thanks!
The metric calculated here is CIDEr-D. If you need the original CIDEr code, the following repo contains implementations of both CIDEr and CIDEr-D:
https://github.com/vrama91/cider
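For intuition, here is a rough, simplified sketch of what a CIDEr-style score computes: the candidate and each reference are represented as n-gram count vectors and compared by cosine similarity, averaged over n = 1..4. This toy version omits the corpus-level TF-IDF weighting (and CIDEr-D's length penalty and clipping) that the real implementation uses, so treat it as an illustration only; the function name and tokenization are made up for this sketch.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def cosine(c1, c2):
    """Cosine similarity between two sparse count vectors (Counters)."""
    num = sum(c1[g] * c2[g] for g in c1)
    norm1 = sum(v * v for v in c1.values()) ** 0.5
    norm2 = sum(v * v for v in c2.values()) ** 0.5
    return num / (norm1 * norm2) if norm1 and norm2 else 0.0

def cider_sketch(candidate, references, max_n=4):
    """Toy CIDEr-style score: per-n cosine similarity averaged over the
    references and over n = 1..max_n. The real metric additionally
    weights n-grams by corpus-level TF-IDF."""
    cand_tokens = candidate.lower().split()
    score = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand_tokens, n))
        per_ref = [cosine(cand_counts, Counter(ngrams(r.lower().split(), n)))
                   for r in references]
        score += sum(per_ref) / len(per_ref)
    return 10.0 * score / max_n  # CIDEr scores are conventionally scaled by 10

# An identical candidate and reference score the maximum of 10.0.
print(cider_sketch("a dog runs on the grass",
                   ["a dog runs on the grass"]))  # → 10.0
```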
Is this correct in "cider.py"?
def compute_score(...):
    ...
    for id in imgIds:
        hypo = res[id]
        ref = gts[id]
I find the hypo/ref naming confusing.
Hi! Yes, hypo stands for hypotheses, which are the results from the methods we want to evaluate, and ref stands for references, which are the ground-truth sentences we compare against.
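To make that concrete, here is a minimal sketch of the dict layout the loop above consumes; the image id and captions are made up for illustration. Both `res` and `gts` map image ids to lists of sentences: one generated hypothesis per image, and one or more ground-truth references.

```python
# Hypothetical inputs mirroring the loop in cider.py.
# res: model outputs (hypotheses); gts: ground-truth captions (references).
res = {"img1": ["a dog runs on the grass"]}
gts = {"img1": ["a dog is running across a field",
                "a brown dog runs through the grass"]}

for img_id in res:
    hypo = res[img_id]  # the generated caption(s) being evaluated
    ref = gts[img_id]   # the reference captions compared against
    # One hypothesis per image, at least one reference.
    assert isinstance(hypo, list) and len(hypo) == 1
    assert isinstance(ref, list) and len(ref) >= 1
```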
Thanks! I was just a little confused.