malllabiisc / RESIDE

EMNLP 2018: RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information

The evaluation P@N

CrisJk opened this issue · comments

I read your paper and the paper by Lin et al., 2016. I found that you both use P@N as an evaluation metric. I noticed that P@One refers to predicting from a single sentence sampled randomly from each entity pair's bag. However, this means each experiment may produce different results, so I am puzzled by this evaluation. Could you please help me understand it? Thank you very much!

Hi @CrisJk,
The metric has been used for evaluation in previous papers, so to compare against them we have reported our results on it.

Hi @svjan5 ,
Thank you for your reply. What confuses me is that each run of the experiment may give a different result, so I don't know how to compute the final number: should I take the average, the maximum, or something else?

Hi @CrisJk, I think averaging across 5-10 runs will give us a decent estimate of the value.
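For reference, here is a minimal sketch of what that procedure could look like: sample one sentence per entity-pair bag (P@One), rank the predictions by confidence, take precision over the top N, and average over several seeded runs. The `bags` structure and `score_fn` below are hypothetical placeholders, not taken from the RESIDE codebase.

```python
import random

def precision_at_n(bags, score_fn, n, seed):
    """P@One-style P@N: one randomly sampled sentence per bag.

    bags: list of (sentences, is_correct) pairs -- hypothetical structure,
          where is_correct marks whether the bag's relation label is right.
    score_fn: maps a sentence to a relation confidence score (assumed).
    """
    rng = random.Random(seed)
    # Sample a single sentence from each bag and score it.
    scored = [(score_fn(rng.choice(sentences)), is_correct)
              for sentences, is_correct in bags]
    # Rank by confidence and keep the top n predictions.
    top = sorted(scored, key=lambda x: x[0], reverse=True)[:n]
    return sum(is_correct for _, is_correct in top) / n

def averaged_precision_at_n(bags, score_fn, n, runs=10):
    # Average over several seeded runs to smooth out sampling randomness.
    return sum(precision_at_n(bags, score_fn, n, seed=i)
               for i in range(runs)) / runs
```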

@svjan5 Thank you!