athiyadeviyani / evalMetrics

On the Robustness and Discriminative Power of Information Retrieval Metrics for Top-N Recommendation

Repository from GitHub: https://github.com/athiyadeviyani/evalMetrics

Evaluating IR Metrics for Top-N Recommendation

Source code of the experiments of:

Daniel Valcarce, Alejandro Bellogín, Javier Parapar, Pablo Castells: On the Robustness and Discriminative Power of IR Metrics for Top-N Recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys 2018, Vancouver, Canada, 2-7 October, 2018. DOI 10.1145/3240323.3240347.

Code

The code of the experiments can be found in the following Jupyter notebooks:

  • correlation-among-metrics.ipynb: measures ranking correlations among metrics.
  • discrimination-analysis.ipynb: measures discriminative power of metrics.
  • pop-correlation.ipynb: measures robustness to popularity bias of metrics.
  • sparse-correlation.ipynb: measures robustness to sparsity bias of metrics.

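As a hedged illustration of what the correlation notebook measures, the sketch below computes Kendall's τ between the rankings that two metrics induce over a set of recommenders. The system names and scores are invented for the example; in the actual notebooks the per-system scores would come from the rec_eval output in data.

```python
# Hypothetical illustration: ranking correlation between two metrics.
# The recommender names and scores below are made up; real values would
# be read from the rec_eval output files shipped in data/.
from scipy.stats import kendalltau

ndcg = {"sysA": 0.31, "sysB": 0.27, "sysC": 0.35, "sysD": 0.12}
prec = {"sysA": 0.22, "sysB": 0.25, "sysC": 0.29, "sysD": 0.10}

systems = sorted(ndcg)
tau, p_value = kendalltau([ndcg[s] for s in systems],
                          [prec[s] for s in systems])
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```

A τ close to 1 means the two metrics rank the recommenders almost identically, so they provide little complementary information when comparing systems.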
Data

We ran 21 recommender systems on three datasets (BeerAdvocate, LibraryThing and MovieLens 1M). The output of these recommenders was evaluated using the rec_eval tool. We also measured statistically significant improvements using a permutation test. The output of both tools can be found in data.
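The paired permutation test mentioned above can be sketched as follows. This is a generic illustration, not the repository's actual implementation: the per-user metric values are invented, and in practice they would come from the rec_eval output for two competing recommenders.

```python
# Hedged sketch of a paired (randomization) permutation test comparing
# two recommenders on the same users. The per-user scores are invented.
import random

random.seed(0)
scores_a = [0.30, 0.25, 0.40, 0.10, 0.35, 0.22, 0.28, 0.33]
scores_b = [0.28, 0.20, 0.31, 0.12, 0.30, 0.18, 0.25, 0.29]

n_users = len(scores_a)
observed = abs(sum(a - b for a, b in zip(scores_a, scores_b))) / n_users

n_perms, extreme = 10000, 0
for _ in range(n_perms):
    diff = 0.0
    for a, b in zip(scores_a, scores_b):
        # Under the null hypothesis the system labels are exchangeable,
        # so each user's pair of scores may be swapped at random.
        if random.random() < 0.5:
            a, b = b, a
        diff += a - b
    if abs(diff) / n_users >= observed:
        extreme += 1

p_value = extreme / n_perms
print(f"two-sided p = {p_value:.4f}")
```

A small p-value indicates that the observed difference in mean metric value between the two recommenders is unlikely under random relabeling.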

Author

The code was implemented by Daniel Valcarce of the Information Retrieval Lab of the University of A Coruña during his stay at the Information Retrieval Group of the Universidad Autónoma de Madrid. If you have any comments or questions, do not hesitate to write an email to daniel [DOT] valcarce [AT] udc [DOT] es.

About

On the Robustness and Discriminative Power of Information Retrieval Metrics for Top-N Recommendation

License: Apache License 2.0


Languages

Language: Jupyter Notebook 100.0%