recommenders-team / recommenders

Best Practices on Recommendation Systems

Home Page: https://recommenders-team.github.io/recommenders/intro.html

[ASK] Perfect MAP@k is less than 1

daviddavo opened this issue

Description

I have a recommender where, for some users in some folds, there are fewer than $k$ items in the ground truth. Therefore, $precision@k$ is less than 1 even for a recommender that recommends exactly the ground truth. For that reason, I calculate, for several metrics, the results that a perfect recommender would obtain.

By definition, the perfect $ndcg@k$ is 1. I thought this was the case for $MAP@k$ too, but it is not: the average perfect $MAP@5$ across my folds is 0.99, and one fold even has a perfect $MAP@5$ of 0.7! I've also noticed that the perfect $MAP@k$ is exactly equal to $recall@k$, but I haven't found any resources that explain this coincidence.

Keep in mind that I'm talking about implicit feedback, and that the ideal recommender simply assigns a prediction of 1 to every ground-truth item.
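
To make this concrete, here is a minimal reproduction sketch (an illustration, not an example taken from the issue) using the pandas evaluators in `recommenders.evaluation.python_evaluation`: a single user with 12 ground-truth items and a perfect recommender that predicts 1 for exactly those items. The column names (`userID`, `itemID`, `prediction`) and keyword arguments are assumptions based on the library's defaults and may differ between versions.

```python
# Hypothetical reproduction sketch: one user with 12 relevant test items and a
# "perfect" recommender that predicts 1 for exactly those items. Column names
# and keyword arguments are assumed defaults and may differ per library version.
import pandas as pd

from recommenders.evaluation.python_evaluation import map_at_k, ndcg_at_k, recall_at_k

# Ground truth: a single user with 12 implicit-feedback interactions.
test = pd.DataFrame(
    {
        "userID": [1] * 12,
        "itemID": list(range(12)),
        "rating": [1] * 12,
    }
)

# Perfect recommender: it returns exactly the test items, all with prediction 1.
predictions = test.rename(columns={"rating": "prediction"})

for name, metric in [("MAP@5", map_at_k), ("NDCG@5", ndcg_at_k), ("Recall@5", recall_at_k)]:
    print(
        name,
        metric(
            test,
            predictions,
            col_user="userID",
            col_item="itemID",
            col_prediction="prediction",
            k=5,
        ),
    )

# With the normalization discussed in this thread, NDCG@5 is 1.0 while
# MAP@5 == Recall@5 == 5/12, even though the recommender is "perfect".
```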

Other Comments

I'll try to provide an example that causes this "issue".

For those who came here from Google:

I think it is easier to understand if we first explain $recall@k$:

If the number of relevant items is greater than $k$, then the recall can never reach 1, not even with a recommender that knows the test set. Say you have a single user with 12 interactions in the test set: with $k=5$ the maximum recall is $5/12$, because you only get 5 recommendations.
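
A tiny plain-Python sketch of that arithmetic (a standalone illustration, not the library's implementation):

```python
# recall@k for one user and a perfect recommender (standalone sketch, not the
# library code): every recommended item is relevant, but only k of the 12
# relevant items can fit in the recommendation list.
k = 5
relevant = set(range(12))           # 12 test interactions
recommended = sorted(relevant)[:k]  # even a perfect recommender returns only k items

hits = len(set(recommended) & relevant)
recall = hits / len(relevant)
print(recall)  # 5/12 ≈ 0.4167, the best recall@5 any recommender can reach here
```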

MAP sums the precision at every position where a relevant item appears and divides by the total number of relevant items. For a perfect recommender the precision at every recommended position is 1, so that average reduces to the fraction of relevant items recovered, which is exactly the recall. Therefore, the maximum $MAP@k$ achievable equals the maximum $recall@k$ achievable, which probably is not 1.
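
A short sketch of that argument in code, assuming average precision is normalized by the total number of relevant items (which is what makes the identity with recall hold):

```python
# Average precision at k, normalized by the total number of relevant items
# (a standalone sketch of the argument above, not the library implementation).
def average_precision_at_k(recommended, relevant, k):
    hits = 0
    precision_sum = 0.0
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant position
    return precision_sum / len(relevant)

relevant = set(range(12))
perfect_ranking = list(range(12))  # a perfect recommender ranks only relevant items
k = 5

ap = average_precision_at_k(perfect_ranking, relevant, k)
recall = len(set(perfect_ranking[:k]) & relevant) / len(relevant)
print(ap, recall)  # both 5/12: every precision term is 1, so AP@k == recall@k
```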