Confusezius / Revisiting_Deep_Metric_Learning_PyTorch

(ICML 2020) This repo contains code for our paper "Revisiting Training Strategies and Generalization Performance in Deep Metric Learning" (https://arxiv.org/abs/2002.08473) to facilitate consistent research in the field of Deep Metric Learning.

The split of the Stanford Cars dataset.

ppanzx opened this issue · comments

Thank you for your commendable work. I have a question regarding the split of the Stanford Cars dataset, which comprises 16,185 images of 196 car models.

In most metric-learning literature, the dataset split is described as follows: "The first 98 classes (8,054 images) are used for training, and the remaining 98 classes (8,131 images) are held out for testing."

However, the Torchvision documentation describes a different split: "The data is split into 8,144 training images and 8,041 testing images, with an approximately 50-50 split for each class." This train/test split does not match the one commonly used in the metric-learning community.
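For reference, most DML codebases derive the split from the class labels rather than from the official train/test lists: the 196 classes are ordered, the first 98 form the training set, and the remaining 98 the test set. Below is a minimal sketch of that convention; it assumes the images are arranged in one subfolder per class (e.g. `cars196/images/<class_name>/*.jpg`), which is my assumption about the layout, not something taken from this repository.

```python
from pathlib import Path

def class_disjoint_split(image_root: str):
    """Split Cars196 into class-disjoint train/test sets.

    Assumes `image_root` contains one subfolder per class (196 in total).
    The first 98 sorted class folders go to training, the remaining 98 to
    testing, following the convention used in most metric-learning papers.
    """
    class_dirs = sorted(d for d in Path(image_root).iterdir() if d.is_dir())
    assert len(class_dirs) == 196, f"expected 196 classes, found {len(class_dirs)}"

    train_classes, test_classes = class_dirs[:98], class_dirs[98:]

    def collect(dirs):
        # Map class name -> sorted list of image paths in that class folder.
        return {d.name: sorted(str(p) for p in d.glob("*.jpg")) for d in dirs}

    return collect(train_classes), collect(test_classes)

if __name__ == "__main__":
    train, test = class_disjoint_split("cars196/images")
    print(sum(len(v) for v in train.values()), "train images")  # ~8,054 under this convention
    print(sum(len(v) for v in test.values()), "test images")    # ~8,131 under this convention
```

Under this class-disjoint convention the image counts come out to roughly 8,054 train / 8,131 test, matching the numbers quoted from the metric-learning literature above, whereas the Torchvision split mixes every class across train and test.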

Unfortunately, the official website is currently inaccessible, leaving me uncertain about the specific split used in this implementation.

Could you kindly provide me with a detailed split list (rather than the raw images) used in your implementation of the Stanford Cars dataset?

Thank you for your attention to this matter.