oscarknagg / few-shot

Repository for few-shot learning machine learning projects


Why is the number of classes not the same in train and test?

marzi-heidari opened this issue

I used to believe that in k-way, n-shot few-shot learning, k and n (the number of classes and the number of samples per class, respectively) must be the same in the train and test phases. But you use different numbers in the train and test phases (60 classes for training and 5 for testing):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--dataset')
parser.add_argument('--distance', default='l2')
parser.add_argument('--n-train', default=1, type=int)
parser.add_argument('--n-test', default=1, type=int)
parser.add_argument('--k-train', default=60, type=int)
parser.add_argument('--k-test', default=5, type=int)
parser.add_argument('--q-train', default=5, type=int)
parser.add_argument('--q-test', default=1, type=int)

Are we allowed to do so?
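For concreteness, here is a minimal sketch (a hypothetical helper, not the repo's actual sampler) of how these arguments map onto episode construction: each training episode is a 60-way, 1-shot task with 5 query points per class, while each evaluation episode is a 5-way, 1-shot task with 1 query point per class.

import random

def sample_episode(indices_by_class, k, n, q):
    # Sample a k-way episode: n support and q query examples per class.
    # indices_by_class: dict mapping class label -> list of example indices.
    classes = random.sample(list(indices_by_class), k)
    support, query = [], []
    for c in classes:
        chosen = random.sample(indices_by_class[c], n + q)
        support += [(i, c) for i in chosen[:n]]
        query += [(i, c) for i in chosen[n:]]
    return support, query

# Training episodes:   k-train=60, n-train=1, q-train=5  ->  60-way, 1-shot
# Evaluation episodes: k-test=5,   n-test=1,  q-test=1   ->   5-way, 1-shot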

You can look at the paper https://arxiv.org/pdf/1703.05175.pdf, Section 2.6, which says:


Episode composition: A straightforward way to construct episodes, used in Vinyals et al. [29] and
Ravi and Larochelle [22], is to choose Nc classes and NS support points per class in order to match
the expected situation at test-time. That is, if we expect at test-time to perform 5-way classification
and 1-shot learning, then training episodes could be comprised of Nc = 5, NS = 1. We have found,
however, that it can be extremely beneficial to train with a higher Nc, or “way”, than will be used
at test-time. In our experiments, we tune the training Nc on a held-out validation set. Another
consideration is whether to match NS, or “shot”, at train and test-time. For prototypical networks,
we found that it is usually best to train and test with the same “shot” number.
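The reason this is possible at all is that a prototypical network's episode loss is defined over whatever classes happen to be in that episode: prototypes are recomputed from the support set every time, and queries are scored only against those prototypes. Here is a rough sketch of that loss (assuming precomputed embeddings; this is not the repo's code), which works unchanged for k = 60 at train time and k = 5 at test time:

import torch
import torch.nn.functional as F

def prototypical_loss(support, support_labels, query, query_labels, k):
    # support: (k * n, d) embeddings, query: (k * q, d) embeddings,
    # labels: integers in [0, k). Nothing here fixes k, so train and
    # test episodes may use a different "way".
    prototypes = torch.stack([support[support_labels == c].mean(0)
                              for c in range(k)])          # (k, d)
    distances = torch.cdist(query, prototypes) ** 2        # (k * q, k)
    # Softmax over negative distances = nearest-prototype classification.
    return F.cross_entropy(-distances, query_labels)

Training with a larger k just makes each episode a harder discrimination problem, which is why the paper finds a higher training "way" beneficial, while matching the "shot" keeps the prototype quality (how many support points are averaged) consistent between train and test.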

I still don't understand the principle behind this.