khanrc / swad

Official Implementation of SWAD (NeurIPS 2021)

Questions about oracle metric

zhihou7 opened this issue · comments

Dear authors,

I'm not familiar with Domain Generalization. In the reported table, you also provide oracle, iid, and last metrics. What do these metrics mean? Could you give some explanation?

Thanks in advance.

Regards,

Hi, the metrics indicate model selection methods in DomainBed paper.

(Image: model selection methods table from the DomainBed paper)

oracle uses a small held-out split of the test domain as the validation set (test-domain validation), while iid uses small held-out splits of the train domains as the validation set (training-domain validation). The best-performing model on the chosen validation set is selected as the final model, and its test-domain performance is reported as the DG performance (in the oracle and iid rows, respectively). last simply uses the last checkpoint as the final model. Note that oracle is actually a kind of cheating, because it accesses the target domain at training time, which is normally forbidden.
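For illustration, here is a minimal sketch (not the actual evaluation code in this repo) of how the three selection rules could be applied to a list of checkpoints, assuming each hypothetical checkpoint record stores validation accuracy on the training-domain splits, validation accuracy on the small test-domain split, and final test-domain accuracy:

```python
# Hypothetical checkpoint records collected during training (values are made up).
checkpoints = [
    {"step": 1000, "train_domain_val_acc": 0.81, "test_domain_val_acc": 0.62, "test_acc": 0.60},
    {"step": 2000, "train_domain_val_acc": 0.84, "test_domain_val_acc": 0.66, "test_acc": 0.65},
    {"step": 3000, "train_domain_val_acc": 0.83, "test_domain_val_acc": 0.64, "test_acc": 0.63},
]

# iid: select by validation splits of the training domains (no test-domain access).
iid_ckpt = max(checkpoints, key=lambda c: c["train_domain_val_acc"])

# oracle: select by the small validation split of the test domain (cheats w.r.t. strict DG).
oracle_ckpt = max(checkpoints, key=lambda c: c["test_domain_val_acc"])

# last: simply take the final checkpoint.
last_ckpt = checkpoints[-1]

# The reported number in each row is the selected checkpoint's test-domain accuracy.
print("iid:", iid_ckpt["test_acc"])
print("oracle:", oracle_ckpt["test_acc"])
print("last:", last_ckpt["test_acc"])
```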

It should also be noted that the three metrics are results of the base algorithm (e.g., ERM for the default run), not SWAD results. Since SWAD is a model-selection-free method, its results are reported only in the SWAD row. See the DomainBed paper (https://openreview.net/forum?id=lQdXeXDoWtI) for more details about model selection methods.

I get it.

Thanks for your quick reply.