automl / NASLib

NASLib is a Neural Architecture Search (NAS) library that facilitates NAS research for the community by providing interfaces to several state-of-the-art NAS search spaces and optimizers.

Source of validation accuracy in zero-cost case

jr2021 opened this issue

In the zero-cost branch, the optimizers Npenas and Bananas query the validation accuracy of architectures from the zero-cost benchmark as follows:

model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']

The question is whether this supports the case where a user wants to use the ZeroCost predictor because their dataset or search space is not covered by the zero-cost benchmark.
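For concreteness, a minimal sketch of the failure mode, assuming zc_api behaves like a plain dict keyed by architecture hash (the entries below are made up for illustration):

# zc_api maps str(arch_hash) -> metrics recorded in the benchmark
zc_api = {"(0, 1, 2)": {"val_accuracy": 91.2}}

arch_hash = "(3, 4, 5)"  # an architecture outside the benchmark's coverage
zc_api[str(arch_hash)]["val_accuracy"]  # raises KeyError: no accuracy available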

If this is a case that we want to support, one option would be to introduce a parameter use_zc_api and use it as follows:

if self.use_zc_api:
    # Look up the precomputed validation accuracy in the zero-cost benchmark
    model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']
else:
    # Query the performance metric from the regular tabular benchmark API
    model.accuracy = model.arch.query(
        self.performance_metric, self.dataset, dataset_api=self.dataset_api
    )
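If the hash might be missing from the benchmark even when use_zc_api is set, a slightly more defensive variant could fall back automatically; the .get() fallback below is an illustration, not existing behaviour, with all names taken from the snippet above:

# Sketch: fall back to the dataset API when the flag is off or the
# architecture hash is absent from the zero-cost benchmark.
zc_entry = self.zc_api.get(str(model.arch_hash)) if self.use_zc_api else None
if zc_entry is not None:
    model.accuracy = zc_entry['val_accuracy']
else:
    model.accuracy = model.arch.query(
        self.performance_metric, self.dataset, dataset_api=self.dataset_api
    )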

The code was written this way for the Zero-Cost NAS paper, where we only consumed search spaces whose values were available in the zc_api. It would make more sense to give users the option to choose whether or not to query the zc_api, as you suggest.
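Concretely, the flag could be read from the experiment config when the optimizer is constructed; a minimal sketch, where the key name use_zc_api, its location under config.search, and the default are all assumptions rather than existing NASLib API:

# Sketch: plumb a user-facing flag through the optimizer constructor.
# Defaulting to True would preserve the current zero-cost-branch behaviour.
self.use_zc_api = getattr(config.search, "use_zc_api", True)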

Got it. Another sub-issue that came up is when to call query_zc_scores. The question is whether this function should only be called under the following condition:

if self.zc and len(self.train_data) <= self.max_zerocost:
    ...

Or is there a case where the zero-cost scores can be calculated after self.max_zerocost has been exceeded? We assume that this parameter refers to the maximum number of zero-cost evaluations, so presumably the answer is no. What do you think?
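Under that reading, the guard would simply stop producing zero-cost scores once the budget is spent; a sketch of the assumed semantics, where the query_zc_scores call site is hypothetical:

if self.zc and len(self.train_data) <= self.max_zerocost:
    # Within budget: compute zero-cost scores for the candidate.
    zc_scores = self.query_zc_scores(model.arch)  # hypothetical call site
else:
    # Budget exceeded: skip zero-cost scoring entirely.
    zc_scores = None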