
MF-NAS: Multi-Fidelity Neural Architecture Search

MIT licensed

Quan Minh Phan, Ngoc Hoang Luong

In GECCO 2024.

Setup

  • Clone this repo
  • Install the necessary packages and databases:
$ cd MF-NAS
$ bash install.sh

Reproducing the results

This repo implements MF-NAS together with several baseline NAS algorithms (e.g., Random Search and Local Search, which are used in the ablation studies below).

Our experiments are conducted on NAS-Bench-101, NAS-Bench-201, and NAS-Bench-ASR search spaces.

The configurations of algorithms are set in configs/algo_101.yaml, configs/algo_201.yaml, and configs/algo_asr.yaml. The configurations of problems are set in configs/problem.yaml.

To reproduce all the main results in our paper, run the scripts below:

$ bash script/run_101.sh
$ bash script/run_201.sh
$ bash script/run_asr.sh

To reproduce the ablation studies, run the scripts below.

To experiment on NAS-Bench-201 with different zero-cost metrics:

$ bash script/run_201_with_different_zc_metrics.sh

To compare the impact of Random Search and Local Search on the performance of MF-NAS:

$ bash script/compare_RS_and_LS.sh

To replace val_acc with train_loss in MF-NAS:

$ bash script/replace_val_acc_with_train_loss.sh

Note that you can search with other metrics. However, the using_zc_metric and metric hyperparameters must be set so that they do not conflict with each other.

For example, if you use the synflow/jacov metrics as the search objective, you need to set using_zc_metric to True. If you use val_acc/train_loss as the search objective, you must set using_zc_metric to False.
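
To make the constraint concrete, here is a minimal Python sketch that validates an algorithm config before a run. It is illustrative only: the flat config layout and the training-based metric-name strings (other than val_acc and train_loss, which appear above) are assumptions, so check the actual keys in configs/algo_201.yaml before relying on it.

import yaml  # requires PyYAML

# Zero-cost (training-free) metric names, per the availability table below.
ZC_METRICS = {
    "jacov", "plain", "grasp", "fisher", "epe_nas", "grad_norm",
    "snip", "synflow", "l2_norm", "zen", "nwot", "params", "flops",
}
# Training-based metric names (val_acc/train_loss are from the scripts above;
# the remaining names are assumptions for illustration).
TRAINING_METRICS = {"train_acc", "val_acc", "train_loss", "val_loss", "val_per"}

def check_config(path):
    """Raise if the `metric` and `using_zc_metric` settings contradict each other."""
    with open(path) as f:
        cfg = yaml.safe_load(f)
    metric, using_zc = cfg["metric"], cfg["using_zc_metric"]
    if metric in ZC_METRICS and not using_zc:
        raise ValueError(f"{metric} is training-free: set using_zc_metric to True")
    if metric in TRAINING_METRICS and using_zc:
        raise ValueError(f"{metric} is training-based: set using_zc_metric to False")

check_config("configs/algo_201.yaml")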

The table below lists the performance metrics that are currently available for all networks in each search space.

| Metric              | Type           | NAS-Bench-101 | NAS-Bench-201 | NAS-Bench-ASR |
|:--------------------|:---------------|:-------------:|:-------------:|:-------------:|
| training accuracy   | training-based | ✔️            | ✔️            |               |
| validation accuracy | training-based | ✔️            | ✔️            |               |
| training loss       | training-based |               | ✔️            |               |
| validation loss     | training-based |               | ✔️            |               |
| validation PER      | training-based |               |               | ✔️            |
| jacov               | training-free  | ✔️            | ✔️            | ✔️            |
| plain               | training-free  |               | ✔️            | ✔️            |
| grasp               | training-free  |               | ✔️            | ✔️            |
| fisher              | training-free  | ✔️            | ✔️            | ✔️            |
| epe_nas             | training-free  |               | ✔️            |               |
| grad_norm           | training-free  | ✔️            | ✔️            | ✔️            |
| snip                | training-free  | ✔️            | ✔️            | ✔️            |
| synflow             | training-free  | ✔️            | ✔️            | ✔️            |
| l2_norm             | training-free  |               | ✔️            | ✔️            |
| zen                 | training-free  |               | ✔️            |               |
| nwot                | training-free  |               | ✔️            |               |
| params              | training-free  | ✔️            | ✔️            | ✔️            |
| flops               | training-free  | ✔️            | ✔️            | ✔️            |

All metrics are logged as .pickle and .json files.
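
As a quick illustration, a logged run could be inspected like this (the file names below are hypothetical; the actual paths depend on the run configuration):

import json
import pickle

# Hypothetical output files; substitute whatever your run produced.
with open("results/run_log.pickle", "rb") as f:
    history = pickle.load(f)  # full search history object
with open("results/run_log.json") as f:
    summary = json.load(f)    # human-readable summary

print(summary)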

Here are our best results (performance and search cost) for each search space. For NAS-Bench-ASR the figures are phoneme error rates (PER, lower is better); for the other benchmarks they are accuracies (%).

| Algorithm | NB-101 | NB-201 (CIFAR-10) | NB-201 (CIFAR-100) | NB-201 (ImageNet16-120) | NAS-Bench-ASR |
|:----------|:------:|:-----------------:|:------------------:|:-----------------------:|:-------------:|
| MF-NAS (synflow) | $93.82 \pm 0.56$<br>$12,960$ seconds ($368$ epochs) | $94.36 \pm 0.05$<br>$20,000$ seconds ($1,192$ epochs) | $73.51 \pm 0.00$<br>$40,000$ seconds ($1,192$ epochs) | $46.34 \pm 0.00$<br>$120,000$ seconds ($1,192$ epochs) | $21.77 \pm 0.00$<br>$300$ epochs |
| MF-NAS (params) | $93.89 \pm 0.25$<br>$14,088$ seconds ($368$ epochs) | $94.36 \pm 0.00$<br>$20,000$ seconds ($1,192$ epochs) | $73.51 \pm 0.00$<br>$40,000$ seconds ($1,192$ epochs) | $46.34 \pm 0.00$<br>$120,000$ seconds ($1,192$ epochs) | $21.81 \pm 0.26$<br>$300$ epochs |
| MF-NAS (FLOPS) | $93.88 \pm 0.25$<br>$14,055$ seconds ($368$ epochs) | $94.36 \pm 0.00$<br>$20,000$ seconds ($1,192$ epochs) | $73.51 \pm 0.00$<br>$40,000$ seconds ($1,192$ epochs) | $46.34 \pm 0.00$<br>$120,000$ seconds ($1,192$ epochs) | $21.78 \pm 0.36$<br>$300$ epochs |
| Optimal (benchmark) | $94.31$ | $94.37$ | $73.51$ | $47.31$ | $21.40$ |

Acknowledgement

We thank the authors of NAS-Bench-101, NAS-Bench-201, and NAS-Bench-ASR for their search spaces, and the authors of Zero-Cost Proxies for Lightweight NAS and NAS-Bench-Suite-Zero for their zero-cost metric databases.
