melodyguan / enas

TensorFlow Code for paper "Efficient Neural Architecture Search via Parameter Sharing"

Home Page: https://arxiv.org/abs/1802.03268


Understanding of the output of the NAS

JiahaoYao opened this issue · comments

Here is my question: after training, what should the output of the NAS be? Is it the best architecture found during training, or the last architecture produced by the model?

In my opinion, you can select the best one. The authors chose the model from the last epoch because they wanted a fair comparison with NAS.

Hey, I have a problem understanding the final discovered architecture. I can see there is a directory called output, and some meta, data, and index files are saved in there. However, none of them is the discovered architecture. Does anybody have any idea where this optimal architecture is saved?

I can answer for the macro search. In the output directory, as you say, everything you want is in the stdout file. At each epoch, ENAS logs 10 architectures with their corresponding validation accuracy (computed on the validation set). What I do is simply search for the best validation accuracy and take the corresponding architecture.

To remind you, an architecture looks like this:

[1]
[4 0]
[2 0 0]
[0 0 0 0]
[4 0 0 0 0]
[2 1 0 1 1 0]
[0 0 1 1 0 0 1]
[2 0 0 1 0 1 0 0]
[4 0 0 0 0 1 1 1 0]
[0 0 0 0 0 0 0 0 0 1]
[5 1 0 0 0 1 1 0 1 1 0]
[0 0 1 0 0 0 0 1 1 0 0 0]
val_acc=0.9062
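For context, each row above describes one layer: the first integer selects that layer's operation, and the remaining integers are skip-connection flags to earlier layers (1 = connect). A minimal decoding sketch, assuming the six operation indices used by ENAS's macro child network (conv 3x3, separable conv 3x3, conv 5x5, separable conv 5x5, average pooling, max pooling; this mapping is my reading of the repo and may differ in other configurations):

```python
# Decode one ENAS macro-search architecture into a readable form.
# NOTE: the operation names below are an assumption based on the six
# branches of the macro child network; verify against your config.
OPS = [
    "conv 3x3", "separable conv 3x3",
    "conv 5x5", "separable conv 5x5",
    "avg pool", "max pool",
]

def decode_arch(rows):
    """rows: list of int lists, e.g. [[1], [4, 0], [2, 0, 0], ...]."""
    layers = []
    for i, row in enumerate(rows):
        op = OPS[row[0]]
        # Indices of earlier layers this layer takes a skip connection from.
        skips = [j for j, flag in enumerate(row[1:]) if flag == 1]
        layers.append({"layer": i, "op": op, "skip_from": skips})
    return layers

arch = [[1], [4, 0], [2, 0, 0], [0, 0, 0, 0]]
for layer in decode_arch(arch):
    print(layer)
```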

If it is of any help, I made a script for analyzing the results of ENAS: https://gitlab.com/ElieKadoche/enas_game_of_go/blob/master/outputs_saved/graph_maker_script.py. It will create a picture like this one:
(Graph: CIFAR-10 macro search from stdout, 310 epochs, GTX 1080, 12 layers.)
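The "search for the best validation accuracy" step described above can also be scripted. A minimal sketch, assuming the stdout log alternates bracketed architecture rows with `val_acc=...` lines as in the excerpt above (the exact log format may vary between ENAS versions, so adjust the parsing accordingly):

```python
import re

def best_architecture(log_path):
    """Scan an ENAS stdout log for the architecture with the highest
    val_acc. Assumes each `val_acc=<float>` line is immediately
    preceded by the bracketed rows describing that architecture."""
    best_acc, best_arch, current = -1.0, None, []
    with open(log_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("["):
                # Accumulate the rows of the current architecture.
                current.append(line)
            else:
                m = re.search(r"val_acc=([0-9.]+)", line)
                if m:
                    acc = float(m.group(1))
                    if acc > best_acc:
                        best_acc, best_arch = acc, list(current)
                    current = []
                elif current:
                    # Any other line ends the current architecture block.
                    current = []
    return best_acc, best_arch
```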

Hey, your answer was quite helpful. I have one more question. The macro-search script discovers the best architecture, and the discovered architectures have 12 layers, but when they run the macro-final script to train the model, the architecture has 24 layers. Why did they change the number of layers, and how do they find the best operations and connections for layers 13 to 24? Also, they do not use pooling (neither max nor average) during training, which contradicts the discovered architecture.