
MoGA: Searching Beyond MobileNetV3


We propose the first Mobile GPU-Aware (MoGA) neural architecture search, tailored for real-world mobile applications. The ultimate objective in devising a mobile network is to achieve better performance while making the most of bounded resources. Therefore, while demanding higher accuracy and restraining inference time, we unconventionally encourage a larger number of parameters for higher representational power. These three objectives conflict, and we alleviate the tension with weighted evolution techniques. Finally, we deliver searched networks at mobile scale that outperform MobileNetV3 under similar latency constraints: MoGA-A achieves 75.9% top-1 accuracy on ImageNet, and MoGA-B reaches 75.5% while costing only 0.5 ms more on a mobile GPU than MobileNetV3, which scores 75.2%. MoGA-C best attests to GPU awareness, reaching 75.3% while being slower on CPU but faster on GPU.

MoGA Architectures

Requirements

Benchmarks on ImageNet

ImageNet Dataset

We use the standard ImageNet 2012 dataset; the only difference is that we reorganize the validation set into per-class subdirectories.

Evaluation

To evaluate,

python3 verify.py --model [MoGA_A|MoGA_B|MoGA_C] --device [cuda|cpu] --val-dataset-root [path/to/ILSVRC2012] --pretrained-path [path/to/pretrained_model]
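The script reports top-1 accuracy on the validation set. As a framework-independent sketch of what that metric computes (this helper is illustrative, not part of `verify.py`):

```python
def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class matches the label.

    `logits`: one list of per-class scores per sample;
    `labels`: the corresponding ground-truth class indices.
    """
    correct = sum(
        1
        for scores, y in zip(logits, labels)
        # argmax over the class scores for this sample
        if max(range(len(scores)), key=scores.__getitem__) == y
    )
    return correct / len(labels)
```

For example, `top1_accuracy([[0.1, 0.9], [0.8, 0.2]], [1, 0])` returns `1.0`, since both predictions match their labels.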

Citation

This repository accompanies the paper below; citations are welcome!

@inproceedings{chu2019moga,
    title={MoGA: Searching Beyond MobileNetV3},
    author={Chu, Xiangxiang and Zhang, Bo and Xu, Ruijun},
    booktitle={ICASSP},
    url={https://arxiv.org/pdf/1908.01314.pdf},
    year={2020}
}


