Rethinking the Value of Network Pruning

Rethinking the Value of Network Pruning (ICLR 2019). [arXiv] [OpenReview]

Paper Summary

Fig 1: A typical three-stage network pruning pipeline.

The paper shows that for structured pruning, training the pruned model from scratch can almost always achieve accuracy comparable to or higher than that of the model obtained from the typical "training, pruning and fine-tuning" pipeline (Fig. 1). The paper concludes that for these pruning methods:

  1. Training a large, over-parameterized model is often not necessary to obtain an efficient final model.
  2. Learned “important” weights of the large model are typically not useful for the small pruned model.
  3. The pruned architecture itself, rather than a set of inherited “important” weights, is what matters most for the efficiency of the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm.

The results suggest the need for more careful baseline evaluations in future research on structured pruning methods.

Fig 2: Difference between predefined and automatically discovered target architectures, in channel pruning. The pruning ratio x is user-specified, while a, b, c, d are determined by the pruning algorithm. Unstructured sparse pruning can also be viewed as automatic. The finding has different implications for predefined and automatic methods: for a predefined method, it is possible to skip the traditional "training, pruning and fine-tuning" pipeline and directly train the pruned model; for automatic methods, the pruning can be seen as a form of architecture learning.
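To make the distinction in Fig. 2 concrete, here is a minimal, illustrative sketch (not code from this repository) of the two settings on a toy four-layer convolutional network: a predefined method applies a user-specified pruning ratio x uniformly to every layer, while an automatic method produces per-layer widths a, b, c, d (the example values below are placeholders, not output of any real pruning run).

```python
# Illustrative sketch: predefined vs. automatically discovered target
# architectures for channel pruning on a toy 4-layer conv net.
import torch.nn as nn

base_widths = [64, 128, 256, 512]          # channel widths of the unpruned network

def build_convnet(widths, in_channels=3, num_classes=10):
    """Build a small conv net with the given per-layer channel widths."""
    layers, prev = [], in_channels
    for w in widths:
        layers += [nn.Conv2d(prev, w, kernel_size=3, padding=1),
                   nn.BatchNorm2d(w), nn.ReLU(inplace=True)]
        prev = w
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(prev, num_classes)]
    return nn.Sequential(*layers)

# Predefined: the user picks a single pruning ratio x, applied uniformly.
x = 0.5
predefined_widths = [int(w * (1 - x)) for w in base_widths]   # [32, 64, 128, 256]

# Automatic: the per-layer widths a, b, c, d come out of the pruning algorithm
# (e.g. Network Slimming); these numbers are placeholders for illustration only.
automatic_widths = [41, 77, 190, 250]

pruned_predefined = build_convnet(predefined_widths)
pruned_automatic = build_convnet(automatic_widths)
# Per the paper's finding, either pruned architecture can simply be trained
# from scratch instead of inheriting weights from the large model.
```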


The paper also compares with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and finds that with the optimal learning rate, the "winning ticket" initialization used in Frankle & Carbin (2019) does not bring improvement over random initialization. For more details, please refer to the paper.
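As an illustration of that comparison, below is a rough, self-contained sketch (not the code in cifar/lottery-ticket) of how the same unstructured magnitude-pruning mask can be paired with either the original "winning ticket" initialization or a fresh random one before retraining; the model, mask criterion, and sparsity level are arbitrary choices for the example.

```python
# Sketch of the winning-ticket vs. random-reinit comparison: one mask, two inits.
import copy
import torch
import torch.nn as nn

def magnitude_masks(model, sparsity=0.8):
    """Global magnitude pruning: keep the largest-|w| fraction of weights."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() > 1}

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
initial_state = copy.deepcopy(model.state_dict())   # saved before any training

# ... train `model`, then derive the mask from the trained weights ...
masks = magnitude_masks(model, sparsity=0.8)

# Winning-ticket init: rewind surviving weights to their original values.
ticket = copy.deepcopy(model)
ticket.load_state_dict(initial_state)

# Random reinit: same mask, freshly initialized weights.
reinit = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

for net in (ticket, reinit):
    for name, p in net.named_parameters():
        if name in masks:
            p.data.mul_(masks[name])   # zero out pruned weights before retraining
```

The paper's observation is that, once the learning rate is properly tuned, retraining `ticket` and `reinit` with the same mask yields comparable accuracy.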

Implementation on PyTorch

The paper evaluates the following seven pruning methods:

  1. L1-norm based channel pruning
  2. ThiNet
  3. Regression based feature reconstruction
  4. Network Slimming
  5. Sparse Structure Selection
  6. Soft filter pruning
  7. Unstructured weight-level pruning

The first six are structured pruning methods, while the last one is unstructured (sparse). For CIFAR, the code is based on pytorch-classification and network-slimming. For ImageNet, the code is based on the official PyTorch ImageNet training example. The instructions and models are in each subfolder; a simplified sketch of the first method's core idea is given below.
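For intuition, the following is a simplified sketch (not taken from this repository) of the idea behind L1-norm based channel pruning: a convolutional layer's filters are ranked by the L1 norm of their weights, the smallest ones are removed, and the resulting smaller architecture can then be fine-tuned or, per the paper, trained from scratch. The keep ratio and layer sizes here are arbitrary.

```python
# Sketch of L1-norm based filter ranking for a single Conv2d layer.
import torch
import torch.nn as nn

def prune_conv_by_l1(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Return a new Conv2d keeping only the filters with the largest L1 norms."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))      # one score per filter
    keep = torch.argsort(l1, descending=True)[:n_keep]
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
smaller = prune_conv_by_l1(conv, keep_ratio=0.5)   # 64 of 128 filters remain
# In a full network, the next layer's input channels must be pruned to match.
```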

For experiments on the Lottery Ticket Hypothesis, please refer to the folder cifar/lottery-ticket.

Implementation on MXNet/Gluon

