wojciechmo / deep-compression

Compress neural networks with pruning and quantization using TensorFlow.

Deep compression

TensorFlow implementation of paper: Song Han, Huizi Mao, William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.

The goal is to compress a neural network using weight pruning and quantization, with no loss of accuracy.

Neural network architecture:

Test accuracy during training:

1. Full training.

Train for a number of iterations with gradient descent, adjusting all the weights in every layer.
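
A minimal sketch of this dense-training phase (not the repo's actual code; the two-layer classifier, layer sizes, and learning rate are placeholder assumptions):

```python
import tensorflow as tf

# Hypothetical two-layer classifier standing in for the repo's network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

def train_step(x, y):
    """One gradient-descent step that updates every weight in every layer."""
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```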

2. Pruning and finetuning.

Once in a while, remove weights whose magnitude falls below a threshold; in between, finetune the remaining weights to recover accuracy.
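
A sketch of the magnitude-pruning step (the helper name is hypothetical; the threshold is a free parameter here, e.g. a multiple of the layer's weight standard deviation):

```python
import numpy as np

def prune_below_threshold(weights, threshold):
    """Zero weights whose magnitude falls below the threshold; the binary
    mask marks the survivors and is kept for the finetuning phase."""
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# Finetuning step with the mask held fixed, so pruned weights stay zero:
#   weights -= learning_rate * gradient
#   weights *= mask
```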

3. Quantization and finetuning.

Quantization is done after pruning. Cluster the remaining weights using k-means, then finetune the cluster centroids of the quantized weights to recover accuracy. Each layer's weights are quantized independently.
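
A sketch of per-layer k-means quantization (scikit-learn is used for the clustering here as an assumption; the repo may implement it differently):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_layer(weights, mask, n_clusters=16):
    """Cluster one layer's surviving weights; every weight is then
    represented by the centroid of its cluster."""
    survivors = weights[mask.astype(bool)].reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(survivors)
    return km.cluster_centers_.ravel(), km.labels_

# Centroid finetuning: the gradients of all weights that share a cluster
# are summed, and that sum updates the shared centroid:
#   centroids[k] -= learning_rate * grads[labels == k].sum()
```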

4. Deployment.

Fully connected layers are implemented as a sparse matmul operation. TensorFlow does not support sparse convolutions, so convolution layers are explicitly transformed into sparse matrix operations, with full control over the valid (surviving) weights.
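
For the fully connected case, a sketch of feeding the surviving weights to TensorFlow's sparse matmul, assuming they have already been extracted into COO form; the layout and function name are assumptions, not the repo's exact code:

```python
import tensorflow as tf

def sparse_dense_layer(x, indices, values, dense_shape):
    """Fully connected layer as a sparse matmul: `indices`/`values`
    describe only the surviving (unpruned) entries of the weight
    matrix W of shape `dense_shape` = (out_features, in_features)."""
    w_sparse = tf.sparse.SparseTensor(indices, values, dense_shape)
    # Computes W @ x, so x is laid out as (in_features, batch).
    return tf.sparse.sparse_dense_matmul(w_sparse, x)
```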

Simple (input_depth=1, output_depth=1) convolution as matrix operation (notice padding type and stride value):

Full (input_depth>1, output_depth>1) convolution as matrix operation:
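
A numpy sketch of the simple (depth-1) construction, assuming VALID padding and stride 1; the full multi-channel case tiles this block across input channels (columns) and output channels (rows):

```python
import numpy as np

def conv_as_matrix(kernel, in_h, in_w):
    """Express a single-channel VALID, stride-1 convolution as a matrix.
    Each output pixel is one row; the kernel weights appear at the
    columns of its receptive field, zeros elsewhere, so after pruning
    the matrix is naturally sparse."""
    k_h, k_w = kernel.shape
    out_h, out_w = in_h - k_h + 1, in_w - k_w + 1
    m = np.zeros((out_h * out_w, in_h * in_w), dtype=kernel.dtype)
    for oy in range(out_h):
        for ox in range(out_w):
            row = oy * out_w + ox
            for ky in range(k_h):
                for kx in range(k_w):
                    m[row, (oy + ky) * in_w + (ox + kx)] = kernel[ky, kx]
    return m

# (m @ image.ravel()).reshape(out_h, out_w) matches the output of
# tf.nn.conv2d(image, kernel, strides=1, padding="VALID"), up to layout.
```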

I do not make efficient use of quantization during deployment. It would be possible with plain TensorFlow operations, but it would be very slow: for each output unit we would need to create N_clusters sparse tensors from the input data, reduce_sum each of them, multiply the sums by the cluster centroids, and add the results to obtain the output unit's value. Doing this efficiently requires writing a custom GPU kernel, which I intend to do in the future.
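
In numpy, the scheme just described looks roughly like this (illustration only; `labels_dense` is a hypothetical (out, in) array of per-weight cluster indices, -1 where pruned), which makes the per-output-unit, per-cluster reductions, and hence the slowness, explicit:

```python
import numpy as np

def quantized_matvec(centroids, labels_dense, x):
    """For every output unit, sum the inputs belonging to each cluster,
    then weight each sum by the shared centroid value: len(centroids)
    separate reductions per output unit."""
    y = np.zeros(labels_dense.shape[0], dtype=x.dtype)
    for o in range(labels_dense.shape[0]):
        for k in range(len(centroids)):
            y[o] += centroids[k] * x[labels_dense[o] == k].sum()
    return y  # equals W @ x with W[o, i] = centroids[labels_dense[o, i]]
```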
