Bit-slice Sparsity

This repo contains the code for our preliminary study "Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment" (NeurIPS'19 EMC2 workshop) [paper][poster][presentation], which aims at improving bit-slice sparsity for efficient ReRAM-based deployment of DNNs. The code is tested with PyTorch 1.2.0 and Python 3.7.
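For context, bit-slice sparsity counts zeros at the granularity of bit slices rather than whole weights: a fixed-point weight is split into small groups of bits (e.g. 2 bits, roughly the precision of a single ReRAM cell), and a slice contributes sparsity when all of its bits are zero. The following is a minimal sketch of that decomposition; bitslice_sparsity is a hypothetical helper for illustration, not a function from this repo, and it assumes unsigned 8-bit magnitudes split into 2-bit slices.

import torch

def bitslice_sparsity(weight, total_bits=8, slice_bits=2):
    """Hypothetical helper: decompose a tensor into bit slices and
    report the fraction of slices that are entirely zero."""
    # Map magnitudes to non-negative fixed-point integers
    # (assumes the sign is handled separately).
    scale = (2 ** total_bits - 1) / weight.abs().max()
    q = (weight.abs() * scale).round().long()

    zeros, total = 0, 0
    for shift in range(0, total_bits, slice_bits):
        # Extract one slice (2 bits -> values 0..3), matching the
        # precision of a single ReRAM cell.
        s = (q >> shift) & ((1 << slice_bits) - 1)
        zeros += (s == 0).sum().item()
        total += s.numel()
    return zeros / total

w = torch.randn(64, 64)
print(f"zero-slice ratio: {bitslice_sparsity(w):.3f}")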

The code for MNIST and CIFAR-10 is in mnist/ and cifar/, respectively. The training routine consists of three parts: pre-training, pruning, and fine-tuning.

First, pre-train a fixed-point model:

python pretrain.py
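Here, "fixed-point" means the weights are quantized during training (the repo builds on nics_fix_pytorch for this). As a rough, hypothetical illustration of the idea rather than the actual contents of pretrain.py, a straight-through-estimator rounding step looks like:

import torch

class FixedPointQuant(torch.autograd.Function):
    """Illustrative sketch: round to n-bit fixed point in the forward
    pass; pass gradients straight through in the backward pass."""
    @staticmethod
    def forward(ctx, x, bits=8):
        # Assumes weights roughly in [-1, 1]; clamp to the symmetric
        # fixed-point range before rescaling.
        scale = 2 ** (bits - 1) - 1
        return torch.clamp((x * scale).round(), -scale, scale) / scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradient w.r.t. x only.
        return grad_output, None

# Usage inside a layer's forward pass:
# w_q = FixedPointQuant.apply(self.weight)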

Then, load and prune the pre-trained model, and fine-tune with either standard L1 regularization:

python finetune_l1.py

or with bit-slice L1 regularization:

python finetune_bitslice.py
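The two scripts differ in the regularization term added to the training loss: plain L1 pushes whole weight values toward zero, while bit-slice L1 penalizes each bit slice so that individual slices are driven to zero even when the weight itself is not. The sketch below only illustrates that distinction under the same 8-bit/2-bit assumptions as above; it is not the repo's implementation, and in real training the rounding and shifting would need a straight-through estimator to stay differentiable.

import torch

def l1_reg(weights):
    # Plain L1: penalizes whole weight magnitudes.
    return sum(w.abs().sum() for w in weights)

def bitslice_l1_reg(weights, total_bits=8, slice_bits=2):
    # Bit-slice L1 (illustrative): penalize each slice's value so
    # individual slices, not just whole weights, go to zero.
    # Assumes weights are already scaled to [-1, 1]; the rounding
    # here blocks gradients, which an STE would restore.
    reg = 0.0
    scale = 2 ** total_bits - 1
    for w in weights:
        q = (w.abs() * scale).round().long()
        for shift in range(0, total_bits, slice_bits):
            s = (q >> shift) & ((1 << slice_bits) - 1)
            reg = reg + s.float().sum()
    return reg

# loss = task_loss + lam * bitslice_l1_reg(model.parameters())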

The scripts expose several arguments with default values; you may want to inspect and adjust them for your own runs.

Acknowledgement

The code is adapted from nics_fix_pytorch.
