selkerdawy / joint-pruning-monodepth

Lightweight Monocular Depth Estimation Model by Joint End-to-End Filter pruning.


Demo

Sample video showing the pruned VGG model and the baseline Monodepth VGG model running on a GTX 1080 Ti at 60 and 33 frames per second, respectively. The demo is slowed down for visualization only. It shows that, even with a compression rate of more than 80%, the pruned network suffers only a small qualitative and quantitative drop in accuracy compared to the baseline network.

Inference

Sample inference code using the pruned VGG model trained on the Eigen split is provided. Usage:

python sample_code.py --dir PATH/TO/KITTI/2011_09_26/2011_09_26_drive_0064_sync/image_02/data/ --checkpoint_path model/model-0.data-00000-of-00001

Environment: a virtualenv is recommended; install the requirements from req.txt:

virtualenv -p python3 .env
source .env/bin/activate
pip install -r req.txt

Training code will be added soon.

Supplementary materials

Full depth metrics discarded from the original paper due to space constraints, details on the number of filters per layer in the pruned network, and a comparison between weight sparsity and mask sparsity.
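For reference, the depth metrics reported in the supplementary materials follow the standard Eigen evaluation protocol (abs rel, sq rel, RMSE, and the δ < 1.25ⁿ accuracy thresholds). The sketch below is a generic, dependency-free implementation of those metrics, not code taken from this repository; the function name and list-based interface are illustrative assumptions.

```python
import math

def depth_metrics(gt, pred):
    """Standard monodepth evaluation metrics over paired ground-truth
    and predicted depth values (plain Python lists of positive floats).
    Note: function name and interface are illustrative, not from this repo."""
    n = len(gt)
    abs_rel = sum(abs(g - p) / g for g, p in zip(gt, pred)) / n
    sq_rel = sum((g - p) ** 2 / g for g, p in zip(gt, pred)) / n
    rmse = math.sqrt(sum((g - p) ** 2 for g, p in zip(gt, pred)) / n)

    # delta accuracy: fraction of pixels whose ratio max(g/p, p/g)
    # falls below a threshold (1.25, 1.25^2, 1.25^3)
    def delta(thresh):
        return sum(1 for g, p in zip(gt, pred) if max(g / p, p / g) < thresh) / n

    return {
        "abs_rel": abs_rel,
        "sq_rel": sq_rel,
        "rmse": rmse,
        "a1": delta(1.25),
        "a2": delta(1.25 ** 2),
        "a3": delta(1.25 ** 3),
    }
```

These are the metrics commonly used to compare a pruned network against its baseline on the KITTI Eigen split.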

Cite

If you find this code useful in your research, please consider citing:

@inproceedings{elkerdawy2019lightweight,
  title={Lightweight monocular depth estimation model by joint end-to-end filter pruning},
  author={Elkerdawy, Sara and Zhang, Hong and Ray, Nilanjan},
  booktitle={2019 IEEE International Conference on Image Processing (ICIP)},
  pages={4290--4294},
  year={2019},
  organization={IEEE}
}
