ImageCompression

This is the code for the paper "Learning Convolutional Networks for Content-weighted Image Compression".

This project is based on a modified version of the Caffe framework. Currently, we only offer the compiled pycaffe Python package for testing; the source code will be released later. The pycaffe package was compiled on Windows 10 with VS 2015 and CUDA 8.0, and is available at "https://drive.google.com/open?id=0B-XAj3Bp3YhHbU5pdTNfc3l5OGs". After downloading it, put it into your Python library path and make sure that "import caffe" works. We recommend using Anaconda as the Python environment and moving the "caffe" directory from the uncompressed folder into "AnacondaInstallPath/Lib/site-packages". After that, install the Python packages "numpy", "protobuf", "lmdb" and "cv2" before running the code, otherwise you may run into errors. These packages can be installed with pip or conda; for example, type "pip install numpy" in the command line to install numpy.
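As a quick sanity check of the environment, the following minimal sketch (assuming the "caffe" directory has been placed on the Python path as described above) verifies that the compiled pycaffe package and the required dependencies import correctly:

```python
# Quick environment check for the compiled pycaffe package and its dependencies.
# Assumes the "caffe" directory has been copied into the Python site-packages path.
import caffe             # compiled pycaffe package from the Google Drive link
import numpy             # array handling
import google.protobuf   # the "protobuf" package imports as google.protobuf
import lmdb              # LMDB database bindings
import cv2               # OpenCV Python bindings ("cv2" is provided by opencv-python)

print("Caffe and all required packages imported successfully.")
```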

Now, I will show you how to use the test code for our image compression model.

The file "test_imp.py" is used to test different models with different compression ratios. One model for one ratio. For each model, this file can generate the compressed image, calculate the PSNR metrics and put the compressed images under the directory "model/img". The importance map for each image is transformed as a black white picture and saved in the folder "model/img/imp".

The compression ratio of our model consists of two parts: one for the importance map and one for the binary codes. The file "create_lmdb_for_binary_codes.py" extracts the context of each binary code and stores the context cubes in an LMDB database, which is later used to estimate the hit/miss probabilities for arithmetic coding. This database should be created for each model before testing the compression ratio. The file "create_lmdb_for_imp_map.py" prepares the data for calculating the ratio of the importance map. After preparing the data, run "test_entropy_encoder.py" to calculate the compression ratios of the binary codes and the importance map. Finally, adding the importance map ratio and the binary code ratio gives the final ratio of our model, as sketched below.
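For reference, the steps above can be chained as follows. The script names are from this repository, but the invocation style and the placeholder rate values are only illustrative assumptions:

```python
# Illustrative sketch of the compression-ratio evaluation workflow.
# The script names exist in this repository; any command-line arguments they
# expect are not shown here and may need to be added.
import subprocess

# 1. Build the LMDB database of binary-code contexts for the chosen model.
subprocess.check_call(["python", "create_lmdb_for_binary_codes.py"])

# 2. Prepare the data needed for the importance-map ratio.
subprocess.check_call(["python", "create_lmdb_for_imp_map.py"])

# 3. Run the entropy encoder test to obtain both partial ratios.
subprocess.check_call(["python", "test_entropy_encoder.py"])

# 4. The final rate is the sum of the two parts (values below are placeholders).
imp_map_bpp = 0.10      # bits per pixel spent on the importance map
binary_code_bpp = 0.40  # bits per pixel spent on the binary codes
total_bpp = imp_map_bpp + binary_code_bpp
print("Total rate: %.3f bpp" % total_bpp)
```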

If you have problems testing our model, please contact me at "csmuli@comp.polyu.edu.hk".

If you use the code, please cite the paper "Learning Convolutional Networks for Content-weighted Image Compression".

@article{li2017learning,
  title={Learning Convolutional Networks for Content-weighted Image Compression},
  author={Li, Mu and Zuo, Wangmeng and Gu, Shuhang and Zhao, Debin and Zhang, David},
  journal={arXiv preprint arXiv:1703.10553},
  year={2017}
}
