baidu-research / DeepBench

Benchmarking Deep Learning operations on different hardware

Deepbench with Theano/Keras on

aditbhrgv opened this issue

Hello,

I want to use DeepBench for matrix multiplications and convolutions at low precision (8-bit, float16) with Theano and Keras. Could you please let me know how to integrate and use this library with Theano and Keras? What do I need to change in Theano and Keras? I am only doing inference, not training, at 8-bit etc., i.e. I don't need to compute backward propagation in low precision.

Setup: a CNN trained in Keras/Theano with FP32 weights, biases and activations, which is then quantized to 8-bit, 16-bit, etc. That means I have a quantized model. I want to run inference (the forward pass) via matrix multiplications of these 8-bit or 16-bit quantized weights, biases and activations (i.e. the matrices hold 8-bit values). This means I have to do 8-bit multiplies with 32-bit accumulation using DeepBench. I will use cuDNN v6 and CUDA 8.0 on an NVIDIA GTX 1080 Ti (Pascal architecture), which has support for DP4A.
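For reference, here is a minimal sketch (not DeepBench code) of what an 8-bit multiply with 32-bit accumulate looks like at the cuBLAS level, via cublasGemmEx, which is the kind of call DeepBench's NVIDIA GEMM benchmark times. The matrix sizes, the "TN" layout (A transposed, B not) and the dimensions being multiples of 4 are assumptions matching what the cuBLAS int8 path expects on a DP4A-capable GPU (compute capability 6.1, e.g. GTX 1080 Ti); error handling is reduced to asserts.

```cpp
// Sketch: int8 GEMM with int32 accumulation through cublasGemmEx (CUDA 8.0 era API).
#include <cassert>
#include <cstdint>
#include <vector>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int m = 128, n = 128, k = 128;      // multiples of 4 for the int8 path

    // Host data: 8-bit operands, 32-bit results.
    std::vector<int8_t>  hA(m * k, 1);        // A stored k x m (transposed by the op below)
    std::vector<int8_t>  hB(k * n, 2);        // B stored k x n
    std::vector<int32_t> hC(m * n, 0);

    int8_t *dA, *dB; int32_t *dC;
    cudaMalloc((void**)&dA, hA.size());
    cudaMalloc((void**)&dB, hB.size());
    cudaMalloc((void**)&dC, hC.size() * sizeof(int32_t));
    cudaMemcpy(dA, hA.data(), hA.size(), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), hB.size(), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    assert(cublasCreate(&handle) == CUBLAS_STATUS_SUCCESS);

    // 8-bit multiply, 32-bit accumulate: inputs are CUDA_R_8I, output and compute
    // type are CUDA_R_32I, so products are summed in int32 (DP4A on Pascal).
    // Newer toolkits spell the compute type CUBLAS_COMPUTE_32I instead.
    const int32_t alpha = 1, beta = 0;
    cublasStatus_t status = cublasGemmEx(
        handle, CUBLAS_OP_T, CUBLAS_OP_N, m, n, k,
        &alpha,
        dA, CUDA_R_8I, k,      // lda = k (A is stored transposed)
        dB, CUDA_R_8I, k,      // ldb = k
        &beta,
        dC, CUDA_R_32I, m,     // ldc = m
        CUDA_R_32I, CUBLAS_GEMM_DFALT);
    assert(status == CUBLAS_STATUS_SUCCESS);

    cudaMemcpy(hC.data(), dC, hC.size() * sizeof(int32_t), cudaMemcpyDeviceToHost);
    // Each entry of hC is now 1 * 2 summed over k (i.e. 256), held in int32.

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

Compiled with something like `nvcc gemm_int8.cu -lcublas`, this exercises the same 8-bit multiply / 32-bit accumulate path the question describes, independently of Theano or Keras.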

DeepBench is a benchmarking library for measuring the performance of the underlying neural network libraries and hardware used for training and inference. You could use neural network libraries such as cuDNN or MKL through deep learning frameworks like Theano or Keras. I suspect 8-bit multiplication for inference is already supported. You should read the Theano/Keras documentation to understand what precisions those libraries support.
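To illustrate what "measuring the underlying library" means in practice, below is a hypothetical DeepBench-style timing loop (not DeepBench's actual code) around the int8 GEMM call from the sketch above, using CUDA events. The problem shape and repetition count are illustrative assumptions, not DeepBench's kernel list; operand contents are left uninitialised since they do not affect timing.

```cpp
// Sketch: averaging repeated int8 GEMM calls with CUDA events, DeepBench-style.
#include <cstdint>
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int m = 1760, n = 128, k = 1760;    // multiples of 4, "TN" layout as above
    const int reps = 300;

    int8_t *dA, *dB; int32_t *dC;
    cudaMalloc((void**)&dA, (size_t)m * k);
    cudaMalloc((void**)&dB, (size_t)k * n);
    cudaMalloc((void**)&dC, (size_t)m * n * sizeof(int32_t));

    cublasHandle_t handle;
    cublasCreate(&handle);
    const int32_t alpha = 1, beta = 0;

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // One warm-up call, then time `reps` back-to-back calls on the default stream.
    for (int i = 0; i <= reps; ++i) {
        if (i == 1) cudaEventRecord(start);
        cublasGemmEx(handle, CUBLAS_OP_T, CUBLAS_OP_N, m, n, k, &alpha,
                     dA, CUDA_R_8I, k, dB, CUDA_R_8I, k, &beta,
                     dC, CUDA_R_32I, m, CUDA_R_32I, CUBLAS_GEMM_DFALT);
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float total_ms = 0.0f;
    cudaEventElapsedTime(&total_ms, start, stop);
    const double tops = 2.0 * m * n * k * reps / (total_ms * 1e-3) / 1e12;
    printf("int8 GEMM %dx%dx%d: %.3f ms/call, %.2f TOP/s\n",
           m, n, k, total_ms / reps, tops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```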