Akshay-Dongare / GPUvsCPU-TrainingTimeComparision

HPC Mini Project

CPU-vs-GPU-benchmark-on-MNIST

System: Intel Core i7-8550U (4 cores), 16 GB RAM, GeForce MX150 (2 GB), Windows 10, CUDA Toolkit 8.0.16, cuDNN 8.0, Python 3.5, Keras 2.1.2 with TensorFlow 1.4.0, Visual Studio 2015.

Training of neural networks can be accelerated by exploiting the parallelism of GPU computation. But by how much, for a typical small convolutional neural network (4 convolutional layers, one fully connected layer)? Let's find out! Here I compare the training duration of a CNN on CPU versus GPU for different batch sizes (see the Jupyter notebook in this repo). GPU load is monitored with an independent program (GPU-Z).
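
As a rough illustration, here is a minimal sketch of that kind of benchmark using the standalone Keras API (as in Keras 2.1.x with a TensorFlow backend). The exact layer widths and batch sizes below are illustrative assumptions, not copied from the notebook.

```python
import time

from keras.datasets import mnist
from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from keras.models import Sequential
from keras.utils import to_categorical


def build_cnn():
    """Small CNN: 4 convolutional layers followed by one fully connected layer."""
    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        Conv2D(32, (3, 3), activation='relu'),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D(pool_size=(2, 2)),
        Flatten(),
        Dense(10, activation='softmax'),  # the single fully connected layer
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model


# Load MNIST and bring it into the shape/range the network expects.
(x_train, y_train), _ = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
y_train = to_categorical(y_train, 10)

# Time one training epoch for several batch sizes.
for batch_size in (32, 128, 512, 1024):
    model = build_cnn()
    start = time.time()
    model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
    print('batch_size=%4d  ->  %.1f s per epoch' % (batch_size, time.time() - start))
```

Running this once on a machine where TensorFlow sees only the CPU, and once where it can use the GPU, yields the two timing curves compared below.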

Here's the result:

*(Plot: training time per epoch on CPU vs. GPU for the different batch sizes.)*

We can see that the GPU calculations with CUDA/cuDNN run faster by a factor of 4-6, depending on the batch size (larger batches benefit more).
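
If you want to reproduce the CPU/GPU comparison within a single session, one option (assuming the TensorFlow 1.x backend) is to pin the graph to a device explicitly. The sketch below reuses the hypothetical `build_cnn`, `x_train`, and `y_train` from the snippet above.

```python
import time

import tensorflow as tf

# Build and train the same model once per device; '/cpu:0' and '/gpu:0'
# are standard TensorFlow device strings.
for device in ('/cpu:0', '/gpu:0'):
    with tf.device(device):
        model = build_cnn()  # model factory from the sketch above
        start = time.time()
        model.fit(x_train, y_train, batch_size=512, epochs=1, verbose=0)
        elapsed = time.time() - start
    print('%s: %.1f s per epoch' % (device, elapsed))
```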

About

HPC Mini Project

License: GNU General Public License v3.0


Languages

Language: Jupyter Notebook 100.0%