haanjack / mnist-cudnn

CUDA for MNIST training/inference

How to run inference from saved parameters without the training procedure.

HangJie720 opened this issue · comments

Hello, I want to run inference from saved parameters without training.
When I comment out the code below, the inference result seems incorrect. I'm not sure how to run inference only.

while (step < num_steps_train)
{
    ...
}

The inference result is shown below, with load_pretrain set to true.

== MNIST training with CUDNN ==
[TRAIN]
loading ./dataset/train-images-idx3-ubyte
loaded 60000 items..
.. model Configuration ..
CUDA: conv1
CUDA: pool
CUDA: conv2
CUDA: pool
CUDA: dense1
CUDA: relu
CUDA: dense2
CUDA: softmax
[INFERENCE]
loading ./dataset/t10k-images-idx3-ubyte
loaded 10000 items..
loss:    0, accuracy: 8.5%
Done.

I find that to run inference only from saved parameters, I need to set load_pretrain=true and freeze_=false simultaneously, which is equivalent to calling model.train() during the inference phase.

Also, why is the loss printed as 0.000 during training?

== MNIST training with CUDNN ==
[TRAIN]
loading ./dataset/train-images-idx3-ubyte
loaded 60000 items..
.. model Configuration ..
CUDA: conv1
CUDA: pool
CUDA: conv2
CUDA: pool
CUDA: dense1
CUDA: relu
CUDA: dense2
CUDA: softmax
.. initialized conv1 layer ..
.. initialized conv2 layer ..
.. initialized dense1 layer ..
.. initialized dense2 layer ..
step:  200, loss: 0.000, accuracy: 77.051%
step:  400, loss: 0.000, accuracy: 95.852%
step:  600, loss: 0.000, accuracy: 96.299%
step:  800, loss: 0.000, accuracy: 96.305%
step: 1000, loss: 0.000, accuracy: 96.322%
step: 1200, loss: 0.000, accuracy: 96.336%
step: 1400, loss: 0.000, accuracy: 96.303%
step: 1600, loss: 0.000, accuracy: 96.301%
step: 1800, loss: 0.000, accuracy: 96.314%
step: 2000, loss: 0.000, accuracy: 96.312%
step: 2200, loss: 0.000, accuracy: 96.326%
step: 2400, loss: 0.000, accuracy: 96.320%
[INFERENCE]
loading ./dataset/t10k-images-idx3-ubyte
loaded 10000 items..
loss: 0.000, accuracy: 100.000%
Done.

Thank you for your contribution; this issue has been solved.