NVIDIA-AI-IOT / tf_trt_models

TensorFlow models accelerated with NVIDIA TensorRT

TensorRT Mismatch

akhilvasvani opened this issue

Linux distro and version: Ubuntu 18.04
GPU type: NVIDIA GeForce GTX 1080 Ti Founders Edition
NVIDIA driver version: 418.56
CUDA version: 10.0
cuDNN version: 7.4.1
Python version: 3.6.7
TensorFlow version: 1.13.1
TensorRT version: 5.1.2

Following the instructions in the TensorRT manual, I ran the frozen graph through TF-TRT (TensorFlow's TensorRT integration):

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Build a TensorRT-optimized copy of the frozen graph and write it to disk.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=[out.op.name for out in model.outputs],
    max_batch_size=1,
    max_workspace_size_bytes=2 << 20,
    precision_mode="fp16")
tf.train.write_graph(trt_graph, "model", "tfrt_model.pb", as_text=False)
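
For completeness, this is roughly how I intend to load the saved graph back for inference (a minimal sketch; the tensor names and input shape are placeholders, not the model's real ones):

import numpy as np
import tensorflow as tf

# Read the serialized TRT-optimized GraphDef back from disk.
with tf.gfile.GFile("model/tfrt_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# "input:0" / "output:0" are placeholder names; substitute the real
# input/output tensor names from the original model.
input_tensor = graph.get_tensor_by_name("input:0")
output_tensor = graph.get_tensor_by_name("output:0")

with tf.Session(graph=graph) as sess:
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder shape
    print(sess.run(output_tensor, feed_dict={input_tensor: dummy}))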

However, when I run the create_inference_graph call above, I get this warning:

WARNING:tensorflow:TensorRT mismatch. Compiled against version 5.0.2, but loaded 5.1.2. Things may not work.

What exactly is not lining up? Is my CUDA version too low? Is my cuDNN version too old as well? Does CUDA need to be upgraded to 10.1? Or should I downgrade TensorRT? Am I missing something? Any help would be greatly appreciated.
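
For reference, this is a minimal way I can check which TensorRT is actually installed on the machine (assuming the tensorrt Python bindings that ship with TensorRT are installed); the "compiled against" version, 5.0.2, is the one reported by the warning itself:

import tensorrt as trt

# Version of the TensorRT libraries/bindings installed on the system
# (the "loaded" version mentioned in the warning).
print(trt.__version__)

# The "compiled against" version is baked into the TensorFlow wheel and is
# only reported via the warning; on Ubuntu the installed libnvinfer packages
# can also be listed with: dpkg -l | grep nvinfer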