baidu-research / DeepBench

Benchmarking Deep Learning operations on different hardware

TensorRT support for optimized inference results on Nvidia?

oscarbg opened this issue · comments

Hi,
I've been testing inference results on my GTX 970 card, and from compiling the NV benchmarks it seems the inference benchmarks also use cuDNN. Wouldn't the results be better if TensorRT were used? TensorRT is meant for optimized inference performance compared to cuDNN.
Thanks.

Thanks for this suggestion. We aren't planning to support TensorRT as of now. Happy to accept contributions from the community for this feature.

Thanks for the info.

@oscarbg If you are interested, Collective Knowledge supports TensorRT benchmarking (with some older TX1 results publicly available).