TensorRT support for optimized inference results on Nvidia?
oscarbg opened this issue · comments
Oscar Barenys commented
Hi,
I've been testing inference on my GTX 970, and from compiling the NV benchmarks it looks like the inference benchmarks also use cuDNN. Wouldn't the results be better if TensorRT were used? TensorRT is designed for optimized inference performance compared to cuDNN.
Thanks.
Sharan Narang commented
Thanks for this suggestion. We aren't planning to support TensorRT as of now. Happy to accept contributions from the community for this feature.
Oscar Barenys commented
Thanks for the info.
Anton Lokhmotov commented
@oscarbg If you are interested, Collective Knowledge supports TensorRT benchmarking (some older TX1 results are publicly available).