jolibrain / deepdetect

Deep Learning API and Server in C++14, with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, Tensorflow, XGBoost and TSNE

Home Page: https://www.deepdetect.com/


DeepDetect builds with `TENSORRT_OSS` require CUDA 10.2

beniz opened this issue · comments

Otherwise the build error is as follows:

deepdetect/build/tensorrt-oss/src/tensorrt-oss/samples/common/sampleDevice.h:189:56: error: ‘cudaStreamCaptureModeGlobal’ was not declared in this scope
         cudaCheck(cudaStreamBeginCapture(stream.get(), cudaStreamCaptureModeGlobal));
                                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
deepdetect/build/tensorrt-oss/src/tensorrt-oss/samples/common/sampleDevice.h:189:56: note: suggested alternative: ‘cudaStreamCaptureStatusNone’
         cudaCheck(cudaStreamBeginCapture(stream.get(), cudaStreamCaptureModeGlobal));
                                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        cudaStreamCaptureStatusNone

FTR this also appears here: triton-inference-server/server#635
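The missing symbol suggests the installed CUDA toolkit predates the two-argument `cudaStreamBeginCapture` signature that takes a `cudaStreamCaptureMode`. As a sketch only, upstream code could guard the call on the runtime version; the `10010` threshold (CUDA 10.1) is an assumption inferred from the error, not a verified fix:

```cpp
// Sketch: guard the capture call so toolkits lacking the
// cudaStreamCaptureMode enum can still compile this file.
#include <cuda_runtime_api.h>

#if CUDART_VERSION >= 10010  // assumed cutoff for the capture-mode enum
    cudaCheck(cudaStreamBeginCapture(stream.get(), cudaStreamCaptureModeGlobal));
#else
    // Older single-argument form, before the capture-mode parameter existed.
    cudaCheck(cudaStreamBeginCapture(stream.get()));
#endif
```

Since DeepDetect pulls TensorRT-OSS as an external project, patching its samples is awkward; failing early at configure time is the more practical route.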

Using CUDA 10.2 fixes the issue. A check for CUDA 10.2 in the CMake configuration would catch the problem much earlier, at configure time rather than deep into the build.
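Such a check could look like the sketch below. The `USE_TENSORRT_OSS` option name is an assumption based on the `TENSORRT_OSS` build flag mentioned in the title; the actual option in DeepDetect's CMakeLists.txt may differ:

```cmake
# Hypothetical guard: fail at configure time when TensorRT-OSS is
# requested with a CUDA toolkit older than 10.2.
if(USE_TENSORRT_OSS)
  find_package(CUDA REQUIRED)  # sets CUDA_VERSION
  if(CUDA_VERSION VERSION_LESS "10.2")
    message(FATAL_ERROR
      "TENSORRT_OSS builds require CUDA >= 10.2 (found ${CUDA_VERSION}); "
      "older toolkits do not declare cudaStreamCaptureModeGlobal.")
  endif()
endif()
```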

commented

According to the issue on triton-inference-server, it should work with CUDA 10.1, no?

Don't know; 10.1 is deprecated for us, and I don't believe we are using it anymore.