dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.

Home Page: https://developer.nvidia.com/embedded/twodaystoademo


OpenCV Mat to jetson-utils image

gyillikci opened this issue

Hi, I am trying to map a cv::Mat to a CudaImage. I am confused because, unlike the Python docs, there is no documentation on the C++ image formats.
I am trying to achieve something like the following:

    cv::Mat img_vid1;
    uchar3 imageCUDA;

    if( CUDA_FAILED(cudaMemcpy(imageCUDA, (uchar3*)img_vid1.data, 1280 * 720 * sizeof(uchar3), cudaMemcpyDeviceToDevice)) )
    {
        return false;
    }

I would be very happy to get an idea.

Best

@gyillikci I believe since cv::Mat is CPU memory (not cv::GpuMat), it would need to be cudaMemcpyHostToDevice
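
As a side note, a possible alternative (a rough sketch, not from this thread, so verify the include path and overloads against your jetson-utils install): on Jetson, cudaAllocMapped() from cudaMappedMemory.h returns zero-copy memory that both the CPU and GPU can access, so a cv::Mat header can wrap the shared buffer directly and the cudaMemcpy step can be skipped:

    #include <jetson-utils/cudaMappedMemory.h>   // assumed install path for the header
    #include <opencv2/core.hpp>

    // allocate zero-copy memory for one 1280x720 uchar3 (RGB8) frame
    uchar3* sharedImage = NULL;

    if( !cudaAllocMapped((void**)&sharedImage, 1280 * 720 * sizeof(uchar3)) )
        return false;

    // wrap the shared buffer in a cv::Mat header (no copy); OpenCV writes straight into it.
    // note that OpenCV uses BGR channel order, so the channels may still need swapping.
    cv::Mat cvWrapper(720, 1280, CV_8UC3, (void*)sharedImage);

    // ... fill cvWrapper with OpenCV here ...

    output->Render(sharedImage, 1280, 720);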

Hi Dustin,

Thanks for the tip. It turned out that an additional cudaMalloc() is needed.

For the benefit of all, here is the snippet:

    uchar3* imageCUDA1 = NULL;   // can be uchar3, uchar4, float3, or float4

    // allocate a device buffer large enough for one 1280x720 frame
    CUDA_WARN(cudaMalloc((void**)&imageCUDA1, 1280 * 720 * sizeof(uchar3)));

    while(1)
    {
        // copy the current OpenCV frame (CPU memory) into the device buffer
        if( CUDA_FAILED(cudaMemcpy(imageCUDA1, img_vid1.data, 1280 * 720 * sizeof(uchar3), cudaMemcpyHostToDevice)) )
        {
            return false;
        }

        if( output != NULL )
        {
            output->Render(imageCUDA1, 1280, 720);

            if( !output->IsStreaming() )   // check if the user quit
                break;
        }
    }
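
Two things worth adding to the snippet above (treat the details as assumptions to check against your jetson-utils version): OpenCV stores 3-channel frames in BGR order while a uchar3 buffer is rendered as RGB8, so the colors will appear swapped unless the channels are converted, and the device buffers should be freed once the loop exits. jetson-utils has a GPU-side cudaConvertColor() helper in cudaColorspace.h that can handle the conversion, roughly like this:

    #include <jetson-utils/cudaColorspace.h>   // assumed install path for the header

    // second device buffer to hold the converted RGB frame
    uchar3* imageRGB = NULL;
    CUDA_WARN(cudaMalloc((void**)&imageRGB, 1280 * 720 * sizeof(uchar3)));

    // inside the loop, after the host-to-device copy of the BGR frame:
    if( CUDA_FAILED(cudaConvertColor(imageCUDA1, IMAGE_BGR8, imageRGB, IMAGE_RGB8, 1280, 720)) )
        return false;

    output->Render(imageRGB, 1280, 720);

    // after the loop exits, release the device buffers
    cudaFree(imageRGB);
    cudaFree(imageCUDA1);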