serizba / cppflow

Run TensorFlow models in C++ without installation and without Bazel

Home Page: https://serizba.github.io/cppflow/


Memory leak in cppflow::decode_jpeg()?

resetpointer opened this issue · comments

Hi, I am creating a cppflow tensor in two different ways. The first method uses cppflow::decode_jpeg() to create the tensor in GPU memory; the second builds the tensor manually in system memory. The first method causes a memory leak, whereas the second does not. The code is as follows:

#define USE_TENSORFLOW_API_GPU_API
#ifdef USE_TENSORFLOW_API_GPU_API
    input = cppflow::decode_jpeg(req->image().content());
    input = cppflow::expand_dims(input, 0);
#else
    std::vector<uint8_t> data;
    data.reserve(static_cast<size_t>(h) * w * 3);  // avoid repeated reallocation
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            QColor color = image.pixelColor(j, i);
            data.push_back(color.red());
            data.push_back(color.green());
            data.push_back(color.blue());
        }
    }

    input = cppflow::tensor(data, {1, h, w, 3});
#endif

I am not sure what causes the memory leak. Could someone comment on this issue?

I read in this thread that GPU memory allocated through the CUDA toolkit does not get de-allocated until the process terminates. Please advise if you know anything about this. Thanks.

@resetpointer Could you please provide a minimal compilable example so that we can reproduce the leak? It would also help if you could attach the JPEG image. In addition, which version of the TF C API did you use?

Please also clarify whether the leak happens on the CPU, the GPU, or both; I am a bit confused.