serizba / cppflow

Run TensorFlow models in C++ without installation and without Bazel

Home Page: https://serizba.github.io/cppflow/

Transferring data on the GPU

Zhaoli2042 opened this issue · comments

Hi!

I find cppflow very useful; however, I have some small questions for now (I may have more in the future :D).

I can use cppflow in a CUDA/C++ program, and cppflow can find my GPUs.

Since the model is making the predictions on the GPU, and all my data is stored on the GPU, is there a way to let the model read data directly from the device without transferring and preparing the data on the host?

And I am having issues when I try to put a cppflow::model in a std::vector. The program is able to run and make correct predictions, but it generates a "Segmentation fault" when it exits. Is there a way to avoid this?

Thanks! I appreciate any advice you can give.

Hi @Zhaoli2042

Can you write here the code you are trying to run?

Hi @serizba ,

Thanks for your reply. Here is a simple example that I tested.
cppflow_cuda_example_ASK.tar.gz

I am using the NVIDIA HPC compiler (nvc++), version 22.5.

Hi @Zhaoli2042, were you able to make this work? I'm also interested in having a TF model sit inside a custom GPU pipeline.

I also have a similar issue (feeding input from the GPU) right now, and I'm also very interested.