How to run inference with your model without a GPU
codebugged opened this issue
- I do not have GPU infrastructure.
- I would like to run inference with your model on my local system.
- Is it possible to run it without a GPU? If yes, what approach should I try? (A rough sketch of what I have in mind is below.)
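For context, here is roughly what I am hoping would work on CPU. This is only a minimal sketch assuming the model is a standard PyTorch module with a saved checkpoint; the class name, checkpoint path, and input shape below are placeholders, not the repo's actual names, so please correct me if your loading API differs:

```python
import torch
import torch.nn as nn


# Placeholder for the repo's actual model class (assumption, not the real one).
class PlaceholderModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 2)

    def forward(self, x):
        return self.layer(x)


device = torch.device("cpu")

model = PlaceholderModel().to(device)
# If the repo ships a checkpoint, map it onto the CPU explicitly, e.g.:
# state_dict = torch.load("checkpoint.pth", map_location=device)
# model.load_state_dict(state_dict)
model.eval()

# Run inference with gradients disabled to keep memory and compute low on CPU.
dummy_input = torch.randn(1, 10, device=device)
with torch.no_grad():
    output = model(dummy_input)
print(output)
```

Is this the right general approach for your model, or does it require GPU-only ops? Any pointers on expected CPU runtime would also help.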