Unable to use multiple GPUs in inference mode
santosh-shriyan opened this issue
Santosh Shriyan commented
I have a multi-GPU system that I want to use for inference.
I have changed all the CUDA device IDs in the code so that each corresponds to an available GPU,
i.e.:
with tf.device('/gpu:0'):   # line 17
with tf.device('/gpu:1'):   # line 46
with tf.device('/gpu:3'):   # line 112
and so on.
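
For context, here is a minimal sketch of what those edits look like in my copy (TF1 graph mode; the ops under each device are placeholders for illustration, not the actual TecoGAN graph):

```python
import tensorflow as tf

# Placeholder ops for illustration only; the real graph is TecoGAN's.
with tf.device('/gpu:0'):   # line 17 in my copy
    lr_input = tf.placeholder(tf.float32, [None, None, None, 3])

with tf.device('/gpu:1'):   # line 46
    features = tf.layers.conv2d(lr_input, 64, 3, padding='same')

with tf.device('/gpu:3'):   # line 112
    sr_output = tf.layers.conv2d(features, 3, 3, padding='same')
```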
I also changed the `cudaID` parameter for mode 1 in runGan.py to "2"
to distribute the load.
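
One suspicion: if runGan.py forwards `cudaID` into `CUDA_VISIBLE_DEVICES` (a common pattern, though I have not confirmed that is what it actually does), then setting it to a single ID would hide every other GPU from TensorFlow:

```python
import os

# Hypothetical wiring, shown only to illustrate the suspicion:
os.environ["CUDA_VISIBLE_DEVICES"] = "2"   # only physical GPU 2 stays visible
# Inside the process that GPU is renumbered as '/gpu:0', so '/gpu:1' and
# '/gpu:3' no longer exist, and soft placement would quietly move the ops
# pinned to them back onto '/gpu:0'.
```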
But whatever I do, only gpu:0 gets utilized, as if the device were hardcoded somewhere I cannot see.
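
To see where the ops actually land, I enabled device placement logging when creating the session (TF1 API):

```python
import tensorflow as tf

config = tf.ConfigProto(
    allow_soft_placement=True,    # silently falls back if a device is missing
    log_device_placement=True,    # prints the device chosen for every op
)
sess = tf.Session(config=config)
# In my runs every op is reported on '/device:GPU:0', matching the symptom.
```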
Would really appreciate any help or suggestions.