thunil / TecoGAN

This repo contains source code and materials for the TEmporally COherent GAN SIGGRAPH project.

Unable to use multiple GPUs in inference mode

santosh-shriyan opened this issue · comments

I have a multi-GPU system that I want to use for inference.
I have changed every CUDA device ID to correspond to an available GPU,

i.e.

```python
with tf.device('/gpu:0'):   # line 17
with tf.device('/gpu:1'):   # line 46
with tf.device('/gpu:3'):   # line 112
```

and so on.
I also changed the "cudaID" parameter for runGan.py mode 1 to "2"
to distribute the load.

But for some reason only gpu:0 gets utilized, as if the device were hardcoded somewhere I cannot see.
I would really appreciate any help or suggestions.
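A likely explanation (an assumption, not confirmed against this repo's code): if runGan.py exports its "cudaID" value as the `CUDA_VISIBLE_DEVICES` environment variable before TensorFlow starts, then only that one physical GPU is visible to the process, and TensorFlow renumbers the visible devices from 0. In that case `tf.device('/gpu:1')` or `'/gpu:3'` would silently fall back (or fail), and everything lands on `/gpu:0`. The sketch below illustrates the renumbering without requiring TensorFlow; the mapping logic mirrors how CUDA exposes devices to a process:

```python
import os

# Hypothetical illustration: CUDA_VISIBLE_DEVICES must be set *before*
# the framework initializes CUDA. Afterwards, only the listed physical
# GPUs are visible, and they are renumbered starting from 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1,3"  # expose physical GPUs 1 and 3

# Inside the process, logical device indices map to physical GPUs like so:
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
mapping = {f"/gpu:{i}": f"physical GPU {p}" for i, p in enumerate(visible)}
print(mapping)
# → {'/gpu:0': 'physical GPU 1', '/gpu:1': 'physical GPU 3'}
```

So if the script sets `CUDA_VISIBLE_DEVICES="2"`, there is exactly one logical device, `/gpu:0`, regardless of the `tf.device` calls in the graph code. To use several GPUs you would need to make all of them visible (e.g. `CUDA_VISIBLE_DEVICES="0,1,3"`) and then address them by their logical indices.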