Nellaker-group / TowardsDeepPhenotyping

GPU implementation

widedh opened this issue · comments

Hi,

I am trying to run training on the GPU, so I modified the train file:
parser.add_argument('--multi-gpu', help='Number of GPUs to use for parallel processing.', type=int, default=1)
and then ran
python3 train.py ..... --multi-gpu-force --gpu GPU-ID
But the nvidia-smi command shows:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.43       Driver Version: 418.43       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:01:00.0  On |                  N/A |
| 18%   29C    P8     4W / 250W |    427MiB / 10988MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1353      G   /usr/lib/xorg/Xorg                            18MiB |
|    0      1390      G   /usr/bin/gnome-shell                          57MiB |
|    0      1587      G   /usr/lib/xorg/Xorg                           147MiB |
|    0      1720      G   /usr/bin/gnome-shell                         150MiB |
|    0      4191      G   ...uest-channel-token=18265781320550942790    51MiB |
+-----------------------------------------------------------------------------+

As I see it, the 1% GPU-Util means the algorithm isn't using the GPU.

What am I doing wrong, and how can I use my GPU?
Thank you
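For reference, one common way to make a training process target a specific GPU is to set the CUDA_VISIBLE_DEVICES environment variable before TensorFlow is first imported, since the CUDA runtime reads it once at initialisation. A minimal sketch; the `pin_gpu` helper is hypothetical and not part of this repository:

```python
import os

def pin_gpu(gpu_id: int) -> None:
    """Restrict CUDA (and hence TensorFlow) to a single GPU.

    Hypothetical helper: must run before TensorFlow is first imported,
    because the CUDA runtime reads CUDA_VISIBLE_DEVICES only once.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)

pin_gpu(0)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints "0"
```

Equivalently, the variable can be set on the command line, e.g. `CUDA_VISIBLE_DEVICES=0 python3 train.py ...`.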

It turned out to be a problem with the tensorflow-gpu installation; after fixing that, I also decreased the number of batches.

Hi @widedh - All of our tensorflow code is for GPUs anyway.