huawei-noah / trustworthyAI

Trustworthy AI related projects


Question about castle.algorithms.RL using GPU

Ethan-Chen-plus opened this issue · comments

When I use castle.algorithms.RL to fit my data, I set the device to GPU, but when I check with nvidia-smi there is no running process on any of the GPUs.

Hello,

Could you try using RL(device_type='gpu') (in lowercase) to see if it works? You can also try specifying the device ID with the device_ids parameter, e.g., device_ids=0.
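For anyone landing on this issue, here is a minimal sketch of the suggested call. It assumes the standard gcastle pattern of calling learn() on a data matrix and then reading causal_matrix; the dataset below is random noise, purely for illustration:

```python
import numpy as np
from castle.algorithms import RL

# Simulated data for illustration: 1000 samples over 10 variables.
X = np.random.randn(1000, 10)

# Lowercase 'gpu' plus an explicit device ID, as suggested above.
rl = RL(device_type='gpu', device_ids=0)
rl.learn(X)

# The learned adjacency matrix of the causal graph.
print(rl.causal_matrix)
```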

This works. However, I still have two questions:

  1. When I set device_ids to all eight cards (0~7), it still only runs on card 0.
  2. The GPU does seem to speed things up compared to the CPU, but not by much: 4.00 s/it vs. 4.25 s/it.

Nice~

For your questions,

  1. Setting the device IDs changes the CUDA_VISIBLE_DEVICES environment variable, but I believe the code itself only uses a single device, so the parameter is mainly useful for selecting which device to run on (see the sketch after this list).
  2. This mainly depends on the problem size: for smaller problems the data-transfer overhead dominates, which can make the CPU faster. Another thing to note is that the RL method here is slow on larger graphs; the paper's conclusion mentions that up to ~30 nodes is manageable, but anything larger can be problematic.
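To make point 1 concrete, here is a small standalone check (not part of castle's API, just an illustration of how CUDA_VISIBLE_DEVICES behaves): restricting the variable before CUDA is initialized leaves a single visible device, which PyTorch then renumbers as cuda:0.

```python
import os

# Must be set before the first CUDA initialization; this is effectively
# what selecting a device ID does under the hood.
os.environ['CUDA_VISIBLE_DEVICES'] = '3'

import torch

# Even though physical card 3 was selected, PyTorch now sees exactly
# one device, renumbered as cuda:0.
print(torch.cuda.device_count())      # 1
print(torch.cuda.current_device())    # 0
print(torch.cuda.get_device_name(0))  # name of physical card 3
```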

Thanks a lot for your help!