Question about castle.algorithms.RL using GPU
Ethan-Chen-plus opened this issue · comments
Hello,
could you try using `RL(device_type='gpu')` (in lowercase) to see if it works? You can also try specifying the device ID with the `device_ids` parameter, e.g., setting `device_ids=0`.
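A minimal sketch of the suggested call, assuming gcastle is installed and exposes `RL` with the `device_type` and `device_ids` parameters mentioned above (the import is guarded so the snippet degrades gracefully on a machine without the package or a GPU):

```python
# Hedged sketch of the suggested usage; not gcastle's own docs.
try:
    from castle.algorithms import RL

    # device_type must be lowercase 'gpu'; device_ids selects which GPU.
    rl = RL(device_type='gpu', device_ids=0)
    status = 'RL initialized'
except Exception:
    # gcastle not installed, or no usable GPU backend on this machine.
    status = 'gcastle/GPU unavailable'

print(status)
```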
Nice~
For your questions:

- Setting the device IDs will change the `CUDA_VISIBLE_DEVICES` environment variable, but I believe the code itself will only use a single device, so it's mainly useful for choosing which device to run on.
- This mainly depends on the problem size: for smaller problems there is a lot of data-transfer overhead, which makes running on the CPU faster. Another thing to note is that the RL method here is slow on larger graphs. In the paper's conclusion they mention that ~30 nodes is okay, but anything larger can be problematic.
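The device-selection behavior described in the first point can be sketched in plain Python. The helper below is illustrative only (not gcastle's actual code): setting a device ID typically just populates `CUDA_VISIBLE_DEVICES`, so the framework sees a single GPU.

```python
import os

def select_device(device_ids=0):
    """Illustrative stand-in for what a device_ids parameter usually does:
    restrict the process to one GPU via CUDA_VISIBLE_DEVICES."""
    os.environ['CUDA_VISIBLE_DEVICES'] = str(device_ids)
    return os.environ['CUDA_VISIBLE_DEVICES']

# After this call, CUDA-aware frameworks in this process see only GPU 0.
visible = select_device(0)
print(visible)  # → 0
```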
Thanks a lot for your help!