Training on GPU

Hi everyone

I’m trying to train an agent using training_navigation.py from the baselines repo.
I installed pytorch with CUDA support as described here: https://pytorch.org/get-started/locally/
I use lines 22-23 in dueling_double_dqn.py of the baselines repo to switch between running on CPU and GPU.
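For context, the switch is just the usual PyTorch device-selection pattern; here is a minimal sketch of it (variable names are illustrative, not the exact lines from the file):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# The model and every training batch then have to be moved explicitly, e.g.:
# qnetwork.to(device); states = states.to(device)
```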
Running on the GPU speeds up training only by about 1.5x compared to running on the CPU.
The GPU does seem to be in use (checked with nvidia-smi), but its load is very low (3-4%).
I’m new to PyTorch and RL, but I’ve trained some CNNs using TensorFlow.
With CNNs, the GPU used to give me a 10-50x increase in training speed.
Is it normal that the difference is so small for the Flatland training example, or am I missing something?


Hi @maria_schoenholzer

If you are training with the vanilla implementation, the speed-up will be minor because the NN is very small. It uses the TreeObsForRailEnv, which collects information along the possible routes of each agent and stores it in a tree. Thus no CNNs are needed, and the speed-up for training the DDQN agent is minor.
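To give a feeling for the scale: a dueling Q-network on a flattened tree observation is only a few fully connected layers, roughly like the sketch below (the observation size, hidden size, and action count are illustrative, not the exact baseline values):

```python
import torch
import torch.nn as nn

class SmallDuelingQNet(nn.Module):
    """Toy dueling Q-network over a flat tree-observation vector.

    Sizes are illustrative; the point is that the whole model is a couple of
    small fully connected layers, so there is very little for a GPU to do.
    """
    def __init__(self, obs_size: int = 231, n_actions: int = 5, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_size, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # action advantages A(s, a)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.feature(x)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)     # dueling aggregation
```

On a network this small, a single forward/backward pass is tiny compared to what a GPU is built for, so most of its capacity sits idle.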

In my experience, the transfer from CPU to GPU memory (Flatland itself runs on the CPU) takes up too much time for the speed-up in learning to be noticeable. Other approaches such as A3C, or using global observations with CNNs, would probably benefit more from the GPU.
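If you want to see this on your own machine, you can time the host-to-device copy against the forward pass for a small batch; a rough sketch (model and batch sizes are illustrative, and the torch.cuda.synchronize() calls are needed to get meaningful GPU timings):

```python
import time
import torch
import torch.nn as nn

device = torch.device("cuda:0")
# Tiny MLP stand-in for the baseline Q-network (sizes are illustrative).
net = nn.Sequential(nn.Linear(231, 128), nn.ReLU(), nn.Linear(128, 5)).to(device)
batch = torch.randn(32, 231)          # small batch built on the CPU, as Flatland does

torch.cuda.synchronize()
t0 = time.perf_counter()
batch_gpu = batch.to(device)          # host-to-device copy
torch.cuda.synchronize()
t1 = time.perf_counter()
out = net(batch_gpu)                  # the actual forward pass
torch.cuda.synchronize()
t2 = time.perf_counter()

print(f"copy: {(t1 - t0) * 1e6:.0f} us, forward: {(t2 - t1) * 1e6:.0f} us")
```

For batches this small, the copy often costs as much as the compute itself, which is why the GPU load stays so low.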

Hope this helps.

Best regards,
Erik


Hi Erik

Thank you for the detailed explanation. It totally makes sense.