Can anyone tell me the proper way to train multiple agents simultaneously? Simply launching multiple environments crashes and does not seem to work, because the socket used for communication with Python is already in use (or something else is wrong…).
I am asking because I am unsure whether it is intended that you can only run a single environment.
I am running Windows 10, v1.1.1 from the other thread, and Python 3.5.
I am replying to myself because I am an idiot, and maybe you are, too.
When creating an ObstacleTowerEnv, you need to set the worker_id parameter.
@EliasB Thank you for explaining this. Yes, setting the worker_id works. If you use multiple VMs/containers, each of them can also train its own agent; I suspect a few people will use that approach as well. Thanks for sharing this trick!
Hi, unfortunately, I’m still not quite sure how to set up multiple instances at the same time…
Do you run multiple experiments, or from where do you launch your function with multiple workers?
In ObstacleTowerEnv, there is a worker_id parameter, an int that corresponds to the unique port (5005 + worker_id) that instance will use. Just be sure that each worker_id is unique whenever you launch another instance at the same time. https://github.com/Unity-Technologies/obstacle-tower-env/blob/master/obstacle_tower_env.py#L25
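A minimal sketch of what this looks like in practice. The 5005 + worker_id port mapping is taken from the linked source; the executable path in the commented-out part is a placeholder, and the real environment is only shown in comments since it needs the game binary installed:

```python
# Each parallel instance must get its own worker_id so it binds a unique
# port (5005 + worker_id, per the linked obstacle_tower_env source).

def worker_port(worker_id, base_port=5005):
    """Port a given worker_id will bind, assuming the 5005 + n scheme."""
    return base_port + worker_id

# In a real script (path is a placeholder for your own binary location):
# from obstacle_tower_env import ObstacleTowerEnv
# envs = [ObstacleTowerEnv('./ObstacleTower/obstacletower', worker_id=i)
#         for i in range(4)]

print([worker_port(i) for i in range(4)])  # → [5005, 5006, 5007, 5008]
```

If two instances ever share a worker_id, the second one will try to bind a port that is already taken, which matches the "socket already in use" crash described above.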
Hi Elias, I’m running multiple parallel experiments as well, and I’m looking to team up so we can share knowledge and run more experiments. Let me know!
Hi. I’m running multiple environments and everything works, but after a while CPU usage drops for each instance. Each instance uses almost 20% of CPU power at the beginning, and it drops to about 1% after roughly 100,000 steps. I have only 4 agents, and I see this behavior even with just a simple loop taking random actions (no learning at all).