Hi,
This is my first time joining an AIcrowd competition. I have tested my code on a local machine and am now rewriting it for submission, but I have a few questions:
(i) How can I put my trainer on CUDA?
(ii) Can I add another wrapper on the env? How can the agent sample from the env in parallel? Is it just like CustomRandomAgent in the example?
(iii) Do I have to put the agent's hyper-parameters in a .yaml file?
Many thanks.
Hello @lars12llt
Welcome to the AIcrowd community!
1. I believe your question is how you can assign a GPU for the evaluation. All the evaluations will be run on a Tesla P100 (16 GB).
2. Ideally, we expect the model/network-related code to stay in the `models` directory, while the training-algorithm-related code (custom policy functions, etc.) goes into the `algorithms` directory. For the wrapper part, can you have a look at the custom preprocessors in rllib? https://docs.ray.io/en/master/rllib-models.html#custom-preprocessors
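To make the wrapper idea concrete, here is a minimal sketch of the kind of observation transform you would put inside a custom preprocessor's `transform()` method. The class and registration details are in the linked docs; the clip-and-rescale logic below is just a hypothetical example, not part of the starter kit:

```python
def transform_observation(obs, max_val=255.0):
    """Hypothetical transform: clip raw observation values to
    [0, max_val] and rescale them to [0, 1]. In a custom rllib
    Preprocessor, logic like this would live in transform()."""
    return [min(max(float(x), 0.0), max_val) / max_val for x in obs]

# Example: raw pixel-style values become floats in [0, 1].
print(transform_observation([0, 255, 127.5]))  # [0.0, 1.0, 0.5]
```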
For the parallel sampling, you don't need to implement it yourself; rllib does it for you. The `num_workers` parameter in the experiment config file (the .yaml file) controls the number of rollout workers that are spawned, and you can set the number of envs each worker runs using `num_envs_per_worker`.
3. Yes, we expect you to set the hyper-parameters in the config file. For why we want you to do it that way, please refer to FAQ: Regarding rllib based approach for submissions
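As a rough sketch of what such an experiment config could look like (the experiment name, algorithm, env, and all values below are placeholders, not recommendations):

```yaml
# Hypothetical experiment config -- names and values are placeholders.
my-experiment:
    run: PPO                    # or your custom trainer from `algorithms`
    env: my_env                 # placeholder env name
    config:
        num_workers: 4          # number of rollout workers spawned
        num_envs_per_worker: 2  # envs each worker runs in parallel
        # agent hyper-parameters also go here, e.g.:
        lr: 0.0005
        gamma: 0.99
```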
Some references that might help:
- How custom algorithm (trainer) interacts with the rollout workers: https://docs.ray.io/en/master/rllib-training.html
- How models, preprocessors, envs interact: https://docs.ray.io/en/master/rllib-models.html#custom-preprocessors