While we have run quite a few challenges in the past with no restrictions on which framework to use, in the Procgen challenge we are enforcing a framework as an experiment, as it of course helps us orchestrate the evaluations in a more stable way. At the same time, it ensures that all the code from all the participants can, at the end of the competition, hypothetically be merged into the starter kit via a simple pull request, hence increasing the overall impact of all the activity that happened in this challenge.
RLlib is not a general framework like PyTorch or Gym; it is a specific implementation, and such a restriction may discourage participation. For example, I have some custom methods I wanted to use for the competition, but it's really complicated to port the code to RLlib. There are many good RL libraries, and the competition will not benefit from forbidding them. I'm wondering if you would consider an alternative.
Agreed. I think the challenge is fantastic, but I have worked with RLlib in the past and would not want to go back to it. Please consider relaxing this requirement.
Is there any kind of interface that could be used to dynamically tell each environment instance which seed to use? I've got some curriculum ideas that involve sampling seeds during training.
At first sight, I think that this would be way too cumbersome to do with RLlib.
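For what it's worth, seed control can stay entirely inside the gym interface: Procgen exposes `start_level` and `num_levels` as constructor arguments, so a wrapper can rebuild the env on a freshly sampled level at every reset. A rough sketch, with all names being placeholders rather than starter-kit code:

```python
import gym
import numpy as np

class CurriculumSeedEnv(gym.Wrapper):
    """Sketch of a curriculum wrapper: every reset rebuilds the Procgen env
    on a freshly sampled level. `sample_level` is a placeholder for your
    own curriculum logic; all names here are hypothetical."""

    def __init__(self, env_name="procgen:procgen-coinrun-v0", level_pool=None):
        self.env_name = env_name
        self.level_pool = level_pool if level_pool is not None else list(range(200))
        super().__init__(self._make(self.sample_level()))

    def _make(self, level):
        # num_levels=1 pins the env to exactly one level (start_level).
        return gym.make(self.env_name, start_level=level, num_levels=1)

    def sample_level(self):
        # Placeholder: uniform sampling; swap in your curriculum schedule.
        return int(np.random.choice(self.level_pool))

    def reset(self, **kwargs):
        # Procgen fixes its level set at construction, so rebuild per episode.
        self.env.close()
        self.env = self._make(self.sample_level())
        return self.env.reset(**kwargs)
```

Registered through `ray.tune.registry.register_env`, such a wrapper could then be referenced from the experiment yaml like any other env, so the RLlib side stays untouched.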
I tried to submit a random agent: I created experiments/rand.yaml with run: custom/CustomRandomAgent and set export EXPERIMENT_DEFAULT="experiments/rand.yaml" in run.sh. But the submission got the "failed" status with no clear reason given.
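For reference, the experiment file Tune would accept looks roughly like the sketch below; everything except the `run:` line (taken from the post above) is an assumption about the starter kit's schema rather than its actual contents:

```yaml
# experiments/rand.yaml -- hypothetical sketch
random-agent:
  run: custom/CustomRandomAgent  # must match a key registered via register_trainable
  env: procgen_env_wrapper       # assumed env name; check the starter kit
  stop:
    timesteps_total: 1000
```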
@shivam, it would be really helpful if the organisers provided a minimal implementation of a random agent where RLlib is used only as a gym wrapper for logging.
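Until something official lands, here is a rough sketch of what such a minimal agent might look like, assuming the starter kit launches experiments through ray.tune. Note that the Trainable hook names changed across Ray versions (`_setup`/`_train` in 0.8.x, `setup`/`step` later), and the class still needs to be registered under the exact name used in the yaml's `run:` field:

```python
import gym
from ray.tune import Trainable

class CustomRandomAgent(Trainable):
    """Takes uniformly random actions; one episode per training iteration."""

    def _setup(self, config):  # renamed to `setup` in newer Ray releases
        # Assumption: Tune folds the experiment's top-level `env` field into
        # the config; adjust if your config is structured differently.
        self.env = gym.make(config["env"], **config.get("env_config", {}))

    def _train(self):  # renamed to `step` in newer Ray releases
        obs, done, episode_reward, steps = self.env.reset(), False, 0.0, 0
        while not done:
            action = self.env.action_space.sample()
            obs, reward, done, info = self.env.step(action)
            episode_reward += reward
            steps += 1
        return {"episode_reward_mean": episode_reward,
                "timesteps_this_iter": steps}
```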
Hello, I have the same question. Do you have any solutions or ideas I can follow? I debugged it locally, and it reports "ray.tune.error.TuneError: Unknown trainable: CustomRandomAgent".
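That TuneError means the `run:` string was never found in Tune's registry, i.e. `register_trainable` was not called (or was called under a different key) before the experiment launched. Make sure something along these lines runs during startup; the import path below is a guess, so adjust it to wherever the class actually lives:

```python
from ray.tune.registry import register_trainable
from algorithms.custom_random_agent import CustomRandomAgent  # hypothetical path

# The registry key must match the yaml's `run:` value character for character.
register_trainable("custom/CustomRandomAgent", CustomRandomAgent)
```

Also note that the error names `CustomRandomAgent` while the yaml above uses `custom/CustomRandomAgent`; that mismatch between the two strings is worth checking too.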