Several questions about the competition

Hello everyone, this is my first time joining a competition on AIcrowd. May I ask a few questions?

  1. Should we set num_levels=200 and distribution_mode=hard for submission?
  2. My own code does not rely on any files in the starter kit except the procgen wrapper. If I want to submit it, can I change run.sh to run it?

Hello @the_raven_chaser

You can change the environment configuration as you like, and we will use the same configuration during the training phase. During the rollouts, we will force the following configuration:

{
    "num_levels": 0,
    "start_level": 0,
    "use_sequential_levels": false,
    "distribution_mode": "easy",
    "use_generated_assets": false
}
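
For local testing, here is a minimal sketch (not part of the starter kit) of instantiating an env with this exact rollout configuration, assuming the procgen package is installed; coinrun is only a placeholder game id:

import gym

# Procgen games are registered with gym and accept the rollout options
# above as keyword arguments.
env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=0,
    start_level=0,
    use_sequential_levels=False,
    distribution_mode="easy",
    use_generated_assets=False,
)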

As long as your code uses rllib, things should work. You can add more environment variables or arguments to the python train.py line in run.sh, but we will still use the train.py wrapper that we provided to trigger the training. So any changes you make to train.py will be dropped.

This discussion should give you more context on why we want to enforce the use of a framework: FAQ: Regarding rllib based approach for submissions

All the best for the competition! :smiley:

I see, thanks. But aren’t we required to use only 200 levels for training? If so, why not set “num_levels=200” for evaluation?

@the_raven_chaser: the num_levels=200 setting would only be used in the final round, when we are evaluating generalization.

For the warm-up round and Round 1 of the competition, there are no restrictions on num_levels, so technically we are only measuring sample efficiency.
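
To make the difference concrete, an illustrative (unofficial) sketch of the two regimes, using the procgen option names quoted above:

# Warm-up round / Round 1: no restriction on levels, so training on all
# levels (num_levels=0) is the natural choice; this measures sample efficiency.
round1_env_config = {"num_levels": 0, "start_level": 0, "distribution_mode": "easy"}

# Final round: training is restricted to 200 levels so that evaluation on
# unseen levels measures generalization.
final_env_config = {"num_levels": 200, "start_level": 0, "distribution_mode": "easy"}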

Thank you for the explanations :-)

Hi @jyotish and @mohanty. If we set “num_levels=0” for training, which configuration should we use for evaluation on our local machines? And is the difficulty set to “easy” throughout the competition?

Hi @the_raven_chaser,

When you use num_levels=0 during training, the env samples from all possible levels of that game. So it makes sense to use num_levels=0 during your local evaluation too.

And yes, we use distribution_mode=easy throughout the competition.
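
For reference, a minimal local-evaluation sketch under these settings (random actions stand in for a trained policy, and coinrun is again only an example game):

import gym

env = gym.make("procgen:procgen-coinrun-v0", num_levels=0,
               distribution_mode="easy")

# Roll out a few episodes and report the mean return.
returns = []
for _ in range(5):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
        total += reward
    returns.append(total)
print("mean return:", sum(returns) / len(returns))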

Thanks for the explanations

@jyotish Above you mentioned that any changes to train.py will be dropped when submitted; can I assume that the same applies to rollout.py? I’m brand new to RLlib and Ray, so this warm-up phase is very helpful. If we have rollout-specific logic, is there a place you’d suggest implementing it?

Hello @tim_whitaker

Yes, any changes to train.py, rollout.py, and envs/procgen_env_wrapper.py will be dropped during the evaluation. The custom algorithms (the trainer-class-related ones) and the custom models will be used even during the rollouts, so those are the right places for rollout-specific logic. If the rollouts work as expected locally without altering the provided rollout.py, the same should work during the evaluation as well.
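
As a concrete (unofficial) sketch of that idea: anything packaged in a registered custom model travels with the submission into the rollouts. This assumes ray[rllib] with the PyTorch backend; MyProcgenModel and my_procgen_model are hypothetical names:

import torch
import torch.nn as nn
from ray.rllib.models import ModelCatalog
from ray.rllib.models.torch.torch_modelv2 import TorchModelV2

class MyProcgenModel(TorchModelV2, nn.Module):
    # Hypothetical custom model; everything defined here is shipped to the
    # rollout workers together with the trained weights.
    def __init__(self, obs_space, action_space, num_outputs, model_config, name):
        TorchModelV2.__init__(self, obs_space, action_space, num_outputs,
                              model_config, name)
        nn.Module.__init__(self)
        in_size = int(torch.prod(torch.tensor(obs_space.shape)))
        self.policy_head = nn.Linear(in_size, num_outputs)
        self.value_head = nn.Linear(in_size, 1)
        self._value = None

    def forward(self, input_dict, state, seq_lens):
        x = input_dict["obs"].float().flatten(1)
        self._value = self.value_head(x).squeeze(1)
        return self.policy_head(x), state

    def value_function(self):
        return self._value

# Registering under a name makes the model selectable from the experiment
# config (model: {custom_model: my_procgen_model}) in training and rollouts alike.
ModelCatalog.register_custom_model("my_procgen_model", MyProcgenModel)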