Hi everyone,
It is easy to notice that the top 6 teams on the leaderboard have the same marks in the top-left of the testing video. Is it a mark generated by the official code?
It's the `paint_vel_info` flag that you can find under `env_config` in the .yaml files. There are also some flags that are not in the .yaml files but that people are using (`use_monochrome_assets`, `use_backgrounds`). You can find all of them if you scroll down here: https://github.com/openai/procgen
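For anyone curious, a minimal sketch of how these flags can be passed straight to a Procgen environment through Gym (assuming the standard `procgen` pip package and the old Gym API that the competition baselines use; the environment name and flag values here are just examples):

```python
import gym

# Procgen registers its environments with Gym, so the flags from the README
# can be passed as keyword arguments to gym.make.
env = gym.make(
    "procgen:procgen-coinrun-v0",
    paint_vel_info=True,          # draws the velocity marker in the top-left corner
    use_monochrome_assets=False,  # keep the normal colored sprites
    use_backgrounds=True,         # keep the themed backgrounds
)

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```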
Should we actually be allowed to change the environment? Maybe these settings should be reset when doing evaluation?
Hello @lars12llt @karolisram
Yes, these values are not supposed to be changed. We will override these values starting from the next grader update.
Damn, just when I found this, I see that it's not allowed.
What values are allowed to be modified? The values listed in `env_config` in `impala-baseline.yaml`?
Is the grader open-source? That way we would know exactly what the difference is between the submitted code and the one that is actually executed.
Hello @victor_le
What values are allowed to be modified? The values listed in `env_config` in `impala-baseline.yaml`?
We will set all the env config params (except `rand_seed`) to the default values during evaluations, i.e. none of them is supposed to be changed by the participants.
Is the grader open-source?
We can't open-source the grader at the moment, but you should be able to replicate the evaluation setup with the values mentioned in the FAQ: Round 1 evaluations configuration.
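For reference, a rough, hypothetical sketch of what "reset everything except `rand_seed` to defaults" could look like. The helper name and the default values below are my assumptions (taken from the procgen README plus the competition's easy distribution mode), not the actual grader code; the authoritative values are the ones in the FAQ: Round 1 evaluations configuration.

```python
# NOT the official grader: an illustrative sketch of overriding a submitted
# env_config with default values, keeping only the participant's rand_seed.
# All defaults below are assumptions based on the procgen README.
DEFAULT_ENV_CONFIG = {
    "paint_vel_info": False,
    "use_generated_assets": False,
    "use_monochrome_assets": False,
    "use_backgrounds": True,
    "restrict_themes": False,
    "center_agent": True,
    "use_sequential_levels": False,
    "distribution_mode": "easy",  # assumed competition setting, not procgen's own default
}

def sanitize_env_config(submitted_config: dict) -> dict:
    """Hypothetical helper: force defaults for every env_config param except rand_seed."""
    cleaned = dict(DEFAULT_ENV_CONFIG)
    if "rand_seed" in submitted_config:
        cleaned["rand_seed"] = submitted_config["rand_seed"]
    return cleaned
```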
Thank you for the clarification