Is it OK to directly wrap the gym env in `create_single_env` in local_evaluation.py when evaluating the model on AIcrowd?

If I need to process the observations and reset some data when `env.reset()` is called, is it OK to wrap the env with my own wrapper inside the `create_single_env` function in local_evaluation.py?

Hi @CH_do

Unfortunately we do not support this. Any changes in local_evaluation.py are not used for the actual evaluation.

Please add all wrapper-related logic to the agent class you’re submitting in agents/user_config.py.
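For example, here is a rough sketch of how wrapper-style preprocessing could live inside the agent instead of an env wrapper (the class name, method bodies, and the exact `act` signature below are only illustrative; follow the interface defined in the starter kit):

```python
import numpy as np


class MyPreprocessingAgent:
    """Illustrative agent that does its own observation preprocessing,
    since wrappers added in local_evaluation.py are not applied during
    the actual evaluation."""

    def _preprocess(self, observation):
        # Whatever your wrapper would have done, e.g. casting / scaling.
        return np.asarray(observation, dtype=np.float32)

    def act(self, observation, done=False):
        obs = self._preprocess(observation)
        # ... run your policy on `obs` and return an action ...
        return 0  # placeholder action
```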

Thanks for your reply.
Some feature-processing methods, such as stacking the most recent k frames, need to clear their historical data when the env is reset. However, in the current evaluation framework this is hard to do. The problem could be solved if an additional parameter were passed into `agent.act`, for example a boolean `is_first_obs` indicating whether the env has just been reset.

Hi @CH_do

The `done` parameter is the same as what the env outputs. You can use that to detect resets.

The `done` flag is always reset to False when an episode ends in `evaluate` in local_evaluation.py.

Oh, this is a bug. Thanks for pointing this out, I’ll fix it asap.

@CH_do

`done` will be True after the env resets now. Thanks again for checking this.
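For reference, a rough sketch of how you could use `done` inside your agent’s `act` to clear stacked-frame history (the `FrameStackAgent` name and its internals are illustrative, not part of the starter kit):

```python
import numpy as np


class FrameStackAgent:
    """Illustrative agent that stacks the last `num_stack` observations
    and clears its history when `done` signals a reset."""

    def __init__(self, num_stack=4):
        self.num_stack = num_stack
        self.frames = None  # history of the most recent observations

    def act(self, observation, done=False):
        # `done` mirrors the env output, so (with the fix above) it is
        # True on the first call after a reset; drop the old history then.
        if done or self.frames is None:
            self.frames = [observation] * self.num_stack
        else:
            self.frames = self.frames[1:] + [observation]
        # Concatenate along the last (channel) axis, as a FrameStack
        # wrapper typically would.
        stacked = np.concatenate([np.asarray(f) for f in self.frames], axis=-1)
        # ... run your policy on `stacked` and return an action ...
        return 0  # placeholder action
```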

Hi, should it be `observations` (not `observations_agent`) here?


Indeed, it should be `observation`; fixed it. Thanks.

Hi, sorry to bother you again.
What’s the difference between the local evaluator and the actual one? The performance in the actual evaluation is much lower than in local evaluation. (LIMIT_TASKS has been changed to None in LocalEvalConfig.)

@CH_do

The score comes from a private set of tasks. Please check if your model is overfitting to the public tasks.