Running the evaluation worker during evaluations is now optional

Hello!

We are making the evaluation worker that runs during your evaluations optional. We use this worker to generate the rollout videos you see during training.

What will happen if I disable the evaluation worker?

  • Videos will not be generated during training.
  • You can use one additional rollout worker in its place, by increasing num_workers by 1 (as sketched after the example below). This is useful if you are experiencing low throughput.
  • rllib's ARS and APEX implementations should now work. They need more than one rollout worker, and the single mandatory evaluation worker previously caused their training to fail.
  • The custom random agent code in the starter kit works with no additional modifications.

How can I disable the evaluation worker?

You should set disable_evaluation_worker to True in your experiment YAML file.

For example,

procgen-ppo:
    run: PPO
    env: procgen_env_wrapper
    disable_evaluation_worker: True
    stop:
        timesteps_total: 100000
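
If you also want to claim the freed slot as an extra rollout worker (see the list above), the worker count goes in the config section. A hedged sketch; the num_workers value shown is purely illustrative:

procgen-ppo:
    run: PPO
    env: procgen_env_wrapper
    disable_evaluation_worker: True
    config:
        num_workers: 7  # illustrative: one more than your previous setting
    stop:
        timesteps_total: 100000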

Great! Although, is there any way to plot our own metrics on the Grafana dashboard (e.g. training mean return)?
Edit: Found it. There is an option in the dashboard to plot any metric your code outputs in each training iteration. Really useful!


Hello @jyotish

I’m getting this error on my local machine. Is it a ray version issue or something else? The installed version is ray[rllib]==0.8.5.

File "<...>/python3.7/site-packages/ray/tune/experiment.py", line 170, in from_json
exp = cls(name, run_value, **spec) 
TypeError: __init__() got an unexpected keyword argument 'disable_evaluation_worker'

Hello @dipam_chakraborty

This is not a ray-specific issue. In fact, there is no such flag in ray. On our side, we pop this flag from the experiment spec before passing it to run_experiments. You can make the same change in your train.py to run it locally.
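
For local runs, a minimal sketch of that change, assuming train.py loads the YAML into an experiments dict as in the starter kit (the variable names here are assumptions):

from ray.tune import run_experiments

for exp_name, exp_spec in experiments.items():
    # Strip the challenge-specific flag before the spec reaches ray,
    # since tune/rllib do not recognise it. Pop with a default so this
    # also works when the flag is absent.
    exp_spec.pop("disable_evaluation_worker", False)

run_experiments(experiments)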

Hello @joao_schapke

I’m a complete rllib noob. Could you please share a code snippet or a link showing how to output custom metrics?

Hello @dipam_chakraborty

You can add custom metrics using rllib callbacks.
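
Here is a minimal sketch using the dict-style callbacks API accepted by ray 0.8.x (the version mentioned above); the metric name episode_return is just an illustrative choice:

def on_episode_end(info):
    # rllib passes a dict holding, among other things, the episode object
    episode = info["episode"]
    # Anything written to custom_metrics is aggregated (mean/min/max)
    # per training iteration and appears in the training results.
    episode.custom_metrics["episode_return"] = episode.total_reward

# Hook it into the trainer config; the `config:` section of the
# experiment YAML maps to this same dictionary:
config = {
    "callbacks": {
        "on_episode_end": on_episode_end,
    },
}

Once these show up under custom_metrics in the training results, the dashboard steps below apply to them as well.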

Visualizing the custom metrics

Once you add your metrics this way, we will collect them during evaluation and you can visualize them on the submission dashboard. To visualize your custom metrics:

  • Open the dashboard.
  • Hit the Esc key; you should see a few dropdowns at the top of the window.
  • Select the metric(s) you want to visualize.


🎉 🎉 🎉


Hi @jyotish

How should I change train.py to disable the evaluation worker locally?

Hello @the_raven_chaser

The evaluation worker won’t run locally unless you pass the evaluation config. If you are asking about the disable_evaluation_worker flag: yes, you can pop it from the config in train.py (as in the snippet above) so that the experiment runs locally.

Thank you @jyotish, I see now.