Great! Although, is there any way to map our own metrics in the Grafana dashboard (e.g. training mean return)?
Edit: Found it: there is an option in the dashboard to plot any metric your code outputs in each training iteration. Really useful!
This is not a Ray-specific issue. In fact, there is no such flag in Ray. We pop this flag from the config before passing it to run_experiments. You can make the same change in your train.py to run it locally.
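In case the concrete change is useful, here is a rough sketch of what popping the flag in train.py could look like (assuming the flag is disable_evaluation_worker, as named later in this thread; the YAML path and the structure of the experiments dict are illustrative, not the challenge's actual ones):

```python
import yaml

from ray.tune import run_experiments

# Sketch: load the experiment spec, then strip the challenge-specific
# flag so Ray never sees the unknown key. Path and key names are
# illustrative.
with open("experiments/my-experiment.yaml") as f:
    experiments = yaml.safe_load(f)

for experiment in experiments.values():
    experiment.get("config", {}).pop("disable_evaluation_worker", None)

run_experiments(experiments)
```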
Once you add your metrics here, we will collect them during evaluation, and you can visualize them on the submission dashboard. To visualize your custom metrics:

1. Open the dashboard.
2. Hit the Esc key and you should see a few dropdowns at the top of the window.
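For reference, reporting a custom metric from your code looks roughly like this in RLlib (a sketch only: it assumes a Ray 1.x layout where DefaultCallbacks lives under ray.rllib.agents.callbacks, and the metric name is made up):

```python
from ray.rllib.agents.callbacks import DefaultCallbacks


class CustomMetricsCallbacks(DefaultCallbacks):
    """Writes per-episode values into RLlib's custom_metrics."""

    def on_episode_end(self, *, worker, base_env, policies, episode, **kwargs):
        # Anything stored in episode.custom_metrics is aggregated
        # (mean/min/max) per training iteration and reported under
        # custom_metrics/ in the training results.
        episode.custom_metrics["episode_return"] = episode.total_reward


# Then point your trainer config at the class, e.g.:
# config["callbacks"] = CustomMetricsCallbacks
```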
The evaluation worker won't run locally unless you pass the evaluation config. If you are asking about dealing with the disable_evaluation_worker flag: yes, you can pop it from the config in train.py so that it works locally.
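For completeness, a minimal sketch of the evaluation settings you would add so an evaluation worker spins up locally (key names as in Ray 1.x RLlib; newer versions renamed evaluation_num_episodes to evaluation_duration, and the values here are just examples):

```python
# Sketch: `config` is assumed to be your trainer config dict.
config.update({
    "evaluation_interval": 1,      # evaluate every training iteration
    "evaluation_num_episodes": 5,  # episodes per evaluation round
    "evaluation_num_workers": 1,   # dedicated rollout worker for eval
})
```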