Current status of imitation agent in baseline repository

Greetings, dear organisers!
I have been trying to use the imitation agent resources in the baseline repository, especially imitation_agent/ and the baselines/custom_imitation_learning_rllib_tree_obs/*.yaml configs.
However, they do not seem to be working or fully implemented, at least for now. I can't even find any usage of the graph-related code in /libs, although it apparently exists for generating expert demonstrations.
So, what's the current status of this? Is there any way to make use of this code?


Hey @milva,

So, we have some beautiful imitation learning machinery, with the ability to generate and persist expert demonstrations from top OR submissions, and also with the ability to compute expert demonstrations on-the-fly (i.e. no need to create an expert demonstration dataset; you can just compute the best action dynamically). There's also a script to convert all of that to RLlib format so you can scale up training.
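To make the on-the-fly idea concrete, here is a minimal sketch of how such an expert can pick an action without any pre-generated dataset. The real baselines derive this from an OR solver / shortest-path information in the Flatland env; the mapping `dist_after` below is a hypothetical stand-in for "remaining shortest-path distance to the target if this action is taken":

```python
# Hedged sketch: an on-the-fly expert greedily picks the action that
# minimizes the remaining distance to the agent's target.
# `dist_after` maps each legal action id to that remaining distance;
# both the name and the action ids here are illustrative assumptions.

def expert_action(dist_after):
    """Return the action with the smallest remaining distance."""
    return min(dist_after, key=dist_after.get)

# e.g. action 2 (say, MOVE_FORWARD) gets the agent closest to its target:
print(expert_action({1: 12, 2: 9, 3: 14}))  # -> 2
```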

Sadly, though, all this went through multiple versions and is very poorly documented as of right now! We’re aware of it and will try to improve this aspect as soon as we can…

You can maybe get some help from here: Recreating Malfunctions

Also if you tell us more precisely what you are trying to do (pure IL with RLlib?) we may be able to nudge you in the right direction in the meantime.

The imitation trainer works; we have generated results with it. You can run training with simultaneous evaluation using the script: -ief baselines/custom_imitation_learning_rllib_tree_obs/ppo_imitation_tree_obs.yaml --eager --trace
(drop the -e flag if you don't want to run evaluation)
The only catch is that the OR expert solution was generated for an older Flatland version where the malfunction rate was defined differently. So if you are training with malfunctions, you can work around it by making the change below in the Flatland source code.

Change the following line in the method malfunction_from_file (in the Flatland source file):

mean_malfunction_rate = 1/oMPD.malfunction_rate
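As a hedged illustration of why the inversion is needed (assuming the two Flatland versions differ only in whether malfunction_rate stores a rate or the mean interval between malfunctions), the workaround amounts to this conversion; `malfunction_rate` below stands in for `oMPD.malfunction_rate`:

```python
# Hedged sketch: if the newer Flatland expects the mean interval between
# malfunctions while the stored parameter is a per-step rate, inverting the
# stored value reconciles the two conventions. This is an illustration of the
# one-line workaround above, not the actual Flatland source.

def mean_malfunction_rate(malfunction_rate):
    # mean interval between malfunctions = 1 / rate
    return 1 / malfunction_rate

# a rate of one malfunction every 200 steps:
print(mean_malfunction_rate(0.005))  # -> 200.0
```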

The documentation here is a bit old; we will update it soon.
You can also refer to this Google Colab notebook, which has the details along with the results.
Let me know if you are facing any issues.


Thanks for your thorough replies @MasterScrat @nilabha.
My current goal is a mixed approach that uses IL as a baseline and enhances it with other RL methods.
The --eager argument works like a charm; without it, training threw TF errors.

Thanks a lot!

Edit: so I should change that formula if I want to use the imitation trainer, right?

The above script was for doing PPO and IL alternately…

If you want pure IL, you can try: -ef baselines/custom_imitation_learning_rllib_tree_obs/pure_imitation_tree_obs.yaml --eager --trace
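For intuition on what "pure IL" does here, the core of behavioral cloning is just a supervised cross-entropy loss against the expert's action, with no RL reward term. A minimal sketch, with illustrative names and shapes (not the baseline's actual API):

```python
import math

# Hedged sketch of a pure-IL (behavioral cloning) objective: the policy's
# action logits are trained to maximize the likelihood of the expert action.

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def bc_loss(logits, expert_action):
    """Negative log-likelihood of the expert action under the policy."""
    probs = softmax(logits)
    return -math.log(probs[expert_action])

# A policy already favoring the expert action incurs a lower loss:
print(bc_loss([2.0, 0.1, 0.1], 0) < bc_loss([0.1, 0.1, 0.1], 0))  # -> True
```

In the alternating PPO + IL setup mentioned above, a loss like this would simply be mixed with the usual PPO objective instead of replacing it.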