Using a trained agent in RLlib

Hi all, I am trying to make a submission that is based on RLlib.

Do you have experience with using a trained agent in RLlib? Have you ever used RLlib for your submissions?

My current approach is as follows (a rough sketch in code follows the list), but it fails when restoring the trainer from the checkpoint:

  1. get a trainer instance for the given environment and config
  2. restore the model (and full training state) from the latest checkpoint
  3. get the default policy via trainer.get_policy()
  4. call policy.compute_actions(observations) to get actions
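
Roughly what I am doing, as a minimal sketch (the algorithm, env name, config and checkpoint path are placeholders rather than my actual setup, and this uses the older ray.rllib.agents API; newer RLlib versions name things differently):

```python
import gym
import ray
from ray.rllib.agents import ppo  # placeholder: import the trainer class matching your algorithm

ray.init()

# 1. + 2. Build a trainer with the same env and config as in training,
#         then restore the full state from a checkpoint file.
config = ppo.DEFAULT_CONFIG.copy()   # in reality: the exact config used for training
config["num_workers"] = 0            # no rollout workers needed for pure inference
trainer = ppo.PPOTrainer(env="CartPole-v0", config=config)  # "CartPole-v0" stands in for the real env
trainer.restore("/path/to/checkpoints/checkpoint-100")      # placeholder path

# 3. + 4. Get the default policy and query it with a batch of observations.
policy = trainer.get_policy()
env = gym.make("CartPole-v0")
obs = env.reset()
actions, _, _ = policy.compute_actions([obs])  # compute_actions expects a batch
print(actions)
```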

Do you know of an alternative solution? IMO, restoring the model weights alone should be sufficient, because we do not care about the training state anymore; all we want to use here is a trained agent.
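
Purely as a sketch of what I mean (continuing from the snippet above; nothing here is tested, and the file name is made up): dump only the policy weights after training and inject them into a fresh policy at submission time, instead of restoring the full trainer state.

```python
import pickle

# After training: keep only the policy weights, not the full trainer/optimizer state.
weights = trainer.get_policy().get_weights()
with open("policy_weights.pkl", "wb") as f:   # made-up file name
    pickle.dump(weights, f)

# At inference time: build a fresh trainer with the same config,
# then inject the saved weights instead of calling trainer.restore().
with open("policy_weights.pkl", "rb") as f:
    weights = pickle.load(f)
fresh_trainer = ppo.PPOTrainer(env="CartPole-v0", config=config)  # same placeholders as above
fresh_trainer.get_policy().set_weights(weights)
```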
Thanks for a hint.
Marco

You can refer to the rollout.py script in the AIcrowd baselines for Flatland.

And the corresponding script

Note that this runs small environments with a custom seed, so you will have to adapt the environment logic for your purposes.
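
Roughly, the inner loop there looks like this (a simplified single-agent sketch with a placeholder env; in Flatland, observations, rewards and dones are dicts keyed by agent id, so you would iterate over the agents and pass the appropriate policy id to compute_action):

```python
import gym

# Simplified evaluation loop with a restored trainer
# ("trainer" built and restored as in your steps above).
env = gym.make("CartPole-v0")  # stand-in env; replace with your Flatland environment setup
obs = env.reset()
done = False
episode_reward = 0.0
while not done:
    action = trainer.compute_action(obs)        # single-observation convenience method on the trainer
    obs, reward, done, info = env.step(action)
    episode_reward += reward
print("episode reward:", episode_reward)
```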

Your approach seems correct in principle … I am not sure why the trainer cannot restore from the checkpoint. You could compare it with the example provided.