Hi all, I am trying to make a submission that is based on RLlib.
Does anyone have experience with using a trained agent from RLlib? Have you used RLlib for your submissions before?
My current approach is as follows (sketched in code after the list), but it fails when restoring the trainer from a checkpoint:
- get a trainer instance for the given environment and config
- restore the model (and full trainer state) from the latest checkpoint
- get the policy via trainer.get_policy()
- call policy.compute_actions(observations) to get the actions
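
Roughly this (untested sketch; I'm using PPO and CartPole-v0 as placeholders here, and the checkpoint path is made up — substitute your own algorithm, env, and checkpoint):

```python
import ray
import gym
from ray.rllib.agents import ppo  # in newer RLlib versions: ray.rllib.algorithms.ppo

ray.init()

# Same config as used during training (placeholder values; use your real one).
config = ppo.DEFAULT_CONFIG.copy()
config["num_workers"] = 0  # no rollout workers needed for pure inference

trainer = ppo.PPOTrainer(env="CartPole-v0", config=config)

# Restore the model and full trainer state from the latest checkpoint.
trainer.restore("path/to/checkpoint/checkpoint-100")  # placeholder path

# Query the (default) policy with a batch of observations.
env = gym.make("CartPole-v0")
obs = env.reset()
policy = trainer.get_policy()
actions, _, _ = policy.compute_actions([obs])  # note: expects a batch
```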
Do you know of an alternative solution? IMO, restoring just the model weights should be sufficient, because we no longer care about the training state. All we want to use here is a trained agent…
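
What I have in mind is something like the following (untested sketch, continuing from the snippet above; `policy_weights.pkl` is just a made-up file name): either let the restored trainer act directly, or save only the policy weights and load them into a freshly built trainer, skipping the full checkpoint restore.

```python
import pickle

# Option A: let the restored trainer act directly (single observation).
action = trainer.compute_action(obs)

# Option B: persist only the policy weights, then load them into a new
# trainer/policy built from the same config, without the training state.
with open("policy_weights.pkl", "wb") as f:  # placeholder file name
    pickle.dump(trainer.get_policy().get_weights(), f)

# ...later, after building a fresh trainer with the same config:
with open("policy_weights.pkl", "rb") as f:
    trainer.get_policy().set_weights(pickle.load(f))
```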
Thanks for a hint.
Marco