Using a Dopamine-trained model

Apologies for the daft question, but is there any help on using a model once it has been trained with Dopamine?
I’ve read gazillions of tutorials on training; most don’t mention how to actually use the trained model afterwards.
I can successfully load the graph, but I’m tangled up in creating the agent.
Should I be instantiating a Runner object? Its docs only refer to training …
A simple scenario to train a DQN agent is as follows:

```python
from dopamine.agents.dqn import dqn_agent
from dopamine.discrete_domains import atari_lib
from dopamine.discrete_domains.run_experiment import Runner

base_dir = '/tmp/simple_example'

def create_agent(sess, environment):
  return dqn_agent.DQNAgent(sess, num_actions=environment.action_space.n)

runner = Runner(base_dir, create_agent, atari_lib.create_atari_environment)
runner.run_experiment()
```

Feel like I’m missing out on the fun part :confused:

Dopamine has an evaluation mode. To run eval mode, set the training steps to 1 and set the evaluation steps to something reasonable. There is no explicit functionality that exports the trained model for inference.
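
If it helps, here is roughly what that looks like in code. This is a sketch, not tested as posted: the gin parameter names (`Runner.training_steps`, `Runner.evaluation_steps`) are the ones used in the stock `dqn.gin`, and the gin file path and `base_dir` are placeholders for your own setup.

```python
# Sketch: resume from the checkpoints written during training, but make each
# remaining iteration almost pure evaluation. Paths below are placeholders.
from dopamine.discrete_domains import run_experiment

base_dir = '/tmp/simple_example'  # the base_dir you trained with
gin_files = ['dopamine/agents/dqn/configs/dqn.gin']
gin_bindings = [
    'Runner.training_steps = 1',         # effectively no further training
    'Runner.evaluation_steps = 125000',  # "something reasonable"
]

run_experiment.load_gin_configs(gin_files, gin_bindings)
runner = run_experiment.create_runner(base_dir)  # resumes from base_dir/checkpoints
runner.run_experiment()
```

Note that the Runner only runs iterations it hasn’t completed yet, so if training already finished all of `Runner.num_iterations`, you’d also need to raise that value with another gin binding.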

That’s great, many thanks, really appreciate it. Personally, finding these cryptic clues and getting things to run on Windows has been the biggest challenge…

I really dislike Dopamine, Baselines and Spinning Up. These implementations are way too large by now and severely lack features such as exporting and loading models for inference. It smells like there was no Software Engineer involved at all.

RL-Adventure-2 is really nice, because it is much more accessible.


How do you load the model? I’m not finding any references on how to do that.

To clarify: does one just move the saved checkpoints into the base_dir and run the same command as for training, except with the training steps set to 1?

You need to tailor some of the Dopamine files to manually perform inference; at least, that’s how I went about it. Try reading through some of the code in the discrete_domains folder as well as agents. The library is very well documented, so reading through the files will give you a very good idea.
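
For reference, something along these lines worked for me as a starting point. Treat it as a sketch rather than a recipe: it assumes the TF1-era layout of Dopamine (`dqn_agent`, `atari_lib` and `checkpointer` modules), and the game name and directories are placeholders for your own run.

```python
# Sketch: manually restore a trained DQN agent from its checkpoint and run one
# greedy episode. Paths and game name are placeholders.
import tensorflow as tf
from dopamine.agents.dqn import dqn_agent
from dopamine.discrete_domains import atari_lib, checkpointer

base_dir = '/tmp/simple_example'
ckpt_dir = base_dir + '/checkpoints'  # holds the TF weights and replay buffer files

env = atari_lib.create_atari_environment(game_name='Pong')
sess = tf.compat.v1.Session()  # plain tf.Session() on older TF1 installs
agent = dqn_agent.DQNAgent(sess, num_actions=env.action_space.n)
sess.run(tf.compat.v1.global_variables_initializer())

# Restore the latest training checkpoint into the agent.
iteration = checkpointer.get_latest_checkpoint_number(ckpt_dir)
bundle = checkpointer.Checkpointer(ckpt_dir).load_checkpoint(iteration)
assert agent.unbundle(ckpt_dir, iteration, bundle)

# Run one episode with the evaluation (near-greedy) policy.
agent.eval_mode = True
observation = env.reset()
action = agent.begin_episode(observation)
total_reward, done = 0.0, False
while not done:
  observation, reward, done, _ = env.step(action)
  total_reward += reward
  if not done:
    action = agent.step(reward, observation)
agent.end_episode(reward)
print('Episode reward:', total_reward)
```

The key bits are `agent.unbundle(...)`, which restores the TF variables and the agent’s bookkeeping from the checkpoint, and `agent.eval_mode = True`, which switches the agent to its evaluation epsilon and stops it from storing transitions or training on them.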
