Working on the examples given (flatland-examples)

I’ve tried the example models given. I managed to get them to work easily.

My issue is how to get them ready for submission. I am trying to get the multi-agent reinforcement learning one ready.
So far I copied the environment.yml file and added torch and tensorboard, since I had to install those for the ML part. How do I test to make sure the project runs correctly?

I know I need the run.sh file. Do I need to create a run.py, or should I point run.sh to one of the existing files?

I'm absolutely not sure about the correct setup for this.

Hi @AntiSquid,

You need to point to the correct file in run.sh, and the entrypoint for all submissions remains /home/aicrowd/run.sh.

We will make this clearer in the starter kit if this information isn’t clear right now.

This is what I am not sure about. Which file exactly would I point to in run.sh for the ML multi-agent example?

(I confused this question with the Procgen competition instead of the Flatland competition, sorry for the wrong answer earlier.)

The experiment file you want to submit, i.e. here:

https://github.com/AIcrowd/neurips2020-procgen-starter-kit/blob/master/run.sh#L8

Edit: I see, the variable name choice probably isn’t the best one here, and could have caused confusion.

Hey, the best way is to start from the start kit repo: https://gitlab.aicrowd.com/flatland/neurips2020-flatland-starter-kit

Follow the getting started to see how to submit: https://flatland.aicrowd.com/getting-started/first-submission.html

Then integrate your own solution by copying over the code from flatland-examples.

You’ll have to:

  • Add any dependency you need to the environment.yml file (torch…).
  • Load the trained agent for your solution. In this competition, you submit pre-trained agents; no training happens on the evaluation side.
  • Use your own agent in the run.py file instead of the random my_controller one used by default. Basically, call your model on the obs instead of calling randint here (see the sketch below).

You generally don’t have to touch the run.sh file if you write your solution in Python.
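For instance, here is a minimal sketch of what the new my_controller could look like, assuming policy is your pre-trained agent loaded near the top of run.py, and preprocess is just a placeholder for whatever observation processing your model needs (e.g. normalize_observation for tree observations):

def my_controller(obs, number_of_agents):
    _action = {}
    for _idx in range(number_of_agents):
        if obs[_idx] is not None:
            # instead of randint: ask the trained policy for this agent's action
            _action[_idx] = policy.act(preprocess(obs[_idx]), eps=0.0)
    return _action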


I am working with multi_agent_training.py from flatland-examples.
Any pointers on how to load the trained agent?
I assume I need to load the checkpoint file (.pth), however I couldn’t find any code in the example that loads from a checkpoint.

Thank you in advance

Indeed you need to load the .pth file corresponding to the checkpoint you want to use.

You can see an example of loading a checkpoint here: https://gitlab.aicrowd.com/flatland/flatland-examples/blob/master/reinforcement_learning/evaluate_agent.py#L28

import torch
from argparse import Namespace
from reinforcement_learning.dddqn_policy import DDDQNPolicy

# evaluation is faster on CPU, except if you have huge networks
parameters = {
    'use_gpu': False
}

# state_size / action_size must match the network you trained,
# checkpoint is the path to your .pth file
policy = DDDQNPolicy(state_size, action_size, Namespace(**parameters), evaluation_mode=True)
policy.qnetwork_local = torch.load(checkpoint)

Then you can do policy.act(observation, eps=0.0) to get the action from your policy!


Thank you very much for the reply! 🙂

Hello MasterScrat,

I am a bit confused by the way DDDQNPolicy accepts a Namespace:

policy = DDDQNPolicy(state_size, action_size, Namespace(**parameters), evaluation_mode=True)

The Namespace is formed from this:

env_params_dict = {
    # sample configuration
    "n_agents": 5,
    "x_dim": 35,
    "y_dim": 35,
    "n_cities": 4,
    "max_rails_between_cities": 2,
    "max_rails_in_city": 3,
    "seed": 42,
    "observation_tree_depth": 2,
    "observation_radius": 10,
    "observation_max_path_depth": 30
}

env_params = Namespace(**env_params_dict)

I am confused about two things.
1 - While evaluating with run.py, the environment parameters will change, from what I understand, according to the environment created for the agent to be evaluated in. How should I approach this to be able to test the example multi-agent?

2 - When I run run.py with Redis as a local test, I get the following error:

File "run.py", line 162, in <module>
    action = my_controller(observation, number_of_agents)
File "run.py", line 68, in my_controller
    _action[_idx] = policy.act(observation, eps=0.0)
File "/mnt/c/Users/nvda/Desktop/AI_2/NeurIPS_flatland/neurips2020-flatland-starter-kit/reinforcement_learning/dddqn_policy.py", line 56, in act
    state = torch.from_numpy(state).float().unsqueeze(0).to(self.device)
TypeError: expected np.ndarray (got dict)

If you can give me any pointers about how to proceed, it would be of great help.
thank you

PS:
my controller is set up like this:

def my_controller(obs, number_of_agents):
    _action = {}
    print(_action)
    for _idx in range(number_of_agents):
        _action[_idx] = policy.act(observation, eps=0.0)
        print(_action)
    return _action

After having thought about it, I can ask the question a different way.

How can I get the state size, and also the dimensions of the map etc., from remote_client?

From what I can tell, neither the observation nor the info contains them.

Maybe I am missing something…

Any help is appreciated


During evaluation, you can use remote_client.env, which behaves like a normal environment, so you can access its width or height attributes as usual.
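A minimal sketch, assuming you have already called remote_client.env_create(...) in run.py (the attribute names are the usual RailEnv ones):

# during evaluation, remote_client.env behaves like a local RailEnv
env = remote_client.env
width, height = env.width, env.height
n_agents = env.get_num_agents()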

I am not sure what you mean by state size?

While evaluating with run.py, the environment parameters will change, from what I understand, according to the environment created for the agent to be evaluated in. How should I approach this to be able to test the example multi-agent?

In general, you would proceed in two steps:

  • First, you train your agent locally. For this you can use multi_agent_training.py, but it’s just an example; you can implement your own training method.

  • Second, you submit your agent. In this challenge, no training happens during submission. Your agent needs to be fully pre-trained when you submit it (as opposed to e.g. the ProcGen challenge).

If you use multi_agent_training.py, then you don’t have to worry about the dimensions of the evaluation environment, because it uses tree observations. The good thing with tree observations is that they are always the same size, no matter the size of the environment, so you can just use a neural network with a fixed input size and it’ll work in all situations!
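Concretely, this is roughly how you can compute the state size from the tree observation alone (a sketch based on flatland-examples; observation_dim is the number of features per tree node exposed by TreeObsForRailEnv, and the depth values should be whatever you used during training):

from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv

observation_tree_depth = 2       # must match the depth used during training
observation_max_path_depth = 30

# the tree observation builder you pass to remote_client.env_create(...)
tree_observation = TreeObsForRailEnv(
    max_depth=observation_tree_depth,
    predictor=ShortestPathPredictorForRailEnv(observation_max_path_depth)
)

# the flattened state size depends only on the tree depth and the features per node,
# never on the map dimensions
n_features_per_node = tree_observation.observation_dim
n_nodes = sum(4 ** i for i in range(observation_tree_depth + 1))
state_size = n_features_per_node * n_nodes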

When I run run.py with Redis as a local test, I get the following error

It looks like you are giving the policy the observations from all the agents at once, when it expects the observation of a single agent.
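A minimal sketch of the fix, assuming normalize_observation is copied over from flatland-examples (utils/observation_utils.py) and using the same tree depth and radius as during training:

def my_controller(obs, number_of_agents):
    _action = {}
    for _idx in range(number_of_agents):
        if obs[_idx] is not None:
            # flatten this agent's tree observation into the fixed-size vector the network expects
            agent_obs = normalize_observation(obs[_idx], tree_depth=2, observation_radius=10)
            _action[_idx] = policy.act(agent_obs, eps=0.0)
    return _action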
