[ANNOUNCEMENT] Submissions working for Round 2

#1

:steam_locomotive::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car:

Dear Flatlanders,

We have resolved the performance issues that kept submissions from working properly!

:tada::confetti_ball: You can now submit your solutions using the starter kit.:tada::confetti_ball:

Be sure to download the latest version from PyPI by running `pip install -U flatland-rl`.

Have fun with the challenge and feel free to reach out or share your thoughts here in the Forums.

The Flatland Team

:steam_locomotive::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car:

#2

Dear Erik,

Thank you to your team for the hard work. It is great to hear that the trains are starting to depart again.
Just two small questions:

  1. How many trains can there be in one evaluation environment, and how large can the map be in the second round, at maximum?
  2. Is there a run-time limit for the second round like before (the first round was 8 hours, if I remember correctly)?

Best,
Beibei

#3

Hi @beibei

We are happy the train is rolling again :steam_locomotive::railway_car::railway_car::railway_car::railway_car: .

  1. The current parameter set only contains environments with at most 200 agents. We introduced this limit due to some performance issues we were experiencing. If this changes in the future we will let you know and also re-evaluate all previous submissions on the new env parameters.
  2. There is again an upper limit on the run time. It is currently set to 12h, with a time-out if nothing happens on the server for 15 minutes.

Even though performance has increased a lot with the latest fixes, we are still working on further improvements. If we achieve the desired performance, there might be slight updates to the time limit as well as to the number of agents. We will communicate this transparently when something changes.

Best regards,

Erik

#4

Dear Flatland-Team,

is it possible to submit for Round 2 without having participated in Round 1?

Best regards,

Lucy

#5

Hi @lcaubert

YES! :grinning::confetti_ball::tada:

Just use the starter kit linked above and read the official documentation to get started.

Best of luck and have fun

Best regards,

The Flatland Team

#6

Thx for the quick answer! :upside_down_face:

#7

Dear Flatland Team,

Has the timestep limit changed for the environment? Or is it still 1.5 * (H + W)?

Thanks,
Joji

#8

Hi @jasako

Yes, it has changed; we now allow for much more time:

max_time_steps = int(4 * 2 * (env.width + env.height + 20))

You can find this in the run.py file in the starter kit.

Have fun with the challenge :slight_smile:

Best regards,

The Flatland Team

#9
  1. And how large can env.width and env.height be?
  2. Another question, more of a clarification, to make sure I understood things correctly: is it true that once an agent starts moving towards an adjacent cell, it cannot make any other decisions until it reaches that cell, even if reaching it takes longer than 1/speed turns (e.g. because that cell is occupied by other trains)? In my local tests I have seen position_fraction increase beyond 1.0 in some cases (even a value of exactly 1.0 can only occur if the agent cannot enter the new cell as soon as its speed allows). So I am guessing that as long as position_fraction is strictly greater than zero, the agent cannot make any new decisions. Is that correct?
#10

Dear @mugurelionut

Thank you for reaching out, hope this helps clarify the issue:

  1. Envs are currently never larger than (height, width) = (150, 150).
  2. Yes, agents can only make decisions on cell entry. Once they have decided and have moved beyond the entry point, there is no turning back. If they instead choose to stop at the cell entry, they will be allowed to choose an action again.
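As a rough sketch of that rule: the speed_data dict with its position_fraction entry follows the flatland-rl convention mentioned above, but the helper itself is purely illustrative, not part of the library.

```python
# Illustrative helper: an agent may only pick a new action while it sits
# at a cell entry point, i.e. before it has committed to moving into the
# cell (position_fraction == 0). The dict layout mirrors flatland-rl's
# per-agent speed_data, but this function is an assumption, not library API.

def can_choose_action(speed_data):
    """Return True if the agent is at a cell entry and may act again."""
    return speed_data["position_fraction"] == 0.0

# An agent partway through a cell cannot change its mind:
print(can_choose_action({"position_fraction": 0.0}))  # True
print(can_choose_action({"position_fraction": 0.5}))  # False
```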

Best regards,

The Flatland Team

#11

Actually, I find the max_time_steps formula to be slightly incorrect. When I generate local tests with different numbers of agents and different numbers of cities (starting from the example in the repository), I sometimes see the simulation end earlier than expected. After running more such tests, the actual formula appears to be:

max_time_steps = int(4 * 2 * (env.width + env.height + number_of_agents / number_of_cities))

So the last term is only 20 when the ratio of agents to cities is 20. I can't seem to find how to get the number of cities, and I also can't find a function that returns the number of time steps (without being passed the actual agents/cities ratio as an argument).
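For reference, here is a minimal sketch of the formula I arrived at; the function name is my own, and number_of_cities has to come from the generator parameters since it is not exposed by the env:

```python
# Sketch of the corrected time-step limit discussed above. The factor
# 4 * 2 matches the run.py formula; only the last term differs, using
# the agents-to-cities ratio instead of the constant 20.

def max_time_steps(width, height, n_agents, n_cities):
    return int(4 * 2 * (width + height + n_agents / n_cities))

# With 20 agents per city the last term reduces to the documented 20:
print(max_time_steps(150, 150, 200, 10))  # 2560
```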

I would really like to know the maximum number of time steps when making decisions - can you please suggest a way to achieve this?

#12

Hi @mugurelionut

The formula you mentioned is correct. There is a problem when loading from files, as we currently don't store the number of cities in the pickle file; thus it is currently impossible for you to compute the appropriate max_time_steps for a pickle file without the associated generator parameters.
I will open an issue about this on GitLab and address it in the coming days. Sorry for the inconvenience.

Best regards,
Erik

#13

Hi @mlerik,

I just wanted to add some behaviour which is related to this.

If one runs more steps on the environment while some of the agents are in a deadlock, the environment exits as soon as max_steps according to your formula has been reached. Additionally, all entries in done are set to True and the corresponding positive rewards are returned as well. This is unfortunate for training purposes.
Please correct me if I misunderstood something.

Best regards,

Fabian

#14

Hi @fabianpieroth

The agents' done=True is necessary for training to indicate that the episode has terminated, so that your ML approach does not expect a next observation anymore.

The returned reward should reflect the current state of the environment. Thus, if not all agents have reached their target, the reward equals the step reward of each agent. If you need a more negative reward for agents that do not finish their task in time, you can do reward shaping using the env information: env.agent.status will tell you whether or not an agent has finished its individual task.
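As a minimal sketch of such reward shaping, assuming the per-agent reward and done dicts returned by env.step(); the penalty value and the finished lookup are illustrative choices, not part of flatland-rl itself:

```python
# Hypothetical reward-shaping step: add an extra negative reward to
# agents whose episode ended (done=True) without them reaching their
# target. 'agent_finished' stands in for a check on the agent's status;
# the penalty magnitude is an arbitrary example value.

def shape_rewards(rewards, dones, agent_finished, penalty=-100.0):
    shaped = dict(rewards)
    for handle, done in dones.items():
        if handle == "__all__":  # skip the global done flag
            continue
        if done and not agent_finished[handle]:
            shaped[handle] += penalty
    return shaped

# Example: agent 0 reached its target, agent 1 timed out.
rewards = {0: 0.0, 1: -1.0, "__all__": -1.0}
dones = {0: True, 1: True, "__all__": True}
finished = {0: True, 1: False}
print(shape_rewards(rewards, dones, finished))
# {0: 0.0, 1: -101.0, '__all__': -1.0}
```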

Looking at the code, I see that if you continue the environment beyond the time at which it terminated, it will return the positive reward to all agents. This is a bug on our side and we will fix it.

Hope this clarifies your question.

Best regards,
Erik
