🏁 Round 1 has finished, Round 2 is starting soon!

:trophy: Round 1 has finished! Here are the winners of this first round
RL solutions:

  • :1st_place_medal: Team MARMot-Lab-NUS with -0.611
  • :2nd_place_medal: Team JBR_HSE with -0.635
  • :3rd_place_medal: Team BlueSky with -0.852

Other solutions:

  • :1st_place_medal: Team An_Old_Driver with -0.104
  • :2nd_place_medal: Team MasterFlatland with -0.107
  • :3rd_place_medal: Participant Zain with -0.116

Congratulations to all of them! :tada::clap:

The competition is only getting started: anyone can still join the competition (Round 1 was not qualifying), and the prizes will be granted based on the results of Round 2.

:clock7: When will Round 2 start? Can I still submit right now?
We are still hard at work on Round 2, which is expected to start sometime this week. In the meantime, you can keep submitting to Round 1 to try out new ideas.

Now that Round 1 has officially finished, the leaderboard is “frozen”, and the winners listed above will keep their Round 1 positions whatever happens. But you can still see how your new submissions would rank by enabling the “Show post-challenge submissions” filter on the leaderboard:

:infinity: Problem Statement in Round 2
In Round 1, your submissions had to solve a fixed number of environments within an 8-hour time limit.

In Round 2, things are a bit different: your submission will have to solve as many environments as possible in 8 hours. There are enough environments that even the fastest solution couldn’t solve them all in 8 hours (and if that were ever to happen, we’d just generate more).

The environments start very small and grow increasingly larger. The evaluation stops if the percentage of agents reaching their targets drops below 25% (averaged over 10 episodes), or after 8h, whichever comes first. Each solved environment awards you points, and the goal is to get as many points as possible.
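The stopping rule above can be sketched as follows (the function and constant names are illustrative, not the official evaluator’s):

```python
WINDOW = 10            # episodes to average over
MIN_RATE = 0.25        # minimum mean fraction of agents reaching their targets
TIME_LIMIT = 8 * 3600  # overall budget, in seconds

def should_stop(completion_rates, elapsed_seconds):
    """Decide whether evaluation ends, per the rule described above.

    completion_rates: per-episode fraction of agents done, newest last.
    elapsed_seconds: wall-clock time used so far.
    """
    if elapsed_seconds >= TIME_LIMIT:
        return True
    if len(completion_rates) >= WINDOW:
        recent = completion_rates[-WINDOW:]
        if sum(recent) / WINDOW < MIN_RATE:
            return True
    return False
```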

As in Round 1, the environment specifications will be publicly accessible.

This means that the challenge will not only be to find the best solutions possible, but also to find solutions quickly. This is consistent with the business requirements of railway companies: it’s very important for them to be able to re-route trains as fast as possible when a malfunction occurs!

:zap:Optimized Flatland environment
One of the most common frustrations in Round 1 was the speed of the environment.

We have implemented a number of performance improvements. The pip package will be updated soon. You can already try them out by installing Flatland from source (master branch):

pip install git+https://gitlab.aicrowd.com/flatland/flatland.git

The improvements are especially noticeable in smaller environments. For example, here’s the time per episode while training a DQN agent on Test_0, using the pip release 2.2.1 vs. the current master branch:

(using DQN training code from here: https://gitlab.aicrowd.com/flatland/flatland-examples)
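If you want to measure the speed-up yourself, here is a hedged sketch of a timing harness (it assumes a single-return, Gym-style reset/step API; Flatland’s multi-agent API returns per-agent dicts, so the done check would need adapting):

```python
import time

def time_episode(env, policy, max_steps=1000):
    """Roughly measure wall-clock seconds for one episode.

    env: any object with Gym-style reset()/step() methods (assumption).
    policy: callable mapping an observation to an action.
    """
    start = time.perf_counter()
    obs = env.reset()
    for _ in range(max_steps):
        obs, reward, done, info = env.step(policy(obs))
        if done:
            break
    return time.perf_counter() - start
```

Running it before and after `pip install git+https://gitlab.aicrowd.com/flatland/flatland.git` on the same environment gives a direct per-episode comparison.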

:railway_car::railway_car: Train Close Following
As some of you noticed during Round 1, the current version of Flatland makes it hard to move trains close to one another. You usually need to keep an empty cell between two trains, or to take their IDs into account to make sure they can follow each other closely.

This limitation has been lifted. The new motion system is also available in the master branch. See here for a detailed explanation of what it means, how it can help you, and how it was implemented: https://discourse.aicrowd.com/t/train-close-following
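A minimal sketch of what changes, on a straight 1-D track (the function and its motion rules are illustrative, not Flatland’s actual implementation): with close following, a train may enter a cell its leader vacates in the same step; under the old rule, the target cell must already have been empty before anyone moved.

```python
def step_trains(positions, close_following):
    """Advance each train one cell to the right if its target cell is free.

    positions: list of distinct cell indices, one per train (larger = ahead).
    Returns the new positions, same order as the input.
    """
    # resolve trains front-to-back, so followers see their leader's move
    order = sorted(range(len(positions)), key=lambda i: -positions[i])
    old_occupied = set(positions)
    new_positions = list(positions)
    for i in order:
        target = positions[i] + 1
        if close_following:
            # blocked only if someone still occupies the cell after this step
            blocked = target in new_positions
        else:
            # old rule: the cell must have been empty before the step
            blocked = target in old_occupied
        if not blocked:
            new_positions[i] = target
    return new_positions
```

With close following, a convoy packed nose-to-tail all advances together each step; under the old rule only the lead train moves, and a one-cell gap propagates backwards.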

:drum: More coming soon…


Is the evaluation portal down until Round 2 starts? I submitted a solution today that succeeded in the image-building step, but the evaluation failed with the following error:

2020-09-01T08:16:14.853289803Z Collecting git+http://gitlab.aicrowd.com/flatland/flatland.git@325_eval_step_timeout
2020-09-01T08:16:14.854575078Z   Cloning http://gitlab.aicrowd.com/flatland/flatland.git (to revision 325_eval_step_timeout) to /tmp/pip-req-build-f4qpaosv
2020-09-01T08:16:14.854850024Z   Running command git clone -q http://gitlab.aicrowd.com/flatland/flatland.git /tmp/pip-req-build-f4qpaosv
2020-09-01T08:16:14.859971742Z ERROR: Error [Errno 2] No such file or directory: 'git': 'git' while executing command git clone -q http://gitlab.aicrowd.com/flatland/flatland.git /tmp/pip-req-build-f4qpaosv
2020-09-01T08:16:14.860988008Z ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH?
2020-09-01T08:16:15.280559332Z Traceback (most recent call last):
2020-09-01T08:16:15.280606385Z   File "./run.py", line 1, in <module>
2020-09-01T08:16:15.280615333Z     import numpy as np
2020-09-01T08:16:15.280621989Z ModuleNotFoundError: No module named 'numpy'

Hi @MasterScrat, any plans to start Round 2?

Hey @harshadkhadilkar, no, submissions are still open; I gave more details in the issue!

Sorry for the delay, setting up Round 2 and deciding on the right parameters for the new evaluation format was more challenging than we had anticipated!

We are almost there, and plan to launch Round 2 this week :date:


Thanks @MasterScrat, looking forward to Round 2.

Can you please share the environment specifications of Round 2, so that we can start to think about some possible directions?

1 Like

In case some teams can solve all the environments in 8 hours, is there a deadline after which the environments won’t change in Round 2?
I think it would be helpful to keep the env unchanged for the last 3+ weeks, so that we have time to fine-tune our algorithms instead of searching for different directions…

Here they are! https://flatland.aicrowd.com/getting-started/environment-configurations.html

1 Like

The new test environments are now available for download from the Resource section!

The file test-neurips2020-round2-v0.tar.gz contains two environments per test for the first 41 Tests of Round 2.

1 Like

I doubt this will be a problem, as the 8-hour time limit takes into account not only the time the agent takes to select its actions, but also the environment stepping time. So I think we will quickly reach a point where even a perfect agent that acts instantly would not have enough time to go through all the environments, due to the stepping time alone.
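A back-of-the-envelope illustration of that argument, with made-up numbers (these are not official figures; the step time and episode length are pure assumptions):

```python
TIME_LIMIT_S = 8 * 3600    # the 8-hour evaluation budget, in seconds

step_time_s = 0.005        # assumed mean env.step() wall-clock time
steps_per_episode = 2000   # assumed mean episode length

# upper bound on episodes even for an agent that acts instantly:
# all 8 hours go to environment stepping alone
episodes = TIME_LIMIT_S / (step_time_s * steps_per_episode)
```

Under these assumptions the stepping time alone caps the run at 2880 episodes, before a single second of agent compute.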

But this is a good point, we should check that we have generated enough environments a few weeks before the deadline so we don’t have to add to them anymore!

FYI, 43 tests of 10 environments each are currently available during evaluation.

1 Like

Yes, I feel the current number of agents is large enough…

It seems that generating large envs is very slow, which may be a problem for offline RL training on large envs…

How can we use the latest Flatland environment: from the master branch, or from a pip release 2.x.x?
(Round 1 was using flatland-rl==2.2.1)

You should use the master version right now; a new pip release is coming soon:

pip install git+https://gitlab.aicrowd.com/flatland/flatland.git

1 Like

I have a question about how the env parameters are calculated in Round 2:

Are the formulas for the calculation only valid after the first 40 envs? Otherwise, I don’t get the same results as the listed envs:

n_agents_n_plus_1 = n_agents_n + ceiling(10^len(n_agents_n) - 1) * 0.75

  1. What’s the point of the ceiling if we get floats with * 0.75 again?
  2. If we solve for the second env: 1 + ceil(10^1 - 1) * 0.75 = 8.5 (assuming len(n_agents) refers to the index of the env; but if it is the previous number of agents, the number is still not 2)

Thanks for your help :slight_smile:
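To make the mismatch concrete, here is the posted recurrence evaluated literally (a sketch, assuming len(n_agents_n) means the number of decimal digits of the previous agent count — one plausible reading):

```python
import math

def next_agents(n):
    """Evaluate the posted recurrence literally for the next env's
    agent count, reading len(n_agents_n) as the digit count of n."""
    digits = len(str(n))
    return n + math.ceil(10 ** digits - 1) * 0.75

# starting from 1 agent: 1 + ceil(10^1 - 1) * 0.75 = 1 + 9 * 0.75 = 7.75,
# which matches neither the listed second env nor either reading above
```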


Hey @wullli, we generated those from a Google Sheet, here are the original formulas:

It’s possible that we messed up the Google Sheet-to-LaTeX conversion; we’ll check it out.

Thanks, I just wanted to avoid typing out all the parameters and tried to calculate them myself, but with the spreadsheet I should be set!