Can we still submit for round 1?


As mentioned on your challenge website, Round 1 opened on Tuesday, 30 July and closes on Sunday, 13 October 2019, at 12 PM UTC+1.

Can we still submit for round 1 and see the results?

Many thanks.

Hi @yizh9896

This was actually planned, but we ran into technical issues running both rounds at the same time.
We would prefer to keep Round 2 open, as it is the most important round of the challenge.

Is there a particular reason you still want to test your code on Round 1 instances? We are happy to support you and can test your code locally if you wish.

I will also update this information on the landing page.

Don’t hesitate to reach out if you have any further concerns or input.

Best regards,


Unfortunately, I entered the challenge rather late (because I didn’t know about it until recently). At the moment, I have only done a test submission for Round 1, but I would really like to submit to Round 1 once my algorithm is ready. (I was relying on the time left until Sunday for this.)

Another question is: can one still do “well” in the overall challenge if one has no “reasonable” submissions for Round 1?

Many thanks!



Hi @algomia

Yes, we made quite a few enhancements to the behavior of the environment. I think you stand an equal chance of finding a good solution to the problem as the participants who submitted to Round 1. There will also be an update to the baseline in the coming days, introducing some new concepts that can be used to tackle the problem, so stay tuned for an announcement.

Did you try to test your submission locally as explained in the starter-kit?

We are also working on a submission testing script that will let you locally generate environments similar to those used in submission scoring.

Best regards,


Hello @mlerik,

We are using Flatland v 2.0.0

When doing the local evaluation, the program threw an exception saying that the “remote and local reward are diverging”. I have checked our code and confirmed that it does not change anything in the local environment, and that it can solve all the local copies correctly. I couldn’t figure out what caused the exception.

Then I read all the library code, including “” and “” (Flatland v2.0.0). It seems that the agents in the local copy and the remote copy have different start locations and target locations.

The reason is that the env.reset() function (without any parameters passed to it, i.e. using the default values regen_rail=True and replace_agents=True, according to your “”) is called whenever the remote environment passes a local copy to our code.

To check whether this reset() function changes the agents’ start and target locations, I wrote a short experiment as follows:

  1. Create a simple railway environment with complex_rail_generator
  2. Initialize a rendering tool to display the rail environment
  3. In a while loop:
    3.1. Call env.reset()
    3.2. Display the map via the render tool

It turns out that the agents’ start and target locations change whenever env.reset() is called.
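The effect can be reproduced with a minimal stand-in. Note this is not the real Flatland API: `ToyRailEnv` and its fields are invented for illustration, mirroring only the default-parameter behavior of reset() described above (agents re-drawn when replace_agents=True):

```python
import random

# Toy stand-in for an environment whose reset() re-randomizes the scenario.
# NOT the real Flatland API; it only illustrates the reported effect of
# calling reset() with the default replace_agents=True.
class ToyRailEnv:
    def __init__(self, width=10, height=10, n_agents=2, seed=None):
        self.width, self.height, self.n_agents = width, height, n_agents
        self.rng = random.Random(seed)
        self.agents = []

    def _random_cell(self):
        return (self.rng.randrange(self.width), self.rng.randrange(self.height))

    def reset(self, replace_agents=True):
        # With replace_agents=True (the default), every reset draws
        # fresh start and target cells for each agent.
        if replace_agents or not self.agents:
            self.agents = [
                {"start": self._random_cell(), "target": self._random_cell()}
                for _ in range(self.n_agents)
            ]
        return self.agents

env = ToyRailEnv(seed=42)
first = [dict(a) for a in env.reset()]
second = [dict(a) for a in env.reset()]
# first and second will (almost surely) differ: new cells were drawn.
print(first == second)
```

Calling `env.reset(replace_agents=False)` instead leaves the agents untouched, which is the behavior a local copy would need in order to stay in sync with the remote environment.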

Thus, if I am correct, the local copy and the remote copy actually have different agent start and target locations. Our code reads its information from the local copy, and therefore produces actions that are correct for the local copy but not for the remote copy (due to the different start and target locations).
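This would also explain why the mismatch surfaces as a reward divergence rather than an obvious error. A toy score function (again invented for illustration, not the Flatland API: a 1-D walk with a bonus for reaching the target) shows how the same action sequence scores differently in two copies that were initialized with different targets:

```python
def rollout_reward(start, target, actions):
    """Score a toy 1-D walk: -1 per step, +10 bonus if the target is reached."""
    pos, steps = start, 0
    for move in actions:
        if pos == target:
            break
        pos += move
        steps += 1
    return -steps + (10 if pos == target else 0)

actions = [1, 1, 1]  # plan computed against the *local* copy
local_reward = rollout_reward(start=0, target=3, actions=actions)   # reaches its target
remote_reward = rollout_reward(start=0, target=5, actions=actions)  # remote re-rolled the target
print(local_reward, remote_reward)  # 7 -3
```

The plan is optimal for the local scenario but misses in the remote one, so the two reward streams diverge even though the submitted code is correct for everything it can observe.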

Also, in the latest version of the Flatland library, 2.1.3, I noticed that the corresponding env.reset() function (compared to Flatland 2.0.0) had been changed to


So I am wondering whether it was a bug in Flatland 2.0.0 and it got fixed in Flatland 2.1.3.

I made the above assumption only by reading your code, so I might be wrong. If you don’t mind, could you tell me whether I am correct about this? I couldn’t figure out what else could cause the reward-divergence exception when our code finds the correct path and actions for all the local copies of the railway environments.

This is the main reason why I still want to submit to Round 1. Another reason is that we want to test our environment setup for repo2docker and check whether it works.

Many thanks.

Thank you @compscifan2019 for your detailed bug report.

We saw this error with an earlier submission as well, but we have now updated everything. Can you still reproduce this error with the updated Round 2 submission, or did it only fail with the old environments?

Best regards,

Hi @mlerik,

I don’t have an exact answer yet. I will first try the updated Flatland library with the local evaluation, and then try the Round 2 submission. I will let you know once I have the results.

Many thanks.

Hi @mlerik,

I wanted to submit my program to Round 1 last Friday; however, Round 1 was closed earlier than the original close date. As I plan to include my Round 1 rank in my paper, I am wondering when we will be able to submit to Round 1 again?

Many thanks.