[ANNOUNCEMENT] Start Round 2

#1

:steam_locomotive::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car:

Dear Flatlanders :wink:

We are excited to announce that Round 2 is open for submissions!! :slight_smile::confetti_ball::tada::jack_o_lantern:

This new round includes quite a few changes, and some of these features were already announced with the release of Flatland 2.0. The main new features include (a rough configuration sketch follows the list):

  • Agents start outside the environment and have to actively enter to be on time
  • Agents leave the environment when their target is reached
  • Networks are much sparser and fewer paths are possible for each agent
  • Stochastic events stop agents and cause disruptions in the traffic network
  • Agents travel at different speeds
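
To give a rough idea of how these features fit together, here is a minimal configuration sketch. The generator and parameter names (sparse_rail_generator, sparse_schedule_generator, the stochastic_data dict and the speed ratio map) follow the Flatland 2.0/2.1 getting-started examples and are assumptions here; the updated documentation and the Example file are authoritative for the release you install.

```python
# Rough sketch only: parameter names follow the Flatland 2.0/2.1 getting-started
# examples and may differ slightly in the release you have installed.
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import sparse_rail_generator
from flatland.envs.schedule_generators import sparse_schedule_generator

# Stochastic events: how often agents break down and for how long they stop.
stochastic_data = {'prop_malfunction': 0.3,  # fraction of agents that can malfunction
                   'malfunction_rate': 30,   # average number of steps between malfunctions
                   'min_duration': 3,        # shortest breakdown, in steps
                   'max_duration': 20}       # longest breakdown, in steps

# Different travel speeds: fraction of agents running at each fractional speed.
speed_ration_map = {1.0: 0.25,        # fast passenger trains
                    1.0 / 2.0: 0.25,  # fast freight trains
                    1.0 / 3.0: 0.25,  # slow commuter trains
                    1.0 / 4.0: 0.25}  # slow freight trains

env = RailEnv(width=40,
              height=40,
              # Sparse networks: few cities and few parallel rails between them.
              rail_generator=sparse_rail_generator(max_num_cities=3,
                                                   max_rails_between_cities=2,
                                                   max_rails_in_city=3,
                                                   seed=42),
              schedule_generator=sparse_schedule_generator(speed_ration_map),
              number_of_agents=10,
              stochastic_data=stochastic_data,
              obs_builder_object=TreeObsForRailEnv(max_depth=2))

env.reset()  # agents start outside the grid and only appear once they are dispatched
```

The sparse_rail_generator arguments control how sparse the resulting network is, and the speed ratio map determines how many agents travel at each speed.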

The baselines repository has been updated to incorporate these new changes and highlights how training can be implemented.

To make your submissions, head over to the updated starter kit.

There is also a new Example file introducing the concepts of Flatland 2.1.

We are actively working to update all the documentation to reflect the new changes, so keep checking back or reach out to us if anything is unclear.

We wish you all lots of fun with this new challenging round.

The Flatland Team

:steam_locomotive::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car::railway_car:

#2

Hi!

It seems that the Example link is unavailable. Could you please fix that?

#3

The merge was probably not fully done yet. It should work now.

#4

:warning::warning::warning:ATTENTION :warning::warning::warning:

Please don’t forget to update Flatland to the newest version before submitting. Older versions of Flatland will lead to divergences between the client and the server.
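
A quick way to check which version you have installed (a minimal sketch, assuming the library is installed from PyPI as flatland-rl):

```python
# Minimal local check before submitting (requires Python 3.8+); upgrade with
#   pip install -U flatland-rl
# if the printed version is older than the newest release.
from importlib.metadata import version

print(version("flatland-rl"))
```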

Best regards,

The Flatland Team

#5

Update

We are currently working on a few more bug fixes and on performance- and stability-related issues with Flatland. These fixes will be made available as another patch on top of the current latest release, 2.1.6.

Much of the delay in reliably accepting submissions has been due to tuning the complexity of the test environments on which your submissions will be evaluated. Please expect a further update from us soon.

Please be assured that no key changes to the features or the environment interfaces will be introduced at this stage, so you can continue experimenting with the Flatland library on your end before we start accepting submissions again.

Our apologies for not being more communicative about the updates and announcements related to the competition. Rest assured, the whole team is working hard to ensure you all have a great experience taking part in it.

Thanks,
Mohanty
(on behalf of the organizing team)

#6

I finally got a chance to look at the provided example and I have a few questions:

  1. Can we use env.agents in our code in order to get the current agents’ positions, directions and targets (like the example does)? This seems much easier than extracting them from the observations, where they are encoded in some format. (A minimal sketch of how I read this follows the list.)

  2. Do we indeed have access to so much malfunction information (e.g. whether an agent will ever malfunction, and when the next malfunction will occur)? This information is definitely useful and I’d like to use it for making decisions, but I want to make sure we are indeed allowed to use it.

  3. If an agent is already malfunctioning, malfunction_data['next_malfunction'] seems to indicate how many steps after the end of the current malfunction the next malfunction will occur. This is not obvious from its name (I initially expected it to always be relative to the current time step, but that’s not the case). Is this intended?

  4. If an agent is malfunctioning from the start and doesn’t enter the environment (i.e. it remains in the READY_TO_DEPART state), its malfunction duration is not decreased. Given that the agent is penalized for every time step it remains outside the environment (before entering), it seems unexpected not to let its malfunction duration also “expire” while the agent is still outside. Is this intended?
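
For context, here is a minimal sketch of how I am reading this information at the moment, assuming the attributes used in the provided example (env.agents entries with position, direction, target, status and the malfunction_data dict):

```python
# Minimal sketch of how I currently read the per-agent state, assuming the
# attributes used in the provided example: env.agents entries with position,
# direction, target, status and the malfunction_data dict.
def inspect_agents(env):
    for handle, agent in enumerate(env.agents):
        mf = agent.malfunction_data
        print(f"agent {handle}: status={agent.status}, position={agent.position}, "
              f"direction={agent.direction}, target={agent.target}")
        # 'malfunction' is the remaining breakdown duration; 'next_malfunction'
        # seems to count from the end of the current breakdown (question 3 above).
        print(f"  malfunction={mf['malfunction']}, "
              f"next_malfunction={mf['next_malfunction']}")
```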

And thanks for all the work put into preparing Round 2. It indeed looks much more interesting than Round 1.