Round 2 Finished!

Hi everyone,

Round 2 finished yesterday; the extension sure brought some impressive results on this harder challenge!

I believe the Leaderboard should be frozen as of yesterday, 18:00 UTC, but submissions are still possible for anybody who’d like to test or improve (could you confirm, @shivam?).

We will now proceed as in Round 1: I will contact the top 5 about their code, and we will verify that everything is in order. A quick look at the hidden scores tells me that there are no major discrepancies at this stage, but we will of course take a closer look before posting the final results.

We will then give out the awards again, release the code in our repository with a description, and at some point start on possible publications based on this competition.

I’d be interested in everybody’s post-mortems again: what could we improve further? In particular, if we build a third round, what would be your suggestions for a “live setting” where localization cannot use data from the future?

Best,
Martin

6 Likes

Yes, the leaderboard will remain based on submissions made before the deadline.
NOTE: This is the public leaderboard; one with the private scores will be released.

Everyone can continue to make submissions and improve their scores.
To view the leaderboard including post-challenge submissions, you can enable the corresponding filter on the leaderboard.

3 Likes

Hi everyone,

First of all, I would like to congratulate the winners and to thank the organisers for such an interesting challenge! In my opinion, it went quite smoothly, and 5 submissions per day were just enough. I’m looking forward to seeing the private leaderboard scores, even though Martin mentioned that the positions seem to remain the same. Congratulations to the ck.ua team, who managed to achieve an excellent result below 100 m! The nwpu.i4Sky and ZAViators teams also showed great scores and were very close.

Personally, I was surprised by how many stations can potentially be used after synchronization, although it requires data “from the future”. Before starting to work on my Round 2 solution, I estimated that 70% of the test tracks are achievable with accuracy below 100 m. It was fun to reach about 71% coverage after synchronizing 240 stations, the maximum that the method I developed allowed. In my opinion, 90% should still be reachable, though with much lower accuracy.
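For context, the core of such a synchronization step is usually a pairwise clock-offset estimate between stations that received the same messages. Below is a minimal sketch of that standard idea (not necessarily Sergei’s method); the function name and data layout are assumptions for illustration only.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def relative_clock_offset(t_i, t_j, d_i, d_j):
    """Estimate the clock offset b_i - b_j between two stations.

    For a message emitted at unknown time t and received by stations i, j:
        t_i = t + d_i / C + b_i
        t_j = t + d_j / C + b_j
    Subtracting eliminates the unknown emission time:
        b_i - b_j = (t_i - t_j) - (d_i - d_j) / C
    where d_i, d_j are the known ranges (in metres) from the reported
    aircraft position to each station. The median over many shared
    messages suppresses outliers. Real station clocks drift, so in
    practice one would fit a drift model (e.g. a low-order polynomial)
    rather than a single constant offset.
    """
    t_i, t_j = np.asarray(t_i, float), np.asarray(t_j, float)
    d_i, d_j = np.asarray(d_i, float), np.asarray(d_j, float)
    return np.median((t_i - t_j) - (d_i - d_j) / C)
```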

Regarding the third round, I assume the main challenge would be to predict station calibration into the future. A training dataset may contain tracks, for example, only for the first half hour or hour, while predictions should be made for the other (half-hour?) part. To make sure that participants don’t use data from the future, the organisers could provide a function which generates the points for a given aircraft one by one, in time order. Using such a function could be made mandatory for participants and during solution verification at the end of the competition (a hypothetical sketch follows the questions below). There are only a couple of open questions here:

  • Would it be allowed to update previously predicted points for a given aircraft?
  • How can one avoid using future points from other aircraft when making predictions for a given one?
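To make the idea concrete, here is a hypothetical sketch of such a streaming harness; all names and the record layout are illustrative assumptions, not an official API.

```python
from typing import Any, Callable, Dict, Iterator, List

Measurement = Dict[str, Any]  # e.g. {"timestamp": ..., "station": ..., ...}

def measurement_stream(track: List[Measurement]) -> Iterator[Measurement]:
    """Yield the measurements of one aircraft strictly in time order,
    so a solution can never look at data from the future."""
    for point in sorted(track, key=lambda p: p["timestamp"]):
        yield point

def run_solution(track: List[Measurement],
                 predict: Callable[[List[Measurement]], Any]) -> List[Any]:
    """Verification loop: `predict` only ever sees past measurements."""
    history: List[Measurement] = []
    predictions = []
    for point in measurement_stream(track):
        history.append(point)
        predictions.append(predict(history))  # past data only
    return predictions
```

Merging all aircraft into a single, globally time-ordered stream would also address the second question, since future points of other aircraft would then be just as inaccessible as the aircraft’s own.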

Thanks,
Sergei

3 Likes