Round 2 - Recap and Extension

Hi Everyone,

First of all, many apologies that it took so long to get back to you. In hindsight, the deadline was likely a bit too close to our 8th OpenSky Symposium, which required a lot of immediate attention. Apologies in particular to @paramuttarwar, who was the only one to come close to the 5000m cutoff on the public score. Unfortunately, the complete score was a bit worse. Still, congratulations on being the best so far!

Here is the full leaderboard with all information:

| Rank | Type | Team Name | Participants | Submission ID | RMSE - 2D Distance (m) | Coverage | Complete Score (m) | Complete Score Coverage |
|------|------|-----------|--------------|---------------|------------------------|----------|--------------------|-------------------------|
| 1 | Team | dataWizard | paramuttarwar | 93687 | 4926.518 | 0.9 | 8400.460335 | 0.900000316 |
| 2 | Participant | - | Chen_YC | 89272 | 17732.394 | 0.945 | 17686.44302 | 0.9452105439 |
| 3 | Participant | - | NJUPT_CHAO | 86637 | 30579.237 | 0.983 | 30069.88235 | 0.9831909273 |
| 4 | Participant | - | uni_leipzig | 93506 | 55701.365 | 0.94 | 54959.87333 | 0.9403016438 |
| 5 | Participant | - | ihskanajna | 93667 | 82854.429 | 1 | 81528.8826 | 1 |
| 6 | Participant | - | dongbiao321 | 83215 | 123867.401 | 1 | 134381.7148 | 1 |
| 7 | Team | ZAViators | benoit_figuet, rmonstein | 88878 | 164726.499 | 0.967 | 130545.3449 | 0.9669458962 |
| 8 | Team | PolymathAB | aditya_bansal | 93583 | 607148.086 | 1 | 575625.4395 | 1 |
| 9 | Participant | - | tobi_pvamu | 93539 | 633118.282 | 1 | 600079.8085 | 1 |
| 10 | Participant | - | academiks | 93377 | 679450.643 | 1 | 640308.9567 | 1 |
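
For reference, here is a minimal, unofficial sketch of how the "RMSE - 2D Distance" and "Coverage" columns can be interpreted: the RMSE is taken over the horizontal distance errors (in metres) of the points a participant chose to predict, and coverage is the fraction of all test points that were predicted. The equirectangular distance approximation and the function names below are illustrative assumptions, not the official evaluator.

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius; the official metric may differ (e.g. WGS84)

def horizontal_distance_m(lat_pred, lon_pred, lat_true, lon_true):
    """Approximate 2D ground distance in metres (equirectangular approximation)."""
    lat_pred, lon_pred = np.radians(lat_pred), np.radians(lon_pred)
    lat_true, lon_true = np.radians(lat_true), np.radians(lon_true)
    x = (lon_pred - lon_true) * np.cos(0.5 * (lat_pred + lat_true))
    y = lat_pred - lat_true
    return EARTH_RADIUS_M * np.hypot(x, y)

def rmse_and_coverage(predictions, truth):
    """predictions: {point_id: (lat, lon)} for the points you chose to predict.
    truth: {point_id: (lat, lon)} for all withheld test points."""
    errors = [horizontal_distance_m(*predictions[k], *truth[k])
              for k in predictions if k in truth]
    rmse = float(np.sqrt(np.mean(np.square(errors)))) if errors else float("nan")
    coverage = len(errors) / len(truth)
    return rmse, coverage
```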

What happens now?

  1. Since nobody has reached the required 5000m limit, we will extend this round! Contestants have another two months, until January 31, 2021, to compete in this round.

  2. The awards will remain the same. In addition, COVID-19 developments permitting an in-person event, the travel grant to our 9th OpenSky Symposium in Brussels (October/November 2021) is available again.

  3. We will reduce the coverage requirement to 70% in order to make it easier for participants to pick the right measurements and reduce the RMSE (see the sketch after this list).

  4. We will release a lot of material that should help interested contestants improve their solutions:
    a) The code of the five winning teams of round 1 is available on our GitHub: https://github.com/openskynetwork/aircraft-localization The code is licensed under the GPL and can be freely adapted for round 2 solutions under our new rules (please reference anything that you use, though; that is just good and decent practice!)

    b) More discussion from one of the winning teams is available in their talk at the OpenSky Symposium: https://www.youtube.com/watch?v=msBtF0Swfn4

    c) Their paper should hopefully be available with the proceedings soon, providing even more detailed insights than the code and video.

    d) We will release a pre-print paper discussing the creation of the datasets, which will hopefully answer any further questions you may have!
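
On the relaxed coverage requirement from point 3: if your model produces some per-point uncertainty estimate (an assumption here, e.g. the spread of an ensemble or a filter covariance, nothing the challenge itself provides), you can submit only the roughly 70% of points you trust most, which typically lowers the RMSE compared to submitting everything. A minimal sketch, assuming such an estimate exists:

```python
import numpy as np

def select_most_confident(point_ids, uncertainties, coverage=0.70):
    """Return the ids of the `coverage` fraction of points with the
    smallest estimated uncertainty (most trusted predictions first)."""
    point_ids = np.asarray(point_ids)
    uncertainties = np.asarray(uncertainties, dtype=float)
    n_keep = int(np.ceil(coverage * len(point_ids)))
    keep = np.argsort(uncertainties)[:n_keep]
    return point_ids[keep]

# Example: keep the 70% of predictions with the lowest estimated error
# submitted_ids = select_most_confident(ids, estimated_sigmas)
```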

As a last note, we have spoken to some participants who felt the data was difficult and somehow artificially corrupted. That is (unfortunately!) not the case: this is real-world data that has to be dealt with in order to track aircraft using crowdsourced communications. That is why we run this challenge, to solve a difficult problem! :slight_smile:


Hi all,

We now have a few more points that should help to improve things!

  1. As promised, the paper write-up from @benoit_figuet et al. for the OpenSky Symposium is available. Open Access link: https://www.mdpi.com/2504-3900/59/1/2

  2. A full discussion of the training and test datasets is given in our new pre-print: https://arxiv.org/abs/2012.00116

  3. This includes yet more training data: https://zenodo.org/record/4302265
    Note: Subset 1 is the test set (so it has the missing records for now); subsets 2-4 have been provided previously, and subsets 5-8 are new! A short loading sketch follows this list.

  4. The requirement change to 70% still needs to be deployed, sorry for the delay.
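
As referenced in point 3, the additional subsets are intended for training. A minimal loading sketch, assuming each subset ships as a CSV file; the file name pattern is a placeholder, so please check the actual names in the Zenodo record:

```python
import pandas as pd

# Combine training subsets 2-8 into one frame.
# NOTE: "subset_{i}.csv" is a placeholder; use the real file names from
# https://zenodo.org/record/4302265. Subset 1 is the test set and must
# not be mixed into the training data.
frames = [pd.read_csv(f"subset_{i}.csv") for i in range(2, 9)]
training = pd.concat(frames, ignore_index=True)
print(f"{len(training)} training rows loaded from {len(frames)} subsets")
```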

Cheers,
Martin

NB: @sconfina is trying his luck purely non-competitively. But since he does not know the test data either, we can see that there is definitely room for improvement for everyone!


Hi @masorx ,

Thank you for the extension of round 2!

However, the challenge page does not seem to be properly configured:

  • Team creation is frozen
  • In your message, it is said that the required coverage is reduced to 70%; however, a recent submission from @sconfina with 79% coverage was rejected with the message “Less than 90% coverage”.

Maybe @shivam and the AIcrowd team can help fix this.

Thank you for your help!

Best regards,
Richard

PS: I am not participating, I am just supervising a team of students.


Thanks, I think I managed to deactivate the team freeze, but for the coverage issue I do indeed need the AIcrowd team to activate the updated evaluator.


Hi @richardalligier,

Thanks for bringing it to our attention.

The reduced coverage requirement is now live as well, and the affected submissions have been re-evaluated against the new requirement.

Cheers,
Shivam
