Thank you so much for your participation in round 1 of our challenge! We are very happy with the way it has been going and are excited to see the solutions the participants came up with. After this analysis, we will also announce the final winners of the awards.
This process will take a little bit of time, so expect round 2 to start a bit later. We will also aim to incorporate lessons learned as well as feedback from the participants. But we will give ample notice again to make sure we have an exciting competition with the harder problem of unsynchronized receivers!
As feedback, I think the limit of 100 submissions a day was way too high. On the last day of the competition, I was basically fine-tuning my algorithms using the online test set as a validation set. This might also be the case for the other top-3 participants, who also made a lot of submissions during the last day.
This is not satisfactory:
- for the organizers, the fine-tuned algorithms might overfit the online test set;
- for the participants, it incentivizes them to perform a boring task (fine-tuning using the online test set as a validation set) until the very end of the competition. With a limit of only 5 submissions, I would have made my 5 submissions and done something else with my day. But here, the possibility of submitting as many files as I wanted pushed me to use it fully so I would have no regrets.
For the next round, I think a hidden test set would be best; if that is not possible, a limit of 5 submissions per day would be good.
This is just my humble opinion; this was my first competition and maybe other participants feel differently about it.
PS: That being said, I enjoyed participating in this competition and I would like to thank everyone who organized and took part in it. The final was tense and exciting, and I learned a lot about multilateration!
Congratulations to the winners!
I feel bad that I missed out on this. It seems like an excellent opportunity to learn more about localization algorithms. I am assuming the winning solutions will be shared after the end of round 2. Really looking forward to learning from them.
First of all I would like to say thank you to all organisers who prepared the dataset and the competition. I have never worked with ADS-B data before, so it was a unique opportunity to see the real data and to test different approaches. In addition, it was very exciting each time to achieve new records in accuracy - with different models from 11 kilometers to 33 meters!
As feedback, I agree with Richard that it might be better for everyone if the test set were hidden. In that case, the final results on a hidden test set would better reflect the predictive power of the models. Also, I think 3-5 submissions per day should be enough, so that participants focus on developing new models and features rather than on fine-tuning existing, suboptimal ones.
Overall, it was very exciting to participate. I’m looking forward to seeing the other winners’ solutions!
This was a really nice competition in a new field for our ck.ua team. It is always interesting to see tasks that have a direct application and to build a solution that can help somebody.
I would like to thank the organizers, congratulate the winners, and congratulate all competitors on an interesting experience.
We totally agree that the main mistakes in organizing this competition were the 100 submissions per day and the open final leaderboard. You can easily reduce submissions to 5-10 per day, but you should exclude failed submissions from the count. And, if possible, you could show results for only part of the test aircraft during the competition, and reveal the results on the rest of the test aircraft only after it finishes. Just check how public/private leaderboards work on Kaggle - that is a really nice solution.
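The public/private split described above can be sketched in a few lines. This is a minimal illustration, not the actual mechanism of this competition or of Kaggle; the function name, fraction, and seed are all assumptions for the example.

```python
import random

def split_leaderboard(aircraft_ids, public_frac=0.3, seed=42):
    """Hypothetical split of the test aircraft into a 'public' subset
    (scored live on the leaderboard during the round) and a 'private'
    subset (scored only after the round ends, to deter overfitting)."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    ids = sorted(aircraft_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * public_frac)  # e.g. 30% of aircraft scored publicly
    return set(ids[:cut]), set(ids[cut:])

# Example: 10 test aircraft, 3 scored publicly during the round.
public, private = split_leaderboard(range(10))
```

Because participants never see scores on the private subset until the end, fine-tuning against the live leaderboard stops paying off.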
As for the submission count: in the last hour of the competition, I could already picture one of the slides in our future presentation about it, titled "How we lost the 'overfitting challenge'".
By the way, my separate congratulations to ZAviators (@benoit_figuet and @rmonstein ) on the impressive finish. I am looking forward to hearing a cool story from you about that last day of the competition.
Calling it an “overfitting challenge” is a bit strong. With a total of six submissions one week ago, I was already at 25.635. So I do not believe that the whole challenge was about overfitting.
That being said, if you use it to describe the last competition hours, I totally agree with you.
Congratulations @richardalligier, it was impressive to see how quickly you managed to get a score below 40 m, and I think it is a well-deserved victory. Congrats to the ck.ua team as well; it was quite an intense finish and I almost feel sorry for taking the second spot by fiddling with some parameters 50 minutes before the end of the round. And finally, congratulations to Sergei, who came very close to the podium with a very good score.
I was familiar with ADS-B data but not with localization algorithms, and this challenge was a very good way to get a foothold in them. Thanks to the OpenSky Network and CYD Campus for organizing; I am looking forward to seeing what solutions other competitors came up with, and maybe to meeting you in person during the OpenSky Symposium.
Concerning the general feedback, I agree that having a hidden test set and a limited number of daily submissions would help.
Very good points so far, which we will definitely take on board going forward!
Congratulations to all winners of Round 1. Thanks to all organizers for the interesting competition. We have learned a lot about multilateration through it. Honestly, without the first submission of @richardalligier we would not have known the RMSE could go down to 40 m, so thank you for that as well. It is a pity that we found out about the competition quite late. We are really looking forward to Round 2 and also to the solutions from the winners after Round 2 ends.
We totally agree with everyone's suggestions: there should be a hidden test set and limited submissions. However, whether the limit of 5 submissions is per team or per member should be considered, since teams with more members could otherwise have an advantage over teams with only one member.
Thanks and best,
@masorx, could you please let us know how we should provide our source code for verification?
I’ve sent a message using the newsletter function of the competition on Saturday but this may not have worked.
For the time being, as people are reading this here, please contact me at email@example.com with your email addresses.
Dear Martin (@masorx ),
Sorry for messaging here, but I am not sure whether my emails reached you.
Could you please confirm that you received the source code?
I have received them all just now and will begin with the evaluation ASAP! They look super exciting!
I have given you access to the private repository (via GitHub, to your username @masorx ), which contains all the related material. Let me know if there is any issue.