We are constantly trying to make this competition better for everyone and would really appreciate your feedback.
Feel free to reply to this thread with your suggestions and feedback on making the competition better for you!
- What have been your major pain points so far?
- What would you like to see improved?
In case you missed it, please make sure you set the `external_dataset_used` flag properly in your submission.
Many participants have struggled with the time-out problem.
My team also got frustrated when our inference failed at 100%.
Since some submissions fail at 100% of inference, I suspect the last track is the longest one.
So why not add an additional phase that filters out time-out submissions using the longest track?
Then participants would not have to wait for the entire inference run.
It would also reduce the evaluation system's workload, since it would not have to process all the tracks for submissions that time out.
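The fail-fast idea above could be sketched roughly like this. This is only an illustration with hypothetical names (`separate`, the track dicts, and the time budget are all assumptions, not the actual evaluator):

```python
import time

TIME_LIMIT_S = 240  # hypothetical per-song time budget


def fails_on_longest(separate, tracks):
    """Run the separator on the longest track first; if it blows the
    time budget there, skip evaluating the remaining tracks entirely."""
    longest = max(tracks, key=lambda t: t["duration"])
    start = time.monotonic()
    separate(longest)  # hypothetical separation call
    # True means time-out: no need to run the other tracks at all
    return time.monotonic() - start > TIME_LIMIT_S
```

The point of ordering by duration is that the longest song dominates the runtime, so a time-out surfaces up front instead of after the whole test set has been processed.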
I always appreciate your support.
Thanks. I completely agree with your suggestion.
We will add one longer song to the validation phase itself for fail-fast behavior as soon as possible. It will also provide logs to everyone for debugging purposes.
It would be great if the organizers reproduced the training of the winning models from Leaderboard A at the end of the competition. Otherwise, participants could hide their usage of extra data.
Will this be the longest song in the entire test dataset (28 songs)?
Reminder: Validation-phase songs don’t count toward your leaderboard scores.
We won’t release an additional song from the private set in the validation phase.
But we will include an additional song from MUSDB18 (or similar) whose length is roughly the same as the longest private song.
We have added an extra song, 03:30 min long, to the validation phase.
Your submission runs on this song, BUT the separated sources are not counted toward any of the scores.
Timeouts, if any, should now be visible early on.
Good news! Thank you for your hard work!
Participants are not able to remove their submissions from the leaderboard. It would be great if that became possible. This could be useful for participants who set the `external_dataset_used` flag incorrectly.
Not sure if my thinking on Leaderboard A vs. Leaderboard B is correct, but should models from Leaderboard A supersede models from Leaderboard B?
Hypothetically, say:
- Model 1, SDR = 10.0
- Model 2, SDR = 9.0
- Model 3, SDR = 7.0
- Model 4, SDR = 6.0
Because models 1 and 2 have a higher SDR, do they also automatically “win” Leaderboard B?
Basically, I can see both scenarios make sense:
- Option 1: Leaderboard A is strictly “external_dataset_used=False”; Leaderboard B is strictly “external_dataset_used=True”
- Option 2: Leaderboard A is strictly “external_dataset_used=False”; in Leaderboard B, “external_dataset_used=True” is allowed, but all Leaderboard A models are automatically eligible as well
I think the second option is preferable. The current “Leaderboard B” should be replaced by the current “Overall”.
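The difference between the two options boils down to a filter rule. A minimal sketch of Option 2, with made-up field names (`sdr`, `external_dataset_used`, the sample entries) that are assumptions for illustration only:

```python
def leaderboard_a(submissions):
    # Leaderboard A: strictly no external data allowed
    return sorted(
        (s for s in submissions if not s["external_dataset_used"]),
        key=lambda s: s["sdr"],
        reverse=True,
    )


def leaderboard_b(submissions):
    # Leaderboard B (Option 2): every submission ranks here,
    # whether or not it used external data
    return sorted(submissions, key=lambda s: s["sdr"], reverse=True)


# Hypothetical entries mirroring the example above
subs = [
    {"name": "Model 1", "sdr": 10.0, "external_dataset_used": True},
    {"name": "Model 2", "sdr": 9.0, "external_dataset_used": False},
]
```

Under this rule, every Leaderboard A entry also appears on Leaderboard B, which is exactly the subset relationship discussed below.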
@_lyghter @sevagh Yes, I agree with @_lyghter: the second option should be used, and Leaderboard B should also include systems that did not use external datasets (they are allowed to use extra data but don’t have to).
From experience, systems that are limited to the MUSDB18 train set will not perform as well as systems that are allowed to use more data. Hence, the top systems of Leaderboard A will not appear at the top of Leaderboard B.
By the way, the rules clearly state:
There are two leaderboards – one for systems that were solely trained on the training part of MUSDB18HQ (“Leaderboard A”) and one for systems trained on any data (“Leaderboard B”).
Hence, Leaderboard A is a subset of Leaderboard B.
I hope @shivam hears us )
Hi @sevagh @_lyghter, thanks for bringing this up, and thanks to Stefan for the clarification.
Leaderboard B now includes submissions without external dataset usage as well.
Great stuff. And yes, realistically I’d expect extra data to win, but I’m glad to have the clarification.
If somebody can achieve the absolute best SDR with only MUSDB, that’s extra impressive and deserves double gold medals
Feature request: ability to disband a team if there is only one member.
Unfortunately, the way the current teams feature is designed, we won’t be able to provide this as an option for this challenge.
We have added it as a feature request on our side and should be able to provide it in future challenges.
Will there be an “open-source reveal day” or something similar, presumably after the competition deadline (July 31), when contestants make their code public?
It could be a real party to have 3 months of hidden work come to light.