We’ve decided to make the scores of all the tasks and the Borda ranks of each submission public, so you can now view every task’s score. Our original motivation for hiding them was to prevent overfitting on the tasks, in the spirit of the challenge being about unsupervised representation learning. But we understand that it’s frustrating for participants to have no feedback on the tasks, or on why one of their submissions does better than another.
P.S.: The scores visible are on the public split of the data (ignore that the names say “private”). The scores on the private split will be available after the competition ends and will be used for selecting the winners (no change here).
P.P.S.: Negative scores are MSE scores; they are negated because the Borda system requires “higher is better”. So you can ignore the negative sign when reading the MSE scores.
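To make the sign convention concrete, here is an illustrative snippet (the raw MSE values are made up, not taken from the leaderboard):

```python
# MSE is a "lower is better" metric, so it is negated before ranking
# to match the Borda system's "higher is better" convention.
mse_scores = [0.12, 0.05, 0.30]          # hypothetical raw MSE values
leaderboard_scores = [-m for m in mse_scores]

# The best (lowest) MSE now has the highest leaderboard score.
assert max(leaderboard_scores) == -min(mse_scores)
```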
For those interested, here’s a short description of how the selection system works (taken from this discussion on the AIcrowd Discord):
The selection system we have works like this:
First, your own submissions are Borda-ranked against each other, and the best among them is selected. That submission is then Borda-ranked against the top submissions from the other teams. This prevents any team from entering multiple submissions to widen their rank gap based on a single task.
As a consequence, you are also competing against your own submissions in the Borda system. A new submission may come out slightly ahead of your earlier submissions in average Borda rank simply because it significantly outperforms them on a few of the hidden tasks.
Note that, in the case above, a submission with a lower average F1 score may end up being selected.
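The two-stage selection above can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions: the submission names, scores, and tie-breaking behavior are invented, not the organizers’ actual implementation.

```python
# Hypothetical sketch of two-stage Borda selection: rank within a team,
# keep the best, then rank the best submissions across teams.

def borda_ranks(per_task_scores):
    """per_task_scores maps submission id -> list of per-task scores
    (higher is better). Returns submission id -> average Borda rank,
    where rank 1 is best on a task and a lower average is better."""
    subs = list(per_task_scores)
    n_tasks = len(next(iter(per_task_scores.values())))
    totals = {s: 0 for s in subs}
    for t in range(n_tasks):
        # Higher score is better on every task (MSE is stored negated).
        ordered = sorted(subs, key=lambda s: per_task_scores[s][t], reverse=True)
        for rank, s in enumerate(ordered, start=1):
            totals[s] += rank
    return {s: totals[s] / n_tasks for s in subs}

# Stage 1: Borda-rank a team's own submissions and keep the best one.
team = {
    "sub_a": [0.90, -0.10, 0.7],   # second entry is a negated MSE score
    "sub_b": [0.80, -0.05, 0.9],
}
ranks = borda_ranks(team)
best = min(ranks, key=ranks.get)

# Stage 2: the team's best entry is Borda-ranked against the other
# teams' best entries to produce the final ordering.
finalists = {
    "team1_best": team[best],
    "team2_best": [0.85, -0.07, 0.8],
}
final = borda_ranks(finalists)
```

Note how this matches the F1 remark above: a submission can win the Borda ranking by being merely adequate on most tasks while strongly outperforming on a couple, even if its plain average score is lower.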