[Announcement] Start of Round-2

#1

Dear all,

Thanks a lot for participating in stage 1 of the NeurIPS 2019 Disentanglement Challenge. In addition to the already released simple simulations (mpi3d_simple), we now release the data from the public leaderboard of the first stage, i.e., the realistic simulations (mpi3d_realistic).

In stage 2, we will now try to advance the unsupervised learning of disentangled representations to more difficult objects. In stage 2, both the public and private leaderboards will consist of real-world images.

In stage 2, you are allowed to use all the images released in stage 1, and we also encourage you to review the stage 1 reports submitted on OpenReview.

The timelines are as follows:

Sept 2nd, 20:00 CET: Start of Stage 2 of the NeurIPS 2019 Disentanglement Challenge

Oct 1st, 11:59pm AoE: Submission deadline for methods, Stage 2

Oct 7th, 11:59pm AoE: Submission deadline for reports, Stage 2

All the best,

Stefan
For the Organizers

#2

Thank you for the announcement.
Where can we check the final results for stage 1 and its reports?

#3

I want to know the final result of stage 1 too! @vis7i

#4

So I guess this is the final results: https://www.aicrowd.com/challenges/neurips-2019-disentanglement-challenge/leaderboards?challenge_round_id=73

Is the ranking algorithm as discussed: the average of each participant's rank on each metric?
If so, I guess I missed being among the top 3 by one rank :slight_smile:

#5

Dear @all,

Confirming that we are now accepting submissions for Round-2.
The leaderboard for the new submissions is available here

Cheers,
Mohanty

#6

@amirabdi: Yes, the ranking algorithm used is the same one as discussed here

#7

Thank you @mohanty

We select the model with the highest average rank across all metrics on the leaderboard and call it the selected model.

I recall we were asked to nominate one of our submissions as the one we assume to be the best. I wonder which path the committee ended up taking: choosing the best model automatically, or based on the user’s selection.
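For illustration, the rank-averaging selection rule quoted above could be sketched as follows. This is a hypothetical reconstruction, not the official evaluation code: the metric names and scores are made up, and I assume submissions are ranked per metric with rank 1 as best, so the best average rank is the lowest one.

```python
# Hypothetical sketch of rank-averaging model selection (NOT the official
# challenge code). Each submission is ranked on every metric (1 = best),
# and the submission with the lowest average rank is selected.
from statistics import mean

# Illustrative scores: {submission_id: {metric: score}}; higher score = better.
scores = {
    "sub_a": {"mig": 0.30, "dci": 0.55, "sap": 0.10},
    "sub_b": {"mig": 0.25, "dci": 0.60, "sap": 0.12},
    "sub_c": {"mig": 0.35, "dci": 0.50, "sap": 0.08},
}

metrics = ["mig", "dci", "sap"]
ranks = {sub: {} for sub in scores}
for m in metrics:
    # Sort submissions by score on this metric, best first,
    # and assign ranks 1, 2, 3, ...
    ordered = sorted(scores, key=lambda s: scores[s][m], reverse=True)
    for position, sub in enumerate(ordered, start=1):
        ranks[sub][m] = position

# Average rank across all metrics; the minimum is the "selected model".
avg_rank = {sub: mean(ranks[sub][m] for m in metrics) for sub in scores}
selected = min(avg_rank, key=avg_rank.get)
```

In this toy example, `sub_b` wins: it is third on `mig` but first on both `dci` and `sap`, giving it the best average rank even though no single submission tops every metric.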

#8

@amirabdi: The current computation actually chooses the best submission for you automatically. I haven't closely looked at how well these agree with the nominated ones, but if they do not match, your rank would only go down :smiley:

#9

Thanks @mohanty
Please let us know once the reports/papers become available.