👋 Welcome to the ADDI Alzheimer's Detection Challenge!

Dear Participants,

We warmly welcome you to the ADDI Alzheimer's Detection Challenge, organized by ADDI.

:memo: Click here to check out the Community Contribution Prize

:raised_hand: Check out this thread if you are looking for teammates

All the best!
Team AIcrowd


Hi @vrv, can you please check the maximum submissions per day? According to the rules it is 10, but we can't make more than 5 submissions.


Hi @siddharth_singh8,

Sorry for the inconvenience. You can now submit 10 submissions/day.



How are submissions sorted on the leaderboard? I think they are sorted only by log-loss score. But right now (5/8 13:35 UTC), the #1 public log-loss score is 0.60837 and #2 is 0.60834. These submissions tie when rounded to 3 decimal places, yet #2 was submitted earlier than #1. Is this the expected behavior of the leaderboard?

In addition, do we need to select a final submission to be evaluated on the private leaderboard, or will all submissions be included?

I'm sorry if this is not the appropriate place to ask questions to the competition admins.


Hi @no_name_no_data,

As per the rules of the competition,

The Submission entry will be evaluated against the applicable ADDI Environment using multi-class log-loss, rounded to the third decimal place. The lowest logloss will be the best score. If two or more participating entries have the same log-loss score, the tie will be broken in favor of the Submission that was submitted first.

You do not need to select your final submission. All submissions made are also evaluated on the private dataset. For the final leaderboard, the best score (on 100% of the test set) among all your submissions will be considered.
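To illustrate that ranking rule, here is a small sketch of the sort it describes. The team names, raw scores, and timestamps below are hypothetical, not taken from the actual leaderboard:

```python
from datetime import datetime

# Hypothetical submissions: (team, raw log-loss, submission time).
submissions = [
    ("team_a", 0.60834, datetime(2021, 5, 8, 12, 0)),
    ("team_b", 0.60837, datetime(2021, 5, 8, 9, 0)),
    ("team_c", 0.61234, datetime(2021, 5, 8, 10, 0)),
]

# Per the rules: rank by log-loss rounded to the third decimal place
# (lower is better); ties are broken in favor of the earlier submission.
leaderboard = sorted(submissions, key=lambda s: (round(s[1], 3), s[2]))

for rank, (team, score, _) in enumerate(leaderboard, start=1):
    print(rank, team, f"{score:.5f}")
```

Under this rule, team_b outranks team_a despite a slightly worse raw score, because both round to 0.608 and team_b submitted first.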

Thanks for the clarification.

Just a side note to avoid any confusion:

Is the current public Leaderboard not reflecting the final methodology?


To the third decimal, @demarsylvain and I are tied, but mine was submitted a couple of hours earlier. :timer_clock: :checkered_flag:

If that were the final LB, I'd expect to be ranked ahead?


Thank you for your reply.
It seems the leaderboard has changed, but the order may still not be correct. (For example, there are 3 teams tied at rank #3, and the team that has the second-highest score and submitted the earliest is placed at the bottom among them.)
Is it still being fixed?

Is it really OK to consider all submissions? If so, I think a solution that just happens to overfit the test dataset is more likely to be selected as the best.


@ashivani, could you please clarify the metric for the final leaderboard?
Will it be log-loss as is (without any rounding), or log-loss rounded to the 3rd decimal (as we see on the public leaderboard)?
In my opinion, it would be really unfair to rank submissions by rounded log-loss at the current level of competition (at the moment the difference from top 4 to top 1 is about 0.001).
If, for example, one person gets a final score of 0.6065x, he can lose to a person with 0.6074x. That would be a "little bit" frustrating.
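To make the concern concrete, the two raw scores below are hypothetical stand-ins for the 0.6065x / 0.6074x example above; rounding to three decimals collapses a real difference between them:

```python
# Hypothetical raw log-losses: "better" is genuinely lower, but both
# round to the same 3-decimal score, so the tie-break (submission time)
# decides the ranking instead of the metric itself.
better = 0.60651
worse = 0.60749

print(round(better, 3), round(worse, 3))  # both become 0.607
print(better < worse)                     # the raw metric still separates them
```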


change your username to "bordeleau_michael" … :wink: :stuck_out_tongue:

I agree with @konstantin_nikolayev. I think it is better to evaluate submissions by the more precise log-loss score instead of the rounded one.
In this competition, improving the log-loss by 0.000x (a gain that may disappear when rounding to the 3rd decimal) requires real effort.
If scores are rounded, the difference between the better solutions and the others is more likely to come down to when they were submitted rather than what they are.
Though a 0.000x improvement may not matter much for practical use, I think taking that difference into consideration would make this competition fairer.


@ashivani, one more thing from me.
Could you please confirm that probing the LB and using the true answers in submissions is prohibited, and that such submissions will be disqualified from the final leaderboard?
This question arose from the picture below (see the screenshot).
This participant's behaviour looks like the situation I described above.


While in theory you can probe the leaderboard and seek out the true answers, unless I'm missing something, this can't carry over to the final leaderboard. :thinking:

Let's say I find out the true answers for records 1 to 25 on the public leaderboard. That boosts my public leaderboard position.
But there's no way for me to tell, in the final evaluation, which record is which in order to feed in the correct answers. No?

Also, I'd be careful before calling out other participants like this. There is legitimate trial and error that you can do. But if you are right, the staff will notice from the structure of his code.

Anyways, just my opinion. :slight_smile:
Trust your work, and keep in mind overfitters will fail the final leaderboard. :x:

@michael_bordeleau The final test score will be computed on the complete test dataset, so probing the public LB will give you little advantage.