🚨 Important Updates for Round 2

Hello Participants!

We’re gearing up for Round 2 and have some exciting updates to share:

1. Code Submissions: We are now accepting code submissions for Round 2. Your model needs to produce a prediction for each image within 1 second. For compute, you will be provided a virtual machine with 2 CPU cores and 12 GB of RAM (see the timing sketch after this list).

2. Dataset Enhancements:

  • The test dataset from Round 1 has been integrated into the training set for Round 2, aiding in the further refinement of your models.
  • The test set for the second-round evaluation has been carefully curated and does not contain any images with multiple mosquitoes.
  • The bounding boxes have undergone careful revision.
  • The evaluation criteria remain consistent. We’ll be evaluating models based on the F1 score with a 0.75 IoU threshold, as initially defined.
  • Despite rigorous quality checks, the training set might still include approximately 3-4% of images with multiple mosquitoes or suboptimal bounding boxes. If participants find such images detrimental to their training, they are welcome to exclude them.
  • We remain committed to enhancing the dataset’s quality and will release updated versions if necessary.

3. Evaluation Criteria: We would like to reiterate that the evaluation metric remains the F1 score at a 0.75 IoU threshold, as stated at the outset of this competition. A small scoring sketch follows this list.

4. Leaderboards: Round 2 features both private and public leaderboards, each representing roughly half of the test set. During live evaluations, only the public leaderboard will be accessible. Post-challenge, private leaderboard scores will be unveiled. The final winners will be determined based on these private leaderboard scores.

5. Team Formation Deadline: A gentle reminder that the last date to finalize your teams is 5th September. Ensure your team members are confirmed by then.
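
To sanity-check the 1-second-per-image budget from point 1 locally, you can simply time your inference call image by image. The sketch below is only an illustration: `predict_fn` and the pre-loaded `images` list are hypothetical placeholders for your own code, and timings on your local machine will differ from the 2-core evaluation VM.

```python
import time

def check_latency(predict_fn, images, budget_s=1.0):
    """Time predict_fn on each image and return the ones that exceed
    the per-image budget (1 second in Round 2)."""
    slow = []
    for idx, image in enumerate(images):
        start = time.perf_counter()
        predict_fn(image)                    # your model's inference call
        elapsed = time.perf_counter() - start
        if elapsed > budget_s:
            slow.append((idx, elapsed))
    return slow
```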
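
For points 2 and 3, here is a rough local approximation of F1 at a 0.75 IoU threshold, assuming one ground-truth box per image (as stated for the Round 2 test set) and boxes in (x_min, y_min, x_max, y_max) format. The convention used here, counting a poorly localised detection as both a false positive and a false negative, is an assumption; the official scoring script may handle edge cases (missing detections, class labels) differently.

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def f1_at_iou(preds, gts, thr=0.75):
    """F1 over single-box images: a prediction counts as a true positive
    only if its IoU with the ground-truth box reaches the threshold."""
    tp = fp = fn = 0
    for pred, gt in zip(preds, gts):
        if pred is None:              # no detection on this image
            fn += 1
        elif iou(pred, gt) >= thr:
            tp += 1
        else:                         # detection present but poorly localised
            fp += 1
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```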

📕 Starter-kit for Phase 2

We’re looking forward to seeing your contributions in this round!

All the best!
Team Mosquito Alert


Can you explain how the calculation of the private leaderboard is done, please?
Here is how I understand it:
all submissions are run on the private test set and ranked by score.
That would mean one participant could have submission x rank high on the public leaderboard while submission y ranks high on the private leaderboard.
Alternatively, it could be implemented so that only the submission ranked highest on the public leaderboard is evaluated on the private test set, and the other submissions are ignored.
Which one is it?

Hi @tfriedel
Hope this post provides the clarification: Scoring Announcement: Public vs. Private
Let us know if you have any more questions!


What will happen if our solution fails during the final testing phase? Would we be able to resubmit until we have 3 successful submissions? As we have seen, submissions are not always stable.

They said:

So the private score is already known but hidden. Final testing has already been done, so it cannot fail.