Question Regarding the Model Evaluation Process to Select the Winners

Hi @dipam

I have some questions about the model evaluation process used at the end of the competition to select the winners. The rules state that at the end of the competition, participants select 2 submissions to be evaluated on the private dataset.

My questions are:

  1. How do the participants select the submissions that will run on the private dataset?
  2. Is the size of the private dataset the same as the dataset that is evaluated for the public leaderboard?
  3. Since there will be slight variations in CPU performance, will the organizers run a participant’s submission once and then decide whether it passes or not? Or will they run it multiple times? Or can we submit our selected submissions multiple times?

In general, what should we expect from the private runs?


Hi Everyone,

This is a correction to my previous reply about Round 2.

I made a pretty embarrassing mistake: the question was asked on the very day that Round 2 of another challenge was launched, and I replied to this post without noticing that it was about the Visual Product Recognition Challenge.

Here’s the updated answer:

  1. We’ll provide a form for selecting up to 3 submissions to run on the private dataset.
  2. The private dataset will be similar in size to the Round 1 dataset.
  3. To account for slight variations in CPU performance, we’ll relax the timing limits slightly so that submissions that passed Round 1 don’t fail on the Round 2 dataset. Any submissions that still fail will be checked manually, and a decision will be made on whether the timings were sufficient.