Updated starter kit & submission evaluation

Hi, after reading the old posts I have some questions about the submission process.

When will the updated starter kit for Round 5 be posted?

I saw in this reply that it will be posted by the end of the month. Is that information still accurate?
Here is what I have done so far:

  • I generated my own ‘round5_classes.csv’ by extracting the species from the ‘SnakeCLEF2021 - TrainVal Metadata (73.2 MB)’ file (first sketch after this list).

  • I added my own dependencies in the ‘environment.yml’ file.

  • I modified the run.py script so that it loads my model and outputs softmax probabilities for each image it receives (second sketch below).
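
For reference, the class extraction looks roughly like this (a minimal sketch; the metadata file name and the ‘binomial’ column name are my assumptions, not something from the starter kit):

```python
import pandas as pd

# ASSUMPTION: the metadata file name and the species column name
# ("binomial") are guesses -- adjust them to your local copy.
meta = pd.read_csv("SnakeCLEF2021_train_metadata.csv")

# Unique species names, sorted so the class indices stay reproducible.
classes = sorted(meta["binomial"].dropna().unique())

pd.DataFrame(
    {"class_idx": range(len(classes)), "binomial": classes}
).to_csv("round5_classes.csv", index=False)
```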

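And the relevant part of my run.py changes, again only a sketch (the model loading is a placeholder for my actual checkpoint, not the official starter-kit API):

```python
import torch
import torch.nn.functional as F

# ASSUMPTION: placeholder loading code -- this expects a fully
# serialized torch.nn.Module, however your model was actually saved.
model = torch.load("model.pth", map_location="cpu")
model.eval()

def predict(image_tensor: torch.Tensor) -> torch.Tensor:
    """Return softmax probabilities for one preprocessed (C, H, W) image."""
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))  # add a batch dimension
    return F.softmax(logits, dim=1).squeeze(0)     # shape: (num_classes,)
```
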
Is there something else I should add, or should I wait for the release of the updated starter kit?

The other question I have is about the evaluation process.
I made a submission and I have the following labels attached to my issue:

  • ‘image_built_successfully’

  • ‘waiting_in_queue_for_evaluation’

Am I on the right path? Also, from what I read in the other discussions, the evaluation servers aren’t up yet. If that’s true, do you have a date for when they’ll be up so that I can properly test my submissions?

Thanks,
Razvan


Hi everyone,

We are in touch with the organizers about the new timeline for Round 5.
As of now, the expected submission deadline is 14th May 2021 (here).

Regarding submissions, you will, unfortunately, need to wait for the updated starter kit to be released.
It will follow a similar structure, so you can start on your code now and submit it later.

The starter kit will be released along with the challenge launch.

Apologies for keeping you waiting, but in the meantime, I hope you are all having fun with the latest dataset released for Round 5! :smiley:

You can always share some nice insights with fellow participants as a community contribution in the Notebooks section.

UPDATE: The submission close date is 14th May 2021, not the challenge launch date.


Hi Shivam,
Any updates on the starter kit and evaluation server?

Thanks,
Gokul.


Hi Gokul,

Looks like there are some issues on the AICrowd side. Given that, we might use a completely different strategy this year: probably releasing the “public” part of the test dataset and expecting CSV files from the participants. This is really unfortunate, as the deadline is pretty close. Stay tuned!
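
If we do go down that road, producing the file should be straightforward, along these lines (a sketch only; the column names and the predictor here are placeholders, not a confirmed submission format):

```python
import csv

# Sketch only: the column names ("observation_id", "binomial") and
# the predictor are placeholders, NOT a confirmed submission format.

def predict_species(image_path):
    return "Vipera berus"  # stand-in for real model inference

test_images = {"obs_001": "images/obs_001.jpg"}  # placeholder test set

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["observation_id", "binomial"])
    for obs_id, path in test_images.items():
        writer.writerow([obs_id, predict_species(path)])
```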

Best,
Lukas


That’s unfortunate. Awaiting more information regarding this year’s procedure.

Hi @picekl,

“Looks like there are some issues on the AICrowd side.”

I think that misrepresents the situation. It is hard for us to deprioritize everything else just because there is urgency now that the test data is finally available two weeks before the submission deadline.
Deprioritizing everything else to address this would be unfair to our internal roadmaps and to the other challenges, which have been planned months in advance.

Separately, launching the competition two weeks before the submission deadline is also unfair to the participants, who may not have enough time to put together their submissions.

Thanks,
Mohanty


Hi @mohanty,

I really do not know why you made this public. Even though you are wrong on most of your points, I won’t respond to that. Just two questions: will you make CSV submissions available? If yes, when?

Otherwise, I will do the evaluation “manually”.

All the best,
Lukas