Results and working notes

Dear participants,
we have reported the results here in the form of a table and a graph, along with the instructions for writing the working notes:
We remind you that all participating teams with at least one graded submission, regardless of the score, should submit a CEUR working notes paper as part of CLEF, the Conference and Labs of the Evaluation Forum:
Thank you very much for your contributions, and thank you for your tenacity in spite of the very large number of classes and the volume of data, which led to long model training times.


Thank you very much for this great opportunity in plant identification, and for the quick reply.

I am still interested in a late submission before submitting the working note and its final version, for the reasons below.

  1. I only learned about the challenge on April 29, so I did not have enough time to train and update my results. Moreover, the images are at a very large scale, and training models is time-consuming, which may be a common issue for other participants.

  2. The main difference between this challenge and other image classification tasks is the observation-level grouping, as you mentioned before. I am really interested in this, but given the limited time I only used a very simple method to integrate multiple images for one plant or observation. Personally, I find this really inspiring for other related applications.

  3. I believe that a late submission would give us more chances and opportunities to do better work and, therefore, contribute more to the conference.

Thank you again for this challenge.

Hi Mingle Xu, you did a great job in such a short time; it can't have been easy to get your models to converge so quickly, and I'm very curious to see which approach you used in your future working note.

For your additional run, normally we have to limit the reporting of results to the runs submitted during the challenge, and we prefer to keep the leaderboard display as it is, where you appear in first position (congratulations, by the way!). I imagine, however, that it is very frustrating to have runs ready and not know their performance. I suggest you send me your new runs by email with a downloadable link; I can then compute your scores and communicate them outside the aicrowd platform. You can mention this last result in your working note, at the end of the document in a separate subsection, but stating explicitly that it is a post-challenge result, out of competition. Would this be convenient for you?

Thank you for this quick reply.

Your suggestion works well for me. I will send you my additional results.

Could you please evaluate my results at this Google Drive link?

Since I changed a lot of my code, I checked it carefully and hope it is correct.

Thank you in advance.

Hello, Herve Goeau.

I really appreciate your help with this late submission; it has given me a better understanding of the plant identification task. I hope this work also contributes to this interesting conference.

I tried to understand the observation grouping and its impact, so I ran several ablation studies, detailed below.

  1. In the official submission, we are required to submit the top-30 predictions. If possible, please also check top-5, top-10, and top-20.

  2. Regarding the observation grouping, I tried different methods to combine multiple images for a single observation: single_low, single_high, sorted, and random. In the random case, we just use a random image as the output of the observation, so it can be regarded as a baseline that ignores the observation structure.
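The thread names the fusion strategies (single_low, single_high, sorted, random) but does not define them, so the sketch below is only an illustration of the general idea: per-image class-probability vectors are fused into one observation-level prediction, and the true class is then checked against the top-k ranking. The "random" baseline follows the description above; "mean" fusion is a common alternative added for comparison, and all function names are hypothetical.

```python
import random

def fuse_observation(image_probs, strategy="random", seed=0):
    """Fuse per-image class-probability vectors into one
    observation-level vector (illustrative sketch).
      - "random": pick one image's predictions, the baseline
        that ignores the observation structure (as in the post)
      - "mean": average probabilities across images (a common
        alternative, not named in the post)
    """
    if strategy == "random":
        rng = random.Random(seed)
        return list(rng.choice(image_probs))
    if strategy == "mean":
        n = len(image_probs)
        return [sum(col) / n for col in zip(*image_probs)]
    raise ValueError(f"unknown strategy: {strategy}")

def topk_hit(fused_probs, true_class, k=30):
    """True if the true class index is among the top-k scores,
    e.g. k=30 for the official submission, or 5/10/20 for the
    extra checks requested in point 1."""
    ranked = sorted(range(len(fused_probs)),
                    key=lambda c: fused_probs[c], reverse=True)
    return true_class in ranked[:k]
```

Averaging usually beats picking a single image because it pools evidence from different organs (leaf, flower, bark) of the same plant, which is exactly what the observation structure provides.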

If you prefer, you can fill in my latest evaluations using the Excel file at this link: ClefPlant2022_late_submission – Google Drive

I am looking forward to hearing from you.

Thank you again.

Hello, I would say that normally these conclusions should be drawn by yourself as a preliminary study for your runs. Typically, participants set aside a small part of the observations from the training set to create a validation set. They are then free to report in the working notes any conclusions they find relevant: e.g. the impact of different types of data augmentation, the classification capacity of different architectures, techniques for reducing the size of the final classification layer, or, as you ask, different techniques for combining images from the same observation.

I understand that in the context of the challenge, re-training such a large model on a subset of the training set is expensive. But if you are short of time, you can, for example, run your preliminary studies on a subset of the classes, limiting the task to 1,000 or 10,000 classes. It will be faster, and you will still be able to present relevant analyses of the image-combination aspects, which should a priori remain valid and generalize to the case with more classes, it seems to me.
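The advice above combines two steps: subsample the classes to make preliminary training affordable, then split at the observation level so that all images of one observation fall on the same side of the train/validation boundary. A minimal sketch, assuming a simple mapping from observation id to class label (the function and parameter names are illustrative, not from the challenge toolkit):

```python
import random
from collections import defaultdict

def subsample_and_split(obs_to_class, n_classes=1000, val_frac=0.1, seed=0):
    """Sketch of the preliminary-study setup suggested above:
      1. keep only a random subset of classes to speed up training;
      2. split at the *observation* level, so all images of one
         observation land on the same side of the split.
    `obs_to_class` maps observation id -> class label.
    Returns (train_ids, val_ids) lists of observation ids.
    """
    rng = random.Random(seed)
    classes = sorted({c for c in obs_to_class.values()})
    kept = set(rng.sample(classes, min(n_classes, len(classes))))
    by_class = defaultdict(list)
    for obs_id, c in obs_to_class.items():
        if c in kept:
            by_class[c].append(obs_id)
    train_ids, val_ids = [], []
    for c in sorted(by_class):
        obs_ids = sorted(by_class[c])
        rng.shuffle(obs_ids)
        # keep at least one validation observation per class
        n_val = max(1, int(len(obs_ids) * val_frac))
        val_ids.extend(obs_ids[:n_val])
        train_ids.extend(obs_ids[n_val:])
    return train_ids, val_ids
```

Splitting per class (stratified) keeps every retained class represented in validation, which matters here because many plant species have only a handful of observations.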

Hello. Your idea points in the right direction and is a great one. I believe it makes sense for understanding the task while saving time. Thank you for your valuable suggestion.