It seems like @OG_SouL is using the metric exploit I mentioned previously on the forum.
How can we (the honest participants) be sure that such submissions won't be considered in the competition? For your information, our submissions are optimised for the F1 score, i.e. the harmonic mean of precision and recall.
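For reference, a minimal sketch of the F1 score as the harmonic mean of precision and recall (function name and the example values are illustrative, not from the competition's evaluation script):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The harmonic mean punishes imbalance: a high-precision, low-recall
# model still ends up with a low F1.
print(f1_score(0.95, 0.10))  # ~0.181
print(f1_score(0.80, 0.80))  # 0.8
```

This is why optimising precision alone can be an exploit: F1 forces a model to do well on both metrics at once.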
Thank you @dimitri.fichou for the clarification. I would also like to hear from @OG_SouL about their solution. I could be wrong, and they might have a solution that is genuinely accurate.
Hi @picekl, if you look at our first submission, we got a good overall_precision score but a lower overall_recall score according to the evaluation metric. Therefore, we improved our model to get a better overall_recall score, which we achieved with our subsequent submissions.
But by the time we submitted, the ‘overall_recall’ column had been removed. After getting confirmation from @dimitri.fichou via mail that the overall_precision score would be the sole evaluation metric for the competition, we re-trained our model to improve its precision scores. This is reflected in our last two submissions.
We believe that even if the evaluation metric is changed to either the F1 score or mAP (at IoU > 0.5), two of our submissions would still do well, as they were trained specifically to maximise those metrics.
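For context, mAP at IoU > 0.5 counts a detection as a true positive only when its predicted box overlaps the ground-truth box enough. A minimal IoU sketch for axis-aligned boxes (the (x1, y1, x2, y2) corner format is an assumption; the actual evaluation script may represent boxes differently):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to 0 when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by half: intersection 50, union 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 1/3 ≈ 0.333 → not a match at IoU > 0.5
```

Under this criterion, a prediction like the one above would be rejected at the 0.5 threshold even though it covers half of the ground-truth box.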
@dimitri.fichou, it would be great if you could clarify what the final evaluation metric will be. We’ll make another submission and tag it as the ‘Primary Run’.
Thanks @dimitri.fichou for the clarification. Since we have not tagged any of our submissions as a primary run, can you please confirm which one would be considered for the leaderboard (according to the new evaluation script)?
Also, it would be great if we could see the scores of other participants’ submissions; currently we can only see our own. @picekl, can you help us out with this?
I would like to know how many submissions a team can make. If there are 3 members, can we make 30 submissions (10 submissions per account)? Also, during registration it was mentioned that team members should use their team name as their username, but we were not able to register the same username more than once. Can someone help me out with this? @dimitri.fichou @shivam
I’m afraid that each team is supposed to make only 10 submissions. Using multiple accounts to increase the number of submissions could be considered a rules violation. At least, other platforms (e.g. Kaggle) work this way.
@picekl Ohh, I was not aware of this. But I would like to point out a few things. First, I don’t think the rule you mentioned is stated anywhere on the challenge webpage. Secondly, we did not have an option to create a team; during registration we were told to use the team name as the username. Hence, logically, if the members of a team are represented by one particular name, then submissions could be made from any or all of the members’ accounts.
This is up to the organisers to decide. From my perspective, it’s really hard to track the number of people on a single team, and the expectation is that one team gets 10 submissions in total. Otherwise, teams would be motivated to accumulate a huge number of “contributors” just to increase their submission count. In our case, we are 3 and we are going to make only 10 submissions, and only one of us has signed the EULA.
This is definitely the exploit thread…
Please submit under the same username so it’s limited to 10 submissions and fair to the other participants.
Dimitri