The submission needs the japonicus-koreicus class, not japonicus/koreicus

The AICrowd website and the dataset's CSV file refer to the japonicus/koreicus class, but the submission is evaluated against japonicus-koreicus, as I saw from the class names in the baseline predictions, which I only discovered recently, the hard way.
It could have saved people like me some time if the organizers had at least mentioned it earlier.


OMG! Almost 2 weeks and more than 70 submissions trying to understand why the CV/LB gap was so huge. My first assumption was that I had messed up the label mapping on my side, but I did not take the time to fully probe the LB to detect this issue. The Round 1 baseline notebook here is just wrong for Round 2.

class_labels = {
    "aegypti": 0,
    "albopictus": 1,
    "anopheles": 2,
    "culex": 3,
    "culiseta": 4,
    "japonicus/koreicus": 5
}

One has to notice the correct labels in the random model here:

self.class_names = [
    "culex",
    "japonicus-koreicus",
    "culiseta",
    "albopictus",
    "anopheles",
    "aegypti"
]
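For anyone hit by the same trap, remapping at prediction time is an easy fix. A minimal sketch, assuming the evaluator matches predictions on class-name strings; both orderings are copied from the snippets above, while the helper name and the assert are my own illustration:

# Translate an argmax index from a model trained with the Round 1
# baseline ordering into the class name Round 2 actually expects.
train_order = [
    "aegypti", "albopictus", "anopheles",
    "culex", "culiseta", "japonicus/koreicus",
]

# The only spelling difference: slash vs. hyphen.
rename = {"japonicus/koreicus": "japonicus-koreicus"}

def index_to_submission_name(pred_idx: int) -> str:
    """Return the submission-side class name for a predicted index."""
    name = train_order[pred_idx]
    return rename.get(name, name)

# A model predicting class 5 should now submit "japonicus-koreicus".
assert index_to_submission_name(5) == "japonicus-koreicus"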

Without your finding, @saidinesh_pola, I was on my way to concluding that the test set distribution was totally different from the train set, and I was about to give up on this lottery.

Thanks again! The competition can start now.

BTW: the submission issue I've reported is still there. It happens on around 2 out of 10 submissions for me: the same code submitted twice can either fail or succeed. The 2-second-per-image limit check is not stable. In fact, I think the effective limit for a submission to pass safely is closer to 1.2 seconds than 2. So one should submit the same code several times to be sure whether it really works or fails.
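To see how close each image sits to the limit, I would log per-image latency against that safety margin. A rough sketch, assuming a model object with a predict method; the method name and the 1.2 s margin are my own guesses, not an official spec:

import time

SOFT_LIMIT_S = 1.2  # empirical safe margin; the official check is 2 s per image

def timed_predict(model, image):
    """Run inference and flag images that approach the timeout."""
    start = time.perf_counter()
    pred = model.predict(image)  # hypothetical predict API
    elapsed = time.perf_counter() - start
    if elapsed > SOFT_LIMIT_S:
        print(f"WARNING: inference took {elapsed:.2f}s, above the {SOFT_LIMIT_S}s margin")
    return pred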


Good, I assumed you were aware of that. Inference is limited to two seconds, although in some circumstances the initial images may take longer to load. This has also happened locally for me. Are you comparing against inference on your local CPU? Inference on the CPU used for submission will take longer, given the old CPU model.


For the initial images, it's easy to work around with a warmup. Usually the problem (a timeout, I guess) occurs 40-50 minutes after submitting. My local CPU is 4x faster than AICrowd's, so I only compare between submissions on AICrowd, as we have logs of the "MosquitoalertValidation" stage.
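The warmup itself is just a few dummy inferences before the timed loop, so one-time setup cost (weight loading, kernel compilation, allocations) is paid up front. A sketch, assuming the same hypothetical predict API and a 224x224 RGB input shape:

import numpy as np

def warmup(model, input_shape=(1, 3, 224, 224), n_runs=3):
    """Run dummy inferences so lazy initialization is not charged to real images."""
    dummy = np.zeros(input_shape, dtype=np.float32)
    for _ in range(n_runs):
        model.predict(dummy)  # hypothetical predict API, as above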

Damn, now I get a similar result with my validation score too :slight_smile: I guess the problem was the wrong labeling. @saidinesh_pola, thank you for pointing it out.
