Regarding evaluation metric

Which kind of F1 score is being used for this multiclass classification: micro, macro, or weighted?


@gokuleloop: We use macro averaged f1_score from sklearn.metrics.

Thanks for the reply.

I checked the documentation.
@mohanty, it is not clear to me from your answer whether you mean ‘weighted’ or ‘macro’?


@gloria_macia_munoz: Whoops!! Thanks for pointing that out. I realised I had a typo in my previous message, which I have just corrected.

Here is what is being used in the evaluator:

f1 = skm.f1_score(ground_truth, submission_idx_max, average='macro')

PS: the probability distributions are passed through a separate softmax (before computing the submission_idx_max), so please don't bother submitting arbitrarily large values for the individual class probabilities :angel: !!!
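To make the pipeline above concrete, here is a minimal sketch of what the evaluator plausibly does, based on the description in this thread: softmax-normalize the submitted scores, take the argmax per row, then compute macro F1. The variable names `submission_probs` and the toy data are illustrative assumptions, not the actual evaluator code. Note how the row with a huge raw value still yields the same argmax after softmax, which is why inflating scores buys you nothing.

```python
# Illustrative sketch (assumed names/data), following the thread's description:
# softmax-normalize, argmax per row, then macro-averaged F1 via sklearn.
import numpy as np
from sklearn.metrics import f1_score

def softmax(x, axis=-1):
    # Numerically stable softmax: subtracting the row max rescales
    # arbitrarily large raw values without changing the argmax ranking.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy submission: 4 samples, 3 classes (hypothetical data).
submission_probs = np.array([
    [0.1,   0.7, 0.2],
    [900.0, 1.0, 1.0],   # huge raw value -> softmax rescales, argmax unchanged
    [0.2,   0.3, 0.5],
    [0.6,   0.3, 0.1],
])
ground_truth = np.array([1, 0, 2, 1])

probs = softmax(submission_probs, axis=1)
submission_idx_max = probs.argmax(axis=1)
f1 = f1_score(ground_truth, submission_idx_max, average='macro')
```

Macro averaging computes the F1 per class and takes the unweighted mean, so rare classes count as much as frequent ones.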

Best of Luck !!
