Regarding evaluation metric

#1

What kind of F1 score is being used for this multiclass classification? f1 micro, macro, or weighted?

#2

@gokuleloop: We use macro averaged f1_score from sklearn.metrics.

#3

Thanks for the reply.

#4

I checked the documentation: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html
@mohanty, it is not clear to me from your answer whether you mean ‘weighted’ or ‘macro’?

#5

@gloria_macia_munoz: Whoops !! Thanks for pointing that out. I realised I had a typo in my previous message, which I just corrected.

Here is what is being used in the evaluator :

f1 = skm.f1_score(ground_truth, submission_idx_max, average='macro')

PS: the probability distributions are passed through a separate softmax
(before computing the submission_idx_max), so please don't try to game it with arbitrarily large values for the individual class probabilities :angel: !!!
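To illustrate, here is a minimal sketch of what the evaluator described above might do: apply a softmax to the submitted class scores, take the argmax per row, then compute macro-averaged F1 with sklearn. The array names and example values are hypothetical, chosen only to show that softmax preserves the per-row ranking, so inflating raw scores does not change the argmax.

```python
import numpy as np
from sklearn.metrics import f1_score

def softmax(x, axis=1):
    # subtract the row-wise max for numerical stability
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# hypothetical submitted scores for 4 samples over 3 classes
raw_scores = np.array([
    [10.0, 1.0, 0.5],
    [ 0.2, 5.0, 0.1],
    [ 0.3, 0.2, 9.0],
    [ 2.0, 8.0, 1.0],
])
ground_truth = np.array([0, 1, 2, 0])

# softmax is monotonic per row, so argmax is unchanged by it
probs = softmax(raw_scores)
submission_idx_max = np.argmax(probs, axis=1)

f1 = f1_score(ground_truth, submission_idx_max, average='macro')
print(f1)
```

Macro averaging computes F1 for each class separately and takes the unweighted mean, so every class counts equally regardless of how many samples it has.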

Best of Luck !!
