Hi community,

I’m simply interested in the strategies used to build the 5 sets of 3 for the submission. I think this is also an integral part of the whole process besides “just the model”. Getting 1 right out of 3 = 0.2, 2 right = 0.5, and all 3 right = 1 (if the ground truth has 3 descriptors).
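If I read those numbers right, the metric behaves like Jaccard similarity between the predicted and true descriptor sets (1/5 = 0.2, 2/4 = 0.5, 3/3 = 1). A quick sketch, assuming the descriptors are plain strings (the labels here are made up for illustration):

```python
def jaccard(pred, truth):
    """Set overlap: |intersection| / |union|."""
    pred, truth = set(pred), set(truth)
    return len(pred & truth) / len(pred | truth)

# 3 predicted vs 3 true descriptors:
jaccard({"floral", "woody", "sweet"}, {"floral", "musky", "green"})   # 1 match -> 1/5 = 0.2
jaccard({"floral", "woody", "sweet"}, {"floral", "woody", "green"})   # 2 matches -> 2/4 = 0.5
jaccard({"floral", "woody", "sweet"}, {"floral", "woody", "sweet"})   # 3 matches -> 3/3 = 1.0
```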

This is also in regard to the 2 notebooks posted in the “explainer” thread.

The strategy in these notebooks seems to be to take the top 15 predictions by class probability and then split them, in order, into the 5 sets of 3. It’s a different strategy from the one I chose, with different limitations. With this top-15 strategy I feel the additional sets won’t add much to the score: the 5 sets are too diverse.
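As I understand it, that strategy boils down to something like the following sketch (the probability dict and labels are hypothetical, not taken from the actual notebooks):

```python
# Hypothetical per-descriptor class probabilities from a model
probs = {
    "floral": 0.9, "woody": 0.8, "sweet": 0.7, "fruity": 0.6, "fresh": 0.5,
    "musky": 0.4, "green": 0.35, "spicy": 0.3, "citrus": 0.25, "earthy": 0.2,
    "smoky": 0.15, "minty": 0.12, "powdery": 0.1, "leathery": 0.08, "nutty": 0.05,
}

# Take the top 15 by probability and chunk them, in order, into 5 sets of 3
top15 = sorted(probs, key=probs.get, reverse=True)[:15]
sets = [top15[i:i + 3] for i in range(0, 15, 3)]
# -> 5 disjoint sets; the later sets contain only low-probability descriptors
```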

My own strategy was pretty much the opposite. I take the top-6 predictions (could probably be even fewer) and create all possible 3-label combinations (since order doesn’t matter, this results in only 20 combinations). From these 20, I pick the 5 most likely according to class probability (while handling the special case of “odorless” separately, e.g. “floral, odorless, woody” makes no sense). As said, this leads to the opposite problem: the 5 sets are very similar.
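A minimal sketch of that approach, with hypothetical probabilities, ranking combinations by the sum of their class probabilities (my actual ranking and odorless handling may differ in detail):

```python
from itertools import combinations

# Hypothetical class probabilities for the top-6 descriptors
probs = {"floral": 0.9, "woody": 0.8, "sweet": 0.7,
         "fruity": 0.6, "fresh": 0.5, "odorless": 0.4}

top6 = sorted(probs, key=probs.get, reverse=True)[:6]

# C(6, 3) = 20 unordered combinations; drop any combo that mixes
# "odorless" with other descriptors, since that makes no sense
cands = [c for c in combinations(top6, 3) if "odorless" not in c]

# Rank remaining combinations by total probability and keep the best 5
cands.sort(key=lambda c: sum(probs[label] for label in c), reverse=True)
top5 = cands[:5]
# -> 5 heavily overlapping sets, all drawn from the same 5-6 descriptors
```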

Both strategies also don’t take into account the possibility of predicting only 1 or 2 labels. If the ground truth has only one label but the prediction has 3, then even with a match the similarity is only 1/3.

So I’m interested in hearing about your strategies, which might generate a more balanced collection of sets: not too diverse, but not too similar?