[Explainer + Baseline] Baseline with +0.84 on LB

Hi all! I have added a new submission for the Community Prize, including a Baseline that scores +0.84 on the LB.

Hope you find it useful!

Kudos to @gaurav_singhal for his initial Baseline, which helped me put together my submission.


I noticed you attached your classifier on top of EfficientNet's existing classifier rather than replacing it. That works, but it isn't the usual approach. These pretrained models typically expose a head module such as .classifier or .fc, so you can load the model and simply swap that module out. If you stack another head on top instead, you end up with duplicate fully connected layers and dropout layers, which may cost you performance or training time.

Have a look at how it’s implemented in the efficientnet source:

        self.classifier = nn.Sequential(
            nn.Dropout(p=dropout, inplace=True),
            nn.Linear(lastconv_output_channels, num_classes),
        )

You could replace it like this, after loading the model:

    model.classifier = torch.nn.Sequential(
        torch.nn.Dropout(p=DROPOUT),
        torch.nn.Linear(in_features=model.classifier[1].in_features, out_features=n_classes),
    )
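
To sanity-check the swap end to end, a minimal sketch might look like this (DROPOUT and n_classes are placeholder values here, not ones from the notebook):

    import torch
    from torchvision import models

    DROPOUT = 0.2   # placeholder dropout rate
    n_classes = 10  # placeholder class count

    model = models.efficientnet_b0(pretrained=True)
    model.classifier = torch.nn.Sequential(
        torch.nn.Dropout(p=DROPOUT),
        torch.nn.Linear(in_features=model.classifier[1].in_features, out_features=n_classes),
    )

    # The head is now a single Dropout + Linear pair...
    print(model.classifier)

    # ...and a dummy forward pass yields one logit per class.
    model.eval()
    with torch.no_grad():
        out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 10])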

I removed my whole classifier class and replaced it with this code. We don’t need another fully connected layer or an extra ReLU.

self.model = models.efficientnet_b0(pretrained=True)
# Swap the head: keep the original dropout rate, resize the
# final linear layer to our number of classes.
self.model.classifier = torch.nn.Sequential(
    torch.nn.Dropout(p=self.model.classifier[0].p),
    torch.nn.Linear(self.model.classifier[1].in_features, out_features=self.NUM_CLASSES),
)

I think this is what you are talking about, right? I haven’t tried it with a ReLU though.
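
Just for reference, if I did want to try the ReLU, I guess the head would look something like this (the hidden width of 512 is an arbitrary choice for illustration, not something I’ve tested):

    # Hypothetical variant with an extra hidden layer + ReLU;
    # the hidden width (512) is arbitrary and untested.
    in_features = self.model.classifier[1].in_features
    self.model.classifier = torch.nn.Sequential(
        torch.nn.Dropout(p=self.model.classifier[0].p),
        torch.nn.Linear(in_features, 512),
        torch.nn.ReLU(inplace=True),
        torch.nn.Linear(512, self.NUM_CLASSES),
    )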


This is great! Thanks for the advice! I was having trouble loading the pre-trained weights and found a workaround, though clearly not the most elegant one :grin:
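
For reference, on recent torchvision versions (0.13+) the pretrained argument is deprecated in favour of an explicit weights enum, so loading the pre-trained weights would look something like this (a sketch, assuming a recent torchvision):

    from torchvision import models

    # torchvision >= 0.13: pass a weights enum instead of pretrained=True.
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)

    # Or simply take the best available weights:
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)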