The mystery of 0.489 and how to beat 2 deep-learning baselines with a single line of code

If you look at my notebook AIcrowd | Baseline + Exploration: random purchase vs full purchase | Posts, you can see that the zero-prediction solution gets a score of 0.478 locally.

That same solution scores 0.489 on the LB, beating 2 public baselines.

How? Just replace your model's predictions with all zeros.

That’s it.
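As a minimal sketch of what that one-line change looks like: starting from a submission table and overwriting the prediction column with zeros. The column names here are assumptions for illustration, not the actual competition file schema.

```python
import pandas as pd

# Hypothetical submission frame; column names are assumptions,
# not taken from the actual competition files.
submission = pd.DataFrame({
    "session_id": [1, 2, 3],
    "purchase_probability": [0.7, 0.2, 0.5],
})

# The one-line change: throw away the model's output and predict zero everywhere.
submission["purchase_probability"] = 0.0

print(submission["purchase_probability"].tolist())  # all zeros
```

That is the entire "model": a constant zero prediction, which is enough to hit 0.489 on this metric.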


I noticed that too; it’s something to be careful about. I wouldn’t be surprised if someone just replaced the zeroes with random 0/1 values and got above 25%.
Either way, it is not something to build upon. The purpose of a baseline is to provide a simple model that can be improved further through numerous experiments.
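For what that random variant would look like, a quick hedged sketch: filling the prediction column with random 0/1 guesses instead of zeros. The row count and setup here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: instead of predicting all zeros, guess 0 or 1 at random.
# n_rows is an assumption standing in for the real submission length.
n_rows = 5
random_preds = rng.integers(0, 2, size=n_rows)  # each entry is 0 or 1

print(random_preds.tolist())
```

It would say nothing about modeling skill, which is exactly why such a solution is not worth building on.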
