I’m curious whether the top leaderboard competitors used better models or a different dataset.
All I can say is that it is too early to reveal the details about submissions. Just work hard, try many ideas, and eventually you will get to the top of the leaderboard!
I don’t know about other competitors (I can only assume), but we used only this dataset, and it is sufficient. So working on the model, training pipeline, etc. is enough to reach LB = 0.62114.
Could you get the same result on the public validation dataset? Do you think the results on the public validation dataset and the LB are similar?
Yes, they are strongly correlated.
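A quick way to sanity-check this for yourself is to log your local validation score next to the LB score for each submission and compute the correlation between the two. A minimal sketch (the score pairs below are purely illustrative, not from this thread):

```python
import numpy as np

# Hypothetical score log: local public-validation metric vs. leaderboard
# metric for a handful of submissions (illustrative numbers only).
local_scores = np.array([0.580, 0.595, 0.602, 0.610, 0.618])
lb_scores    = np.array([0.585, 0.598, 0.605, 0.612, 0.621])

# Pearson correlation between local validation and LB scores.
r = np.corrcoef(local_scores, lb_scores)[0, 1]
print(f"Pearson r = {r:.3f}")
```

If r stays close to 1 across many submissions, improvements seen locally should transfer to the leaderboard.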