Sharing best submissions

Hi all,

This competition was great :sparkles::+1:, and even though it's over, I'm still curious about what other participants have done :thinking::grey_question:. Is your best submission (the one kept for the private LB) the one you expected? Do you think you found a feature others didn't think about? Did you try ensemble models or other approaches?

Mine (20th place) is quite a simple R xgboost model with:

  • pretty high learning rate: 0.1
  • low depth: 3
  • iterations: 750
  • features: 110

and weightings giving the following distribution (using all observations of both the training and test datasets); a rough sketch of the setup follows the list:

  • normal: 68.2%
  • post: 24.8%
  • pre: 7%

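For anyone curious, roughly what that looks like (a minimal R sketch, not my exact code; the data objects and the way the weights are derived here are simplified assumptions):

```r
library(xgboost)

# Hypothetical objects: 'train_x' is the numeric matrix of ~110 features,
# 'train_y' the integer labels (0 = normal, 1 = post, 2 = pre).
target_prop <- c(normal = 0.682, post = 0.248, pre = 0.070)

# One way to hit those proportions: scale each class's weight so its total
# share matches the target distribution (my reading of "weightings" above).
obs_prop <- as.numeric(prop.table(table(train_y)))
w <- (target_prop / obs_prop)[train_y + 1]

dtrain <- xgb.DMatrix(data = train_x, label = train_y, weight = w)

params <- list(
  objective = "multi:softprob",
  num_class = 3,
  eta       = 0.1,  # pretty high learning rate
  max_depth = 3     # low depth
)

fit <- xgb.train(params = params, data = dtrain, nrounds = 750)
```
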
About feature engineering :hammer_and_wrench::link:, I gathered several variables with mean and sum aggregations, like most of you probably did. Maybe one feature (with a good importance ranking) that we never talked about is the ratio between the minute hand length and the average distance of the digits from the centre:
ratio_hand_Xi = minute_hand_length / mean_dist_from_cen__
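Something along these lines, assuming one distance-from-centre column per drawn digit (the column names here are just illustrative, not the real ones):

```r
# Hypothetical column names for the per-digit distances from the clock centre.
digit_cols <- grep("^dist_from_cen_", names(df), value = TRUE)

df$mean_dist_from_cen <- rowMeans(df[, digit_cols], na.rm = TRUE)
df$ratio_hand         <- df$minute_hand_length / df$mean_dist_from_cen
```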

Kudos to the organizers, and congrats to all winners and contributors! :trophy:

7 Likes

My solution is an ensemble of an LGB model and an NN.
The NN was performing better than the LGB on my CV and validation data, but was much worse on the public and private LB!
The NN architecture had a convolutional layer on a 5x5 grid representing the clock.
Luckily, even though the NN didn't perform well on the LB by itself, it gave a nice boost when ensembled with the LGB.
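
Roughly the idea, sketched with keras in R and a plain weighted blend (filter counts, layer sizes, and the blend weights are placeholders, not the actual values):

```r
library(keras)

# Tiny CNN over the 5x5 grid representation of the clock (one channel).
nn <- keras_model_sequential() %>%
  layer_conv_2d(filters = 16, kernel_size = c(2, 2), activation = "relu",
                input_shape = c(5, 5, 1)) %>%
  layer_flatten() %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 3, activation = "softmax")

nn %>% compile(optimizer = "adam", loss = "categorical_crossentropy")

# 'pred_lgb' and 'pred_nn' stand for each model's class-probability matrices
# on the test set; the ensemble is just a weighted average of the two.
pred_blend <- 0.7 * pred_lgb + 0.3 * pred_nn
```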

2 Likes