This competition was great, and even though it's over, I'm still curious about what other participants did. Is your best submission (the one kept for the private LB) the one you expected? Do you think you found a feature others didn't think of? Did you test ensemble models or other approaches?
Mine (20th place) is a fairly simple R xgboost model with:
- pretty high learning rate: 0.1
- low depth: 3
- iterations: 750
- features: 110
and weights giving the following class distribution (computed over all observations of both the training and test sets):
- normal: 68.2%
- post: 24.8%
- pre: 7%
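For concreteness, here is a minimal sketch of that setup transcribed into Python's xgboost parameter names (the original model was built with the R package). The `class_weights` helper is purely my own assumption about how one might enforce the stated class shares; the observed counts in the example are made up.

```python
# Sketch of the configuration above, using Python-style xgboost
# parameter names (the original used the R package).
params = {
    "eta": 0.1,      # pretty high learning rate
    "max_depth": 3,  # low depth
}
num_boost_round = 750  # iterations
n_features = 110

# Target class shares over train + test combined, as stated in the post.
target_share = {"normal": 0.682, "post": 0.248, "pre": 0.070}

def class_weights(observed_counts):
    """Hypothetical helper: per-class sample weights that rescale an
    observed class distribution to the target shares above."""
    total = sum(observed_counts.values())
    return {c: target_share[c] / (n / total) for c, n in observed_counts.items()}

# Example with made-up raw counts split 50/30/20:
# 'normal' gets upweighted, 'post' and 'pre' downweighted.
w = class_weights({"normal": 5000, "post": 3000, "pre": 2000})
```

Any dataset whose raw distribution already matches the targets would get weights of 1.0 everywhere.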
As for feature engineering, I aggregated several variables with means and sums, as most of you probably did. One feature (with a good importance ranking) that I haven't seen discussed is the ratio between the minute-hand length and the average distance of the digits from the centre:
ratio_hand_Xi = minute_hand_length / mean_dist_from_cen__
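As a sketch, that feature could be computed like this (the function and argument names are hypothetical, and I'm assuming the per-digit distances from the clock centre are already available as a list):

```python
from statistics import mean

def hand_to_digit_ratio(minute_hand_length, digit_dists_from_centre):
    """Hypothetical reconstruction of the ratio feature: minute-hand
    length divided by the mean distance of the digits from the centre."""
    return minute_hand_length / mean(digit_dists_from_centre)

# e.g. a hand reaching slightly past the digit ring gives a ratio > 1
r = hand_to_digit_ratio(4.2, [3.9, 4.1, 4.0, 4.0])
```

A ratio near 1 means the hand reaches about as far as the digit ring; values well above or below that could flag unusual drawings.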
Kudos to the organizers, and congrats to all winners and contributors!