Seeking clarification on model selection

Can the admins provide a bit more clarity on whether (and how) competitors select which of their submitted models will be used for the competitive market leaderboard?

I ask because I presume you won’t test all the models we’ve submitted (that just doesn’t fit with how insurance markets operate; we only get one shot at rate deployment), but equally I’m hoping you won’t just take the submission that performs best on the RMSE leaderboard forward to the competitive market.

From my experimentation the results on the RMSE leaderboard are somewhat volatile; there’s a fair bit of luck in your ranking, which is to be expected when you’re evaluated on only 5,000 rows of data. I seem to have got lucky early on, but I know that submission will not perform well on a competitive market leaderboard!
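To illustrate the point (this is not competition code, just a rough sketch with made-up numbers): with heavy-tailed claim amounts, the RMSE measured on a 5,000-row sample can move around quite a bit from sample to sample purely by chance.

```python
import numpy as np

# Rough illustration: how much RMSE on a 5,000-row evaluation sample can
# vary by luck alone. Claims are simulated as lognormal (heavy-tailed, as
# insurance data tends to be); all parameters here are invented.
rng = np.random.default_rng(0)
n_samples, n_rows = 200, 5_000

rmses = []
for _ in range(n_samples):
    true_claims = rng.lognormal(mean=5.0, sigma=1.5, size=n_rows)
    # A "reasonable" model whose predictions carry multiplicative noise.
    predictions = true_claims * rng.lognormal(mean=0.0, sigma=0.3, size=n_rows)
    rmses.append(np.sqrt(np.mean((predictions - true_claims) ** 2)))

print(f"RMSE spread across 5,000-row samples: "
      f"{np.percentile(rmses, 5):.0f} to {np.percentile(rmses, 95):.0f}")
```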

I guess I’m used to, and expecting, a Kaggle-style approach where competitors have to select what they believe is their best model, and that goes through to the final evaluation (the competitive market leaderboard).


Hi @nigel_carpenter

Indeed, in reality you only get one shot at rate deployment. I’ve provided some detail below on how the leaderboards function.

RMSE leaderboard :bar_chart:

This leaderboard will always show your best submission. That is, it will always show the submission with the lowest RMSE across your history of submissions. It is based on your predict_expected_claim function.
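For anyone unsure what the score means: it is just the root-mean-square error between your predicted expected claims and the actual claim amounts. A minimal sketch (the exact interface of predict_expected_claim is defined in the starter kit; the usage lines below are illustrative only):

```python
import numpy as np

def leaderboard_rmse(y_true, y_pred):
    """Root-mean-square error, the metric the RMSE leaderboard ranks on."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# Hypothetical usage: `claims` are the held-out actual claim amounts and
# `expected` is the output of your predict_expected_claim function.
# expected = predict_expected_claim(trained_model, X_holdout)
# score = leaderboard_rmse(claims, expected)
```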

Average competitive profit leaderboard :moneybag:

This leaderboard will show the latest submission you made prior to the previous weekly leaderboard deadline (Saturdays 10PM CET). This is based on your predict_premium function.
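As a sketch of how the two functions typically relate (the 15% loading and the exact signature are illustrative assumptions, not the official template; check the starter kit for the real interface):

```python
import numpy as np

def predict_premium(model, X_raw, loading=0.15):
    """Illustrative pricing rule: expected claim plus a flat 15% loading.

    predict_expected_claim is your own claim model from the RMSE track.
    The loading is where the competitive-market strategy lives: too low
    and you win unprofitable contracts, too high and you win nothing.
    """
    expected_claims = predict_expected_claim(model, X_raw)
    return np.asarray(expected_claims) * (1.0 + loading)
```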

With this setup we try to simulate the “one shot” at rate deployment on a weekly basis. Also recall that the dataset for each week will be a different sample of contracts. We will investigate the possibility of letting you choose your submission in the interim, and I will reply here. But the default right now is “the latest model”.

The same model for both leaderboards? :thinking:

As you note, it could be that your best RMSE submission does not do well at all in a competitive environment, so no, we do not just put that into the competition (unless it’s the last model you submitted!). But it would be interesting to see how often the best RMSE models appear near the top of the weekly profit leaderboard. For that, stay tuned for our newsletter after the first weekly leaderboard next week!

In addition, tomorrow we will be updating the overview page to include these important details, as well as making small tweaks to the starter kit to make the submission process easier :slight_smile:

Please let me know if this does not answer your question.


Thanks @alfarzan, that’s enough for me to work on.

To summarise: the competitive profit leaderboard will use the predict_premium function from the last valid upload before the weekly Saturday 10 PM CET cutoff.

So I must remember to keep a submission slot free and make sure I upload in plenty of time before the 10 PM cutoff.
