RMSE as an evaluation metric

There’s been some great discussion on the weekly profit leaderboard feedback. Figured I’d throw in a recommendation for the Claims Estimation leaderboard too: using RMSE as the metric implicitly assumes the errors have constant variance, which isn’t true for this dataset (and generally isn’t true for insurance claims). Maybe use something else for claims prediction accuracy? If it’s too late for this competition, it’s perhaps something to consider in the future.
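To make the point concrete, here’s a tiny sketch with made-up numbers (not from the competition data) showing how the squared-error term lets a single large claim dominate RMSE, while a metric like MAE is far less sensitive to it:

```python
import numpy as np

# Toy, made-up numbers: claim costs are mostly zero with a long right tail,
# so the squared error on one large claim dominates the RMSE.
y_true = np.array([0.0, 0.0, 0.0, 150.0, 200.0, 30000.0])
y_pred = np.array([50.0, 50.0, 50.0, 100.0, 250.0, 20000.0])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
mae = np.mean(np.abs(y_true - y_pred))

print(f"RMSE: {rmse:,.0f}")  # ~4,083, driven almost entirely by the 30k claim
print(f"MAE:  {mae:,.0f}")   # ~1,708, much less dominated by that single claim
```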

2 Likes

You can optimize whatever metric you want and ignore the RMSE leaderboard. The winner is, after all, determined by the profit leaderboard.

1 Like

Of course, but I’m just thinking there could be a better metric to help participants gauge their performance against other teams. And you are clearly not ignoring the RMSE leaderboard :stuck_out_tongue_winking_eye:

4 Likes

Thanks @lolatu2!

This is a very interesting point. We chose RMSE only because it’s one of the most accessible and widely known metrics. You’re right that it’s perhaps not the best insurance-specific metric.

There have been suggestions to use something like deviance or other metrics as well.

What other metrics did you have in mind?

I’m not sure how you would implement deviance on a leaderboard, but it can certainly be used to select the best model within a team. Maybe MAE or RMSLE? There are pros and cons to whichever metric is selected. Just something to consider, I think.
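For within-team model selection, something along these lines would cover the candidates mentioned above (a rough sketch using scikit-learn; the numbers and the Tweedie power are just placeholders, not anything from the competition):

```python
import numpy as np
from sklearn.metrics import (
    mean_absolute_error,
    mean_squared_log_error,
    mean_tweedie_deviance,
)

# Hypothetical actual vs. predicted claim amounts for a held-out fold.
y_true = np.array([0.0, 0.0, 120.0, 450.0, 2300.0])
y_pred = np.array([35.0, 60.0, 200.0, 500.0, 1500.0])

mae = mean_absolute_error(y_true, y_pred)
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))

# Tweedie deviance with 1 < power < 2 is a common choice for claim costs
# (a point mass at zero plus a skewed positive tail); power=1.5 is just an example.
deviance = mean_tweedie_deviance(y_true, y_pred, power=1.5)

print(f"MAE:              {mae:.2f}")
print(f"RMSLE:            {rmsle:.4f}")
print(f"Tweedie deviance: {deviance:.2f}")
```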

1 Like

Gini maybe?
I remember seeing some code on Kaggle for a weighted Gini (to account for different exposures) written by none other than @nigel_carpenter
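I don’t have that exact snippet to hand, but a generic exposure-weighted (normalized) Gini usually looks something like this sketch (the function names and the exposure-weighting convention here are my own assumptions, not that Kaggle code):

```python
import numpy as np

def weighted_gini(y_true, y_pred, weights=None):
    """Order policies from highest to lowest predicted claim cost and measure
    how quickly the cumulative actual losses are found, relative to cumulative
    exposure (twice the area between that curve and the diagonal)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    w = np.ones_like(y_true) if weights is None else np.asarray(weights, dtype=float)

    order = np.argsort(-y_pred)                  # riskiest-predicted policies first
    y_true, w = y_true[order], w[order]

    cum_w = np.concatenate(([0.0], np.cumsum(w) / w.sum()))
    cum_loss = np.concatenate(([0.0], np.cumsum(y_true * w) / np.sum(y_true * w)))

    # Trapezoidal area under the cumulative-loss curve, then rescale so that a
    # random ordering scores ~0.
    area = np.sum((cum_loss[1:] + cum_loss[:-1]) * np.diff(cum_w)) / 2.0
    return 2.0 * area - 1.0

def normalized_weighted_gini(y_true, y_pred, weights=None):
    # Scale by the Gini of a "perfect" model that ranks policies by actual losses.
    return weighted_gini(y_true, y_pred, weights) / weighted_gini(y_true, y_true, weights)
```

Normalizing by the Gini of the actual-loss ordering keeps the score between -1 and 1 regardless of how skewed the claim distribution is, which makes it easier to compare across datasets.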

2 Likes

Yes, Gini is also one of the contenders.

I think a likely solution would be a sort of “performance” leaderboard that participants can sort by a few claims-estimation metrics like those: RMSE, Gini, MAE, RMSLE, and maybe something more industry-standard?

Then we can set a “default” and allow participants to see different views as well.
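As a rough sketch of what that could look like (team names, numbers, and the metric set are made up for illustration, computed on a common hold-out set):

```python
import numpy as np
import pandas as pd
from sklearn.metrics import (
    mean_absolute_error,
    mean_squared_error,
    mean_squared_log_error,
)

# Hypothetical hold-out claims and per-team predictions.
y_true = np.array([0.0, 0.0, 120.0, 450.0, 2300.0])
submissions = {
    "team_a": np.array([30.0, 40.0, 150.0, 400.0, 2000.0]),
    "team_b": np.array([10.0, 10.0, 300.0, 600.0, 1200.0]),
}

rows = []
for team, y_pred in submissions.items():
    rows.append({
        "team": team,
        "rmse": np.sqrt(mean_squared_error(y_true, y_pred)),
        "mae": mean_absolute_error(y_true, y_pred),
        "rmsle": np.sqrt(mean_squared_log_error(y_true, y_pred)),
    })

leaderboard = pd.DataFrame(rows)

# "Default" view sorted by RMSE; participants could re-sort by any other column.
print(leaderboard.sort_values("rmse"))
```

A Gini column could be added the same way, e.g. with a weighted Gini like the sketch earlier in the thread.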

2 Likes