Weekly Leaderboard

Hi sir/madam,
I submitted my work for this week for evaluation, but I can't see my score on the leaderboard.
My submission ID is 123844.
Please help me out.




This appears to have happened to me as well. My submission ID is 123887.



Throwing an idea out there, given the number of users who have had issues with this leaderboard…

What about an additional mid-week round, recycling and mixing previous weeks' data?

For some it could iron out issues; for others, one final chance to tweak the price margin and see feedback.

On my end, this leaderboard was my final model, which I will no longer be touching, besides shifting the price loading.


Great idea (although I must say, I hope there aren't too many issues with this week's LB given my new model got 6th haha)


Same for me: I made a submission and got weekly feedback, but the submission is not appearing on the leaderboard.

Sorry all,

It seems for some submissions there was misleading information about which model was being used in the profit leaderboard. It should all be fixed now if you refresh.

Once you refresh and you think you still don’t have the right submission there then please comment here :point_down:


Did you only change your profit loading this time around too?
Got unlucky on the claims?

Guess I’ll start a new thread to open discussion.

Yes @michael_bordeleau, I kept the same claims model (which I'm now convinced underfits the data) and reduced my profit load by approximately 5%. My market share went from 0.7% to 14.8%, a level I'd be happy with, but because my model doesn't differentiate risk as well as others do, I picked up too many claims, especially in the 5k to 10k bucket.

Overall I now think I’ve got a good handle on the right profit margin distribution. Now just have to tweak the underlying claims model.
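For anyone following the profit-loading discussion above, here is a minimal sketch of how a loading feeds into a premium. All names and numbers are illustrative (this is not the competition's actual pricing code): the premium is just the expected claim cost grossed up by the loading, so cutting the loading lowers every quote and, as described above, tends to win more market share.

```python
# Hedged sketch: premium = expected claims grossed up by a profit loading.
# `expected_claims` and `loading` are hypothetical names, not from the game.

def premium(expected_claims: float, loading: float) -> float:
    """Gross premium: expected claim cost times (1 + profit loading)."""
    return expected_claims * (1.0 + loading)

# Cutting the loading by 5 percentage points cheapens every quote:
old_price = premium(500.0, 0.20)  # 600.0
new_price = premium(500.0, 0.15)  # 575.0
print(old_price, new_price)
```

The trade-off the thread describes follows directly: a lower loading wins more policies, but if the underlying claims model misjudges risk, many of those extra policies are the ones competitors priced higher for good reason.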

Interesting to hear your comment on staying with your current model. I certainly won't be going with mine. The final week is very different from all that have gone before, and that opens up a new feature that is likely to be very predictive but which, up until now, no one has been able to exploit…


Ah, yes, that…
Well, I’ve been planning ahead. Have a look at my last bunch of submissions:

^^This was me trying to push my framework for the finals.

Just a heads-up: plan ahead, and keep it lean.
You know why it failed? Turns out my framework was busting the server memory, which is capped at 16 GB per submission.

I made it work by deleting objects and calling gc() along the way.
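The fix described above (the poster's framework is likely R, where `gc()` is built in) can be sketched with the Python standard library's `gc` module as a rough analogue. The function and data here are hypothetical stand-ins for the real pipeline:

```python
import gc

def process_chunks(chunks):
    """Process data in pieces, freeing each intermediate before the next.

    Deleting large temporaries and forcing a garbage-collection pass keeps
    peak memory low, which matters under a hard per-submission cap
    (16 GB in this competition).
    """
    results = []
    for chunk in chunks:
        intermediate = [x * 2 for x in chunk]  # stand-in for heavy feature work
        results.append(sum(intermediate))
        del intermediate  # drop the reference to the large temporary
        gc.collect()      # force a collection pass, like gc() in R
    return results

print(process_chunks([[1, 2], [3, 4]]))
```

In CPython, `del` alone usually frees the object once its refcount hits zero; the explicit `gc.collect()` mainly helps with reference cycles, whereas in R the equivalent `gc()` call is the conventional way to prompt memory to be returned.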


Sir, can you please let me know which model you are using?

This is a really cool idea. Before the final submissions, a last round with the whole previous dataset would be interesting, or just a mix of them as suggested by Michael, @alfarzan :smiley:

It would be interesting, but it's too late in the game and I don't think we'd be able to do that, unfortunately. In the meantime, I'd suggest going back to the feedback from week 4 onward and studying it alongside the leaderboard results; maybe something is there :thinking:

I will be making an announcement soon about the process for the final leaderboard and what we will do to make sure no one misses out due to small errors as well :muscle:


At least we tried @Baracuda :wink:

But I understand the limitations. The staff have been around all this time, extremely responsive to all the little issues we've encountered. As a crowd, I find we were quite demanding lol, both here and on Discord.

Super grateful for all their hard work. :fist_right::fist_left: