Hey there!

You probably are in the same situation…

Personally, I’m juggling between:

That’s when I thought about re-using my market simulation, discussed here and here.

Let's use my top 10 models, which are all different algorithms.
They can serve as a proxy for the market formed by the leaderboard's top 10 insurers.

Then, because the top 10 on the leaderboard all show profitability, it naturally leads to this optimization question…

Question

What’s the minimal loading required so that all 10 insurers are profitable?
And for simplicity, let's stick to a multiplicative approach.

Response

A lot. …

But before I spoil the answer…

Perhaps I’m oversimplifying, but is it reasonable to think of the top 10 insurers as a “final market”?
So that their respective profits and market shares would be the result of those unique top 10 insurers going against each other? (no other insurers involved)

If so…

• How do I reconcile the fact that the top 10 insurers, on week 10, have a combined market share of 65.4%, which is far from 100%?
Should it ideally be closer to 100%?

• Should I expect the top 10 insurers to all be profitable when going against each other?

^^ I understand that there are averages along the way, but if the staff can chime in on that, it would be great!

By now, perhaps you see how those questions are crucial assumptions in the simulation analysis.

Response

Note that the minimal loading required is highly sensitive to the actual losses occurring in the test set.
I ran the simulation on the first 4 seeds, which represent 4 different holdouts.

Seed 17: 32%
Seed 42: 36.5%
Seed 666: 41%
Seed 1313: 31.7%

Results indicate that a loading between ~30% and ~40% is required for all insurers to be profitable.
It seems high to me; what do you think?
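For anyone who wants to poke at the idea without my models, here's a minimal sketch of this kind of search. Everything in it is invented for illustration (the loss and estimate distributions, the 10,000 policies); the real simulation uses the actual models and holdouts. The point is just the mechanics: everyone applies the same multiplicative loading, the cheapest quote wins each policy, and we bisect on the loading until every insurer's book is profitable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the market simulation: 10 insurers quote
# price = own cost estimate * (1 + loading) on 10,000 policies.
# Distributions are invented, not fitted to the competition data.
n_insurers, n_policies = 10, 10_000
true_mean = 100.0
actual_losses = rng.exponential(true_mean, n_policies)
estimates = true_mean * rng.lognormal(0.0, 0.1, (n_insurers, n_policies))

def all_profitable(loading):
    prices = estimates * (1 + loading)
    winner = prices.argmin(axis=0)          # cheapest quote wins the policy
    profits = [prices[k, winner == k].sum() - actual_losses[winner == k].sum()
               for k in range(n_insurers)]
    return min(profits) > 0

# Everyone shares the loading, so the winner of each policy never changes
# as the loading moves; profit is monotone in it and bisection is valid.
lo, hi = 0.0, 1.0
for _ in range(30):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if all_profitable(mid) else (mid, hi)
print(f"minimal common loading ≈ {hi:.1%}")
```

Even in this toy version the required loading comes out well above zero, because you win exactly the policies you underestimate (winner's curse), so a naive loading of 0% loses money for everyone.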

Is it high because of how the problem is framed… that all 10 insurers must be profitable? Which leads me back to my two comprehension questions…

EDIT:
It becomes clear that the randomness of the actual losses matters a lot.

Either:

• you shoot for a hefty loading to prepare for larger losses, or
• you gamble that year 5 will have smaller losses and apply a smaller loading.
7 Likes

Thanks for the interesting insight!

• in the second round of the profit calculation, everyone is fighting against the top 10% of the first round (the top 25 insurers). Maybe you should average the market share over 25 insurers and not 10? (Well, that doesn't change the result: the average market share of the top 25 insurers is around 5%, where we would expect 10%.)
• also, the top 25 insurers of the second round are likely not the same as the top 25 insurers of the first round. I ran market simulations, and usually around 15 of the round-1 top 25 remain in the round-2 top 25.

Apart from this, the 30-40% loading required to make all insurers profitable is an interesting value! I have to think about it.

2 Likes

Well, I think this may now become a self-fulfilling prophecy to some extent: if people look to your analysis as a guide, everyone may just go with 35%. We shall see.

I’m not feeling as generous as you, so I’m not sharing the details for now, but I’ve been exploring something somewhat similar and get a much lower range (not necessarily correctly).

I'll throw this in: how bad is your worst model? … is it fair to expect it to make a profit?

4 Likes

I don’t want to spoil any of the interesting discussion here but I’ll just chime in on these two points:

1. The top 10% (not top 10) play against each other in markets of size 10, so their average market shares will not sum to 1. To see why, consider a stylised example where you have a total of 3 players playing in markets of size 2 and the winner always gets a market share of 20%. Now let's say the top 2 are identical. Then their average market shares will be 20% each, and the top 2 add up to 40%. Which is fine.
2. Not necessarily, it just means the competition is likely fiercer at the top. But people can still lose money!
2 Likes

Very interesting post. And, yes, I’m definitely surprised at how much profit load we’re having to apply to achieve profitability.

One other interesting thing: if you pull the profit leaderboard tables from week to week, there is "some" correlation between profits in one week and the next (I think more data scrubbing is needed to confirm whether that correlation is real), but there is a definite correlation between market share and profits. Many of the consistently decent performers have relatively small market share, which probably means there is a small segment of the market that can consistently be identified as profitable, while the rest is much more luck driven.
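For anyone wanting to repeat the check, the computation itself is easy to script. This uses synthetic leaderboard pulls (the skill/noise split is entirely made up, just to have data); Spearman rank correlation is less sensitive than Pearson to the heavy-tailed profits here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Synthetic stand-in for two weekly leaderboard pulls.  The latent
# "skill" and the noise scales are invented for illustration only.
n = 500
skill = rng.normal(0, 1, n)                      # latent pricing quality

def weekly_pull():
    return pd.DataFrame({
        "profit": skill + rng.normal(0, 2, n),   # profit is mostly luck
        "share": 0.1 * skill + rng.normal(0, 0.05, n),
    })

w9, w10 = weekly_pull(), weekly_pull()

profit_corr = w9["profit"].corr(w10["profit"], method="spearman")
share_corr = w10["share"].corr(w10["profit"], method="spearman")
print(f"profit week-to-week: {profit_corr:.2f}, share vs profit: {share_corr:.2f}")
```

With real leaderboard tables you would just join the two weeks on participant name instead of relying on a shared index.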

1 Like

I'm curious if anyone was influenced by this post when setting their profit margin for the final round. I have to admit it was one factor I took into account when setting my final loading, mainly because I thought it might influence others: the self-fulfilling prophecy @tom_snowdon mentioned. Was there an anchoring effect? https://en.m.wikipedia.org/wiki/Anchoring_(cognitive_bias)

2 Likes

All part of my evil plan to get people to overcharge.

Haha, just kidding.
As I pointed out, it felt too high to me. I ended up choosing an average loading of ~23%, with some case-by-case fluctuations.

My rationale was that I have to account for the fact that some insurers need to lose money. That's the harsh reality.

We undercut each other to try to survive.

This is not a charitable positive-sum game, or even a zero-sum game. We are in a negative-sum game…

If any of the top players are reading this and have used any of the insights I’ve provided, give me a shout-out in your presentation.

5 Likes

Haha!

There was a bit of concern at the design phase that, because this is effectively a commodity market without product differentiation or customer loyalty, competition would erode all profits and the leaderboard would just be full of negative numbers from the very top. Glad to see people corrected for this!

On a separate note, one of the competitors recently revealed a very interesting strategy for setting their margins, using findings from early Keynesian beauty contest experiments where you win \$X if you guess a number X between [0, 100] that is lower than everyone else's guess.

In that game the numbers people came up with usually ended up around 14, even though the Nash equilibrium suggests it should be zero. I guess the question is how many levels deep people go on "how much do they think I think they'll price things…" and so on.
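Out of curiosity, here is a rough "level-1" calculation for that game. If you assume your N-1 opponents guess uniformly on [0, 100] (a level-1 assumption; N = 7 is my arbitrary pick, not a number from the experiments), your expected payoff from guessing x is x * (1 - x/100)^(N-1), which peaks at 100/N:

```python
import numpy as np

# Level-1 best response in the "lowest guess wins its own value" game:
# you win $x only if your x is below all N-1 other (uniform) guesses,
# so the expected payoff is x * P(all others above x).
N = 7                                     # assumed number of players
x = np.linspace(0.0, 100.0, 100_001)
payoff = x * (1 - x / 100) ** (N - 1)
best = x[payoff.argmax()]
print(f"level-1 best response for N={N}: {best:.2f}")  # 100/N ≈ 14.29
```

It's a cute coincidence that one level of reasoning against uniform priors lands near the observed 14; iterating deeper levels of best response pushes the guess down toward the Nash value of zero.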

6 Likes

As I didn't see much activity on the thread, I pretty much dismissed the thought that everyone might jump on the 30%+ bandwagon in the end.

My week 10 results had me at 9.4% market share, and I assumed that there wouldn't be too much market movement between there and the final leaderboard. I ended up going with a final profit structure of (X+5)*1.135, which (by some dodgy maths) I thought would land me in a similar position to week 10, where I had a lower additive and higher multiplicative component (I can't download the files at the moment, but I think I had an additive of 0.7 and a multiplicative load of 23%, plus some other funny business). I placed 6th in week 10, and while I did submit a significantly better model than week 9, I get the impression that there weren't any 10k+ claims, and that this did me a big favour (I put minimal effort into investigating/protecting against large losses).

I’ve not looked into the error by predicted premium (although I note some convos have started on this on another thread), but under the (naive) assumption that my models are “good” across the spectrum, it makes sense to push for a greater %load on lower premium business (hence the additive) - this was backed up by some simulations of 10 equivalent competitors, 9 with a flat profit load and 1 with the form somewhat like the one I described above (and indeed the load of 5 selected via experimentation).
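To make the shape of that structure concrete, here's a quick comparison of the two rules (the 0.7 additive / 23% multiplicative week-10 figures are the approximate ones from above). The larger additive in the final structure is exactly what shifts a bigger percentage load onto cheap policies and a smaller one onto expensive policies:

```python
def final_price(cost):
    return (cost + 5) * 1.135     # final-round structure from the post

def week10_price(cost):
    return (cost + 0.7) * 1.23    # approximate week-10 structure

# Effective percentage load as a function of the expected cost X.
for cost in (20, 50, 100, 500):
    f = final_price(cost) / cost - 1
    w = week10_price(cost) / cost - 1
    print(f"expected cost {cost:>3}: final load {f:6.1%}, week-10 load {w:6.1%}")
```

The two curves cross around a cost of ~50: below that the final structure charges more in percentage terms, above it the week-10 structure does.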

I don't think I'll end up placing highly, unfortunately. A combination of me not realising that the training data is likely not provided in the predictions file on the final leaderboard (which means my NCD history features are probably reduced to a constant NA), possibly being a few % too cheap on average, and not protecting enough against large losses will all have cost me … I suspect.

4 Likes

The Nash equilibrium would depend on how many competitors there are. In our case, it should be 10/9 ≈ 1.11, i.e. an 11% load (https://en.wikipedia.org/wiki/First-price_sealed-bid_auction). My team wasted many weeks early on because our profit load wasn't anywhere close to where it needed to be (we were ultimately influenced by @michael_bordeleau 's 30%+).
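Just to spell out the arithmetic (the mirrored N/(N-1) formula is my conjecture for our cheapest-quote market, not established theory; the classic result only covers the auction side):

```python
# With N symmetric bidders and uniform private values, the first-price
# sealed-bid equilibrium bid shades the value by (N-1)/N.  The mirrored
# guess for a cheapest-quote-wins market is a markup of N/(N-1) on cost.
for n in (2, 5, 10, 25):
    shading = (n - 1) / n         # auction: bid as a fraction of value
    markup = n / (n - 1) - 1      # mirrored: loading on the cost estimate
    print(f"N={n:>2}: bid shading {shading:.0%}, mirrored markup {markup:.1%}")
```

Both formulas move the right way with competition: more competitors means bids closer to value, and (in the mirrored version) a thinner markup, which is 11.1% at N = 10.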

2 Likes

Oh, that's pretty cool actually. To me, this setup seems to be neither a Keynesian beauty contest nor a first-price sealed-bid auction, simply because in the beauty contest you are shooting for the mean, and in those auctions you end up paying the bid you make, so you're incentivized to bid low. In this game you receive the bid you make, so you're incentivized to bid high. The reason I think this is important is that going low has a floor (zero), while going high depends on your internal utility for the thing on auction, with no real upper limit.

If anything the closest set-up to this game I could find with a quick search is this question on a long dead forum online.

I'm no economist by training, but I don't immediately see how the Nash equilibrium could be anything other than a zero margin with at least 2 non-collusive players. As in, reducing the margin is always a dominant strategy, all else being equal, in a cheapest-wins market. Taking that to the extreme would mean a margin of zero (infinitesimally). Though I might be missing something, since I'm no theorist.
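That undercutting argument can be made concrete with a toy best-response loop (identical cost estimates assumed; the starting margins and the 1-point undercut step are arbitrary):

```python
# Cheapest-wins with identical cost estimates: the currently most
# expensive player's best response is to slip just below the cheapest
# margin, so repeated best responses drive every margin to zero.
margins = [0.30, 0.25, 0.20]              # arbitrary starting margins
for _ in range(100):
    i = max(range(len(margins)), key=margins.__getitem__)  # priciest player
    margins[i] = max(min(margins) - 0.01, 0.0)             # undercut by 1pt
print([round(m, 2) for m in margins])     # -> [0.0, 0.0, 0.0]
```

The collapse to zero of course hinges on everyone sharing the same cost estimate, which is exactly the caveat raised in the next reply.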

That's why I reversed the numerator/denominator. The first-price sealed-bid auction Nash equilibrium price would be (N-1)/N, but for us it would be N/(N-1). In any case, the equilibrium is not "no adjustment", but I also can't really explain why. It intuitively makes sense, though, that the more competitors there are in our market, the lower the profit margin you should be able to charge. N/(N-1) satisfies this.

1 Like

@alfarzan

“Taking that to the extreme would mean a margin of zero (infinitesimally).”

For this, the competitors would need to share their definition of "margin"… so they would need to have the same cost estimates for each policy.
If they don't, the result seems far less obvious to me.

1 Like

Yep that’s a good point

The intuition I’m following is that:

1. In claim estimation there is a ground truth: who made a claim and by how much.
2. There is an upper limit to predictability in the data, which most models seem to hit (RMSE ~ 500)

These two points in combination make me think that, actually, many models have quite similar cost estimates, in which case a "margin" makes sense. In other words, there should be a healthy level of correlation between the claim estimates (and hence the prices offered on each policy) made by different models.
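A tiny invented example of that intuition: if each model's prediction is "the shared learnable part of the cost" plus its own model-specific error, and that error is small relative to the shared signal, then any two models' estimates come out highly correlated. All the distributions below are made up purely to illustrate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Shared predictable cost signal plus small idiosyncratic model errors.
# Numbers are illustrative only, not fitted to the competition data.
n = 20_000
signal = 5 * rng.lognormal(3.0, 1.0, n)      # learnable part of the cost
model_a = signal + rng.normal(0, 60, n)      # two near-ceiling models
model_b = signal + rng.normal(0, 60, n)

corr = np.corrcoef(model_a, model_b)[0, 1]
print(f"correlation between the two models' estimates: {corr:.2f}")
```

Shrink the shared signal relative to the per-model error (the niche-specialist scenario below) and the correlation falls apart, along with the idea of a common "margin".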

But if they don’t have this correlation and for example, each model is good at predicting a particular niche, then all bets are off as you say.

@alfarzan this is interesting; in another thread ("Asymmetric" loss function?) we discussed how a player's optimal margin strategy depends on the quality of their own estimate; as @Calico suggested, more uncertainty in the estimate probably leads to higher optimal margins.
Here you take the opposite approach and conclude that the margin should depend not on the variation of your own estimate but on the variation of your competitors' estimates…
And in both cases, more uncertainty leads to optimal strategies with higher margins.

This topic definitely deserves more serious investigation!

3 Likes