I have been a pricing actuary for a few years and can share what I saw across different companies. I may lack objectivity here, sorry in advance.
Here is what I see for risk modeling in pricing:

GLMs (Generalized Linear Models): prediction(X) = g^{-1}(\sum_d \beta_d X_d), with interactions added if relevant. g is often a log, so g^{-1} is an exponential, leading to a multiplicative formula.
Here all the effects are linear, so of course it does not work well as soon as one wants to capture nonlinear effects, such as on an age variable. Below are two examples, one good and one bad, with the observed values in purple and the model in green:
It is possible to capture nonlinearity by "extending" the set of input variables, for instance by adding transformed versions of the variables to the dataset: one would use age, age^2, age^3, or other transformations, to capture nonlinearities. This is very old-school, tedious, error-prone, and lacks flexibility.
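To make the "extended GLM" idea concrete, here is a minimal sketch with scikit-learn's `PoissonRegressor` (log link, so the model is multiplicative). The data is entirely made up for illustration: a U-shaped claim frequency in driver age, which a single linear age term cannot capture but manually added polynomial columns can.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# toy data (hypothetical numbers): claim frequency is U-shaped in driver age
rng = np.random.default_rng(0)
age = rng.uniform(18, 80, 5000)
true_freq = 0.05 + 0.0002 * (age - 45) ** 2
claims = rng.poisson(true_freq)

# plain GLM: a single linear age term cannot capture the U-shape
glm_linear = PoissonRegressor(alpha=1e-6).fit(age.reshape(-1, 1), claims)

# "extended" GLM: manually add age^2 and age^3 columns (age is rescaled
# to keep the solver well-conditioned) -- the old-school way of
# capturing nonlinearity inside a linear model
a = age / 100.0
X_poly = np.column_stack([a, a**2, a**3])
glm_poly = PoissonRegressor(alpha=1e-6).fit(X_poly, claims)
```

The polynomial version fits the U-shape much better, but as said above, choosing and maintaining these transformations by hand is tedious and fragile.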

GAMs (Generalized Additive Models): prediction(X) = g^{-1}(\sum_d f_d(X_d)): the predictions are the sum of nonlinear effects of the different variables; you can enrich the approach by including interaction terms f_{d,e}(X_d, X_e) if relevant.
The very strong point of this approach is that it is transparent (the user can directly look into the model, decompose it, and understand it, without relying on indirect analyses such as PDP, ICE, or ALE plots) and easy to put into production (the models are basically tables). So they are a powerful tool to prevent adverse selection while ensuring the low-risk segments are well priced, and are often requested by risk management or regulators.
However, these models were often built manually (the user selects which variables are included and what shape the functions take: polynomial, piecewise linear, step functions, ...), either through proprietary software or programming languages (e.g. splines in Python / R). For this reason GAMs are often opposed to ML methods and suffer from a bad reputation.
Newer approaches allow the creation of GAMs through machine learning while keeping the GAM structure (I believe Akur8 leads the way here, but I may lack objectivity as I work for this company). The idea is that the ML algorithm builds the optimal subset of variables and the shapes of the f_d functions to provide a parsimonious and robust model while minimizing the error, removing all the variables or levels that do not carry significant signal. The user runs grid searches to test different numbers of variables / robustness levels and picks the "best one" for their modeling problem, "best one" meaning the model that maximizes an out-of-sample score over a k-fold, subject to several more qualitative sanity checks from the modeler.
For instance, below is a GAM fitted on a nonlinear variable (driver age):

Tree-based methods (GBMs, RFs, ...): we all know these well; they are associated with "machine learning", there are very good open-source packages (scikit-learn, XGBoost, LightGBM, ...), and it is relatively simple to use them to build a good model. The drawback is that they are black boxes, meaning the models can't be directly apprehended by a user, so they need to be simplified through the classic visualization techniques to be (partially) understood. For instance, below is an ICE plot of a GBM: the average trend is good, but some individual curves, e.g. the one in bold, are dubious:
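The kind of ICE analysis described above can be reproduced with scikit-learn's inspection tools. A minimal sketch on made-up data (variable names and numbers are purely illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

# toy data (hypothetical): cost depends on driver age and engine power
rng = np.random.default_rng(2)
n = 1500
age = rng.uniform(18, 80, n)
power = rng.uniform(50.0, 200.0, n)
y = 0.0004 * (age - 45) ** 2 + 0.002 * power + rng.normal(0.0, 0.05, n)
X = np.column_stack([age, power])

gbm = GradientBoostingRegressor(random_state=0).fit(X, y)

# ICE: one curve per observation, obtained by varying `age` (feature 0)
# over a grid while holding the other features fixed; averaging the ICE
# curves gives the PDP, i.e. the "average trend" of the black box
res = partial_dependence(gbm, X, features=[0], kind="both")
ice_curves = res["individual"][0]  # shape: (n_observations, n_grid_points)
avg_trend = res["average"][0]      # the PDP curve
```

Individual ICE curves that deviate wildly from the average trend are exactly the dubious cases mentioned above: they reveal behavior of the black box that the PDP alone would hide.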

No models: a surprisingly high number of insurance companies do not have any predictive model to compute the expected cost of clients at underwriting, and don't know their loss ratios in advance. They track the loss ratios (claims paid / premium earned) and conversion rates on different segments, trying to correct course if things go too far off track.
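That monitoring approach boils down to a simple aggregation. A toy example with pandas, using entirely made-up segment names and amounts:

```python
import pandas as pd

# toy portfolio (hypothetical numbers): monitor loss ratio =
# claims paid / premium earned per segment, instead of predicting
# each client's cost upfront
policies = pd.DataFrame({
    "segment":        ["young", "young", "senior", "senior", "senior"],
    "premium_earned": [900.0, 1100.0, 600.0, 650.0, 700.0],
    "claims_paid":    [1200.0, 400.0, 300.0, 500.0, 200.0],
})

by_segment = policies.groupby("segment")[["premium_earned", "claims_paid"]].sum()
by_segment["loss_ratio"] = by_segment["claims_paid"] / by_segment["premium_earned"]
print(by_segment)
```

The obvious weakness is that this is purely retrospective: a segment's loss ratio only tells you it was mispriced after the claims have come in.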
I have seen many firms in Europe, where a very large majority of insurers use GAMs; most of them use legacy solutions and build these GAMs manually, while a growing share is switching to ML-powered GAMs (thanks to us).
There is a lot of confusion, as insurers tend to use the term "GLM" to describe both GLMs and GAMs.
A minority of insurers, usually smaller and more traditional ones, use pure GLMs or no models at all.
Many insurers considered using GBMs in production but did not move forward (too much to lose, no clear gain), or leverage GBMs only to get insights relevant for the manual production of GAMs (for instance identifying the variables with the highest importance, or interactions). I have heard rumors that some people did move forward with GBMs, but nothing very convincing.
In the US the situation is a bit less clear, with a larger share of insurers using genuinely old linear GLMs with data transformations, some using GAMs, and rumors of GBMs, either used directly or as scores that enter GLM formulas. The market is heavily regulated and varies strongly from one state to another, leading to different situations. In Asia (Japan, Korea, ...) I have met people starting to use GAMs; that market is also very regulated.