Actuaries are always looking for new techniques to help quantify insurance risks. I have become accustomed to a two-stage process for determining the range of reasonable estimates. That is, after the first step of producing a central point estimate, we either run a bootstrap to obtain a full stochastic distribution of outcomes, or deterministically vary the key assumptions (e.g., the speed of development patterns) to get high and low reasonable estimates, to which we can then fit a loss distribution such as the lognormal. As tried and tested as this process is, there is a new kid on the insurance modeling block that I have been advocating as an alternative: Bayesian Markov Chain Monte Carlo (MCMC) models.
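To make that second stage concrete, here is a minimal sketch of fitting a lognormal to high and low reasonable estimates. The figures are hypothetical, and treating the low and high estimates as the 10th and 90th percentiles of ultimate losses is my illustrative assumption, not a fixed convention.

```python
# Minimal sketch: solve for lognormal parameters given assumed percentiles.
import numpy as np
from scipy.stats import norm, lognorm

low, high = 80.0, 130.0      # hypothetical low/high reasonable estimates ($m)
p_low, p_high = 0.10, 0.90   # assumed percentiles those estimates represent

# ln(X) is normal, so two percentiles of ln(X) pin down mu and sigma.
z_low, z_high = norm.ppf(p_low), norm.ppf(p_high)
sigma = (np.log(high) - np.log(low)) / (z_high - z_low)
mu = np.log(low) - sigma * z_low

fitted = lognorm(s=sigma, scale=np.exp(mu))
print(f"mu={mu:.3f}, sigma={sigma:.3f}, "
      f"median={fitted.median():.1f}, mean={fitted.mean():.1f}")
```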
In fact, the “new kid” is not really that new. MCMC models have been available for a while now (including in our reserving software, ResQ); they just have not been widely employed. There are good reasons for this slow adoption, but the obstacles are coming down.
From a “hard” perspective, MCMC models require computationally intensive simulation, which historically put them beyond the IT infrastructure available to most actuarial departments. Advances in technology (including the rise of cloud computing and virtual grids) are making this less of a barrier for insurers, large and small alike.
Meanwhile, from a “soft” perspective, the younger generation of actuaries is receiving training on MCMC models through their exams (e.g., the Casualty Actuarial Society (CAS) Modern Actuarial Statistics II and Exam 7 syllabi). As a larger proportion of actuaries comes to understand MCMC, we can expect the popularity of such models to grow.
However, seasoned actuaries, who tend to be in managerial positions, have likely never studied MCMC models during their educational journeys, unless they come from a strong statistics background. Less understanding means less adoption. To address this knowledge gap, I recently co-presented a primer on MCMC for senior actuaries at the CAS Spring Meeting.
One major advantage of Bayesian MCMC models is that they let the data “speak,” but only as “loudly” as it is credible. In an actuarial context, this may mean an objective way of blending average link ratios (derived from the data) with industry benchmarks (confidence in which is expressed through the assumed variance of the prior distribution). The higher the quality of the data (i.e., greater volume and smaller sample variance), the more the model output (a posterior distribution) will gravitate toward the data, and vice versa.
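To illustrate the idea (this is a toy example, not any particular reserving model), the sketch below blends hypothetical observed link ratios with an industry benchmark prior using a simple random-walk Metropolis sampler. The benchmark mean, prior standard deviation, and observation noise are all made-up figures.

```python
# Minimal sketch of credibility-style blending via MCMC (random-walk Metropolis).
import numpy as np

rng = np.random.default_rng(42)

obs = np.array([1.45, 1.52, 1.48, 1.55, 1.50])  # hypothetical observed link ratios
obs_sd = 0.08                                    # assumed (known) sampling std dev

prior_mean, prior_sd = 1.35, 0.05                # industry benchmark and confidence in it

def log_posterior(theta):
    # Normal prior (benchmark) times normal likelihood (data), on the log scale.
    log_prior = -0.5 * ((theta - prior_mean) / prior_sd) ** 2
    log_lik = -0.5 * np.sum(((obs - theta) / obs_sd) ** 2)
    return log_prior + log_lik

# Random-walk Metropolis: propose a move, accept with probability min(1, posterior ratio).
theta, samples = prior_mean, []
for _ in range(20_000):
    proposal = theta + rng.normal(0, 0.02)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

posterior = np.array(samples[5_000:])            # discard burn-in
print(f"data mean {obs.mean():.3f}, benchmark {prior_mean:.3f}, "
      f"posterior mean {posterior.mean():.3f} +/- {posterior.std():.3f}")
```

Re-running the sketch with more observations or a smaller sampling variance pulls the posterior mean toward the data average, while a tighter prior standard deviation pulls it toward the benchmark, which is exactly the credibility behaviour described above.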
How does that benefit the actuary and the business overall? Principally, such models provide a deeper understanding of the risks inherent in estimating ultimate losses without taking away the valuable actuarial judgment that an experienced actuary has to offer. The distributions produced are also likely to have wider applications in a re/insurer’s capital modeling and pricing exercises.
Actuarial instincts being what they are, it seems to me only a matter of time before the use of cutting-edge MCMC models in insurance modeling becomes more widespread and a source of competitive opportunity.