
Article series: Mastering the art of model approximation — Part 2

Heeding lessons learned

By Cheryl Angstadt, Karen Grote and Nik Godon | May 31, 2022

The use of actuarial model simplifications is unavoidable and, in most cases, a business imperative. The authors explore some of the inherent risks and ways to minimize them.

In the second of a series of articles looking at the implications of actuarial model simplifications, we look at what experience tells actuaries about some of the inherent risks and ways to minimize them.

The first article outlined that for actuaries and risk managers, who are used to pursuing granularity and accuracy, model simplifications and approximations can be troubling. But as we also noted, their use is unavoidable and, in most cases, a business imperative.

While more robust computing power and memory have significantly reduced historical limitations on modeling speed and granularity, the cost of achieving the greatest speed and accuracy (or “modeling perfection”) may not be affordable.

Like it or not, actuaries will need to continue to deal with simplifications. But there are lessons to be learned that can help them avoid some of the common pitfalls and bolster confidence in using them — as the following two simplified, real-life case studies demonstrate.

Case study 1 — The need to consider the future

Back in the day…

Imagine it is the early 2000s and a life pricing team has just completed the pricing of a new secondary guarantee universal life (UL) product for issue ages 60 and higher. It is designed to have no account value or cash surrender value, in order to achieve the lowest possible secondary guarantee price. Now that the direct pricing has been completed, the pricing team has turned its attention toward putting together a new 90% first dollar quota share yearly renewable term (YRT) reinsurance pool for the product.

Historically the company has dealt with three main reinsurers who have each agreed to continue with their same YRT pricing as the prior UL product: 35% of the Society of Actuaries (SOA) 7580 Select and Ultimate Mortality table. (Remember, it’s the early 2000s!) The pricing team has modeled these reinsurance terms, and they provide a slight gain in the primary pricing measure — Value of New Business (VNB) — using a 30-year period and a discount rate of 11%.

The underwriting and marketing teams now get involved and indicate they would like to add a fourth reinsurer to the pool: NewTableRe. This would give them more facultative outlets, and they have heard good things about this reinsurer.

NewTableRe provides a quote, but it is on a different basis: 49% of the 2001 Valuation Basic Tables (VBT) Select and Ultimate Mortality table. Since the pricing team’s models can only handle one reinsurance table basis — a known limitation — the team does a separate run of its 30-year models with the 2001 VBT and NewTableRe’s proposed 49% rate.

The model results come in, and it turns out the present value over 30 years at 11% of NewTableRe’s rates are almost exactly the same as the current pool, and the overall VNB has only slightly decreased. Given the model limitation (only one reinsurance table basis) and the immaterial difference at issue in the VNB, the pricing team decides to base its model projections on the original pool terms and hands the model over to the valuation team. The pricing team’s documentation does not reflect the addition of NewTableRe to the YRT pool and, more important, doesn’t detail the simplification and the different reinsurance terms for NewTableRe.

Figure 1 compares the two different sets of reinsurance rates and projected reinsurance premiums. As can be seen, rates in the future differ materially, and their impact is masked by the use of an 11% discount rate combined with the pricing focus on a present value measure (VNB).

Figure 1. The divergent effects of modeling based on two separate mortality bases (reinsurance premium rates and projected reinsurance premiums)
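
The masking effect is easy to reproduce. The sketch below, in Python, uses stylized, made-up premium streams rather than the actual 35% SOA 7580 or 49% 2001 VBT rates: the second stream is cheaper over the first 20 years and 50% more expensive thereafter, yet at an 11% discount rate its present value lands within roughly 1% of the first stream’s, even though its undiscounted total is about a quarter higher.

```python
# A minimal sketch of how an 11% discount rate can mask late-duration
# differences. The premium streams are stylized and hypothetical -- they are
# not the actual SOA 7580 or 2001 VBT reinsurance rates from the case study.
import numpy as np

years = np.arange(1, 31)                       # 30-year pricing projection
modeled = 100.0 * 1.09 ** (years - 1)          # stream kept in the pricing model
# "True" pool stream: somewhat cheaper in the first 20 years,
# materially more expensive once the table bases diverge
actual = np.where(years <= 20, 0.80 * modeled, 1.50 * modeled)

def pv(cash_flows, rate=0.11):
    """Present value of end-of-year cash flows at the pricing discount rate."""
    return float(np.sum(cash_flows / (1 + rate) ** years))

print(f"PV at 11%            modeled: {pv(modeled):9,.0f}   actual: {pv(actual):9,.0f}")
print(f"Undiscounted total   modeled: {modeled.sum():9,.0f}   actual: {actual.sum():9,.0f}")
print(f"Year-25 premium ratio (actual / modeled): {actual[24] / modeled[24]:.2f}")
```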

Cut to the present…

It is now 20 years later. The valuation team is performing its quinquennial review of the model. Surprisingly, every pricing assumption has continued to come true, but the valuation team has noticed that the actual reinsurance premiums paid have varied slightly from the modeled reinsurance premiums. Since the actual reinsurance premiums are about 1% lower than the modeled reinsurance premiums, the valuation team decides to use a 99% factor to true up the future projected reinsurance premiums to better match the recent experience.
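
In code terms, the shortcut looks something like the sketch below (again with hypothetical numbers, not the case study’s data): a single factor is derived from the last five observed years and applied to every future modeled premium. Because the factor is below 1, it scales future premiums down slightly, so once the two table bases diverge after year 20 the understatement actually gets a little worse.

```python
# A minimal sketch of the "simple true-up factor" shortcut, with hypothetical
# premium streams: the true pool runs about 1% below the single-table model
# through year 20 and well above it afterwards.
import numpy as np

years = np.arange(1, 31)
modeled = 100.0 * 1.09 ** (years - 1)                            # single-table model
actual = np.where(years <= 20, 0.99 * modeled, 1.50 * modeled)   # true pool terms

# Factor derived from the last five observed years (years 16-20)
recent = (years >= 16) & (years <= 20)
true_up = actual[recent].sum() / modeled[recent].sum()           # ~0.99

trued_up_projection = true_up * modeled

# The factor reproduces the recent past but does nothing to capture the
# post-year-20 divergence -- it scales those years down slightly instead.
shortfall = actual[years > 20].sum() - trued_up_projection[years > 20].sum()
print(f"True-up factor: {true_up:.3f}")
print(f"Years 21-30 premium shortfall vs the true pool: {shortfall:,.0f}")
```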

Unbeknownst to the valuation team, it and the pricing team have committed several cardinal modeling sins:

  • They simply assumed the past would represent the future.
  • They used a simple true-up factor.
  • They did not dig into the results to determine the true cause of the deviation in reinsurance premiums (one of our key lessons from article 1).

If the valuation team had dug into the details, it would have learned:

  • The reinsurance pool has a fourth reinsurer that uses a different mortality table basis.
  • The 2001 VBT-based reinsurance rates begin to differ materially from those based on the SOA 7580 table after year 20.
  • The slightly lower actual reinsurance premiums over the past five years are just a temporary phenomenon.
  • The team needs to update its model to reflect the actual reinsurance pool treaty terms.

Figure 2 shows the materiality of the valuation team’s sins and does not even reflect the 99% true-up factor, which actually makes the problem worse! While the first graph shows that the reinsurance premium rates were consistent in the first 20 years of the projection, the second graph shows the material divergence of premiums after year 20. By using the simplification of one table, the team has grossly understated the future reinsurance premiums.

Figure 2. The true impact of the use of different mortality tables on reinsurance premiums

Lessons to learn…

This case study, which as we noted is a simplified version of a real-life example, provides several key lessons:

  • Simplifications are unavoidable and may be driven by system limitations.
  • Proper documentation is critical! If a simplification is used, you must properly document the simplification and the rationale for it, and communicate its existence and limitations to all users of your model. Understand the likely impact of the simplification on results and document triggers that might necessitate reviewing the simplification sooner.
  • Regularly review your simplifications to make sure they don’t become stale.
  • If your modeling capabilities have increased (e.g., more than one reinsurance table basis can be used), update your model to remove simplifications as soon as practical.
  • If your model does not validate well, dig into the issue and try to find the underlying cause. Do not use a simple true-up factor to gloss over a validation issue.
  • When using present value measures, it is critically important to also look at projected income statements, cash flows and balances to see whether future years show more material differences.
  • High discount rates in present values can hide the materiality of future differences and the impact of simplifications.
  • Heavy reliance on only one pricing measure (e.g., Internal Rate of Return) is not advised. Generally, each pricing method has a weakness; hence, it is best to consider more than one.
  • Consider the projection period in your model and whether all material cash flows have run off. Cash flows that are immaterial on a present value basis today may not be immaterial in future years.
  • The past will not always be a good representation of the future!

Case study 2 — Be wary of using selective single cells

In some cases, the model platform is open — meaning there are few, if any, model limitations. In a perfect modeling world, where time is of no consequence, actuaries could build a model as accurate as they wish. In the real world, however, time is often a main consideration and becomes a limitation in and of itself.

Such was the case in this example, where a model needed to be built to support a sale with a tight and unmovable deadline.

The block of business to be sold included policies that were reinsured across a large number of treaties. For simplicity, our example is limited to six treaties (in reality, there were many more). Figure 3 shows the distribution of reserves by treaty.

Figure 3. Distribution of statutory reserves by treaty

Treaty block    Percent of total statutory reserve
1               2%
2               4%
3               30%
4               3%
5               46%
6               15%
Total           100%

Model build/validation approach

While the seller had a model it used to project the business, the buyer wanted to use its own modeling software and therefore needed to build its own models for the business. Luckily, the buyer’s modeling software included much of the basic life product functionality needed to project the business; however, a large number of treaty-specific features still needed to be built into the model.

From the inforce data file provided, the team got to work selecting single data points, both to understand the additional model needs (i.e., customizations and changes needed in the buyer’s model) and to ensure that the cash flows produced by the buyer’s model validated well against the cash flows the seller was producing with its model. The buyer selected 10 single cells, focusing on capturing all material policy features of the major treaties, including product type, smoking/risk class and recapture provisions.

Single cells were validated by comparing cash flows from the new model to those provided by the ceding company on a monthly basis. By not using a present value calculation to validate cash flows and by examining each month’s results for each cash flow, the team felt confident it would not be missing anything. There was also a very low tolerance for discrepancies; the team dug into differences as small as 0.5% of projected cash flows 40 years into the projection!
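
A check of this kind can be scripted fairly simply. The sketch below is a hypothetical illustration (the frame and column names are assumptions, not the team’s actual extracts): it joins the buyer’s and ceding company’s monthly cash flows cell by cell and returns every combination whose relative difference breaches a tight tolerance such as 0.5%.

```python
# A minimal sketch of a monthly, cash-flow-by-cash-flow validation check.
# Frame and column names (cell_id, month, cash_flow_type, amount) are
# hypothetical -- real model output and ceding company extracts will differ.
import pandas as pd

def flag_validation_breaks(buyer: pd.DataFrame,
                           seller: pd.DataFrame,
                           tolerance: float = 0.005) -> pd.DataFrame:
    """Return rows where the buyer's and seller's projected cash flows differ
    by more than the tolerance, keyed by cell, month and cash flow type."""
    keys = ["cell_id", "month", "cash_flow_type"]
    merged = buyer.merge(seller, on=keys, suffixes=("_buyer", "_seller"))
    denom = merged["amount_seller"].abs()
    merged["rel_diff"] = (merged["amount_buyer"]
                          - merged["amount_seller"]).abs() / denom.where(denom > 0)
    breaks = merged[merged["rel_diff"] > tolerance]
    return breaks.sort_values("rel_diff", ascending=False)
```

Sorting by the size of the break makes it straightforward to work down from the largest discrepancies, which is essentially what the team did by hand at the single-cell level.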

When complete, the results for the single cells matched almost to the penny, paving the way to run the block in aggregate. Since some less material product features were not modeled, the team knew not to expect the validations for the aggregate block to tie to the penny, but the discrepancy was far beyond expectations. While the target had been to be within 1% on an aggregate basis, the validation was off by more than 10%. Annual cash flows looked even worse. Consequently, the team started to break results down, including by treaty, which showed that most treaties were validating well while others validated abysmally.

The error became apparent: Given the short time frame, the team had concentrated on selecting cells from the aggregate block based on materiality and, in doing so, had missed entire treaties whose unique features were not reflected in the model at all.

In fact, the single cells were drawn entirely from treaties 3, 5 and 6 in Figure 3, which covered over 90% of the initial statutory reserve. While treaties 1, 2 and 4 represented very small percentages of the block, omitting entire treaties with unique/different features was enough to throw the aggregate validation off. Even if the aggregate had validated well, the purpose of the model would need to be considered. If results were ever needed on a treaty level in the future, or if the mix of business shifted over time, the results could have been materially off and led to poor confidence in the model and/or modeling team. Imagine a scenario in which treaty 5 was recaptured; previously immaterial treaties and their features would have an even larger impact.
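
A simple coverage check run before any validation work starts could have caught the gap. The sketch below is hypothetical (the inforce and cell-selection frames and their column names are assumptions): it summarizes statutory reserve by treaty and flags every treaty that has no selected validation cell, however small its reserve share.

```python
# A minimal sketch of a pre-validation coverage check: does every treaty in
# the inforce file have at least one selected single cell? Frame and column
# names (treaty_id, stat_reserve) are hypothetical.
import pandas as pd

def treaty_coverage(inforce: pd.DataFrame, selected_cells: pd.DataFrame) -> pd.DataFrame:
    """Summarize statutory reserve by treaty and flag treaties with no
    selected validation cell."""
    by_treaty = inforce.groupby("treaty_id", as_index=False)["stat_reserve"].sum()
    by_treaty["pct_of_reserve"] = by_treaty["stat_reserve"] / by_treaty["stat_reserve"].sum()
    by_treaty["has_cell"] = by_treaty["treaty_id"].isin(set(selected_cells["treaty_id"]))
    return by_treaty.sort_values("pct_of_reserve", ascending=False)

# Any row with has_cell == False is a treaty whose unique features the
# single-cell validation cannot exercise -- exactly the gap in this case study.
```

In this case, such a report would have flagged treaties 1, 2 and 4 immediately, before any validation time had been spent.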

While the tight time frame led the team to take a shortcut on single cell selection, in the end, this cost the team more time and effort to investigate the validation issues.

Lessons to learn…

  • Proper planning and understanding of the business to be modeled at the start of the project can actually save time by reducing the effort needed to investigate and remediate differences.
  • Time spent on very low validation tolerances for parts of the business may not be as valuable as time spent validating additional business.
  • Think about which groupings within an aggregate business should have to stand on their own and validate at those levels in addition to the aggregate.
  • Make sure single cells cover each group and the material features of each block, in addition to material features at an aggregate level.
  • If the model will be used in the future, consider how the business might shift in the future (for example, in this case, due to a recapture).

Experience is a great teacher

These two case studies give a flavor of what can go wrong with model simplifications. Much as it would be nice to say these are the only pitfalls, they are not — as we will explore further in a forthcoming article.

That’s not to say that actuaries can be expected to have all the answers and foresee all eventualities when developing models. But as a profession, we should certainly be looking to learn from past mistakes and misjudgments.


The third article in the series will examine additional examples of model simplifications and the risks entailed in them as well as the importance of appropriate model controls and governance linked to simplifications.
