
Article series: Mastering the art of model approximation — Part 3

The critical importance of model validation

By Cheryl Angstadt, Nik Godon and Karen Grote | November 11, 2022

Model validation, while governed by some core guiding principles, shouldn’t be a one-size-fits-all process for different insurance applications.

Part 1 of this series opened by noting that there are no hard and fast rules on how granular or accurate actuarial models should be. Part 2 discussed some examples of real-life model simplifications and problems that resulted from those simplifications. In this third part of our series, we examine how the guiding principles around model validation apply across different insurance applications and how they can help ensure models are fit for purpose and accurate.

ASOP 56

The basis for model validation for all U.S. insurers is Actuarial Standard of Practice (ASOP) 56, Modeling. ASOP 56 came into effect on October 1, 2020, and provides guidance to actuaries in any practice area. It covers all aspects of modeling, including model use, and contains several sections relating to the validation of models. This is the actuarial starting point. Some of its key principles include:

Section 3.6.1 Model Testing states:

For a model run or set of model runs generated at one time or over time that is to be relied upon by the intended user, the actuary should perform sufficient testing to ensure that the model reasonably represents that which is intended to be modeled. Model testing may include the following:

  a. reconciling relevant input values to the relevant system, study, or other source of information, addressing and documenting the differences appearing in the reconciliation, if material;
  b. checking formulas, logic, and table references;
  c. running tests of variations on key assumptions to test that changes in the output are consistent with expectations given the changes in the input (i.e., sensitivity testing); and
  d. reconciling the output of a model run to prior model runs, given changes in data, assumptions, formulas, or other aspects of the model since the prior model run.

As such, 3.6.1.a refers to a key piece of what is commonly known as static validation. When completing a static validation, the actuary compares key statistics or calculations as of the model/valuation date against the source systems to ensure that the model has complete and accurate input data (e.g., policy count, face amount, account value) as well as accurate initial calculations of balances such as the statutory reserve and cash surrender value.
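To make this concrete, a static validation of this kind can be scripted. Below is a minimal sketch, assuming hypothetical field names, totals and a 0.1% tolerance (none of which are prescribed by ASOP 56), that reconciles model input totals against an administrative system extract:

```python
# Minimal static validation sketch: reconcile model point totals against an
# administrative system extract as of the valuation date. Field names,
# amounts and the 0.1% tolerance are illustrative assumptions only.

MODEL_TOTALS = {"policy_count": 10_482, "face_amount": 2_148_100_000.0,
                "account_value": 388_450_000.0}
ADMIN_TOTALS = {"policy_count": 10_482, "face_amount": 2_152_100_000.0,
                "account_value": 388_450_000.0}

TOLERANCE = 0.001  # flag relative differences above 0.1% of the admin value

def static_validation(model, admin, tolerance=TOLERANCE):
    """Compare each key statistic and report the relative difference."""
    for field in admin:
        rel = model[field] / admin[field] - 1 if admin[field] else 0.0
        status = "OK" if abs(rel) <= tolerance else "INVESTIGATE"
        print(f"{field:>15}: model={model[field]:>17,.0f} "
              f"admin={admin[field]:>17,.0f} rel_diff={rel:+.4%} {status}")

static_validation(MODEL_TOTALS, ADMIN_TOTALS)
```

Differences flagged this way should then be addressed and, if material, documented, as 3.6.1.a requires.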

While validating such values as cash value or statutory reserves, you are also addressing 3.6.1.b by confirming that those formulas, and the tables that feed the calculations, are working. Furthermore, 3.6.1.d reflects a common and recommended practice for new models: the actuary should first attempt to replicate prior model runs to ensure a consistent and accurate starting point.

Section 3.6.2 Model Output Validation contains the following:

The actuary should validate that the model output reasonably represents that which is being modeled. Depending on the intended purpose, model output validation may include the following:

a. testing, where applicable, preliminary model output against historical actual results to verify that modeled output would bear a reasonable relationship to actual results over a given time period if input to the model were set to be consistent with the conditions prevailing during such period;

d. running tests of variations on key assumptions to test that changes in the output are consistent with the expectations given the changes in the input; and

e. comparing model output to those of an alternative model(s), where appropriate.

Section 3.6.2.a is commonly called dynamic validation: it compares model projections over early time periods with recent historical actual results to see whether the projections are consistent with recent experience and trends. Backcasting, where the model is used to produce results for prior periods, can also be used to assess how model results compare with at least three years of historical actuals.
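As a rough illustration, the sketch below checks whether the first projected year is a plausible continuation of the trend in recent historical actuals. The claim amounts, three-year history and 5% flag threshold are illustrative assumptions, not standards:

```python
# Dynamic validation sketch: compare early projected values with recent
# historical actuals. All figures and the 5% flag are illustrative.

historical_claims = [41.2, 43.8, 44.5]  # last three actual years ($M)
projected_claims = [45.1, 46.0, 47.2]   # first three projected years ($M)

# Average annual growth implied by the three historical points.
avg_growth = (historical_claims[-1] / historical_claims[0]) ** 0.5 - 1
expected_next = historical_claims[-1] * (1 + avg_growth)
deviation = projected_claims[0] / expected_next - 1

print(f"Implied historical growth: {avg_growth:.2%}")
print(f"Trend-expected next year:  {expected_next:.1f}")
print(f"First projected year:      {projected_claims[0]:.1f} "
      f"(deviation {deviation:+.2%})")
if abs(deviation) > 0.05:
    print("Deviation exceeds 5% -- investigate and document the driver.")
```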

Section 3.6.2.e, which involves having a separate model to validate calculations, is also a fairly common practice. You may not perform this type of model validation with each use of the model, but you may on a periodic basis (e.g., every few years or when substantial model changes are made). In contrast, you would typically perform static validation and some form of dynamic validation each time a model is used.

Sections 3.6.1.c and 3.6.2.d both cover sensitivity testing. While not necessarily performed each time a model is used, sensitivity tests help validate any changes that you make, ensuring that formulas are working as intended. Sensitivity testing can also set expectations of how results would change if experience differs from assumptions and provide ranges of impacts from future assumption changes.
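The sketch below illustrates the idea with a toy present-value calculation: shock a key assumption in each direction and confirm the output moves as expected. The projection formula and shock sizes are purely illustrative:

```python
# Sensitivity testing sketch: shock a key assumption and confirm the output
# moves in the expected direction and by a plausible magnitude. The toy
# projection and shock sizes are illustrative assumptions.

def pv_claims(mortality_scalar, discount_rate=0.04, base_claims=100.0,
              years=20):
    """Toy projection: level expected claims, scaled mortality, discounted."""
    return sum(base_claims * mortality_scalar / (1 + discount_rate) ** t
               for t in range(1, years + 1))

base = pv_claims(1.00)
for label, scalar, expect_up in [("mortality +10%", 1.10, True),
                                 ("mortality -10%", 0.90, False)]:
    shocked = pv_claims(scalar)
    change = shocked / base - 1
    direction = "OK" if (change > 0) == expect_up else "UNEXPECTED"
    print(f"{label}: base={base:,.1f} shocked={shocked:,.1f} "
          f"change={change:+.2%} direction {direction}")
```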

In practice, you can use all these validation methodologies together to ensure the accuracy of models. But their specific application or use may differ based on the needs and purpose of the model, as these examples of common actuarial modeling efforts demonstrate.

Pricing: The heart of cash flow and future profits

Pricing aims to ensure that an insurance company is adequately compensated for the business it sells. It is therefore key that, for each product, the modeled cash flows are reasonable and accurately reflect future profit expectations. As a result, validations are imperative.

A common first step for pricing models is to replicate a prior model run for the current version of a product that is being repriced or for a similar product if it’s something new. There are typically several key profitability measures — such as internal rate of return, return on investment, value of new business (VNB) and profit margin — that should be duplicated to ensure the correct starting point. You can take the profitability measures from a well-documented pricing memo or from recent quarterly profitability runs for companies that perform regular VNB reporting. At WTW, we recommend replicating more than just one profitability measure to ensure an accurate starting point.
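As an illustration, the sketch below recomputes two profitability measures from a hypothetical vector of distributable earnings and compares them with assumed pricing memo targets. The cash flows, hurdle rate and memo values are all invented for the example; a real replication would use the documented pricing basis:

```python
# Pricing replication sketch: recompute profitability measures from a
# product's distributable earnings and compare them with documented pricing
# memo values. The cash flows, hurdle rate and memo targets are illustrative
# assumptions, not a real pricing basis.

cash_flows = [-250.0, 40.0, 45.0, 50.0, 55.0, 60.0, 65.0]  # years 0..6 ($K)
premium_pv = 600.0  # assumed present value of premiums for the margin
hurdle_rate = 0.05
memo = {"irr": 0.0647, "profit_margin": 0.0213}  # assumed memo targets

def npv(rate, cfs):
    """Net present value of cash flows indexed from time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=-0.5, hi=1.0, tol=1e-8):
    """Bisection search for the rate at which NPV crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cfs) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

results = {"irr": irr(cash_flows),
           "profit_margin": npv(hurdle_rate, cash_flows) / premium_pv}
for name, modeled in results.items():
    print(f"{name}: model={modeled:.4f} memo={memo[name]:.4f} "
          f"diff={modeled - memo[name]:+.4f}")
```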

While static validation is less common in pricing exercises, since an assumed pricing distribution is commonly used rather than actual seriatim sales, pricing models used for quarterly profitability reporting would benefit from this validation step. It helps validate that the new business data being fed into the model — such as new business issued volumes, issue ages and face amounts — match the administrative systems.

Key elements of pricing models are also typically validated against alternative models. For products such as universal life and variable annuities with guaranteed withdrawal riders, it is common to validate the calculation of projected product values (such as account value, shadow account value and notional guaranteed balance) against a spreadsheet or illustration program. Validating projected statutory reserve balances against valuation software is also critical to ensure proper earnings projections.
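A minimal sketch of such an alternative-model check for projected account values follows, assuming a deliberately simplified roll-forward (add premium, deduct charges, credit interest); real product mechanics are considerably more involved:

```python
# Alternative-model sketch: independently recompute a projected account
# value path (as a spreadsheet replication would) and compare it with the
# pricing model's output. All mechanics and values are illustrative.

model_output_av = [10_000.00, 10_374.00, 10_761.99, 11_164.49]  # years 0..3

def replicate_av(av0, premium, charge_rate, credited_rate, years):
    """Simple roll-forward: add premium, deduct charges, credit interest."""
    path, av = [av0], av0
    for _ in range(years):
        av = (av + premium) * (1 - charge_rate) * (1 + credited_rate)
        path.append(av)
    return path

replica = replicate_av(av0=10_000.0, premium=0.0, charge_rate=0.012,
                       credited_rate=0.05, years=3)
for t, (m, r) in enumerate(zip(model_output_av, replica)):
    print(f"year {t}: model={m:>10,.2f} replica={r:>10,.2f} diff={m - r:+.2f}")
```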

The actuary should then perform an attribution analysis for each model change. Regardless of the nature of the change (e.g., assumption change, updated new business distribution, product charge or feature adjustment), it is important to validate the accuracy of the change and that the impact on profitability is consistent with expectations. You should be able to roll forward the original model to the current model and explain the impact of each change made.
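A step-wise roll-forward of this kind might be organized as in the sketch below, where the step names and impacts are illustrative and the residual line checks that the itemized changes fully explain the movement:

```python
# Attribution sketch: roll a profitability measure forward from the prior
# model to the current model one change at a time, so each step's impact
# can be validated in isolation. Step names and impacts are illustrative.

prior_vnb = 12.4    # value of new business from the prior model ($M)
current_vnb = 13.7  # value reported by the current model ($M)
steps = [
    ("Mortality assumption update", -0.6),
    ("New business mix update", +1.1),
    ("Product charge increase", +0.8),
]

running = prior_vnb
print(f"{'Prior model':32s}{'':>8s}{running:8.1f}")
for name, impact in steps:
    running += impact
    print(f"{name:32s}{impact:+8.1f}{running:8.1f}")

unexplained = current_vnb - running
print(f"{'Unexplained residual':32s}{unexplained:+8.1f}")
# A material residual signals a change whose impact has not been isolated.
```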

Valuation: A foundation of financial reporting

Valuation exercises determine the current liability/reserve and intangible (e.g., deferred acquisition cost) amounts across different reporting bases. These balances are key components of balance sheets, and the changes in these items feed into income statements — the core components of the financial statements produced for key users that help explain the performance of insurance companies and their financial strength/solvency. As a result, valuation accuracy is of critical importance, as are the models used for these purposes.

Static validation is central to valuation. Ensuring that liabilities are calculated for all business/policies is critical to ensure accurate financials and performance. This includes validating all key data inputs into the valuation models, such as policy counts, face amount, account value and policy characteristics.

Historically, many valuations used formulaic approaches, validating against separate sample calculations in a spreadsheet. That is still the case for a good portion of the life insurance business, but more and more, business valuation depends on projection models of some kind. As a result, the validation exercise has become more complex, and more effort is now required to ensure accuracy of those models.

More and more valuations depend, or soon will, on best estimate assumptions that are reviewed at least annually. This requires a robust process around the validation of assumption change impacts, one in which sensitivity test results help the actuary gauge whether the assumption change impact is reasonable.
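One simple way to use a stored sensitivity result as a baseline is to scale it to the size of the actual assumption change, as in the hypothetical sketch below; real impacts are rarely perfectly linear in the assumption, so this is a rough gauge only:

```python
# Sketch: gauge an assumption change impact against a stored sensitivity
# run by linear scaling. All figures are illustrative assumptions.

sensitivity_shock = 0.10    # stored run: lapse rates +10%
sensitivity_impact = -18.0  # reserve change from that run ($M)

actual_change = 0.04        # this year's update: lapse rates +4%
model_impact = -6.5         # impact reported by the updated model ($M)

expected_impact = sensitivity_impact * actual_change / sensitivity_shock
ratio = model_impact / expected_impact

print(f"Expected from sensitivity baseline: {expected_impact:+.1f}")
print(f"Model-reported impact:              {model_impact:+.1f}")
verdict = "plausible" if 0.5 <= ratio <= 2.0 else "investigate"
print(f"Ratio of actual to expected: {ratio:.2f} ({verdict})")
```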

As was the case for pricing, attribution analysis is an important tool in valuation. In fact, it is mandatory for many reporting bases to have some form of roll forward of the prior reserve balance to the current balance, with several key attribution steps being common, including expected change, reserve release from death and assumption change impact. A step-wise process that assesses the impact of each step/change is key to ensuring the accuracy of calculations.

Asset adequacy testing: Clarity about future potential liabilities 

Simply put, asset adequacy testing ensures that the assets backing a block of business are sufficient to support its liability cash flows; therefore, for each block of business tested, you need to:

  • Verify that the modeled block of liabilities and assets accurately reflects those in force as of the valuation date
  • Check that both liability and asset cash flows are reasonable based on historical data and expectations for the future

Consequently, both static and dynamic validations are imperative.

When completing a static validation for asset adequacy testing, it is especially important to ensure initial base statutory reserve levels (i.e., not including any needed additional reserves) are equal to the reported values. If base levels are not correct, it could lead to understating or overstating the additional reserve needed. While a true-up could overcome small differences (less than or equal to 1%), actuaries need to be mindful of the principles outlined in parts 1 and 2 of this series — specifically, that ideally the driver of any difference should be understood, and that true-ups should be used cautiously and with consideration of their potential life span.
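The sketch below illustrates such a check against the 1% threshold discussed above; the block names and reserve amounts are illustrative:

```python
# Sketch: check starting base statutory reserves against reported values,
# flagging blocks where a true-up would exceed the small-difference
# threshold. Values and block names are illustrative assumptions.

reported = {"term_life": 512.3, "universal_life": 948.7, "payout_annuity": 301.5}
modeled  = {"term_life": 511.9, "universal_life": 958.2, "payout_annuity": 301.5}

for block, rep in reported.items():
    rel = modeled[block] / rep - 1
    if abs(rel) <= 0.01:
        note = "small; a true-up may be acceptable, but understand the driver"
    else:
        note = "exceeds 1%; investigate before relying on results"
    print(f"{block:>15}: reported={rep:8.1f} modeled={modeled[block]:8.1f} "
          f"rel_diff={rel:+.2%} -> {note}")
```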

Dynamic validations should, at a minimum, compare the prior three years of cash flows to the first three projected years at a level as granular as possible (meaning key cash flows should be compared separately and not aggregated). You should explain any area in which there is either volatility in results (historical or projected) or deviation in projected values compared with historical results. For example, in the case of annuities, shock surrenders may drive some years higher than others if a large portion of business reaches the end of the surrender charge period during a certain year. Simple calculations can sometimes be used to gain comfort on the magnitude of the deviation.
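A granular comparison of this kind might look like the sketch below, in which the cash flow names, amounts and 10% flag are illustrative; note that the surrender deviation it flags would be explained by the shock surrender year in the history:

```python
# Dynamic validation sketch: compare the prior three actual years with the
# first three projected years, key cash flow by key cash flow rather than
# in aggregate. Names, amounts and the 10% flag are illustrative.

actual = {  # last three historical years ($M)
    "premiums":     [120.0, 118.5, 117.0],
    "surrenders":   [35.0, 36.2, 61.0],  # spike: shock surrender year
    "death_claims": [22.1, 22.9, 23.4],
}
projected = {  # first three projected years ($M)
    "premiums":     [116.0, 114.8, 113.5],
    "surrenders":   [38.0, 37.1, 36.5],
    "death_claims": [24.0, 24.6, 25.3],
}

for cf in actual:
    hist_avg = sum(actual[cf]) / len(actual[cf])
    proj_avg = sum(projected[cf]) / len(projected[cf])
    rel = proj_avg / hist_avg - 1
    flag = "EXPLAIN" if abs(rel) > 0.10 else "ok"
    print(f"{cf:>12}: hist_avg={hist_avg:6.1f} proj_avg={proj_avg:6.1f} "
          f"rel_diff={rel:+.1%} {flag}")
```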

To the extent that the experience supports an assumption change, actuaries should make changes to the model and validate their impact. Prior sensitivity runs can help to provide a baseline of the impact of specific assumption changes.

Year over year, actuaries must be able to understand and explain changes in the additional reserves needed to support the block of business being tested, or in the sufficiency that a block generates. This typically involves an attribution analysis, whereby each change since the prior testing is applied step by step. By doing so, actuaries isolate the impact of each change.

Completing the attribution analysis not only leads to greater understanding, but also improves validation and confidence in the model. This is because it is easier to assess the direction and magnitude of impact when making one change at a time rather than trying to assess multiple (sometimes offsetting) changes in aggregate. When complete, a thorough attribution analysis should:

  • Start with a projection run that matches the results from the previous year
  • Show results at the level at which cash-flow testing is complete (i.e., before any allowable aggregation)
  • Show the impact of each change on additional reserves required
  • Demonstrate the impacts through comparisons of output before and after the change
  • Include commentary explaining the results

And remember, prior models could trip you up

These points aside, there is one final important lesson on model accuracy. Common practice is to assume that prior models are accurate. That is not always the case, which is why it is important to periodically perform a more thorough independent model validation to catch inaccuracies and simplifications that may no longer be appropriate.
