Podcast

Best practice and recent trends in actuarial modeling

(Re)thinking Insurance - Series 4: Episode 20

October 08, 2024

Insurance Consulting and Technology

In this episode of (Re)thinking Insurance, our host Paul Headey is joined by Farahin Mahizzan and Roger Chan to discuss key aspects of actuarial modeling, including the significance of clear and understandable models, the advantages of centralized and standardized models, and the necessity of regular model testing.


Transcript for this episode:

FARAHIN MAHIZZAN: Now, with IFRS 17 especially, you need to model additional things like the CSM and risk adjustment. So why not leverage the tools that are available so that we can free up some of our resources to do other, more important tasks?

ANNOUNCER: You're listening to (Re)thinking Insurance, a podcast series from WTW where we discuss the issues facing P&C, life, and composite insurers around the globe, as well as exploring the latest tools, techniques, and innovations that will help you rethink insurance.

PAUL: Hello and welcome to our (Re)thinking Insurance podcast on the topic, best practice and recent trends in actuarial modeling. I'm your host, Paul Headey, APAC Life Practice Leader for WTW's Insurance Consulting and Technology division. And I'm delighted to be joined today by my guests, Farahin Mahizzan and Roger Chan. Welcome both and thank you for joining me. Farahin, would you like to introduce yourself?

FARAHIN MAHIZZAN: Hi, Paul. Thanks for having us today. I'm Farahin, and I'm an Associate Director at WTW's insurance consulting and technology practice, based in Kuala Lumpur. My experience has been primarily in financial modeling across various software platforms.

ROGER CHAN: Hello, I'm Roger Chan. I'm based in Hong Kong. My educational background is half actuarial and half IT. For the last 20 years, I've been involved in developing, implementing, and marketing actuarial software.

PAUL: So I'd like to ask the first question now. In your experience, what makes a good model, and what are some of the pitfalls to avoid when building models? If I could direct that to Farahin first, please.

FARAHIN MAHIZZAN: Thanks, Paul. In my experience of model reviews, we have seen both simple and complex ones. To me, good models are often a balance between accuracy and speed. A key factor to consider is how easy it is to understand the model because, let's face it, nobody wants to have a black box model that nobody can understand.

I tend to imagine a simple model to be a bicycle and a complex model to be a motorbike. If both modes of transportation get you from point A to point B, I would probably go with the bicycle. Firstly, I know that more people will be able to ride a bicycle as opposed to a motorbike, because you need a license to ride a motorbike, whereas anyone can ride a bicycle. So in that sense, there will be less resourcing risk if you were to go with the bicycle, or the simpler model in this case, because you can easily find people who know how to use it.

Secondly, bicycles are simple to operate and have far fewer components than motorbikes. If, say, the chain falls off, you can see it and you can immediately fix it. If a motorbike overheats, it could be because of many reasons, maybe engine issues, electrical issues, or some other part of the bike. This is similar to how we think about simple versus complex models, whereby it is easier and likely less costly to maintain a simple model, which in turn means less business risk for the company. What's your view on this, Roger?

ROGER CHAN: Yeah, Farahin, I totally agree with the simplicity point. And I think one thing people sometimes forget is that the main purpose of any financial model is communication. And I would say there are two directions of communication: one is to the management, the other is to other modelers. Maybe I'll take the two one by one.

So for senior management, I think it's important that the model outcome can be easily explained, so we need sufficient and flexible granularity in the results. Ideally, we can also see the results at different levels of the business hierarchy: company level, portfolio level, fund level, cohort level, et cetera. For the modelers, I think it's obviously more important for the logic and the cash flow interactions to be easy to understand and also easy to modify, so system features allowing users to inspect and trace results are very important.

And in addition to the simplicity and ease of communication, another point I would like to note is that a good model is not just a calculator to generate the reporting figures. It needs to have the flexibility for modelers to try out different modeling approaches to help them understand the impacts of the underlying risk drivers. So it is essential that a good model can readily connect to assumption tables of different formats and structures without having to modify the code and, as a result, without having to repeat the full set of testing.
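
To illustrate the flexibility Roger describes, here is a minimal sketch of a parameter-driven assumption loader: the column names are inputs rather than hard-coded, so differently structured tables can be connected without modifying the code. The function and file names are hypothetical and not from any particular modeling platform, and the example assumes two small CSV files exist.

```python
# A minimal, hypothetical sketch: one generic, parameter-driven loader that can
# read assumption tables with different column layouts without code changes.
import csv
from pathlib import Path

def load_assumptions(path: Path, key_column: str, value_column: str) -> dict:
    """Load any delimited assumption table into a {key: value} lookup.

    The column names are parameters, so a table with a different structure can
    be connected by changing inputs rather than modifying the loader.
    """
    with path.open(newline="") as f:
        reader = csv.DictReader(f)
        return {row[key_column]: float(row[value_column]) for row in reader}

# Two differently structured tables, one piece of loading code (assumes the
# files exist): lapse.csv has columns duration,rate; mortality.csv has age,qx.
lapse_rates = load_assumptions(Path("lapse.csv"), key_column="duration", value_column="rate")
mortality = load_assumptions(Path("mortality.csv"), key_column="age", value_column="qx")
```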

And I think Paul also wants to know a little bit about the pitfalls. I've seen a few recently. One thing I see is people employing what I would call insider practices, or tricks and shortcuts, instead of using the built-in system features with explicit definitions. One example is people using a special code like 99999 to indicate the last entry in a file rather than having an explicit indicator. This kind of thing can make it difficult for new users to understand and learn the model.
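
As an aside, here is a minimal sketch of the pitfall Roger describes, using hypothetical table and field names rather than any specific actuarial platform: the first table relies on a magic code of 99999 to mark the last entry, while the second states it explicitly in the data.

```python
# Hypothetical lapse tables illustrating the pitfall: a magic code versus an
# explicit end-of-table indicator.

# "Insider trick": 99999 in the duration column silently means "last entry".
table_with_sentinel = [
    {"duration": 1, "lapse_rate": 0.10},
    {"duration": 2, "lapse_rate": 0.08},
    {"duration": 99999, "lapse_rate": 0.05},  # meaning is invisible to a new user
]

# Explicit alternative: the end of the table is stated in the data itself.
table_with_flag = [
    {"duration": 1, "lapse_rate": 0.10, "is_last_entry": False},
    {"duration": 2, "lapse_rate": 0.08, "is_last_entry": False},
    {"duration": 3, "lapse_rate": 0.05, "is_last_entry": True},
]

def final_lapse_rate(table: list) -> float:
    """Return the lapse rate of the explicitly flagged final entry."""
    for row in table:
        if row["is_last_entry"]:
            return row["lapse_rate"]
    raise ValueError("No row is flagged as the last entry")

print(final_lapse_rate(table_with_flag))  # 0.05
```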

Another pitfall would be assuming a financial model is a static object, something we build once and can use forever. We need to remember that our insurance products and regulations are constantly evolving and changing, as is the underlying technology. So it's important that we review our models and modeling solutions periodically to ensure that our models reflect reality and that we are taking advantage of new system features and technologies.

PAUL: Thank you. Model centralization and standardization seem to be recent trends. Why is this and how is this best achieved? And again, can I ask this question to Farahin first?

FARAHIN MAHIZZAN: Yes, I do see this trend in Malaysia as well. My team and I actually had the privilege of building some of these centralized models for our multinational clients. In my experience, one reason for the rising trend of centralized model centers is that people see a need for clearer and more consistent models among the business units. And they realize that they can achieve this by having a centralized model with a group of people working together in one of the more cost-effective centers, which is more efficient than having multiple teams in various places looking at the same things. This arrangement also helps the business to manage costs, since they can consolidate their efforts. Maybe, Roger, you could share a bit more on the technical side of how this can be achieved.

ROGER CHAN: Yeah. Thanks, Farahin. I think cost, consistency, and efficiency are definitely some of the key drivers for more people considering centralized models. One other consideration would be the need to establish stronger process standardization, which can help smooth the learning curve and improve the robustness of the model. Having a more standardized approach can often speed up wider-scale model updates and make the replacement of common components less disruptive.

So Farahin, you touched on this point a little bit. I think a centralized model is a natural extension of establishing a centralized team of model experts, or what in some cases people may call a center of excellence (COE) approach, which is a great way to avoid duplication of work and achieve better consistency. I think there are a few key ingredients to make the COE approach work. It's important that the software platform provides some kind of modularity so that it's easier to reuse modeling objects: we can design the model objects to be more parameter-driven, so the same set of code can be reused for different purposes just by changing some of the inputs.

And in some cases, and I hope I'm not getting too technical here, inheritance functionality would be handy, so that a central team can set up reusable common modules while still allowing certain parts of the code to be localized. This way, you can ensure consistency while at the same time providing flexibility. With this kind of approach, you can also roll out model updates more easily and quickly: the inherited, unmodified parts can be adopted automatically, while the localized differences are highlighted as coding changes.
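
To make the inheritance idea concrete, here is a minimal sketch with hypothetical class and parameter names: a central team maintains the common projection logic, and a business unit inherits it and overrides only the locally different expense rule.

```python
# A minimal sketch (hypothetical names) of a centrally maintained module being
# reused through inheritance, with only the locally different part overridden.

class CentralProjection:
    """Common cash flow logic maintained by the central modeling team."""

    def __init__(self, premium: float, expense_rate: float):
        # Parameter-driven: the same code serves different products by
        # changing inputs rather than duplicating logic.
        self.premium = premium
        self.expense_rate = expense_rate

    def expenses(self) -> float:
        return self.premium * self.expense_rate

    def net_cash_flow(self) -> float:
        return self.premium - self.expenses()


class LocalProjection(CentralProjection):
    """A business unit overrides only the expense basis; everything else is
    inherited unchanged, so central updates flow through automatically."""

    def expenses(self) -> float:
        # Localized rule: a fixed per-policy cost on top of the central basis.
        return super().expenses() + 25.0


print(CentralProjection(premium=1000.0, expense_rate=0.05).net_cash_flow())  # 950.0
print(LocalProjection(premium=1000.0, expense_rate=0.05).net_cash_flow())    # 925.0
```

Because the local class changes only one method, any update the central team makes to the shared logic is picked up automatically, while the localized override stands out clearly as a coding difference.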

PAUL: Since the introduction of IFRS 17 in global markets, including many Asian markets, governance, control, and auditability are increasingly important for insurers. Please can you share your thoughts on this from a modeling perspective, and any insights on how insurers are meeting these requirements in their actuarial models? Farahin, please.

FARAHIN MAHIZZAN: Well, based on my recent experience in an IFRS 17 engine implementation, IFRS 17 adds further complexity to the already complicated modeling process, because there are additional items that need to be modeled. During the model development process, there will be many versions of the same model going around concurrently, because typically you would have different work streams for different parts of the model.

For example, maybe one team on reserving, one team on reporting, and another one on pricing. All these components are interlinked and dependent on each other, so you need to be aware of the other teams' progress. In some cases, the company may even assign someone to be a model steward just so they can keep track of all these different versions.

You would then need to consolidate all these different versions into one and carry out testing before finally arriving at the production version. Because of this complicated process, it's really important to have a proper audit trail and model change documentation. But this becomes a challenge for companies that don't have proper governance and controls. From what we've seen on other projects, some clients were even using pen drives as a quick way to share models with each other. So you can imagine the amount of time and effort it must have taken to merge their models.

ROGER CHAN: Yeah, I remember those days of pen drives. I think this is indeed a very error-prone and unreliable approach, especially if you consider potential branching of your model development work; the trouble multiplies. Having said that, this collaborative development challenge is certainly not unique to actuarial modeling. The software development industry has been dealing with similar challenges for decades, and there are many professional source code management tools out there to help with this challenge.

And some can even be integrated with actuarial modeling platforms seamlessly. Two of the more common tools in the market are Git and Microsoft's Azure DevOps. In addition to version management, there are also functions to support conflict resolution and rollback. And with evolving model requirements, which are often implemented under extreme time pressure, there's an additional need to keep track of the code changes in relation to the requirements and, of course, the approval records. With more complex reporting requirements rolling out, this kind of tool can certainly help actuaries reduce manual work and make time for more analysis.
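
As an illustrative sketch only, assuming the model files already live in a Git repository, the standard Git commands below (wrapped in Python here for consistency with the other examples) show the basics Roger mentions: committing a change against a requirement, tagging an approved version, and rolling back. The file name and requirement reference are hypothetical.

```python
# Illustrative sketch only, assuming the model files already live in a Git
# repository. The commands are standard Git; the file name and requirement
# reference (REQ-123) are hypothetical.
import subprocess

def git(*args: str) -> None:
    """Run a git command in the current model repository and fail loudly."""
    subprocess.run(["git", *args], check=True)

# Record a model change with a message that references the requirement it
# implements, so the audit trail links code to requirement.
git("add", "model/csm_rollforward.code")
git("commit", "-m", "REQ-123: update CSM roll-forward for new cohort logic")

# Tag the version signed off for production reporting.
git("tag", "-a", "prod-2024Q3", "-m", "Approved production model for 2024 Q3")

# Roll the working copy back to a previously approved version if needed.
git("checkout", "prod-2024Q2")
```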

FARAHIN MAHIZZAN: Yeah, I can definitely see how the model consolidation process would have been a lot smoother and faster if the client had utilized the tools you mentioned, Roger. Now, with IFRS 17 especially, you need to model additional things like the CSM and risk adjustment. So why not leverage the tools that are available so that we can free up some of our resources to do other, more important tasks?

PAUL: Model testing is a vital aspect of quality assurance. What advice do you have, Farahin, for our modelers in this area?

FARAHIN MAHIZZAN: A typical model development process, from what we've seen, usually involves a lot more testing towards the end of the process. But this is not ideal, because if a test fails, it might be too late or too costly to fix. This reminds me of a client a few years ago, actually, who added some new features to their model but only did the testing at the end, because they were confident that the changes they put through wouldn't impact the model much.

Once they actually did the test, they realized that their model runtime had doubled overnight. But because they weren't able to identify the cause, they needed to seek our expertise to help them investigate. I think that if they had done the testing throughout the process, they could have avoided this issue.

Generally, the more frequent the testing we do, the smaller the scale of the issues we encounter. Ideally, the checks should be carried out continuously throughout the model development process so that any issues that arise can be detected as early as possible. I think the challenge with regular testing is that companies lack the manpower for it. In this day and age, there are so many tools and technologies available to help us with testing, so we should definitely leverage them as much as possible.

ROGER CHAN: Yeah, I think testing is definitely an area that is often overlooked, possibly because of the effort required, as Farahin mentioned. And also, let's be honest, testing can be a very mundane and boring task. However, there are services available to help actuaries conduct model testing. Some even support nightly automated testing of the latest development versions on pre-specified test cases. So this can help achieve the more granular testing you mentioned, and also help identify errors early, making fixes easier, quicker, and cheaper.

A side benefit of this is that it forces the actuaries to write the test cases early, which aligns better with software development best practice. Automated testing also helps remove some subjectivity from the testing process and makes the testing results more robust. And of course, the test cases also act as a target for completion, which helps the project manager measure progress.
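
Here is a minimal sketch of the kind of nightly regression check Roger describes. The projection call, baseline figures, and tolerance below are hypothetical placeholders; the pattern is comparing the latest development model against an approved baseline on pre-specified test cases.

```python
# A minimal sketch of an automated regression check that could run nightly on
# the latest development model. The projection call, baseline figures, and
# tolerance are hypothetical placeholders.

BASELINE_RESULTS = {"best_estimate_liability": 1_250_000.0, "csm": 310_000.0}
TOLERANCE = 1e-6  # relative tolerance agreed with reviewers

def run_model(test_case: str) -> dict:
    """Placeholder for a call into the actual modeling platform."""
    return {"best_estimate_liability": 1_250_000.0, "csm": 310_000.0}

def test_results_match_baseline() -> None:
    results = run_model("pre_specified_test_case_01")
    for item, expected in BASELINE_RESULTS.items():
        actual = results[item]
        assert abs(actual - expected) <= TOLERANCE * abs(expected), (
            f"{item}: expected {expected}, got {actual}"
        )

if __name__ == "__main__":
    test_results_match_baseline()
    print("All baseline checks passed")
```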

FARAHIN MAHIZZAN: Yeah, I definitely agree with you, Roger, that there is more to testing than just reconciling the numbers. We also need to consider how the changes affect runtime and memory usage, and how well the model can be maintained. Testing also reinforces the importance of having a good audit trail, like we discussed earlier.

ROGER CHAN: Yeah.

PAUL: OK. So, final question for today. There's been a perception that there are relatively few career development opportunities for modeling specialists. In your view, is this really true, and what tips do you have for actuarial modelers to develop their skills and their careers?

FARAHIN MAHIZZAN: On the contrary, I think I have actually seen a lot of development opportunities and demand for modeling specialists. At the junior level, you generally start out learning a particular model calculation first to gain some technical knowledge. As you develop in your career, you begin to learn more and more complicated things to model, like asset-liability interaction, model optimization, and model design.

Being a modeling expert doesn't mean that you should limit yourself to coding, as there are also other areas of management where you can put your technical foundation to good use. I say this because modelers tend to have a strong analytical way of thinking that is often needed in these roles. Modeling knowledge may be one attribute of a successful actuary, but it doesn't need to be the ultimate goal. Knowing how to model actually opens you up to a lot of other opportunities as well.

There is, however, a distinction between being a software expert and being a modeling expert. I think that being a modeling expert makes you more adaptable to a world where the toolsets are constantly changing. Tools that we have today might not be as useful in, say, 10 years' time. So I feel that we need to be open and diversify our knowledge to ensure that we can future-proof our skill sets.

ROGER CHAN: Yeah, agreed. I think modeling is definitely a very important skill set for actuaries, but definitely not the end goal. I think it's more important to be able to apply the knowledge of the underlying cash flow dynamics rather than just being an expert in syntax correctness and coding efficiency. That being said, if we do want to focus on the modeling pathway, I think one potential career extension would be to move on to model management.

So automating the whole production process is one hot area recently, particularly in terms of improving the efficiency of the whole process and controlling production costs. On the development side, there are also opportunities related to model centralization, and to developing and managing a central model team as well. So these are some of the potential opportunities.

PAUL: Thank you very much, Farahin and Roger, for joining me today. It's great to hear your thoughts on actuarial modeling best practice and recent trends. We've covered several relevant areas, including what makes a good model; centralization and standardization; governance, control, and auditability; model testing; and career development for modeling specialists.

And to all our listeners, thank you for joining us. If you enjoyed this episode, please make sure to subscribe. And we look forward to seeing you again on the next episode of (Re)thinking Insurance.

Thank you for joining us for this WTW podcast featuring the latest perspectives on the intersection of people, capital, and risk. For more information, visit the Insights section of wtwco.com. This podcast is for general discussion and/or information only. It is not intended to be relied upon, and action based on or in connection with anything contained herein should not be taken without first obtaining specific advice from a suitably qualified professional.

Podcast host


Paul Headey
Regional Life Practice Leader, APAC, Insurance Consulting and Technology

Paul Headey is the Regional Life Practice Leader for Insurance Consulting and Technology, Asia Pacific at WTW, based in Hong Kong. With over 30 years of life insurance industry experience, including more than 25 years in East Asia, Paul’s core focus areas include capital management and ERM, financial reporting, mergers and acquisitions (M&A), and valuing life insurance companies.


Podcast guests


Roger Chan
Regional Technology Engagement Leader, APAC, Insurance Consulting and Technology

Roger is the APAC regional product leader for WTW’s life actuarial software, including RiskAgility FM and Unify. He has an educational background in both actuarial science and IT, and more than 20 years of experience in actuarial software development and implementation.


Farahin Mahizzan FSA
Associate Director, Insurance Consulting and Technology

Farahin is a qualified life actuary with over 11 years of experience, based in Malaysia. Farahin has experience in various model implementation, model development and model review projects across Asia, covering Embedded Value, ALM, capital and reserving. Farahin is adept at using a variety of financial modelling software, including RiskAgility FM.

