A key element in driving life financial transformation in your organization is determining the optimal balance between production costs, model precision and ease of maintenance. There’s a very real trade-off between these considerations, and no universal answer that applies to everyone. You can, for example, make a model faster by introducing approximations, or by using less transparent but more advanced algorithms. But is the cost of the trade-off worth the extra speed?
In this article, the fourth in our five-part series on life modeling and technology, we’ll explore the three key factors that will help guide your decisions on models and model platforms.
01 Production costs
Realistically, your compute costs are going to dominate your financial reporting spend. A sensible estimate of the IT cost of owning 1,000 cores’ worth of computing power is around $1,000,000 per year, after accounting for power, cooling, depreciation and IT support (excluding software license costs). If you move that to the cloud, turning it off when not in use, the cost of the same computing power shrinks to around $350,000 per year.
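To see where figures like these can come from, here’s a minimal sketch of the arithmetic. The rate and utilization level below are illustrative assumptions, not quoted prices: an owned grid is a fixed annual cost whether it’s busy or idle, while elastic cloud compute is billed only for the hours it’s actually running.

```python
CORES = 1_000
HOURS_PER_YEAR = 365 * 24  # 8,760

ON_PREM_ANNUAL_COST = 1_000_000   # power, cooling, depreciation, IT support
CLOUD_RATE_PER_CORE_HOUR = 0.10   # assumed elastic compute rate, not a quote
UTILIZATION = 0.40                # grid switched off roughly 60% of the time

on_prem_per_core_hour = ON_PREM_ANNUAL_COST / (CORES * HOURS_PER_YEAR)
cloud_annual_cost = CORES * HOURS_PER_YEAR * UTILIZATION * CLOUD_RATE_PER_CORE_HOUR

print(f"On-prem: ${on_prem_per_core_hour:.3f}/core-hour, paid whether used or not")
print(f"Cloud:   ${cloud_annual_cost:,.0f}/year at {UTILIZATION:.0%} utilization")
# Cloud: $350,400/year, in line with the ~$350,000 figure above
```

The lower your grid’s utilization, the wider that gap becomes.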
While you could bank that cost saving, you might instead invest some of it to improve and speed up the reporting process, allowing for more detailed calculations, more analysis and checking. Whichever you choose, the days of owning a static grid are over.
If that elastic cloud compute can be delivered as a service, such as WTW’s vGrid, which is designed to complement WTW’s RiskAgility Financial Modeler (RAFM), then the cost of maintaining and upgrading the environment to keep pace with changing needs also disappears, replaced instead by service level agreements with the vendor.
Of course, the other component of your production compute costs relates to the straight-line speed of the model: how quickly can it execute the required calculations? Part of this will be determined by your choice of calculation engine. The latest generation use technology features and optimizations that deliver results, on average, ten times faster than older models; and because a tenfold speed-up means each run consumes a tenth of the core-hours, that translates into a 90% saving on calculation compute time. That’s a substantial saving that could easily pay back the costs of migrating to a latest-generation platform in less than two years.
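As a minimal sketch of that payback arithmetic, take the elastic cloud figure above as the annual compute spend; the migration cost below is a hypothetical round number, assumed purely for illustration.

```python
annual_compute_cost = 350_000   # elastic cloud spend from the example above
speed_up = 10                   # latest-generation engine vs. older models

# A tenfold speed-up leaves 1/10 of the core-hours, i.e. a 90% saving.
annual_saving = annual_compute_cost * (1 - 1 / speed_up)

migration_cost = 500_000        # hypothetical one-off migration cost
payback_years = migration_cost / annual_saving

print(f"Annual saving:  ${annual_saving:,.0f}")      # $315,000
print(f"Payback period: {payback_years:.1f} years")  # ~1.6 years
```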
02 Model quality
Model quality is also a factor in determining cost. Models with high levels of legacy code, multiple rebased calculations, or parameterization for configurations that never occur in practice can all increase computation time, and therefore cost, without delivering any additional benefit. Such clutter adds calculations that serve no purpose and limits the calculation engine’s potential to optimize.
The other cause of messy models, which can run more slowly, is the handling of assumptions. It is important that you can update an assumption’s indices during reporting cycles without needing to change the model code. If the model code must change, you not only have to plan for this before your model freezes, which limits your options, but you may also need to keep both variants in the model to support your year-end reporting: a step backwards towards cluttered models.
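One common way to achieve that separation, sketched below under assumptions of our own (the file layout, names and load_assumptions helper are hypothetical illustrations, not RAFM’s mechanism), is to hold assumption tables as external data that the model reads at run time, so a reporting-cycle update touches data files rather than model code.

```python
import csv

def load_assumptions(path: str) -> dict[tuple[str, int], float]:
    """Read an assumption table keyed by (basis, index) from a CSV file."""
    table = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            table[(row["basis"], int(row["index"]))] = float(row["value"])
    return table

# A reporting-cycle update replaces the CSV; the model code never changes.
assumptions = load_assumptions("lapse_2024q4.csv")  # hypothetical file name
lapse_rate = assumptions[("base", 5)]  # e.g. the lapse rate at index 5
```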
Dealing with model inefficiencies can be tricky. And while few teams would initially be prepared to invest in changes that reduce a model’s precision or flexibility, those changes may deliver the biggest potential cost saving. Ultimately, actuarial models are like beautiful gardens: both need regular pruning, nurturing and vigilance to deliver the best possible results.
03 Development costs
The typical compute costs of a development environment are small and not a primary area of concern. However, it’s still worth urging caution in development when it comes to your model’s resource footprint, which will in turn affect your ongoing production costs.
For many teams, functional testing is completed before the regression and acceptance stages. This is a recipe for frustration, as ongoing development can increase the compute time or memory a model requires. If changes in these less obvious aspects aren’t caught early, the degradation becomes ingrained in the model as further enhancements are built on top of it.
The answer is to ensure your testing strategy allows for this, with realistic and regular regression testing throughout the development phases. This can be done manually or, with services such as RAFM’s vTest, automated as part of the model versioning.
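For teams doing this manually, here’s a minimal sketch of what such a resource check could look like; run_model, the baselines and the tolerance are hypothetical placeholders, and this is not how vTest itself works.

```python
import time
import tracemalloc

def check_resource_regression(run_model, baseline_seconds, baseline_mb,
                              tolerance=0.10):
    """Fail if a model run drifts past its runtime or memory baseline."""
    tracemalloc.start()                 # traces Python allocations only
    start = time.perf_counter()
    run_model()                         # placeholder for the model run under test
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    peak_mb = peak_bytes / 1_048_576
    assert elapsed <= baseline_seconds * (1 + tolerance), (
        f"Runtime regression: {elapsed:.1f}s vs baseline {baseline_seconds:.1f}s")
    assert peak_mb <= baseline_mb * (1 + tolerance), (
        f"Memory regression: {peak_mb:.0f}MB vs baseline {baseline_mb:.0f}MB")
```

Run after every check-in against baselines captured from the previous model version, checks like these surface runtime and memory creep before further enhancements are built on top of it.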
Once you have explored the impacts for your organization of balancing production costs, model quality and development costs, there’s a further complication: your people. The people side of development is traditionally the greatest challenge. A dedicated team of highly skilled modelers is essential to carry out updating and maintenance work on the models.
Efficiency in development is key to getting the most from this scarce resource. Every modeling team would, without hesitation, welcome the support of extra people. So how do we ensure we get the best from the talent we have — and more importantly, allow them to do interesting and challenging work, so they grow and stay with the team?
Fortunately, the solution to improving the modeler’s experience doesn’t have to be doubling the size of your team. Rather, it’s an additional positive outcome gained when you implement an integrated version control system to address the business need for control, governance and auditability within the model development cycle. With an integrated version control system, such as RAFM Team Edition, users can check in and check out the right models, compare and merge their changes, and link those changes to the work items assigned to them.
When coupled with vTest, which automatically tests models for unwanted side-effects, any problems can be rapidly identified and backed out. The impact of that level of control and governance is threefold: more junior staff can be trusted to undertake the work and to learn; dependency on the model steward is reduced; and work flows more fluidly, without Friday-afternoon bottlenecks.
All this improves the lives of the modelers, the efficiency of the work and the auditability. Now you can link back from the model-change run in your Analysis of Change to the model changes themselves, the person who made the changes and why those changes were made.
In the previous article in this series, we explored how focusing on business process excellence meant moving beyond merely automating tasks to streamlining entire business processes so they’re more efficient and effective, delivering consistent, positive outcomes. The next step in your life financial transformation journey is to optimize your models and platform to meet your specific needs.
That means taking a step back from where you are today, and auditing and costing your existing solutions to determine the optimal balance between production costs, model precision and ease of maintenance to meet your future business needs. And recognizing that it’s something you may need help with.
In the final article in this series, we’ll set out the key steps that you can take to realize life financial transformation in your organization.
To find out how we can support your actuarial team in driving financial transformation, including modeling, automation and process improvements, please contact your WTW consultant, email us or visit RiskAgility Financial Modeler (FM).