Reserving actuaries put a lot of time and effort into getting to the right number for potential future claims. But as we all know, there is no one right number. Instead, perhaps there is just the least incorrect number — or just a bunch of other numbers that other people might simply prefer.
But even to get to that point, after we’ve done the work, we also need to convince a broad range of stakeholders that we did the work, that we did the right work and that we did it the right way.
The reserving actuary’s job is both an analytical one and a marketing one. Our key deliverable might consist of numbers, but the real measure of success is confidence — in the dollars, in the process that produced them and in the decisions upon which they are based. Visualizations are a key to effectively communicating our conclusions and building credibility around our work product, which is, after all, no more than a forecast.
Historically, reserving actuaries could point to their credentials and the reams of documentation supporting the analysis. While these things are critical, they need to be supported by effective communication. This is tricky when trying to explain even a chain-ladder model to a chief financial officer (CFO). It becomes even more challenging when using vast fields of individual claims data and complex models.
This is where visualizations are useful. The effective use of visual communications plays a key role in two main areas.
First, timing is everything. While an analyst may eventually locate a key mistake, outlier or trend, eventually isn’t good enough when we’ve only got days before we’re standing in front of our senior team, presenting numbers that affect both their remuneration and shareholder confidence. It’s crucial to create adequate space to correct that mistake, to think about how to account for the outlier, and to dig in and do the due diligence for more information around that trend — and then think about how best to communicate.
Second, in a field where there is no correct answer, it’s important to communicate not just what we think the most likely scenario for our forecast is but also what it could be and what it is definitely not.
Used effectively, visualizations should provide this context in real-world terms, heading off potential conflicts or arguments that are not material using metrics that our audience understands. By getting our audience to agree on things that should be eliminated from consideration, we leave time for discussions around those things that are relevant and material.
Reserving is full of conflicting models, and we may often think of weaker models or weaker data as being something that can be considered, discounted and then discarded as we move in a linear fashion through an analysis.
Let’s say that a change in case reserves means that our reported loss triangle no longer provides a stable history upon which to base a pattern-based projection. We might simply toss it to one side and focus on using the paid loss to develop our ultimate.
Now from a modeling standpoint, we may think that we’ve done a good job: We used the better data, and we can deliver a reserve estimate. But we’ve not squeezed out all the information we could from our analysis.
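The paid-only fallback described above can be sketched as a minimal chain-ladder projection. The triangle below is invented purely for illustration; real analyses would, of course, involve judgment about which factors to select rather than a mechanical volume-weighted average.

```python
import numpy as np

# Cumulative paid losses by origin year (rows) and development age (columns).
# NaN marks cells that have not yet emerged. All figures are invented.
paid = np.array([
    [1000.0, 1800.0, 2160.0, 2268.0],
    [1100.0, 2090.0, 2508.0, np.nan],
    [1250.0, 2250.0, np.nan,  np.nan],
    [1400.0, np.nan,  np.nan,  np.nan],
])

# Volume-weighted age-to-age development factors from the observed pairs.
factors = []
for j in range(paid.shape[1] - 1):
    mask = ~np.isnan(paid[:, j + 1])
    factors.append(paid[mask, j + 1].sum() / paid[mask, j].sum())

# Project each origin year to ultimate by applying the remaining factors
# to its latest observed paid position.
ultimates = []
for row in paid:
    last = int(np.max(np.where(~np.isnan(row))))
    value = row[last]
    for f in factors[last:]:
        value *= f
    ultimates.append(value)

print([round(u, 1) for u in ultimates])  # → [2268.0, 2633.4, 2835.0, 3233.1]
```

The mechanics deliver a number for every origin year, which is exactly the trap the paragraph above warns about: the projection says nothing yet about what the discarded reported data now implies.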
For example, now that we know the answer, what does our selected ultimate tell us about our data, specifically our reported loss data? For anyone who is familiar with WTW’s reserving tool, ResQ, this is where the all-powerful “All Origin” graph comes in.
Showing the cumulative history of the reported loss by origin periods in the context of what we now believe our final estimate to be gives us our narrative around the impact of those case reserve changes on our incurred but not reported losses. Rather than the data informing our result, our result is telling us a story about the data — and not just any story, but our story. The one that we’re about to tell our CFO, our pricing team and our claims team. And we need to make sure that we believe our story before trying to convince our stakeholders.
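In the spirit of that all-origin view, one way to put the result back in front of the data is to line up the latest reported position against the selected ultimate and read off the implied IBNR by origin. The figures below are invented, and the text layout is only a rough stand-in for what a ResQ-style graph conveys visually.

```python
# Latest cumulative reported loss and selected ultimate by origin year.
# All figures are invented for illustration only.
origins = [2020, 2021, 2022, 2023]
reported = [2300.0, 2450.0, 2600.0, 2100.0]
ultimate = [2310.0, 2560.0, 2900.0, 3250.0]

# Implied IBNR is what our selected result says about the reported data:
# a small remainder on mature years, a large one on the greenest year.
for year, rep, ult in zip(origins, reported, ultimate):
    ibnr = ult - rep
    pct = 100.0 * ibnr / ult
    print(f"{year}: reported {rep:8.1f}  ultimate {ult:8.1f}  "
          f"implied IBNR {ibnr:7.1f} ({pct:4.1f}% of ultimate)")
```

A table like this, or its graphical equivalent, is the narrative in miniature: if the implied IBNR on a mature year looks wrong, we want to find out before the CFO does.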
This storytelling can get confusing in the real world when our “readers” have strong wills and the ability to challenge the narrative as it’s being told. Dynamic business intelligence tools such as Tableau or Power BI present both an opportunity and a challenge.
The opportunity is that by interacting with our visuals and filtering the data, we can move around the details to provide context and depth only when needed.
The challenge is that without careful design, we could become disconnected from the key takeaways from our carefully thought-through analysis or allow the audience to hijack the story. This can result in arguments around plot points that are misleading or simply not germane to the stated objective of our analysis.
Another factor that falls into both camps of opportunity and challenge is the curse of the novel visual. Yes, it may be useful and tell exactly the story we need, but if we are constantly introducing new graphs and visuals, we’re going to lose a lot of valuable time explaining what we are showing to our audience.
If a visual is new and important enough to show, it’s important enough to explain. We must also make sure we give our audience the touchstone metrics they always expect. In this regard, I am very guilty of redesigning perfectly good wheels.
Anyone who has been picked up by a Tesla rideshare vehicle and struggled to open the door knows full well the dangers of needlessly reinventing a perfectly good tool. Don’t design your visuals to be just clever or different. Design them so that they solve the problem in a materially improved, intuitive way. There’s nothing more painful than trying to educate your senior audience on how to use a door handle.
Overcoming this challenge means designing visuals in a way that leads consumers down an instinctive path through the results, preempting their reactions and providing the tools to address the next question and perhaps the question after that. Using an “orientation” visual at the top or side of our dashboard further allows users to see where they are in a rabbit-warren of diagnostics and to avoid losing sight of materiality.
Well-designed dashboards will provide a clearly marked path through a complicated field of data and diagnostics. Poorly designed ones will simply shove the audience into a vast, dimly lit warehouse and close the door behind them.
These new tools require new skills to understand and predict how our audience prefers to consume information — the user experience — and strong data management skills. The data challenges take two forms: logistical and logical.
The logistics mean that we need to consider how to harvest the data in an automated, well-governed and secure fashion for the people who need it, where and when they need it. We also need to control access to the visual, delivering it in a format and a medium that the user can pick up easily.
For example, we can’t expect — and we likely don’t want — senior executives to go hunting for a .pbix file. But do we want the visual on a webpage? Should we deliver it to them in Microsoft Teams, where they are already working, or do we ping them with a notification to open it up? And how should they then view it — on a tablet? On a phone?
The logical challenge lies in the complexity of how the underlying data model behaves. We’re accustomed to making visuals that dance and perform based on the data, but how do we make the data dance and perform based on the visuals? How should interactions with one visual affect another one? If an interaction does affect another visual, is the effect obvious to the user? Again, this should be intuitive, not just in having the visuals perform how the user would expect them to before the fact, but also in allowing the user to quickly validate how the visuals performed after the fact.
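The cross-filtering behavior described here can be sketched as a toy model: a selection made in one visual becomes a filter on the data feeding another, so the downstream change is traceable back to the user’s click. The field names and figures are invented for illustration.

```python
# A toy cross-filtering model: claims records feed two "visuals", and a
# selection in one visual filters the data behind the other.
claims = [
    {"origin": 2022, "line": "motor",    "paid": 120.0},
    {"origin": 2022, "line": "property", "paid": 340.0},
    {"origin": 2023, "line": "motor",    "paid": 80.0},
    {"origin": 2023, "line": "property", "paid": 150.0},
]

def visual_by_line(rows):
    """Aggregate paid loss by line of business -- the downstream visual."""
    out = {}
    for r in rows:
        out[r["line"]] = out.get(r["line"], 0.0) + r["paid"]
    return out

# With no selection, the downstream visual shows everything.
print(visual_by_line(claims))  # → {'motor': 200.0, 'property': 490.0}

# The user clicks origin 2023 in an upstream visual; the same selection is
# applied to the downstream visual's data, so the effect is easy to validate.
selection = {"origin": 2023}
filtered = [r for r in claims if all(r[k] == v for k, v in selection.items())]
print(visual_by_line(filtered))  # → {'motor': 80.0, 'property': 150.0}
```

Tools such as Power BI and Tableau implement this propagation for us; the design work is deciding which selections should propagate where, and making that behavior obvious to the person clicking.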
I’ve used a lot of words here to describe my thoughts on the effective use of visuals in presenting reserving information. Sometimes words work well too. In thinking about how to communicate quickly, here are three final words I’d like to share that bounce around in my head when putting together a presentation: confidence through context.