Recent advances in generative AI promise to change how (re)insurers approach scenario development. Given the rapid evolution of this technology, what might the future hold?
Scenarios are narratives about how the future might unfold, designed to raise awareness and stimulate discussion among stakeholders. In the (re)insurance industry, scenario analysis is a cornerstone of risk management, crucial for understanding tail risks, identifying emerging risks, strategic planning, and managing risk aggregations.
Peter Schwartz, an early pioneer of scenario planning, likens the use of scenarios to “rehearsing the future”[1], where the objective is to run through (or practice) simulated events as if we are already living them. Much like rehearsing a theatre production, scenario development requires the collaborative effort of numerous individuals and days, weeks, or months of refinement before the scenarios are ready for their intended audience. This traditional approach to scenario development is notably time-consuming and resource-intensive.
However, over the past 18 months, advances in Generative Artificial Intelligence (AI) tools, including Large Language Models (LLMs), have enabled the rapid generation of numerous scenario narratives across a wide range of disciplines. This raises important questions for the (re)insurance industry: Could scenarios generated by AI be beneficial? Do these scenarios make logical sense? What are the potential limitations? And given the rapid development of this technology, what might the future hold?
The origins of scenario analysis
Herman Kahn, an American futurist, is often credited as one of the pioneers of modern scenario planning. During the 1950s and 1960s, Kahn used scenarios at RAND Corporation and the Hudson Institute to model post-World War II nuclear strategies. By the 1970s and 1980s, Pierre Wack, an executive at Royal Dutch Shell, had transformed scenario analysis into a critical corporate planning tool, effectively navigating the oil price shocks of the 1970s by predicting and understanding potential futures.
The 1990s then brought the digital revolution and the birth of catastrophe models that enabled (re)insurers to simulate a large number of hypothetical natural disasters quickly and at scale. Despite these advances, scenario science has remained a relatively static field of research, requiring a blend of foresight, analytical thinking, and – most importantly – imagination. Today, Royal Dutch Shell maintains a scenario team of over 10 people from diverse fields such as economics, politics, and physical sciences, which can take up to a year to develop a full set of scenarios[2].
Failure of imagination
The (re)insurance industry's ability to foresee and prepare for future disasters relies heavily on the breadth and depth of its scenarios. A significant challenge (re)insurers face, particularly in the tail of the distribution, is the failure of imagination – when we overlook or underestimate potential risks that have not yet occurred in historical data. In such situations, the mind’s eye narrows, dismissing the unprecedented and sticking too closely to the beaten track of past experiences. This creates potential risk blind spots, leaving organizations vulnerable to highly disruptive events.
An example of failure of imagination was evident during Hurricane Katrina in 2005, when the levees protecting New Orleans failed, resulting in devastating flooding and nearly 2,000 fatalities. Despite the known risk of levee breaches in the city prior to the event[3], such scenarios were not incorporated into the catastrophe models used for risk management at the time. As a result, many (re)insurers unwittingly held large concentrations of flood exposure in the city, which translated into substantial losses when the levees failed, making Katrina the costliest insured loss event on record at the time.
This problem stems from limitations in the brain. Human thinking is riddled with cognitive biases[4] that skew our judgment. Our ability to imagine potential future outcomes is limited by the availability bias, which causes us to overestimate the likelihood of events that are more memorable; the recency bias, which draws too heavily upon our most recent experiences; and the hot hand fallacy, whereby a string of successes leads to an overestimation of future success. But the point of scenario development is to imagine the unimaginable, yet possible, future events. How can we achieve this with brains that are inherently wired to cling to the familiar?
A new frontier in scenario generation
Generative AI, particularly LLMs, presents a compelling solution to overcome the limitations of human imagination, while also speeding up the traditional, resource-heavy process of scenario development. LLMs are a type of artificial intelligence that processes and generates human-like text based on the patterns they have learned from a vast amount of textual data. Compared with the traditional scenario generation process, these tools can produce a large number of potential scenarios rapidly, across a range of disciplines, providing (re)insurers with a broader spectrum of risk assessments for consideration. This not only streamlines the scenario development process, but also introduces novel perspectives that might be missed by human analysts.
For example, we asked ChatGPT, an LLM developed by OpenAI,[5] to produce a scenario where a U.S. earthquake leads to unforeseen sources of insured loss. The model provided a coherent and plausible scenario (Figure 1):
Figure 1: A scenario where a U.S. earthquake leads to unforeseen sources of insured loss. Image generated by ChatGPT/DALL·E 3.
If this event were to happen tomorrow, in hindsight you may think that the risk was obvious, but how many (re)insurers are currently monitoring their exposures to this type of scenario? Not many. This highlights the value LLMs can add in broadening the scope and improving the efficiency of scenario planning.
Human oversight
However, before turning to your favorite LLM, it's important to note the difference between AI-generated scenarios and AI-assisted scenario development. Although the earthquake scenario provided above is plausible, this is not always the case. LLMs are, after all, models that have no concept of the real world. This means that they can hallucinate, creating implausible scenarios that are not relevant to the world we live in.
Furthermore, while using LLMs helps to avoid introducing human cognitive biases, scenarios produced by generative AI may inadvertently reflect biases present in their training data or model design. And while LLMs can produce scenario narratives, they currently perform poorly at quantitative tasks, such as estimating losses or evaluating business impacts.
Given these caveats, many applications will necessitate an AI-assisted approach to scenario development. This process includes sense-checking and adjusting scenarios for specific business use cases, as well as translating narratives into measurable business impacts. LLMs should therefore be viewed as tools to assist with the heavy lifting of generating scenario narratives, rather than a turnkey solution.
It is also important to note that the quality and specificity of a prompt provided to an LLM can significantly influence the accuracy, relevance, and usefulness of the scenario produced. Investing time in prompt engineering – the practice of carefully crafting inputs to elicit the desired outputs from generative AI – is therefore vital. At WTW, we have been refining this practice to aid our insurance clients in developing a broad range of scenarios relevant to their exposures.
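To illustrate the kind of structure prompt engineering introduces, the sketch below assembles a scenario-generation prompt from a peril, a region, and a set of business constraints. This is a minimal sketch only: the function name, prompt wording, and parameters are illustrative assumptions, not WTW's actual practice, and the commented-out API call assumes the OpenAI Python client is installed and configured.

```python
# Minimal sketch of a structured scenario-generation prompt.
# All names and prompt wording here are illustrative assumptions.

def build_scenario_prompt(peril: str, region: str, constraints: list[str]) -> str:
    """Assemble a scenario-generation prompt from structured inputs."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        "You are assisting a (re)insurance scenario-planning team.\n"
        f"Generate a plausible but unprecedented scenario in which a "
        f"{peril} in {region} leads to unforeseen sources of insured loss.\n"
        f"Constraints:\n{constraint_text}\n"
        "Describe the chain of events, the lines of business affected, "
        "and why the loss would be difficult to foresee in advance."
    )

prompt = build_scenario_prompt(
    peril="major earthquake",
    region="the U.S.",
    constraints=[
        "Focus on non-property lines of business",
        "Assume no comparable event exists in historical loss data",
    ],
)
print(prompt)

# Sending the prompt to a model (assumes the OpenAI Python client and an
# API key; commented out so the sketch runs offline):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(response.choices[0].message.content)
```

Separating the structured inputs from the prompt template makes it straightforward to generate families of scenarios by varying the peril, region, or constraints, and to refine the template wording iteratively as outputs are reviewed.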
What’s on the horizon?
AI is advancing quickly, with breakthroughs now spanning beyond language models to areas like weather forecasting, including hurricane landfall predictions[6]. It is entirely plausible that within a few years, AI will not only generate natural catastrophe scenario narratives but also produce synthetic hazard data for these scenarios, such as hurricane wind fields. Eventually, we might even see AI-generated catastrophe models capable of simulating probabilistic losses. The potential applications are as vast as they are exciting, and our engagement with this technology can unlock the door to new capabilities in catastrophe risk assessment.
Footnotes
Schwartz, P. (1996). The Art of the Long View: Planning for the Future in an Uncertain World. New York: Currency Doubleday.