Karen Clark writes on the challenge facing catastrophe modeling and the need for a major upgrade.
Over 30 years ago, Hurricane Andrew caused massive disruption in the property (re)insurance markets: Eleven insurers went bankrupt, major insurers left Florida, and reinsurers reduced capacity and dramatically increased prices.
The cause: Systemic underestimation of hurricane risk and potential hurricane losses.
Fast forward to today and we’re witnessing the same level of disruption in the market.
The cause: Systemic underestimation of losses from the frequency (aka secondary) perils, most notably severe convective storms (SCS) and winter storms.
Hurricane Andrew left no doubt that the industry needed a major upgrade to its long-accepted methods for measuring and pricing hurricane risk.
Today, it’s equally clear the industry needs another major upgrade to the nearly 40-year-old catastrophe modeling technology.
Catastrophe models have been available in the (re)insurance industry since 1987, but it was not until after Hurricane Andrew that this then-new technology was widely adopted. Since Andrew, the primary focus has been on the perils able to cause solvency-impairing event losses, most notably hurricanes and earthquakes. Emphasis has been on quantifying “tail risk”, and perils such as SCS, winter storms, and wildfires have been considered “secondary”.
Catastrophe models have become essential tools for the global (re)insurance industry, and the first-generation models have proven their value for pricing and managing hurricane risk. But they have fallen well short of providing credible information for other types of more frequent atmospheric perils.
Time for a major upgrade
The first-generation catastrophe models still used by many reinsurers are statistical models that base future events on extrapolations of historical events. Events are modeled using “parameters” such as maximum wind speed, storm track, and radius of maximum winds for hurricanes. Earthquakes are modeled using parameters such as magnitude, rupture depth, and faulting mechanism.
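To make the contrast concrete, here is a minimal sketch (in Python, with entirely hypothetical field names, not any vendor's actual schema) of how a parametric event in a first-generation model might be represented:

```python
from dataclasses import dataclass

# Hypothetical sketch of how a first-generation, parametric catastrophe model
# might represent individual stochastic events. Field names mirror the
# parameters mentioned above; they are illustrative only.
@dataclass
class HurricaneEvent:
    event_id: int
    max_wind_speed_mph: float          # peak sustained wind at landfall
    radius_max_winds_mi: float         # distance from the eye to the strongest winds
    track: list[tuple[float, float]]   # (latitude, longitude) points along the storm path

@dataclass
class EarthquakeEvent:
    event_id: int
    magnitude: float                   # moment magnitude
    rupture_depth_km: float
    faulting_mechanism: str            # e.g. "strike-slip", "reverse", "normal"

# A stochastic catalog in this paradigm is essentially a long list of such
# parameterized events, extrapolated from the historical record.
catalog = [
    HurricaneEvent(1, 145.0, 12.0, [(25.5, -80.3), (26.1, -81.0)]),
    HurricaneEvent(2, 120.0, 20.0, [(29.0, -90.1), (30.2, -90.8)]),
]
```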
But the frequency perils, and in particular SCS, cannot be defined by a set of static parameters because these perils are too amorphous and dynamic over time. The images below illustrate the difference. On the left is a satellite image of a hurricane showing the typical shape of circular winds around a central eye. It’s easy to imagine a set of parameters that can reproduce this familiar shape.
Contrast that with the radar image of an SCS on the right. Not only is the shape much more complex, but it is also unique to this particular weather system, and it is dynamic over time.
Fortunately, new second-generation models that can capture this complexity and accurately quantify the loss potential of the frequency perils now exist and can be readily implemented by reinsurers. These new models are based on the same fundamental structure as the traditional models, including the same components, inputs, and outputs.
But rather than relying on statistical extrapolation, the hazard components of the second-generation models are based on physical modeling techniques. Physical models operate on complex equations of the atmosphere and thousands of dynamically changing variables to capture the complexities of SCS and other frequency perils.
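The structural difference can be illustrated with a toy sketch (this is not real atmospheric physics and not the KCC model, just an illustration of the workflow): instead of a handful of static parameters, a physical model advances a gridded state forward in time, and the hazard footprint emerges from the simulation itself.

```python
import numpy as np

# Toy illustration only: a gridded field stepped forward in time, standing in for
# a physically simulated weather system. The "dynamics" below are placeholders.
ny, nx, n_steps = 50, 50, 24           # hypothetical grid size and hourly time steps
state = np.random.rand(ny, nx)         # stand-in for an initial atmospheric field
peak_hazard = np.zeros((ny, nx))       # accumulated footprint, e.g. peak hail/wind intensity

for t in range(n_steps):
    # placeholder dynamics: shift the field eastward and mix in noise,
    # mimicking a system that moves and changes shape over time
    state = 0.9 * np.roll(state, shift=1, axis=1) + 0.1 * np.random.rand(ny, nx)
    peak_hazard = np.maximum(peak_hazard, state)

# peak_hazard now plays the role of an event footprint handed to the vulnerability
# and financial components, preserving the same overall model structure.
```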
KCC scientists have proven the accuracy of the new physical models through a daily verification process. KCCLiveEvents is an advanced, fully automated process through which over 30 gigabytes of satellite, weather model, and radar data are ingested into the KCC SCS model each day to create high-resolution hail and tornado/wind footprints that insurers use to estimate their daily claims and losses.
Months after the events, insurers can compare their actual losses with the KCC SCS model estimates. Through this process the accuracy of the KCC physical models has been verified with tens of billions of dollars of high-resolution insurer claims data.
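The kind of event-by-event comparison described above could be expressed as follows; this is a schematic sketch with invented figures, not KCC's actual verification pipeline:

```python
import pandas as pd

# Hypothetical sketch: aggregate an insurer's actual claims per event and compare
# them with the modeled estimates. Column names and numbers are illustrative only.
claims = pd.DataFrame({
    "event_id": [101, 101, 102],
    "paid_loss": [1.2e6, 0.8e6, 3.5e6],
})
modeled = pd.DataFrame({
    "event_id": [101, 102],
    "modeled_loss": [2.1e6, 3.2e6],
})

actual = claims.groupby("event_id")["paid_loss"].sum().rename("actual_loss")
comparison = modeled.set_index("event_id").join(actual)
comparison["error_pct"] = (comparison["modeled_loss"] / comparison["actual_loss"] - 1) * 100
print(comparison)
```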
The verification process extends beyond SCS. The February 2021 Arctic Air Outbreak provided an impressive validation of the KCC Winter Storm model. While this nearly $20 billion event was a “model miss” for the older models, it was predicted accurately in real time by the KCC model. More importantly, events of this magnitude and greater were well represented in the KCC stochastic model and exceedance probability (EP) curve. Winter storms require three high-resolution footprints to be modeled accurately: wind, snow/ice, and freeze.
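As a rough illustration of how three sub-peril footprints might feed a single event loss, consider the sketch below; the damage functions are placeholders, not the actual model's vulnerability curves:

```python
import numpy as np

# Illustrative only: a winter storm event loss assembled from wind, snow/ice, and
# freeze footprints at three example locations. All figures are invented.
wind_footprint     = np.array([35.0, 50.0, 20.0])     # peak gust (mph)
snow_ice_footprint = np.array([0.5, 1.2, 0.1])        # ice accumulation (inches)
freeze_footprint   = np.array([18.0, 10.0, 28.0])     # minimum temperature (F)
exposure_value     = np.array([2.0e6, 1.5e6, 3.0e6])  # insured value at each location

def damage_ratio(wind, ice, temp_f):
    # placeholder damage functions for each sub-peril, combined and capped at 100%
    wind_dmg   = np.clip((wind - 40.0) / 100.0, 0.0, 1.0)
    ice_dmg    = np.clip(ice / 10.0, 0.0, 1.0)
    freeze_dmg = np.clip((20.0 - temp_f) / 100.0, 0.0, 1.0)   # burst-pipe proxy
    return np.clip(wind_dmg + ice_dmg + freeze_dmg, 0.0, 1.0)

event_loss = float(np.sum(exposure_value *
                          damage_ratio(wind_footprint, snow_ice_footprint, freeze_footprint)))
```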
Bringing reinsurance capacity back
For reinsurers to offer additional capacity for the frequency perils, they must first have confidence in the tools they’re using to quantify and price the risk. As the underwriting adage goes, “there’s no such thing as a bad risk, only a bad price.”
KCC has invested tens of millions of dollars and years of research to build accurate models for SCS, winter storms, and wildfires. This new physical modeling technology, incorporating over 100 terabytes of high-resolution atmospheric data and advanced AI and ML techniques, has proven its accuracy, and (re)insurers already adopting this technology are quickly gaining confidence in the results. KCC experts are also working with leading reinsurance brokers and ILS investors on innovative products to close the current gap between supply and demand.
One such innovation is the Modeled Loss Transaction (MLT), in which the payout to the insurer is based on what the model estimates for an event rather than the final indemnity amount. The insurer provides current or projected exposures to the modeling agent, and those exposures are run through the model to produce the EP curves used to price the transaction. When a covered event occurs, those same exposures are run through the model to determine the event loss.
This means there’s symmetry between the assumptions used to price the transaction and the assumptions used to determine the payout. Other benefits of the MLT include the speed of payout (weeks after an event versus months or years) and the elimination of potential loss creep from economic and social inflation.
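That symmetry could be sketched schematically as follows; the damage-ratio function, attachment choice, and numbers are all invented for illustration and are not how any actual MLT is structured:

```python
import numpy as np

# Schematic sketch: the same exposures and the same (placeholder) model are used
# both to price the transaction and, later, to determine the payout.
def modeled_loss(exposures, damage_ratio):
    # stand-in for running the exposures through the full catastrophe model
    return float(np.sum(exposures) * damage_ratio)

exposures = np.array([5.0e8, 3.0e8, 2.0e8])      # insurer's exposure by region (USD)

# Pricing: run the exposures against a stochastic catalog to build a simplified
# EP curve and set the attachment, e.g. at the 1% exceedance probability level.
rng = np.random.default_rng(0)
catalog_damage_ratios = rng.beta(2, 50, size=10_000)
event_losses = np.sort([modeled_loss(exposures, r) for r in catalog_damage_ratios])[::-1]
exceedance_prob = np.arange(1, event_losses.size + 1) / event_losses.size
attachment = event_losses[np.searchsorted(exceedance_prob, 0.01)]

# Payout: when a covered event occurs, its modeled footprint is run against the
# very same exposures, so pricing assumptions and payout assumptions coincide.
actual_damage_ratio = 0.15
payout = max(0.0, modeled_loss(exposures, actual_damage_ratio) - attachment)
```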
Of course, the MLT requires accurate models or there would be too much basis risk for the insurers. Second-generation physical models provide the required accuracy. While the MLT can be used for any peril, the highest demand has been for SCS, and several such transactions have been completed to date.
Conclusions
It may be tempting to blame the current market disruption on climate change, but to date, the impacts of climate change on hurricane, SCS, and winter storm losses have been relatively minor. That may not be the case in the future, but as of today, historical event losses, when recalculated based on current property exposures, do not exhibit an increasing trend.
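The normalization idea behind that statement can be sketched briefly; the exposure index and loss figures below are invented for illustration:

```python
import pandas as pd

# Minimal sketch: restate each historical event's loss at today's property exposure
# before looking for a trend, so exposure growth is not mistaken for a hazard trend.
events = pd.DataFrame({
    "year": [1995, 2005, 2015, 2023],
    "reported_loss": [4.5e9, 7.5e9, 11.3e9, 15.0e9],
    "exposure_index": [0.30, 0.50, 0.75, 1.00],   # industry exposure relative to today
})
events["normalized_loss"] = events["reported_loss"] / events["exposure_index"]
# Trend analysis is then performed on normalized_loss rather than reported_loss.
```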
The current disruption in the reinsurance market has been caused by systemic underestimation of the loss potential for perils long considered secondary. And the shrinking of reinsurance capacity for these perils has in turn exacerbated insurance market dislocations as insurers have pulled out of states and limited their writings to control exposures.
While likely not solvency-impairing for many companies, non-hurricane weather-related losses exhibit significant annual volatility that insurers would like to manage with reinsurance. Increasing demand means increasing opportunity for reinsurers with the most advanced and accurate technology for quantifying and pricing the risk.
After nearly 40 years, it’s time for a major upgrade to the industry’s risk modeling technology. Along with unprecedented accuracy, a major advantage of the new second-generation models is that they automatically incorporate any changes due to climate, because they are based on current atmospheric data rather than decades-old historical data.
By Karen Clark, CEO of KC&Co.