Gaps in catastrophe risk modeling point to path forward
Years in which hurricanes and other natural catastrophes cause substantial damage and insured losses challenge perceptions and expectations of catastrophe models, but they also provide teachable moments and opportunities to improve, both for modelers and for those who use the models.
“Catastrophe Models: In the Eye of the Storm,” a recent JLT Re Viewpoint publication from JLT Re Ltd., the reinsurance unit of Jardine Lloyd Thompson Group P.L.C., presents the results of the company’s study of more than a dozen catastrophes since 2004 and how actual losses compared with the models’ estimates.
“It’s fair,” Rob Newbold, Boston-based executive vice president at catastrophe modeler AIR Worldwide, a unit of Verisk Analytics Inc., said of the report. “I think JLT has done an excellent job looking at the range of information put out by the modeling companies.”
Last year’s highly active hurricane season and resulting losses have brought renewed scrutiny to the capabilities of catastrophe models.
“The impetus for the study was the events of 2017,” said David Flandro, global head of analytics for JLT Re in London. “In 2017, several clients came to us and asked ‘Why are the ranges so wide? Why are the estimates so inconsistent?’”
There was also discussion of the topic at the company’s January client event in London, Mr. Flandro said. JLT Re wanted to take the opportunity to study the questions empirically and quantify answers.
“In a quiet year when there aren’t many big storms or catastrophes,” less attention is paid to models and their performance, Mr. Flandro said. “But if you’ve got a Harvey, Irma and Maria, or a Sandy, if you’ve got a year like that, then of course the catastrophe models come under the magnifying glass.”
“There’s been such a long period of time since we’ve had an active hurricane season,” Mr. Newbold said. “Any time you get a real event which you can evaluate your book against, everyone’s going to sit up and take notice of that.”
One key finding of the JLT Re study was that initial estimates for the approximately 15 catastrophe events studied since 2004 were “systematically underestimated for large, complex losses, although it remains to be seen whether this will hold for 2017,” Mr. Flandro said, adding that the discovery of a pattern was somewhat surprising.
JLT Re’s study also shows, however, that vendor models have historically performed relatively well for wind events that incurred moderate losses, regardless of landfall location.
The models seem to do a better job with wind than with flood, according to Phil Klotzbach, research scientist with Colorado State University’s Tropical Meteorology Project in Fort Collins. Wind is more straightforward to model than flooding, he added.
“The more basic or simple the event is meteorologically from a wind-field component, certainly the models are going to be able to generate a more refined view of loss for that,” Mr. Newbold said.
The study also found that modeled loss accuracy for hurricanes suffers when events are both costly and complex, often due to an array of unmodeled loss components such as flooding.
“The report notes poorer model performance in hurricanes with a higher proportion of water-related loss,” said Tom Sabbatelli, manager, event response at Risk Management Solutions Inc. in Hoboken, New Jersey.
The 2017 hurricanes and past events in general help inform the catastrophe modeling industry.
“I hope this exercise will add to the sector’s understanding of catastrophe models and that it will help firms understand the uncertainty associated with industry loss estimates,” Mr. Flandro said. “I think it was a learning experience for everyone, including the modeling firms.”
“The JLT Re report offers a balanced and educational view of the factors that lead to different industry loss estimates from catastrophe modeling companies following real-time events,” Mr. Sabbatelli said. “It is fair to expect that model performance should improve over time. As the report points out, many of the early loss estimates were calculated with models with outdated methodologies.”
“We see any actual event as a learning opportunity,” Mr. Newbold said. “Obviously these are devastating events that cause loss of property and, unfortunately, of life. Any time we experience an event which will lead to actual claims data and information we can use to refine the modeling, it’s an opportunity.”
“We’ll be working with our clients and the market and our partners at Verisk to collect as much data as we can to see what if any revisions are required to the models,” Mr. Newbold said.
Catastrophe models use data to provide clients with estimates, not certainty, Mr. Flandro said, adding that models are a tool for clients, not a crystal ball.
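The point that models yield ranges of estimates rather than certainty can be illustrated with a toy Monte Carlo simulation. The figures below are entirely hypothetical and not drawn from JLT Re’s study or any vendor model; the sketch simply shows how even one fixed set of assumptions produces a wide band of plausible industry losses.

```python
import random
import statistics

random.seed(42)

def simulate_industry_loss(n_trials=10_000, mu=9.0, sigma=1.0):
    """Toy Monte Carlo: draw simulated industry losses (in $ millions)
    from a lognormal severity distribution and summarize the spread.
    All parameters are illustrative, not calibrated to any real event."""
    losses = sorted(random.lognormvariate(mu, sigma) for _ in range(n_trials))
    return {
        "mean": statistics.mean(losses),          # point estimate
        "p5": losses[int(0.05 * n_trials)],        # low end of 90% range
        "p95": losses[int(0.95 * n_trials)],       # high end of 90% range
    }

summary = simulate_industry_loss()
print(f"mean estimate: ${summary['mean']:,.0f}M")
print(f"90% range:     ${summary['p5']:,.0f}M - ${summary['p95']:,.0f}M")
```

Because the 5th-to-95th percentile band spans an order of magnitude, two modelers quoting different points within it can publish headline numbers that look inconsistent while being entirely compatible, which is one way to read the wide ranges clients asked about in 2017.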