Q&A: Andrew made big waves in catastrophe modeling
Karen Clark is president and CEO of Boston-based Karen Clark & Co., a firm she established in 2007 to help insurance companies enhance their exposure data processes, better understand catastrophe risk, and more effectively use models and other information to manage that risk. Ms. Clark developed the first hurricane catastrophe model and in 1987 founded the first catastrophe modeling company, Applied Insurance Research, which became AIR Worldwide after its acquisition by Insurance Services Office Inc. in 2002. Ms. Clark recently discussed Hurricane Andrew's effect on catastrophe modeling with Business Insurance Senior Editor Mark A. Hofmann.
Q: Did Andrew help focus attention on catastrophe modeling, which was then a new field?
The first hurricane model was introduced to the insurance industry in 1987, but this new approach for estimating potential catastrophe losses didn't get much traction until after Hurricane Andrew. The first catastrophe model indicated that insured losses could approach $70 billion from a hurricane striking a populated area such as Miami, but before Andrew most insurers thought the worst-case scenario was closer to $7 billion—a number popularized by an industry publication called “How the Insurance Industry Would Handle Two $7 Billion Hurricanes.” Hurricane Andrew proved the value and credibility of the models and led to their rapid adoption.
Q: What effect did Hurricane Andrew have on catastrophe modeling in terms of data entered and accuracy?
Insurance companies were significantly underestimating their potential losses from hurricanes because they were not monitoring the rapid growth in property values. In the years between Andrew and the previous major hurricane, property values in coastal areas such as Florida had risen severalfold, and most companies were not managing these exposure concentrations.
Before Hurricane Andrew, reinsurers typically received only premium figures by state and line of business. Within a year of Hurricane Andrew, reinsurers began requiring county-level exposure aggregates, then eventually ZIP code-level and even policy-level data. While the data used to assess hurricane risk improved as a result of Andrew, Hurricane Katrina revealed significant problems, particularly for commercial properties, and caused a major refocusing on data quality.
Q: Twenty years after Andrew, what challenges do modelers and their customers face?
Fundamentally, the models have not changed since Andrew; they still have the same components and structure. They have, however, become more complex, and because there is so little data supporting this added complexity, the model loss estimates have become more volatile and prone to error. The modelers are challenged with trying to account for more and more details and sources of loss for which there is little or no data to model credibly.
Insurance companies are challenged with using the models as tools providing rough estimates rather than “answers.” The false precision of the model output has created an illusion of accuracy. Insurers are also challenged with the volatility and lack of transparency around the models and are looking for new tools to address these areas.
Because the models will never be accurate and much of the volatility is driven by changing assumptions rather than by new scientific knowledge, the value of using multiple tools and approaches for managing catastrophe losses is clear. Hurricane Andrew was a wake-up call to track exposures better, but the industry was gradually lulled into a false sense of security by model output such as probable maximum losses, or PMLs. An accumulation of events last year provided a second wake-up call about model uncertainty, model limitations and the danger of relying on a single perspective on risk. There are multiple ways to estimate catastrophe loss potential, and more complete toolkits will lead to better risk understanding and management over the next 20 years.