If a monster storm struck today, insurers might not be caught off guard as badly as they were in August 1992 when Hurricane Andrew swept through Florida, causing about $16 billion in insured losses.
Andrew grabbed many in insurance by the lapels, helping shake them to a realization that greater losses than previously expected are possible. That realization in turn spawned interest in the development of catastrophe models. The models are computer programs now widely used to help insurers and reinsurers understand catastrophe risks, including critical factors such as the concentration and locations of their business in an at-risk area.
The models have affected commercial policyholders as underwriters have used them to dictate coverage variables such as capacity and attachment points (BI, April 10, 1995).
"Everyone involved, from the insurer, to the insured to the reinsurer, understands risk better now than they have in the past," said Carl Hedde, vp of technical operations for American Re-Insurance Co. in Princeton, N.J. Because of the models, "we are able to make better decisions concerning risk," he said.
Before the models, rate makers relied on 30 years' worth of loss estimates, said John Kollar, vp of actuarial services and research for the Insurance Services Office Inc. in New York. That narrow view skewed expectations of hurricane losses downward prior to Andrew and Hurricane Hugo in 1989, because there were few destructive hurricanes during the 1960s and 1970s.
"A 30-year average seems like a long time, but it isn't when you talk about events that take place every 50 years or every 100 years," Mr. Kollar said. The models can evaluate 150 years' worth of historical data and simulate possible occurrences for many years into the future, he said. The models also evaluate relevant factors that were not weighed before, such as building codes.
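The return-period idea Mr. Kollar describes — simulating many more years than the historical record contains in order to estimate rare losses — can be illustrated with a toy Monte Carlo sketch. This is not any vendor's model; the storm frequency and severity parameters below are invented purely for illustration, and real models add far more detail (wind fields, building stock, policy terms).

```python
import math
import random


def simulate_years(n_years, rng):
    """Simulate total insured hurricane loss (in $ billions) per year.
    All parameters are illustrative, not calibrated to real data."""
    losses = []
    for _ in range(n_years):
        # Number of damaging landfalls this year: Poisson(0.6),
        # sampled with Knuth's algorithm using only the stdlib
        lam = 0.6
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        # Each storm's insured loss: a heavy-tailed lognormal draw
        losses.append(sum(rng.lognormvariate(0.0, 1.5) for _ in range(k)))
    return losses


rng = random.Random(1)
annual = sorted(simulate_years(100_000, rng))
# The "100-year loss" is the level exceeded in about 1% of simulated years
pml_100 = annual[int(0.99 * len(annual))]
print(f"Estimated 100-year annual loss: ${pml_100:.1f} billion")
```

A 30-year slice of this simulated history would rarely contain a year anywhere near the 100-year level, which is exactly why averaging three decades of losses understated hurricane risk before Andrew.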
Yet it still is virtually impossible to say just how precise the models are in determining potential losses. Insurance actuaries have been known to use competing models to evaluate the same portfolios, only to arrive at vastly different loss estimates.
"I would take the liberty to say that on an absolute basis, no one knows if the numbers are right or wrong," said Jayant Khadilkar, vp in modeling for Renaissance Reinsurance Ltd. in Hamilton, Bermuda. "They may be in the right ballpark, but no one knows for sure."
Yet the models are superior to the old methods or anything else available, Mr. Khadilkar and other users said. From a reinsurer's perspective, they are excellent at distinguishing a good book of business from a bad one, Mr. Khadilkar added.
"They are not perfect, no two ways about it," added ISO's Mr. Kollar. "But I think they have helped a great deal, and I think they will advance further and further, and perhaps they will converge and come up with similar results."
Just exactly how good the models are is tough to determine, in part because the companies that license them have been reluctant to share information, such as the assumptions used in the models, several observers said.
Extensive independent evaluation of the models would require amassing resources similar to those the model companies have assembled. That could include a staff of meteorologists, structural engineers and computer engineers.
Many insurers use the models but have very little understanding of them or interest in how they work, said an actuary for one U.S. reinsurer who asked not to be identified. They merely use the models to satisfy rating companies that demand extensive information on how a 100-year or 250-year event might hit an insurer.
"As long as they get an answer they can present to (the rating companies), most of them don't care too much about the details," the actuary said. "Some of the more sophisticated ones do, but for the most part, most of them don't. They just want something to show."
But more clients are getting sophisticated about the models, their uses and how they work, pointed out Mark T. Broido, director of corporate marketing for Risk Management Solutions in Menlo Park, Calif. RMS licenses its software, known as IRAS.
Consequently, RMS has stepped up efforts to share with its clients information on how it arrives at IRAS data, Mr. Broido said. RMS recently released a new version of IRAS that incorporates information such as data on sea-surface temperatures and their effect on hurricane intensity. Customers say it is greatly improved.
Information generated by the new version has prompted Mr. Broido to claim that existing models have greatly overstated hurricane strike risks in certain coastline areas such as the mid-Atlantic and the Northeast. At the same time, catastrophe models have understated risks in the Houston and Galveston areas of Texas, he said.
RMS' competitors say such broad categorizations of the modeling industry are unfair. They suspect RMS has merely corrected for shortcomings in its own model and is touting the release of a new version.
San Francisco-based EQE International upgraded its software, USWIND Version 4.0, about two months ago to reflect greater storm potential and severity in Texas, said Bob Healy, vp of sales and marketing for EQE. The company also lowered its risk assessment for the Northeast.
Boston-based Applied Insurance Research prides itself on the stability of its products, said Karen Clark, president. AIR continues to fine-tune its products as new information about storms and other data becomes available. The company has not made major changes in hurricane strike probability, she said.
AIR products already accounted for higher storm risk in Texas than did RMS, Ms. Clark said. She pointed out that just hours after Hurricane Andrew struck, AIR used its wind model to estimate that storm damages would total $13 billion in insured losses.
Even though that estimate was about $3 billion below the final count, it drew criticism then from skeptics who couldn't believe losses would be so severe, Ms. Clark said.
"People said, 'No way; you're crazy,'" Ms. Clark recalled. "The thing is there was no other source of information for at least two months after Andrew that was more accurate than our model."
Yet the models are not at their strongest in predicting losses for a single event, observers said.
"They are generally best when used for large numbers of assets and for large numbers of possible events," said John Schneider, a senior manager for Aon Risk Technologies. Aon is developing its own catastrophe model software, which is expected to be available by year's end.
"Whenever you start narrowing down for a single event or a single site, you are typically overwhelmed by the uncertainty," Mr. Schneider said.
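Mr. Schneider's point — that uncertainty swamps the estimate for a single site but washes out across a large book — is the law of large numbers at work, and a small simulation makes it concrete. This sketch assumes independent exposures, which real portfolios violate (a hurricane hits many risks at once), so actual diversification is weaker than shown.

```python
import random
import statistics


def portfolio_loss(n_sites, rng):
    """Total loss across n_sites independent exposures (toy model;
    correlated real-world exposures diversify less than this)."""
    return sum(rng.lognormvariate(0.0, 1.0) for _ in range(n_sites))


rng = random.Random(7)
trials = 2_000
cvs = []
for n in (1, 1_000):
    totals = [portfolio_loss(n, rng) for _ in range(trials)]
    # Coefficient of variation: relative spread of the loss estimate
    cv = statistics.pstdev(totals) / statistics.mean(totals)
    cvs.append(cv)
    print(f"{n:>5} sites: coefficient of variation = {cv:.2f}")
```

The relative spread for the large portfolio is a small fraction of the single-site spread, which is why the models are most trusted for books of business and least trusted for one building in one storm.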
One thing the models have been good at is making insurers more fearful of their potential losses.
"It's a little bit of a paradox," the reinsurance actuary said. "The companies are probably in a stronger position than they were, but they probably think they are in less of a strong position because they have more information."