
Risk experts deploy artificial intelligence

Early efforts apply machine learning to cybersecurity, fraud detection, managing operational efficiencies

Artificial intelligence

Artificial intelligence and machine learning technologies are increasingly being deployed by companies to reduce risk, with a broad swath of industries from financial services and transportation to energy and technology applying the tools to detect fraud, manage cyber threats and predict disruptions.

While risk professionals are embracing and implementing technological advances at an accelerating pace amid the explosion in the “internet of things,” it will take time for artificial intelligence to become embedded in the risk management process, experts say.

Artificial intelligence and machine learning applications are moving at such a rapid pace that they are transforming every industry segment, said David Derigiotis, Detroit-based professional liability and cyber risk practice leader with Burns & Wilcox Ltd.

“Everybody is rushing to implement some form of AI and machine learning because it adds efficiency, it allows companies to capture and analyze a lot of data, to be able to forecast trends and to reduce risks. I see it across the board,” Mr. Derigiotis said.

Cybersecurity is one area where there’s an “absolute requirement” for artificial intelligence to be deployed, according to Eric Boyum, Denver-based national technology practice leader for Aon PLC.

“Cybersecurity is probably the biggest area in risk management where it’s being employed right now,” said Mr. Boyum. In an environment where “pursuers of information and data are trying to penetrate, impose, and compromise networks constantly,” AI is being used by private companies, governments and other groups to mitigate the threat, he said.

For example, Redmond, Washington-based Microsoft Corp.’s Windows Defender Advanced Threat Protection service uses machine learning and AI to determine whether a cyber threat is active and what action to take to protect an organization’s network.
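The general pattern behind such tools can be illustrated with a toy sketch: score a security event from weighted signals, then map the score to a response. This is not Microsoft’s implementation; the signal names, weights and thresholds are all invented for illustration, and a real system would learn them from labeled telemetry.

```python
# Toy ML-style threat triage: score an event from weighted signals,
# then map the score to an action. All values here are assumptions.

WEIGHTS = {
    "unsigned_binary": 0.4,
    "network_beaconing": 0.3,
    "registry_persistence": 0.2,
    "known_bad_hash": 0.9,
}

def threat_score(signals):
    """Sum the weights of the recognized signals present in the event."""
    return min(1.0, sum(WEIGHTS[s] for s in signals if s in WEIGHTS))

def triage(signals):
    """Map a score to an action: log, alert an analyst, or isolate the host."""
    score = threat_score(signals)
    if score >= 0.8:
        return "isolate_host"
    if score >= 0.4:
        return "alert_analyst"
    return "log_only"

print(triage(["known_bad_hash"]))   # isolate_host
print(triage(["unsigned_binary"]))  # alert_analyst
```

The point of the sketch is the two-step shape the article describes: first decide whether a threat is active, then decide what action to take.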

“We build tools that people can use or adapt that have a machine learning function to them. We are selling them as tools, not as artificial intelligence agents, so to speak,” said Tom Easthope, a member of the Risk & Insurance Management Society Inc.’s Strategic and Enterprise Council and director of enterprise risk management at Microsoft.

AI-based technologies are also being used to manage brand and reputation “to monitor and alert companies when there’s negative sentiment being developed, whether in social or traditional forms of media,” Mr. Boyum said.

In an environment where one negative comment can go viral, real-time social media listening or monitoring tools use AI technology to help companies track and monitor sudden changes on social media around their brand and online reputations, experts say.
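At its simplest, the “sudden change” detection these listening tools perform amounts to comparing current volume against a recent baseline. A minimal sketch, assuming hourly counts of negative mentions and an arbitrary window and multiplier:

```python
# Illustrative spike detector for brand monitoring: flag an hour
# whose negative-mention count far exceeds the recent average.
# Window size and multiplier are arbitrary choices for the sketch.

def mention_spike(counts, window=6, multiplier=3.0):
    """Return indices where the count exceeds `multiplier` times
    the mean of the preceding `window` observations."""
    alerts = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] > multiplier * baseline:
            alerts.append(i)
    return alerts

hourly_negatives = [4, 5, 3, 6, 4, 5, 80, 90, 6]
print(mention_spike(hourly_negatives))  # [6, 7]
```

Production tools replace the raw counts with model-derived sentiment scores, but the alerting logic follows the same baseline-versus-now comparison.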

AI used in regulatory compliance

Regulatory compliance is another area where AI-based technologies are being deployed.

“Banks use it to manage and monitor their traders, sometimes they use it for compliance around the Foreign Corrupt Practices Act, or they utilize it around money laundering or credit card fraud,” Mr. Boyum said.

Banks use the technologies to mitigate corruption by analyzing data from different sources, identifying patterns in transactions and detecting any unusual transactional activity related to employees, customers and third parties, experts say.
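A stripped-down sketch of that kind of pattern detection: flag transactions whose amount deviates sharply from an account’s history. Real bank systems use far richer features than amount alone, and the z-score cutoff here is an assumption for the sketch.

```python
import statistics

# Illustrative anomaly check: flag amounts that sit far outside
# an account's historical distribution. Cutoff is an assumption.

def unusual_transactions(history, new_amounts, z_cutoff=3.0):
    """Flag amounts more than `z_cutoff` standard deviations
    from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [a for a in new_amounts if abs(a - mean) / stdev > z_cutoff]

past = [120, 95, 130, 110, 105, 125, 98, 115]
print(unusual_transactions(past, [118, 4000]))  # [4000]
```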

New York-based credit card giant Mastercard Inc. uses artificial intelligence and machine learning in three key areas: to manage operational risk, to detect and prevent fraud and to mitigate insider threats, according to Ed McLaughlin, Mastercard’s St. Louis-based president of operations and technology.

“The primary set of fraud patterns we look for is transactional fraud, credit card fraud (and) retail spending fraud with consumers,” said Mr. McLaughlin. “It’s not only about being able to identify the fraud, but about taking action to prevent the fraud from happening.”

Using machine learning, Mastercard analyzes data in real time across every transaction that’s flowing through its system, Mr. McLaughlin said. The move to real-time analysis has led to a 40% reduction in fraud and a 50% reduction in so-called false positives, where transactions are incorrectly flagged as fraudulent, unintentionally preventing business from occurring, he said.
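The false-positive figure reflects a threshold tradeoff: raising a fraud-score cutoff blocks less legitimate business but risks missing fraud. A toy illustration with invented scores (not Mastercard’s model or data):

```python
# Toy threshold tradeoff: count fraud caught vs. legitimate
# transactions wrongly blocked at a given score cutoff.

def evaluate(cutoff, scored):
    """`scored` is a list of (fraud_score, is_actually_fraud) pairs.
    Returns (fraud caught, legitimate transactions blocked)."""
    caught = sum(1 for s, fraud in scored if fraud and s >= cutoff)
    false_pos = sum(1 for s, fraud in scored if not fraud and s >= cutoff)
    return caught, false_pos

scored = [(0.95, True), (0.70, True), (0.60, False),
          (0.30, False), (0.10, False)]
print(evaluate(0.5, scored))   # (2, 1) - one good transaction blocked
print(evaluate(0.65, scored))  # (2, 0) - same fraud caught, none blocked
```

Better models separate the two score distributions more cleanly, which is what lets both fraud and false positives fall at once, as described above.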

“In the financial services and transaction processing industry — a lot like the insurance industry — there’s a huge focus on loss reduction, but you also need to focus on the business prevention and on the customer experience,” Mr. McLaughlin said.

Recognizing, evaluating images

The combination of artificial intelligence and image recognition can make detection more precise and increase a company’s ability to detect manufacturing faults, according to Manan Sagar, chief technology officer for insurance at Fujitsu UK in London.

Fujitsu, for example, has applied image recognition and AI in wind turbine manufacture, he said.

“In the past, every time a blade needed to be inspected, an engineer had to do it. With technology, what we’ve been able to do is have photographs taken of the blades and then train a bot and artificial intelligence to read and evaluate those images to see if there are any faults in the blades,” Mr. Sagar said.
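A minimal sketch of the image-based fault check described above: compare an inspection image against a known-good reference and flag the blade if too many pixels differ. Images are tiny grayscale grids here, and the tolerances are invented; Fujitsu’s system would use trained vision models rather than raw pixel comparison.

```python
# Illustrative fault check: flag an inspected image if more than
# `frac_cutoff` of its pixels differ from a known-good reference
# by more than `pixel_tol` gray levels. Thresholds are assumptions.

def fault_suspected(reference, inspected, pixel_tol=20, frac_cutoff=0.10):
    total = 0
    differing = 0
    for ref_row, ins_row in zip(reference, inspected):
        for r, p in zip(ref_row, ins_row):
            total += 1
            if abs(r - p) > pixel_tol:
                differing += 1
    return differing / total > frac_cutoff

good = [[200, 200], [200, 200]]
cracked = [[200, 40], [200, 200]]   # one dark pixel, e.g. a crack
print(fault_suspected(good, cracked))  # True (1 of 4 pixels differs)
```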

As well as being more accurate and much quicker than human intervention, this application could also reduce business and liability costs for insurers, he said.

“It will have an impact on insurance costs because we know the quality of the blade is improved, so you should have less business interruption costs and less possibility of a business interruption claim — and for that matter, liability should be lower, which would have a knock-on effect on premiums,” Mr. Sagar added.

Industrial facilities are using smart sensors, said Jaap de Vries, Providence, Rhode Island-based vice president for innovation, science and technology at mutual insurer FM Global. “Now you have full emerging operational technology and information technology that makes a plant or an industrial facility a cognitive organization,” he said.

“What we are interested to see is — if that network of sensors is in place and is connected — how we can use that to make predictions about the health of machinery or if there’s a piece of equipment about to break down?” Mr. de Vries said.

Sensors also can be added to buildings to prevent losses, such as from a leak. “We can make the building smart. There are a lot of opportunities there,” he said.
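One simple form the machinery-health prediction above can take is trend-watching on a sensor stream: alert when a rolling average drifts past a healthy operating limit. The sensor values, window and limit below are assumptions for the sketch, not FM Global’s method.

```python
from collections import deque

# Illustrative predictive-maintenance check: alert when a sensor's
# rolling mean first exceeds a healthy limit. Values are invented.

def drift_alert(readings, limit, window=4):
    """Return the index at which the rolling mean first exceeds
    `limit`, or None if it never does."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and sum(recent) / window > limit:
            return i
    return None

vibration_mm_s = [2.1, 2.0, 2.2, 2.1, 2.3, 3.9, 4.4, 4.8]
print(drift_alert(vibration_mm_s, limit=3.0))  # 6
```

Averaging over a window, rather than alerting on a single reading, is what separates genuine drift from one-off sensor noise.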

AI is also being deployed in the shipping sector to analyze the real-time behavioral data of vessels to better quantify and manage risk.

“In that scenario, it’s about how we can use artificial intelligence to track their behavior and monitor whether there are exposure clusters around the globe,” said Asha Vellaikal, San Francisco-based head of Marsh Digital Labs, an incubator launched by Marsh LLC in 2018 that experiments with emerging technologies and insurtech.

“Are they going into geographical areas where we are detecting a higher level of risk and can we proactively inform the vessel owner, or change their behavior in exchange for lesser premiums?” she said.
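The geographical-risk check can be sketched as a geofence test: does a vessel’s reported position fall inside any flagged high-risk zone? The zones below are modeled as simple lat/lon bounding boxes with invented coordinates, not real risk data or Marsh’s approach.

```python
# Illustrative geofence check for vessel exposure monitoring.
# Boxes are (lat_min, lat_max, lon_min, lon_max); values invented.

HIGH_RISK_ZONES = {
    "zone_a": (10.0, 15.0, 50.0, 55.0),
    "zone_b": (-5.0, 0.0, 100.0, 105.0),
}

def zones_entered(lat, lon):
    """Return the names of all high-risk boxes containing (lat, lon)."""
    return [name for name, (la0, la1, lo0, lo1) in HIGH_RISK_ZONES.items()
            if la0 <= lat <= la1 and lo0 <= lon <= lo1]

print(zones_entered(12.5, 52.0))  # ['zone_a']
print(zones_entered(40.0, 10.0))  # []
```

Run against a live AIS position feed, a check like this would drive the proactive owner notifications described above.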

Adoption of AI in its infancy

While there is general awareness that the use of AI can help make sense of the big data generated by the internet of things — the network that connects billions of internet-enabled devices such as self-driving cars and thermostats — the adoption of AI in risk management is still at a relatively early stage, experts say.

“It’s early days. Risk managers have to see concrete examples of how risk was mitigated in a certain industry, using certain information,” said Ms. Vellaikal.

There are several reasons why AI adoption has been slower among risk professionals than in other areas, including the regulatory environment, according to Derek Waldron, a partner in the New York office of consultancy McKinsey & Co. Inc. One example is fair lending regulation in the United States, which governs how banks make credit decisions about whom to lend to.

Legacy systems also challenge the use of AI in risk management activities, he said.

“In many cases, risk functions have been using analytics as part of their toolkits for decades, far longer than many other areas,” said Mr. Waldron.

“In those cases, because risk functions already have well-functioning tools, analytics and processes — even though AI can drive better results — there can be an obvious challenge in driving change in companies’ legacy processes and systems,” he said.

A McKinsey Global Institute report estimates that AI has the potential to generate $3.5 trillion to $5.8 trillion in value across all industries. Of this, risk is one of the sizable contributors, Mr. Waldron said.

“The value of risk analytics including AI is a total of about $500 billion to $900 billion across all industries. We are seeing the adoption of AI in risk management across just about every industry at varying levels of pace,” he said.

Despite the tangible value that AI can bring to risk reduction and prevention, it also raises ethical and regulatory concerns, experts say. Companies capturing and analyzing data with AI for surveillance or monitoring purposes need to be cognizant of privacy laws, for example.

“Number one, are you disclosing it to your clients? Are you letting them know what your data collection methods are and who you are sharing the data with?” said Mr. Derigiotis. “If you are using it in the workplace, are you disclosing ahead of time to let employees know what you’re monitoring and where you are monitoring them?”

Insurers, too, are using AI to better assess risks and to price more accurately, and while this can benefit policyholders, it can also spark concerns around bias, experts say.

There are several elements inherent in AI that could produce bad outcomes, according to Mr. Boyum. “There’s always a possibility based on the way you wrote the algorithm, or recipe for how you’re dealing with data. If that recipe has bad assumptions or unintended biases built into it, then it’s going to produce the possibility of biased recommendations or outcomes,” he said.

Properly used, AI is a great tool, said Peter Miller, CEO of The Institutes, the Malvern, Pennsylvania-based provider of education and research in risk management and property/casualty insurance. “In most applications, you need a human to look at it or look at some significant sample of its output. Then I think it has value.” But AI is also difficult to understand, experts say.

“When you get into the mechanics of how these AI systems work, it’s very difficult even for developers of systems to tell you how an AI system arrived at a result,” said Mr. Miller.

“In real-time, AI systems will make decisions and if an insurer needs to go in front of a regulator and justify that decision, it’s not a simple thing to do. So, I think regulators are going to be very cautious about that,” he said.
