
AI use in financial services sector increases risks: Experts

While the use of artificial intelligence and machine learning technologies by the financial services sector can transform systems and processes, it also raises concerns about bias, inclusion and risk management, according to experts speaking Wednesday at the first hearing of the House of Representatives Committee on Financial Services’ artificial intelligence task force on Capitol Hill.

“When done right, artificial intelligence can mean innovative underwriting models that allow millions more people access to credit and financial services,” said Rep. Bill Foster, D-Illinois, chair of the task force on artificial intelligence, in opening remarks at the hearing.

For example, artificial intelligence can be used to better detect fraud and money-laundering, and regulators use it to improve market surveillance and police bad actors, he said.

However, artificial intelligence also raises key questions such as “How can we be sure that artificial intelligence credit underwriting models are not biased? Who is accountable if artificial intelligence algorithms are just a black box that nobody can explain when it makes a decision?” Rep. Foster said.

Artificial intelligence also runs on enormous amounts of data, raising concerns about where that data comes from and how it is protected, he said.

“Machine learning algorithms have become more sophisticated and pervasive tools for automated decision making,” said Dr. Nicol Turner-Lee, fellow, governance studies, Center for Technology Innovation at the Brookings Institution in Washington, D.C., during testimony.

“These models make inferences from data about people including their identity, their demographic attributes and likely future preferences,” Dr. Turner-Lee said.

Despite the models’ greater facilitation of efficiency and cognition, “the online economy has not resolved the issue of racial bias,” she said.

These issues are “troubling and dangerous,” Dr. Turner-Lee said, in particular for African Americans and Latinos who have been “ill-served within the financial services market.”

“Artificial intelligence offers the possibility of greater financial inclusion, but its rapid growth in an already complex financial system presents major challenges regarding regulation and policymaking, risk management, as well as ethical, economic and social hurdles,” said Dr. Bonnie Buchanan, head of school of finance and accounting and professor of finance, Surrey Business School at the University of Surrey in the U.K., in testimony.

Machine learning algorithms can also potentially introduce bias and discrimination, she said.

“Deep learning provides predictions, but it does lack insight as to how the variables are being used to reach these predictions,” Dr. Buchanan said.

Hiring and credit scoring algorithms can exacerbate inequities due to biased data, she said.

Despite its benefits, machine learning raises “serious risks” for institutions and consumers, said Dr. Douglas Merrill, founder and CEO, ZestFinance.

“Machine learning models are opaque and inherently biased. Lenders put themselves, consumers and the safety and soundness of our entire financial system at risk if they do not appropriately validate and monitor machine learning models,” said Mr. Merrill.

“Getting this mix right, enjoying machine learning’s benefits while employing responsible safeguards is difficult,” he said.

Seismic shifts in the financial services landscape create new risks, said R. Jesse McWaters, financial innovation lead at the World Economic Forum.

“The enormous complexity of some advanced artificial intelligence systems can make them opaque, challenging traditional models of regulation and compliance,” Mr. McWaters said.

While these threats are real, “It is critical we avoid knee-jerk reactions informed by fear,” he said.

“The advent of AI does not call into question the fundamental principles that inform our regulatory frameworks. Rather, it demands that we be open to using both existing and emerging techniques to ensure we remain aligned to these principles even against a backdrop of rapid technological change,” he said.

With the right governance and oversight, artificial intelligence has the potential to do enormous good, he said.

“The use of artificial intelligence and machine learning is not without challenges and questions, just like any other technology,” said Rep. French Hill, R-Arkansas.

“As policymakers, we need to ensure we are asking the right questions, about appropriate testing and evaluating the new technology so that the ultimate benefits are benefiting consumers,” Rep. Hill said.




