
Companies face potentially tighter constraints on AI use as regulators lead drive to assess risks

The regulatory and potentially statutory framework emerging to govern the development and use of artificial intelligence will likely create compliance requirements for organizations and possibly legal exposures as well.

“The regulations will give the legal framework and the guardrails within which companies need to operate,” said Pamela Hans, managing shareholder of the Philadelphia office of Anderson Kill P.C.

The movement toward such a framework is just beginning, though several major companies have acted independently to restrict the use of generative AI by employees due to privacy and security concerns, according to news reports.

AI regulation is in the early stages of development, said Jaymin Kim, Toronto-based senior vice president, cyber risk practice, for Marsh LLC. The European Union is arguably the furthest ahead, she said. 

While it is “likely we will see compliance requirements emerge, we’ve yet to see any completely wide-sweeping” requirements that could affect policy wordings, she said. 

In April 2021, the European Commission proposed its Artificial Intelligence Act. The regulation would place AI applications in three categories: applications and systems that create an unacceptable risk; high-risk applications; and applications not explicitly banned or listed as high-risk, according to information on the EU’s Artificial Intelligence Act website. 

“To a certain extent outside of the U.S. or for multinational companies, this is a new exposure,” said Meghan Dalton, Chicago-based partner at Clyde & Co. Organizations will have to ensure they comply with the EU regulation once it is finalized and implemented.

In the United States, individual states are leading the push for regulation, much as they did with data privacy, experts say.

“You’re starting to see states take the helm” in the absence of a federal law, said Nadia Hoyte, New York-based national cyber practice leader for USI Insurance Services LLC, noting New York state’s recent Proposed Insurance Circular Letter.

Sent Jan. 17 by the New York State Department of Financial Services to New York-based insurers and others, the circular provides guidance on the “use of artificial intelligence systems and external consumer data and information sources in insurance underwriting and pricing.” The department asked for feedback on the proposed guidance by March 17.

Among other things, the circular addresses concerns over potential discrimination through the use of AI, transparency over the use of the technology, and data and privacy concerns.

According to The Council of State Governments, since 2019, 17 states have enacted 29 bills focused on regulating the design, development and use of artificial intelligence, primarily addressing data privacy and accountability.

The National Conference of State Legislatures says that in the 2023 legislative session, at least 25 states, the District of Columbia and Puerto Rico introduced artificial intelligence bills, and 18 states and Puerto Rico adopted resolutions or enacted legislation.

Federal regulatory activity so far has taken the form of Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence.” According to the White House, the order requires the director of the Office of Management and Budget, in coordination with the directors of the Office of Science and Technology Policy, the Domestic Policy Council and the National Economic Council, to issue a memorandum guiding federal agencies in developing regulatory and nonregulatory approaches to technologies and industrial sectors empowered or enabled by artificial intelligence.

“Regulators want to protect against unfair uses of AI,” said Marshall Gilinsky, a shareholder in Boston and New York for Anderson Kill P.C., who practices in the firm’s insurance recovery and commercial litigation departments.