AI-written articles spark liability concerns

Media organizations that publish artificial intelligence-generated content should be transparent about how and when they are using AI and ensure that human checks and balances are in place, brokers say.

Insurers are not excluding or limiting coverage for AI-related exposures, but media organizations can expect more questions about their use of AI-generated content when they renew their media liability policies, they say.

In November, Sports Illustrated was called out for publishing content allegedly created by AI-generated authors with fake bylines and writer profiles, as reported by the Futurism website.

In a statement issued in response to the report, a spokesperson for Arena Group, the parent company of Sports Illustrated, said the articles in question were licensed content from a third-party company, AdVon Commerce, and that AdVon had assured them the articles were written and edited by humans.

Sports Illustrated is not the only media organization to have come under scrutiny in connection with AI-generated content.

Earlier this year, newspaper publisher Gannett Co. Inc. stopped using an AI sports-writing tool after some of its articles were criticized on social media. Technology news site CNET was also reported to be publishing articles generated by AI without disclosing the practice.

Most large companies are experimenting with generative AI and there are many potential applications, said Eric Boyum, Denver-based managing director and national leader of Aon PLC’s technology and communications industry practice.

“The risks have to be assessed for each individual application. It’s not just what it is and how do its risks work but what are you doing with it, how have you trained it, and in what ways are you governing that,” Mr. Boyum said.

Whether AI-generated works can be protected by copyright under U.S. law remains unclear, and disputes are being handled on a case-by-case basis in court, brokers said.

Mr. Boyum said a large music label client recently asked whether it would be covered if it used OpenAI’s DALL-E image generator to create an album cover and then received a copyright claim related to the image.

“The answer was, if you get any claim for copyright infringement, we certainly think you’re getting into the policy. But there’s this other provision that says if you intentionally do something that you know is wrong that creates a particular infringement then we probably shouldn’t cover you,” he said.

Statutory or case law has yet to establish definitively whether using such an AI model is legitimate, he said.

It’s becoming increasingly difficult to distinguish between AI-generated and human-generated content, so transparency and disclosure are key, said Jaymin Kim, a Toronto-based senior vice president in the cyber risk practice at Marsh LLC.

“In the absence of watermarking technology that’s foolproof and widespread, using proactive disclaimers to ensure that users are informed when they’re interacting with an AI chatbot or AI-generated content is certainly a best practice,” Ms. Kim said.

Whenever businesses use AI to generate content, there should still be human oversight, she said.

From a coverage perspective, media liability, cyber liability and technology errors and omissions are among the policies that could respond, brokers said.

Businesses that generate AI content should be aware that the content they create is not necessarily protected by copyright, given the current uncertain legal environment, said Joe Quinn, Chicago-based Midwest cyber claims leader at Willis Towers Watson PLC.

Media liability coverage can help protect companies from potential copyright issues, Mr. Quinn said. Such insurance typically covers third-party claims arising from copyright and trademark infringement, invasion of privacy and defamation.

Businesses that engage with AI vendors to help them generate content also need to be clear about the data sets they’re providing, he said.

“They can try to screen for copyrighted material at that stage, but if they’re unable to do that they should build in those indemnity provisions in their contracts with those AI vendors” and make sure the vendors carry some form of media coverage to provide an extra layer of indemnity and defense, Mr. Quinn said.

“Conceptually, existing insurance policies, if triggered, would respond to AI-specific exposures and claims,” Ms. Kim said. “We’re not seeing wholesale exclusions or limitations when it comes to AI-specific exposures.”

But intentional acts are often excluded, she said.