YouTube case at high court could shape protections for ChatGPT, AI

Posted On: Apr. 24, 2023 10:53 AM CST

(Reuters) — When the U.S. Supreme Court decides in the coming months whether to weaken a powerful shield protecting internet companies, the ruling also could have implications for rapidly developing technologies like artificial intelligence chatbot ChatGPT.

The justices are due to rule by the end of June on whether Alphabet Inc.’s YouTube can be sued over its video recommendations to users. The case tests whether a U.S. law that shields technology platforms from legal responsibility for content their users post online also applies when companies use algorithms to target users with recommendations.

What the court decides about those issues is relevant beyond social media platforms. Its ruling could influence the emerging debate over whether companies that develop generative AI chatbots like ChatGPT from OpenAI, a company in which Microsoft Corp. is a major investor, or Bard from Alphabet’s Google should be protected from legal claims like defamation or privacy violations, according to technology and legal experts.

That is because the algorithms powering generative AI tools like ChatGPT and its successor, GPT-4, operate in a way somewhat similar to the algorithms that suggest videos to YouTube users, the experts added.

“The debate is really about whether the organization of information available online through recommendation engines is so significant to shaping the content as to become liable,” said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on AI. “You have the same kinds of issues with respect to a chatbot.”

Representatives for OpenAI and Google did not respond to requests for comment.

During arguments in February, Supreme Court justices expressed uncertainty over whether to weaken the protections enshrined in the law, known as Section 230 of the Communications Decency Act of 1996. While the case does not directly relate to generative AI, Justice Neil Gorsuch noted that AI tools that generate “poetry” and “polemics” likely would not enjoy such legal protections.

The case is only one facet of an emerging debate over whether Section 230 immunity should extend to AI models that are trained on troves of existing online data but are capable of producing original works.

Section 230 protections generally apply to third-party content from users of a technology platform and not to information a company helped to develop. Courts have not yet weighed in on whether a response from an AI chatbot would be covered.