
AI tools inclined to ‘hallucinations’: Expert

Generative artificial intelligence tools such as ChatGPT are prone to “hallucinations,” including fabricated answers, a consultant warns.

They also have a tendency to provide incorrect, if “superficially plausible,” information, which may be the most common issue associated with using these tools, Rob Friedman, senior director analyst with Stamford, Connecticut-based Gartner Inc.’s legal and compliance practice, said in a statement.

“Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted,” he said.

Other issues include bias, intellectual property and copyright risks, and cyber fraud and consumer protection risks, the statement said.

Few organizations have corporate policies on the use of AI, which could expose them to the loss of confidential information if employees enter it into ChatGPT as part of a work project, Stephanie Snyder Frenier, senior vice president, business development leader-professional and cyber solutions, at CAC Specialty in Chicago, said at the Risk & Insurance Management Society Inc.’s Riskworld annual conference in Atlanta earlier this month.