Human oversight of AI hiring tools needed to limit discrimination claims

Employers are increasingly using artificial intelligence tools to speed up hiring processes, such as resume screening, but human oversight remains essential to prevent unintentional discrimination.

Employment practices liability insurance policies should cover AI-related claims, but companies should expect questions from insurers, several sources say.

Evolving state and local regulations on employers’ use of AI, including rules in California that took effect Oct. 1, increase the need for companies to manage the risks, according to experts (see related story below).


AI can help HR departments review large numbers of applicants faster, but discrimination laws such as Title VII of the Civil Rights Act, the Americans with Disabilities Act and the Age Discrimination in Employment Act “are still going to apply,” said Joni Mason, New York-based senior vice president of national executive and professional risk solutions claims at USI Insurance Services.

Some states and cities have enacted laws governing the use of AI in the hiring process, so employers operating in multiple jurisdictions need to be cautious and ensure they comply, she said.

“The new technology of AI brings up the age-old risk of discrimination, namely disparate impact,” said Jon Janes, Austin, Texas-based senior vice president and account executive for the management liability practice at Woodruff Sawyer, a unit of Arthur J. Gallagher & Co.

AI models used in hiring help sort potential candidates and identify desired characteristics or keywords on resumes. “While not discriminatory on their face, it can ultimately result in some impact against a protected class,” leading to litigation, Mr. Janes said.
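For readers unfamiliar with how disparate impact is typically quantified, the sketch below applies the EEOC’s informal “four-fifths” guideline to a screening tool’s pass-through rates by group. The group labels, counts and the 0.80 threshold are illustrative assumptions, not figures from any case or regulation, and a ratio below the threshold is a common red flag for further review rather than a legal finding.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check that an
# employer or auditor might run on an AI screener's pass-through rates.
# Group names and counts below are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool advanced."""
    return selected / applicants if applicants else 0.0

def impact_ratios(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    Under the four-fifths guideline, a ratio below 0.80 is commonly
    treated as a signal of potential disparate impact.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in results.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical screener output: (advanced, applicants) per group.
results = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(results).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```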

Legal, regulatory landscape shifts

Several lawsuits have been filed alleging discrimination against job applicants by AI hiring tools, and more are expected.

In a case filed Aug. 4 in federal court in Michigan, an applicant alleges that Sirius XM’s use of AI hiring technology discriminated against him based on his race, resulting in his applications for about 150 IT positions at the company being downgraded and rejected despite his qualifications. He seeks class-action status for the suit.

On May 16, a federal judge in the Northern District of California granted conditional class-action certification for age discrimination claims in a closely watched lawsuit that a job hunter filed in 2023 against Workday, a tech company that handles HR services for employers. In Mobley v. Workday Inc., the plaintiff alleges that Pleasanton, California-based Workday’s AI systems and screening tools prevented him from finding a job despite applying for at least 100 positions since 2018.

Most of the litigation so far hasn’t targeted employers directly but instead has focused on the technology companies that developed the AI screening tools, which allegedly built biases into algorithms, said Sara Jodka, a member at law firm Dickinson Wright in Columbus, Ohio.

President Donald J. Trump’s executive order discouraging federal enforcement of disparate-impact claims could create a divide in enforcement among federal agencies, courts and states, Ms. Jodka said.

In April, he ordered federal agencies to deprioritize investigations and lawsuits related to disparate-impact liability.

New York and California are leading the way in employment laws related to AI, said Scott R. Green, Garden City, New York-based partner at law firm Goldberg Segalla.

New York City’s Local Law 144, enacted in 2021, mandates annual bias audits of automated employment decision tools, including AI, Mr. Green said.

As of Oct. 1, updated regulations from the California Civil Rights Council ban employers in the state from using AI or automated decision systems that discriminate against applicants on the basis of characteristics protected under the state’s Fair Employment and Housing Act.

The regulations require employers to retain data from all automated decision systems and encourage the use of anti-bias testing.

California’s regulations hold employers responsible for the outcomes of any automated hiring system, including AI, Mr. Green said.

“You can’t turn a blind eye to this. You can’t say, well, it was the machine that churned out this data, we can’t be held responsible. No, they want you to know you are held responsible,” he said.

EPL coverage should respond

Employment practices liability insurance policies should cover AI-related claims, sources say.

If a lawsuit alleges discrimination, that’s generally covered under the definition of an employment practices wrongful act, said Mary Anne Mullin, senior vice president, EPL and fiduciary product leader at QBE North America.

“We haven’t seen enough to know how the coverage is going to be determined, but if it’s a discrimination, wrongful termination, failure to hire because of gender bias, for instance, that’s something that’s traditionally covered under the EPL policy,” Ms. Mullin said.

EPL insurance generally covers AI-related claims, said Kelly Thoerig, Richmond, Virginia-based U.S. directors and officers liability and employment practices liability product leader at Lockton.

“Discrimination is discrimination, whether it was done in real life or allegedly through some electronic means,” Ms. Thoerig said.

Insurers are increasingly scrutinizing employers’ AI use, she said. “This is an issue that underwriters are digging in on and asking more questions. It’s not simply a check-the-box: ‘Do you use AI or not?’” she said.

In QBE North America’s Employment Practices Liability report released in August, about 51% of respondents identified the use of AI in HR as an area most likely to lead to employment-related claims in the next 12 months. The survey covered 200 legal and HR professionals at organizations with annual revenues between $500 million and $5 billion.

Some 54% of respondents in QBE’s survey believe employee education and training related to the use of AI for HR purposes should be strengthened to reduce potential claims.

Strong governance and risk management are critical, said Will Lehman, Bloomington, Indiana-based global director of risk management at Cook Group and a board director of the Risk & Insurance Management Society.

New AI tools are becoming available every week, making it difficult for information security teams to shut down or block access to all of them, he said.

“You can’t control everything that your team members do,” Mr. Lehman said.


Regular system audits educate organizations on ‘AI guardrails’

Businesses planning to use AI hiring tools should carefully evaluate the systems for potential bias before implementing them.

Employers should audit the systems to ensure they do not have a disparate impact on any protected class, said Joni Mason, New York-based senior vice president of national executive and professional risk solutions claims at USI Insurance Services.

“That’s really the only way you can keep up and update the data that’s fed into these systems, because it’s only as good as what goes in,” Ms. Mason said.

Employers should question third-party vendors about how they verify the information and data their systems produce, she said.

Organizations should establish cross-functional groups to vet AI systems and ensure they comply with privacy and AI laws, said Will Lehman, Bloomington, Indiana-based global director of risk management at Cook Group and a board director of the Risk & Insurance Management Society.

Responsible AI policies should be implemented, including regular bias audits of the models being used, Mr. Lehman said. “That really educates people in the organization on what the AI guardrails are and what you can and can’t do,” he said.

Employers often rely on third parties to create these models and may not fully understand them, said Jon Janes, Austin, Texas-based senior vice president and account executive for the management liability practice at Woodruff Sawyer, a unit of Arthur J. Gallagher & Co.

How the models perform and affect hiring outcomes should be regularly evaluated, Mr. Janes said.

Humans need to be in the loop on all final hiring decisions, said Scott R. Green, Garden City, New York-based partner at law firm Goldberg Segalla.

“Don’t just throw it to the machine and let it give you all the results,” Mr. Green said.

Employers should include indemnity clauses in vendor contracts, he said.