
Regulators target disability bias risks in AI tools


As more employers use artificial intelligence software to screen job applicants and employees, they should heed recent guidance from federal agencies warning about potential disability discrimination pitfalls, employment experts say.

In technical guidance issued earlier this month, the U.S. Equal Employment Opportunity Commission said the most common ways employers that use AI may discriminate against disabled employees and applicants are by not providing a “reasonable” accommodation; using an algorithmic decision-making tool to screen out individuals with disabilities; and using tools that violate the Americans with Disabilities Act’s restrictions on disability-related inquiries and medical exams.

The agency said in October it was beginning an initiative to ensure that AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights law.

The U.S. Department of Justice issued guidance the same day as the EEOC’s, describing how algorithms and artificial intelligence can lead to disability discrimination in hiring. The DOJ enforces disability discrimination laws with respect to state and local government employers. The Federal Trade Commission has also said it is monitoring the issue.

Cities and states are addressing the issue, too. Laws passed in Maryland and Illinois require applicants’ consent before artificial intelligence is used during interviews, and a New York City law that takes effect in 2023 bans its use in making employment decisions unless the technology has first been subject to a “bias audit.”

California’s Fair Employment & Housing Council also released draft modifications to employment regulations regarding “automated-decision systems” in March.

According to the EEOC guidance, the types of AI software employers use include resume scanners that prioritize applications using certain keywords; virtual assistants or chatbots that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; and video interviewing software that evaluates candidates based on their facial expressions and speech patterns.

Andrew F. Maunz, of counsel with Jackson Lewis LLP in Pittsburgh and former legal counsel at the EEOC, said that while the technical guidance was not voted on by the commission and therefore does not have the force of law, “it certainly highlights for everyone some of the potential issues that could arise through artificial intelligence that lead to ADA violations,” and is a “good tool” for employers that use AI in their hiring decisions.

The guidelines are “a good example of a government agency adapting to the way in which the modern workplace is beginning to operate,” said Gerald Maatman, a partner with Seyfarth Shaw LLP in Chicago. “They are starting to stake out their positions,” he said.

Many employers use AI systems obtained from outside vendors and may be unaware of how those systems affect disabled applicants or employees, and may fail to provide them with the required information regarding reasonable accommodations, said Paul E. Starkman, a member of law firm Clark Hill PLC in Chicago.

Experts say examples of AI’s misuse can include software excluding someone with a vision disability for failing to make good eye contact, or a chatbot screening out an applicant who says he or she cannot stand for 30 minutes without giving the applicant the chance to explain that the work could be done from a wheelchair.

AI can be helpful in masking names and other information, which helps eliminate bias, but it presents risks because employers give up some control by turning the selection process over to machine learning, said Jill Pedigo Hall, a shareholder with von Briesen & Roper SC in Milwaukee.

Job applicants and employees should be told they are being evaluated using AI tools, said Lauren A. Daming, an attorney with Greensfelder, Hemker & Gale P.C. in St. Louis.

“If you don’t tell them, they don’t have a reason to anticipate they might need an accommodation,” she said.

“You shouldn’t use AI to the exclusion of other methods of selecting people because AI will have a tendency to decide negatively against those with disabilities, just by the nature of it, unless it’s highly sophisticated,” said Gerald T. Hathaway, a partner with Faegre Drinker Biddle & Reath LLP in New York.

Allyson K. Thompson, a partner with Kaufman Dolowich & Voluck LLP in Los Angeles, said that to address the issue, a food manufacturer with which she worked created a parallel system, which did not use AI, to evaluate disabled workers.

If an AI system suggests mainly white, non-disabled males for interviews, “you need to go back and ask why that’s occurring,” Ms. Hall said.

The EEOC is expected to monitor the issue and respond if it discerns problems, observers say.

“The fact that this guidance has come out” shows “that the EEOC is willing and ready to take action,” said Marissa A. Mastroianni, an associate with Cole Schotz PC in Hackensack, New Jersey.

Joseph O’Keefe, a partner with Proskauer Rose LLP in New York, said the EEOC may follow its AI-related guidance on ADA discrimination with guidance on other issues, which observers say may include sex and race discrimination.