Printed from BusinessInsurance.com

NIST proposes addressing bias in artificial intelligence

Posted On: Jun. 22, 2021 2:34 PM CST


The National Institute of Standards and Technology has issued a proposal for identifying and managing bias in artificial intelligence.

“The proliferation of modeling and predictive approaches based on data-driven and machine learning techniques has helped to expose various social biases baked into real-world systems, and there is increasing evidence that the general public has concerns about the risks of AI to society,” the proposal says.

“Improving trust in AI systems can be advanced by putting mechanisms in place to reduce harmful bias in both deployed and in-production technology.

“Such mechanisms will require features such as a common vocabulary, clear and specific principles and governance approaches, and strategies for assurance.”

NIST is inviting public comments on the proposal.

The three-step process recommended by the Gaithersburg, Maryland-based agency in its proposal comprises pre-design, where the technology is devised, defined and elaborated; design and development, where the technology is constructed; and deployment, where the technology is used by, or applied to, various individuals or groups.

A NIST study issued in July 2020 found that even the best of 89 commercial facial recognition algorithms tested had error rates between 5% and 50% when matching digitally applied face masks with photos of the same person without a mask.