NIST proposes addressing bias in artificial intelligence

The National Institute of Standards and Technology has issued a proposal for identifying and managing bias in artificial intelligence.

“The proliferation of modeling and predictive approaches based on data-driven and machine learning techniques has helped to expose various social biases baked into real-world systems, and there is increasing evidence that the general public has concerns about the risks of AI to society,” the proposal says.

“Improving trust in AI systems can be advanced by putting mechanisms in place to reduce harmful bias in both deployed and in-production technology.

“Such mechanisms will require features such as a common vocabulary, clear and specific principles and governance approaches, and strategies for assurance.”

NIST is inviting public comments on the proposal.

The three-step process recommended by the Gaithersburg, Maryland-based agency in its proposal comprises pre-design, where the technology is devised, defined and elaborated; design and development, where the technology is constructed; and deployment, where the technology is used by, or applied to, various individuals or groups.

A NIST study issued in July 2020 found that even the best of 89 commercial facial recognition algorithms tested had error rates between 5% and 50% in matching photos of people wearing digitally applied face masks with photos of the same people without masks.
