NIST seeks input on AI risk management framework

The National Institute of Standards and Technology is soliciting comments on developing an Artificial Intelligence Risk Management Framework (AI RMF) that would improve the ability of organizations to build trustworthiness into the design, development, and deployment of AI systems.

“The framework aims to promote the development of innovative approaches to the characteristics of trustworthiness such as accuracy, explainability and interpretability, reliability, data protection, robustness, security (resilience) and reduction of unintentional and/or harmful biases and harmful uses,” NIST wrote in a request for information published in the Federal Register on July 28.

NIST is seeking input on how the framework should address AI risk management challenges, including identifying, assessing, prioritizing, responding to, and communicating AI risks; how organizations currently assess and manage AI risks, including bias and harmful consequences; and how AI can be engineered to reduce potential negative impacts on individuals and society, the RFI said.

NIST also invites suggestions on common definitions and characterizations for the aspects of trustworthiness, as well as best practices that would be consistent with an AI risk framework.

NIST plans to develop the AI RMF using the same open, collaborative process it used for its widely adopted 2014 Cybersecurity Framework and 2020 Privacy Framework.

Responses are due Aug. 19. Read the full RFI here.

About the author

Shourjya Mookerjee is co-editor for GCN and FCW. He is a graduate of the University of Maryland, College Park, and has written for Vox Media, Fandom, and a number of news outlets in the capital area. He can be reached at [email protected] – or you can find him on Twitter @byShourjya railing about sports, cinematography and the importance of local journalism.