Seeking to promote the development and use of artificial intelligence (AI) technologies and systems that are trustworthy and responsible, NIST today released for public comment an initial draft of the AI Risk Management Framework (AI RMF). The draft addresses risks in the design, development, use and evaluation of AI systems.
The voluntary framework is intended to improve understanding of, and help organizations manage, enterprise and societal risks related to AI systems. It aims to provide a flexible, structured and measurable process for addressing AI risks throughout the AI lifecycle, and it offers guidance on developing and using trustworthy and responsible AI. NIST is also developing a companion practice guide to the AI RMF with additional practical guidance; comments on the framework will also be taken into account in preparing that guide.
“We have developed this draft with extensive input from the private and public sectors, knowing full well how quickly AI technologies are being developed and put to use and how much there is to be learned about related benefits and risks,” said Elham Tabassi, chief of staff of the NIST Information Technology Laboratory (ITL), who is coordinating the agency’s AI work, including the AI RMF.
This draft builds on the concept paper released in December and an earlier Request for Information. Feedback received by April 29 will be incorporated into a second draft, to be issued this summer or fall. On March 29-31, NIST will hold its second workshop on the AI RMF. The first two days will address all aspects of the AI RMF; Day 3 will offer a deeper dive into issues related to mitigating harmful bias in AI.
This week, NIST also released “Towards a Standard for Identifying and Managing Bias within Artificial Intelligence” (SP 1270), which offers background and guidance for addressing one of the major sources of risk affecting the trustworthiness of AI. That publication explains that, beyond the machine learning processes and data used to train AI software, bias is related to broader societal factors, human and systemic institutional in nature, that influence how AI technology is developed and deployed.
The draft framework and publication on bias are part of NIST’s larger effort to support the development of trustworthy and responsible AI technologies by managing risks and potential harms in the design, development, use and evaluation of AI products, services and systems.