
Commerce Department Announces New Guidance, Tools 270 Days After President Biden’s Executive Order on AI

Letters "AI" appear in blue on a background of binary numbers, ones and zeros.

Credit:

NicoElNino/Shutterstock

Marking 270 days since President Biden issued his Executive Order on the Safe, Secure, and Trustworthy Development of AI, the U.S. Department of Commerce today announced the release of new guidance and software to help improve the safety, security, and trustworthiness of artificial intelligence (AI) systems.

The department’s National Institute of Standards and Technology (NIST) has released three final guidance documents, first published in April for public comment, as well as a draft guidance document from the U.S. AI Safety Institute that aims to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. Additionally, the Commerce Department’s U.S. Patent and Trademark Office (USPTO) has released updated guidance on patent eligibility to address innovation in critical and emerging technologies, including AI.

“For all its potentially transformative benefits, generative AI also carries risks that are significantly different from those we see with traditional software. These guidance documents and testing platform will inform software developers of these unique risks and help them develop ways to mitigate those risks while supporting innovation.” —Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and Director of NIST

Read the full press release from the Department of Commerce.

Read the White House fact sheet on administration-wide actions on AI.

Background: NIST Delivers 5 Products in Response to 2023 Executive Order on AI

The NIST publications cover various aspects of AI technology. Two of them are being released today for the first time: One is the initial public draft of a guidance document from the U.S. AI Safety Institute intended to help software developers mitigate the risks stemming from generative AI and dual-use foundation models—AI systems that can be used for either beneficial or harmful purposes. The other is a testing platform designed to help users and developers of AI systems measure how certain types of attacks can degrade the performance of an AI system.

Of the three remaining publications, two are guidance documents designed to help manage risks related to generative AI (the technology that powers many chatbots as well as text-based image and video creation tools) and serve as complementary resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF). The third provides a blueprint for U.S. stakeholders to work with others around the world on AI standards. All three publications were previously released as drafts for public comment on April 29, and NIST is now releasing their final versions.

The two resources NIST is announcing today for the first time are:

Preventing the misuse of dual-use foundation models

AI foundation models are powerful tools that are useful across a broad range of tasks and are sometimes called “dual-use” because of their potential for both benefit and harm. NIST’s AI Safety Institute has released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety, and national security.

The draft guidelines propose seven key approaches to mitigate the risks of model misuse, along with recommendations on how to implement and be transparent about them. Together, these practices can help prevent models from enabling harm through activities such as developing biological weapons, conducting offensive cyber operations, and producing child sexual abuse material and non-consensual intimate images.

NIST is accepting public comments on the draft Managing Misuse Risk for Dual-Use Foundation Models until September 9, 2024, at 11:59 p.m. Eastern Time. Comments may be submitted to NISTAI800-1(at)nist(dot)gov.

Testing how AI system models respond to attacks

One of the vulnerabilities of an AI system is the model at its core. By exposing a model to large amounts of training data, developers teach it to make decisions. But if adversaries poison the training data with inaccuracies (for example, by introducing data that can cause the model to mistake stop signs for speed limit signs), the model can make incorrect, potentially disastrous decisions. Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra, a new software package aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks.

The open-source software, available for free download, could help the community, including government agencies and small and medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance. This software responds to Section 4.1(ii)(B) of the executive order, which tasks NIST with helping with model testing. Dioptra does this by allowing a user to determine what sorts of attacks would make the model perform less effectively and by quantifying the performance reduction so the user can learn how often and under what circumstances the system would fail.
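The press release does not show Dioptra’s interface, and the sketch below is not Dioptra code. It is a minimal, hypothetical illustration, using scikit-learn, of the kind of measurement described above: flip a fraction of training labels to simulate a crude data-poisoning attack, then quantify how test accuracy degrades as the poisoning rate grows.

    # Illustrative sketch only -- NOT Dioptra's API. It mimics the kind of
    # measurement described above: quantify how a simple label-flipping
    # "poisoning" attack degrades a classifier's test accuracy.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    for poison_rate in (0.0, 0.1, 0.3, 0.5):
        y_poisoned = y_train.copy()
        n_poison = int(poison_rate * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
        # Flip each selected label to a different random class (a crude poisoning attack).
        y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, 10, size=n_poison)) % 10
        model = LogisticRegression(max_iter=2000).fit(X_train, y_poisoned)
        print(f"poison rate {poison_rate:.0%}: test accuracy {model.score(X_test, y_test):.3f}")

Dioptra itself goes well beyond this toy example, but the output illustrates the idea: a curve of performance versus attack strength tells a user how often, and under what conditions, a model would be expected to fail.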

In addition to the two resources released today for the first time, NIST is finalizing three documents first issued as drafts in April:

Mitigating the Risks of Generative AI

The AI RMF Generative AI Profile (NIST AI 600-1) can help organizations identify the unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. The guide is intended to be a companion resource for users of NIST’s AI RMF. It centers on a list of 12 risks and just over 200 actions that developers can take to manage them.

The 12 risks include a lower barrier to entry for cybersecurity attacks, the production of fake news or hate speech and other harmful content, and generative AI systems that confabulate or “hallucinate” results. After describing each risk, the document presents a matrix of actions developers can take to mitigate it, based on the AI RMF.

Reducing threats to data used to train AI systems

The second finalized publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), is designed to be used alongside the Secure Software Development Framework (SP 800-218). While the SSDF is broadly concerned with software coding practices, the companion resource expands the SSDF in part to address a major concern with generative AI systems: they can be compromised by malicious training data that adversely affects the AI system’s performance.

In addition to covering aspects of the training and use of AI systems, this guidance document identifies potential risk factors and strategies to address them. Among other recommendations, it suggests analyzing training data for signs of poisoning, bias, homogeneity, and tampering.
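SP 800-218A does not prescribe specific code, so the following is only a rough sketch, with hypothetical record and function names, of the kinds of lightweight checks that recommendation points toward: scanning a labeled training set for signs of tampering, label skew, and conflicting duplicates.

    # Hypothetical illustration (not taken from SP 800-218A): simple checks for
    # signs of tampering, bias/homogeneity, and crude poisoning in training data.
    import hashlib
    from collections import Counter

    def audit_training_data(records, expected_sha256=None):
        """records: list of (text, label) pairs for a hypothetical text classifier."""
        texts = [text for text, _ in records]
        labels = [label for _, label in records]

        # Tampering/falsification: compare a digest of the data against a recorded value.
        digest = hashlib.sha256("\n".join(texts).encode("utf-8")).hexdigest()
        if expected_sha256 is not None and digest != expected_sha256:
            print("WARNING: dataset digest mismatch -- possible tampering")

        # Bias/homogeneity: flag a heavily skewed label distribution.
        top_share = Counter(labels).most_common(1)[0][1] / len(labels)
        if top_share > 0.9:
            print(f"WARNING: {top_share:.0%} of examples share a single label")

        # Crude poisoning signal: identical inputs carrying conflicting labels.
        seen = {}
        for text, label in records:
            if text in seen and seen[text] != label:
                print(f"WARNING: duplicate example with conflicting labels: {text!r}")
            seen.setdefault(text, label)

        return digest

    audit_training_data([("great product", "pos"), ("great product", "neg"), ("awful", "neg")])

In practice, checks like these would be only one small part of the broader data provenance and review practices the companion resource describes.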

Global Commitment to AI Standards

AI systems are transforming society not only within the United States but around the world. A Plan for Global Engagement on AI Standards (NIST AI 100-5), the third publication finalized today, is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.

This guidance builds on the priorities outlined in NIST’s Plan for Federal Engagement in AI Standards and Related Tools and is linked to the National Standards Strategy for Critical and Emerging Technologies. This publication suggests that a broader range of multidisciplinary stakeholders from many countries be involved in the standards development process.