NIST Trustworthy and Responsible AI
NIST AI 100-2e2023
Adversarial Machine Learning
A Taxonomy and Terminology of Attacks and Mitigations
Apostol Vassilev
Computer Security Division
Information Technology Laboratory
Alina Oprea
Northeastern University
Alie Fordyce
Hyrum Anderson
Robust Intelligence, Inc.
This publication is available free of charge from:
https://doi.org/10.6028/NIST.AI.100-2e2023
January 2024
U.S. Department of Commerce
Gina M. Raimondo, Secretary
National Institute of Standards and Technology
Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology
Certain commercial equipment, instruments, software, or materials, commercial or non-commercial, are
identified in this paper in order to specify the experimental procedure adequately. Such identification does
not imply recommendation or endorsement of any product or service by NIST, nor does it imply that the
materials or equipment identified are necessarily the best available for the purpose.
NIST Technical Series Policies
Copyright, Use, and Licensing Statements
NIST Technical Series Publication Identifier Syntax
Publication History
Approved by the NIST Editorial Review Board on 2024-01-02
How to cite this NIST Technical Series Publication:
Vassilev A, Oprea A, Fordyce A, Anderson H (2024) Adversarial Machine Learning: A Taxonomy and
Terminology of Attacks and Mitigations. (National Institute of Standards and Technology, Gaithersburg,
MD) NIST Artificial Intelligence (AI) Report, NIST Trustworthy and Responsible AI NIST AI 100-2e2023.
https://doi.org/10.6028/NIST.AI.100-2e2023
NIST Author ORCID iDs
Apostol Vassilev: 0000-0002-4979-5292
Alina Oprea: 0000-0002-9081-3042
Submit Comments
All comments are subject to release under the Freedom of Information Act (FOIA).
Abstract
This NIST Trustworthy and Responsible AI report develops a taxonomy of concepts and defines
terminology in the field of adversarial machine learning (AML). The taxonomy is built on surveying the
AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods and
lifecycle stages of attack, attacker goals and objectives, and attacker capabilities and knowledge of the
learning process. The report also provides corresponding methods for mitigating and managing the
consequences of attacks and points out relevant open challenges to take into account in the lifecycle of
AI systems. The terminology used in the report is consistent with the literature on AML and is
complemented by a glossary that defines key terms associated with the security of AI systems and is
intended to assist non-expert readers. Taken together, the taxonomy and terminology are meant to
inform other standards and future practice guides for assessing and managing the security of AI systems,
by establishing a common language and understanding of the rapidly developing AML landscape.
Keywords
artificial intelligence; machine learning; attack taxonomy; evasion; data poisoning; privacy breach;
attack mitigation; data modality; trojan attack; backdoor attack; generative models; large language
model; chatbot.
NIST Trustworthy and Responsible AI Reports
The National Institute of Standards and Technology (NIST) promotes U.S. innovation and industrial
competitiveness by advancing measurement science, standards, and technology in ways that enhance
economic security and improve our quality of life. Among its broad range of activities, NIST contributes
to the research, standards, evaluations, and data required to advance the development, use, and
assurance of trustworthy artificial intelligence (AI).
Table of Contents
Audience
Background
Trademark Information
How to read this document
Executive Summary
1. Introduction
2. Predictive AI Taxonomy
   2.1. Attack Classification
        2.1.1. Stages of Learning
        2.1.2. Attacker Goals and Objectives
        2.1.3. Attacker Capabilities
        2.1.4. Attacker Knowledge
        2.1.5. Data Modality
   2.2. Evasion Attacks and Mitigations
        2.2.1. White-Box Evasion Attacks
        2.2.2. Black-Box Evasion Attacks
        2.2.3. Transferability of Attacks
        2.2.4. Mitigations
   2.3. Poisoning Attacks and Mitigations
        2.3.1. Availability Poisoning
        2.3.2. Targeted Poisoning
        2.3.3. Backdoor Poisoning
        2.3.4. Model Poisoning
   2.4. Privacy Attacks
        2.4.1. Data Reconstruction
        2.4.2. Membership Inference
        2.4.3. Model Extraction
        2.4.4. Property Inference
        2.4.5. Mitigations
3. Generative AI Taxonomy
Avez-vous trouvé des erreurs dans linterface ou les textes ? Ou savez-vous comment améliorer linterface utilisateur de StudyLib ? Nhésitez pas à envoyer vos suggestions. Cest très important pour nous !