
Chapter 1: Overview of Adversarial Learning

pp. 1-18

Authors

Pennsylvania State University; University of Illinois, Urbana-Champaign; Pennsylvania State University

Extract

In this chapter, we introduce attacks and threats against machine learning. A primary aim of an attack is to cause the learned model to make errors. An attack may target the training dataset (its integrity or privacy), the training process (e.g., deep learning), or the parameters of the deep neural network (DNN) once trained. Alternatively, an attack may target vulnerabilities by discovering test samples that produce erroneous output. The attacks include: (i) test-time evasion attacks (TTEs), which make subtle changes to a test pattern that cause the classifier's decision to change; (ii) data poisoning attacks, which corrupt the training set to degrade the accuracy of the trained model; (iii) backdoor attacks, a special case of data poisoning in which a subtle (backdoor) pattern is embedded into some training samples, with their supervising labels altered, so that the classifier learns to misclassify to a target class whenever the backdoor pattern is present; (iv) reverse-engineering attacks, which query a classifier to learn its decision-making rule; and (v) membership inference attacks, which seek information about the training set from queries to the classifier. Defenses aim to detect attacks and/or to proactively improve the robustness of machine learning. An overview is given of the three main types of attacks (TTEs, data poisoning, and backdoors) investigated in subsequent chapters.
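To make the TTE notion concrete, the sketch below applies a one-step gradient-sign perturbation (in the style of the fast gradient sign method) to a toy logistic-regression classifier. This is an illustrative assumption on our part, not a construction from the chapter; the model, weights, and step size `eps` are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step gradient-sign perturbation of a test sample x.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w, so the attack moves x
    a small step in the sign of that gradient to *increase* the loss.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a test point that is correctly classified as class 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.6, -0.2, 0.4])   # w @ x + b = 1.2 > 0, so class 1
y = 1.0                          # true label

x_adv = fgsm_attack(x, y, w, b, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)      # True: clean sample classified as 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: subtle change flips the decision
```

Note that each coordinate of the input moves by only `eps`, yet the decision flips, which is the defining property of a test-time evasion attack.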

Keywords

  • adversarial input
  • test-time evasion attack
  • backdoor attack or Trojan
  • data poisoning
  • membership inference attack
  • reverse-engineering attack
  • certified training
  • adversarial training
  • white-box attack
  • black-box attack
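The backdoor (data poisoning) attack described in the extract can be sketched as a simple manipulation of the training set: a fraction of samples receive an embedded trigger pattern and have their supervising labels changed to the attacker's target class. The trigger (overwriting one feature), the dataset, and the poisoning fraction below are all assumptions made for illustration, not the chapter's construction.

```python
import numpy as np

def poison(X, y, trigger_idx, trigger_val, target_class, frac, rng):
    """Embed a backdoor trigger in a fraction of samples and relabel them.

    A classifier trained on (Xp, yp) may learn to associate the trigger
    with target_class, misclassifying any triggered test input.
    """
    Xp, yp = X.copy(), y.copy()
    n_poison = int(frac * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp[idx, trigger_idx] = trigger_val   # overwrite one feature as the trigger
    yp[idx] = target_class               # alter the supervising label
    return Xp, yp

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # hypothetical clean training set
y = rng.integers(0, 2, size=100)
Xp, yp = poison(X, y, trigger_idx=7, trigger_val=5.0,
                target_class=1, frac=0.1, rng=rng)
print(int((Xp[:, 7] == 5.0).sum()))      # 10 samples now carry the trigger
```

Because only 10% of the training set is altered, overall accuracy on clean data can remain high, which is what makes backdoor attacks harder to detect than indiscriminate poisoning.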
