
Chapter 4: Test-Time Evasion Attacks (Adversarial Inputs)

pp. 76-115

Authors

Pennsylvania State University; University of Illinois, Urbana-Champaign; Pennsylvania State University

Extract

In this chapter we consider attacks that do not alter the machine learning model but instead “fool” the classifier (and any supplementary defense, including human monitoring) into making erroneous decisions. These are known as test-time evasion attacks (TTEs). Beyond representing a security threat, TTEs reveal the non-robustness of existing deep learning systems: one can alter the class decision made by a DNN with small changes to the input, changes that would not alter the (robust) decision-making of a human being performing, for example, visual pattern recognition. TTEs are thus a foil to claims that deep learning currently achieves truly robust pattern recognition, let alone that it is close to true artificial intelligence, and a spur to the machine learning community to devise more robust pattern recognition systems. We survey various TTE attacks, including FGSM, JSMA, and CW. We then survey several types of defenses, including anomaly detection as well as robust classifier training strategies. Experiments are included for anomaly detection defenses based on classical statistical anomaly detection, as well as on a class-conditional generative adversarial network, which effectively learns to discriminate “normal” from adversarial samples without any supervision (no attack examples are used for training).
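The idea behind gradient-based attacks such as FGSM can be illustrated without a deep network. The sketch below (an assumption for illustration, not an example from the chapter) uses a toy linear classifier, for which the sign of the loss gradient with respect to the input is known in closed form, and applies a small L-infinity-bounded perturbation that pushes the input toward the decision boundary:

```python
import numpy as np

# Toy linear classifier f(x) = sign(w . x). For a linear model, the
# FGSM-style step x_adv = x + eps * sign(grad_x loss) reduces, per
# coordinate, to subtracting eps * y * sign(w): each coordinate moves
# in the direction that shrinks the classification margin y * (w . x).
rng = np.random.default_rng(0)
w = rng.normal(size=50)        # fixed classifier weights
x = rng.normal(size=50)        # a "clean" input
y = np.sign(w @ x)             # take the clean prediction as the label

eps = 0.2                      # per-coordinate (L-infinity) attack budget
x_adv = x - eps * y * np.sign(w)

print("clean margin:", y * (w @ x))
print("adv margin:  ", y * (w @ x_adv))
```

The adversarial margin is lower than the clean margin by exactly eps times the L1 norm of w, so even a small per-coordinate budget can flip the decision when the clean margin is modest; for a DNN, the same step is taken using the sign of the backpropagated input gradient.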

Keywords

  • test-time evasion attack
  • adversarial input
  • black box attack
  • white box attack
  • man-in-the-middle
  • attack transferability
  • targeted attack
  • class-conditional generative adversarial network
  • anomaly detection
  • CW attack
