
Laplacian networks: bounding indicator function smoothness for neural networks robustness

Published online by Cambridge University Press: 05 February 2021

Carlos Lassance*
Affiliation:
Département Électronique, IMT Atlantique, 655 Avenue du Technopôle, Brest 29280, France
Vincent Gripon
Affiliation:
Département Électronique, IMT Atlantique, 655 Avenue du Technopôle, Brest 29280, France
Antonio Ortega
Affiliation:
Department of Electrical and Computer Engineering, Signal and Image Processing Institute, University of Southern California, 3740 McClintock Ave., EEB 436, Los Angeles, CA 90089-2564, USA
*Corresponding author: Carlos Lassance. Email: cadurosar@gmail.com

Abstract

For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training on noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers of the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised-learning vision datasets for various types of perturbations. We also show that it can be combined with existing methods to increase overall robustness.
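To make the idea concrete, here is a minimal PyTorch sketch of how such a regularizer can be computed on a mini-batch. The graph construction (a dense cosine-similarity graph with no sparsification) and the names `label_smoothness`, `laplacian_regularizer`, and the weight `gamma` are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def label_smoothness(features, labels_onehot):
    """Smoothness tr(Y^T L Y) of the one-hot label signals Y on a
    cosine-similarity graph built from one layer's representations."""
    # Cosine similarity between all pairs of examples in the batch.
    z = F.normalize(features.flatten(start_dim=1), dim=1)
    W = z @ z.t()
    # Remove self-loops (they do not affect the quadratic form anyway).
    W = W * (1.0 - torch.eye(W.size(0), device=W.device))
    # Combinatorial Laplacian L = D - W.
    L = torch.diag(W.sum(dim=1)) - W
    # tr(Y^T L Y) = sum_{i,j,c} Y[i,c] * L[i,j] * Y[j,c].
    return torch.einsum('ic,ij,jc->', labels_onehot, L, labels_onehot)

def laplacian_regularizer(per_layer_features, labels_onehot):
    """Penalize changes in label-signal smoothness between consecutive layers."""
    s = [label_smoothness(f, labels_onehot) for f in per_layer_features]
    return sum((s[i + 1] - s[i]).abs() for i in range(len(s) - 1))
```

In training, this term would typically be added to the task loss, e.g. `loss = F.cross_entropy(logits, targets) + gamma * laplacian_regularizer(feats, Y)`, where `gamma` is a hypothetical trade-off hyperparameter to be tuned.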

Information

Type
Original Paper
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press.

Fig. 1. Illustration of the effect of our proposed regularizer. In this example, the goal is to classify circles and crosses (top). Without a regularizer (bottom left), the resulting embedding may considerably stretch the boundary regions, with the risk of sharp transitions in the network function (corresponding to a large value of $\alpha$ in equation (2)). Another possible issue is that inputs are pushed closer to the boundary (bottom center), reducing the margin (corresponding to a small value of $r$ in equation (2)). By forcing small variations in the smoothness of label signals (bottom right), we ensure that the topology is not dramatically changed in the boundary regions.
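For context, the smoothness of a label signal $\boldsymbol {y}$ on a similarity graph with adjacency matrix $W$ and Laplacian $L = D - W$ is the standard Laplacian quadratic form (a textbook definition, given here for reference; the paper's equation (2), which defines $\alpha$ and $r$, is not reproduced):

$$ s(\boldsymbol {y}) = \boldsymbol {y}^{\top } L \boldsymbol {y} = \frac {1}{2} \sum _{i,j} W_{ij} (y_i - y_j)^2. $$

A small $s(\boldsymbol {y})$ means that strongly connected examples carry similar labels, i.e. class boundaries only cross weak edges of the similarity graph.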


Fig. 2. Estimations of $\alpha _{\min }(r)$ obtained for different radii $r$ over training examples. The proposed regularizer allows for smaller $\alpha$ values when $r$ increases.


Table 1. Network mREI (mean relative error increase) under different types of perturbations


Fig. 3. Illustration of the 15 perturbations from [7]. Best viewed in color.


Fig. 4. Robustness against an adversary, measured by the test set accuracy under the FGSM attack (left and center plots) and by the mean $\mathcal {L}_2$ pixel distance needed to fool the network using DeepFool (right plot).


Table 2. Median test set accuracy on the CIFAR-10 dataset against the PGD attack


Table 3. Comparison of CIFAR-10 test set accuracy under the black-box FGSM attack


Fig. 5. CIFAR-10 test set accuracy under different types of implementation-related noise.


Fig. 6. Test set accuracy under Gaussian noise with varying SNRs.


Fig. 7. Robustness against an adversary, measured by the test set accuracy under the FGSM attack (left and center plots) and by the mean $\mathcal {L}_2$ pixel distance needed to fool the network using DeepFool (right plot).


Fig. 8. Test set accuracy under different types of implementation-related noise.


Table 4. Test set accuracy results on the CIFAR-10 dataset with PGD training


Table 5. Test set accuracy results on the CIFAR-100 dataset


Table 6. Test set accuracy results on the Imagenet32x32 dataset


Fig. A1. Depiction of the studied network.


Table A1. Network error under different types of perturbations


Table A2. Comparison of the total training time for each method