art.attacks.evasion

Module providing evasion attacks under a common interface.

Adversarial Patch

class art.attacks.evasion.AdversarialPatch(classifier: Union[art.estimators.classification.classifier.ClassifierNeuralNetwork, art.estimators.classification.classifier.ClassifierGradients], rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float = 1.0, learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16, patch_shape: Optional[Tuple[int, int, int]] = None)

Implementation of the adversarial patch attack.

__init__(classifier: Union[art.estimators.classification.classifier.ClassifierNeuralNetwork, art.estimators.classification.classifier.ClassifierGradients], rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float = 1.0, learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16, patch_shape: Optional[Tuple[int, int, int]] = None)

Create an instance of the AdversarialPatch.

Parameters
  • classifier – A trained classifier.

  • rotation_max (float) – The maximum rotation applied to random patches. The value is expected to be in the range [0, 180].

  • scale_min (float) – The minimum scaling applied to random patches. The value should be in the range [0, 1], but less than scale_max.

  • scale_max (float) – The maximum scaling applied to random patches. The value should be in the range [0, 1], but larger than scale_min.

  • learning_rate (float) – The learning rate of the optimization.

  • max_iter (int) – The number of optimization steps.

  • batch_size (int) – The size of the training batch.

  • patch_shape – The shape of the adversarial patch as a tuple of shape (width, height, nb_channels). Currently only supported for TensorFlowV2Classifier. For classifiers of other frameworks the patch_shape is set to the shape of the image samples.

apply_patch(x: numpy.ndarray, scale: float, patch_external: Optional[numpy.ndarray] = None) → numpy.ndarray

A function to apply the learned adversarial patch to images.

Return type

ndarray

Parameters
  • x (ndarray) – Instances to which the randomly transformed patch will be applied.

  • scale (float) – Scale of the applied patch in relation to the classifier input shape.

  • patch_external – External patch to apply to images x.

Returns

The patched instances.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs. x is expected to have spatial dimensions.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial patch.

set_params(**kwargs) → None

Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes.

Parameters

kwargs – A dictionary of attack-specific parameters.

Adversarial Patch - Numpy

class art.attacks.evasion.AdversarialPatchNumpy(classifier: Union[art.estimators.classification.classifier.ClassifierNeuralNetwork, art.estimators.classification.classifier.ClassifierGradients], target: int = 0, rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float = 1.0, learning_rate: float = 5.0, max_iter: int = 500, clip_patch: Optional[Union[list, tuple]] = None, batch_size: int = 16)

Implementation of the adversarial patch attack.

__init__(classifier: Union[art.estimators.classification.classifier.ClassifierNeuralNetwork, art.estimators.classification.classifier.ClassifierGradients], target: int = 0, rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float = 1.0, learning_rate: float = 5.0, max_iter: int = 500, clip_patch: Optional[Union[list, tuple]] = None, batch_size: int = 16) → None

Create an instance of the AdversarialPatchNumpy.

Parameters
  • classifier – A trained classifier.

  • target (int) – The target label for the created patch.

  • rotation_max (float) – The maximum rotation applied to random patches. The value is expected to be in the range [0, 180].

  • scale_min (float) – The minimum scaling applied to random patches. The value should be in the range [0, 1], but less than scale_max.

  • scale_max (float) – The maximum scaling applied to random patches. The value should be in the range [0, 1], but larger than scale_min.

  • learning_rate (float) – The learning rate of the optimization.

  • max_iter (int) – The number of optimization steps.

  • clip_patch – The minimum and maximum values for each channel in the form [(float, float), (float, float), (float, float)].

  • batch_size (int) – The size of the training batch.

apply_patch(x: numpy.ndarray, scale: float, patch_external: Optional[numpy.ndarray] = None) → numpy.ndarray

A function to apply the learned adversarial patch to images.

Return type

ndarray

Parameters
  • x (ndarray) – Instances to which the randomly transformed patch will be applied.

  • scale (float) – Scale of the applied patch in relation to the classifier input shape.

  • patch_external (ndarray) – External patch to apply to images x.

Returns

The patched instances.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs. x is expected to have spatial dimensions.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial patch.

Adversarial Patch - TensorFlowV2

class art.attacks.evasion.AdversarialPatchTensorFlowV2(classifier: Union[art.estimators.classification.classifier.ClassifierNeuralNetwork, art.estimators.classification.classifier.ClassifierGradients], rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float = 1.0, learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16, patch_shape: Optional[Tuple[int, int, int]] = None)

Implementation of the adversarial patch attack.

__init__(classifier: Union[art.estimators.classification.classifier.ClassifierNeuralNetwork, art.estimators.classification.classifier.ClassifierGradients], rotation_max: float = 22.5, scale_min: float = 0.1, scale_max: float = 1.0, learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16, patch_shape: Optional[Tuple[int, int, int]] = None)

Create an instance of the AdversarialPatchTensorFlowV2.

Parameters
  • classifier – A trained classifier.

  • rotation_max (float) – The maximum rotation applied to random patches. The value is expected to be in the range [0, 180].

  • scale_min (float) – The minimum scaling applied to random patches. The value should be in the range [0, 1], but less than scale_max.

  • scale_max (float) – The maximum scaling applied to random patches. The value should be in the range [0, 1], but larger than scale_min.

  • learning_rate (float) – The learning rate of the optimization.

  • max_iter (int) – The number of optimization steps.

  • batch_size (int) – The size of the training batch.

  • patch_shape – The shape of the adversarial patch as a tuple of shape (width, height, nb_channels). Currently only supported for TensorFlowV2Classifier. For classifiers of other frameworks the patch_shape is set to the shape of the image samples.

apply_patch(x: numpy.ndarray, scale: float, patch_external: Optional[numpy.ndarray] = None) → numpy.ndarray

A function to apply the learned adversarial patch to images.

Return type

ndarray

Parameters
  • x (ndarray) – Instances to which the randomly transformed patch will be applied.

  • scale (float) – Scale of the applied patch in relation to the classifier input shape.

  • patch_external – External patch to apply to images x.

Returns

The patched samples.

reset_patch(initial_patch_value: numpy.ndarray) → None

Reset the adversarial patch.

Parameters

initial_patch_value (ndarray) – Patch value to use for resetting the patch.

Auto Attack

class art.attacks.evasion.AutoAttack(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: Union[int, float] = inf, eps: float = 0.3, eps_step: float = 0.1, attacks: Optional[List[art.attacks.attack.EvasionAttack]] = None, batch_size: int = 32, estimator_orig: Optional[art.estimators.estimator.BaseEstimator] = None)
__init__(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: Union[int, float] = inf, eps: float = 0.3, eps_step: float = 0.1, attacks: Optional[List[art.attacks.attack.EvasionAttack]] = None, batch_size: int = 32, estimator_orig: Optional[art.estimators.estimator.BaseEstimator] = None)

Create an AutoAttack instance.

Parameters
  • estimator (ClassifierGradients) – A trained estimator.

  • norm – The norm of the adversarial perturbation. Possible values: np.inf, 1 or 2.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • attacks – The list of art.attacks.EvasionAttack attacks to be used for AutoAttack. If it is None the original AutoAttack (PGD, APGD-ce, APGD-dlr, FAB, Square) will be used.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

  • estimator_orig – Original estimator to be attacked by adversarial examples.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

Returns

An array holding the adversarial examples.

Auto Projected Gradient Descent (Auto-PGD)

class art.attacks.evasion.AutoProjectedGradientDescent(estimator: art.estimators.estimator.BaseEstimator, norm: Union[float, int] = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, nb_random_init: int = 5, batch_size: int = 32, loss_type: Optional[str] = None)
__init__(estimator: art.estimators.estimator.BaseEstimator, norm: Union[float, int] = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, nb_random_init: int = 5, batch_size: int = 32, loss_type: Optional[str] = None)

Create an AutoProjectedGradientDescent instance.

Parameters
  • estimator (BaseEstimator) – A trained estimator.

  • norm – The norm of the adversarial perturbation. Possible values: np.inf, 1 or 2.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • max_iter (int) – The maximum number of iterations.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • nb_random_init (int) – Number of random initialisations within the epsilon ball. With nb_random_init=0 the attack starts at the original input.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

  • loss_type – Defines the loss to attack. Available options: None (default), “cross_entropy”, or “difference_logits_ratio”.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

  • mask (np.ndarray) – An array with a mask to be applied to the adversarial perturbations. Shape needs to be broadcastable to the shape of x. Any features for which the mask is zero will not be adversarially perturbed.

Returns

An array holding the adversarial examples.

Decision-Based Attack / Boundary Attack

class art.attacks.evasion.BoundaryAttack(estimator: art.estimators.classification.classifier.Classifier, targeted: bool = True, delta: float = 0.01, epsilon: float = 0.01, step_adapt: float = 0.667, max_iter: int = 5000, num_trial: int = 25, sample_size: int = 20, init_size: int = 100)

Implementation of the boundary attack from Brendel et al. (2018). This is a powerful black-box attack that only requires the final class prediction.

__init__(estimator: art.estimators.classification.classifier.Classifier, targeted: bool = True, delta: float = 0.01, epsilon: float = 0.01, step_adapt: float = 0.667, max_iter: int = 5000, num_trial: int = 25, sample_size: int = 20, init_size: int = 100) → None

Create a boundary attack instance.

Parameters
  • estimator (Classifier) – A trained classifier.

  • targeted (bool) – Should the attack target one specific class.

  • delta (float) – Initial step size for the orthogonal step.

  • epsilon (float) – Initial step size for the step towards the target.

  • step_adapt (float) – Factor by which the step sizes are multiplied or divided, must be in the range (0, 1).

  • max_iter (int) – Maximum number of iterations.

  • num_trial (int) – Maximum number of trials per iteration.

  • sample_size (int) – Number of samples per trial.

  • init_size (int) – Maximum number of trials for initial generation of adversarial examples.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). If self.targeted is true, then y represents the target labels.

  • x_adv_init (np.ndarray) – Initial array to act as initial adversarial examples. Same shape as x.

Returns

An array holding the adversarial examples.

Carlini and Wagner L_2 Attack

class art.attacks.evasion.CarliniL2Method(classifier: art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, binary_search_steps: int = 10, max_iter: int = 10, initial_const: float = 0.01, max_halving: int = 5, max_doubling: int = 5, batch_size: int = 1, verbose: bool = True)

The L_2 optimized attack of Carlini and Wagner (2016). This attack is among the most effective and should be used as one of the primary attacks to evaluate potential defences. A major difference with respect to the original implementation (https://github.com/carlini/nn_robust_attacks) is that we use line search in the optimization of the attack objective.

__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, binary_search_steps: int = 10, max_iter: int = 10, initial_const: float = 0.01, max_halving: int = 5, max_doubling: int = 5, batch_size: int = 1, verbose: bool = True) → None

Create a Carlini L_2 attack instance.

Parameters
  • classifier (ClassifierGradients) – A trained classifier.

  • confidence (float) – Confidence of adversarial examples: a higher value produces examples that are farther away from the original input but classified with higher confidence as the target class.

  • targeted (bool) – Should the attack target one specific class.

  • learning_rate (float) – The initial learning rate for the attack algorithm. Smaller values produce better results but are slower to converge.

  • binary_search_steps (int) – Number of times to adjust constant with binary search (positive value). If binary_search_steps is large, then the algorithm is not very sensitive to the value of initial_const. Note that the values gamma=0.999999 and c_upper=10e10 are hardcoded with the same values used by the authors of the method.

  • max_iter (int) – The maximum number of iterations.

  • initial_const (float) – The initial trade-off constant c to use to tune the relative importance of distance and confidence. If binary_search_steps is large, the initial constant is not important, as discussed in Carlini and Wagner (2016).

  • max_halving (int) – Maximum number of halving steps in the line search optimization.

  • max_doubling (int) – Maximum number of doubling steps in the line search optimization.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

  • verbose (bool) – Indicates whether to print verbose messages.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). If self.targeted is true, then y represents the target labels. Otherwise, the targets are the original class labels.

Returns

An array holding the adversarial examples.

Carlini and Wagner L_inf Attack

class art.attacks.evasion.CarliniLInfMethod(classifier: art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, max_iter: int = 10, max_halving: int = 5, max_doubling: int = 5, eps: float = 0.3, batch_size: int = 128, verbose: bool = True)

This is a modified version of the L_2 optimized attack of Carlini and Wagner (2016). It controls the L_Inf norm, i.e. the maximum perturbation applied to each pixel.

__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, max_iter: int = 10, max_halving: int = 5, max_doubling: int = 5, eps: float = 0.3, batch_size: int = 128, verbose: bool = True) → None

Create a Carlini L_Inf attack instance.

Parameters
  • classifier (ClassifierGradients) – A trained classifier.

  • confidence (float) – Confidence of adversarial examples: a higher value produces examples that are farther away from the original input but classified with higher confidence as the target class.

  • targeted (bool) – Should the attack target one specific class.

  • learning_rate (float) – The initial learning rate for the attack algorithm. Smaller values produce better results but are slower to converge.

  • max_iter (int) – The maximum number of iterations.

  • max_halving (int) – Maximum number of halving steps in the line search optimization.

  • max_doubling (int) – Maximum number of doubling steps in the line search optimization.

  • eps (float) – An upper bound for the L_inf norm of the adversarial perturbation.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

  • verbose (bool) – Indicates whether to print verbose messages.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). If self.targeted is true, then y_val represents the target labels. Otherwise, the targets are the original class labels.

Returns

An array holding the adversarial examples.

Decision Tree Attack

class art.attacks.evasion.DecisionTreeAttack(classifier: art.estimators.classification.scikitlearn.ScikitlearnDecisionTreeClassifier, offset: float = 0.001)

Close implementation of Papernot’s attack on decision trees following Algorithm 2 and communication with the authors.

__init__(classifier: art.estimators.classification.scikitlearn.ScikitlearnDecisionTreeClassifier, offset: float = 0.001) → None
Parameters
  • classifier (ScikitlearnDecisionTreeClassifier) – A trained model of type scikit decision tree.

  • offset (float) – How far the value is pushed away from the tree’s threshold.

generate(*args, **kwargs)

Generate adversarial examples and return them as an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

Returns

An array holding the adversarial examples.

DeepFool

class art.attacks.evasion.DeepFool(classifier: art.estimators.classification.classifier.ClassifierGradients, max_iter: int = 100, epsilon: float = 1e-06, nb_grads: int = 10, batch_size: int = 1, verbose: bool = True)

Implementation of the attack from Moosavi-Dezfooli et al. (2015).

__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, max_iter: int = 100, epsilon: float = 1e-06, nb_grads: int = 10, batch_size: int = 1, verbose: bool = True) → None

Create a DeepFool attack instance.

Parameters
  • classifier (ClassifierGradients) – A trained classifier.

  • max_iter (int) – The maximum number of iterations.

  • epsilon (float) – Overshoot parameter.

  • nb_grads (int) – The number of class gradients (top nb_grads w.r.t. prediction) to compute. This way only the most likely classes are considered, speeding up the computation.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

  • verbose (bool) – Indicates whether to print verbose messages.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial examples.

DPatch

class art.attacks.evasion.DPatch(estimator: art.estimators.object_detection.object_detector.ObjectDetectorMixin, patch_shape: Tuple[int, int, int] = (40, 40, 3), learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16)

Implementation of the DPatch attack.

__init__(estimator: art.estimators.object_detection.object_detector.ObjectDetectorMixin, patch_shape: Tuple[int, int, int] = (40, 40, 3), learning_rate: float = 5.0, max_iter: int = 500, batch_size: int = 16)

Create an instance of the DPatch.

Parameters
  • estimator (ObjectDetectorMixin) – A trained object detector.

  • patch_shape (Tuple) – The shape of the adversarial patch as a tuple of shape (height, width, nb_channels).

  • learning_rate (float) – The learning rate of the optimization.

  • max_iter (int) – The number of optimization steps.

  • batch_size (int) – The size of the training batch.

apply_patch(x: numpy.ndarray, patch_external: Optional[numpy.ndarray] = None, random_location: bool = False) → numpy.ndarray

Apply the adversarial patch to images.

Return type

ndarray

Parameters
  • x (ndarray) – Images to be patched.

  • patch_external – External patch to apply to images x. If None, the attack’s own patch will be applied.

  • random_location (bool) – True if patch location should be random.

Returns

The patched images.

generate(*args, **kwargs)

Generate DPatch.

Parameters
  • x – Sample images.

  • y – Target labels for object detector.

Returns

Adversarial patch.

Elastic Net Attack

class art.attacks.evasion.ElasticNet(classifier: art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, binary_search_steps: int = 9, max_iter: int = 100, beta: float = 0.001, initial_const: float = 0.001, batch_size: int = 1, decision_rule: str = 'EN')

The elastic net attack of Pin-Yu Chen et al. (2018).

__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, binary_search_steps: int = 9, max_iter: int = 100, beta: float = 0.001, initial_const: float = 0.001, batch_size: int = 1, decision_rule: str = 'EN') → None

Create an ElasticNet attack instance.

Parameters
  • classifier (ClassifierGradients) – A trained classifier.

  • confidence (float) – Confidence of adversarial examples: a higher value produces examples that are farther away from the original input but classified with higher confidence as the target class.

  • targeted (bool) – Should the attack target one specific class.

  • learning_rate (float) – The initial learning rate for the attack algorithm. Smaller values produce better results but are slower to converge.

  • binary_search_steps (int) – Number of times to adjust constant with binary search (positive value).

  • max_iter (int) – The maximum number of iterations.

  • beta (float) – Hyperparameter trading off L2 minimization for L1 minimization.

  • initial_const (float) – The initial trade-off constant c to use to tune the relative importance of distance and confidence. If binary_search_steps is large, the initial constant is not important, as discussed in Carlini and Wagner (2016).

  • batch_size (int) – Internal size of batches on which adversarial samples are generated.

  • decision_rule (str) – Decision rule. ‘EN’ means Elastic Net rule, ‘L1’ means L1 rule, ‘L2’ means L2 rule.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). If self.targeted is true, then y represents the target labels. Otherwise, the targets are the original class labels.

Returns

An array holding the adversarial examples.

Fast Gradient Method (FGM)

class art.attacks.evasion.FastGradientMethod(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, minimal: bool = False)

This attack was originally implemented by Goodfellow et al. (2015) with the infinity norm (and is known as the “Fast Gradient Sign Method”). This implementation extends the attack to other norms, and is therefore called the Fast Gradient Method.

__init__(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, minimal: bool = False) → None

Create a FastGradientMethod instance.

Parameters
  • estimator (ClassifierGradients) – A trained classifier.

  • norm (int) – The norm of the adversarial perturbation. Possible values: np.inf, 1 or 2.

  • eps (float) – Attack step size (input variation).

  • eps_step (float) – Step size of input variation for minimal perturbation computation.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False)

  • num_random_init (int) – Number of random initialisations within the epsilon ball. With num_random_init=0 the attack starts at the original input.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

  • minimal (bool) – Indicates if computing the minimal perturbation (True). If True, also define eps_step for the step size and eps for the maximum perturbation.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

  • mask (np.ndarray) – An array with a mask to be applied to the adversarial perturbations. Shape needs to be broadcastable to the shape of x. Any features for which the mask is zero will not be adversarially perturbed.

Returns

An array holding the adversarial examples.

Feature Adversaries

class art.attacks.evasion.FeatureAdversaries(classifier: art.estimators.classification.classifier.ClassifierNeuralNetwork, delta: Optional[float] = None, layer: Optional[int] = None, batch_size: int = 32)

This class represents a Feature Adversaries evasion attack.

__init__(classifier: art.estimators.classification.classifier.ClassifierNeuralNetwork, delta: Optional[float] = None, layer: Optional[int] = None, batch_size: int = 32)

Create a FeatureAdversaries instance.

Parameters
  • classifier (Classifier) – A trained classifier.

  • delta – The maximum deviation between source and guide images.

  • layer – Index of the representation layer.

  • batch_size (int) – Batch size.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – Source samples.

  • y – Guide samples.

  • kwargs

    The kwargs are used as options for the minimisation with scipy.optimize.minimize using method=”L-BFGS-B”. Valid options are based on the output of scipy.optimize.show_options(solver=’minimize’, method=’L-BFGS-B’): Minimize a scalar function of one or more variables using the L-BFGS-B algorithm.

    disp : None or int

    If disp is None (the default), then the supplied version of iprint is used. If disp is not None, then it overrides the supplied version of iprint with the behaviour you outlined.

    maxcor : int

    The maximum number of variable metric corrections used to define the limited-memory matrix. (The limited-memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it.)

    ftol : float

    The iteration stops when (f^k - f^{k+1}) / max{|f^k|, |f^{k+1}|, 1} <= ftol.

    gtol : float

    The iteration will stop when max{|proj g_i| : i = 1, ..., n} <= gtol, where proj g_i is the i-th component of the projected gradient.

    eps : float

    Step size used for numerical approximation of the Jacobian.

    maxfun : int

    Maximum number of function evaluations.

    maxiter : int

    Maximum number of iterations.

    iprint : int, optional

    Controls the frequency of output. iprint < 0 means no output; iprint = 0 prints only one line at the last iteration; 0 < iprint < 99 also prints f and |proj g| every iprint iterations; iprint = 99 prints details of every iteration except n-vectors; iprint = 100 also prints the changes of active set and final x; iprint > 100 prints details of every iteration including x and g.

    callback : callable, optional

    Called after each iteration, as callback(xk), where xk is the current parameter vector.

    maxls : int, optional

    Maximum number of line search steps (per iteration). Default is 20.

    The option ftol is exposed via the scipy.optimize.minimize interface, but calling scipy.optimize.fmin_l_bfgs_b directly exposes factr. The relationship between the two is ftol = factr * numpy.finfo(float).eps. I.e., factr multiplies the default machine floating-point precision to arrive at ftol.

Returns

Adversarial examples.

Raises

KeyError – The argument {} in kwargs is not allowed as option for scipy.optimize.minimize using method=”L-BFGS-B”.

Frame Saliency Attack

class art.attacks.evasion.FrameSaliencyAttack(classifier: art.estimators.classification.classifier.Classifier, attacker: art.attacks.attack.EvasionAttack, method: str = 'iterative_saliency', frame_index: int = 1, batch_size: int = 1)

Implementation of the attack framework proposed by Inkawhich et al. (2018). It prioritizes which frame of a sequential input to perturb adversarially, based on the saliency score of each frame.

__init__(classifier: art.estimators.classification.classifier.Classifier, attacker: art.attacks.attack.EvasionAttack, method: str = 'iterative_saliency', frame_index: int = 1, batch_size: int = 1)
Parameters
  • classifier (Classifier) – A trained classifier.

  • attacker (EvasionAttack) – An adversarial evasion attacker which supports masking. Currently supported: ProjectedGradientDescent, BasicIterativeMethod, FastGradientMethod.

  • method (str) – Specifies which method to use: “iterative_saliency” (adds perturbation iteratively to frame with highest saliency score until attack is successful), “iterative_saliency_refresh” (updates perturbation after each iteration), “one_shot” (adds all perturbations at once, i.e. defaults to original attack).

  • frame_index (int) – Index of the axis in input (feature) array x representing the frame dimension.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial examples.
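A plausible reading of the per-frame saliency score (a sketch, not ART's exact implementation) is the L1 mass of the loss gradient within each frame; the highest-scoring frame is perturbed first:

```python
import numpy as np

def frame_saliency_scores(gradients: np.ndarray, frame_index: int = 1) -> np.ndarray:
    """Score each frame by the L1 mass of the loss gradient it contains.

    gradients: loss gradients w.r.t. the input, e.g. shape (batch, frames, h, w).
    Returns scores of shape (batch, nb_frames); higher means more salient.
    """
    grads = np.swapaxes(gradients, 1, frame_index)  # move the frame axis to position 1
    return np.abs(grads).reshape(grads.shape[0], grads.shape[1], -1).sum(axis=2)

# Example: only frame 2 carries gradient mass, so it would be perturbed first
g = np.zeros((1, 4, 8, 8))
g[0, 2] = 0.5
scores = frame_saliency_scores(g, frame_index=1)
print(int(scores.argmax(axis=1)[0]))  # 2
```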

HopSkipJump Attack

class art.attacks.evasion.HopSkipJump(classifier: Classifier, targeted: bool = False, norm: int = 2, max_iter: int = 50, max_eval: int = 10000, init_eval: int = 100, init_size: int = 100)

Implementation of the HopSkipJump attack from Chen et al. (2019). This is a powerful black-box attack that only requires the final class prediction, and is an advanced version of the boundary attack.

__init__(classifier: Classifier, targeted: bool = False, norm: int = 2, max_iter: int = 50, max_eval: int = 10000, init_eval: int = 100, init_size: int = 100) → None

Create a HopSkipJump attack instance.

Parameters
  • classifier – A trained classifier.

  • targeted (bool) – Should the attack target one specific class.

  • norm (int) – Order of the norm. Possible values: np.inf or 2.

  • max_iter (int) – Maximum number of iterations.

  • max_eval (int) – Maximum number of evaluations for estimating gradient.

  • init_eval (int) – Initial number of evaluations for estimating gradient.

  • init_size (int) – Maximum number of trials for initial generation of adversarial examples.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

  • x_adv_init (np.ndarray) – Initial array to act as initial adversarial examples. Same shape as x.

  • resume (bool) – Allow users to continue their previous attack.

Returns

An array holding the adversarial examples.
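The core geometric subroutine of boundary-style attacks such as HopSkipJump can be sketched with a hard-label predictor: a binary search along the segment between the clean input and an adversarial one shrinks the perturbation while preserving misclassification. The `predict` function below is a toy stand-in, not an ART estimator:

```python
import numpy as np

def project_to_boundary(predict, x_orig, x_adv, tol=1e-6):
    """Binary search on the segment between a clean input and an adversarial
    one: keep the hard-label prediction adversarial while shrinking the
    perturbation toward the decision boundary."""
    lo, hi = 0.0, 1.0  # interpolation coefficient toward x_adv
    adv_label = predict(x_adv)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if predict((1 - mid) * x_orig + mid * x_adv) == adv_label:
            hi = mid  # still adversarial: move closer to x_orig
        else:
            lo = mid
    return (1 - hi) * x_orig + hi * x_adv

# Toy 1-D "classifier" whose label flips at x = 0.37
predict = lambda x: int(x > 0.37)
x_b = project_to_boundary(predict, np.array(0.0), np.array(1.0))
print(round(float(x_b), 3))  # 0.37
```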

High Confidence Low Uncertainty Attack

class art.attacks.evasion.HighConfidenceLowUncertainty(classifier: art.estimators.classification.GPy.GPyGaussianProcessClassifier, conf: float = 0.95, unc_increase: float = 100.0, min_val: float = 0.0, max_val: float = 1.0)

Implementation of the High-Confidence-Low-Uncertainty (HCLU) adversarial example formulation by Grosse et al. (2018).

__init__(classifier: art.estimators.classification.GPy.GPyGaussianProcessClassifier, conf: float = 0.95, unc_increase: float = 100.0, min_val: float = 0.0, max_val: float = 1.0) → None
Parameters
  • classifier (GPyGaussianProcessClassifier) – A trained model of type GPYGaussianProcessClassifier.

  • conf (float) – Confidence that the adversarial examples should have when classified, where 1.0 is the maximum.

  • unc_increase (float) – Factor by which the uncertainty is allowed to increase, where 1.0 is the original value.

  • min_val (float) – Minimal value any feature can take.

  • max_val (float) – Maximal value any feature can take.

generate(*args, **kwargs)

Generate adversarial examples and return them as an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

Returns

An array holding the adversarial examples.

Basic Iterative Method (BIM)

class art.attacks.evasion.BasicIterativeMethod(estimator: ClassifierGradients, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, batch_size: int = 32)

The Basic Iterative Method is the iterative version of FGM and FGSM.

__init__(estimator: ClassifierGradients, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, batch_size: int = 32) → None

Create a BasicIterativeMethod instance.

Parameters
  • estimator – A trained classifier.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • max_iter (int) – The maximum number of iterations.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

Projected Gradient Descent (PGD)

class art.attacks.evasion.ProjectedGradientDescent(estimator, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, random_eps: bool = False)

The Projected Gradient Descent attack is an iterative method in which, after each iteration, the perturbation is projected on an lp-ball of specified radius (in addition to clipping the values of the adversarial sample so that it lies in the permitted data range). This is the attack proposed by Madry et al. for adversarial training.

__init__(estimator, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, random_eps: bool = False)

Create a ProjectedGradientDescent instance.

Parameters
  • estimator – A trained estimator.

  • norm (int) – The norm of the adversarial perturbation supporting np.inf, 1 or 2.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • random_eps (bool) – When True, epsilon is drawn randomly from a truncated normal distribution. The literature suggests this for FGSM-based training to generalize across different epsilons. eps_step is modified to preserve the ratio of eps / eps_step. The effectiveness of this method with PGD is untested (https://arxiv.org/pdf/1611.01236.pdf).

  • max_iter (int) – The maximum number of iterations.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • num_random_init (int) – Number of random initialisations within the epsilon ball. For num_random_init=0 the attack starts at the original input.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

Returns

An array holding the adversarial examples.

set_params(**kwargs) → None

Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes.

Parameters

kwargs – A dictionary of attack-specific parameters.
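The iterate-then-project loop described above can be sketched for the L-inf case (a simplified untargeted update with a fixed gradient for illustration, not ART's internals):

```python
import numpy as np

def pgd_linf_step(x_adv, x_orig, grad, eps=0.3, eps_step=0.1, clip=(0.0, 1.0)):
    """One untargeted PGD iteration under the L-inf norm: ascend along the
    gradient sign, project onto the eps-ball around x_orig, then clip to the
    permitted data range."""
    x_adv = x_adv + eps_step * np.sign(grad)
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)  # projection onto the L-inf ball
    return np.clip(x_adv, *clip)                        # projection onto the data range

# With a fixed gradient the iterate saturates at the eps-ball boundary
x = np.array([0.5, 0.5])
g = np.array([1.0, -1.0])
x_adv = x.copy()
for _ in range(10):
    x_adv = pgd_linf_step(x_adv, x, g)
print(x_adv)  # [0.8 0.2]
```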

Projected Gradient Descent (PGD) - Numpy

class art.attacks.evasion.ProjectedGradientDescentNumpy(estimator, norm=inf, eps=0.3, eps_step=0.1, max_iter=100, targeted=False, num_random_init=0, batch_size=32, random_eps=False)

The Projected Gradient Descent attack is an iterative method in which, after each iteration, the perturbation is projected on an lp-ball of specified radius (in addition to clipping the values of the adversarial sample so that it lies in the permitted data range). This is the attack proposed by Madry et al. for adversarial training.

__init__(estimator, norm=inf, eps=0.3, eps_step=0.1, max_iter=100, targeted=False, num_random_init=0, batch_size=32, random_eps=False)

Create a ProjectedGradientDescentNumpy instance.

Parameters
  • estimator (BaseEstimator) – A trained estimator.

  • norm (int) – The norm of the adversarial perturbation supporting np.inf, 1 or 2.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • random_eps (bool) – When True, epsilon is drawn randomly from a truncated normal distribution. The literature suggests this for FGSM-based training to generalize across different epsilons. eps_step is modified to preserve the ratio of eps / eps_step. The effectiveness of this method with PGD is untested (https://arxiv.org/pdf/1611.01236.pdf).

  • max_iter (int) – The maximum number of iterations.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • num_random_init (int) – Number of random initialisations within the epsilon ball. For num_random_init=0 the attack starts at the original input.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

  • mask (np.ndarray) – An array with a mask to be applied to the adversarial perturbations. Shape needs to be broadcastable to the shape of x. Any features for which the mask is zero will not be adversarially perturbed.

Returns

An array holding the adversarial examples.
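The mask semantics can be illustrated directly: a zero entry in the mask suppresses the perturbation at that feature (a sketch of the broadcasting behaviour, not ART's code):

```python
import numpy as np

# A zero in the mask leaves that feature untouched;
# the mask only needs to be broadcastable to the shape of x.
x = np.zeros((1, 2, 2))
perturbation = np.full((1, 2, 2), 0.3)
mask = np.array([[1.0, 0.0], [0.0, 1.0]])  # broadcasts over the batch axis
x_adv = x + perturbation * mask
print(x_adv[0])  # only the diagonal entries are perturbed
```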

Projected Gradient Descent (PGD) - PyTorch

class art.attacks.evasion.ProjectedGradientDescentPyTorch(estimator, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, random_eps: bool = False)

The Projected Gradient Descent attack is an iterative method in which, after each iteration, the perturbation is projected on an lp-ball of specified radius (in addition to clipping the values of the adversarial sample so that it lies in the permitted data range). This is the attack proposed by Madry et al. for adversarial training.

__init__(estimator, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, random_eps: bool = False)

Create a ProjectedGradientDescentPyTorch instance.

Parameters
  • estimator (BaseEstimator) – A trained estimator.

  • norm (int) – The norm of the adversarial perturbation. Possible values: np.inf, 1 or 2.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • random_eps (bool) – When True, epsilon is drawn randomly from a truncated normal distribution. The literature suggests this for FGSM-based training to generalize across different epsilons. eps_step is modified to preserve the ratio of eps / eps_step. The effectiveness of this method with PGD is untested (https://arxiv.org/pdf/1611.01236.pdf).

  • max_iter (int) – The maximum number of iterations.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • num_random_init (int) – Number of random initialisations within the epsilon ball. For num_random_init=0 the attack starts at the original input.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

  • mask (np.ndarray) – An array with a mask to be applied to the adversarial perturbations. Shape needs to be broadcastable to the shape of x. Any features for which the mask is zero will not be adversarially perturbed.

Returns

An array holding the adversarial examples.

Projected Gradient Descent (PGD) - TensorFlowV2

class art.attacks.evasion.ProjectedGradientDescentTensorFlowV2(estimator, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, random_eps: bool = False)

The Projected Gradient Descent attack is an iterative method in which, after each iteration, the perturbation is projected on an lp-ball of specified radius (in addition to clipping the values of the adversarial sample so that it lies in the permitted data range). This is the attack proposed by Madry et al. for adversarial training.

__init__(estimator, norm: int = inf, eps: float = 0.3, eps_step: float = 0.1, max_iter: int = 100, targeted: bool = False, num_random_init: int = 0, batch_size: int = 32, random_eps: bool = False)

Create a ProjectedGradientDescentTensorFlowV2 instance.

Parameters
  • estimator – A trained estimator.

  • norm (int) – The norm of the adversarial perturbation. Possible values: np.inf, 1 or 2.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • random_eps (bool) – When True, epsilon is drawn randomly from a truncated normal distribution. The literature suggests this for FGSM-based training to generalize across different epsilons. eps_step is modified to preserve the ratio of eps / eps_step. The effectiveness of this method with PGD is untested (https://arxiv.org/pdf/1611.01236.pdf).

  • max_iter (int) – The maximum number of iterations.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • num_random_init (int) – Number of random initialisations within the epsilon ball. For num_random_init=0 the attack starts at the original input.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

  • mask (np.ndarray) – An array with a mask to be applied to the adversarial perturbations. Shape needs to be broadcastable to the shape of x. Any features for which the mask is zero will not be adversarially perturbed.

Returns

An array holding the adversarial examples.

NewtonFool

class art.attacks.evasion.NewtonFool(classifier: art.estimators.classification.classifier.ClassifierGradients, max_iter: int = 100, eta: float = 0.01, batch_size: int = 1)

Implementation of the NewtonFool attack from Jang et al. (2017).

__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, max_iter: int = 100, eta: float = 0.01, batch_size: int = 1) → None

Create a NewtonFool attack instance.

Parameters
  • classifier (ClassifierGradients) – A trained classifier.

  • max_iter (int) – The maximum number of iterations.

  • eta (float) – The eta coefficient.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in a NumPy array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial examples.

PixelAttack

class art.attacks.evasion.PixelAttack(classifier: Classifier, th: Optional[int] = None, es: int = 0, targeted: bool = False, verbose: bool = False)

This attack was originally implemented by Vargas et al. (2019). It is a generalisation of the One Pixel Attack originally implemented by Su et al. (2019).

__init__(classifier: Classifier, th: Optional[int] = None, es: int = 0, targeted: bool = False, verbose: bool = False) → None

Create a PixelAttack instance.

Parameters
  • classifier – A trained classifier.

  • th – Threshold value of the Pixel/Threshold attack. th=None indicates finding a minimum threshold.

  • es (int) – Indicates whether the attack uses CMAES (0) or DE (1) as Evolutionary Strategy.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • verbose (bool) – Indicates whether to print verbose messages of ES used.
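Few-pixel attacks of this kind typically encode each candidate solution evolved by the ES as flat (row, col, r, g, b) tuples; decoding one onto an image can be sketched as follows (the encoding here is illustrative, not ART's exact internal representation):

```python
import numpy as np

def apply_pixel_perturbation(image: np.ndarray, solution: np.ndarray) -> np.ndarray:
    """Decode a flat candidate vector of (row, col, r, g, b) tuples and paint
    those pixels onto a copy of an HxWx3 image."""
    adv = image.copy()
    for row, col, r, g, b in solution.reshape(-1, 5):
        adv[int(row), int(col)] = (r, g, b)
    return adv

img = np.zeros((8, 8, 3))
adv = apply_pixel_perturbation(img, np.array([2, 3, 1.0, 0.5, 0.0]))
print(adv[2, 3])  # [1.  0.5 0. ]
```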

ThresholdAttack

class art.attacks.evasion.ThresholdAttack(classifier: Classifier, th: Optional[int] = None, es: int = 0, targeted: bool = False, verbose: bool = False)

This attack was originally implemented by Vargas et al. (2019).

__init__(classifier: Classifier, th: Optional[int] = None, es: int = 0, targeted: bool = False, verbose: bool = False) → None

Create a ThresholdAttack instance.

Parameters
  • classifier – A trained classifier.

  • th – Threshold value of the Pixel/Threshold attack. th=None indicates finding a minimum threshold.

  • es (int) – Indicates whether the attack uses CMAES (0) or DE (1) as Evolutionary Strategy.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • verbose (bool) – Indicates whether to print verbose messages of ES used.

Jacobian Saliency Map Attack (JSMA)

class art.attacks.evasion.SaliencyMapMethod(classifier: art.estimators.classification.classifier.ClassifierGradients, theta: float = 0.1, gamma: float = 1.0, batch_size: int = 1)

Implementation of the Jacobian-based Saliency Map Attack (Papernot et al. 2016).

__init__(classifier: art.estimators.classification.classifier.ClassifierGradients, theta: float = 0.1, gamma: float = 1.0, batch_size: int = 1) → None

Create a SaliencyMapMethod instance.

Parameters
  • classifier (ClassifierGradients) – A trained classifier.

  • theta (float) – Amount of perturbation introduced to each modified feature per step (can be positive or negative).

  • gamma (float) – Maximum fraction of features being perturbed (between 0 and 1).

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

Returns

An array holding the adversarial examples.
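For a positive-theta step, the saliency map rewards features whose gradient increases the target class while decreasing the combined score of the other classes (Papernot et al. 2016). A simplified single-feature sketch (the paper searches over feature pairs):

```python
import numpy as np

def jsma_saliency(grad_target: np.ndarray, grad_others: np.ndarray) -> np.ndarray:
    """Per-feature saliency for a positive-theta step: a feature qualifies
    only if increasing it raises the target class (grad_target > 0) while
    lowering the other classes combined (grad_others < 0)."""
    saliency = grad_target * np.abs(grad_others)
    saliency[(grad_target < 0) | (grad_others > 0)] = 0.0  # sign conditions violated
    return saliency

gt = np.array([0.2, -0.1, 0.4])   # d(target score)/d(feature)
go = np.array([-0.3, -0.2, 0.1])  # d(sum of other scores)/d(feature)
s = jsma_saliency(gt, go)
print(int(s.argmax()))  # 0
```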

Shadow Attack

class art.attacks.evasion.ShadowAttack(estimator: art.estimators.classification.classifier.Classifier, sigma: float = 0.5, nb_steps: int = 300, learning_rate: float = 0.1, lambda_tv: float = 0.3, lambda_c: float = 1.0, lambda_s: float = 0.5, batch_size: int = 400, targeted: bool = False)

Implementation of the Shadow Attack.

__init__(estimator: art.estimators.classification.classifier.Classifier, sigma: float = 0.5, nb_steps: int = 300, learning_rate: float = 0.1, lambda_tv: float = 0.3, lambda_c: float = 1.0, lambda_s: float = 0.5, batch_size: int = 400, targeted: bool = False)

Create an instance of the ShadowAttack.

Parameters
  • estimator (Classifier) – A trained classifier.

  • sigma (float) – Standard deviation of the random Gaussian noise.

  • nb_steps (int) – Number of SGD steps.

  • learning_rate (float) – Learning rate for SGD.

  • lambda_tv (float) – Scalar penalty weight for total variation of the perturbation.

  • lambda_c (float) – Scalar penalty weight for change in the mean of each color channel of the perturbation.

  • lambda_s (float) – Scalar penalty weight for similarity of color channels in perturbation.

  • batch_size (int) – The size of the training batch.

  • targeted (bool) – True if the attack is targeted.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array. This attack requires a lot of memory and therefore accepts only a single sample as input, i.e. a batch of size 1.

Parameters
  • x – An array of a single original input sample.

  • y – An array of a single target label.

Returns

An array with the adversarial examples.

Spatial Transformations Attack

class art.attacks.evasion.SpatialTransformation(classifier: Classifier, max_translation: float = 0.0, num_translations: int = 1, max_rotation: float = 0.0, num_rotations: int = 1)

Implementation of the spatial transformation attack using translation and rotation of inputs. The attack conducts black-box queries to the target model in a grid search over possible translations and rotations to find optimal attack parameters.

__init__(classifier: Classifier, max_translation: float = 0.0, num_translations: int = 1, max_rotation: float = 0.0, num_rotations: int = 1) → None
Parameters
  • classifier – A trained classifier.

  • max_translation (float) – The maximum translation in any direction as percentage of image size. The value is expected to be in the range [0, 100].

  • num_translations (int) – The number of translations to search on grid spacing per direction.

  • max_rotation (float) – The maximum rotation in either direction in degrees. The value is expected to be in the range [0, 180].

  • num_rotations (int) – The number of rotations to search on grid spacing.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial examples.
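The black-box grid search can be sketched with a toy predictor and integer translations (np.roll is a crude stand-in for proper image translation; real rotations would need an image library):

```python
import numpy as np

def best_translation(predict, x, label, shifts=(-2, -1, 0, 1, 2)):
    """Black-box grid search over integer translations: return the first
    shift (dx, dy) that changes the predicted label, or None."""
    for dx in shifts:
        for dy in shifts:
            x_t = np.roll(x, shift=(dx, dy), axis=(0, 1))
            if predict(x_t) != label:
                return dx, dy
    return None

# Toy predictor keyed on the top-left pixel
img = np.zeros((4, 4))
img[0, 0] = 1.0
predict = lambda im: int(im[0, 0] > 0.5)
result = best_translation(predict, img, predict(img))
print(result)  # (-2, -2)
```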

Square Attack

class art.attacks.evasion.SquareAttack(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: Union[float, int] = inf, max_iter: int = 100, eps: float = 0.3, p_init: float = 0.8, nb_restarts: int = 1)
__init__(estimator: art.estimators.classification.classifier.ClassifierGradients, norm: Union[float, int] = inf, max_iter: int = 100, eps: float = 0.3, p_init: float = 0.8, nb_restarts: int = 1)
Parameters

estimator (ClassifierGradients) – A trained estimator.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x (np.ndarray) – An array with the original inputs.

  • y (np.ndarray) – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

Returns

An array holding the adversarial examples.

Return type

np.ndarray

Universal Perturbation Attack

class art.attacks.evasion.UniversalPerturbation(classifier: Union[art.estimators.classification.classifier.ClassifierGradients, art.estimators.classification.classifier.ClassifierNeuralNetwork], attacker: str = 'deepfool', attacker_params: Optional[Dict[str, Any]] = None, delta: float = 0.2, max_iter: int = 20, eps: float = 10.0, norm: int = inf, batch_size: int = 32)

Implementation of the attack from Moosavi-Dezfooli et al. (2016). Computes a fixed perturbation to be applied to all future inputs. To this end, it can use any adversarial attack method.

__init__(classifier: Union[art.estimators.classification.classifier.ClassifierGradients, art.estimators.classification.classifier.ClassifierNeuralNetwork], attacker: str = 'deepfool', attacker_params: Optional[Dict[str, Any]] = None, delta: float = 0.2, max_iter: int = 20, eps: float = 10.0, norm: int = inf, batch_size: int = 32) → None
Parameters
  • classifier – A trained classifier.

  • attacker (str) – Adversarial attack name. Default is ‘deepfool’. Supported names: ‘carlini’, ‘carlini_inf’, ‘deepfool’, ‘fgsm’, ‘bim’, ‘pgd’, ‘margin’, ‘ead’, ‘newtonfool’, ‘jsma’, ‘vat’.

  • attacker_params – Parameters specific to the adversarial attack. If this parameter is not specified, the default parameters of the chosen attack will be used.

  • delta (float) – Desired accuracy; the attack iterates until the fooling rate on the inputs reaches 1 - delta.

  • max_iter (int) – The maximum number of iterations for computing universal perturbation.

  • eps (float) – Attack step size (input variation).

  • norm (int) – The norm of the adversarial perturbation. Possible values: np.inf, 2.

  • batch_size (int) – Batch size for model evaluations in UniversalPerturbation.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial examples.
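The outer loop of the attack can be sketched as follows, with `per_sample_attack` standing in for the chosen inner attack (e.g. DeepFool) and a toy 1-D classifier for illustration; this follows the structure of the paper, not ART's code:

```python
import numpy as np

def project_linf(v, eps):
    """Project the accumulated perturbation back into the L-inf eps-ball."""
    return np.clip(v, -eps, eps)

def universal_perturbation(xs, ys, predict, per_sample_attack, eps=0.2, max_iter=20, delta=0.2):
    """Sweep the data; whenever x + v is still correctly classified, extend v
    with a per-sample perturbation and re-project, until the fooling rate
    reaches 1 - delta."""
    v = np.zeros_like(xs[0])
    for _ in range(max_iter):
        fooling_rate = np.mean([predict(x + v) != y for x, y in zip(xs, ys)])
        if fooling_rate >= 1.0 - delta:
            break
        for x, y in zip(xs, ys):
            if predict(x + v) == y:
                v = project_linf(v + per_sample_attack(x + v), eps)
    return v

# Toy 1-D setting: both samples are class 1; one shared shift fools both
xs, ys = np.array([0.1, 0.2]), np.array([1, 1])
predict = lambda x: int(x > 0)  # toy hard-label classifier
inner_attack = lambda x: -0.3   # toy inner attack: always push left
v = universal_perturbation(xs, ys, predict, inner_attack, eps=0.25)
print(float(v))  # -0.25
```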

Virtual Adversarial Method

class art.attacks.evasion.VirtualAdversarialMethod(classifier: Union[art.estimators.classification.classifier.ClassifierGradients, art.estimators.classification.classifier.ClassifierNeuralNetwork], max_iter: int = 10, finite_diff: float = 1e-06, eps: float = 0.1, batch_size: int = 1)

This attack was originally proposed by Miyato et al. (2016) and was used for virtual adversarial training.

__init__(classifier: Union[art.estimators.classification.classifier.ClassifierGradients, art.estimators.classification.classifier.ClassifierNeuralNetwork], max_iter: int = 10, finite_diff: float = 1e-06, eps: float = 0.1, batch_size: int = 1) → None

Create a VirtualAdversarialMethod instance.

Parameters
  • classifier – A trained classifier.

  • eps (float) – Attack step (max input variation).

  • finite_diff (float) – The finite difference parameter.

  • max_iter (int) – The maximum number of iterations.

  • batch_size (int) – Size of the batch on which adversarial samples are generated.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – An array with the original labels to be predicted.

Returns

An array holding the adversarial examples.
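The role of finite_diff can be illustrated with a coordinate-wise finite-difference estimate of the virtual adversarial direction (a sketch of the idea with a toy divergence function, not ART's implementation):

```python
import numpy as np

def vat_direction(kl_div, x, d0, finite_diff=1e-6):
    """One step toward the virtual adversarial direction: differentiate the
    divergence between predictions at x and at x + d by coordinate-wise
    finite differences, then renormalize."""
    d = d0 / (np.linalg.norm(d0) + 1e-12)
    base = kl_div(x, x + d)
    grad = np.zeros_like(d)
    for i in range(d.size):
        d_plus = d.copy()
        d_plus.flat[i] += finite_diff
        grad.flat[i] = (kl_div(x, x + d_plus) - base) / finite_diff
    return grad / (np.linalg.norm(grad) + 1e-12)

# Toy divergence that is 4x more sensitive to the first coordinate
kl = lambda a, b: float((((b - a) ** 2) * np.array([4.0, 1.0])).sum())
g = vat_direction(kl, np.zeros(2), np.ones(2))
print(int(g.argmax()))  # 0: the direction leans toward the sensitive coordinate
```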

Wasserstein Attack

class art.attacks.evasion.Wasserstein(estimator: art.estimators.estimator.BaseEstimator, targeted: bool = False, regularization: float = 3000.0, p: int = 2, kernel_size: int = 5, eps_step: float = 0.1, norm: str = 'wasserstein', ball: str = 'wasserstein', eps: float = 0.3, eps_iter: int = 10, eps_factor: float = 1.1, max_iter: int = 400, conjugate_sinkhorn_max_iter: int = 400, projected_sinkhorn_max_iter: int = 400, batch_size: int = 1)

Implements Wasserstein Adversarial Examples via Projected Sinkhorn Iterations as an evasion attack.

__init__(estimator: art.estimators.estimator.BaseEstimator, targeted: bool = False, regularization: float = 3000.0, p: int = 2, kernel_size: int = 5, eps_step: float = 0.1, norm: str = 'wasserstein', ball: str = 'wasserstein', eps: float = 0.3, eps_iter: int = 10, eps_factor: float = 1.1, max_iter: int = 400, conjugate_sinkhorn_max_iter: int = 400, projected_sinkhorn_max_iter: int = 400, batch_size: int = 1)

Create a Wasserstein attack instance.

Parameters
  • estimator (BaseEstimator) – A trained estimator.

  • targeted (bool) – Indicates whether the attack is targeted (True) or untargeted (False).

  • regularization (float) – Entropy regularization.

  • p (int) – The p of the p-Wasserstein distance.

  • kernel_size (int) – Kernel size for computing the cost matrix.

  • eps_step (float) – Attack step size (input variation) at each iteration.

  • norm (str) – The norm of the adversarial perturbation. Possible values: inf, 1, 2 or wasserstein.

  • ball (str) – The ball of the adversarial perturbation. Possible values: inf, 1, 2 or wasserstein.

  • eps (float) – Maximum perturbation that the attacker can introduce.

  • eps_iter (int) – Number of iterations to increase the epsilon.

  • eps_factor (float) – Factor to increase the epsilon.

  • max_iter (int) – The maximum number of iterations.

  • conjugate_sinkhorn_max_iter (int) – The maximum number of iterations for the conjugate Sinkhorn optimizer.

  • projected_sinkhorn_max_iter (int) – The maximum number of iterations for the projected Sinkhorn optimizer.

  • batch_size (int) – Size of batches.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,). Only provide this parameter if you’d like to use true labels when crafting adversarial samples. Otherwise, model predictions are used as labels to avoid the “label leaking” effect (explained in this paper: https://arxiv.org/abs/1611.01236). Default is None.

  • cost_matrix (np.ndarray) – A non-negative cost matrix.

Returns

An array holding the adversarial examples.

Zeroth-Order Optimization Attack (ZOO)

class art.attacks.evasion.ZooAttack(classifier: Classifier, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, max_iter: int = 10, binary_search_steps: int = 1, initial_const: float = 0.001, abort_early: bool = True, use_resize: bool = True, use_importance: bool = True, nb_parallel: int = 128, batch_size: int = 1, variable_h: float = 0.0001)

The black-box zeroth-order optimization attack from Pin-Yu Chen et al. (2018). This attack is a variant of the C&W attack which uses ADAM coordinate descent to perform numerical estimation of gradients.

__init__(classifier: Classifier, confidence: float = 0.0, targeted: bool = False, learning_rate: float = 0.01, max_iter: int = 10, binary_search_steps: int = 1, initial_const: float = 0.001, abort_early: bool = True, use_resize: bool = True, use_importance: bool = True, nb_parallel: int = 128, batch_size: int = 1, variable_h: float = 0.0001)

Create a ZOO attack instance.

Parameters
  • classifier – A trained classifier.

  • confidence (float) – Confidence of adversarial examples: a higher value produces examples that are farther away from the original input but classified with higher confidence as the target class.

  • targeted (bool) – Should the attack target one specific class.

  • learning_rate (float) – The initial learning rate for the attack algorithm. Smaller values produce better results but are slower to converge.

  • max_iter (int) – The maximum number of iterations.

  • binary_search_steps (int) – Number of times to adjust constant with binary search (positive value).

  • initial_const (float) – The initial trade-off constant c to use to tune the relative importance of distance and confidence. If binary_search_steps is large, the initial constant is not important, as discussed in Carlini and Wagner (2016).

  • abort_early (bool) – True if gradient descent should be abandoned when it gets stuck.

  • use_resize (bool) – True to use the resizing strategy from the paper: first compute the attack on inputs resized to 32x32, then increase the size if needed to 64x64, followed by 128x128.

  • use_importance (bool) – True to use importance sampling when choosing coordinates to update.

  • nb_parallel (int) – Number of coordinate updates to run in parallel. A higher value for nb_parallel should be preferred over a large batch size.

  • batch_size (int) – Internal size of batches on which adversarial samples are generated. Small batch sizes are encouraged for ZOO, as the algorithm already runs nb_parallel coordinate updates in parallel for each sample. The batch size is a multiplier of nb_parallel in terms of memory consumption.

  • variable_h (float) – Step size for numerical estimation of derivatives.

generate(*args, **kwargs)

Generate adversarial samples and return them in an array.

Parameters
  • x – An array with the original inputs to be attacked.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

Returns

An array holding the adversarial examples.
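The zeroth-order estimation at the heart of ZOO is a symmetric difference quotient per coordinate, controlled by variable_h; only loss values are queried, never true gradients. A minimal sketch:

```python
import numpy as np

def zoo_gradient_estimate(loss, x, coords, variable_h=1e-4):
    """Coordinate-wise zeroth-order gradient estimate via symmetric
    differences; `coords` plays the role of the nb_parallel sampled
    coordinates."""
    grad = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e.flat[i] = variable_h
        grad.flat[i] = (loss(x + e) - loss(x - e)) / (2.0 * variable_h)
    return grad

# On a quadratic loss the estimate matches the true gradient 2*x
x = np.array([0.5, -1.0, 2.0])
g = zoo_gradient_estimate(lambda z: float((z ** 2).sum()), x, coords=range(3))
print(np.round(g, 6))  # [ 1. -2.  4.]
```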