art.estimators.certification.randomized_smoothing

Mixin Base Class Randomized Smoothing

class art.estimators.certification.randomized_smoothing.RandomizedSmoothingMixin(sample_size: int, *args, scale: float = 0.1, alpha: float = 0.001, **kwargs)

Implementation of Randomized Smoothing applied to classifier predictions and gradients, as introduced in Cohen et al. (2019).

certify(x: numpy.ndarray, n: int, batch_size: int = 32) → Tuple[numpy.ndarray, numpy.ndarray]

Computes the certifiable radius around input x and returns the prediction and certified radius r.

Return type

Tuple

Parameters
  • x (ndarray) – Sample input with shape as expected by the model.

  • n (int) – Number of samples used to estimate the certifiable radius.

  • batch_size (int) – Batch size.

Returns

Tuple of length 2 of the selected class and certified radius.
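
Example

A minimal sketch of calling certify on an existing randomized smoothing estimator. The names rs_classifier and x_test are hypothetical placeholders for an instance of a class using this mixin (e.g. PyTorchRandomizedSmoothing) and a batch of test inputs.

    # rs_classifier: a fitted randomized smoothing estimator (hypothetical)
    # x_test: inputs with the shape expected by the model (hypothetical)
    prediction, radius = rs_classifier.certify(x=x_test, n=500, batch_size=32)

    # prediction[i] is the class selected for x_test[i];
    # radius[i] is the corresponding certified radius.
    for cls, r in zip(prediction, radius):
        print(f"class {cls}, certified radius {r:.4f}")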

fit(x: numpy.ndarray, y: numpy.ndarray, batch_size: int = 128, nb_epochs: int = 10, **kwargs) → None

Fit the classifier on the training set (x, y).

Parameters
  • x (ndarray) – Training data.

  • y (ndarray) – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

  • batch_size (int) – Batch size.

  • nb_epochs (int) – Number of epochs to use for training.

  • kwargs – Dictionary of framework-specific arguments. This parameter is not currently supported for PyTorch and providing it has no effect.

predict(x: numpy.ndarray, batch_size: int = 128, **kwargs) → numpy.ndarray

Perform prediction of the given classifier for a batch of inputs, taking an expectation over transformations.

Return type

ndarray

Parameters
  • x (ndarray) – Test set.

  • batch_size (int) – Batch size.

  • is_abstain (bool) – True if the function should abstain from prediction and return all zeros for abstained samples. Default: True.

Returns

Array of predictions of shape (nb_inputs, nb_classes).
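
Example

A short sketch of smoothed prediction with abstention, using the hypothetical rs_classifier and x_test from the previous example. With is_abstain=True, abstained samples appear as all-zero rows in the returned array.

    import numpy as np

    preds = rs_classifier.predict(x_test, batch_size=128, is_abstain=True)

    labels = np.argmax(preds, axis=1)                   # predicted classes
    n_abstained = int(np.sum(preds.sum(axis=1) == 0))   # all-zero rows = abstentions
    print(f"{n_abstained} abstentions out of {len(x_test)} samples")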

PyTorch Randomized Smoothing Classifier

class art.estimators.certification.randomized_smoothing.PyTorchRandomizedSmoothing(model: torch.nn.Module, loss: torch.nn.modules.loss._Loss, input_shape: Tuple[int, ...], nb_classes: int, optimizer: Optional[torch.optim.Optimizer] = None, channels_first: bool = True, clip_values: Optional[Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]]] = None, preprocessing_defences: Optional[Union[Preprocessor, List[Preprocessor]]] = None, postprocessing_defences: Optional[Union[Postprocessor, List[Postprocessor]]] = None, preprocessing: Optional[Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]]] = (0, 1), device_type: str = 'gpu', sample_size: int = 32, scale: float = 0.1, alpha: float = 0.001)

Implementation of Randomized Smoothing applied to classifier predictions and gradients, as introduced in Cohen et al. (2019).

__init__(model: torch.nn.Module, loss: torch.nn.modules.loss._Loss, input_shape: Tuple[int, ...], nb_classes: int, optimizer: Optional[torch.optim.Optimizer] = None, channels_first: bool = True, clip_values: Optional[Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]]] = None, preprocessing_defences: Optional[Union[Preprocessor, List[Preprocessor]]] = None, postprocessing_defences: Optional[Union[Postprocessor, List[Postprocessor]]] = None, preprocessing: Optional[Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]]] = (0, 1), device_type: str = 'gpu', sample_size: int = 32, scale: float = 0.1, alpha: float = 0.001)

Create a randomized smoothing classifier.

Parameters
  • model – PyTorch model. The output of the model can be logits, probabilities or anything else. Logits output should be preferred where possible to ensure attack efficiency.

  • loss – The loss function for which to compute gradients for training. The target label must be raw categorical, i.e. not converted to one-hot encoding.

  • input_shape (Tuple) – The shape of one input instance.

  • nb_classes (int) – The number of classes of the model.

  • optimizer – The optimizer used to train the classifier.

  • channels_first (bool) – Set channels first or last.

  • clip_values – Tuple of the form (min, max) of floats or np.ndarray representing the minimum and maximum values allowed for features. If floats are provided, these will be used as the range of all features. If arrays are provided, each value will be considered the bound for a feature, thus the shape of clip values needs to match the total number of features.

  • preprocessing_defences – Preprocessing defence(s) to be applied by the classifier.

  • postprocessing_defences – Postprocessing defence(s) to be applied by the classifier.

  • preprocessing – Tuple of the form (subtractor, divider) of floats or np.ndarray of values to be used for data preprocessing. The first value will be subtracted from the input. The input will then be divided by the second one.

  • device_type (str) – Type of device on which the classifier is run, either gpu or cpu.

  • sample_size (int) – Number of samples for smoothing.

  • scale (float) – Standard deviation of Gaussian noise added.

  • alpha (float) – The failure probability of smoothing.
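
Example

For illustration, a minimal sketch of wrapping a small PyTorch model for 1x28x28 inputs and 10 classes. The network architecture and hyperparameter values are hypothetical placeholders; only the constructor arguments documented above are assumed.

    import torch
    import torch.nn as nn

    from art.estimators.certification.randomized_smoothing import PyTorchRandomizedSmoothing

    # Hypothetical toy model producing logits for 10 classes.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )
    loss = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    rs_classifier = PyTorchRandomizedSmoothing(
        model=model,
        loss=loss,
        optimizer=optimizer,
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
        sample_size=100,  # number of noise samples used for smoothing
        scale=0.25,       # standard deviation of the Gaussian noise
        alpha=0.001,      # failure probability of the certificate
    )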

certify(x: numpy.ndarray, n: int, batch_size: int = 32) → Tuple[numpy.ndarray, numpy.ndarray]

Computes the certifiable radius around input x and returns the prediction and certified radius r.

Return type

Tuple

Parameters
  • x (ndarray) – Sample input with shape as expected by the model.

  • n (int) – Number of samples used to estimate the certifiable radius.

  • batch_size (int) – Batch size.

Returns

Tuple of length 2 of the selected class and certified radius.

property channel_index
Returns

Index of the axis containing the color channels in the samples x.

property channels_first
Returns

Boolean to indicate whether the color channels are placed first in the samples x.

class_gradient(*args, **kwargs)

Compute per-class derivatives of the original classifier w.r.t. x.

Parameters
  • x – Sample input with shape as expected by the model.

  • label – Index of a specific per-class derivative. If an integer is provided, the gradient of that class output is computed for all samples. If multiple values are provided, the first dimension should match the batch size of x, and each value will be used as the target for its corresponding sample in x. If None, then gradients for all classes will be computed for each sample.

Returns

Array of gradients of input features w.r.t. each class in the form (batch_size, nb_classes, input_shape) when computing for all classes, otherwise shape becomes (batch_size, 1, input_shape) when label parameter is specified.
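
Example

A brief sketch of computing per-class gradients with the hypothetical rs_classifier and x_test used above.

    # Gradients for all classes: shape (batch_size, nb_classes) + input_shape.
    all_grads = rs_classifier.class_gradient(x=x_test)

    # Gradient of class 3 only: shape (batch_size, 1) + input_shape.
    class3_grads = rs_classifier.class_gradient(x=x_test, label=3)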

property clip_values

Return the clip values of the input samples.

Returns

Clip values (min, max).

property device

Get the device currently in use.

Returns

The device currently in use.

fit(*args, **kwargs)

Fit the classifier on the training set (x, y).

Parameters
  • x (np.ndarray) – Training data.

  • y (np.ndarray) – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

  • batch_size (int) – Batch size.

  • kwargs (dict) – Dictionary of framework-specific arguments. This parameter is not currently supported for PyTorch and providing it has no effect.

Key nb_epochs

Number of epochs to use for training.

Returns

None
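
Example

A minimal training sketch, continuing the hypothetical rs_classifier example; x_train and y_train are placeholder arrays.

    # x_train: float array of shape (nb_samples, 1, 28, 28),
    # y_train: integer class labels of shape (nb_samples,) (hypothetical data).
    rs_classifier.fit(x_train, y_train, batch_size=128, nb_epochs=10)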

fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) → None

Fit the classifier using the generator that yields batches as specified.

Parameters
  • generator – Batch generator providing (x, y) for each epoch.

  • nb_epochs (int) – Number of epochs to use for training.

  • kwargs – Dictionary of framework-specific arguments. This parameter is not currently supported for PyTorch and providing it has no effect.

get_activations(*args, **kwargs)

Return the output of the specified layer for input x. layer is specified by layer index (between 0 and nb_layers - 1) or by name. The number of layers can be determined by counting the results returned by calling layer_names.

Parameters
  • x – Input for computing the activations.

  • layer – Layer for computing the activations.

  • batch_size – Size of batches.

  • framework – If true, return the intermediate tensor representation of the activation.

Returns

The output of layer, where the first dimension is the batch size corresponding to x.

get_params() → Dict[str, Any]

Get all parameters and their values of this estimator.

Returns

A dictionary of string parameter names to their value.

property input_shape

Return the shape of one input sample.

Returns

Shape of one input sample.

property layer_names

Return the names of the hidden layers in the model, if applicable.

Returns

The names of the hidden layers in the model; input and output layers are ignored.

Warning

layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.

property learning_phase

The learning phase set by the user. Possible values are True for training or False for prediction and None if it has not been set by the library. In the latter case, the library does not do any explicit learning phase manipulation and the current value of the backend framework is used. If a value has been set by the user for this property, it will impact all following computations for model fitting, prediction and gradients.

Returns

Learning phase.

loss_gradient(*args, **kwargs)

Compute the gradient of the loss function w.r.t. x.

Parameters
  • x – Sample input with shape as expected by the model.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

  • sampling (bool) – True if loss gradients should be determined with Monte Carlo sampling.

Returns

Array of gradients of the same shape as x.
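
Example

A short sketch of computing loss gradients with Monte Carlo sampling enabled, using the hypothetical rs_classifier, x_test and y_test.

    # With sampling=True the gradients are determined with Monte Carlo sampling,
    # as described for the sampling parameter above.
    grads = rs_classifier.loss_gradient(x=x_test, y=y_test, sampling=True)
    assert grads.shape == x_test.shape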

loss_gradient_framework(x: torch.Tensor, y: torch.Tensor, **kwargs) → torch.Tensor

Compute the gradient of the loss function w.r.t. x.

Parameters
  • x – Input with shape as expected by the model.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

Returns

Gradients of the same shape as x.

property model

Return the model.

Returns

The model.

property nb_classes

Return the number of output classes.

Returns

Number of classes in the data.

predict(*args, **kwargs)

Perform prediction of the given classifier for a batch of inputs, taking an expectation over transformations.

Parameters
  • x (np.ndarray) – Test set.

  • batch_size (int) – Batch size.

  • is_abstain (bool) – True if the function should abstain from prediction and return all zeros for abstained samples. Default: True.

Returns

Array of predictions of shape (nb_inputs, nb_classes).

Return type

np.ndarray

save(filename: str, path: Optional[str] = None) → None

Save a model to file in the format specific to the backend framework.

Parameters
  • filename (str) – Name of the file where to store the model.

  • path – Path of the folder where to store the model. If no path is specified, the model will be stored in the default data location of the library ART_DATA_PATH.

set_learning_phase(train: bool) → None

Set the learning phase for the backend framework.

Parameters

train (bool) – True to set the learning phase to training, False to set it to prediction.

set_params(**kwargs) → None

Take a dictionary of parameters and apply checks before setting them as attributes.

Parameters

kwargs – A dictionary of attributes.

TensorFlow V2 Randomized Smoothing Classifier

class art.estimators.certification.randomized_smoothing.TensorFlowV2RandomizedSmoothing(model, nb_classes: int, input_shape: Tuple[int, ...], loss_object: Optional[tf.Tensor] = None, train_step: Optional[Callable] = None, channels_first: bool = False, clip_values: Optional[CLIP_VALUES_TYPE] = None, preprocessing_defences: Optional[Union[Preprocessor, List[Preprocessor]]] = None, postprocessing_defences: Optional[Union[Postprocessor, List[Postprocessor]]] = None, preprocessing: PREPROCESSING_TYPE = (0, 1), sample_size: int = 32, scale: float = 0.1, alpha: float = 0.001)

Implementation of Randomized Smoothing applied to classifier predictions and gradients, as introduced in Cohen et al. (2019).

__init__(model, nb_classes: int, input_shape: Tuple[int, ...], loss_object: Optional[tf.Tensor] = None, train_step: Optional[Callable] = None, channels_first: bool = False, clip_values: Optional[CLIP_VALUES_TYPE] = None, preprocessing_defences: Optional[Union[Preprocessor, List[Preprocessor]]] = None, postprocessing_defences: Optional[Union[Postprocessor, List[Postprocessor]]] = None, preprocessing: PREPROCESSING_TYPE = (0, 1), sample_size: int = 32, scale: float = 0.1, alpha: float = 0.001)

Create a randomized smoothing classifier.

Parameters
  • model (function or callable class) – A Python function or callable class defining the model and providing its prediction as output.

  • nb_classes (int) – The number of classes in the classification task.

  • input_shape (Tuple) – Shape of one input for the classifier, e.g. for MNIST input_shape=(28, 28, 1).

  • loss_object – The loss function for which to compute gradients. This parameter is applied for training the model and computing gradients of the loss w.r.t. the input.

  • train_step – A function that applies a gradient update to the trainable variables.

  • channels_first (bool) – Set channels first or last.

  • clip_values – Tuple of the form (min, max) of floats or np.ndarray representing the minimum and maximum values allowed for features. If floats are provided, these will be used as the range of all features. If arrays are provided, each value will be considered the bound for a feature, thus the shape of clip values needs to match the total number of features.

  • preprocessing_defences – Preprocessing defence(s) to be applied by the classifier.

  • postprocessing_defences – Postprocessing defence(s) to be applied by the classifier.

  • preprocessing – Tuple of the form (subtractor, divider) of floats or np.ndarray of values to be used for data preprocessing. The first value will be subtracted from the input. The input will then be divided by the second one.

  • sample_size (int) – Number of samples for smoothing.

  • scale (float) – Standard deviation of Gaussian noise added.

  • alpha (float) – The failure probability of smoothing.
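
Example

For illustration, a minimal sketch of wrapping a small tf.keras model for 28x28x1 inputs and 10 classes. The architecture, optimizer and train_step body are hypothetical placeholders following the usual TF2 custom-training pattern; only the constructor arguments documented above are assumed.

    import tensorflow as tf

    from art.estimators.certification.randomized_smoothing import TensorFlowV2RandomizedSmoothing

    # Hypothetical toy model producing logits for 10 classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

    def train_step(model, images, labels):
        # One gradient update on the trainable variables
        # (the (model, images, labels) signature is assumed here).
        with tf.GradientTape() as tape:
            predictions = model(images, training=True)
            loss = loss_object(labels, predictions)
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    rs_classifier = TensorFlowV2RandomizedSmoothing(
        model=model,
        nb_classes=10,
        input_shape=(28, 28, 1),
        loss_object=loss_object,
        train_step=train_step,
        channels_first=False,
        clip_values=(0.0, 1.0),
        sample_size=100,
        scale=0.25,
        alpha=0.001,
    )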

certify(x: numpy.ndarray, n: int, batch_size: int = 32) → Tuple[numpy.ndarray, numpy.ndarray]

Computes the certifiable radius around input x and returns the prediction and certified radius r.

Return type

Tuple

Parameters
  • x (ndarray) – Sample input with shape as expected by the model.

  • n (int) – Number of samples used to estimate the certifiable radius.

  • batch_size (int) – Batch size.

Returns

Tuple of length 2 of the selected class and certified radius.

property channel_index
Returns

Index of the axis containing the color channels in the samples x.

property channels_first
Returns

Boolean to indicate whether the color channels are placed first in the samples x.

class_gradient(*args, **kwargs)

Compute per-class derivatives of the original classifier w.r.t. x.

Parameters
  • x – Sample input with shape as expected by the model.

  • label – Index of a specific per-class derivative. If an integer is provided, the gradient of that class output is computed for all samples. If multiple values are provided, the first dimension should match the batch size of x, and each value will be used as the target for its corresponding sample in x. If None, then gradients for all classes will be computed for each sample.

Returns

Array of gradients of input features w.r.t. each class in the form (batch_size, nb_classes, input_shape) when computing for all classes, otherwise shape becomes (batch_size, 1, input_shape) when label parameter is specified.

property clip_values

Return the clip values of the input samples.

Returns

Clip values (min, max).

fit(*args, **kwargs)

Fit the classifier on the training set (x, y).

Parameters
  • x (np.ndarray) – Training data.

  • y (np.ndarray) – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

  • batch_size (int) – Batch size.

  • kwargs (dict) – Dictionary of framework-specific arguments. This parameter is not currently supported for TensorFlow and providing it has no effect.

Key nb_epochs

Number of epochs to use for training.

Returns

None

fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) → None

Fit the classifier using the generator that yields batches as specified.

Parameters
  • generator – Batch generator providing (x, y) for each epoch. If the generator can be used for native training in TensorFlow, it will be used directly.

  • nb_epochs (int) – Number of epochs to use for training.

  • kwargs – Dictionary of framework-specific arguments. This parameter is not currently supported for TensorFlow and providing it has no effect.

get_activations(*args, **kwargs)

Return the output of the specified layer for input x. layer is specified by layer index (between 0 and nb_layers - 1) or by name. The number of layers can be determined by counting the results returned by calling layer_names.

Parameters
  • x – Input for computing the activations.

  • layer – Layer for computing the activations.

  • batch_size – Batch size.

Returns

The output of layer, where the first dimension is the batch size corresponding to x.

get_params() → Dict[str, Any]

Get all parameters and their values of this estimator.

Returns

A dictionary of string parameter names to their value.

property input_shape

Return the shape of one input sample.

Returns

Shape of one input sample.

property layer_names

Return the hidden layers in the model, if applicable.

Returns

The hidden layers in the model, input and output layers excluded.

Warning

layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.

property learning_phase

The learning phase set by the user. Possible values are True for training or False for prediction and None if it has not been set by the library. In the latter case, the library does not do any explicit learning phase manipulation and the current value of the backend framework is used. If a value has been set by the user for this property, it will impact all following computations for model fitting, prediction and gradients.

Returns

Learning phase.

loss_gradient(*args, **kwargs)

Compute the gradient of the loss function w.r.t. x.

Parameters
  • x – Sample input with shape as expected by the model.

  • y – Correct labels, one-vs-rest encoding.

  • sampling (bool) – True if loss gradients should be determined with Monte Carlo sampling.

Returns

Array of gradients of the same shape as x.

loss_gradient_framework(x: tf.Tensor, y: tf.Tensor, **kwargs) → tf.Tensor

Compute the gradient of the loss function w.r.t. x.

Parameters
  • x – Input with shape as expected by the model.

  • y – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).

Returns

Gradients of the same shape as x.

property model

Return the model.

Returns

The model.

property nb_classes

Return the number of output classes.

Returns

Number of classes in the data.

predict(*args, **kwargs)

Perform prediction of the given classifier for a batch of inputs, taking an expectation over transformations.

Parameters
  • x (np.ndarray) – Test set.

  • batch_size (int) – Batch size.

  • is_abstain (bool) – True if the function should abstain from prediction and return all zeros for abstained samples. Default: True.

Returns

Array of predictions of shape (nb_inputs, nb_classes).

Return type

np.ndarray

save(filename: str, path: Optional[str] = None) → None

Save a model to file in the format specific to the backend framework. For TensorFlow, .ckpt is used.

Parameters
  • filename (str) – Name of the file where to store the model.

  • path – Path of the folder where to store the model. If no path is specified, the model will be stored in the default data location of the library ART_DATA_PATH.

set_learning_phase(train: bool) → None

Set the learning phase for the backend framework.

Parameters

train (bool) – True to set the learning phase to training, False to set it to prediction.

set_params(**kwargs) → None

Take a dictionary of parameters and apply checks before setting them as attributes.

Parameters

kwargs – A dictionary of attributes.