art.defences.transformer.evasion

Module implementing transformer-based defences against evasion attacks.
Defensive Distillation

class art.defences.transformer.evasion.DefensiveDistillation(classifier: CLASSIFIER_TYPE, batch_size: int = 128, nb_epochs: int = 10)

Implement the defensive distillation mechanism.
Paper link: https://arxiv.org/abs/1511.04508
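As described in the paper, defensive distillation trains the classifier at an elevated softmax temperature and reuses its probability outputs as soft labels. A minimal NumPy sketch of the temperature softmax behind those soft labels (the function name and values are illustrative, not part of the ART implementation):

```python
import numpy as np

def softmax_t(logits, temperature=1.0):
    # Softmax at temperature T; larger T yields softer (more uniform) probabilities.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([4.0, 1.0, 0.5])
hard = softmax_t(logits, temperature=1.0)   # near one-hot probabilities
soft = softmax_t(logits, temperature=20.0)  # soft labels of the kind distillation transfers
```

Both outputs are valid probability vectors with the same predicted class; the high-temperature one simply carries more of the teacher's relative class information.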
__call__(x: numpy.ndarray, transformed_classifier: CLASSIFIER_TYPE) → CLASSIFIER_TYPE

Perform the defensive distillation defence mechanism and return a more robust classifier.
- Parameters

  x (ndarray) – Dataset for training the transformed classifier.

  transformed_classifier – A classifier to be transformed for increased robustness. Note that the loss function used to fit the input transformed_classifier must support soft labels, i.e. probability labels.

- Returns

  The transformed classifier.
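The transfer performed by __call__ can be sketched without ART: a teacher model's probability outputs become the soft labels on which a student is fit with a cross-entropy loss. The linear models, data, and hyperparameters below are illustrative stand-ins for the two classifiers, not the ART implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy dataset and a fixed "teacher" standing in for the trained classifier.
X = rng.normal(size=(200, 2))
W_teacher = rng.normal(size=(2, 3))
soft_labels = softmax(X @ W_teacher)  # probability (soft) labels

# "Student" standing in for transformed_classifier: softmax regression
# fit on the soft labels by gradient descent on the cross-entropy loss.
W_student = np.zeros((2, 3))
lr = 0.5
for _ in range(300):
    probs = softmax(X @ W_student)
    grad = X.T @ (probs - soft_labels) / len(X)  # cross-entropy gradient
    W_student -= lr * grad

# After fitting, the student reproduces the teacher's predicted classes.
agreement = np.mean(
    softmax(X @ W_student).argmax(axis=1) == soft_labels.argmax(axis=1)
)
```

This is why the loss of transformed_classifier must accept probability labels: the targets here are full distributions, not one-hot class indices.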
__init__(classifier: CLASSIFIER_TYPE, batch_size: int = 128, nb_epochs: int = 10) → None

Create an instance of the defensive distillation defence.
- Parameters

  classifier – A trained classifier.

  batch_size (int) – Size of batches.

  nb_epochs (int) – Number of epochs to use for training.
fit(x: numpy.ndarray, y: Optional[numpy.ndarray] = None, **kwargs) → None

No parameters to learn for this method; do nothing.