art.estimators.encoding

Encoder API.

Mixin Base Class Encoder

class art.estimators.encoding.EncoderMixin

Mixin abstract base class defining functionality for encoders.

abstract property encoding_length: int

Return the length of the encoder's output encoding.

Returns:

The length of the output encoding.
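Concrete encoders satisfy this mixin by implementing the abstract property. The sketch below illustrates the contract with a stand-in base class and a hypothetical `ToyEncoder`; neither class is part of ART, and the 128-dimensional code size is an arbitrary assumption.

```python
from abc import ABC, abstractmethod


class EncoderLike(ABC):
    """Stand-in for EncoderMixin, used here only to illustrate the contract."""

    @property
    @abstractmethod
    def encoding_length(self) -> int:
        """Return the length of the encoding output."""
        raise NotImplementedError


class ToyEncoder(EncoderLike):
    """Hypothetical concrete encoder producing 128-dimensional codes."""

    def __init__(self, encoding_length: int = 128):
        self._encoding_length = encoding_length

    @property
    def encoding_length(self) -> int:
        return self._encoding_length


print(ToyEncoder().encoding_length)  # 128
```

Attacks and defences that consume encoders can then rely on `encoding_length` without knowing the backing framework.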

TensorFlow Encoder

class art.estimators.encoding.TensorFlowEncoder(input_ph: tf.Placeholder, model: tf.Tensor, loss: tf.Tensor | None = None, sess: tf.compat.v1.Session | None = None, channels_first: bool = False, clip_values: CLIP_VALUES_TYPE | None = None, preprocessing_defences: Preprocessor | List[Preprocessor] | None = None, postprocessing_defences: Postprocessor | List[Postprocessor] | None = None, preprocessing: PREPROCESSING_TYPE = (0.0, 1.0), feed_dict: Dict[Any, Any] | None = None)

This class implements an encoder model using the TensorFlow framework.

__init__(input_ph: tf.Placeholder, model: tf.Tensor, loss: tf.Tensor | None = None, sess: tf.compat.v1.Session | None = None, channels_first: bool = False, clip_values: CLIP_VALUES_TYPE | None = None, preprocessing_defences: Preprocessor | List[Preprocessor] | None = None, postprocessing_defences: Postprocessor | List[Postprocessor] | None = None, preprocessing: PREPROCESSING_TYPE = (0.0, 1.0), feed_dict: Dict[Any, Any] | None = None)

Initialization specific to encoder estimator implementation in TensorFlow.

Parameters:
  • input_ph – The input placeholder.

  • model – TensorFlow model, neural network or other.

  • loss – The loss function for which to compute gradients. This parameter is necessary when training the model and when computing gradients w.r.t. the loss function.

  • sess – Computation session.

  • channels_first (bool) – Set channels first or last.

  • clip_values – Tuple of the form (min, max) of floats or np.ndarray representing the minimum and maximum values allowed for features. If floats are provided, these will be used as the range of all features. If arrays are provided, each value will be considered the bound for a feature, thus the shape of clip values needs to match the total number of features.

  • preprocessing_defences – Preprocessing defence(s) to be applied by the classifier.

  • postprocessing_defences – Postprocessing defence(s) to be applied by the classifier.

  • preprocessing – Tuple of the form (subtrahend, divisor) of floats or np.ndarray of values to be used for data preprocessing. The first value will be subtracted from the input. The input will then be divided by the second one.

  • feed_dict – A feed dictionary for the session run evaluating the classifier. This dictionary includes all additionally required placeholders except the placeholders defined in this class.
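The `clip_values` and `preprocessing` parameters above have simple numerical semantics that can be shown without TensorFlow. This is a sketch of the documented behaviour (clip to `(min, max)`, then subtract and divide), not ART's actual implementation; the concrete values are arbitrary.

```python
import numpy as np

# Hypothetical settings mirroring the constructor arguments above.
clip_values = (0.0, 1.0)         # (min, max) allowed for input features
preprocessing = (0.5, 0.25)      # (subtrahend, divisor)

x = np.array([[0.2, 0.9, 1.4]])  # one sample with an out-of-range feature

# Clip features into the allowed range, then standardise as documented:
x_clipped = np.clip(x, *clip_values)
x_processed = (x_clipped - preprocessing[0]) / preprocessing[1]

print(x_processed)  # [[-1.2  1.6  2. ]]
```

Passing arrays instead of floats for `clip_values` applies per-feature bounds via NumPy broadcasting, so the array shape must match the feature dimension.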

property channels_first: bool
Returns:

Boolean indicating whether the color channels are placed first (True) or last (False) in the samples x.

property clip_values: CLIP_VALUES_TYPE | None

Return the clip values of the input samples.

Returns:

Clip values (min, max).

clone_for_refitting() → ESTIMATOR_TYPE

Clone estimator for refitting.

compute_loss(x: np.ndarray, y: np.ndarray, **kwargs) → np.ndarray

Compute the loss of the estimator for samples x.

Parameters:
  • x – Input samples.

  • y – Target values.

Returns:

Loss values, in the format expected by the model.

compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) → ndarray

Compute the loss of the estimator for predictions pred.

Parameters:
  • pred (ndarray) – Model predictions.

  • y (ndarray) – Target values.

Returns:

Loss values.

Return type:

ndarray
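The actual loss computed here is whatever tensor was passed as `loss` to the constructor. The sketch below assumes a per-sample mean squared error purely for illustration; the function name mirrors the method but the MSE choice is an assumption, not ART's implementation.

```python
import numpy as np


def compute_loss_from_predictions(pred: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-sample mean squared error between predictions and targets.

    The real estimator evaluates the `loss` tensor supplied at construction;
    MSE here is only an assumed stand-in.
    """
    return np.mean((pred - y) ** 2, axis=-1)


pred = np.array([[0.0, 1.0], [1.0, 1.0]])
y = np.array([[0.0, 0.0], [1.0, 1.0]])
print(compute_loss_from_predictions(pred, y))  # [0.5 0. ]
```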

property encoding_length: int

Return the length of the encoder's output encoding.

Returns:

The length of the output encoding.

property feed_dict: Dict[Any, Any]

Return the feed dictionary for the session run evaluating the classifier.

Returns:

The feed dictionary for the session run evaluating the classifier.

fit(x: np.ndarray, y: np.ndarray, batch_size: int = 128, nb_epochs: int = 10, **kwargs) → None

Do nothing.

fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) → None

Fit the estimator using a generator yielding training batches. Implementations can provide framework-specific versions of this function to speed-up computation.

Parameters:
  • generator – Batch generator providing (x, y) for each epoch.

  • nb_epochs (int) – Number of training epochs.
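The generator protocol can be sketched without ART: `DataGenerator` is ART's batch-yielding class, for which a plain Python generator stands in below. The function names and the training-step placeholder are hypothetical; a real estimator would run an optimiser step on each batch.

```python
import numpy as np


def batch_generator(x, y, batch_size):
    """Plain-Python stand-in for ART's DataGenerator: yields (x, y) batches."""
    for start in range(0, len(x), batch_size):
        yield x[start:start + batch_size], y[start:start + batch_size]


def fit_generator_sketch(generator_factory, nb_epochs=2):
    """Generic shape of a fit_generator loop: one pass over the data per epoch."""
    seen = 0
    for _ in range(nb_epochs):
        for x_batch, y_batch in generator_factory():
            # A real estimator would run a training step on (x_batch, y_batch).
            seen += len(x_batch)
    return seen


x = np.zeros((10, 4))
y = np.zeros((10, 1))
print(fit_generator_sketch(lambda: batch_generator(x, y, batch_size=4)))  # 20
```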

get_activations(x: np.ndarray, layer: int | str, batch_size: int, framework: bool = False) → np.ndarray

Do nothing.

get_params() → Dict[str, Any]

Get all parameters and their values of this estimator.

Returns:

A dictionary of string parameter names to their value.

property input_ph: tf.Placeholder

Return the input placeholder.

Returns:

The input placeholder.

property input_shape: Tuple[int, ...]

Return the shape of one input sample.

Returns:

Shape of one input sample.

property layer_names: List[str] | None

Return the names of the hidden layers in the model, if applicable.

Returns:

The names of the hidden layers in the model, input and output layers are ignored.

Warning

layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.

property loss: tf.Tensor

Return the loss function.

Returns:

The loss function.

loss_gradient(x: np.ndarray, y: np.ndarray, **kwargs) → np.ndarray

No gradients to compute for this method; do nothing.

property model

Return the model.

Returns:

The model.

predict(x: np.ndarray, batch_size: int = 128, **kwargs)

Perform prediction for a batch of inputs.

Parameters:
  • x – Input samples.

  • batch_size (int) – Batch size.

Returns:

Array of encoding predictions of shape (num_inputs, encoding_length).
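The batched shape contract above can be sketched without a TensorFlow session. Here `encode_batch` is a hypothetical stub for the session run, and `ENCODING_LENGTH` stands in for the `encoding_length` property; only the batching and the `(num_inputs, encoding_length)` output shape reflect the documented behaviour.

```python
import numpy as np

ENCODING_LENGTH = 8  # hypothetical value of the encoding_length property


def encode_batch(x_batch):
    """Stub for the TensorFlow session run; returns fixed-size codes."""
    return np.zeros((len(x_batch), ENCODING_LENGTH))


def predict(x, batch_size=128):
    """Batched prediction: encode each batch, then concatenate the results."""
    results = [encode_batch(x[i:i + batch_size])
               for i in range(0, len(x), batch_size)]
    return np.concatenate(results, axis=0)


print(predict(np.zeros((300, 32, 32, 3))).shape)  # (300, 8)
```

With 300 inputs and the default batch size of 128, three batches (128, 128, 44) are encoded and concatenated back into a single array.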

property sess: tf.python.client.session.Session

Get current TensorFlow session.

Returns:

The current TensorFlow session.

set_params(**kwargs) None

Take a dictionary of parameters and apply checks before setting them as attributes.

Parameters:

kwargs – A dictionary of attributes.