art.estimators¶
This module contains the Estimator API.
Base Class Estimator¶
- class art.estimators.BaseEstimator(model, clip_values: Optional[CLIP_VALUES_TYPE], preprocessing_defences: Optional[Union[Preprocessor, List[Preprocessor]]] = None, postprocessing_defences: Optional[Union[Postprocessor, List[Postprocessor]]] = None, preprocessing: Union[PREPROCESSING_TYPE, Preprocessor] = (0.0, 1.0))¶
The abstract base class BaseEstimator defines the basic requirements of an estimator in ART. The BaseEstimator is the highest abstraction of a machine learning model in ART.
- property clip_values: Optional[CLIP_VALUES_TYPE]¶
Return the clip values of the input samples.
- Returns
Clip values (min, max).
- clone_for_refitting() ESTIMATOR_TYPE ¶
Clone estimator for refitting.
- compute_loss(x: ndarray, y: Any, **kwargs) ndarray ¶
Compute the loss of the estimator for samples x.
- Parameters
x (ndarray) – Input samples.
y – Target values.
- Returns
Loss values.
- Return type
Format as expected by the model
- compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the estimator for predictions pred.
- Return type
ndarray
- Parameters
pred (ndarray) – Model predictions.
y (ndarray) – Target values.
- Returns
Loss values.
- abstract fit(x, y, **kwargs) None ¶
Fit the estimator using the training data (x, y).
- Parameters
x (Format as expected by the model) – Training data.
y (Format as expected by the model) – Target values.
- get_params() Dict[str, Any] ¶
Get all parameters and their values of this estimator.
- Returns
A dictionary of string parameter names to their value.
- abstract property input_shape: Tuple[int, ...]¶
Return the shape of one input sample.
- Returns
Shape of one input sample.
- property model¶
Return the model.
- Returns
The model.
- abstract predict(x, **kwargs) Any ¶
Perform prediction of the estimator for input x.
- Parameters
x (Format as expected by the model) – Samples.
- Returns
Predictions by the model.
- Return type
Format as produced by the model
- set_params(**kwargs) None ¶
Take a dictionary of parameters and apply checks before setting them as attributes.
- Parameters
kwargs – A dictionary of attributes.
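For orientation, the following is a minimal sketch of the generic BaseEstimator surface using one of ART's concrete subclasses (SklearnClassifier); the model, data and clip values are illustrative placeholders only, and exact behaviour depends on the installed ART and scikit-learn versions.

    # Minimal sketch of the generic BaseEstimator surface; assumes scikit-learn
    # is installed and uses SklearnClassifier as one concrete subclass.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier

    x = np.random.rand(100, 4).astype(np.float32)  # placeholder data
    y = np.random.randint(0, 3, size=100)

    # clip_values bounds every feature to [0, 1]; default preprocessing applies.
    estimator = SklearnClassifier(model=LogisticRegression(), clip_values=(0.0, 1.0))
    estimator.fit(x, y)

    print(estimator.clip_values)                  # (0.0, 1.0)
    print(estimator.input_shape)                  # shape of one input sample
    print(sorted(estimator.get_params().keys()))  # all estimator parameters

    # set_params validates attributes before assigning them.
    estimator.set_params(clip_values=(0.0, 255.0))
    preds = estimator.predict(x)                  # format produced by the wrapped model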
Mixin Base Class Loss Gradients¶
- class art.estimators.LossGradientsMixin¶
Mixin abstract base class defining additional functionality for estimators providing loss gradients. An estimator of this type can be combined with white-box attacks. This mixin abstract base class has to be mixed in with class BaseEstimator.
- abstract loss_gradient(x, y, **kwargs)¶
Compute the gradient of the loss function w.r.t. x.
- Parameters
x (Format as expected by the model) – Samples.
y (Format as expected by the model) – Target values.
- Returns
Loss gradients w.r.t. x in the same format as x.
- Return type
Format as expected by the model
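As a sketch of how loss_gradient is typically consumed by white-box attacks, the snippet below performs a single FGSM-style step by hand. It assumes PyTorch is installed and uses PyTorchClassifier as a concrete estimator; the tiny network, loss and random data are placeholders.

    # loss_gradient returns d(loss)/dx in the same format as x.
    import numpy as np
    import torch
    from art.estimators.classification import PyTorchClassifier

    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    classifier = PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    x = np.random.rand(8, 1, 28, 28).astype(np.float32)
    y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=8)]

    grad = classifier.loss_gradient(x, y)
    x_adv = np.clip(x + 0.1 * np.sign(grad), 0.0, 1.0)  # one FGSM step, eps = 0.1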
Mixin Base Class Neural Networks¶
- class art.estimators.NeuralNetworkMixin(channels_first: bool, **kwargs)¶
Mixin abstract base class defining additional functionality required for neural network estimators. This base class has to be mixed in with class BaseEstimator.
- property channels_first: bool¶
- Returns
Boolean indicating whether the color channels are the first dimension of the samples x.
- abstract fit(x: ndarray, y, batch_size: int = 128, nb_epochs: int = 20, **kwargs) None ¶
Fit the model of the estimator on the training data x and y.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
y (Format as expected by the model) – Target values.
batch_size (int) – Batch size.
nb_epochs (int) – Number of training epochs.
- fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) None ¶
Fit the estimator using a generator yielding training batches. Implementations can provide framework-specific versions of this function to speed up computation.
- Parameters
generator – Batch generator providing (x, y) for each epoch.
nb_epochs (int) – Number of training epochs.
- abstract get_activations(x: ndarray, layer: Union[int, str], batch_size: int, framework: bool = False) ndarray ¶
Return the output of a specific layer for samples x, where layer is the index of the layer between 0 and nb_layers - 1 or the name of the layer. The number of layers can be determined by counting the results returned by calling layer_names.
- Return type
ndarray
- Parameters
x (ndarray) – Samples.
layer – Index or name of the layer.
batch_size (int) – Batch size.
framework (bool) – If true, return the intermediate tensor representation of the activation.
- Returns
The output of layer, where the first dimension is the batch size corresponding to x.
- property layer_names: Optional[List[str]]¶
Return the names of the hidden layers in the model, if applicable.
- Returns
The names of the hidden layers in the model; input and output layers are ignored.
Warning
layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.
- abstract predict(x: ndarray, batch_size: int = 128, **kwargs)¶
Perform prediction of the neural network for samples x.
- Parameters
x (ndarray) – Input samples.
batch_size (int) – Batch size.
- Returns
Predictions.
- Return type
Format as expected by the model
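The neural-network-specific calls compose as in the sketch below, which again uses PyTorchClassifier as a concrete estimator; the small network and random data are placeholders, and the layer index passed to get_activations is only an example.

    # Self-contained sketch of NeuralNetworkMixin calls via PyTorchClassifier.
    import numpy as np
    import torch
    from art.estimators.classification import PyTorchClassifier

    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    classifier = PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
        input_shape=(1, 28, 28),
        nb_classes=10,
    )

    x_train = np.random.rand(64, 1, 28, 28).astype(np.float32)
    y_train = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=64)]

    classifier.fit(x_train, y_train, batch_size=16, nb_epochs=2)
    preds = classifier.predict(x_train, batch_size=16)

    # layer_names is inferred and comes with no correctness guarantee (see warning).
    names = classifier.layer_names
    if names:
        activations = classifier.get_activations(x_train, layer=0, batch_size=16)
        print(names[0], activations.shape)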
Mixin Base Class Decision Trees¶
- class art.estimators.DecisionTreeMixin¶
Mixin abstract base class defining additional functionality for decision-tree-based estimators. This mixin abstract base class has to be mixed in with class BaseEstimator.
- abstract get_trees() List[Tree] ¶
Get the decision trees.
- Returns
A list of decision trees.
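A sketch of get_trees, assuming scikit-learn is installed; ScikitlearnDecisionTreeClassifier is one wrapper that provides this mixin, and the random data below is purely illustrative.

    # get_trees returns the wrapped model's decision trees as ART Tree objects.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from art.estimators.classification.scikitlearn import ScikitlearnDecisionTreeClassifier

    x = np.random.rand(200, 5).astype(np.float32)
    y = np.random.randint(0, 2, size=200)

    model = DecisionTreeClassifier(max_depth=4).fit(x, y)
    estimator = ScikitlearnDecisionTreeClassifier(model=model)

    trees = estimator.get_trees()  # one entry per tree in the wrapped model
    print(len(trees))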
Base Class KerasEstimator¶
- class art.estimators.KerasEstimator(**kwargs)¶
Estimator class for Keras models.
- __init__(**kwargs) None ¶
Estimator class for Keras models.
- property channels_first: bool¶
- Returns
Boolean indicating whether the color channels are the first dimension of the samples x.
- property clip_values: Optional[CLIP_VALUES_TYPE]¶
Return the clip values of the input samples.
- Returns
Clip values (min, max).
- clone_for_refitting() KERAS_ESTIMATOR_TYPE ¶
Create a copy of the estimator that can be refit from scratch. The clone inherits the same architecture, optimizer and initialization as the original model, but not its weights.
- Returns
new estimator
- compute_loss(x: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the neural network for samples x.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
y (ndarray) – Target values (class labels) one-hot-encoded of shape (nb_samples, nb_classes) or indices of shape (nb_samples,).
- Returns
Loss values.
- Return type
Format as expected by the model
- compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the estimator for predictions pred.
- Return type
ndarray
- Parameters
pred (ndarray) – Model predictions.
y (ndarray) – Target values.
- Returns
Loss values.
- fit(x: ndarray, y, batch_size: int = 128, nb_epochs: int = 20, **kwargs) None ¶
Fit the model of the estimator on the training data x and y.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
y (Format as expected by the model) – Target values.
batch_size (int) – Batch size.
nb_epochs (int) – Number of training epochs.
- fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) None ¶
Fit the estimator using a generator yielding training batches. Implementations can provide framework-specific versions of this function to speed up computation.
- Parameters
generator – Batch generator providing (x, y) for each epoch.
nb_epochs (int) – Number of training epochs.
- abstract get_activations(x: ndarray, layer: Union[int, str], batch_size: int, framework: bool = False) ndarray ¶
Return the output of a specific layer for samples x, where layer is the index of the layer between 0 and nb_layers - 1 or the name of the layer. The number of layers can be determined by counting the results returned by calling layer_names.
- Return type
ndarray
- Parameters
x (ndarray) – Samples.
layer – Index or name of the layer.
batch_size (int) – Batch size.
framework (bool) – If true, return the intermediate tensor representation of the activation.
- Returns
The output of layer, where the first dimension is the batch size corresponding to x.
- get_params() Dict[str, Any] ¶
Get all parameters and their values of this estimator.
- Returns
A dictionary of string parameter names to their value.
- abstract property input_shape: Tuple[int, ...]¶
Return the shape of one input sample.
- Returns
Shape of one input sample.
- property layer_names: Optional[List[str]]¶
Return the names of the hidden layers in the model, if applicable.
- Returns
The names of the hidden layers in the model; input and output layers are ignored.
Warning
layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.
- abstract loss_gradient(x, y, **kwargs)¶
Compute the gradient of the loss function w.r.t. x.
- Parameters
x (Format as expected by the model) – Samples.
y (Format as expected by the model) – Target values.
- Returns
Loss gradients w.r.t. x in the same format as x.
- Return type
Format as expected by the model
- property model¶
Return the model.
- Returns
The model.
- predict(x: ndarray, batch_size: int = 128, **kwargs)¶
Perform prediction of the neural network for samples x.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
batch_size (int) – Batch size.
- Returns
Predictions.
- Return type
Format as expected by the model
- set_params(**kwargs) None ¶
Take a dictionary of parameters and apply checks before setting them as attributes.
- Parameters
kwargs – A dictionary of attributes.
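A sketch with the concrete KerasClassifier subclass, assuming a TensorFlow/Keras installation; depending on the ART and TensorFlow versions in use, eager execution may need to be disabled for this wrapper, and the tiny model and random data are placeholders.

    # KerasClassifier wraps a compiled Keras model; clone_for_refitting returns a
    # copy with the same architecture and optimizer but re-initialized weights.
    import numpy as np
    import tensorflow as tf
    from art.estimators.classification import KerasClassifier

    tf.compat.v1.disable_eager_execution()  # needed by many ART/TF combinations

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")

    classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

    x = np.random.rand(16, 28, 28, 1).astype(np.float32)
    y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=16)]

    classifier.fit(x, y, batch_size=8, nb_epochs=1)
    preds = classifier.predict(x, batch_size=8)
    fresh = classifier.clone_for_refitting()  # same architecture, fresh weights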
Base Class MXEstimator¶
- class art.estimators.MXEstimator(**kwargs)¶
Estimator for MXNet Gluon models.
- __init__(**kwargs) None ¶
Estimator class for MXNet Gluon models.
- property channels_first: bool¶
- Returns
Boolean indicating whether the color channels are the first dimension of the samples x.
- property clip_values: Optional[CLIP_VALUES_TYPE]¶
Return the clip values of the input samples.
- Returns
Clip values (min, max).
- clone_for_refitting() ESTIMATOR_TYPE ¶
Clone estimator for refitting.
- compute_loss(x: ndarray, y: Any, **kwargs) ndarray ¶
Compute the loss of the estimator for samples x.
- Parameters
x (ndarray) – Input samples.
y – Target values.
- Returns
Loss values.
- Return type
Format as expected by the model
- compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the estimator for predictions pred.
- Return type
ndarray
- Parameters
pred (ndarray) – Model predictions.
y (ndarray) – Target values.
- Returns
Loss values.
- fit(x: ndarray, y, batch_size: int = 128, nb_epochs: int = 20, **kwargs) None ¶
Fit the model of the estimator on the training data x and y.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
y (Format as expected by the model) – Target values.
batch_size (int) – Batch size.
nb_epochs (int) – Number of training epochs.
- fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) None ¶
Fit the estimator using a generator yielding training batches. Implementations can provide framework-specific versions of this function to speed up computation.
- Parameters
generator – Batch generator providing (x, y) for each epoch.
nb_epochs (int) – Number of training epochs.
- abstract get_activations(x: ndarray, layer: Union[int, str], batch_size: int, framework: bool = False) ndarray ¶
Return the output of a specific layer for samples x, where layer is the index of the layer between 0 and nb_layers - 1 or the name of the layer. The number of layers can be determined by counting the results returned by calling layer_names.
- Return type
ndarray
- Parameters
x (ndarray) – Samples.
layer – Index or name of the layer.
batch_size (int) – Batch size.
framework (bool) – If true, return the intermediate tensor representation of the activation.
- Returns
The output of layer, where the first dimension is the batch size corresponding to x.
- get_params() Dict[str, Any] ¶
Get all parameters and their values of this estimator.
- Returns
A dictionary of string parameter names to their value.
- abstract property input_shape: Tuple[int, ...]¶
Return the shape of one input sample.
- Returns
Shape of one input sample.
- property layer_names: Optional[List[str]]¶
Return the names of the hidden layers in the model, if applicable.
- Returns
The names of the hidden layers in the model; input and output layers are ignored.
Warning
layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.
- abstract loss_gradient(x, y, **kwargs)¶
Compute the gradient of the loss function w.r.t. x.
- Parameters
x (Format as expected by the model) – Samples.
y (Format as expected by the model) – Target values.
- Returns
Loss gradients w.r.t. x in the same format as x.
- Return type
Format as expected by the model
- property model¶
Return the model.
- Returns
The model.
- predict(x: ndarray, batch_size: int = 128, **kwargs)¶
Perform prediction of the neural network for samples x.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
batch_size (int) – Batch size.
- Returns
Predictions.
- Return type
Format as expected by the model
- set_params(**kwargs) None ¶
Take a dictionary of parameters and apply checks before setting them as attributes.
- Parameters
kwargs – A dictionary of attributes.
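A sketch with the concrete MXClassifier subclass, assuming MXNet and Gluon are installed; the constructor arguments and label handling below are assumptions that may differ across ART versions, and the small network and random data are placeholders.

    # MXClassifier wraps a Gluon Block together with its loss and Trainer.
    import numpy as np
    import mxnet as mx
    from art.estimators.classification import MXClassifier

    net = mx.gluon.nn.Sequential()
    net.add(mx.gluon.nn.Dense(10))  # Dense flattens the non-batch dimensions
    net.initialize()

    classifier = MXClassifier(
        model=net,
        loss=mx.gluon.loss.SoftmaxCrossEntropyLoss(),
        input_shape=(1, 28, 28),
        nb_classes=10,
        optimizer=mx.gluon.Trainer(net.collect_params(), "sgd"),
        clip_values=(0.0, 1.0),
    )

    x = np.random.rand(16, 1, 28, 28).astype(np.float32)
    y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=16)]

    classifier.fit(x, y, batch_size=8, nb_epochs=1)
    preds = classifier.predict(x, batch_size=8)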
Base Class PyTorchEstimator¶
- class art.estimators.PyTorchEstimator(device_type: str = 'gpu', **kwargs)¶
Estimator class for PyTorch models.
- __init__(device_type: str = 'gpu', **kwargs) None ¶
Estimator class for PyTorch models.
- Parameters
channels_first – Set channels first or last.
clip_values – Tuple of the form (min, max) of floats or np.ndarray representing the minimum and maximum values allowed for features. If floats are provided, these will be used as the range of all features. If arrays are provided, each value will be considered the bound for a feature, thus the shape of clip values needs to match the total number of features.
preprocessing_defences – Preprocessing defence(s) to be applied by the classifier.
postprocessing_defences – Postprocessing defence(s) to be applied by the classifier.
preprocessing – Tuple of the form (subtrahend, divisor) of floats or np.ndarray of values to be used for data preprocessing. The first value will be subtracted from the input. The input will then be divided by the second one.
device_type (str) – Type of device on which the classifier is run, either gpu or cpu.
- property channels_first: bool¶
- Returns
Boolean indicating whether the color channels are the first dimension of the samples x.
- property clip_values: Optional[CLIP_VALUES_TYPE]¶
Return the clip values of the input samples.
- Returns
Clip values (min, max).
- clone_for_refitting() ESTIMATOR_TYPE ¶
Clone estimator for refitting.
- compute_loss(x: ndarray, y: Any, **kwargs) ndarray ¶
Compute the loss of the estimator for samples x.
- Parameters
x (ndarray) – Input samples.
y – Target values.
- Returns
Loss values.
- Return type
Format as expected by the model
- compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the estimator for predictions pred.
- Return type
ndarray
- Parameters
pred (ndarray) – Model predictions.
y (ndarray) – Target values.
- Returns
Loss values.
- property device_type: str¶
Return the type of device on which the estimator is run.
- Returns
Type of device on which the estimator is run, either gpu or cpu.
- fit(x: ndarray, y, batch_size: int = 128, nb_epochs: int = 20, **kwargs) None ¶
Fit the model of the estimator on the training data x and y.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
y (Format as expected by the model) – Target values.
batch_size (int) – Batch size.
nb_epochs (int) – Number of training epochs.
- fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) None ¶
Fit the estimator using a generator yielding training batches. Implementations can provide framework-specific versions of this function to speed up computation.
- Parameters
generator – Batch generator providing (x, y) for each epoch.
nb_epochs (int) – Number of training epochs.
- abstract get_activations(x: ndarray, layer: Union[int, str], batch_size: int, framework: bool = False) ndarray ¶
Return the output of a specific layer for samples x, where layer is the index of the layer between 0 and nb_layers - 1 or the name of the layer. The number of layers can be determined by counting the results returned by calling layer_names.
- Return type
ndarray
- Parameters
x (ndarray) – Samples.
layer – Index or name of the layer.
batch_size (int) – Batch size.
framework (bool) – If true, return the intermediate tensor representation of the activation.
- Returns
The output of layer, where the first dimension is the batch size corresponding to x.
- get_params() Dict[str, Any] ¶
Get all parameters and their values of this estimator.
- Returns
A dictionary of string parameter names to their value.
- abstract property input_shape: Tuple[int, ...]¶
Return the shape of one input sample.
- Returns
Shape of one input sample.
- property layer_names: Optional[List[str]]¶
Return the names of the hidden layers in the model, if applicable.
- Returns
The names of the hidden layers in the model; input and output layers are ignored.
Warning
layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.
- abstract loss_gradient(x, y, **kwargs)¶
Compute the gradient of the loss function w.r.t. x.
- Parameters
x (Format as expected by the model) – Samples.
y (Format as expected by the model) – Target values.
- Returns
Loss gradients w.r.t. x in the same format as x.
- Return type
Format as expected by the model
- property model¶
Return the model.
- Returns
The model.
- predict(x: ndarray, batch_size: int = 128, **kwargs)¶
Perform prediction of the neural network for samples x.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
batch_size (int) – Batch size.
- Returns
Predictions.
- Return type
Format as expected by the model
- set_batchnorm(train: bool) None ¶
Set all batch normalization layers into train or eval mode.
- Parameters
train (bool) – False for evaluation mode.
- set_dropout(train: bool) None ¶
Set all dropout layers into train or eval mode.
- Parameters
train (bool) – False for evaluation mode.
- set_multihead_attention(train: bool) None ¶
Set all multi-head attention layers into train or eval mode.
- Parameters
train (bool) – False for evaluation mode.
- set_params(**kwargs) None ¶
Take a dictionary of parameters and apply checks before setting them as attributes.
- Parameters
kwargs – A dictionary of attributes.
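A sketch with the concrete PyTorchClassifier subclass, showing the PyTorch-specific extras: device_type selection and the helpers that switch particular layer types between train and eval mode. The network below is a placeholder.

    # device_type selects GPU or CPU; the set_* helpers toggle layer behaviour.
    import torch
    from art.estimators.classification import PyTorchClassifier

    model = torch.nn.Sequential(
        torch.nn.Conv2d(1, 4, 3), torch.nn.BatchNorm2d(4), torch.nn.ReLU(),
        torch.nn.Flatten(), torch.nn.Dropout(0.5), torch.nn.Linear(4 * 26 * 26, 10),
    )
    classifier = PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
        input_shape=(1, 28, 28),
        nb_classes=10,
        device_type="gpu",  # falls back to CPU if no GPU is available
    )

    print(classifier.device_type)

    # Freeze batch-norm statistics and disable dropout, e.g. while crafting attacks.
    classifier.set_batchnorm(train=False)
    classifier.set_dropout(train=False)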
Base Class ScikitlearnEstimator¶
- class art.estimators.ScikitlearnEstimator(model, clip_values: Optional[CLIP_VALUES_TYPE], preprocessing_defences: Optional[Union[Preprocessor, List[Preprocessor]]] = None, postprocessing_defences: Optional[Union[Postprocessor, List[Postprocessor]]] = None, preprocessing: Union[PREPROCESSING_TYPE, Preprocessor] = (0.0, 1.0))¶
Estimator class for scikit-learn models.
- __init__(model, clip_values: Optional[CLIP_VALUES_TYPE], preprocessing_defences: Optional[Union[Preprocessor, List[Preprocessor]]] = None, postprocessing_defences: Optional[Union[Postprocessor, List[Postprocessor]]] = None, preprocessing: Union[PREPROCESSING_TYPE, Preprocessor] = (0.0, 1.0))¶
Initialize a BaseEstimator object.
- Parameters
model – The model
clip_values – Tuple of the form (min, max) of floats or np.ndarray representing the minimum and maximum values allowed for features. If floats are provided, these will be used as the range of all features. If arrays are provided, each value will be considered the bound for a feature, thus the shape of clip values needs to match the total number of features.
preprocessing_defences – Preprocessing defence(s) to be applied by the estimator.
postprocessing_defences – Postprocessing defence(s) to be applied by the estimator.
preprocessing – Tuple of the form (subtrahend, divisor) of floats or np.ndarray of values to be used for data preprocessing. The first value will be subtracted from the input and the results will be divided by the second value.
- property clip_values: Optional[CLIP_VALUES_TYPE]¶
Return the clip values of the input samples.
- Returns
Clip values (min, max).
- clone_for_refitting() ESTIMATOR_TYPE ¶
Clone estimator for refitting.
- compute_loss(x: ndarray, y: Any, **kwargs) ndarray ¶
Compute the loss of the estimator for samples x.
- Parameters
x (ndarray) – Input samples.
y – Target values.
- Returns
Loss values.
- Return type
Format as expected by the model
- compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the estimator for predictions pred.
- Return type
ndarray
- Parameters
pred (ndarray) – Model predictions.
y (ndarray) – Target values.
- Returns
Loss values.
- abstract fit(x, y, **kwargs) None ¶
Fit the estimator using the training data (x, y).
- Parameters
x (Format as expected by the model) – Training data.
y (Format as expected by the model) – Target values.
- get_params() Dict[str, Any] ¶
Get all parameters and their values of this estimator.
- Returns
A dictionary of string parameter names to their value.
- abstract property input_shape: Tuple[int, ...]¶
Return the shape of one input sample.
- Returns
Shape of one input sample.
- property model¶
Return the model.
- Returns
The model.
- abstract predict(x, **kwargs) Any ¶
Perform prediction of the estimator for input x.
- Parameters
x (Format as expected by the model) – Samples.
- Returns
Predictions by the model.
- Return type
Format as produced by the model
- set_params(**kwargs) None ¶
Take a dictionary of parameters and apply checks before setting them as attributes.
- Parameters
kwargs – A dictionary of attributes.
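A sketch with SklearnClassifier, the wrapper for scikit-learn models; it assumes scikit-learn is installed and uses the iris dataset purely for illustration.

    # Wrap a scikit-learn model and use the common estimator API.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from art.estimators.classification import SklearnClassifier

    x, y = load_iris(return_X_y=True)
    x = x.astype(np.float32)

    classifier = SklearnClassifier(model=RandomForestClassifier(n_estimators=50),
                                   clip_values=(0.0, 8.0))
    classifier.fit(x, y)

    preds = classifier.predict(x)          # class probabilities, shape (150, 3)
    print(preds.shape, classifier.input_shape)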
Base Class TensorFlowEstimator¶
- class art.estimators.TensorFlowEstimator(**kwargs)¶
Estimator class for TensorFlow models.
- __init__(**kwargs) None ¶
Estimator class for TensorFlow models.
- property channels_first: bool¶
- Returns
Boolean indicating whether the color channels are the first dimension of the samples x.
- property clip_values: Optional[CLIP_VALUES_TYPE]¶
Return the clip values of the input samples.
- Returns
Clip values (min, max).
- clone_for_refitting() ESTIMATOR_TYPE ¶
Clone estimator for refitting.
- compute_loss(x: ndarray, y: Any, **kwargs) ndarray ¶
Compute the loss of the estimator for samples x.
- Parameters
x (ndarray) – Input samples.
y – Target values.
- Returns
Loss values.
- Return type
Format as expected by the model
- compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the estimator for predictions pred.
- Return type
ndarray
- Parameters
pred (ndarray) – Model predictions.
y (ndarray) – Target values.
- Returns
Loss values.
- fit(x: ndarray, y, batch_size: int = 128, nb_epochs: int = 20, **kwargs) None ¶
Fit the model of the estimator on the training data x and y.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
y (Format as expected by the model) – Target values.
batch_size (int) – Batch size.
nb_epochs (int) – Number of training epochs.
- fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) None ¶
Fit the estimator using a generator yielding training batches. Implementations can provide framework-specific versions of this function to speed up computation.
- Parameters
generator – Batch generator providing (x, y) for each epoch.
nb_epochs (int) – Number of training epochs.
- abstract get_activations(x: ndarray, layer: Union[int, str], batch_size: int, framework: bool = False) ndarray ¶
Return the output of a specific layer for samples x, where layer is the index of the layer between 0 and nb_layers - 1 or the name of the layer. The number of layers can be determined by counting the results returned by calling layer_names.
- Return type
ndarray
- Parameters
x (ndarray) – Samples.
layer – Index or name of the layer.
batch_size (int) – Batch size.
framework (bool) – If true, return the intermediate tensor representation of the activation.
- Returns
The output of layer, where the first dimension is the batch size corresponding to x.
- get_params() Dict[str, Any] ¶
Get all parameters and their values of this estimator.
- Returns
A dictionary of string parameter names to their value.
- abstract property input_shape: Tuple[int, ...]¶
Return the shape of one input sample.
- Returns
Shape of one input sample.
- property layer_names: Optional[List[str]]¶
Return the names of the hidden layers in the model, if applicable.
- Returns
The names of the hidden layers in the model; input and output layers are ignored.
Warning
layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.
- abstract loss_gradient(x, y, **kwargs)¶
Compute the gradient of the loss function w.r.t. x.
- Parameters
x (Format as expected by the model) – Samples.
y (Format as expected by the model) – Target values.
- Returns
Loss gradients w.r.t. x in the same format as x.
- Return type
Format as expected by the model
- property model¶
Return the model.
- Returns
The model.
- predict(x: ndarray, batch_size: int = 128, **kwargs)¶
Perform prediction of the neural network for samples x.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
batch_size (int) – Batch size.
- Returns
Predictions.
- Return type
Format as expected by the model
- property sess: tf.python.client.session.Session¶
Get current TensorFlow session.
- Returns
The current TensorFlow session.
- set_params(**kwargs) None ¶
Take a dictionary of parameters and apply checks before setting them as attributes.
- Parameters
kwargs – A dictionary of attributes.
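TensorFlowEstimator targets graph-mode (TensorFlow v1 API) models driven by a session, which is exposed through the sess property. The following is a rough sketch using TensorFlowClassifier and the tf.compat.v1 API; the placeholder graph is illustrative and the constructor arguments are assumptions based on that wrapper.

    # Graph-mode sketch: placeholders, a loss, a train op and a live session.
    import numpy as np
    import tensorflow.compat.v1 as tf
    from art.estimators.classification import TensorFlowClassifier

    tf.disable_eager_execution()

    input_ph = tf.placeholder(tf.float32, shape=[None, 4])
    labels_ph = tf.placeholder(tf.float32, shape=[None, 3])
    logits = tf.layers.dense(input_ph, 3)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels_ph, logits=logits))
    train_op = tf.train.AdamOptimizer(1e-2).minimize(loss)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    classifier = TensorFlowClassifier(
        input_ph=input_ph, output=logits, labels_ph=labels_ph,
        train=train_op, loss=loss, sess=sess, clip_values=(0.0, 8.0),
    )

    x = (8.0 * np.random.rand(16, 4)).astype(np.float32)
    y = np.eye(3, dtype=np.float32)[np.random.randint(0, 3, size=16)]
    classifier.fit(x, y, batch_size=8, nb_epochs=1)
    print(classifier.sess is sess)  # the wrapped session is exposed via sess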
Base Class TensorFlowV2Estimator¶
- class art.estimators.TensorFlowV2Estimator(**kwargs)¶
Estimator class for TensorFlow v2 models.
- __init__(**kwargs)¶
Estimator class for TensorFlow v2 models.
- property channels_first: bool¶
- Returns
Boolean indicating whether the color channels are the first dimension of the samples x.
- property clip_values: Optional[CLIP_VALUES_TYPE]¶
Return the clip values of the input samples.
- Returns
Clip values (min, max).
- clone_for_refitting() ESTIMATOR_TYPE ¶
Clone estimator for refitting.
- compute_loss(x: ndarray, y: Any, **kwargs) ndarray ¶
Compute the loss of the estimator for samples x.
- Parameters
x (ndarray) – Input samples.
y – Target values.
- Returns
Loss values.
- Return type
Format as expected by the model
- compute_loss_from_predictions(pred: ndarray, y: ndarray, **kwargs) ndarray ¶
Compute the loss of the estimator for predictions pred.
- Return type
ndarray
- Parameters
pred (ndarray) – Model predictions.
y (ndarray) – Target values.
- Returns
Loss values.
- fit(x: ndarray, y, batch_size: int = 128, nb_epochs: int = 20, **kwargs) None ¶
Fit the model of the estimator on the training data x and y.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
y (Format as expected by the model) – Target values.
batch_size (int) – Batch size.
nb_epochs (int) – Number of training epochs.
- fit_generator(generator: DataGenerator, nb_epochs: int = 20, **kwargs) None ¶
Fit the estimator using a generator yielding training batches. Implementations can provide framework-specific versions of this function to speed up computation.
- Parameters
generator – Batch generator providing (x, y) for each epoch.
nb_epochs (int) – Number of training epochs.
- abstract get_activations(x: ndarray, layer: Union[int, str], batch_size: int, framework: bool = False) ndarray ¶
Return the output of a specific layer for samples x, where layer is the index of the layer between 0 and nb_layers - 1 or the name of the layer. The number of layers can be determined by counting the results returned by calling layer_names.
- Return type
ndarray
- Parameters
x (ndarray) – Samples.
layer – Index or name of the layer.
batch_size (int) – Batch size.
framework (bool) – If true, return the intermediate tensor representation of the activation.
- Returns
The output of layer, where the first dimension is the batch size corresponding to x.
- get_params() Dict[str, Any] ¶
Get all parameters and their values of this estimator.
- Returns
A dictionary of string parameter names to their value.
- abstract property input_shape: Tuple[int, ...]¶
Return the shape of one input sample.
- Returns
Shape of one input sample.
- property layer_names: Optional[List[str]]¶
Return the names of the hidden layers in the model, if applicable.
- Returns
The names of the hidden layers in the model; input and output layers are ignored.
Warning
layer_names tries to infer the internal structure of the model. This feature comes with no guarantees on the correctness of the result. The intended order of the layers tries to match their order in the model, but this is not guaranteed either.
- abstract loss_gradient(x, y, **kwargs)¶
Compute the gradient of the loss function w.r.t. x.
- Parameters
x (Format as expected by the model) – Samples.
y (Format as expected by the model) – Target values.
- Returns
Loss gradients w.r.t. x in the same format as x.
- Return type
Format as expected by the model
- property model¶
Return the model.
- Returns
The model.
- predict(x: ndarray, batch_size: int = 128, **kwargs)¶
Perform prediction of the neural network for samples x.
- Parameters
x (ndarray) – Samples of shape (nb_samples, nb_features) or (nb_samples, nb_pixels_1, nb_pixels_2, nb_channels) or (nb_samples, nb_channels, nb_pixels_1, nb_pixels_2).
batch_size (int) – Batch size.
- Returns
Predictions.
- Return type
Format as expected by the model
- set_params(**kwargs) None ¶
Take a dictionary of parameters and apply checks before setting them as attributes.
- Parameters
kwargs – A dictionary of attributes.
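A sketch with the concrete TensorFlowV2Classifier subclass, assuming TensorFlow 2.x; the tiny model and random data are placeholders. Training additionally requires either an optimizer or a custom train_step callable (depending on the ART version), so only the prediction-, loss- and gradient-related calls are shown.

    # Eager-mode TF2 wrapper: predictions, loss values and input gradients.
    import numpy as np
    import tensorflow as tf
    from art.estimators.classification import TensorFlowV2Classifier

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(10),
    ])
    loss_object = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

    classifier = TensorFlowV2Classifier(
        model=model,
        nb_classes=10,
        input_shape=(28, 28, 1),
        loss_object=loss_object,
        clip_values=(0.0, 1.0),
    )

    x = np.random.rand(16, 28, 28, 1).astype(np.float32)
    y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=16)]

    preds = classifier.predict(x, batch_size=8)
    loss = classifier.compute_loss(x, y)       # loss value(s) under loss_object
    grads = classifier.loss_gradient(x, y)     # same shape and format as x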