art.defences.preprocessor
Module implementing preprocessing defences against adversarial attacks.
Base Class Preprocessor¶

class art.defences.preprocessor.Preprocessor(is_fitted: bool = False, apply_fit: bool = True, apply_predict: bool = True)¶
Abstract base class for preprocessing defences.
By default, the gradient is estimated using BPDA with the identity function. To modify the gradient estimate, override estimate_gradient.

abstract __call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Perform data preprocessing and return the preprocessed data as a tuple.
 Return type
Tuple
 Parameters
x (ndarray) – Dataset to be preprocessed.
y – Labels to be preprocessed.
 Returns
Preprocessed data.

__init__(is_fitted: bool = False, apply_fit: bool = True, apply_predict: bool = True) → None¶
Create a preprocessing object. Optionally, set attributes.

property apply_fit¶
Property of the defence indicating if it should be applied at training time.
 Returns
True if the defence should be applied when fitting a model, False otherwise.

property apply_predict¶
Property of the defence indicating if it should be applied at test time.
 Returns
True if the defence should be applied at prediction time, False otherwise.

estimate_gradient(x: numpy.ndarray, grad: numpy.ndarray) → numpy.ndarray¶
Provide an estimate of the gradients of the defence for the backward pass. If the defence is not differentiable, this is an estimate of the gradient, most often replacing the computation performed by the defence with the identity function (the default).
 Return type
ndarray
 Parameters
x (ndarray) – Input data for which the gradient is estimated. First dimension is the batch size.
grad (ndarray) – Gradient value so far.
 Returns
The gradient (estimate) of the defence.

fit(x: numpy.ndarray, y: Optional[numpy.ndarray] = None, **kwargs) → None¶
Fit the parameters of the data preprocessor if it has any.
 Parameters
x (ndarray) – Training set to fit the preprocessor.
y – Labels for the training set.
kwargs – Other parameters.

forward(x: Any, y: Optional[Any] = None) → Tuple[Any, Any]¶
Perform data preprocessing and return preprocessed data.
 Return type
Tuple
 Parameters
x – Dataset to be preprocessed.
y – Labels to be preprocessed.
 Returns
Preprocessed data.

property is_fitted¶
Return the state of the preprocessing object.
 Returns
True if the preprocessing model has been fitted (if this applies).

set_params(**kwargs) → None¶
Take in a dictionary of parameters and apply checks before saving them as attributes.
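As a sketch of this interface, a minimal stateless defence might look as follows. This is a hypothetical numpy-only class that mirrors the Preprocessor API (__call__, fit, estimate_gradient) rather than subclassing it, so the example stands alone; a real defence would inherit from art.defences.preprocessor.Preprocessor.

```python
import numpy as np

class ClippingDefence:
    """Hypothetical defence that clips inputs to a fixed range."""

    def __init__(self, clip_min=0.0, clip_max=1.0, apply_fit=False, apply_predict=True):
        self.clip_min = clip_min
        self.clip_max = clip_max
        self.apply_fit = apply_fit
        self.apply_predict = apply_predict

    def __call__(self, x, y=None):
        # Preprocess the data; labels pass through unchanged.
        return np.clip(x, self.clip_min, self.clip_max), y

    def fit(self, x, y=None, **kwargs):
        # Nothing to fit for a stateless defence.
        pass

    def estimate_gradient(self, x, grad):
        # BPDA with the identity function: pass the gradient through unchanged.
        return grad
```

Because clipping is not differentiable at the boundaries, estimate_gradient here follows the default identity-function BPDA estimate described above.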
Feature Squeezing¶

class art.defences.preprocessor.FeatureSqueezing(clip_values: Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]], bit_depth: int = 8, apply_fit: bool = False, apply_predict: bool = True)¶
Reduces the sensitivity of the features of a sample.
Paper link: https://arxiv.org/abs/1704.01155
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1803.09868 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply feature squeezing to sample x.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to squeeze. x values are expected to be in the data range provided by clip_values.
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Squeezed sample.

__init__(clip_values: Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]], bit_depth: int = 8, apply_fit: bool = False, apply_predict: bool = True) → None¶
Create an instance of feature squeezing.
 Parameters
clip_values (Tuple) – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
bit_depth (int) – The number of bits per channel for encoding the data.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
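For intuition, the core bit-depth reduction can be sketched in a few lines of numpy, assuming clip_values = (0, 1); the actual class also rescales arbitrary data ranges before and after quantization.

```python
import numpy as np

def feature_squeeze(x, bit_depth=8):
    """Quantize each feature in [0, 1] to 2**bit_depth evenly spaced levels."""
    max_value = 2 ** bit_depth - 1
    # Round to the nearest representable level, then map back to [0, 1].
    return np.rint(x * max_value) / max_value
```

With bit_depth=1 every feature collapses to 0 or 1, which removes small adversarial perturbations at the cost of fine-grained detail.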

Gaussian Data Augmentation¶

class art.defences.preprocessor.GaussianAugmentation(sigma: float = 1.0, augmentation: bool = True, ratio: float = 1.0, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = True, apply_predict: bool = False)¶
Add Gaussian noise to a dataset in one of two ways: either add noise to each sample (keeping the size of the original dataset), or perform augmentation by keeping all original samples and adding noisy counterparts. When used as part of a Classifier instance, the defence will be applied automatically only at training time if augmentation is True, and only at prediction time otherwise.
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Augment the sample (x, y) with Gaussian noise. The result is either an extended dataset containing the original samples as well as the newly created noisy samples (augmentation=True), or just the noisy counterparts to the original samples.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to augment with shape (batch_size, width, height, depth).
y – Labels for the sample. If this argument is provided, it will be augmented with the corresponding original labels of each sample point.
 Returns
The augmented dataset and (if provided) corresponding labels.

__init__(sigma: float = 1.0, augmentation: bool = True, ratio: float = 1.0, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = True, apply_predict: bool = False)¶
Initialize a Gaussian augmentation object.
 Parameters
sigma (float) – Standard deviation of the Gaussian noise to be added.
augmentation (bool) – If True, perform dataset augmentation using ratio; otherwise, replace samples with noisy counterparts.
ratio (float) – Percentage of data augmentation. E.g. for a ratio of 1, the size of the dataset will double. If augmentation is False, the ratio value is ignored.
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
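The two modes can be sketched as follows. This is a simplified numpy stand-in: clipping via clip_values is omitted, and drawing the augmented points by sampling indices with replacement is an assumption of the sketch.

```python
import numpy as np

def gaussian_augment(x, y=None, sigma=1.0, augmentation=True, ratio=1.0, rng=None):
    """Either append noisy copies (augmentation) or replace samples with noisy ones."""
    if rng is None:
        rng = np.random.default_rng()
    if augmentation:
        size = int(x.shape[0] * ratio)
        idx = rng.integers(0, x.shape[0], size=size)  # which samples to duplicate
        x_new = np.concatenate([x, x[idx] + rng.normal(0.0, sigma, x[idx].shape)])
        y_new = None if y is None else np.concatenate([y, y[idx]])
        return x_new, y_new
    # Replacement mode: same dataset size, every sample perturbed.
    return x + rng.normal(0.0, sigma, x.shape), y
```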

InverseGAN¶

class art.defences.preprocessor.InverseGAN(sess: tf.compat.v1.Session, gan: TensorFlowGenerator, inverse_gan: Optional[TensorFlowEncoder], apply_fit: bool = False, apply_predict: bool = False)¶
Given a latent variable generating a given adversarial sample, either inferred by an inverse GAN or randomly generated, InverseGAN optimizes that latent variable to project a sample as close as possible to the adversarial sample without the adversarial noise.

__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None, **kwargs) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply the InverseGAN defence to the sample input.
 Return type
Tuple
 Parameters
x (ndarray) – Sample input.
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Defended input.

__init__(sess: tf.compat.v1.Session, gan: TensorFlowGenerator, inverse_gan: Optional[TensorFlowEncoder], apply_fit: bool = False, apply_predict: bool = False)¶
Create an instance of an InverseGAN.
 Parameters
sess – TF session for computations.
gan – GAN model.
inverse_gan – Inverse GAN model.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.

compute_loss(z_encoding: numpy.ndarray, image_adv: numpy.ndarray) → numpy.ndarray¶
Given an encoding z, compute the loss between the projected sample and the original sample.
 Return type
ndarray
 Parameters
z_encoding (ndarray) – The encoding z.
image_adv (ndarray) – The adversarial image.
 Returns
The loss value

estimate_gradient(x: numpy.ndarray, grad: numpy.ndarray) → numpy.ndarray¶
Compute the gradient of the loss function w.r.t. a z_encoding input within a GAN against a corresponding adversarial sample.
 Return type
ndarray
 Parameters
x (ndarray) – The encoding z.
grad (ndarray) – Target values of shape (nb_samples, nb_classes).
 Returns
Array of gradients of the same shape as z_encoding.

DefenseGAN¶
JPEG Compression¶

class art.defences.preprocessor.JpegCompression(clip_values: CLIP_VALUES_TYPE, quality: int = 50, channels_first: bool = False, apply_fit: bool = True, apply_predict: bool = True, verbose: bool = False)¶
Implement the JPEG compression defence approach.
For input images or videos with 3 color channels the compression is applied in mode RGB (3x8bit pixels, true color), for all other numbers of channels the compression is applied for each channel with mode L (8bit pixels, black and white).
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1802.00420 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply JPEG compression to sample x.
For input images or videos with 3 color channels the compression is applied in mode RGB (3x8bit pixels, true color), for all other numbers of channels the compression is applied for each channel with mode L (8bit pixels, black and white).
 Return type
Tuple
 Parameters
x (ndarray) – Sample to compress with shape of NCHW, NHWC, NCFHW or NFHWC. x values are expected to be in the data range [0, 1] or [0, 255].
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Compressed sample.

__init__(clip_values: CLIP_VALUES_TYPE, quality: int = 50, channels_first: bool = False, apply_fit: bool = True, apply_predict: bool = True, verbose: bool = False)¶
Create an instance of JPEG compression.
 Parameters
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
quality (int) – The image quality, on a scale from 1 (worst) to 95 (best). Values above 95 should be avoided.
channels_first (bool) – Set channels first or last.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
verbose (bool) – Show progress bars.

Label Smoothing¶

class art.defences.preprocessor.LabelSmoothing(max_value: float = 0.9, apply_fit: bool = True, apply_predict: bool = False)¶
Computes a vector of smooth labels from a vector of hard ones. The hard labels have to contain ones for the correct classes and zeros for all the others. The remaining probability mass between max_value and 1 is distributed uniformly between the incorrect classes for each instance.
Please keep in mind the limitations of defences. For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705 .
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply label smoothing.
 Return type
Tuple
 Parameters
x (ndarray) – Input data, will not be modified by this method.
y – Original vector of label probabilities (one-vs-rest).
 Returns
Unmodified input data and the vector of smooth probabilities as correct labels.
 Raises
ValueError – If no labels are provided.

__init__(max_value: float = 0.9, apply_fit: bool = True, apply_predict: bool = False) → None¶
Create an instance of label smoothing.
 Parameters
max_value (float) – Value to assign to the correct label.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
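The smoothing rule can be sketched directly, assuming one-hot input labels: the correct class receives max_value and the remaining probability mass is spread uniformly over the other classes.

```python
import numpy as np

def smooth_labels(y_one_hot, max_value=0.9):
    """Turn hard one-hot labels into smooth label distributions."""
    nb_classes = y_one_hot.shape[1]
    # Mass left over for each of the (nb_classes - 1) incorrect classes.
    min_value = (1.0 - max_value) / (nb_classes - 1)
    return y_one_hot * max_value + (1 - y_one_hot) * min_value
```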

MP3 Compression¶

class art.defences.preprocessor.Mp3Compression(sample_rate: int, channels_first: bool = False, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False)¶
Implement the MP3 compression defence approach.

__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply MP3 compression to sample x.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to compress with shape (batch_size, length, channel) or an array of sample arrays with shape (length,) or (length, channel). x values are recommended to be of type np.int16.
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Compressed sample.

__init__(sample_rate: int, channels_first: bool = False, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False) → None¶
Create an instance of MP3 compression.
 Parameters
sample_rate (int) – Specifies the sampling rate of sample.
channels_first (bool) – Set channels first or last.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
verbose (bool) – Show progress bars.

PixelDefend¶

class art.defences.preprocessor.PixelDefend(clip_values: CLIP_VALUES_TYPE = (0.0, 1.0), eps: int = 16, pixel_cnn: Optional[CLASSIFIER_NEURALNETWORK_TYPE] = None, batch_size: int = 128, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False)¶
Implement the pixel defence approach, a defence based on PixelCNN that projects samples back onto the data manifold.
Paper link: https://arxiv.org/abs/1710.10766
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1802.00420 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply pixel defence to sample x.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to defend with shape (batch_size, width, height, depth). x values are expected to be in the data range [0, 1].
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Purified sample.

__init__(clip_values: CLIP_VALUES_TYPE = (0.0, 1.0), eps: int = 16, pixel_cnn: Optional[CLASSIFIER_NEURALNETWORK_TYPE] = None, batch_size: int = 128, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False) → None¶
Create an instance of pixel defence.
 Parameters
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
eps (int) – Defence parameter in the range 0-255.
pixel_cnn – Pre-trained PixelCNN model.
verbose (bool) – Show progress bars.

Resample¶

class art.defences.preprocessor.Resample(sr_original: int, sr_new: int, channels_first: bool = False, apply_fit: bool = False, apply_predict: bool = True)¶
Implement the resampling defence approach.
Resampling implicitly includes a step that applies a low-pass filter. The underlying filter in this implementation is a windowed sinc interpolation function.

__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Resample x to a new sampling rate.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to resample of shape (batch_size, length, channel) or (batch_size, channel, length).
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Resampled audio sample.

__init__(sr_original: int, sr_new: int, channels_first: bool = False, apply_fit: bool = False, apply_predict: bool = True)¶
Create an instance of the resample preprocessor.
 Parameters
sr_original (int) – Original sampling rate of sample.
sr_new (int) – New sampling rate of sample.
channels_first (bool) – Set channels first or last.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
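A simplified stand-in for this resampling on a mono signal uses normalized truncated-sinc interpolation; the fixed kernel half-width and the absence of an explicit window function are simplifications relative to a proper windowed-sinc filter.

```python
import numpy as np

def resample(x, sr_original, sr_new, half_width=16):
    """Resample a 1-D signal via truncated sinc interpolation."""
    n_new = int(len(x) * sr_new / sr_original)
    # Positions of the new samples, measured in units of original samples.
    t = np.arange(n_new) * sr_original / sr_new
    y = np.empty(n_new)
    for i, ti in enumerate(t):
        lo = max(0, int(ti) - half_width)
        hi = min(len(x), int(ti) + half_width + 1)
        j = np.arange(lo, hi)
        w = np.sinc(ti - j)                 # truncated sinc kernel centred at ti
        y[i] = (w * x[j]).sum() / w.sum()   # normalize so constants pass through
    return y
```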

Spatial Smoothing¶

class art.defences.preprocessor.SpatialSmoothing(window_size: int = 3, channels_first: bool = False, clip_values: Optional[Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]]] = None, apply_fit: bool = False, apply_predict: bool = True)¶
Implement the local spatial smoothing defence approach.
Paper link: https://arxiv.org/abs/1704.01155
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1803.09868 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply local spatial smoothing to sample x.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to smooth with shape (batch_size, width, height, depth).
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Smoothed sample.

__init__(window_size: int = 3, channels_first: bool = False, clip_values: Optional[Tuple[Union[int, float, numpy.ndarray], Union[int, float, numpy.ndarray]]] = None, apply_fit: bool = False, apply_predict: bool = True) → None¶
Create an instance of local spatial smoothing.
 Parameters
channels_first (bool) – Set channels first or last.
window_size (int) – The size of the sliding window.
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
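For a single-channel image, the median-based smoothing can be sketched as below; reflect padding at the edges is an assumption of this sketch, and a library implementation would typically delegate to an optimized median filter rather than looping over pixels.

```python
import numpy as np

def spatial_smooth(img, window_size=3):
    """Replace each pixel with the median of its window_size x window_size neighbourhood."""
    pad = window_size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + window_size, j:j + window_size])
    return out
```

Isolated perturbed pixels (as produced by sparse adversarial noise) are outliers within their window and are removed by the median.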

Spatial Smoothing - PyTorch¶

class art.defences.preprocessor.SpatialSmoothingPyTorch(window_size: int = 3, channels_first: bool = False, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = False, apply_predict: bool = True, device_type: str = 'gpu')¶
Implement the local spatial smoothing defence approach in PyTorch.
Paper link: https://arxiv.org/abs/1704.01155
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1803.09868 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__init__(window_size: int = 3, channels_first: bool = False, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = False, apply_predict: bool = True, device_type: str = 'gpu') → None¶
Create an instance of local spatial smoothing.
 Parameters
window_size (int) – Size of spatial smoothing window.
channels_first (bool) – Set channels first or last.
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
device_type (str) – Type of device on which the classifier is run, either gpu or cpu.

forward(x: torch.Tensor, y: Optional[torch.Tensor] = None) → Tuple[torch.Tensor, Optional[torch.Tensor]]¶
Apply local spatial smoothing to sample x.

Spatial Smoothing - TensorFlow v2¶

class art.defences.preprocessor.SpatialSmoothingTensorFlowV2(window_size: int = 3, channels_first: bool = False, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = False, apply_predict: bool = True)¶
Implement the local spatial smoothing defence approach in TensorFlow v2.
Paper link: https://arxiv.org/abs/1704.01155
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1803.09868 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__init__(window_size: int = 3, channels_first: bool = False, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = False, apply_predict: bool = True) → None¶
Create an instance of local spatial smoothing.
 Parameters
window_size (int) – Size of spatial smoothing window.
channels_first (bool) – Set channels first or last.
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.

forward(x: tf.Tensor, y: Optional[tf.Tensor] = None) → Tuple[tf.Tensor, Optional[tf.Tensor]]¶
Apply local spatial smoothing to sample x.

Thermometer Encoding¶

class art.defences.preprocessor.ThermometerEncoding(clip_values: CLIP_VALUES_TYPE, num_space: int = 10, channels_first: bool = False, apply_fit: bool = True, apply_predict: bool = True)¶
Implement the thermometer encoding defence approach.
Paper link: https://openreview.net/forum?id=S18Su--CW
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1802.00420 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply thermometer encoding to sample x. The new axis with the encoding is added as the last dimension.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to encode with shape (batch_size, width, height, depth).
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Encoded sample with shape (batch_size, width, height, depth x num_space).

__init__(clip_values: CLIP_VALUES_TYPE, num_space: int = 10, channels_first: bool = False, apply_fit: bool = True, apply_predict: bool = True) → None¶
Create an instance of thermometer encoding.
 Parameters
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
num_space (int) – Number of evenly spaced levels within the interval of minimum and maximum clip values.
channels_first (bool) – Set channels first or last.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
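The encoding of a batch of scalar features in [0, 1] can be sketched as a cumulative one-hot code: a feature's quantized level determines how many leading ones its code contains. This is a simplified stand-in for the per-channel encoding.

```python
import numpy as np

def thermometer(x, num_space=10):
    """Thermometer-encode a 1-D array of features in [0, 1]."""
    thresholds = np.arange(1, num_space) / num_space
    # Quantized level of each feature: how many thresholds it exceeds.
    level = np.sum(x[:, None] > thresholds[None, :], axis=1)
    # Cumulative one-hot: positions 0..level are set to 1.
    return (np.arange(num_space)[None, :] <= level[:, None]).astype(float)
```

Unlike one-hot encoding, the thermometer code preserves the ordering of levels, which is what the differentiable approximation in estimate_gradient exploits.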

estimate_gradient(x: numpy.ndarray, grad: numpy.ndarray) → numpy.ndarray¶
Provide an estimate of the gradients of the defence for the backward pass. For thermometer encoding, the gradient estimate is the one used in https://arxiv.org/abs/1802.00420, where the thermometer encoding is replaced with a differentiable approximation: g(x_{i,j,c})_k = min(max(x_{i,j,c} - k / self.num_space, 0), 1).
 Return type
ndarray
 Parameters
x (ndarray) – Input data for which the gradient is estimated. First dimension is the batch size.
grad (ndarray) – Gradient value so far.
 Returns
The gradient (estimate) of the defence.

Total Variance Minimization¶

class art.defences.preprocessor.TotalVarMin(prob: float = 0.3, norm: int = 2, lamb: float = 0.5, solver: str = 'L-BFGS-B', max_iter: int = 10, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False)¶
Implement the total variance minimization defence approach.
Paper link: https://openreview.net/forum?id=SyJ7ClWCb
Please keep in mind the limitations of defences. For more information on the limitations of this defence, see https://arxiv.org/abs/1802.00420 . For details on how to evaluate classifier security in general, see https://arxiv.org/abs/1902.06705
__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply total variance minimization to sample x.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to compress with shape (batch_size, width, height, depth).
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Similar samples.

__init__(prob: float = 0.3, norm: int = 2, lamb: float = 0.5, solver: str = 'L-BFGS-B', max_iter: int = 10, clip_values: Optional[CLIP_VALUES_TYPE] = None, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False)¶
Create an instance of total variance minimization.
 Parameters
prob (float) – Probability of the Bernoulli distribution.
norm (int) – The norm (positive integer).
lamb (float) – The lambda parameter in the objective function.
solver (str) – Current support: L-BFGS-B, CG, Newton-CG.
max_iter (int) – Maximum number of iterations when performing optimization.
clip_values – Tuple of the form (min, max) representing the minimum and maximum values allowed for features.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
verbose (bool) – Show progress bars.
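The per-image objective being minimized can be sketched as follows: reconstruct an image z that is close to x on a Bernoulli-sampled subset of pixels while penalizing total variation. The sketch shows the general form of the paper's objective for a 2-D image; the exact weighting of the two terms inside ART may differ.

```python
import numpy as np

def tv_objective(z, x, mask, lamb=0.5, norm=2):
    """Data-fidelity term on masked pixels plus a lambda-weighted TV penalty."""
    # Fidelity: distance to the original on the randomly kept pixels only.
    fidelity = np.linalg.norm((z - x)[mask])
    # Total variation: penalize differences between neighbouring pixels.
    tv_rows = np.sum(np.abs(np.diff(z, axis=0)) ** norm)
    tv_cols = np.sum(np.abs(np.diff(z, axis=1)) ** norm)
    return fidelity + lamb * (tv_rows + tv_cols)
```

The defence then hands an objective of this shape to a scipy-style solver (e.g. L-BFGS-B) and returns the minimizer z as the purified sample.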

Video Compression¶

class art.defences.preprocessor.VideoCompression(*, video_format: str, constant_rate_factor: int = 28, channels_first: bool = False, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False)¶
Implement an FFmpeg wrapper for the video compression defence based on H.264/MPEG-4 AVC.
Video compression uses H.264 video encoding. The video quality is controlled with the constant rate factor parameter. More information on the constant rate factor: https://trac.ffmpeg.org/wiki/Encode/H.264.

__call__(x: numpy.ndarray, y: Optional[numpy.ndarray] = None) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Apply video compression to sample x.
 Return type
Tuple
 Parameters
x (ndarray) – Sample to compress of shape NCFHW or NFHWC. x values are expected to be in the data range [0, 255].
y – Labels of the sample x. This function does not affect them in any way.
 Returns
Compressed sample.

__init__(*, video_format: str, constant_rate_factor: int = 28, channels_first: bool = False, apply_fit: bool = False, apply_predict: bool = True, verbose: bool = False)¶
Create an instance of VideoCompression.
 Parameters
video_format (str) – Specify one of supported video file extensions, e.g. avi, mp4 or mkv.
constant_rate_factor (int) – Specify constant rate factor (range 0 to 51, where 0 is lossless).
channels_first (bool) – Set channels first or last.
apply_fit (bool) – True if applied during fitting/training.
apply_predict (bool) – True if applied during predicting.
verbose (bool) – Show progress bars.
