Welcome to the Adversarial Robustness Toolbox

Adversarial Robustness Toolbox (ART) is a Python library supporting developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats (including evasion, extraction and poisoning), and it helps make AI systems more secure and trustworthy. Machine Learning models are vulnerable to adversarial examples: inputs (images, text, tabular data, etc.) deliberately crafted to produce a desired response from the model. ART provides the tools to build and deploy defences and to test them with adversarial attacks.
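To make the idea of an evasion attack concrete, here is a minimal from-scratch sketch (deliberately not ART's API) of the Fast Gradient Sign Method against a toy scikit-learn logistic regression model; the data, model, and epsilon are illustrative assumptions:

```python
# Toy FGSM evasion sketch (illustrative only, not ART's API).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two Gaussian blobs as a toy binary classification task.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

def fgsm(model, x, y_true, eps):
    """x_adv = x + eps * sign(dL/dx) for binary logistic regression,
    where dL/dx = (sigmoid(w.x + b) - y) * w for the log-loss."""
    p = model.predict_proba(x)[:, 1]             # sigmoid(w.x + b)
    grad = (p - y_true)[:, None] * model.coef_   # gradient of loss w.r.t. x
    return x + eps * np.sign(grad)

X_adv = fgsm(clf, X, y, eps=0.5)
clean_acc = clf.score(X, y)
adv_acc = clf.score(X_adv, y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The perturbation is small per feature, yet it moves each input toward the decision boundary, so accuracy on the perturbed inputs drops below accuracy on clean data.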

Defending Machine Learning models involves certifying and verifying model robustness and hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial examples, and applying runtime detection methods to flag inputs that might have been modified by an adversary. ART also includes attacks for testing defences under state-of-the-art threat models.
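For a linear model, certifying robustness can be done exactly: the smallest L2 perturbation that changes the prediction for an input x is |w.x + b| / ||w||. The sketch below (a toy illustration, not ART's certification API) computes this certified radius and checks that perturbations inside it never flip the prediction while perturbations just beyond it do:

```python
# Toy exact robustness certificate for a linear classifier (not ART's API).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
f = X @ w + b                                    # signed distance * ||w||
radius = np.abs(f) / np.linalg.norm(w)           # certified L2 radius per input

# Worst-case direction is straight toward the decision hyperplane.
unit = np.sign(-f)[:, None] * (w / np.linalg.norm(w))
inside = X + 0.9 * radius[:, None] * unit        # stays within the certificate
beyond = X + 1.1 * radius[:, None] * unit        # crosses the hyperplane
same = (clf.predict(inside) == clf.predict(X)).mean()
flipped = (clf.predict(beyond) != clf.predict(X)).mean()
print(f"unchanged inside radius: {same:.2f}, flipped beyond radius: {flipped:.2f}")
```

For non-linear models such as deep networks no closed form exists, which is why certification and verification methods are a research area of their own.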

The source code of ART is available on GitHub.

The library is under continuous development, and feedback, bug reports and contributions are very welcome.

Implemented Attacks, Defences, Detections, Metrics, Certifications and Verifications

Evasion Attacks:

Extraction Attacks:

Poisoning Attacks:

Defences - Preprocessor:

Defences - Postprocessor:

Defences - Trainer:

Defences - Transformer:

Robustness Metrics, Certifications and Verifications:

Detection of Adversarial Examples:

  • Basic detector based on inputs

  • Detector trained on the activations of a specific layer

  • Detector based on Fast Generalized Subset Scan (Speakman et al., 2018)
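The first of these, a basic detector based on inputs, can be sketched as a binary classifier trained to separate clean inputs from adversarially perturbed ones. The setup below is an assumed toy illustration (not ART's detector API): a hypothetical victim model is attacked with an FGSM-style perturbation, and a random forest is fit on the resulting clean/adversarial samples.

```python
# Toy input-based adversarial-example detector (illustrative, not ART's API).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(1, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
victim = LogisticRegression().fit(X, y)

# Craft FGSM-style perturbations against the victim model.
p = victim.predict_proba(X)[:, 1]
X_adv = X + 0.5 * np.sign((p - y)[:, None] * victim.coef_)

# Detector: label 0 = clean input, 1 = adversarial input.
X_det = np.vstack([X, X_adv])
y_det = np.array([0] * len(X) + [1] * len(X_adv))
detector = RandomForestClassifier(random_state=0).fit(X_det, y_det)
det_acc = detector.score(X_det, y_det)
print(f"detector training accuracy: {det_acc:.2f}")
```

At runtime such a detector sits in front of the model and flags suspicious inputs before they are classified; the activation-based and subset-scan detectors listed above apply the same idea to internal model representations rather than raw inputs.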

Detection of Poisoning Attacks:


Indices and tables