Getting Started with Anomaly Detection Using Deep Learning
Anomaly detection using deep learning is an increasingly popular approach to automating visual inspection tasks. The goal of anomaly detection is to perform a binary classification of images as normal or anomalous. For example, in the manufacturing industry, anomaly detection can determine which manufactured units meet quality standards and which units are defective.
You can train an anomaly detector using semi-supervised training. A complete semi-supervised workflow consists of training a model on normal image data and determining an anomaly threshold that separates normal images from anomalous images. Anomaly detection techniques generally follow these steps:
Prepare training data and calibration data.
Train a model to produce anomaly scores.
Calibrate the model by determining an anomaly score threshold, and evaluate the performance of the model by computing classification metrics and visualizing the classification results.
Classify test images using the trained model and the optimal anomaly threshold.
Deploy the model onto a production line.
Note
An alternative to anomaly detection is anomaly classification. Classification can differentiate between normal images and multiple known anomalous patterns. Classification can provide more insight on the type of anomaly, but it requires a greater amount of training data for the anomaly classes. For an example, see Classify Defects on Wafer Maps Using Deep Learning.
The Automated Visual Inspection Library for Computer Vision Toolbox™ offers functions that enable you to train, calibrate, and evaluate anomaly detection networks. You can install the Automated Visual Inspection Library for Computer Vision Toolbox from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons. The functionality also requires Deep Learning Toolbox™.
Prepare Training and Calibration Data
You can train an anomaly detector using one-class learning, which means that the network trains on data consisting only of normal images. Despite training only on samples of normal images, the model learns how to distinguish between normal and anomalous images. One-class learning offers many advantages for anomaly detection problems:
Representations of anomalies can be scarce.
Anomalies can represent expensive or catastrophic outcomes.
There can be many types of anomalies, and the types of anomalies can change over the lifetime of the model. Describing what a normal image looks like is often more feasible than providing data that represents all possible anomalies in real world settings.
You can use the Image Labeler app to interactively label images with scene labels. The app returns the ground truth labeled data as a groundTruth object. Convert the ground truth scene labels to an image datastore for training by using the sceneLabelTrainingData function. For more information, see Get Started with the Image Labeler.
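For example, this sketch converts scene-labeled ground truth exported from the Image Labeler into an image datastore. The gTruth variable and the label names are illustrative assumptions, not part of this page.

```matlab
% Minimal sketch, assuming gTruth is a groundTruth object exported from the
% Image Labeler app with scene labels such as "normal" and "anomaly".
imds = sceneLabelTrainingData(gTruth);   % image datastore with scene labels
countEachLabel(imds)                     % inspect the label distribution
```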
In semi-supervised learning, you can tune the performance of the trained model using calibration data. The calibration data set consists of labeled samples of normal and anomalous images. Avoid overfitting the model by using different images in the training and calibration data sets. You can use functions such as partition to split a datastore into separate datastores for training and calibration.
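The following sketch shows one way to form the training and calibration sets, assuming the images are organized into "normal" and "anomaly" subfolders. The folder names and the 70/30 split are illustrative; you can also divide a datastore with functions such as partition or splitEachLabel.

```matlab
% Minimal sketch, assuming images organized in "normal" and "anomaly"
% subfolders; folder names and the 70/30 split are illustrative.
imds = imageDatastore("inspectionImages", ...
    IncludeSubfolders=true,LabelSource="foldernames");

% One-class training set: normal images only.
imdsNormal = shuffle(subset(imds,find(imds.Labels == "normal")));
numTrain = round(0.7*numel(imdsNormal.Files));
imdsTrain = subset(imdsNormal,1:numTrain);

% Labeled calibration set: held-out normal images plus the anomalous images.
imdsAnomaly = subset(imds,find(imds.Labels == "anomaly"));
calFiles = [imdsNormal.Files(numTrain+1:end); imdsAnomaly.Files];
imdsCal = imageDatastore(calFiles,LabelSource="foldernames");
```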
Some techniques, such as Outlier Exposure, add a small number of anomalous images to the training data set.
Choose the Anomaly Detection Model
The following tables compare the training and inference characteristics of the anomaly detection models available through the Automated Visual Inspection Library for Computer Vision Toolbox. The available anomaly detectors include fully convolutional data description (FCDD) [1], FastFlow [2], PatchCore [3], and EfficientAD [4]. Choose an anomaly detector based on the requirements of your workflow.
Consider the training requirements.
Available Anomaly Detection Models: Training
Training Aspect | EfficientAD | PatchCore | FastFlow | FCDD |
---|---|---|---|---|
Input image size | Medium to large images (must be larger than 256-by-256 pixels) | Small to medium preferred (due to memory limitations with large images) | Small to medium preferred (due to memory limitations with large images) | Small to large (supports high-resolution images) |
Model size | Moderate to large | Moderate to large (can vary with value of the compression ratio) | Moderate to large | Small (lightest weight model) |
Base feature extraction network | Patch description network (PDN) | Flexible (supports pretrained ResNet-18 or ResNet-50, or custom pretrained network) | Flexible (supports custom pretrained network) | Flexible (supports custom pretrained network) |
Performance speed | Generally faster than PatchCore and FastFlow | Fast | Fast | Fastest |
Low-shot training regime | Supported but limited | Supported | Not supported | Not supported |
Consider the inference and testing requirements.
Available Anomaly Detection Models: Inference
Inference Aspect | EfficientAD | PatchCore | FastFlow | FCDD |
---|---|---|---|---|
Different image size at training and inference | Not supported – image size at inference must match image size at training | Not supported – image size at inference must match image size at training (due to being a feature similarity-based technique) | Supported | Supported |
Performance speed | Fast | Medium to fast | Fast | Fast |
Train the Model
The Automated Visual Inspection Library for Computer Vision Toolbox provides a framework for training anomaly detection networks. Start by creating an untrained network object. Then, train the network by passing the network and the training data to the appropriate training function. The table shows the supported networks and training functions, and a short example follows the table.
Network Description | Network Object | Training Function |
---|---|---|
Fully convolutional data description (FCDD) [1] | fcddAnomalyDetector | trainFCDDAnomalyDetector |
FastFlow [2] | fastFlowAnomalyDetector | trainFastFlowAnomalyDetector |
PatchCore [3] | patchCoreAnomalyDetector | trainPatchCoreAnomalyDetector |
EfficientAD [4] | efficientADAnomalyDetector | trainEfficientADAnomalyDetector |
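For example, this sketch creates and trains an FCDD detector. The backbone choice and training options are illustrative assumptions; see the fcddAnomalyDetector and trainFCDDAnomalyDetector reference pages for the supported syntaxes.

```matlab
% Minimal sketch, assuming imdsTrain contains only normal images.
backbone = pretrainedEncoderNetwork("inceptionv3",3);   % pretrained feature extractor
untrainedDetector = fcddAnomalyDetector(backbone);      % untrained FCDD network object

% Illustrative training options.
options = trainingOptions("adam", ...
    InitialLearnRate=1e-4, ...
    MaxEpochs=30, ...
    MiniBatchSize=32);

detector = trainFCDDAnomalyDetector(imdsTrain,untrainedDetector,options);
```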
If you want to train an anomaly detection network that uses a different framework, design a model that accepts images as input and returns scalar-valued anomaly scores. The score represents the likelihood that an image is anomalous, so larger scores indicate a higher probability that the image is an anomalous sample. Optionally, you can design a network that returns localized anomaly scores in addition to a scalar aggregate score for the image. Localized score information enables you to subjectively understand the network classification decisions.
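As a hypothetical sketch of such an interface (not part of the library), this function computes a localized score map from a dlnetwork and aggregates it into a scalar image-level score.

```matlab
% Hypothetical helper, assuming net is a dlnetwork whose output is a
% single-channel map of localized anomaly scores.
function [score,scoreMap] = customAnomalyScore(net,I)
    X = dlarray(single(I),"SSCB");             % format the image as a dlarray
    scoreMap = extractdata(predict(net,X));    % localized anomaly scores
    score = mean(scoreMap,"all");              % scalar aggregate anomaly score
end
```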
Calibrate and Evaluate the Model
Calibrate the model by determining an anomaly score threshold that separates the scores of good samples from the scores of anomaly samples. Unfortunately, most data sets cannot be cleanly separated into these two categories, and will have misclassifications in the form of false positives (normal images classified as anomalous) and false negatives (anomalous images classified as normal). By adjusting the threshold, you can tune the false positive and false negative rates to satisfy your operating requirements.
To select a threshold, you can use the anomalyThreshold function. The function supports different optimization methods based on statistical techniques such as receiver operating characteristic (ROC) and precision-recall (PR) curves. The function optionally returns the performance metrics as a rocmetrics (Deep Learning Toolbox) object, which you can use to plot the ROC and PR curves.
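For example, this sketch selects a threshold from calibration scores; the datastore and label names are illustrative assumptions.

```matlab
% Minimal sketch, assuming imdsCal is a labeled calibration datastore and
% detector is a trained anomaly detector.
scores = predict(detector,imdsCal);           % unnormalized anomaly scores
gtLabels = imdsCal.Labels ~= "normal";        % logical: true for anomalous images

% Select the threshold and return the metrics as a rocmetrics object.
[thresh,rocObj] = anomalyThreshold(gtLabels,scores,true);
detector.Threshold = thresh;                  % store the threshold in the detector

plot(rocObj)                                  % plot the ROC curve
```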
Use the evaluateAnomalyDetection function to evaluate the quality of the anomaly detection results using metrics such as the confusion matrix and average precision. The anomalyDetectionMetrics object stores the metrics.
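This sketch evaluates the calibrated detector on a labeled test set; the datastore name, label names, and logical label encoding are illustrative assumptions.

```matlab
% Minimal sketch, assuming imdsTest is a labeled test datastore.
predLabels = classify(detector,imdsTest);     % predicted labels (true = anomalous)
gtLabels = imdsTest.Labels ~= "normal";       % ground truth (true = anomalous)

metrics = evaluateAnomalyDetection(predLabels,gtLabels,true);
metrics.ConfusionMatrix                       % inspect the confusion matrix
```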
Normalize and Visualize Anomaly Maps
You can visualize the localized anomaly scores to understand the classification results using these functions:
- Display an anomaly score map as a heatmap overlaid on a calibration or test image using the anomalyMapOverlay function.
- Normalize anomaly score maps using the percentileNormalizer object and the corresponding normalize function.
- Display an interactive figure that shows the classification results and score maps for an entire data set using the viewAnomalyDetectionResults function.
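For example, this sketch overlays the anomaly map of a single image; the file name is an illustrative assumption.

```matlab
% Minimal sketch, assuming detector is a trained anomaly detector.
img = imread("sampleTestImage.png");          % illustrative file name
map = anomalyMap(detector,img);               % localized anomaly scores
figure
imshow(anomalyMapOverlay(img,map))            % heatmap overlaid on the image
```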
Visualizing localized anomaly scores is useful for identifying patterns in false negatives and false positives. The patterns can indicate strategies to further improve the classifier performance.
Perform Classification Using the Model
Classify test images as normal or anomalous by using the classify function. You can get the unnormalized anomaly scores for the test images by using the predict function.
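For example, this sketch classifies a single test image using the calibrated detector; the file name is an illustrative assumption.

```matlab
% Minimal sketch, assuming detector.Threshold was set during calibration.
img = imread("sampleTestImage.png");
isAnomaly = classify(detector,img);           % true if the image is anomalous
score = predict(detector,img);                % unnormalized anomaly score
```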
Deploy the Model
The end goal of visual inspection is often to deploy trained models for use in making business decisions in real-time production systems on assembly lines. The model determines acceptance or rejection given an image. The MATLAB® Coder™ and GPU Coder™ products are effective tools for deploying visual inspection systems to CPU and NVIDIA® GPU targets.
References
[1] Liznerski, Philipp, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Marius Kloft, and Klaus-Robert Müller. "Explainable Deep One-Class Classification." arXiv, March 18, 2021. https://arxiv.org/abs/2007.01760.
[2] Yu, Jiawei, Ye Zheng, Xiang Wang, Wei Li, Yushuang Wu, Rui Zhao, and Liwei Wu. "FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows." arXiv, November 16, 2021. https://doi.org/10.48550/arXiv.2111.07677.
[3] Roth, Karsten, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, and Peter Gehler. "Towards Total Recall in Industrial Anomaly Detection." arXiv, May 5, 2022. https://arxiv.org/abs/2106.08265.
[4] Batzner, Kilian, Lars Heckler, and Rebecca König. "EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies." arXiv, February 8, 2024. https://doi.org/10.48550/arXiv.2303.14535.
Related Examples
- Localize Industrial Defects Using PatchCore Anomaly Detector
- Detect Image Anomalies Using Pretrained ResNet-18 Feature Embeddings
- Detect Image Anomalies Using Explainable FCDD Network
More About
- ROC Curve and Performance Metrics (Deep Learning Toolbox)