batchNormalizationLayer
Batch normalization layer
Description
A batch normalization layer normalizes a mini-batch of data across all observations for each channel independently. To speed up training of the convolutional neural network and reduce the sensitivity to network initialization, use batch normalization layers between convolutional layers and nonlinearities, such as ReLU layers.
After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.
Creation
Description
layer = batchNormalizationLayer creates a batch normalization layer.
layer = batchNormalizationLayer(Name,Value) creates a batch normalization layer and sets the optional TrainedMean, TrainedVariance, Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pairs. For example, batchNormalizationLayer('Name','batchnorm') creates a batch normalization layer with the name 'batchnorm'.
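A sketch of setting several of the optional properties at creation time (the property names come from this page; the values are arbitrary):

layer = batchNormalizationLayer(Name="batchnorm", ...
    Epsilon=1e-4, ...
    ScaleLearnRateFactor=2)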
Properties
Batch Normalization
Mean statistic used for prediction, specified as a numeric vector of per-channel mean values.
Depending on the type of layer input, the trainnet and
dlnetwork functions automatically reshape this property to have one of
the following sizes:
| Layer Input | Property Size |
|---|---|
| feature input | NumChannels-by-1 |
| vector sequence input | NumChannels-by-1 |
| 1-D image input | 1-by-NumChannels |
| 1-D image sequence input | 1-by-NumChannels |
| 2-D image input | 1-by-1-by-NumChannels |
| 2-D image sequence input | 1-by-1-by-NumChannels |
| 3-D image input | 1-by-1-by-1-by-NumChannels |
| 3-D image sequence input | 1-by-1-by-1-by-NumChannels |
If the BatchNormalizationStatistics training option is 'moving',
then the software approximates the batch normalization statistics during training using a
running estimate and, after training, sets the TrainedMean and
TrainedVariance properties to the latest values of the moving
estimates of the mean and variance, respectively.
If the BatchNormalizationStatistics training option is
'population', then after network training finishes, the software
passes through the data once more and sets the TrainedMean and
TrainedVariance properties to the mean and variance computed from
the entire training data set, respectively.
The layer uses TrainedMean and TrainedVariance to
normalize the input during prediction.
Data Types: single | double
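For example, a minimal sketch (with random placeholder data and an illustrative architecture) of inspecting these statistics after training a small network with trainnet and moving statistics:

% Placeholder training data: 200 random 28-by-28 RGB images with 5 classes
XTrain = rand(28,28,3,200,"single");
TTrain = categorical(randi(5,200,1),1:5);

layers = [
    imageInputLayer([28 28 3])
    convolution2dLayer(3,8,Padding="same")
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(5)
    softmaxLayer];

options = trainingOptions("adam", ...
    MaxEpochs=2, ...
    BatchNormalizationStatistics="moving", ...
    Verbose=false);

net = trainnet(XTrain,TTrain,layers,"crossentropy",options);

bn = net.Layers(3);
bn.TrainedMean       % 1-by-1-by-8 per-channel mean used for prediction
bn.TrainedVariance   % 1-by-1-by-8 per-channel variance used for prediction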
Variance statistic used for prediction, specified as a numeric vector of per-channel variance values.
Depending on the type of layer input, the trainnet and
dlnetwork functions automatically reshape this property to have one of
the following sizes:
| Layer Input | Property Size |
|---|---|
| feature input | NumChannels-by-1 |
| vector sequence input | NumChannels-by-1 |
| 1-D image input | 1-by-NumChannels |
| 1-D image sequence input | 1-by-NumChannels |
| 2-D image input | 1-by-1-by-NumChannels |
| 2-D image sequence input | 1-by-1-by-NumChannels |
| 3-D image input | 1-by-1-by-1-by-NumChannels |
| 3-D image sequence input | 1-by-1-by-1-by-NumChannels |
If the BatchNormalizationStatistics training option is 'moving',
then the software approximates the batch normalization statistics during training using a
running estimate and, after training, sets the TrainedMean and
TrainedVariance properties to the latest values of the moving
estimates of the mean and variance, respectively.
If the BatchNormalizationStatistics training option is
'population', then after network training finishes, the software
passes through the data once more and sets the TrainedMean and
TrainedVariance properties to the mean and variance computed from
the entire training data set, respectively.
The layer uses TrainedMean and TrainedVariance to
normalize the input during prediction.
Data Types: single | double
Constant to add to the mini-batch variances, specified as a positive scalar.
The software adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.
Before R2023a: Epsilon must be greater than
or equal to 1e-5.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
This property is read-only.
Number of input channels, specified as one of the following:
"auto"— Automatically determine the number of input channels at training time.Positive integer — Configure the layer for the specified number of input channels.
NumChannelsand the number of channels in the layer input data must match. For example, if the input is an RGB image, thenNumChannelsmust be 3. If the input is the output of a convolutional layer with 16 filters, thenNumChannelsmust be 16.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
Parameters and Initialization
Function to initialize the channel scale factors, specified as one of the following:
'ones' – Initialize the channel scale factors with ones.
'zeros' – Initialize the channel scale factors with zeros.
'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.
Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the channel scale factors when the Scale property is empty.
Data Types: char | string | function_handle
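For example, a sketch of a custom initializer supplied as a function handle of the form scale = func(sz) (the handle name and values here are illustrative):

halfOnes = @(sz) 0.5*ones(sz,"single");
layer = batchNormalizationLayer(ScaleInitializer=halfOnes, ...
    OffsetInitializer="zeros");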
Function to initialize the channel offsets, specified as one of the following:
'zeros' – Initialize the channel offsets with zeros.
'ones' – Initialize the channel offsets with ones.
'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.
Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the channel offsets when the Offset property is empty.
Data Types: char | string | function_handle
Channel scale factors γ, specified as a numeric array.
The channel scale factors are learnable parameters. When you train a network using the
trainnet
function or initialize a dlnetwork object, if Scale is nonempty, then the software uses the Scale property as the initial value. If Scale is empty, then the software uses the initializer specified by
ScaleInitializer.
Depending on the type of layer input, the trainnet and
dlnetwork functions automatically reshape this property to have one of
the following sizes:
| Layer Input | Property Size |
|---|---|
| feature input | NumChannels-by-1 |
| vector sequence input | NumChannels-by-1 |
| 1-D image input | 1-by-NumChannels |
| 1-D image sequence input | 1-by-NumChannels |
| 2-D image input | 1-by-1-by-NumChannels |
| 2-D image sequence input | 1-by-1-by-NumChannels |
| 3-D image input | 1-by-1-by-1-by-NumChannels |
| 3-D image sequence input | 1-by-1-by-1-by-NumChannels |
Data Types: single | double
Channel offsets β, specified as a numeric vector.
The channel offsets are learnable parameters. When you train a network using the trainnet
function or initialize a dlnetwork object, if Offset is nonempty, then the software uses the Offset property as the initial value. If Offset is empty, then the software uses the initializer specified by
OffsetInitializer.
Depending on the type of layer input, the trainnet and
dlnetwork functions automatically reshape this property to have one of
the following sizes:
| Layer Input | Property Size |
|---|---|
| feature input | NumChannels-by-1 |
| vector sequence input | NumChannels-by-1 |
| 1-D image input | 1-by-NumChannels |
| 1-D image sequence input | 1-by-NumChannels |
| 2-D image input | 1-by-1-by-NumChannels |
| 2-D image sequence input | 1-by-1-by-NumChannels |
| 3-D image input | 1-by-1-by-1-by-NumChannels |
| 3-D image sequence input | 1-by-1-by-1-by-NumChannels |
Data Types: single | double
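For example, a sketch of supplying explicit initial values for the learnable scale factors and offsets (16 channels assumed; the trainnet and dlnetwork functions reshape the vectors to match the layer input type during initialization):

layer = batchNormalizationLayer(NumChannels=16, ...
    Scale=ones(16,1,"single"), ...
    Offset=zeros(16,1,"single"));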
Decay value for the moving mean computation, specified as a numeric
scalar between 0 and 1.
When you use the trainNetwork or
trainnet function and the BatchNormalizationStatistics training option is
'moving', at each iteration, the layer updates
the moving mean value using

$$\mu^{*} = \lambda_{\mu}\mu + (1-\lambda_{\mu})\hat{\mu},$$

where $\mu^{*}$ denotes the updated mean, $\lambda_{\mu}$ denotes the mean decay value, $\mu$ denotes the mean of the layer input, and $\hat{\mu}$ denotes the latest value of the moving mean.
When you use the trainNetwork or
trainnet function and the BatchNormalizationStatistics training option is
'population', this option has no effect.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Decay value for the moving variance computation, specified as a
numeric scalar between 0 and
1.
When you use the trainNetwork or
trainnet function and the BatchNormalizationStatistics training option is
'moving', at each iteration, the layer updates
the moving variance value using

$$\sigma^{2*} = \lambda_{\sigma^{2}}\sigma^{2} + (1-\lambda_{\sigma^{2}})\widehat{\sigma^{2}},$$

where $\sigma^{2*}$ denotes the updated variance, $\lambda_{\sigma^{2}}$ denotes the variance decay value, $\sigma^{2}$ denotes the variance of the layer input, and $\widehat{\sigma^{2}}$ denotes the latest value of the moving variance.
When you use the trainNetwork or
trainnet function and the BatchNormalizationStatistics training option is
'population', this option has no effect.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
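For example, a sketch of lowering both decay values to give the running estimates a longer memory (these settings only take effect when the BatchNormalizationStatistics training option is 'moving'):

layer = batchNormalizationLayer;
layer.MeanDecay = 0.05;
layer.VarianceDecay = 0.05;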
Learning Rate and Regularization
Learning rate factor for the scale factors, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Learning rate factor for the offsets, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate
for the offsets in a layer. For example, if OffsetLearnRateFactor
is 2, then the learning rate for the offsets in the layer is twice
the current global learning rate. The software determines the global learning rate based
on the settings specified with the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
L2 regularization factor for the scale factors, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization
factor to determine the L2 regularization for the scale factors in a layer. For example, if
ScaleL2Factor is 2, then the
L2 regularization for the scale factors in the layer is twice the
global L2 regularization factor. You can specify the global
L2 regularization factor using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
L2 regularization factor for the offsets, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization
factor to determine the L2 regularization for the offsets in a layer. For example, if
OffsetL2Factor is 2, then the
L2 regularization for the offsets in the layer is twice the
global L2 regularization factor. You can specify the global
L2 regularization factor using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
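For example, a sketch of doubling the learning rate of the scale factors while freezing the offsets, relative to the global settings from trainingOptions:

layer = batchNormalizationLayer;
layer.ScaleLearnRateFactor = 2;
layer.OffsetLearnRateFactor = 0;
layer.OffsetL2Factor = 0;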
Layer
This property is read-only.
Number of inputs to the layer, stored as 1. This layer accepts a
single input only.
Data Types: double
This property is read-only.
Input names, stored as {'in'}. This layer accepts a single input
only.
Data Types: cell
This property is read-only.
Number of outputs from the layer, stored as 1. This layer has a
single output only.
Data Types: double
This property is read-only.
Output names, stored as {'out'}. This layer has a single output
only.
Data Types: cell
Examples
Create a batch normalization layer with the name BN1.
layer = batchNormalizationLayer(Name="BN1")

layer =
  BatchNormalizationLayer with properties:
Name: 'BN1'
NumChannels: 'auto'
Hyperparameters
MeanDecay: 0.1000
VarianceDecay: 0.1000
Epsilon: 1.0000e-05
Learnable Parameters
Offset: []
Scale: []
State Parameters
TrainedMean: []
TrainedVariance: []
Include batch normalization layers in a Layer array.
layers = [
imageInputLayer([32 32 3])
convolution2dLayer(3,16,Padding=1)
batchNormalizationLayer
reluLayer
maxPooling2dLayer(2,Stride=2)
convolution2dLayer(3,32,Padding=1)
batchNormalizationLayer
reluLayer
fullyConnectedLayer(10)
softmaxLayer
]

layers =
10×1 Layer array with layers:
1 '' Image Input 32×32×3 images with 'zerocenter' normalization
2 '' 2-D Convolution 16 3×3 convolutions with stride [1 1] and padding [1 1 1 1]
3 '' Batch Normalization Batch normalization
4 '' ReLU ReLU
5 '' 2-D Max Pooling 2×2 max pooling with stride [2 2] and padding [0 0 0 0]
6 '' 2-D Convolution 32 3×3 convolutions with stride [1 1] and padding [1 1 1 1]
7 '' Batch Normalization Batch normalization
8 '' ReLU ReLU
9 '' Fully Connected 10 fully connected layer
10 '' Softmax softmax
Tips
If you train a neural network using a custom training loop, you must update the neural network batch normalization state manually. To learn more, see Update Batch Normalization Statistics in Custom Training Loop.
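A minimal sketch of the state update step (with a placeholder network and random data): forward returns the updated batch normalization state, which you assign back to the network so that the running mean and variance estimates accumulate.

net = dlnetwork([
    imageInputLayer([8 8 3],Normalization="none")
    convolution2dLayer(3,4,Padding="same")
    batchNormalizationLayer
    reluLayer]);
X = dlarray(rand(8,8,3,16,"single"),"SSCB");

[Y,state] = forward(net,X);   % state contains the updated mean and variance estimates
net.State = state;            % manual state update required in a custom training loop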
Algorithms
A batch normalization layer normalizes a mini-batch of data across all observations for each channel independently. To speed up training of the convolutional neural network and reduce the sensitivity to network initialization, use batch normalization layers between convolutional layers and nonlinearities, such as ReLU layers.
The layer first normalizes the activations of each channel by subtracting the mini-batch mean and dividing by the mini-batch standard deviation. Then, the layer shifts the input by a learnable offset β and scales it by a learnable scale factor γ. β and γ are themselves learnable parameters that are updated during network training.
Batch normalization layers normalize the activations and gradients propagating through a
neural network, making network training an easier optimization problem. To take full
advantage of this fact, you can try increasing the learning rate. Since the optimization
problem is easier, the parameter updates can be larger and the network can learn faster. You
can also try reducing the L2 and dropout regularization. With batch
normalization layers, the activations of a specific image during training depend on which
images happen to appear in the same mini-batch. To take full advantage of this regularizing
effect, try shuffling the training data before every training epoch. To specify how often to
shuffle the data during training, use the 'Shuffle' name-value pair
argument of trainingOptions.
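For example, a sketch of training options along these lines (all values are illustrative only):

options = trainingOptions("sgdm", ...
    InitialLearnRate=0.1, ...
    Shuffle="every-epoch", ...
    L2Regularization=1e-5);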
The batch normalization operation normalizes the elements $x_i$ of the input by first calculating the mean $\mu_B$ and variance $\sigma_B^2$ over the spatial, time, and observation dimensions for each channel independently. Then, it calculates the normalized activations as

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}},$$

where $\epsilon$ is a constant that improves numerical stability when the variance is very small.
To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow batch normalization, the batch normalization operation further shifts and scales the activations using the transformation

$$y_i = \gamma \hat{x}_i + \beta,$$

where the offset β and scale factor γ are learnable parameters that are updated during network training.
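As a rough numeric illustration of these two equations using plain MATLAB functions (the array sizes, data, and parameter values are placeholders):

% Mini-batch of 2-D image data: height-by-width-by-channels-by-observations
X = randn(4,4,3,8,"single");
epsilon = 1e-5;
gamma = ones(1,1,3,"single");    % scale factor, one per channel
beta  = zeros(1,1,3,"single");   % offset, one per channel

mu      = mean(X,[1 2 4]);       % per-channel mini-batch mean
sigmaSq = var(X,1,[1 2 4]);      % per-channel mini-batch variance

Xhat = (X - mu)./sqrt(sigmaSq + epsilon);   % normalize
Y = gamma.*Xhat + beta;                     % scale and shift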
To make predictions with the network after training, batch normalization requires a fixed mean and variance to normalize the data. This fixed mean and variance can be calculated from the training data after training, or approximated during training using running statistic computations.
If the BatchNormalizationStatistics training option is 'moving',
then the software approximates the batch normalization statistics during training using a
running estimate and, after training, sets the TrainedMean and
TrainedVariance properties to the latest values of the moving
estimates of the mean and variance, respectively.
If the BatchNormalizationStatistics training option is
'population', then after network training finishes, the software
passes through the data once more and sets the TrainedMean and
TrainedVariance properties to the mean and variance computed from
the entire training data set, respectively.
The layer uses TrainedMean and TrainedVariance to
normalize the input during prediction.
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects.
The format of a dlarray object is a string of characters in which each
character describes the corresponding dimension of the data. The format consists of one or
more of these characters:
"S"— Spatial"C"— Channel"B"— Batch"T"— Time"U"— Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the
first two dimensions correspond to the spatial dimensions of the images, the third
dimension corresponds to the channels of the images, and the fourth dimension
corresponds to the batch dimension, as having the format "SSCB"
(spatial, spatial, channel, batch).
You can interact with these dlarray objects in automatic differentiation
workflows, such as those for developing a custom layer, using a functionLayer
object, or using the forward and predict functions with
dlnetwork objects.
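For example, a sketch of passing a formatted dlarray through a small dlnetwork that contains a batch normalization layer (the sizes are placeholders); the output keeps the "SSCB" format of the input:

net = dlnetwork([
    imageInputLayer([8 8 3],Normalization="none")
    batchNormalizationLayer]);
X = dlarray(rand(8,8,3,4,"single"),"SSCB");   % spatial, spatial, channel, batch
Y = predict(net,X);
dims(Y)   % 'SSCB'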
This table shows the supported input formats of BatchNormalizationLayer objects and the
corresponding output format. If the software passes the output of the layer to a custom
layer that does not inherit from the nnet.layer.Formattable class, or a
FunctionLayer object with the Formattable property
set to 0 (false), then the layer receives an
unformatted dlarray object with dimensions ordered according to the formats
in this table. The formats listed here are only a subset. The layer may support additional
formats such as formats with additional "S" (spatial) or
"U" (unspecified) dimensions.
| Input Format | Output Format |
|---|---|
| "CB" (channel, batch) | "CB" (channel, batch) |
| "SCB" (spatial, channel, batch) | "SCB" (spatial, channel, batch) |
| "SSCB" (spatial, spatial, channel, batch) | "SSCB" (spatial, spatial, channel, batch) |
| "SSSCB" (spatial, spatial, spatial, channel, batch) | "SSSCB" (spatial, spatial, spatial, channel, batch) |
| "CBT" (channel, batch, time) | "CBT" (channel, batch, time) |
| "SCBT" (spatial, channel, batch, time) | "SCBT" (spatial, channel, batch, time) |
| "SSCBT" (spatial, spatial, channel, batch, time) | "SSCBT" (spatial, spatial, channel, batch, time) |
| "SSSCBT" (spatial, spatial, spatial, channel, batch, time) | "SSSCBT" (spatial, spatial, spatial, channel, batch, time) |
In dlnetwork objects, BatchNormalizationLayer objects also
support these input and output format combinations.
| Input Format | Output Format |
|---|---|
|
|
|
|
|
|
|
|
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Version History
Introduced in R2017b
R2023a: The Epsilon option also supports positive values less than 1e-5.