attention

Dot-product attention

Since R2022b

    Description

    The attention operation focuses on parts of the input using weighted multiplication operations.

    Y = attention(queries,keys,values,numHeads) applies the dot-product attention operation to the specified queries, keys, and values using the number of attention heads numHeads. The queries input argument must be a formatted dlarray object.

    [Y,weights] = attention(queries,keys,values,numHeads) applies the dot-product attention operation and also returns the attention weights.

    [Y,weights] = attention(queries,keys,values,numHeads,DataFormat=FMT) applies the dot-product attention operation to the unformatted dlarray object queries with format specified by FMT. For example, DataFormat="CBT" specifies data in the format "CBT" (channel, batch, time).

    [Y,weights] = attention(queries,keys,values,numHeads,Name=Value) specifies additional options using one or more name-value arguments. For example, DropoutProbability=0.01 specifies a dropout probability of 0.01.
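
    For instance, a minimal call using name-value arguments might look like the following sketch. The array sizes and the number of heads are illustrative assumptions rather than values taken from the examples below.

    queries = dlarray(rand(32,8,10),"CBT");   % channel-by-batch-by-time
    keys = rand(32,8,12);
    values = rand(32,8,12);
    
    % Apply attention with four heads and a small dropout on the attention weights.
    [Y,weights] = attention(queries,keys,values,4,DropoutProbability=0.01);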

    Examples

    Specify the sizes of the queries, keys, and values.

    querySize = 100;
    valueSize = 120;
    numQueries = 64;
    numValues = 80;
    numObservations = 32;

    Create random arrays containing the queries, keys, and values. For the queries, specify the dlarray format "CBT" (channel, batch, time).

    queries = dlarray(rand(querySize,numObservations,numQueries),"CBT");
    keys = dlarray(rand(querySize,numObservations,numValues));
    values = dlarray(rand(valueSize,numObservations,numValues));

    Specify the number of attention heads.

    numHeads = 5;

    Apply the attention operation.

    [Y,weights] = attention(queries,keys,values,numHeads);

    View the sizes and format of the output.

    size(Y)
    ans = 1×3
    
       120    32    64
    
    
    dims(Y)
    ans = 
    'CBT'
    

    View the sizes and format of the weights.

    size(weights)
    ans = 1×4
    
        80    64     5    32
    
    
    dims(weights)
    ans =
    
      0x0 empty char array
    

    You can use the attention function to implement the multihead self-attention operation [1] that focuses on parts of the input.

    Create the multiheadSelfAttention function, listed in the Multihead Self-Attention Function section of the example. The multiheadSelfAttention function takes as input the data X, the number of heads, and the learnable weights for the queries, keys, values, and output data, and returns the multihead attention values.

    The X input must be an unformatted dlarray object, where the first dimension corresponds to the input channels, the second dimension corresponds to the batch dimension, and the third dimension corresponds to the time or spatial dimension.

    Create an array of sequence data.

    numChannels = 10;
    numObservations = 128;
    numTimeSteps = 100;
    
    X = rand(numChannels,numObservations,numTimeSteps);
    X = dlarray(X);
    size(X)
    ans = 1×3
    
        10   128   100
    
    

    Specify the number of heads for multihead attention.

    numHeads = 8;

    Initialize the learnable parameters for multihead attention.

    • The learnable query, key, and value weights must be (numChannels*numHeads)-by-numChannels arrays.

    • The learnable output weights must be a (numChannels*numHeads)-by-(numChannels*numHeads) array.

    outputSize = numChannels*numHeads;
    
    WQ = rand(outputSize,numChannels);
    WK = rand(outputSize,numChannels);
    WV = rand(outputSize,numChannels);
    WO = rand(outputSize,outputSize);

    Apply the multihead self-attention operation.

    Y = multiheadSelfAttention(X,numHeads,WQ,WK,WV,WO);

    View the size of the output. The output has size (numChannels*numHeads)-by-numObservations-by-numTimeSteps.

    size(Y)
    ans = 1×3
    
        80   128   100
    
    

    Multihead Self-Attention Function

    The multiheadSelfAttention function takes as input the data X, the number of heads, and the learnable weights for the queries, keys, values, and output data, and returns the multihead attention values.

    • The X input must be an unformatted dlarray object, where the first dimension corresponds to the input channels, the second dimension corresponds to the batch dimension, and the third dimension corresponds to the time or spatial dimension.

    • The learnable query, key, and value weight matrices are (numChannels*numHeads)-by-numChannels matrices.

    • The learnable output weights matrix is a (numChannels*numHeads)-by-(numChannels*numHeads) matrix.

    function Y = multiheadSelfAttention(X,numHeads,WQ,WK,WV,WO)
    
    queries = pagemtimes(WQ,X);
    keys = pagemtimes(WK,X);
    values = pagemtimes(WV,X);
    
    A = attention(queries,keys,values,numHeads,DataFormat="CBT");
    
    Y = pagemtimes(WO,A);
    
    end

    You can use the attention function to create a function that applies the Luong attention operation to its input. Create the luongAttention function, listed at the end of the example.

    Specify the array sizes.

    numHiddenUnits = 100;
    latentSize = 16;

    Create random arrays containing the input data.

    hiddenState = dlarray(rand(numHiddenUnits,1));
    Z = dlarray(rand(latentSize,1));
    weights = dlarray(rand(numHiddenUnits,latentSize));

    Apply the luongAttention function.

    [context,scores] = luongAttention(hiddenState,Z,weights);

    View the sizes of the outputs.

    size(context)
    ans = 1×2
    
        16     1
    
    
    size(scores)
    ans = 1×2
    
         1     1
    
    

    Luong Attention Function

    The luongAttention function returns the context vector and attention scores according to the Luong "general" scoring [2]. This operation is equivalent to dot-product attention with queries, keys, and values specified as the hidden state, the weighted latent representation, and the latent representation, respectively.

    function [context,scores] = luongAttention(hiddenState,Z,weights)
    
    numHeads = 1;
    queries = hiddenState;
    keys = pagemtimes(weights,Z);
    values = Z;
    
    [context,scores] = attention(queries,keys,values,numHeads, ...
        Scale=1, ...
        DataFormat="CBT");
    
    end

    Input Arguments

    queries

    Queries, specified as a dlarray object.

    queries can have at most one "S" (spatial) or "T" (time) dimension. Any dimensions in queries labeled "U" (unspecified) must be singleton. If queries is an unformatted dlarray object, then specify the data format using the DataFormat option.

    The size of the "C" (channel) dimension in keys must match the size of the corresponding dimension in queries.

    The size of the "B" (batch) dimension in queries, keys, and values must match.

    keys

    Keys, specified as a dlarray object or a numeric array.

    If keys is a formatted dlarray object, then its format must match the format of queries. If keys is not a formatted dlarray object, then the function uses the same format as queries.

    The size of any "S" (spatial) or "T" (time) dimensions in keys must match the size of the corresponding dimension in values.

    The size of the "C" (channel) dimension in keys must match the size of the corresponding dimension in queries.

    The size of the "B" (batch) dimension in queries, keys, and values must match.

    values

    Values, specified as a dlarray object or a numeric array.

    If values is a formatted dlarray object, then its format must match the format of queries. Otherwise, the function uses the same format as queries.

    The size of any "S" (spatial) or "T" (time) dimensions in keys must match the size of the corresponding dimension in values.

    The size of the "B" (batch) dimension in queries, keys, and values must match.

    numHeads

    Number of heads, specified as a positive integer.

    Each head performs a separate linear transformation of the input and computes attention weights independently. The operation uses these attention weights to compute a weighted sum of the input representations, generating a context vector. Increasing the number of heads lets the model capture different types of dependencies and attend to different parts of the input simultaneously. Reducing the number of heads can lower the computational cost of the operation.

    The value of numHeads must evenly divide the size of the "C" (channel) dimension of queries, keys, and values.
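
    For example, in this sketch the channel size and number of heads are assumptions chosen so that numHeads evenly divides the channel dimension.

    numChannels = 96;
    numHeads = 8;
    
    % The channel size of the queries, keys, and values must be divisible by numHeads.
    assert(mod(numChannels,numHeads) == 0,"numHeads must evenly divide the channel dimension.")
    
    queries = dlarray(rand(numChannels,16,20),"CBT");
    keys = rand(numChannels,16,25);
    values = rand(numChannels,16,25);
    
    Y = attention(queries,keys,values,numHeads);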

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

    Example: attention(queries,keys,values,numHeads,DataFormat="CBT") applies the attention operation for unformatted data and specifies the data format "CBT" (channel, batch, time).

    DataFormat

    Description of the data dimensions, specified as a character vector or string scalar.

    A data format is a string of characters, where each character describes the type of the corresponding data dimension.

    The characters are:

    • "S" — Spatial

    • "C" — Channel

    • "B" — Batch

    • "T" — Time

    • "U" — Unspecified

    For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).

    You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once. The software ignores singleton trailing "U" dimensions after the second dimension.

    If the input data is not a formatted dlarray object, then you must specify the DataFormat option.

    For more information, see Deep Learning Data Formats.

    Data Types: char | string
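
    As an illustration, these two calls are equivalent ways of supplying the format information. The sizes and variable names are assumptions for this sketch.

    Q = rand(64,8,10);    % channel-by-batch-by-time
    K = rand(64,8,12);
    V = rand(96,8,12);
    numHeads = 4;
    
    % Format attached to the data versus format supplied as an option.
    Yformatted = attention(dlarray(Q,"CBT"),K,V,numHeads);
    Yunformatted = attention(dlarray(Q),K,V,numHeads,DataFormat="CBT");

    The first call returns a formatted dlarray and the second returns an unformatted dlarray; both should contain the same values.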

    Scale

    Multiplicative factor for scaled dot-product attention [1], specified as one of these values:

    • "auto" — Multiply the dot-product by λ=1dk, where dk denotes the number of channels in the keys divided by the number of heads.

    • Numeric scalar — Multiply the dot product by the specified scale factor.

    Data Types: single | double | char | string
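
    For example, this sketch, with assumed sizes, reproduces the default "auto" scaling explicitly.

    numHeads = 4;
    queries = dlarray(rand(64,8,10),"CBT");
    keys = rand(64,8,12);
    values = rand(64,8,12);
    
    dk = size(keys,1)/numHeads;    % channels in the keys divided by the number of heads
    
    Yauto = attention(queries,keys,values,numHeads);                     % Scale="auto"
    Yscaled = attention(queries,keys,values,numHeads,Scale=1/sqrt(dk));  % same scaling, made explicit

    Both calls should return the same values.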

    PaddingMask

    Mask indicating which elements of the input correspond to padding values, specified as a dlarray object, a logical array, or a binary-valued numeric array.

    The function prevents attention to a key-value element of the input data when the corresponding element in PaddingMask is 0, and allows attention when the corresponding element is 1.

    If PaddingMask is a formatted dlarray object, then its format must match that of keys. If PaddingMask is not a formatted dlarray object, then the function uses the same format as keys. The size of the "S" (spatial), "T" (time), and "B" (batch) dimensions in PaddingMask must match the size of the corresponding dimensions in keys and values.

    The padding mask can have any number of channels. The software uses the values in the first channel only to indicate padding values.

    The default value is a logical array of ones with the same size as keys.
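
    For example, this sketch builds a padding mask for two sequences of different lengths that are padded to a common length. All sizes and lengths are assumptions for illustration.

    numChannels = 16;
    numHeads = 2;
    maxLength = 6;
    lengths = [6 4];    % valid (non-padding) time steps for each observation
    
    queries = dlarray(rand(numChannels,2,maxLength),"CBT");
    keys = rand(numChannels,2,maxLength);
    values = rand(numChannels,2,maxLength);
    
    % One channel, two observations, maxLength time steps; 1 marks valid elements.
    mask = zeros(1,2,maxLength);
    for i = 1:numel(lengths)
        mask(1,i,1:lengths(i)) = 1;
    end
    
    Y = attention(queries,keys,values,numHeads,PaddingMask=mask);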

    AttentionMask

    Attention mask indicating which elements to include when applying the attention operation, specified as one of these values:

    • "none" — Do not prevent attention to elements with respect to their positions. If AttentionMask is "none", then the software prevents attention using only the padding mask.

    • "causal" — Prevent elements in position m in the "S" (spatial) or "T" (time) dimension of the input queries from providing attention to the elements in positions n, where n is greater than m in the corresponding dimension of the input keys and values. Use this option for auto-regressive models.

    • Logical or numeric array — Prevent attention to elements of the input keys and values when the corresponding element in the specified array is 0. The specified array must be an Nk-by-Nq matrix or an Nk-by-Nq-by-numObservations array, where Nk is the size of the "S" (spatial) or "T" (time) dimension of the input keys, Nq is the size of the corresponding dimension of the input queries, and numObservations is the size of the "B" (batch) dimension in the input queries.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical | char | string
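
    For example, this sketch, with assumed sizes, applies a causal mask first by using the "causal" option and then by using an explicit matrix built from the description above.

    numHeads = 4;
    queries = dlarray(rand(32,8,10),"CBT");
    keys = rand(32,8,10);
    values = rand(32,8,10);
    
    Ycausal = attention(queries,keys,values,numHeads,AttentionMask="causal");
    
    % Explicit Nk-by-Nq mask: element (k,q) is 1 when key position k is not after query position q.
    Nk = size(keys,3);
    Nq = size(queries,3);
    mask = triu(ones(Nk,Nq));
    Yexplicit = attention(queries,keys,values,numHeads,AttentionMask=mask);

    Under these assumptions, both calls should produce the same result.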

    DropoutProbability

    Dropout probability for the attention weights, specified as a scalar in the range [0, 1).

    Data Types: single | double

    Output Arguments

    Y

    Result of attention operation, returned as a dlarray object.

    If queries is a formatted dlarray object, then Y is a formatted dlarray object with the same dimension labels as queries. The size of the "C" (channel) dimension of Y is the same as the size of the corresponding dimension in values. The size of the "S" (spatial) or "T" (time) dimension of Y is the same as the size of the corresponding dimension in queries.

    If queries is not a formatted dlarray object, then Y is an unformatted dlarray object.

    weights

    Attention weights, returned as an unformatted dlarray object.

    weights is an Nk-by-Nq-by-numHeads-by-numObservations array, where Nk is the size of the "S" (spatial) or "T" (time) dimension of keys, Nq is the size of the corresponding dimension in queries, and numObservations is the size of the "B" (batch) dimension in queries.
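
    For example, with assumed sizes, you can index into weights to inspect the attention pattern of one head for one observation.

    numHeads = 3;
    queries = dlarray(rand(12,4,5),"CBT");
    keys = rand(12,4,8);
    values = rand(12,4,8);
    
    [Y,weights] = attention(queries,keys,values,numHeads);
    
    size(weights)          % Nk-by-Nq-by-numHeads-by-numObservations, here 8-by-5-by-3-by-4
    w = weights(:,:,1,1);  % weights of the first head for the first observation
    sum(w,1)               % for each query, the weights should sum to 1 across the key positions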

    Algorithms

    Dot-Product Attention

    The attention operation focuses on parts of the input using weighted multiplication operations.

    The single-head dot-product attention operation is given by

    attention(Q,K,V) = dropout(softmax(mask(λQKᵀ,M)),p)V,

    where:

    • Q denotes the queries.

    • K denotes the keys.

    • V denotes the values.

    • λ denotes the scaling factor.

    • M is a mask array of ones and zeros.

    • p is the dropout probability.

    The mask operation includes or excludes the values of the matrix multiplication by setting the values of the input to −∞ for zero-valued mask elements. The mask is the union of the padding and attention masks. The softmax function normalizes the masked scores over the key dimension so that, for each query, the attention weights sum to one. The dropout operation sets elements to zero with probability p.
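
    The following sketch checks this formula numerically for a single head and a single observation, without masking or dropout. The sizes are assumptions, and the comparison uses the attention function with its default "auto" scaling.

    qC = 6; vC = 5; Nq = 4; Nk = 7;
    
    Q = rand(qC,1,Nq);    % channel-by-batch-by-time
    K = rand(qC,1,Nk);
    V = rand(vC,1,Nk);
    
    Yref = attention(dlarray(Q,"CBT"),K,V,1);
    
    % Manual computation for the single observation.
    lambda = 1/sqrt(qC);                          % dk = qC/numHeads with numHeads = 1
    scores = lambda*(squeeze(K)'*squeeze(Q));     % Nk-by-Nq scaled dot products
    weights = exp(scores)./sum(exp(scores),1);    % softmax over the key positions
    Ymanual = squeeze(V)*weights;                 % weighted sum of the values, vC-by-Nq
    
    max(abs(extractdata(Yref) - reshape(Ymanual,vC,1,Nq)),[],"all")  % should be close to zero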

    Multihead Self-Attention

    The multihead self-attention operation for the input X is given by

    multiheadSelfAttention(X,h,WQ,WK,WV,WO) = concatenate(head1,…,headh)WO,

    where:

    • h is the number of heads.

    • WQ is a learnable projection matrix for the queries.

    • WK is a learnable projection matrix for the keys.

    • WV is a learnable projection matrix for the values.

    • WO is a learnable projection matrix for the output.

    Each weight matrix is composed of concatenated weight matrices Wi for each head. Each headi denotes the output of the head operation given by

    headi = attention(XWiQ,XWiK,XWiV).

    Deep Learning Array Formats

    Most deep learning networks and functions operate on different dimensions of the input data in different ways.

    For example, an LSTM operation iterates over the time dimension of the input data and a batch normalization operation normalizes over the batch dimension of the input data.

    To provide input data with labeled dimensions or input data with additional layout information, you can use data formats.

    A data format is a string of characters, where each character describes the type of the corresponding data dimension.

    The characters are:

    • "S" — Spatial

    • "C" — Channel

    • "B" — Batch

    • "T" — Time

    • "U" — Unspecified

    For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).

    To create formatted input data, create a dlarray object and specify the format using the second argument.

    To provide additional layout information with unformatted data, specify the format using the DataFormat argument.
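
    For example, this sketch, with assumed sizes, attaches the format "CBT" when creating the array.

    X = dlarray(rand(10,128,100),"CBT");
    dims(X)    % returns 'CBT'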

    For more information, see Deep Learning Data Formats.

    References

    [1] Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention is all you need." Advances in neural information processing systems 30 (December 2017): 6000-6010. https://papers.nips.cc/paper/7181-attention-is-all-you-need.

    [2] Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. "Effective approaches to attention-based neural machine translation." arXiv preprint arXiv:1508.04025 (2015).

    Version History

    Introduced in R2022b