
templateKernel

Kernel learner template

Description


t = templateKernel returns a kernel learner template suitable for training a Gaussian kernel model for nonlinear classification or regression.


t = templateKernel(Name,Value) returns a template with additional options specified by one or more name-value arguments.

For example, you can specify the learner or the number of dimensions of the expanded space.

If you specify the type of model by using the Type name-value argument, then the display of t in the Command Window shows all options as empty ([]), except those that you specify using name-value arguments. If you do not specify the type of model, then the display suppresses the empty options. During training, the software uses default values for empty options.

Examples


Create a default kernel model template and use it to train an error-correcting output codes (ECOC) multiclass model.

Load Fisher's iris data set.

load fisheriris

Create a default kernel model template.

t = templateKernel
t = 
Fit template for Kernel.
    Learner: 'svm'

During training, the software fills in the empty properties with their respective default values.

Specify t as a binary learner for an ECOC multiclass model.

Mdl = fitcecoc(meas,species,'Learners',t)
Mdl = 
  CompactClassificationECOC
      ResponseName: 'Y'
        ClassNames: {'setosa'  'versicolor'  'virginica'}
    ScoreTransform: 'none'
    BinaryLearners: {3x1 cell}
      CodingMatrix: [3x3 double]


Mdl is a CompactClassificationECOC multiclass classifier.
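As a quick sanity check (a sketch, not part of the original example), predict the training labels with the returned model and estimate the resubstitution accuracy:

labels = predict(Mdl,meas);              % predicted class labels, returned as a cell array
accuracy = mean(strcmp(labels,species))  % fraction of correctly classified observations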

Create a kernel model template with additional options to implement logistic regression with a kernel scale parameter selected by a heuristic procedure.

t = templateKernel('Learner','logistic','KernelScale','auto')
t = 
Fit template for classification Kernel.

             BetaTolerance: []
                 BlockSize: []
             BoxConstraint: []
                   Epsilon: []
    NumExpansionDimensions: []
         GradientTolerance: []
        HessianHistorySize: []
            IterationLimit: []
               KernelScale: 'auto'
                    Lambda: []
                   Learner: 'logistic'
              LossFunction: []
                    Stream: []
            VerbosityLevel: []
           StandardizeData: []
                   Version: 1
                    Method: 'Kernel'
                      Type: 'classification'

Input Arguments


Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'Learner','logistic','NumExpansionDimensions',2^15,'KernelScale','auto' specifies to implement logistic regression after mapping the predictor data to the 2^15-dimensional space using feature expansion, with a kernel scale parameter selected by a heuristic procedure.

For Classification Models and Regression Models


Learner — Kernel learner type

Kernel learner type, specified as "svm", "logistic", or "leastsquares".

In the following descriptions, f(x) = T(x)β + b.

  • x is an observation (row vector) from p predictor variables.

  • T(·) is a transformation of an observation (row vector) for feature expansion. T(x) maps x in ℝ^p to a high-dimensional space (ℝ^m).

  • β is a vector of coefficients.

  • b is the scalar bias.

"svm" — Support vector machine (classification or regression)

  • Response range: classification, y ∊ {–1,1} (1 for the positive class and –1 otherwise); regression, y ∊ (–∞,∞)

  • Loss function: classification, hinge loss ℓ[y,f(x)] = max[0, 1 – yf(x)]; regression, epsilon-insensitive loss ℓ[y,f(x)] = max[0, |y – f(x)| – ε]

"logistic" — Logistic regression (classification only)

  • Response range: y ∊ {–1,1}; 1 for the positive class and –1 otherwise

  • Loss function: deviance (logistic) ℓ[y,f(x)] = log{1 + exp[–yf(x)]}

"leastsquares" — Linear regression via ordinary least squares (regression only)

  • Response range: y ∊ (–∞,∞)

  • Loss function: mean squared error (MSE) ℓ[y,f(x)] = ½[y – f(x)]²

Example: "Learner","logistic"

NumExpansionDimensions — Number of dimensions of expanded space

Number of dimensions of the expanded space, specified as the comma-separated pair consisting of 'NumExpansionDimensions' and 'auto' or a positive integer. For 'auto', the templateKernel function selects the number of dimensions using 2.^ceil(min(log2(p)+5,15)), where p is the number of predictors.

For details, see Random Feature Expansion.

Example: 'NumExpansionDimensions',2^15

Data Types: char | string | single | double
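For instance, a minimal sketch of the 'auto' heuristic for a data set with p = 4 predictors (such as fisheriris):

p = 4;                             % number of predictors
m = 2.^ceil(min(log2(p)+5,15))     % returns 2^7 = 128 expansion dimensions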

KernelScale — Kernel scale parameter

Kernel scale parameter, specified as "auto" or a positive scalar. The software obtains a random basis for random feature expansion by using the kernel scale parameter. For details, see Random Feature Expansion.

If you specify "auto", then the software selects an appropriate kernel scale parameter using a heuristic procedure. This heuristic procedure uses subsampling, so estimates can vary from one call to another. Therefore, to reproduce results, set a random number seed by using rng before training.

Example: KernelScale="auto"

Data Types: char | string | single | double
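For example, a sketch of reproducing the heuristic selection, assuming the fisheriris variables from the Examples section are loaded:

rng(1)                                        % seed the global stream before training
t = templateKernel('KernelScale','auto');
Mdl1 = fitcecoc(meas,species,'Learners',t);
rng(1)                                        % same seed, so same subsample and kernel scale
Mdl2 = fitcecoc(meas,species,'Learners',t);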

BoxConstraint — Box constraint

Box constraint, specified as the comma-separated pair consisting of 'BoxConstraint' and a positive scalar.

This argument is valid only when 'Learner' is 'svm' (default) and you do not specify a value for the regularization term strength 'Lambda'. You can specify either 'BoxConstraint' or 'Lambda', but not both, because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations.

Example: 'BoxConstraint',100

Data Types: single | double
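As a sketch of the relation C = 1/(λn), specifying a box constraint implicitly fixes the regularization term strength:

n = 150;                  % number of observations (for example, fisheriris)
C = 100;                  % box constraint
lambda = 1/(C*n)          % implied regularization term strength, about 6.67e-05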

Lambda — Regularization term strength

Regularization term strength, specified as the comma-separated pair consisting of 'Lambda' and 'auto' or a nonnegative scalar.

For 'auto', the value of Lambda is 1/n, where n is the number of observations.

When Learner is 'svm', you can specify either BoxConstraint or Lambda because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn).

Example: 'Lambda',0.01

Data Types: char | string | single | double

Since R2023b

Standardize — Flag to standardize predictor data

Flag to standardize the predictor data, specified as a numeric or logical 0 (false) or 1 (true). If you set Standardize to true, then the software centers and scales each numeric predictor variable by the corresponding column mean and standard deviation. The software does not standardize the categorical predictors.

Example: "Standardize",true

Data Types: single | double | logical

Since R2023b

Type — Kernel model type

Kernel model type, specified as "classification" or "regression".

  • "classification" — Create a classification kernel learner template. If you do not specify Type as "classification", the fitting functions fitcecoc, testckfold, and fitsemigraph set this value when you pass t to them.

  • "regression" — Create a regression kernel learner template. If you do not specify Type as "regression", the fitting function directforecaster sets this value when you pass t to it.

Example: "Type","classification"

Data Types: char | string
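For example, a minimal sketch of fixing the model type up front so that the template is unambiguous before you pass it to a fitting function:

t = templateKernel('Type','regression','Learner','svm');   % regression kernel template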

For Regression Models Only


Epsilon — Half the width of epsilon-insensitive band

Half the width of the epsilon-insensitive band, specified as a nonnegative scalar value. This argument applies to support vector machine learners only.

The default Epsilon value is iqr(Y)/13.49, which is an estimate of standard deviation using the interquartile range of the response variable Y. If iqr(Y) is equal to zero, then the default Epsilon value is 0.1.

Example: "Epsilon",0.3

Data Types: single | double
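A sketch of the default computation, using a hypothetical response vector Y:

Y = randn(100,1);                 % hypothetical response variable
epsilonDefault = iqr(Y)/13.49;    % estimate of the standard deviation of Y
if iqr(Y) == 0
    epsilonDefault = 0.1;         % fallback when the interquartile range is zero
end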

Convergence Controls


BetaTolerance — Relative tolerance on linear coefficients and bias term

Relative tolerance on the linear coefficients and the bias term (intercept), specified as a nonnegative scalar.

Let Bt = [βt; bt], that is, the vector of the coefficients and the bias term at optimization iteration t. If ‖(Bt – Bt–1)/Bt‖2 < BetaTolerance, then optimization terminates.

If you also specify GradientTolerance, then optimization terminates when the software satisfies either stopping criterion.

Example: BetaTolerance=1e-6

Data Types: single | double

GradientTolerance — Absolute gradient tolerance

Absolute gradient tolerance, specified as a nonnegative scalar.

Let ∇ℒt be the gradient vector of the objective function with respect to the coefficients and the bias term at optimization iteration t. If ‖∇ℒt‖∞ = max|∇ℒt| < GradientTolerance, then optimization terminates.

If you also specify BetaTolerance, then optimization terminates when the software satisfies either stopping criterion.

Example: GradientTolerance=1e-5

Data Types: single | double
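A schematic sketch of how the two convergence tolerances interact; the iterates here are placeholders, not output of the actual LBFGS solver:

Bt   = [0.52; -0.11; 1.30];     % [beta; bias] at iteration t (placeholder values)
Btm1 = [0.52; -0.11; 1.29];     % [beta; bias] at iteration t-1
grad = [1e-7; -3e-6; 2e-6];     % gradient at iteration t
betaTol = 1e-4; gradTol = 1e-6;
stopBeta = norm((Bt - Btm1)./Bt) < betaTol;   % relative change criterion
stopGrad = max(abs(grad)) < gradTol;          % absolute gradient criterion
done = stopBeta || stopGrad                   % either criterion terminates optimization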

IterationLimit — Maximum number of optimization iterations

Maximum number of optimization iterations, specified as a positive integer.

The default value is 1000 if the transformed data fits in memory, as specified by the BlockSize name-value argument. Otherwise, the default value is 100.

Example: IterationLimit=500

Data Types: single | double

Other Kernel Options


BlockSize — Maximum amount of allocated memory

Maximum amount of allocated memory (in megabytes), specified as the comma-separated pair consisting of 'BlockSize' and a positive scalar.

If templateKernel requires more memory than the value of 'BlockSize' to hold the transformed predictor data, then the software uses a block-wise strategy. For details about the block-wise strategy, see Algorithms.

Example: 'BlockSize',1e4

Data Types: single | double
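As a rough sketch, assuming double-precision storage, you can estimate whether the transformed n-by-m predictor matrix exceeds a given BlockSize:

n = 1e6;  m = 2^13;              % observations and expansion dimensions
megabytes = n*m*8/1e6            % 65,536 MB of doubles
usesBlockwise = megabytes > 4e3  % true when larger than a 4000 MB block size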

RandomStream — Random number stream

Random number stream for reproducibility of data transformation, specified as a random stream object. For details, see Random Feature Expansion.

Use RandomStream to reproduce the random basis functions used by templateKernel to transform the predictor data to a high-dimensional space. For details, see Managing the Global Stream Using RandStream and Creating and Controlling a Random Number Stream.

Example: RandomStream=RandStream("mlfg6331_64")
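For example, a sketch of supplying a dedicated, seeded stream so that the random basis is reproducible independently of the global stream:

s = RandStream('mlfg6331_64','Seed',0);   % dedicated stream for the random basis
t = templateKernel('RandomStream',s);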

HessianHistorySize — Size of history buffer for Hessian approximation

Size of the history buffer for Hessian approximation, specified as the comma-separated pair consisting of 'HessianHistorySize' and a positive integer. At each iteration, templateKernel composes the Hessian approximation by using statistics from the latest HessianHistorySize iterations.

Example: 'HessianHistorySize',10

Data Types: single | double

Verbose — Verbosity level

Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and either 0 or 1. Verbose controls the display of diagnostic information at the command line.

  • 0 — templateKernel does not display diagnostic information.

  • 1 — templateKernel displays the value of the objective function, gradient magnitude, and other diagnostic information.

Example: 'Verbose',1

Data Types: single | double

Output Arguments


t — Kernel learner template

Kernel learner template suitable for training a Gaussian kernel model for nonlinear classification or regression, returned as a template object. During training, the software uses default values for empty options.

More About


Random Feature Expansion

Random feature expansion, such as Random Kitchen Sinks [1] or Fastfood [2], is a scheme that approximates Gaussian kernels of the kernel classification algorithm for big data in a computationally efficient way. Random feature expansion is more practical for big data applications that have large training sets, but can also be applied to smaller data sets that fit in memory.

The kernel classification algorithm searches for an optimal hyperplane that separates the data into two classes after mapping features into a high-dimensional space. Nonlinear features that are not linearly separable in a low-dimensional space can be separable in the expanded high-dimensional space. All the calculations for hyperplane classification use only dot products. You can obtain a nonlinear classification model by replacing the dot product x1x2′ with the nonlinear kernel function G(x1,x2) = ⟨φ(x1),φ(x2)⟩, where xi is the ith observation (row vector) and φ(xi) is a transformation that maps xi to a high-dimensional space (called the “kernel trick”). However, evaluating G(x1,x2) (the Gram matrix) for each pair of observations is computationally expensive for a large data set (large n).

The random feature expansion scheme finds a random transformation so that its dot product approximates the Gaussian kernel. That is,

G(x1,x2) = ⟨φ(x1),φ(x2)⟩ ≈ T(x1)T(x2)′,

where T(x) maps x in ℝ^p to a high-dimensional space (ℝ^m). The Random Kitchen Sinks scheme uses the random transformation

T(x) = m^(–1/2) exp(iZx′)′,

where Z ∈ ℝ^(m×p) is a sample drawn from N(0,σ^(–2)) and σ is the kernel scale. This scheme requires O(mp) computation and storage.
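As an illustration, here is a minimal sketch of the Random Kitchen Sinks map in MATLAB, drawing Z from N(0,σ^(–2)) as above; the exact kernel normalization used internally by templateKernel may differ:

rng(0)
p = 4; m = 2^14; sigma = 1.5;
x1 = randn(1,p); x2 = randn(1,p);
Z = randn(m,p)/sigma;                      % rows are draws from N(0,sigma^(-2))
T = @(x) (exp(1i*Z*x.')/sqrt(m)).';        % T(x) = m^(-1/2) exp(iZx')', a 1-by-m row
approx = real(T(x1)*T(x2)')                % dot product in the expanded space
exact = exp(-norm(x1-x2)^2/(2*sigma^2))    % Gaussian kernel value for this sampling of Z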

The Fastfood scheme introduces another random basis V instead of Z, using Hadamard matrices combined with Gaussian scaling matrices. This random basis reduces the computation cost to O(m log p) and reduces storage to O(m).

You can specify values for m and σ by setting NumExpansionDimensions and KernelScale, respectively, of templateKernel.

The templateKernel function uses the Fastfood scheme for random feature expansion, and uses linear classification to train a Gaussian kernel classification model. Unlike solvers in the templateSVM function, which require computation of the n-by-n Gram matrix, the solver in templateKernel only needs to form a matrix of size n-by-m, with m typically much less than n for big data.

Box Constraint

A box constraint is a parameter that controls the maximum penalty imposed on margin-violating observations, and aids in preventing overfitting (regularization). Increasing the box constraint can lead to longer training times.

The box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations.

Algorithms

templateKernel minimizes the regularized objective function using a Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) solver with ridge (L2) regularization. To find the type of LBFGS solver used for training, type FitInfo.Solver in the Command Window.

  • 'LBFGS-fast' — LBFGS solver.

  • 'LBFGS-blockwise' — LBFGS solver with a block-wise strategy. If templateKernel requires more memory than the value of BlockSize to hold the transformed predictor data, then the function uses a block-wise strategy.

  • 'LBFGS-tall' — LBFGS solver with a block-wise strategy for tall arrays.

When templateKernel uses a block-wise strategy, it implements LBFGS by distributing the calculation of the loss and gradient among different parts of the data at each iteration. Also, templateKernel refines the initial estimates of the linear coefficients and the bias term by fitting the model locally to parts of the data and combining the coefficients by averaging. If you specify 'Verbose',1, then templateKernel displays diagnostic information for each data pass and stores the information in the History field of FitInfo.

When templateKernel does not use a block-wise strategy, the initial estimates are zeros. If you specify 'Verbose',1, then templateKernel displays diagnostic information for each iteration and stores the information in the History field of FitInfo.
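A schematic sketch of the block-wise accumulation idea, using a simple squared loss as a stand-in for the actual regularized kernel objective:

n = 1e5; m = 256; blockRows = 1e4;    % rows processed per block
X = randn(n,m); y = randn(n,1);       % stand-in for the transformed predictor data
beta = zeros(m,1);                    % current coefficient estimate
f = 0; g = zeros(m,1);
for i = 1:blockRows:n                 % one pass over the data, block by block
    idx = i:min(i+blockRows-1,n);
    r = X(idx,:)*beta - y(idx);       % residuals for this block only
    f = f + 0.5*sum(r.^2);            % accumulate the loss
    g = g + X(idx,:).'*r;             % accumulate the gradient
end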

References

[1] Rahimi, A., and B. Recht. “Random Features for Large-Scale Kernel Machines.” Advances in Neural Information Processing Systems. Vol. 20, 2008, pp. 1177–1184.

[2] Le, Q., T. Sarlós, and A. Smola. “Fastfood — Approximating Kernel Expansions in Loglinear Time.” Proceedings of the 30th International Conference on Machine Learning. Vol. 28, No. 3, 2013, pp. 244–252.

[3] Huang, P. S., H. Avron, T. N. Sainath, V. Sindhwani, and B. Ramabhadran. “Kernel methods match Deep Neural Networks on TIMIT.” 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. 2014, pp. 205–209.


Version History

Introduced in R2018b
