
ClassificationKernel

Gaussian kernel classification model using random feature expansion

Description

ClassificationKernel is a trained model object for a binary Gaussian kernel classification model using random feature expansion. ClassificationKernel is more practical for big data applications that have large training sets but can also be applied to smaller data sets that fit in memory.

Unlike other classification models, and for economical memory usage, ClassificationKernel model objects do not store the training data. However, they do store information such as the number of dimensions of the expanded space, the kernel scale parameter, prior-class probabilities, and the regularization strength.

You can use trained ClassificationKernel models to continue training using the training data and to predict labels or classification scores for new data. For details, see resume and predict.

Creation

Create a ClassificationKernel object using the fitckernel function. This function maps data in a low-dimensional space into a high-dimensional space, then fits a linear model in the high-dimensional space by minimizing the regularized objective function. The linear model in the high-dimensional space is equivalent to the model with a Gaussian kernel in the low-dimensional space. Available linear classification models include regularized support vector machine (SVM) and logistic regression models.
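For example, a minimal sketch of creating the object (assuming a numeric predictor matrix X and a corresponding label vector Y are already in the workspace):

Mdl = fitckernel(X,Y);                               % Regularized SVM (default learner) in the expanded space
MdlLogistic = fitckernel(X,Y,'Learner','logistic');  % Logistic regression learner instead

Both calls return a ClassificationKernel object; the Learner property records which linear model was fit.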

Properties


Kernel Classification Properties

Learner

Linear classification model type, specified as 'logistic' or 'svm'.

In the following table, f(x)=T(x)β+b.

  • x is an observation (row vector) from p predictor variables.

  • T(·) is a transformation of an observation (row vector) for feature expansion. T(x) maps x in ℝ^p to a high-dimensional space (ℝ^m).

  • β is a vector of coefficients.

  • b is the scalar bias.

Value | Algorithm | Loss Function | FittedLoss Value
'svm' | Support vector machine | Hinge: ℓ[y,f(x)] = max[0, 1 − yf(x)] | 'hinge'
'logistic' | Logistic regression | Deviance (logistic): ℓ[y,f(x)] = log{1 + exp[−yf(x)]} | 'logit'

NumExpansionDimensions

Number of dimensions of the expanded space, specified as a positive integer.

Data Types: single | double

KernelScale

Kernel scale parameter, specified as a positive scalar.

Data Types: char | single | double

BoxConstraint

Box constraint, specified as a positive scalar.

Data Types: double | single

Lambda

Regularization term strength, specified as a nonnegative scalar.

Data Types: single | double

Mu

Since R2023b

This property is read-only.

Predictor means, specified as a numeric vector. If you specify Standardize as 1 or true when you train the kernel model, then the length of the Mu vector is equal to the number of expanded predictors (see ExpandedPredictorNames). The vector contains 0 values for dummy variables corresponding to expanded categorical predictors.

If you set Standardize to 0 or false when you train the kernel model, then the Mu value is an empty vector ([]).

Data Types: double

Sigma

Since R2023b

This property is read-only.

Predictor standard deviations, specified as a numeric vector. If you specify Standardize as 1 or true when you train the kernel model, then the length of the Sigma vector is equal to the number of expanded predictors (see ExpandedPredictorNames). The vector contains 1 values for dummy variables corresponding to expanded categorical predictors.

If you set Standardize to 0 or false when you train the kernel model, then the Sigma value is an empty vector ([]).

Data Types: double
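For example, a brief sketch of inspecting these properties after standardized training (assuming X and Y are in the workspace; the Standardize option is available since R2023b):

MdlStd = fitckernel(X,Y,'Standardize',true); % Standardize predictors before the feature expansion
mu = MdlStd.Mu;       % Means used to center the predictors
sigma = MdlStd.Sigma; % Standard deviations used to scale the predictors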

FittedLoss

This property is read-only.

Loss function used to fit the linear model, specified as 'hinge' or 'logit'.

Value | Algorithm | Loss Function | Learner Value
'hinge' | Support vector machine | Hinge: ℓ[y,f(x)] = max[0, 1 − yf(x)] | 'svm'
'logit' | Logistic regression | Deviance (logistic): ℓ[y,f(x)] = log{1 + exp[−yf(x)]} | 'logistic'

Regularization

Complexity penalty type, which is always 'ridge (L2)'.

The software composes the objective function for minimization from the sum of the average loss function (see FittedLoss) and the regularization term, ridge (L2) penalty.

The ridge (L2) penalty is

(λ/2) ∑_{j=1}^{p} β_j²

where λ specifies the regularization term strength (see Lambda). The software excludes the bias term (β0) from the regularization penalty.
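Combining the two parts, a sketch of the minimized objective under these definitions (written without observation weights) is

min over β and b of  (1/n) ∑_{i=1}^{n} ℓ[y_i, f(x_i)] + (λ/2) ∑_{j=1}^{p} β_j²

where n is the number of training observations and ℓ is the fitted loss (hinge or deviance).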

Other Classification Properties

CategoricalPredictors

Categorical predictor indices, specified as a vector of positive integers. CategoricalPredictors contains index values indicating that the corresponding predictors are categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If none of the predictors are categorical, then this property is empty ([]).

Data Types: single | double

ClassNames

This property is read-only.

Unique class labels used in training, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. ClassNames has the same data type as the class labels Y. (The software treats string arrays as cell arrays of character vectors.) ClassNames also determines the class order.

Data Types: categorical | char | logical | single | double | cell

Cost

This property is read-only.

Misclassification costs, specified as a square numeric matrix. Cost has K rows and columns, where K is the number of classes.

Cost(i,j) is the cost of classifying a point into class j if its true class is i. The order of the rows and columns of Cost corresponds to the order of the classes in ClassNames.

Data Types: double
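For example, a brief sketch of training with an asymmetric cost matrix via the Cost name-value argument of fitckernel (the numbers are illustrative only):

C = [0 2; 1 0];                     % Misclassifying a true first-class observation costs twice as much
MdlCost = fitckernel(X,Y,'Cost',C); % Rows and columns of C follow the class order in MdlCost.ClassNames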

ModelParameters

Parameters used for training the ClassificationKernel model, specified as a structure.

Access fields of ModelParameters using dot notation. For example, access the relative tolerance on the linear coefficients and the bias term by using Mdl.ModelParameters.BetaTolerance.

Data Types: struct

PredictorNames

Predictor names in order of their appearance in the predictor data, specified as a cell array of character vectors. The length of PredictorNames is equal to the number of columns used as predictor variables in the training data X or Tbl.

Data Types: cell

ExpandedPredictorNames

Expanded predictor names, specified as a cell array of character vectors.

If the model uses encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.

Data Types: cell

Prior

This property is read-only.

Prior class probabilities, specified as a numeric vector. Prior has as many elements as classes in ClassNames, and the order of the elements corresponds to the elements of ClassNames.

Data Types: double

ResponseName

Response variable name, specified as a character vector.

Data Types: char

ScoreTransform

Score transformation function to apply to predicted scores, specified as a function name or function handle.

For kernel classification models and before the score transformation, the predicted classification score for the observation x (row vector) is f(x)=T(x)β+b.

  • T(·) is a transformation of an observation for feature expansion.

  • β is the estimated column vector of coefficients.

  • b is the estimated scalar bias.

To change the score transformation function to function, for example, use dot notation.

  • For a built-in function, enter this code and replace function with a value from the table.

    Mdl.ScoreTransform = 'function';

    Value | Description
    "doublelogit" | 1/(1 + e^(−2x))
    "invlogit" | log(x / (1 − x))
    "ismax" | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0
    "logit" | 1/(1 + e^(−x))
    "none" or "identity" | x (no transformation)
    "sign" | −1 for x < 0, 0 for x = 0, 1 for x > 0
    "symmetric" | 2x − 1
    "symmetricismax" | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to −1
    "symmetriclogit" | 2/(1 + e^(−x)) − 1

  • For a MATLAB® function, or a function that you define, enter its function handle.

    Mdl.ScoreTransform = @function;

    function must accept a matrix of the original scores for each class, and then return a matrix of the same size representing the transformed scores for each class.

Data Types: char | function_handle
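For example, a minimal sketch of assigning a custom transform with an anonymous function handle (the scaling constant 3 is illustrative):

Mdl.ScoreTransform = @(s) 2./(1 + exp(-3*s)) - 1; % Accepts a score matrix and returns a matrix of the same size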

Object Functions

edge | Classification edge for Gaussian kernel classification model
incrementalLearner | Convert kernel model for binary classification to incremental learner
lime | Local interpretable model-agnostic explanations (LIME)
loss | Classification loss for Gaussian kernel classification model
margin | Classification margins for Gaussian kernel classification model
partialDependence | Compute partial dependence
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
predict | Predict labels for Gaussian kernel classification model
resume | Resume training of Gaussian kernel classification model
shapley | Shapley values

Examples


Train a binary kernel classification model using SVM.

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere
[n,p] = size(X)
n = 351
p = 34
resp = unique(Y)
resp = 2x1 cell
    {'b'}
    {'g'}

Train a binary kernel classification model that identifies whether the radar return is bad ('b') or good ('g'). Extract a fit summary to determine how well the optimization algorithm fits the model to the data.

rng('default') % For reproducibility
[Mdl,FitInfo] = fitckernel(X,Y)
Mdl = 
  ClassificationKernel
              ResponseName: 'Y'
                ClassNames: {'b'  'g'}
                   Learner: 'svm'
    NumExpansionDimensions: 2048
               KernelScale: 1
                    Lambda: 0.0028
             BoxConstraint: 1


FitInfo = struct with fields:
                  Solver: 'LBFGS-fast'
            LossFunction: 'hinge'
                  Lambda: 0.0028
           BetaTolerance: 1.0000e-04
       GradientTolerance: 1.0000e-06
          ObjectiveValue: 0.2604
       GradientMagnitude: 0.0028
    RelativeChangeInBeta: 8.2512e-05
                 FitTime: 0.0960
                 History: []

Mdl is a ClassificationKernel model. To estimate the classification error, you can pass Mdl and the training data (for the in-sample error) or new data to the loss function. Or, you can pass Mdl and new predictor data to the predict function to predict class labels for new observations. You can also pass Mdl and the training data to the resume function to continue training.

FitInfo is a structure array containing optimization information. Use FitInfo to determine whether optimization termination measurements are satisfactory.

For better accuracy, you can increase the maximum number of optimization iterations ('IterationLimit') and decrease the tolerance values ('BetaTolerance' and 'GradientTolerance') by using name-value pair arguments, as in the sketch below. Doing so can improve measures such as ObjectiveValue and RelativeChangeInBeta in FitInfo. You can also optimize model parameters by using the 'OptimizeHyperparameters' name-value pair argument.
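A possible sketch (the particular values are illustrative, not recommended settings):

[Mdl2,FitInfo2] = fitckernel(X,Y,'IterationLimit',200, ...
    'BetaTolerance',1e-6,'GradientTolerance',1e-8);
FitInfo2.ObjectiveValue % Check whether the objective value improved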

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere

Partition the data set into training and test sets. Specify a 20% holdout sample for the test set.

rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.20);
trainingInds = training(Partition); % Indices for the training set
XTrain = X(trainingInds,:);
YTrain = Y(trainingInds);
testInds = test(Partition); % Indices for the test set
XTest = X(testInds,:);
YTest = Y(testInds);

Train a binary kernel classification model that identifies whether the radar return is bad ('b') or good ('g').

Mdl = fitckernel(XTrain,YTrain,'IterationLimit',5,'Verbose',1);
|=================================================================================================================|
| Solver |  Pass  |   Iteration  |   Objective   |     Step      |    Gradient   |    Relative    |  sum(beta~=0) |
|        |        |              |               |               |   magnitude   | change in Beta |               |
|=================================================================================================================|
|  LBFGS |      1 |            0 |  1.000000e+00 |  0.000000e+00 |  2.811388e-01 |                |             0 |
|  LBFGS |      1 |            1 |  7.585395e-01 |  4.000000e+00 |  3.594306e-01 |   1.000000e+00 |          2048 |
|  LBFGS |      1 |            2 |  7.160994e-01 |  1.000000e+00 |  2.028470e-01 |   6.923988e-01 |          2048 |
|  LBFGS |      1 |            3 |  6.825272e-01 |  1.000000e+00 |  2.846975e-02 |   2.388909e-01 |          2048 |
|  LBFGS |      1 |            4 |  6.699435e-01 |  1.000000e+00 |  1.779359e-02 |   1.325304e-01 |          2048 |
|  LBFGS |      1 |            5 |  6.535619e-01 |  1.000000e+00 |  2.669039e-01 |   4.112952e-01 |          2048 |
|=================================================================================================================|

Mdl is a ClassificationKernel model.

Predict the test-set labels, construct a confusion matrix for the test set, and estimate the classification error for the test set.

label = predict(Mdl,XTest);
ConfusionTest = confusionchart(YTest,label);

Figure contains an object of type ConfusionMatrixChart.

L = loss(Mdl,XTest,YTest)
L = 0.3594

Mdl misclassifies all bad radar returns as good returns.

Continue training by using resume. This function continues training with the same options used for training Mdl.

UpdatedMdl = resume(Mdl,XTrain,YTrain);
|=================================================================================================================|
| Solver |  Pass  |   Iteration  |   Objective   |     Step      |    Gradient   |    Relative    |  sum(beta~=0) |
|        |        |              |               |               |   magnitude   | change in Beta |               |
|=================================================================================================================|
|  LBFGS |      1 |            0 |  6.535619e-01 |  0.000000e+00 |  2.669039e-01 |                |          2048 |
|  LBFGS |      1 |            1 |  6.132547e-01 |  1.000000e+00 |  6.355537e-03 |   1.522092e-01 |          2048 |
|  LBFGS |      1 |            2 |  5.938316e-01 |  4.000000e+00 |  3.202847e-02 |   1.498036e-01 |          2048 |
|  LBFGS |      1 |            3 |  4.169274e-01 |  1.000000e+00 |  1.530249e-01 |   7.234253e-01 |          2048 |
|  LBFGS |      1 |            4 |  3.679212e-01 |  5.000000e-01 |  2.740214e-01 |   2.495886e-01 |          2048 |
|  LBFGS |      1 |            5 |  3.332261e-01 |  1.000000e+00 |  1.423488e-02 |   9.558680e-02 |          2048 |
|  LBFGS |      1 |            6 |  3.235335e-01 |  1.000000e+00 |  7.117438e-03 |   7.137260e-02 |          2048 |
|  LBFGS |      1 |            7 |  3.112331e-01 |  1.000000e+00 |  6.049822e-02 |   1.252157e-01 |          2048 |
|  LBFGS |      1 |            8 |  2.972144e-01 |  1.000000e+00 |  7.117438e-03 |   5.796240e-02 |          2048 |
|  LBFGS |      1 |            9 |  2.837450e-01 |  1.000000e+00 |  8.185053e-02 |   1.484733e-01 |          2048 |
|  LBFGS |      1 |           10 |  2.797642e-01 |  1.000000e+00 |  3.558719e-02 |   5.856842e-02 |          2048 |
|  LBFGS |      1 |           11 |  2.771280e-01 |  1.000000e+00 |  2.846975e-02 |   2.349433e-02 |          2048 |
|  LBFGS |      1 |           12 |  2.741570e-01 |  1.000000e+00 |  3.914591e-02 |   3.113194e-02 |          2048 |
|  LBFGS |      1 |           13 |  2.725701e-01 |  5.000000e-01 |  1.067616e-01 |   8.729821e-02 |          2048 |
|  LBFGS |      1 |           14 |  2.667147e-01 |  1.000000e+00 |  3.914591e-02 |   3.491723e-02 |          2048 |
|  LBFGS |      1 |           15 |  2.621152e-01 |  1.000000e+00 |  7.117438e-03 |   5.104726e-02 |          2048 |
|  LBFGS |      1 |           16 |  2.601652e-01 |  1.000000e+00 |  3.558719e-02 |   3.764904e-02 |          2048 |
|  LBFGS |      1 |           17 |  2.589052e-01 |  1.000000e+00 |  3.202847e-02 |   3.655744e-02 |          2048 |
|  LBFGS |      1 |           18 |  2.583185e-01 |  1.000000e+00 |  7.117438e-03 |   6.490571e-02 |          2048 |
|  LBFGS |      1 |           19 |  2.556482e-01 |  1.000000e+00 |  9.252669e-02 |   4.601390e-02 |          2048 |
|  LBFGS |      1 |           20 |  2.542643e-01 |  1.000000e+00 |  7.117438e-02 |   4.141838e-02 |          2048 |
|=================================================================================================================|
| Solver |  Pass  |   Iteration  |   Objective   |     Step      |    Gradient   |    Relative    |  sum(beta~=0) |
|        |        |              |               |               |   magnitude   | change in Beta |               |
|=================================================================================================================|
|  LBFGS |      1 |           21 |  2.532117e-01 |  1.000000e+00 |  1.067616e-02 |   1.661720e-02 |          2048 |
|  LBFGS |      1 |           22 |  2.529890e-01 |  1.000000e+00 |  2.135231e-02 |   1.231678e-02 |          2048 |
|  LBFGS |      1 |           23 |  2.523232e-01 |  1.000000e+00 |  3.202847e-02 |   1.958586e-02 |          2048 |
|  LBFGS |      1 |           24 |  2.506736e-01 |  1.000000e+00 |  1.779359e-02 |   2.474613e-02 |          2048 |
|  LBFGS |      1 |           25 |  2.501995e-01 |  1.000000e+00 |  1.779359e-02 |   2.514352e-02 |          2048 |
|  LBFGS |      1 |           26 |  2.488242e-01 |  1.000000e+00 |  3.558719e-03 |   1.531810e-02 |          2048 |
|  LBFGS |      1 |           27 |  2.485295e-01 |  5.000000e-01 |  3.202847e-02 |   1.229760e-02 |          2048 |
|  LBFGS |      1 |           28 |  2.482244e-01 |  1.000000e+00 |  4.270463e-02 |   8.970983e-03 |          2048 |
|  LBFGS |      1 |           29 |  2.479714e-01 |  1.000000e+00 |  3.558719e-03 |   7.393900e-03 |          2048 |
|  LBFGS |      1 |           30 |  2.477316e-01 |  1.000000e+00 |  3.202847e-02 |   3.268087e-03 |          2048 |
|  LBFGS |      1 |           31 |  2.476178e-01 |  2.500000e-01 |  3.202847e-02 |   5.445890e-03 |          2048 |
|  LBFGS |      1 |           32 |  2.474874e-01 |  1.000000e+00 |  1.779359e-02 |   3.535903e-03 |          2048 |
|  LBFGS |      1 |           33 |  2.473980e-01 |  1.000000e+00 |  7.117438e-03 |   2.821725e-03 |          2048 |
|  LBFGS |      1 |           34 |  2.472935e-01 |  1.000000e+00 |  3.558719e-03 |   2.699880e-03 |          2048 |
|  LBFGS |      1 |           35 |  2.471418e-01 |  1.000000e+00 |  3.558719e-03 |   1.242523e-02 |          2048 |
|  LBFGS |      1 |           36 |  2.469862e-01 |  1.000000e+00 |  2.846975e-02 |   7.895605e-03 |          2048 |
|  LBFGS |      1 |           37 |  2.469598e-01 |  1.000000e+00 |  2.135231e-02 |   6.657676e-03 |          2048 |
|  LBFGS |      1 |           38 |  2.466941e-01 |  1.000000e+00 |  3.558719e-02 |   4.654690e-03 |          2048 |
|  LBFGS |      1 |           39 |  2.466660e-01 |  5.000000e-01 |  1.423488e-02 |   2.885769e-03 |          2048 |
|  LBFGS |      1 |           40 |  2.465605e-01 |  1.000000e+00 |  3.558719e-03 |   4.562565e-03 |          2048 |
|=================================================================================================================|
| Solver |  Pass  |   Iteration  |   Objective   |     Step      |    Gradient   |    Relative    |  sum(beta~=0) |
|        |        |              |               |               |   magnitude   | change in Beta |               |
|=================================================================================================================|
|  LBFGS |      1 |           41 |  2.465362e-01 |  1.000000e+00 |  1.423488e-02 |   5.652180e-03 |          2048 |
|  LBFGS |      1 |           42 |  2.463528e-01 |  1.000000e+00 |  3.558719e-03 |   2.389759e-03 |          2048 |
|  LBFGS |      1 |           43 |  2.463207e-01 |  1.000000e+00 |  1.511170e-03 |   3.738286e-03 |          2048 |
|  LBFGS |      1 |           44 |  2.462585e-01 |  5.000000e-01 |  7.117438e-02 |   2.321693e-03 |          2048 |
|  LBFGS |      1 |           45 |  2.461742e-01 |  1.000000e+00 |  7.117438e-03 |   2.599725e-03 |          2048 |
|  LBFGS |      1 |           46 |  2.461434e-01 |  1.000000e+00 |  3.202847e-02 |   3.186923e-03 |          2048 |
|  LBFGS |      1 |           47 |  2.461115e-01 |  1.000000e+00 |  7.117438e-03 |   1.530711e-03 |          2048 |
|  LBFGS |      1 |           48 |  2.460814e-01 |  1.000000e+00 |  1.067616e-02 |   1.811714e-03 |          2048 |
|  LBFGS |      1 |           49 |  2.460533e-01 |  5.000000e-01 |  1.423488e-02 |   1.012252e-03 |          2048 |
|  LBFGS |      1 |           50 |  2.460111e-01 |  1.000000e+00 |  1.423488e-02 |   4.166762e-03 |          2048 |
|  LBFGS |      1 |           51 |  2.459414e-01 |  1.000000e+00 |  1.067616e-02 |   3.271946e-03 |          2048 |
|  LBFGS |      1 |           52 |  2.458809e-01 |  1.000000e+00 |  1.423488e-02 |   1.846440e-03 |          2048 |
|  LBFGS |      1 |           53 |  2.458479e-01 |  1.000000e+00 |  1.067616e-02 |   1.180871e-03 |          2048 |
|  LBFGS |      1 |           54 |  2.458146e-01 |  1.000000e+00 |  1.455008e-03 |   1.422954e-03 |          2048 |
|  LBFGS |      1 |           55 |  2.457878e-01 |  1.000000e+00 |  7.117438e-03 |   1.880892e-03 |          2048 |
|  LBFGS |      1 |           56 |  2.457519e-01 |  1.000000e+00 |  2.491103e-02 |   1.074764e-03 |          2048 |
|  LBFGS |      1 |           57 |  2.457420e-01 |  1.000000e+00 |  7.473310e-02 |   9.511878e-04 |          2048 |
|  LBFGS |      1 |           58 |  2.457212e-01 |  1.000000e+00 |  3.558719e-03 |   3.718564e-04 |          2048 |
|  LBFGS |      1 |           59 |  2.457089e-01 |  1.000000e+00 |  4.270463e-02 |   6.237270e-04 |          2048 |
|  LBFGS |      1 |           60 |  2.457047e-01 |  5.000000e-01 |  1.423488e-02 |   3.647573e-04 |          2048 |
|=================================================================================================================|
| Solver |  Pass  |   Iteration  |   Objective   |     Step      |    Gradient   |    Relative    |  sum(beta~=0) |
|        |        |              |               |               |   magnitude   | change in Beta |               |
|=================================================================================================================|
|  LBFGS |      1 |           61 |  2.456991e-01 |  1.000000e+00 |  1.423488e-02 |   5.666884e-04 |          2048 |
|  LBFGS |      1 |           62 |  2.456898e-01 |  1.000000e+00 |  1.779359e-02 |   4.697056e-04 |          2048 |
|  LBFGS |      1 |           63 |  2.456792e-01 |  1.000000e+00 |  1.779359e-02 |   5.984927e-04 |          2048 |
|  LBFGS |      1 |           64 |  2.456603e-01 |  1.000000e+00 |  1.403782e-03 |   5.414985e-04 |          2048 |
|  LBFGS |      1 |           65 |  2.456482e-01 |  1.000000e+00 |  3.558719e-03 |   6.506293e-04 |          2048 |
|  LBFGS |      1 |           66 |  2.456358e-01 |  1.000000e+00 |  1.476262e-03 |   1.284139e-03 |          2048 |
|  LBFGS |      1 |           67 |  2.456124e-01 |  1.000000e+00 |  3.558719e-03 |   8.636596e-04 |          2048 |
|  LBFGS |      1 |           68 |  2.455980e-01 |  1.000000e+00 |  1.067616e-02 |   9.861527e-04 |          2048 |
|  LBFGS |      1 |           69 |  2.455780e-01 |  1.000000e+00 |  1.067616e-02 |   5.102487e-04 |          2048 |
|  LBFGS |      1 |           70 |  2.455633e-01 |  1.000000e+00 |  3.558719e-03 |   1.228077e-03 |          2048 |
|  LBFGS |      1 |           71 |  2.455449e-01 |  1.000000e+00 |  1.423488e-02 |   7.864590e-04 |          2048 |
|  LBFGS |      1 |           72 |  2.455261e-01 |  1.000000e+00 |  3.558719e-02 |   1.090815e-03 |          2048 |
|  LBFGS |      1 |           73 |  2.455142e-01 |  1.000000e+00 |  1.067616e-02 |   1.701506e-03 |          2048 |
|  LBFGS |      1 |           74 |  2.455075e-01 |  1.000000e+00 |  1.779359e-02 |   1.504577e-03 |          2048 |
|  LBFGS |      1 |           75 |  2.455008e-01 |  1.000000e+00 |  3.914591e-02 |   1.144021e-03 |          2048 |
|  LBFGS |      1 |           76 |  2.454943e-01 |  1.000000e+00 |  2.491103e-02 |   3.015254e-04 |          2048 |
|  LBFGS |      1 |           77 |  2.454918e-01 |  5.000000e-01 |  3.202847e-02 |   9.837523e-04 |          2048 |
|  LBFGS |      1 |           78 |  2.454870e-01 |  1.000000e+00 |  1.779359e-02 |   4.328953e-04 |          2048 |
|  LBFGS |      1 |           79 |  2.454865e-01 |  5.000000e-01 |  3.558719e-03 |   7.126815e-04 |          2048 |
|  LBFGS |      1 |           80 |  2.454775e-01 |  1.000000e+00 |  5.693950e-02 |   8.992562e-04 |          2048 |
|=================================================================================================================|
| Solver |  Pass  |   Iteration  |   Objective   |     Step      |    Gradient   |    Relative    |  sum(beta~=0) |
|        |        |              |               |               |   magnitude   | change in Beta |               |
|=================================================================================================================|
|  LBFGS |      1 |           81 |  2.454686e-01 |  1.000000e+00 |  1.183730e-03 |   1.590246e-04 |          2048 |
|  LBFGS |      1 |           82 |  2.454612e-01 |  1.000000e+00 |  2.135231e-02 |   1.389570e-04 |          2048 |
|  LBFGS |      1 |           83 |  2.454506e-01 |  1.000000e+00 |  3.558719e-03 |   6.162089e-04 |          2048 |
|  LBFGS |      1 |           84 |  2.454436e-01 |  1.000000e+00 |  1.423488e-02 |   1.877414e-03 |          2048 |
|  LBFGS |      1 |           85 |  2.454378e-01 |  1.000000e+00 |  1.423488e-02 |   3.370852e-04 |          2048 |
|  LBFGS |      1 |           86 |  2.454249e-01 |  1.000000e+00 |  1.423488e-02 |   8.133615e-04 |          2048 |
|  LBFGS |      1 |           87 |  2.454101e-01 |  1.000000e+00 |  1.067616e-02 |   3.872088e-04 |          2048 |
|  LBFGS |      1 |           88 |  2.453963e-01 |  1.000000e+00 |  1.779359e-02 |   5.670260e-04 |          2048 |
|  LBFGS |      1 |           89 |  2.453866e-01 |  1.000000e+00 |  1.067616e-02 |   1.444984e-03 |          2048 |
|  LBFGS |      1 |           90 |  2.453821e-01 |  1.000000e+00 |  7.117438e-03 |   2.457270e-03 |          2048 |
|  LBFGS |      1 |           91 |  2.453790e-01 |  5.000000e-01 |  6.761566e-02 |   8.228766e-04 |          2048 |
|  LBFGS |      1 |           92 |  2.453603e-01 |  1.000000e+00 |  2.135231e-02 |   1.084233e-03 |          2048 |
|  LBFGS |      1 |           93 |  2.453540e-01 |  1.000000e+00 |  2.135231e-02 |   2.060005e-04 |          2048 |
|  LBFGS |      1 |           94 |  2.453482e-01 |  1.000000e+00 |  1.779359e-02 |   1.560883e-04 |          2048 |
|  LBFGS |      1 |           95 |  2.453461e-01 |  1.000000e+00 |  1.779359e-02 |   1.614693e-03 |          2048 |
|  LBFGS |      1 |           96 |  2.453371e-01 |  1.000000e+00 |  3.558719e-02 |   2.145835e-04 |          2048 |
|  LBFGS |      1 |           97 |  2.453305e-01 |  1.000000e+00 |  4.270463e-02 |   7.602088e-04 |          2048 |
|  LBFGS |      1 |           98 |  2.453283e-01 |  2.500000e-01 |  2.135231e-02 |   3.422253e-04 |          2048 |
|  LBFGS |      1 |           99 |  2.453246e-01 |  1.000000e+00 |  3.558719e-03 |   3.872561e-04 |          2048 |
|  LBFGS |      1 |          100 |  2.453214e-01 |  1.000000e+00 |  3.202847e-02 |   1.732237e-04 |          2048 |
|=================================================================================================================|
| Solver |  Pass  |   Iteration  |   Objective   |     Step      |    Gradient   |    Relative    |  sum(beta~=0) |
|        |        |              |               |               |   magnitude   | change in Beta |               |
|=================================================================================================================|
|  LBFGS |      1 |          101 |  2.453168e-01 |  1.000000e+00 |  1.067616e-02 |   3.065286e-04 |          2048 |
|  LBFGS |      1 |          102 |  2.453155e-01 |  5.000000e-01 |  4.626335e-02 |   3.402368e-04 |          2048 |
|  LBFGS |      1 |          103 |  2.453136e-01 |  1.000000e+00 |  1.779359e-02 |   2.215029e-04 |          2048 |
|  LBFGS |      1 |          104 |  2.453119e-01 |  1.000000e+00 |  3.202847e-02 |   4.142355e-04 |          2048 |
|  LBFGS |      1 |          105 |  2.453093e-01 |  1.000000e+00 |  1.423488e-02 |   2.186007e-04 |          2048 |
|  LBFGS |      1 |          106 |  2.453090e-01 |  1.000000e+00 |  2.846975e-02 |   1.338602e-03 |          2048 |
|  LBFGS |      1 |          107 |  2.453048e-01 |  1.000000e+00 |  1.423488e-02 |   3.208296e-04 |          2048 |
|  LBFGS |      1 |          108 |  2.453040e-01 |  1.000000e+00 |  3.558719e-02 |   1.294488e-03 |          2048 |
|  LBFGS |      1 |          109 |  2.452977e-01 |  1.000000e+00 |  1.423488e-02 |   8.328380e-04 |          2048 |
|  LBFGS |      1 |          110 |  2.452934e-01 |  1.000000e+00 |  2.135231e-02 |   5.149259e-04 |          2048 |
|  LBFGS |      1 |          111 |  2.452886e-01 |  1.000000e+00 |  1.779359e-02 |   3.650664e-04 |          2048 |
|  LBFGS |      1 |          112 |  2.452854e-01 |  1.000000e+00 |  1.067616e-02 |   2.633981e-04 |          2048 |
|  LBFGS |      1 |          113 |  2.452836e-01 |  1.000000e+00 |  1.067616e-02 |   1.804300e-04 |          2048 |
|  LBFGS |      1 |          114 |  2.452817e-01 |  1.000000e+00 |  7.117438e-03 |   4.251642e-04 |          2048 |
|  LBFGS |      1 |          115 |  2.452741e-01 |  1.000000e+00 |  1.779359e-02 |   9.018440e-04 |          2048 |
|  LBFGS |      1 |          116 |  2.452691e-01 |  1.000000e+00 |  2.135231e-02 |   9.941716e-05 |          2048 |
|=================================================================================================================|

Predict the test-set labels, construct a confusion matrix for the test set, and estimate the classification error for the test set.

UpdatedLabel = predict(UpdatedMdl,XTest);
UpdatedConfusionTest = confusionchart(YTest,UpdatedLabel);

Figure contains an object of type ConfusionMatrixChart.

UpdatedL = loss(UpdatedMdl,XTest,YTest)
UpdatedL = 0.1284

The classification error decreases after resume updates the classification model with more iterations.

Version History

Introduced in R2017b
