# kfoldMargin

Classification margins for cross-validated kernel ECOC model

## Description


margin = kfoldMargin(CVMdl) returns the classification margins obtained by the cross-validated kernel ECOC model (ClassificationPartitionedKernelECOC) CVMdl. For every fold, kfoldMargin computes the classification margins for validation-fold observations using a model trained on training-fold observations.

margin = kfoldMargin(CVMdl,Name,Value) returns classification margins with additional options specified by one or more name-value pair arguments. For example, specify the binary learner loss function, decoding scheme, or verbosity level.
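For example, assuming CVMdl is a cross-validated kernel ECOC model such as the one created in the examples below, the two call forms look like this (a sketch of typical usage, not a live session):

margin = kfoldMargin(CVMdl);                  % default binary loss and decoding
margin = kfoldMargin(CVMdl,'BinaryLoss','hinge','Decoding','lossweighted');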

## Examples


Load Fisher's iris data set. X contains flower measurements, and Y contains the names of flower species.

load fisheriris
X = meas;
Y = species;

Cross-validate an ECOC model composed of kernel binary learners.

CVMdl = fitcecoc(X,Y,'Learners','kernel','CrossVal','on')
CVMdl = 
  ClassificationPartitionedKernelECOC
    CrossValidatedModel: 'KernelECOC'
           ResponseName: 'Y'
        NumObservations: 150
                  KFold: 10
              Partition: [1x1 cvpartition]
             ClassNames: {'setosa'  'versicolor'  'virginica'}
         ScoreTransform: 'none'

CVMdl is a ClassificationPartitionedKernelECOC model. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'CrossVal'.
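For instance, a minimal sketch of requesting 5-fold instead of 10-fold cross-validation (assuming the same X and Y):

CVMdl5 = fitcecoc(X,Y,'Learners','kernel','KFold',5); % 5 validation folds instead of 10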

Estimate the classification margins for validation-fold observations.

m = kfoldMargin(CVMdl);
size(m)
ans = 1×2

150     1

m is a 150-by-1 vector. m(j) is the classification margin for observation j.

Plot the k-fold margins using a boxplot.

boxplot(m,'Labels','All Observations')
title('Distribution of Margins')

Perform feature selection by comparing k-fold margins from multiple models. Based solely on this criterion, the classifier with the greatest margins is the best classifier.

Load Fisher's iris data set. X contains flower measurements, and Y contains the names of flower species.

load fisheriris
X = meas;
Y = species;

Randomly choose half of the predictor variables.

rng(1); % For reproducibility
p = size(X,2); % Number of predictors
idxPart = randsample(p,ceil(0.5*p));

Cross-validate two ECOC models composed of kernel classification models: one that uses all of the predictors, and one that uses half of the predictors.

CVMdl = fitcecoc(X,Y,'Learners','kernel','CrossVal','on');
PCVMdl = fitcecoc(X(:,idxPart),Y,'Learners','kernel','CrossVal','on');

CVMdl and PCVMdl are ClassificationPartitionedKernelECOC models. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'CrossVal'.

Estimate the k-fold margins for each classifier.

fullMargins = kfoldMargin(CVMdl);
partMargins = kfoldMargin(PCVMdl);

Plot the distribution of the margin sets using box plots.

boxplot([fullMargins partMargins], ...
'Labels',{'All Predictors','Half of the Predictors'});
title('Distribution of Margins')

The PCVMdl margin distribution is similar to the CVMdl margin distribution.
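To back up the visual comparison with a number, you can summarize each margin set; for example, this sketch compares the medians (any summary statistic works):

median([fullMargins partMargins]) % similar values support the boxplot impression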

## Input Arguments


Cross-validated kernel ECOC model, specified as a ClassificationPartitionedKernelECOC model. You can create a ClassificationPartitionedKernelECOC model by training an ECOC model using fitcecoc and specifying these name-value pair arguments:

• 'Learners' – Set the value to 'kernel', a template object returned by templateKernel, or a cell array of such template objects.

• One of the arguments 'CrossVal', 'CVPartition', 'Holdout', 'KFold', or 'Leaveout'.
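For example, a minimal sketch of both creation routes, assuming a predictor matrix X and class labels Y:

% Route 1: name the learner type directly and cross-validate
CVMdl = fitcecoc(X,Y,'Learners','kernel','CrossVal','on');

% Route 2: pass a kernel template and a cross-validation argument
t = templateKernel('Learner','svm');
CVMdl = fitcecoc(X,Y,'Learners',t,'KFold',5);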

### Name-Value Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: kfoldMargin(CVMdl,'Verbose',1) specifies to display diagnostic messages in the Command Window.

Binary learner loss function, specified as the comma-separated pair consisting of 'BinaryLoss' and a built-in loss function name or function handle.

• This table contains names and descriptions of the built-in functions, where yj is a class label for a particular binary learner (in the set {–1,1,0}), sj is the score for observation j, and g(yj,sj) is the binary loss formula.

| Value | Description | Score Domain | g(yj,sj) |
| --- | --- | --- | --- |
| 'binodeviance' | Binomial deviance | (–∞,∞) | log[1 + exp(–2yjsj)]/[2log(2)] |
| 'exponential' | Exponential | (–∞,∞) | exp(–yjsj)/2 |
| 'hamming' | Hamming | [0,1] or (–∞,∞) | [1 – sign(yjsj)]/2 |
| 'hinge' | Hinge | (–∞,∞) | max(0,1 – yjsj)/2 |
| 'linear' | Linear | (–∞,∞) | (1 – yjsj)/2 |
| 'logit' | Logistic | (–∞,∞) | log[1 + exp(–yjsj)]/[2log(2)] |
| 'quadratic' | Quadratic | [0,1] | [1 – yj(2sj – 1)]²/2 |

The software normalizes binary losses such that the loss is 0.5 when yj = 0. Also, the software calculates the mean binary loss for each class.

• For a custom binary loss function, for example, customFunction, specify its function handle 'BinaryLoss',@customFunction (a sketch follows this list).

customFunction has this form:

bLoss = customFunction(M,s)

where:

• M is the K-by-L coding matrix stored in CVMdl.CodingMatrix.

• s is the 1-by-L row vector of classification scores.

• bLoss is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class.

• K is the number of classes.

• L is the number of binary learners.
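For illustration, here is a minimal sketch of a custom function that implements the linear binary loss from the table above and aggregates with the mean over the binary learners (customBL is a hypothetical name; the sketch assumes implicit expansion, available in R2016b and later):

customBL = @(M,s) mean((1 - M.*s)/2, 2, 'omitnan'); % linear loss, averaged per class
margin = kfoldMargin(CVMdl,'BinaryLoss',customBL);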

By default, if all binary learners are kernel classification models using SVM, then BinaryLoss is 'hinge'. If all binary learners are kernel classification models using logistic regression, then BinaryLoss is 'quadratic'.

Example: 'BinaryLoss','binodeviance'

Data Types: char | string | function_handle

Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of 'Decoding' and 'lossweighted' or 'lossbased'. For more information, see Binary Loss.

Example: 'Decoding','lossbased'

Estimation options, specified as the comma-separated pair consisting of 'Options' and a structure array returned by statset.

To invoke parallel computing:

• You need a Parallel Computing Toolbox™ license.

• Specify 'Options',statset('UseParallel',true).

Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and 0 or 1. Verbose controls the number of diagnostic messages that the software displays in the Command Window.

If Verbose is 0, then the software does not display diagnostic messages. Otherwise, the software displays diagnostic messages.

Example: 'Verbose',1

Data Types: single | double

## Output Arguments


Classification margins, returned as a numeric vector. margin is an n-by-1 vector, where each row is the margin of the corresponding observation and n is the number of observations (size(CVMdl.Y,1)).

## More About

### Classification Margin

The classification margin is, for each observation, the difference between the negative loss for the true class and the maximal negative loss among the false classes. If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
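As an illustration of this definition, the following sketch recomputes the margins from the negated average binary losses returned by kfoldPredict (assuming the CVMdl from the examples above, with default BinaryLoss and Decoding options so that the result matches kfoldMargin):

[~,NegLoss] = kfoldPredict(CVMdl);                % negated average binary loss per class
[~,trueIdx] = ismember(CVMdl.Y,CVMdl.ClassNames); % column index of each true class
n = numel(trueIdx);
linIdx = sub2ind(size(NegLoss),(1:n)',trueIdx);
negLossTrue = NegLoss(linIdx);                    % negative loss for the true class
NegLossFalse = NegLoss;
NegLossFalse(linIdx) = -Inf;                      % exclude the true class from the maximum
mManual = negLossTrue - max(NegLossFalse,[],2);   % should agree with kfoldMargin(CVMdl)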

### Binary Loss

A binary loss is a function of the class and classification score that determines how well a binary learner classifies an observation into the class.

Suppose the following:

• mkj is element (k,j) of the coding design matrix M (that is, the code corresponding to class k of binary learner j).

• sj is the score of binary learner j for an observation.

• g is the binary loss function.

• $\hat{k}$ is the predicted class for the observation.

In loss-based decoding [2], [3], the class producing the minimum sum of the binary losses over binary learners determines the predicted class of an observation, that is,

$$\hat{k} = \underset{k}{\operatorname{argmin}} \sum_{j=1}^{L} |m_{kj}| \, g(m_{kj}, s_j).$$

In loss-weighted decoding [2], [3], the class producing the minimum average of the binary losses over binary learners determines the predicted class of an observation, that is,

$$\hat{k} = \underset{k}{\operatorname{argmin}} \frac{\sum_{j=1}^{L} |m_{kj}| \, g(m_{kj}, s_j)}{\sum_{j=1}^{L} |m_{kj}|}.$$

Allwein et al. [1] suggest that loss-weighted decoding improves classification accuracy because it keeps loss values for all classes in the same dynamic range.
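To make the two decoding rules concrete, this sketch evaluates both for a single observation using a hypothetical coding matrix M, hypothetical scores s, and the hinge loss as g (any g from the table below works):

M = [1 -1 0; -1 0 1; 0 1 -1];       % hypothetical one-versus-one coding, K = 3, L = 3
s = [0.4 -0.1 0.7];                 % hypothetical binary learner scores
g = @(y,x) max(0, 1 - y.*x)/2;      % hinge binary loss
B = abs(M).*g(M,s);                 % |m_kj|*g(m_kj,s_j) for every class and learner
[~,kLossBased] = min(sum(B,2));                    % loss-based decoding
[~,kLossWeighted] = min(sum(B,2)./sum(abs(M),2));  % loss-weighted decoding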

This table summarizes the supported loss functions, where yj is a class label for a particular binary learner (in the set {–1,1,0}), sj is the score for observation j, and g(yj,sj) is the binary loss function.

| Value | Description | Score Domain | g(yj,sj) |
| --- | --- | --- | --- |
| 'binodeviance' | Binomial deviance | (–∞,∞) | log[1 + exp(–2yjsj)]/[2log(2)] |
| 'exponential' | Exponential | (–∞,∞) | exp(–yjsj)/2 |
| 'hamming' | Hamming | [0,1] or (–∞,∞) | [1 – sign(yjsj)]/2 |
| 'hinge' | Hinge | (–∞,∞) | max(0,1 – yjsj)/2 |
| 'linear' | Linear | (–∞,∞) | (1 – yjsj)/2 |
| 'logit' | Logistic | (–∞,∞) | log[1 + exp(–yjsj)]/[2log(2)] |
| 'quadratic' | Quadratic | [0,1] | [1 – yj(2sj – 1)]²/2 |

The software normalizes binary losses so that the loss is 0.5 when yj = 0, and aggregates them using the average over the binary learners [1].

Do not confuse the binary loss with the overall classification loss (specified by the 'LossFun' name-value pair argument of the loss and predict object functions), which measures how well an ECOC classifier performs as a whole.

## References

[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.

[2] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.

[3] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recognition Letters. Vol. 30, Issue 3, 2009, pp. 285–297.