kfoldEdge
Classification edge for cross-validated linear classification model
Description
e = kfoldEdge(CVMdl) returns the cross-validated classification edges obtained by the cross-validated, binary, linear classification model CVMdl. That is, for every fold, kfoldEdge estimates the classification edge for the observations that it holds out when it trains using all other observations.

e contains a classification edge for each regularization strength in the linear classification models that comprise CVMdl.

e = kfoldEdge(CVMdl,Name=Value) uses additional options specified by one or more name-value arguments. For example, you can indicate which folds to use for the edge calculation.
Examples
Estimate k-Fold Cross-Validation Edge
Load the NLP data set.
load nlpdata
X
is a sparse matrix of predictor data, and Y
is a categorical vector of class labels. There are more than two classes in the data.
The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. So, identify the labels that correspond to the Statistics and Machine Learning Toolbox™ documentation web pages.
Ystats = Y == 'stats';
Cross-validate a binary, linear classification model that can identify whether the word counts in a documentation web page are from the Statistics and Machine Learning Toolbox™ documentation.
rng(1); % For reproducibility
CVMdl = fitclinear(X,Ystats,'CrossVal','on');
CVMdl
is a ClassificationPartitionedLinear
model. By default, the software implements 10-fold cross-validation. You can alter the number of folds using the 'KFold'
name-value pair argument.
Estimate the average of the out-of-fold edges.
e = kfoldEdge(CVMdl)
e = 8.1243
Alternatively, you can obtain the per-fold edges by specifying the name-value pair 'Mode','individual'
in kfoldEdge
.
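For example, this call (continuing from the model above) returns a 10-by-1 vector of per-fold edges, because CVMdl uses 10 folds and a single regularization strength:

eInd = kfoldEdge(CVMdl,'Mode','individual'); % One edge per fold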
Feature Selection Using k-fold Edges
One way to perform feature selection is to compare k-fold edges from multiple models. Based solely on this criterion, the classifier with the highest edge is the best classifier.
Load the NLP data set. Preprocess the data as in Estimate k-Fold Cross-Validation Edge.
load nlpdata
Ystats = Y == 'stats';
X = X';
Create these two data sets:

- fullX contains all predictors.
- partX contains half of the predictors chosen at random.
rng(1); % For reproducibility
p = size(X,1); % Number of predictors
halfPredIdx = randsample(p,ceil(0.5*p));
fullX = X;
partX = X(halfPredIdx,:);
Cross-validate two binary, linear classification models: one that uses all of the predictors and one that uses half of the predictors. Optimize the objective function using SpaRSA, and indicate that the observations correspond to columns.
CVMdl = fitclinear(fullX,Ystats,'CrossVal','on','Solver','sparsa',...
    'ObservationsIn','columns');
PCVMdl = fitclinear(partX,Ystats,'CrossVal','on','Solver','sparsa',...
    'ObservationsIn','columns');
CVMdl
and PCVMdl
are ClassificationPartitionedLinear
models.
Estimate the k-fold edge for each classifier.
fullEdge = kfoldEdge(CVMdl)
fullEdge = 16.5629
partEdge = kfoldEdge(PCVMdl)
partEdge = 13.9030
Based on the k-fold edges, the classifier that uses all of the predictors is the better model.
Find Good Lasso Penalty Using k-fold Edge
To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare k-fold edges.
Load the NLP data set. Preprocess the data as in Estimate k-Fold Cross-Validation Edge.
load nlpdata
Ystats = Y == 'stats';
X = X';
Create a set of 11 logarithmically spaced regularization strengths from $10^{-8}$ through $10^{1}$.
Lambda = logspace(-8,1,11);
Cross-validate a binary, linear classification model that uses each of the regularization strengths, using 5-fold cross-validation. Optimize the objective function using SpaRSA. Lower the tolerance on the gradient of the objective function to 1e-8.
rng(10); % For reproducibility
CVMdl = fitclinear(X,Ystats,'ObservationsIn','columns','KFold',5,...
    'Learner','logistic','Solver','sparsa','Regularization','lasso',...
    'Lambda',Lambda,'GradientTolerance',1e-8)
CVMdl = 
  ClassificationPartitionedLinear
    CrossValidatedModel: 'Linear'
           ResponseName: 'Y'
        NumObservations: 31572
                  KFold: 5
              Partition: [1x1 cvpartition]
             ClassNames: [0 1]
         ScoreTransform: 'none'
CVMdl
is a ClassificationPartitionedLinear
model. Because fitclinear
implements 5-fold cross-validation, CVMdl
contains 5 ClassificationLinear
models that the software trains on each fold.
Estimate the edges for each fold and regularization strength.
eFolds = kfoldEdge(CVMdl,'Mode','individual')
eFolds = 5×11
0.9958 0.9958 0.9958 0.9958 0.9958 0.9923 0.9772 0.9231 0.8419 0.8127 0.8127
0.9991 0.9991 0.9991 0.9991 0.9991 0.9939 0.9780 0.9181 0.8257 0.8128 0.8128
0.9992 0.9992 0.9992 0.9992 0.9992 0.9942 0.9779 0.9103 0.8255 0.8128 0.8128
0.9974 0.9974 0.9974 0.9974 0.9974 0.9931 0.9772 0.9195 0.8486 0.8130 0.8130
0.9976 0.9976 0.9976 0.9976 0.9976 0.9942 0.9782 0.9194 0.8400 0.8127 0.8127
eFolds
is a 5-by-11 matrix of edges. Rows correspond to folds and columns correspond to regularization strengths in Lambda
. You can use eFolds
to identify ill-performing folds, that is, unusually low edges.
Estimate the average edge over all folds for each regularization strength.
e = kfoldEdge(CVMdl)
e = 1×11
0.9978 0.9978 0.9978 0.9978 0.9978 0.9935 0.9777 0.9181 0.8364 0.8128 0.8128
Determine how well the models generalize by plotting the averages of the 5-fold edge for each regularization strength. Identify the regularization strength that maximizes the 5-fold edge over the grid.
figure;
plot(log10(Lambda),log10(e),'-o')
[~, maxEIdx] = max(e);
maxLambda = Lambda(maxEIdx);
hold on
plot(log10(maxLambda),log10(e(maxEIdx)),'ro');
ylabel('log_{10} 5-fold edge')
xlabel('log_{10} Lambda')
legend('Edge','Max edge')
hold off
Several values of Lambda
yield similarly high edges. Higher values of Lambda lead to predictor variable sparsity, which is a good quality of a classifier.
Choose the regularization strength that occurs just before the edge starts decreasing.
LambdaFinal = Lambda(5);
Train a linear classification model using the entire data set and specify the regularization strength LambdaFinal
.
MdlFinal = fitclinear(X,Ystats,'ObservationsIn','columns',...
    'Learner','logistic','Solver','sparsa','Regularization','lasso',...
    'Lambda',LambdaFinal);
To estimate labels for new observations, pass MdlFinal
and the new data to predict
.
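For example, this sketch classifies hypothetical new predictor data XNew (a matrix with the same predictors as X and, to match MdlFinal, observations in columns):

[labels,scores] = predict(MdlFinal,XNew,'ObservationsIn','columns'); % Predicted labels and classification scores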
Input Arguments
CVMdl
— Cross-validated, binary, linear classification model
ClassificationPartitionedLinear
model object
Cross-validated, binary, linear classification model, specified as a ClassificationPartitionedLinear
model object. You can create a
ClassificationPartitionedLinear
model using fitclinear
and specifying any one of the cross-validation name-value arguments, for example, CrossVal.
To obtain estimates, kfoldEdge
applies the same data used to
cross-validate the linear classification model (X
and Y
).
Name-Value Arguments
Specify optional pairs of arguments as
Name1=Value1,...,NameN=ValueN
, where Name
is
the argument name and Value
is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the
pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose
Name
in quotes.
Example: kfoldEdge(CVMdl,Folds=[1 2 3 5])
specifies to use
the first, second, third, and fifth folds to compute the classification
edge, but to exclude the fourth fold.
Folds
— Fold indices to use for classification-score prediction
1:CVMdl.KFold
(default) | numeric vector of positive integers
Fold indices to use for classification-score prediction, specified as a numeric vector of
positive integers. The elements of Folds
must range from
1
through CVMdl.KFold
.
Example: Folds=[1 4 10]
Data Types: single
| double
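For example, this sketch computes the average edge using only folds 1, 4, and 10 of a 10-fold cross-validated model, such as CVMdl from the first example:

e3Folds = kfoldEdge(CVMdl,Folds=[1 4 10]); % Average edge over the three specified folds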
Mode
— Edge aggregation level
"average"
(default) | "individual"
Edge aggregation level, specified as "average"
or
"individual"
.
| Value | Description |
| --- | --- |
| "average" | Returns classification edges averaged over all folds |
| "individual" | Returns classification edges for each fold |
Example: Mode="individual"
Output Arguments
e
— Cross-validated classification edges
numeric scalar | numeric vector | numeric matrix
Cross-validated classification edges, returned as a numeric scalar, vector, or matrix.
Let L
be the number of regularization
strengths in the cross-validated models (that is, L is numel(CVMdl.Trained{1}.Lambda)
)
and F
be the number of folds (stored in CVMdl.KFold
).
- If Mode is "average", then e is a 1-by-L vector. e(j) is the average classification edge over all folds of the cross-validated model that uses regularization strength j.
- Otherwise, e is an F-by-L matrix. e(i,j) is the classification edge for fold i of the cross-validated model that uses regularization strength j.
To estimate e
, kfoldEdge
uses the data that created
CVMdl
(see X
and Y
).
More About
Classification Edge
The classification edge is the weighted mean of the classification margins.
One way to choose among multiple classifiers, for example to perform feature selection, is to choose the classifier that yields the greatest edge.
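As a minimal sketch, assuming uniform observation weights and no missing predictor values, you can recover the k-fold edge by averaging the k-fold margins:

m = kfoldMargin(CVMdl); % n-by-L matrix of margins, one column per regularization strength
eManual = mean(m,1);    % Matches kfoldEdge(CVMdl) when the observation weights are uniform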
Classification Margin
The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class.
The software defines the classification margin for binary classification as

$$m = yf(x),$$

where x is an observation. If the true label of x is the positive class, then y is 1, and –1 otherwise. f(x) is the positive-class classification score for the observation x.
If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
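As a sketch, assuming the 10-fold model CVMdl and labels Ystats from the first example (a single regularization strength and score transform 'none', so the positive-class score is the raw f(x)):

[~,score] = kfoldPredict(CVMdl); % n-by-2 scores; column 2 is the positive class (true)
y = 2*Ystats - 1;                % +1 for the positive class, -1 otherwise
mManual = y .* score(:,2);       % m = y*f(x)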
Classification Score
For linear classification models, the raw classification score for classifying the observation x, a row vector, into the positive class is defined by

$$f_j(x) = x\beta_j + b_j.$$

For the model with regularization strength j, $\beta_j$ is the estimated column vector of coefficients (the model property Beta(:,j)) and $b_j$ is the estimated, scalar bias (the model property Bias(j)).
The raw classification score for classifying x into the negative class is –f(x). The software classifies observations into the class that yields the positive score.
If the linear classification model consists of logistic regression learners, then the
software applies the 'logit'
score transformation to the raw
classification scores (see ScoreTransform
).
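As a sketch, you can reproduce the raw score for one fold's trained model. Here XNewRows is hypothetical new predictor data with observations in rows, and j indexes the regularization strengths:

Mdl1 = CVMdl.Trained{1};                    % ClassificationLinear model trained on fold 1
j = 1;                                      % Index into the Lambda grid
f = XNewRows*Mdl1.Beta(:,j) + Mdl1.Bias(j); % Raw positive-class score f_j(x), before any score transform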
Extended Capabilities
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
This function fully supports GPU arrays. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2016a

R2024a: Specify GPU arrays (requires Parallel Computing Toolbox)

kfoldEdge fully supports GPU arrays.
R2023b: Observations with missing predictor values are used in resubstitution and cross-validation computations
Starting in R2023b, classification model object functions such as kfoldEdge use observations with missing predictor values as part of resubstitution ("resub") and cross-validation ("kfold") computations for classification edges, losses, margins, and predictions.
In previous releases, the software omitted observations with missing predictor values from the resubstitution and cross-validation computations.
R2022a: kfoldEdge
returns a different value for a model with a nondefault cost matrix
If you specify a nondefault cost matrix when you train the input model object, the kfoldEdge
function returns a different value compared to previous releases.
The kfoldEdge
function uses the
observation weights stored in the W
property. The way the function uses the
W
property value has not changed. However, the property value stored in the input model object has changed for a
model with a nondefault cost matrix, so the function might return a different value.
For details about the property value change, see Cost property stores the user-specified cost matrix.
If you want the software to handle the cost matrix, prior
probabilities, and observation weights in the same way as in previous releases, adjust the prior
probabilities and observation weights for the nondefault cost matrix, as described in Adjust Prior Probabilities and Observation Weights for Misclassification Cost Matrix. Then, when you train a
classification model, specify the adjusted prior probabilities and observation weights by using
the Prior
and Weights
name-value arguments, respectively,
and use the default cost matrix.
See Also
ClassificationPartitionedLinear
| kfoldMargin
| ClassificationLinear
| kfoldPredict
| edge