loss

Class: CompactRegressionSVM, RegressionSVM
Namespace: classreg.learning.regr

Regression error for support vector machine regression model

Syntax

L = loss(mdl,Tbl,ResponseVarName)
L = loss(mdl,Tbl,Y)
L = loss(mdl,X,Y)
L = loss(___,Name,Value)

Description

L = loss(mdl,Tbl,ResponseVarName) returns the loss for the predictions of the support vector machine (SVM) regression model, mdl, based on the predictor data in the table Tbl and the true response values in Tbl.ResponseVarName.

L = loss(mdl,Tbl,Y) returns the loss for the predictions of the support vector machine (SVM) regression model, mdl, based on the predictor data in the table Tbl and the true response values in the vector Y.

L = loss(mdl,X,Y) returns the loss for the predictions of the support vector machine (SVM) regression model, mdl, based on the predictor data in X and the true responses in Y.

L = loss(___,Name,Value) returns the loss with additional options specified by one or more name-value arguments, using any of the previous syntaxes. For example, you can specify the loss function or observation weights.
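
For instance, here is a minimal sketch of the table-based syntax, using the carsmall variables from the example below. Note that this call computes a resubstitution loss on the training table rather than a test-set loss.

load carsmall
Tbl = rmmissing(table(Horsepower,Weight,MPG));  % remove rows with missing values
mdl = fitrsvm(Tbl,'MPG','Standardize',true);    % train on the table
L = loss(mdl,Tbl,'MPG')                         % mean squared error (the default loss)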

Input Arguments

SVM regression model, specified as a RegressionSVM model or CompactRegressionSVM model returned by fitrsvm or compact, respectively.

Sample data, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain additional columns for the response variable and observation weights. Tbl must contain all of the predictors used to train mdl. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If you trained mdl using sample data contained in a table, then the input data for this method must also be in a table.

Data Types: table

Response variable name, specified as the name of a variable in Tbl. The response variable must be a numeric vector.

You must specify ResponseVarName as a character vector or string scalar. For example, if the response variable Y is stored as Tbl.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of Tbl, including Y, as predictors when training the model.

Data Types: char | string

Predictor data, specified as a numeric matrix or table. Each row of X corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature).

If you trained mdl using a matrix of predictor values, then X must be a numeric matrix with p columns. p is the number of predictors used to train mdl.

The length of Y and the number of rows of X must be equal.

Data Types: single | double

Observed response values, specified as a vector of length n containing numeric values. Each entry in Y is the observed response based on the predictor data in the corresponding row of X.

Data Types: single | double

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Loss function, specified as the comma-separated pair consisting of 'LossFun' and 'mse', 'epsiloninsensitive', or a function handle.

  • The following table lists the available loss functions.

    Value                   Loss Function
    'mse'                   Weighted Mean Squared Error
    'epsiloninsensitive'    Epsilon-Insensitive Loss Function
  • Specify your own function using function handle notation.

    Your function must have the signature lossvalue = lossfun(Y,Yfit,W), where:

    • The output argument lossvalue is a scalar value.

    • You choose the function name (lossfun).

    • Y is an n-by-1 numeric vector of observed response values.

    • Yfit is an n-by-1 numeric vector of predicted response values, calculated using the corresponding predictor values in X (similar to the output of predict).

    • W is an n-by-1 numeric vector of observation weights. The software normalizes the weights to sum to 1 before passing them to your function.

    Specify your function using 'LossFun',@lossfun.

Example: 'LossFun','epsiloninsensitive'

Data Types: char | string | function_handle
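
For instance, the following sketch passes a weighted mean absolute error as a custom loss. The handle name maeLoss is illustrative, and mdl, XTest, and YTest refer to the trained model and test data from the example below.

maeLoss = @(Y,Yfit,W) sum(W.*abs(Y - Yfit));  % W is already normalized to sum to 1
L = loss(mdl,XTest,YTest,'LossFun',maeLoss)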

Since R2023b

Predicted response value to use for observations with missing predictor values, specified with the PredictionForMissingValue name-value argument as "median", "mean", "omitted", or a numeric scalar.

Value             Description
"median"          loss uses the median of the observed response values in the training data as the predicted response value for observations with missing predictor values.
"mean"            loss uses the mean of the observed response values in the training data as the predicted response value for observations with missing predictor values.
"omitted"         loss excludes observations with missing predictor values from the loss computation.
Numeric scalar    loss uses this value as the predicted response value for observations with missing predictor values.

If an observation is missing an observed response value or an observation weight, then loss does not use the observation in the loss computation.

Example: PredictionForMissingValue="omitted"

Data Types: single | double | char | string
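
For example, this sketch (R2023b or later) compares the default handling of a missing predictor value with omitting the affected observation, again using mdl, XTest, and YTest from the example below. The variable name XMissing is illustrative.

XMissing = XTest;
XMissing(1,1) = NaN;                  % introduce a missing predictor value in the first observation
LDefault = loss(mdl,XMissing,YTest)   % substitutes a predicted response for the first observation
LOmitted = loss(mdl,XMissing,YTest,PredictionForMissingValue="omitted")  % excludes the first observation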

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector. The length of Weights must equal the number of rows in X. The software weighs the observations in each row of X using the corresponding weight value in Weights.

Weights are normalized to sum to 1.

Data Types: single | double
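
For example, this sketch halves the weight of the first few test observations (mdl, XTest, and YTest are taken from the example below; the weight values are arbitrary):

w = ones(size(YTest));                 % start with equal weights
w(1:floor(numel(w)/2)) = 0.5;          % downweight the first half of the observations
L = loss(mdl,XTest,YTest,'Weights',w)  % loss normalizes w to sum to 1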

Output Arguments

Regression loss, returned as a scalar value.

Examples

Calculate the test set mean squared error (MSE) and epsilon-insensitive error of an SVM regression model.

Load the carsmall sample data. Specify Horsepower and Weight as the predictor variables (X), and MPG as the response variable (Y).

load carsmall
X = [Horsepower,Weight];
Y = MPG;

Delete rows of X and Y where either array has NaN values.

R = rmmissing([X Y]);
X = R(:,1:2);
Y = R(:,end);

Reserve 10% of the observations as a holdout sample, and extract the training and test indices.

rng default  % For reproducibility
N = length(Y);
cv = cvpartition(N,'HoldOut',0.10);
trainInds = training(cv);
testInds = test(cv);

Specify the training and test data sets.

XTrain = X(trainInds,:);
YTrain = Y(trainInds);
XTest = X(testInds,:);
YTest = Y(testInds);

Train a linear SVM regression model and standardize the data.

mdl = fitrsvm(XTrain,YTrain,'Standardize',true)
mdl = 
  RegressionSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
        ResponseTransform: 'none'
                    Alpha: [68x1 double]
                     Bias: 23.0248
         KernelParameters: [1x1 struct]
                       Mu: [108.8810 2.9419e+03]
                    Sigma: [44.4943 805.1412]
          NumObservations: 84
           BoxConstraints: [84x1 double]
          ConvergenceInfo: [1x1 struct]
          IsSupportVector: [84x1 logical]
                   Solver: 'SMO'


mdl is a RegressionSVM model.

Determine how well the trained model generalizes to new predictor values by estimating the test sample mean squared error and epsilon-insensitive error.

lossMSE = loss(mdl,XTest,YTest)
lossMSE = 
32.0268
lossEI = loss(mdl,XTest,YTest,'LossFun','epsiloninsensitive')
lossEI = 
3.2919
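
To relate these values to the predicted responses, you can recompute both losses directly from predict. This sketch assumes the default equal observation weights, and that the Epsilon property of mdl stores half the width of the epsilon-insensitive band used by the 'epsiloninsensitive' loss.

YFit = predict(mdl,XTest);
mseManual = mean((YTest - YFit).^2)                       % should match lossMSE
eiManual = mean(max(0,abs(YTest - YFit) - mdl.Epsilon))   % should match lossEI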

More About

Tips

Extended Capabilities

Version History

Introduced in R2015b
