Explain image classification result using LIME
scoreMap = imageLIME(net,X,label) uses the locally interpretable model-agnostic explanation (LIME) technique to compute a map of the importance of the features in the input image X when the network net evaluates the class score for the class given by label. Use this function to explain classification decisions and check that your network is focusing on the appropriate features of the image.
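A minimal usage sketch of this syntax. The pretrained network squeezenet and the sample image peppers.png are illustrative assumptions, not part of this reference:

```matlab
% Load a pretrained network and an image, then explain the prediction.
% squeezenet (Deep Learning Toolbox model) and peppers.png are assumptions
% chosen for illustration.
net = squeezenet;
X = imread("peppers.png");
X = imresize(X, net.Layers(1).InputSize(1:2));  % match the network input size

label = classify(net, X);             % predicted class label to explain
scoreMap = imageLIME(net, X, label);  % feature-importance map

% Overlay the importance map on the input image.
figure
imshow(X)
hold on
imagesc(scoreMap, "AlphaData", 0.5)
colormap jet
colorbar
```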
The LIME technique approximates the classification behavior of the network net using a simpler, more interpretable model. By generating synthetic data from input X, classifying the synthetic data using net, and then using the results to fit a simple regression model, the imageLIME function determines the importance of each feature of X to the network's classification score for the class given by label.
This function requires Statistics and Machine Learning Toolbox™.
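The procedure described above can be sketched conceptually. This is an illustration of the LIME idea only, not the imageLIME implementation; the segmentation and perturbation details of imageLIME differ, and this sketch additionally assumes the Image Processing Toolbox (for superpixels) and variables net, X, and label already in the workspace:

```matlab
% Conceptual LIME sketch for one image (illustration only).
% Assumes net, X, and label exist; superpixels requires Image Processing Toolbox.
segments = superpixels(X, 49);             % partition the image into features
numFeatures = max(segments(:));
numSamples = 500;
classNames = net.Layers(end).Classes;

% Generate synthetic samples by randomly switching features on or off.
Z = rand(numSamples, numFeatures) > 0.5;   % binary feature indicators
scores = zeros(numSamples, 1);
for i = 1:numSamples
    Xp = X;
    mask = ismember(segments, find(~Z(i, :)));
    Xp(repmat(mask, [1 1 size(X, 3)])) = 0;  % zero out the switched-off features
    [~, s] = classify(net, Xp);
    scores(i) = s(classNames == label);      % score for the target class
end

% Fit a simple linear model; its weights estimate the feature importances.
mdl = fitlm(double(Z), scores);            % Statistics and ML Toolbox
featureWeights = mdl.Coefficients.Estimate(2:end);
```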
[scoreMap,featureMap,featureImportance] = imageLIME(net,X,label) also returns a map of the features used to compute the LIME results and the calculated importance of each feature.
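A sketch of the three-output syntax, assuming net is a trained classification network, X an input image sized for it, and label the class to explain:

```matlab
% Return the feature map and per-feature importance along with scoreMap.
% net, X, and label are assumed to be defined as in the basic syntax.
[scoreMap, featureMap, featureImportance] = imageLIME(net, X, label);

% featureMap assigns each pixel to a feature; featureImportance holds one
% weight per feature. List the features from most to least important.
[sortedImportance, order] = sort(featureImportance, "descend");
disp(table(order, sortedImportance, ...
    'VariableNames', {'Feature', 'Importance'}))
```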
___ = imageLIME(___,Name,Value) specifies options using one or more name-value pair arguments in addition to the input arguments in previous syntaxes. For example, 'NumFeatures',100 sets the target number of features to 100.
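For instance, a hedged sketch of the name-value syntax using the 'NumFeatures' option from the example above; the 'Segmentation' option and its 'grid' value are assumptions for illustration, and net, X, and label are assumed to be defined as in the basic syntax:

```matlab
% Compute the LIME map with a target of 100 features.
% 'Segmentation','grid' is an illustrative assumption about available options.
scoreMap = imageLIME(net, X, label, ...
    'NumFeatures', 100, ...
    'Segmentation', 'grid');
```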