Create a regression model for the CIFAR-100 dataset

Ali Yar Khan on 20 May 2019
I have code for the CIFAR-10 dataset which classifies images, but I want to convert it into a regression model on the CIFAR-100 dataset. Any help modifying it for regression would be appreciated; a rough, untested sketch of the change I have in mind is included after the code below.
%Download CIFAR-10 Image Data
%Download the CIFAR-10 data set [3]. This dataset contains 50,000 training images that will be used to train a CNN.
%Download CIFAR-10 data to a temporary directory
cifar10Data = tempdir;
url = 'https://www.cs.toronto.edu/~kriz/cifar-10-matlab.tar.gz';
helperCIFAR10Data.download(url,cifar10Data);
%Load the CIFAR-10 training and test data.
[trainingImages,trainingLabels,testImages,testLabels] = helperCIFAR10Data.load(cifar10Data);
%Each image is a 32x32 RGB image and there are 50,000 training samples.
size(trainingImages)
%CIFAR-10 has 10 image categories. List the image categories:
numImageCategories = 10;
categories(trainingLabels)
%Display a few of the training images.
figure
thumbnails = trainingImages(:,:,:,1:100);
montage(thumbnails)
%Create A Convolutional Neural Network (CNN)
%A CNN is composed of a series of layers, where each layer defines a specific computation. The Deep Learning Toolbox™ provides functionality to easily design a CNN layer-by-layer. In this example, the following layers are used to create a CNN:
%imageInputLayer - Image input layer
%convolution2dLayer - 2D convolution layer for Convolutional Neural Networks
%reluLayer - Rectified linear unit (ReLU) layer
%maxPooling2dLayer - Max pooling layer
%fullyConnectedLayer - Fully connected layer
%softmaxLayer - Softmax layer
%classificationLayer - Classification output layer for a neural network
%The network defined here is similar to the one described in [4] and starts with an imageInputLayer. The input layer defines the type and size of data the CNN can process. In this example, the CNN is used to process CIFAR-10 images, which are 32x32 RGB images:
% Create the image input layer for 32x32x3 CIFAR-10 images
[height, width, numChannels, ~] = size(trainingImages);
imageSize = [height width numChannels];
inputLayer = imageInputLayer(imageSize)
%Next, define the middle layers of the network. The middle layers are made up of repeated blocks of convolutional, ReLU (rectified linear unit), and pooling layers. These 3 layers form the core building blocks of convolutional neural networks. The convolutional layers define sets of filter weights, which are updated during network training. The ReLU layer adds non-linearity to the network, which allows the network to approximate non-linear functions that map image pixels to the semantic content of the image. The pooling layers downsample data as it flows through the network. In a network with lots of layers, pooling layers should be used sparingly to avoid downsampling the data too early in the network.
% Convolutional layer parameters
filterSize = [5 5];
numFilters = 32;
middleLayers = [
% The first convolutional layer has a bank of 32 5x5x3 filters. A
% symmetric padding of 2 pixels is added to ensure that image borders
% are included in the processing. This is important to avoid
% information at the borders being washed away too early in the
% network.
convolution2dLayer(filterSize, numFilters, 'Padding', 2)
% Note that the third dimension of the filter can be omitted because it
% is automatically deduced based on the connectivity of the network. In
% this case because this layer follows the image layer, the third
% dimension must be 3 to match the number of channels in the input
% image.
% Next add the ReLU layer:
reluLayer()
% Follow it with a max pooling layer that has a 3x3 spatial pooling area
% and a stride of 2 pixels. This down-samples the data dimensions from
% 32x32 to 15x15.
maxPooling2dLayer(3, 'Stride', 2)
% Repeat the 3 core layers to complete the middle of the network.
convolution2dLayer(filterSize, numFilters, 'Padding', 2)
reluLayer()
maxPooling2dLayer(3, 'Stride',2)
convolution2dLayer(filterSize, 2 * numFilters, 'Padding', 2)
reluLayer()
maxPooling2dLayer(3, 'Stride',2)
]
%A deeper network may be created by repeating these 3 basic layers. However, the number of pooling layers should be reduced to avoid downsampling the data prematurely. Downsampling early in the network discards image information that is useful for learning.
%The final layers of a CNN are typically composed of fully connected layers and a softmax loss layer.
finalLayers = [
% Add a fully connected layer with 64 output neurons. The output size of
% this layer will be an array with a length of 64.
fullyConnectedLayer(64)
% Add a ReLU non-linearity.
reluLayer
% Add the last fully connected layer. At this point, the network must
% produce 10 signals that can be used to measure whether the input image
% belongs to one category or another. This measurement is made using the
% subsequent loss layers.
fullyConnectedLayer(numImageCategories)
% Add the softmax loss layer and classification layer. The final layers use
% the output of the fully connected layer to compute the categorical
% probability distribution over the image classes. During the training
% process, all the network weights are tuned to minimize the loss over this
% categorical distribution.
softmaxLayer
classificationLayer
]
%Combine the input, middle, and final layers.
layers = [
inputLayer
middleLayers
finalLayers
]
%Initialize the first convolutional layer weights using normally distributed random numbers with standard deviation of 0.0001. This helps improve the convergence of training.
layers(2).Weights = 0.0001 * randn([filterSize numChannels numFilters]);
%Train CNN Using CIFAR-10 Data
%Now that the network architecture is defined, it can be trained using the CIFAR-10 training data. First, set up the network training algorithm using the trainingOptions function. The network training algorithm uses Stochastic Gradient Descent with Momentum (SGDM) with an initial learning rate of 0.001. During training, the initial learning rate is reduced every 8 epochs (1 epoch is defined as one complete pass through the entire training data set). The training algorithm is run for 40 epochs.
%Note that the training algorithm uses a mini-batch size of 128 images. If using a GPU for training, this size may need to be lowered due to memory constraints on the GPU.
% Set the network training options
opts = trainingOptions('sgdm', ...
'Momentum', 0.9, ...
'InitialLearnRate', 0.001, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', 0.1, ...
'LearnRateDropPeriod', 8, ...
'L2Regularization', 0.004, ...
'MaxEpochs', 40, ...
'MiniBatchSize', 128, ...
'Verbose', true);
%Train the network using the trainNetwork function. This is a computationally intensive process that takes 20-30 minutes to complete. To save time while running this example, a pretrained network is loaded from disk. If you wish to train the network yourself, set the doTraining variable shown below to true.
%Note that a CUDA-capable NVIDIA™ GPU with compute capability 3.0 or higher is highly recommended for training.
% A trained network is loaded from disk to save time when running the
% example. Set this flag to true to train the network.
doTraining = false;
if doTraining
% Train a network.
cifar10Net = trainNetwork(trainingImages, trainingLabels, layers, opts);
else
% Load a pre-trained network for the example.
load('rcnnStopSigns.mat','cifar10Net')
end
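
A rough, untested sketch of the kind of change I have in mind follows. It assumes regressionLayer from the Deep Learning Toolbox in place of softmaxLayer/classificationLayer, reuses inputLayer, middleLayers, and opts from the code above, and makes assumptions (marked in the comments) about how to load the CIFAR-100 .mat files and what numeric response to predict, since there is no helperCIFAR100Data loader.
% ----- Sketch: regression head instead of the classification head -----
% regressionLayer uses a half-mean-squared-error loss and expects numeric
% responses rather than categorical labels.
regressionFinalLayers = [
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1)   % one numeric output per image
    regressionLayer
    ];
regLayers = [
    inputLayer
    middleLayers
    regressionFinalLayers
    ];
% ----- Sketch: loading CIFAR-100 (assumption: no helperCIFAR100Data exists) -----
% The MATLAB version of CIFAR-100 ships as train.mat/test.mat containing a
% 'data' matrix (N-by-3072 uint8) and 'fine_labels'. The reshape/permute
% below is my assumption about the pixel layout; verify it by displaying a
% few images with montage.
cifar100Dir = tempdir;
untar('https://www.cs.toronto.edu/~kriz/cifar-100-matlab.tar.gz', cifar100Dir);
s = load(fullfile(cifar100Dir,'cifar-100-matlab','train.mat'));
trainImages100 = permute(reshape(s.data', 32, 32, 3, []), [2 1 3 4]);
% For regression, trainNetwork needs one numeric response per image. Using
% the fine label index as a number is only a placeholder target; a real
% regression task would supply continuous values (e.g. an attribute score).
trainResponses100 = double(s.fine_labels);
% cifar100RegNet = trainNetwork(trainImages100, trainResponses100, regLayers, opts);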

Answers (0)
