Transfer Learning for Regression in MATLAB

I am trying to implement a model that takes an image as input and outputs a vector of 26 numbers. I am currently using VGG-16 through the following MATLAB code:
analyzeNetwork(net);
NUM_OUTPUT = 26;
layers = net.Layers;
%output = fullyConnectedLayer(NUM_OUTPUT, ...
% 'Name','output_layer', ...
% 'WeightLearnRateFactor',10, ...
% 'BiasLearnRateFactor',10);
layers = [
    layers(1:38)
    fullyConnectedLayer(NUM_OUTPUT)
    regressionLayer];
%layers(1:67) = freezeWeights(layers(1:67));
miniBatchSize = 5;
validationFrequency = floor(numel(YTrain)/miniBatchSize);
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.001, ...
    'ValidationData',{XValidation,YValidation}, ...
    'Plots','training-progress', ...
    'Verbose',false);
net = trainNetwork(XTrain,YTrain,layers,options);
YPred = predict(net,XValidation);
predictionError = YValidation - YPred;
thr = 10;
numCorrect = sum(abs(predictionError) < thr);
numImagesValidation = numel(YValidation);
accuracy = numCorrect/numImagesValidation;
rmse = sqrt(mean(predictionError.^2));
The shapes of XTrain and YTrain are as follows:
XTrain: 224 x 224 x 3 x 140
YTrain: 26 x 140
Running the code above (this is only part of the code, not all of it) gives me the following error:
Error using trainNetwork (line 170)
Number of observations in X and Y disagree.
I would appreciate it if somebody could help me figure out what the problem is, because as far as I know the number of samples in both is equal, and there is no requirement for the remaining dimensions to match.

Transpose YTrain so that it is 140x26, i.e. one row of responses per observation (see the one-liner below).
Name your new layers, and assemble them into a layerGraph.
Regression can easily go unstable, so decrease the learning rate or increase the batch size if you get NaNs.
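For the first point, a minimal sketch (hedged: it assumes YValidation is stored the same way as YTrain and therefore needs the same transpose before being passed to ValidationData). For image regression, trainNetwork counts observations along the 4th dimension of X and along the 1st dimension of a numeric Y, which is why the 26x140 layout triggers the "Number of observations in X and Y disagree" error.
YTrain      = YTrain';        % 26x140 -> 140x26, one row of responses per image
YValidation = YValidation';   % keep the validation responses in the same layout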
net = vgg16; % analyzeNetwork(net);
LAYERS_FREEZE_UNTIL=35;
LAYERS_COPY_UNTIL=38;
NUM_TRAIN_SAMPLES = size(YTrain,1);
NUM_OUTPUT = size(YTrain,2);
my_layers = layerGraph([
    freezeWeights(net.Layers(1:LAYERS_FREEZE_UNTIL))
    net.Layers(LAYERS_FREEZE_UNTIL+1:LAYERS_COPY_UNTIL)
    fullyConnectedLayer(NUM_OUTPUT*2,'Name','my_fc1')
    fullyConnectedLayer(NUM_OUTPUT,'Name','my_fc2')
    regressionLayer('Name','my_regr')
    ]);
% figure; plot(my_layers), ylim([0.5,6.5])
% analyzeNetwork(my_layers);
MINI_BATCH_SIZE = 16;
options = trainingOptions('sgdm', ...
    'MiniBatchSize',MINI_BATCH_SIZE, ...
    'MaxEpochs',20, ...
    'InitialLearnRate',1e-4, ...
    'Shuffle','every-epoch', ...
    'ValidationData',{XValidation,YValidation}, ...
    'ValidationFrequency',floor(NUM_TRAIN_SAMPLES/MINI_BATCH_SIZE), ...
    'Verbose',true, ...
    'Plots','training-progress');
my_net = trainNetwork(XTrain,YTrain,my_layers,options);
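Once trained, evaluation works the same way as in the question. A hedged sketch (it assumes YValidation was also transposed to numImages x 26, matching YTrain):
YPred = predict(my_net, XValidation);               % numImages x 26
predictionError = YValidation - YPred;
rmsePerOutput = sqrt(mean(predictionError.^2, 1));  % one RMSE per output value
overallRmse   = sqrt(mean(predictionError(:).^2));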

Related

How to make a QoS prediction with an LSTM network in MATLAB

I am trying to make a QoS prediction on the QWS dataset, but I get the following error:
Error using trainNetwork (line 170)
Too many input arguments.

Error in lstm (line 63)
    net = trainNetwork(x_train,y_train,layers,options);

Caused by:
    Error using trainNetwork>iParseInputArguments (line 326)
    Too many input arguments.
data = readtable('C:\Users\Etudiant FST\Documents\études\mini_pjt\d\qws1\qws1.txt');
%test_data = readtable('C:\Users\Etudiant FST\Documents\études\mini_pjt\d\qws2\qws2.txt');
data = data(:,1:10);
x = [];
y = [];
delta_x = 1;
delta_y = 1;
pas = 1;
while (height(data) >= delta_x + delta_y)
    x = [x; data(1:delta_x,:)];
    y = [y; data(delta_x + 1:delta_x + delta_y,:)];
    data(1:pas,:) = [];
end
%numObservations = height(data);
%idxTrain = 1:floor(0.8*numObservations);
%idxTest = floor(0.8*numObservations)+1:numObservations;
%dataTrain = data(idxTrain,:);
%dataTest = data(idxTest,:);
%%for n = 1:numel(dataTrain)
%X = dataTrain{n};
% xt{n} = X(:,1:end-1);
% tt{n} = X(:,2:end);
%%end
height_x = height(x);
split = fix(height_x*0.8);
x_train = x(1:split,:);
x_test = x(split:height_x,:);
y_train = y(1:split,:);
y_test = y(split:height_x,:);
layers = [
    sequenceInputLayer(10)
    lstmLayer(128,'OutputMode','sequence')
    fullyConnectedLayer(10)
    regressionLayer];
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'InitialLearnRate',0.01, ...
    'GradientThreshold',1, ...
    'Shuffle','never', ...
    'Plots','training-progress', ...
    'Verbose',0);
net = trainNetwork(x_train,y_train,layers,options);
I would like it to predict the new QoS values from the old ones.
Thank you.
As the error message suggests, MATLAB isn't able to select the correct trainNetwork method to use (the function is overloaded), because the right one is chosen based on the number and types of the inputs passed to it.
If you look at the LSTM example in the documentation for trainNetwork, you will see that XTrain is a 270-by-1 cell array, with every cell containing an N-by-M array, while YTrain is a 270-by-1 categorical array.
Reshaping your x_train and y_train into that kind of shape and type should solve the problem. Everything else in the code looks okay to me.
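As a minimal sketch of that conversion for a regression target (my own variable choices, hedged: it assumes the 10 selected QWS columns are numeric and treats the whole training split as one long sequence), trainNetwork also accepts a cell array of numeric sequences for sequence-to-sequence regression, each of size numFeatures-by-numTimeSteps:
xMat = table2array(x_train)';   % 10 x numTrainSteps
yMat = table2array(y_train)';   % 10 x numTrainSteps
x_cell = {xMat};                % one observation = one sequence
y_cell = {yMat};
% options as defined above (set maxEpochs and miniBatchSize to numeric values first)
net = trainNetwork(x_cell, y_cell, layers, options);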

Time Series Forecasting Using Deep Learning in MATLAB

I am using the time series forecasting sample from MathWorks in https://uk.mathworks.com/help/nnet/examples/time-series-forecasting-using-deep-learning.html
The output shown at the above-mentioned web address is a reasonable forecast. I only changed the dataset and ran the algorithm; surprisingly, it does not work well with my dataset and generates a nearly straight line as the forecast, as shown below.
I am really confused and cannot understand the reason. I might need to tune parameters of the algorithm that I am not aware of. The code I am using is:
%% Load Data
%data = chickenpox_dataset;
%data = [data{:}];
data = xlsread('data.xlsx');
data = data';
%% Divide Data: Training and Testing
numTimeStepsTrain = floor(0.7*numel(data));
XTrain = data(1:numTimeStepsTrain);
YTrain = data(2:numTimeStepsTrain+1);
XTest = data(numTimeStepsTrain+1:end-1);
YTest = data(numTimeStepsTrain+2:end);
%% Standardize Data
mu = mean(XTrain);
sig = std(XTrain);
XTrain = (XTrain - mu) / sig;
YTrain = (YTrain - mu) / sig;
XTest = (XTest - mu) / sig;
%% Define LSTM Network
inputSize = 1;
numResponses = 1;
numHiddenUnits = 500;
layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];
%% Training Options
opts = trainingOptions('adam', ...
    'MaxEpochs',500, ...
    'GradientThreshold',1, ...
    'InitialLearnRate',0.005, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',125, ...
    'LearnRateDropFactor',0.2, ...
    'Verbose',0, ...
    'Plots','training-progress');
%% Train Network
net = trainNetwork(XTrain,YTrain,layers,opts);
%% Forecast Future Time Steps
net = predictAndUpdateState(net,XTrain);
[net,YPred] = predictAndUpdateState(net,YTrain(end));
numTimeStepsTest = numel(XTest);
for i = 2:numTimeStepsTest
    [net,YPred(1,i)] = predictAndUpdateState(net,YPred(i-1));
end
%% Unstandardize the predictions using mu and sig calculated earlier.
YPred = sig*YPred + mu;
%% RMSE and MAE Calculation
rmse = sqrt(mean((YPred-YTest).^2))
MAE = mae(YPred-YTest)
%% Plot results
figure
plot(data(1:numTimeStepsTrain))
hold on
idx = numTimeStepsTrain:(numTimeStepsTrain+numTimeStepsTest);
plot(idx,[data(numTimeStepsTrain) YPred],'.-')
hold off
xlabel("Month")
ylabel("Cases")
title("Forecast")
legend(["Observed" "Forecast"])
%% Compare the forecasted values with the test data
figure
subplot(2,1,1)
plot(YTest)
hold on
plot(YPred,'.-')
hold off
legend(["Observed" "Forecast"])
ylabel("Cases")
title("Forecast")
subplot(2,1,2)
stem(YPred - YTest)
xlabel("Month")
ylabel("Error")
title("RMSE = " + rmse)
And the data.xlsx is in: https://www.dropbox.com/s/vv1apug7iqlocu1/data.xlsx?dl=1
You want to find temporal patterns in the data. MATLAB's example data looks like a sine wave with noise, a very clear pattern. Your data is far from showing a clear pattern, and it needs preprocessing. I would start by removing the slow drifts; a high-pass or band-pass filter of some sort makes sense. Here is a simple line, just for a quick view of your data without the slow frequencies:
T=readtable('data.xlsx','readvariablenames',0);
figure; plot(T.Var1-smoothdata(T.Var1,'movmean',200))
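If that view confirms a slow drift, one option (a rough sketch on my part, not a tested recipe) is to subtract the moving mean before the train/test split and standardization, and add the corresponding part of the trend back at the end:
trend    = smoothdata(data, 'movmean', 200);  % slow component
residual = data - trend;                      % what the LSTM should model
% run the training/forecasting code above on residual instead of data,
% then add the matching part of trend back onto YPred before plotting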

Training of FCN in MATLAB not decreasing loss

I have a neural network based on the Fully Convolutional Network for semantic segmentation / pixelwise object detection (FCN), and I am trying to train it using the Pascal-Context data. However, instead of the 450 classes provided by the dataset, I only want to capture 12 + background.
I prepared a dataset with 7000+ images and label maps, followed the Semantic Segmentation tutorial, and reproduced every step it presents.
However, when I train my neural network, the mini-batch loss remains constant (checked until the end of the first 3 epochs, then I tried reducing the number of images, but still the same value) and doesn't change even if I change the learning rate or other parameters. Is there something wrong with the training process?
So I tried to reduce the number of images to 400, and what I am getting is shown in the picture below. It doesn't improve at all.
(The first column is the entry number, the second the iteration number, the third the time elapsed, the fourth the mini-batch loss, the fifth the accuracy on the batch, and finally the learning rate.)
Below is my code:
% MAIN FUNCTION %
outputFolder = fullfile('imade');
imgDir = fullfile('imade','images');
labelDir = fullfile('imade','labels');
imds = imageDatastore(imgDir);
I = readimage(imds, 1);
% I = histeq(I);
% figure
% imshow(I)
classes = [
"bottle"
"chair"
"diningtable"
"person"
"pottedplant"
"sofa"
"tvmonitor"
"ground"
"wall"
"floor"
"keyboard"
"ceiling"
"background"
];
valueSet = {
[230 25 75]
[60 180 75]
[255 225 25]
[0 130 200]
[245 130 48]
[145 30 180]
[128 128 0]
[210 245 60]
[250 190 190]
[0 128 128]
[170 110 40]
[128 0 0]
[0 0 0]
};
pxds = pixelLabelDatastore(labelDir,classes,valueSet);
% show sample image with overlay
C = readimage(pxds, 1);
cmap = camvidColorMap;
B = labeloverlay(I,C,'ColorMap',cmap);
figure
imshow(B)
pixelLabelColorbar(cmap,classes)
% calculate frequency of class pixels
tbl = countEachLabel(pxds);
% resize images and labels to desired format
imageFolder = fullfile(outputFolder,'imagesReszed',filesep);
imds = resizeCamVidImages(imds,imageFolder);
labelFolder = fullfile(outputFolder,'labelsResized',filesep);
pxds = resizeCamVidPixelLabels(pxds,labelFolder);
% partition dataset into test and train
[imdsTrain, imdsTest, pxdsTrain, pxdsTest] = partitionCamVidData(imds,pxds);
% create FCN net
imageSize = [225 300];
numClasses = numel(classes);
lgraph = fcnLayers(imageSize,numClasses,'type','16s');
st = fullfile('imade','checkPoint');
% adjust based on occurrence of each class
imageFreq = tbl.PixelCount ./ tbl.ImagePixelCount;
classWeights = median(imageFreq) ./ imageFreq;
pxLayer = pixelClassificationLayer('Name','labels','ClassNames', tbl.Name, 'ClassWeights', classWeights);
lgraph = removeLayers(lgraph, 'pixelLabels');
lgraph = addLayers(lgraph, pxLayer);
lgraph = connectLayers(lgraph, 'softmax' ,'labels');
% set training parameters
options = trainingOptions('sgdm', ...
    'Momentum', 0.9, ...
    'InitialLearnRate', 1e-3, ...
    'L2Regularization', 0.0005, ...
    'MaxEpochs', 100, ...
    'MiniBatchSize', 1, ...
    'Shuffle', 'every-epoch', ...
    'VerboseFrequency', 2);
datasource = pixelLabelImageSource(imdsTrain,pxdsTrain);
doTraining = true;
if doTraining
    [net, info] = trainNetwork(datasource,lgraph,options);
else
    data = load(strcat(st,filesep,'convnet_checkpoint__4607__2018_01_27__21_25_03.mat'));
    war = data.net;
    [sp, info] = trainNetwork(datasource,war.Layers,options);
end

Training neural network for image segmentation

I have one set of original image patches (101x101 matrices) and another corresponding set of image patches (the same size, 101x101) in binary, which are the 'answer' for training the neural network. I want to train my neural network so that it learns to recognize the shape it was trained on from a given image and produces the segmented image at the output (maybe in the same matrix form, 150x10201?).
Original image is on the left and the desired output is on the right.
So, as the pre-processing stage of the data, I reshaped each original image patch into a 1x10201 row vector. Combining 150 of them, I get a 150x10201 matrix as my input, and another 150x10201 matrix from the binary image patches. Then I provide these input data to the deep learning network; I used a Deep Belief Network in this case.
My MATLAB code to set up and train the DBN is below:
%train a 4 layers 100 hidden unit DBN and use its weights to initialize a NN
rand('state',0)
%train dbn
dbn.sizes = [100 100 100 100];
opts.numepochs = 5;
opts.batchsize = 10;
opts.momentum = 0;
opts.alpha = 1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
%unfold dbn to nn
nn = dbnunfoldtonn(dbn, 10201);
nn.activation_function = 'sigm';
%train nn
opts.numepochs = 1;
opts.batchsize = 10;
assert(isfloat(train_x), 'train_x must be a float');
assert(nargin == 4 || nargin == 6, 'number of input arguments must be 4 or 6')
loss.train.e = [];
loss.train.e_frac = [];
loss.val.e = [];
loss.val.e_frac = [];
opts.validation = 0;
if nargin == 6
    opts.validation = 1;
end
fhandle = [];
if isfield(opts,'plot') && opts.plot == 1
    fhandle = figure();
end
m = size(train_x, 1);
batchsize = opts.batchsize;
numepochs = opts.numepochs;
numbatches = m / batchsize;
assert(rem(numbatches, 1) == 0, 'numbatches must be a integer');
L = zeros(numepochs*numbatches,1);
n = 1;
for i = 1 : numepochs
    tic;
    kk = randperm(m);
    for l = 1 : numbatches
        batch_x = train_x(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        % Add noise to input (for use in denoising autoencoder)
        if(nn.inputZeroMaskedFraction ~= 0)
            batch_x = batch_x.*(rand(size(batch_x))>nn.inputZeroMaskedFraction);
        end
        batch_y = train_y(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        nn = nnff(nn, batch_x, batch_y);
        nn = nnbp(nn);
        nn = nnapplygrads(nn);
        L(n) = nn.L;
        n = n + 1;
    end
    t = toc;
    if opts.validation == 1
        loss = nneval(nn, loss, train_x, train_y, val_x, val_y);
        str_perf = sprintf('; Full-batch train mse = %f, val mse = %f', ...
            loss.train.e(end), loss.val.e(end));
    else
        loss = nneval(nn, loss, train_x, train_y);
        str_perf = sprintf('; Full-batch train err = %f', loss.train.e(end));
    end
    if ishandle(fhandle)
        nnupdatefigures(nn, fhandle, loss, opts, i);
    end
    disp(['epoch ' num2str(i) '/' num2str(opts.numepochs) '. Took ' num2str(t) ' seconds' '. Mini-batch mean squared error on training set is ' num2str(mean(L((n-numbatches):(n-1)))) str_perf]);
    nn.learningRate = nn.learningRate * nn.scaling_learningRate;
end
Can anyone let me know whether training the NN like this enables it to do the segmentation work, or how I should modify the code so that the trained NN can generate the output/result as an image matrix in 150x10201 form?
Thank you so much.
Your inputs are TOO big. You should try to work with smaller patches, from 19x19 up to a maximum of 30x30 (which already represents 900 neurons in the input layer); see the sketch below for one way to cut the patches down.
Then comes your main problem: you only have 150 images! When you train a NN, you need at least three times more training instances than weights in your NN, so be very careful about the architecture you choose.
A CNN may be better suited to your problem.
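A rough sketch of the patch idea (my own illustration, hedged: it assumes the originals and their binary answers are stored as 101x101xN arrays named images and masks, which is not how the question stores them):
patchSize = 19;                       % suggested small patch size
stride    = 19;                       % non-overlapping patches
train_x = [];
train_y = [];
for k = 1:size(images, 3)
    for r = 1:stride:(101 - patchSize + 1)
        for c = 1:stride:(101 - patchSize + 1)
            imgPatch  = images(r:r+patchSize-1, c:c+patchSize-1, k);
            maskPatch = masks(r:r+patchSize-1, c:c+patchSize-1, k);
            train_x = [train_x; imgPatch(:)'];   % one 1x361 row per patch
            train_y = [train_y; maskPatch(:)'];
        end
    end
end
This multiplies the number of training rows by the number of patches per image while shrinking the input layer, which eases the instances-versus-weights constraint mentioned above.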

MATLAB: One Step Ahead Neural Network Timeseries Forecast

Intro: I'm using MATLAB's Neural Network Toolbox in an attempt to forecast time series one step into the future. Currently I'm just trying to forecast a simple sinusoidal function, but hopefully I will be able to move on to something a bit more complex after I obtain satisfactory results.
Problem: Everything seems to work fine; however, the predicted forecast tends to lag by one period. Neural network forecasting isn't much use if it just outputs the series delayed by one unit of time, right?
Code:
t = -50:0.2:100;
noise = rand(1,length(t));
y = sin(t)+1/2*sin(t+pi/3);
split = floor(0.9*length(t));
forperiod = length(t)-split;
numinputs = 5;
forecasted = [];
msg = '';
for j = 1:forperiod
    fprintf(repmat('\b',1,numel(msg)));
    msg = sprintf('forecasting iteration %g/%g...\n',j,forperiod);
    fprintf('%s',msg);
    estdata = y(1:split+j-1);
    estdatalen = size(estdata,2);
    signal = estdata;
    last = signal(end);
    [signal,low,high] = preprocess(signal'); % pre-process
    signal = signal';
    inputs = signal(rowshiftmat(length(signal),numinputs));
    targets = signal(numinputs+1:end);
    %% NARNET METHOD
    feedbackDelays = 1:4;
    hiddenLayerSize = 10;
    net = narnet(feedbackDelays,[hiddenLayerSize hiddenLayerSize]);
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
    signalcells = mat2cell(signal,[1],ones(1,length(signal)));
    [inputs,inputStates,layerStates,targets] = preparets(net,{},{},signalcells);
    net.trainParam.showWindow = false;
    net.trainParam.showCommandLine = false;
    net.trainFcn = 'trainlm';  % Levenberg-Marquardt
    net.performFcn = 'mse';    % Mean squared error
    [net,tr] = train(net,inputs,targets,inputStates,layerStates);
    next = net(inputs(end),inputStates,layerStates);
    next = postprocess(next{1}, low, high); % post-process
    next = (next+1)*last;
    forecasted = [forecasted next];
end
figure(1);
plot(1:forperiod, forecasted, 'b', 1:forperiod, y(end-forperiod+1:end), 'r');
grid on;
Note:
The function 'preprocess' simply converts the data into logged % differences, and 'postprocess' converts the logged % differences back for plotting. (Check the EDIT for the preprocess and postprocess code.)
Results:
BLUE: Forecasted Values
RED: Actual Values
Can anyone tell me what I'm doing wrong here? Or perhaps recommend another method to achieve the desired results (lagless prediction of sinusoidal function, and eventually more chaotic timeseries)? Your help is very much appreciated.
EDIT:
It's been a few days now and I hope everyone has enjoyed their weekend. Since no solutions have emerged, I've decided to post the code for the helper functions 'postprocess.m', 'preprocess.m', and their helper function 'normalize.m'. Maybe this will help get the ball rolling.
postprocess.m:
function data = postprocess(x, low, high)
    % denormalize
    logdata = (x+1)/2*(high-low)+low;
    % inverse log data
    sign = logdata./abs(logdata);
    data = sign.*(exp(abs(logdata))-1);
end
preprocess.m:
function [y, low, high] = preprocess(x)
    % differencing
    diffs = diff(x);
    % calc % changes
    chngs = diffs./x(1:end-1,:);
    % log data
    sign = chngs./abs(chngs);
    logdata = sign.*log(abs(chngs)+1);
    % normalize logrets
    high = max(max(logdata));
    low = min(min(logdata));
    y = [];
    for i = 1:size(logdata,2)
        y = [y normalize(logdata(:,i), -1, 1)];
    end
end
normalize.m:
function Y = normalize(X, low, high)
    %NORMALIZE Linear normalization of X between low and high values.
    if length(X) <= 1
        error('Length of X input vector must be greater than 1.');
    end
    mi = min(X);
    ma = max(X);
    Y = (X - mi)/(ma - mi)*(high - low) + low;
end
I didn't check your code, but I made a similar test to predict sin() with a NN. The result seems reasonable, without a lag. I think your bug is somewhere in the synchronization of predicted values with actual values.
Here is the code:
%% init & params
t = (-50 : 0.2 : 100)';
y = sin(t) + 0.5 * sin(t + pi / 3);
sigma = 0.2;
n_lags = 12;
hidden_layer_size = 15;
%% create net
net = fitnet(hidden_layer_size);
%% train
noise = sigma * randn(size(t));
y_train = y + noise;
out = circshift(y_train, -1);
out(end) = nan;
in = lagged_input(y_train, n_lags);
net = train(net, in', out');
%% test
noise = sigma * randn(size(t)); % new noise
y_test = y + noise;
in_test = lagged_input(y_test, n_lags);
out_test = net(in_test')';
y_test_predicted = circshift(out_test, 1); % sync with actual value
y_test_predicted(1) = nan;
%% plot
figure,
plot(t, [y, y_test, y_test_predicted], 'linewidth', 2);
grid minor; legend('orig', 'noised', 'predicted')
and the lagged_input() function:
function in = lagged_input(in, n_lags)
    % Each new column is the previous column shifted down by one step, so
    % column k holds the value from k-1 steps earlier; the first (unknown)
    % entry of each new column is set to NaN.
    for k = 2 : n_lags
        in = cat(2, in, circshift(in(:, end), 1));
        in(1, k) = nan;
    end
end
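To illustrate what lagged_input builds, here is a toy example of my own (not part of the original answer); each extra column is the previous one shifted down by one step:
x = (1:5)';
lagged_input(x, 3)
% ans =
%      1   NaN   NaN
%      2     1   NaN
%      3     2     1
%      4     3     2
%      5     4     3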