Unknown error when optimizing a CNN in MATLAB

I want to optimize my CNN following this example, using Bayesian optimization and MATLAB R2019a. Instead of showing all my code in a single block, I have split it into three parts to show which lines cause the error. The first part loads the data, divides it into training and validation sets, and defines the optimization variables. The second part is where the error occurs. The third part is the rest of the code. Note that buildLayers is also a function written by me, which builds the network layers.
My questions are: how can I fix the aforementioned error, and why did it occur?
The first part (loading the data):
folder = 'C:\Users\X';
imds = imageDatastore(folder, ...
"IncludeSubfolders",true, ...
"LabelSource","foldernames");
num = 20;
per = randperm(1000, num);
figure;
for i=1 : num
subplot(4, 5, i);
imshow(imds.Files{per(i)});
end
img = readimage(imds,1);
[trainData, validData] = splitEachLabel(imds,0.8);
% defining optimizable variables
optVars = [
optimizableVariable('depth', [1 5], "Type","integer")
optimizableVariable('initLearningRate', [0.001, 1], "Transform","log")
optimizableVariable('initFilterNum', [8 32], "Type","integer")
optimizableVariable('momentum', [0.7 0.98])
optimizableVariable('l2Reg', [1e-10 1e-2],'Transform','log')]
objFcn = makeObjFcn(trainData,validData);
I get this error:
Array formation and parentheses-style indexing with objects of class
'matlab.io.datastore.ImageDatastore' is not allowed. Use objects of
class 'matlab.io.datastore.ImageDatastore' only as scalars or use a
cell array.
Error in unique>uniqueR2012a (line 191)
a = a(:);
Error in unique (line 103)
[varargout{1:nlhs}] = uniqueR2012a(varargin{:});
Error in main>makeObjFcn/valErrorFun (line 47)
numClasses = numel(unique(vaData));
Error in BayesianOptimization/callObjNormally (line 2560)
[Objective, ConstraintViolations, UserData] = this.ObjectiveFcn(conditionalizeX(this, X));
Error in BayesianOptimization/callObjFcn (line 467)
= callObjNormally(this, X);
Error in BayesianOptimization/runSerial (line 1989)
ObjectiveFcnObjectiveEvaluationTime, ObjectiveNargout] = callObjFcn(this, this.XNext);
Error in BayesianOptimization/run (line 1941)
this = runSerial(this);
Error in BayesianOptimization (line 457)
this = run(this);
Error in bayesopt (line 323)
Results = BayesianOptimization(Options);
The second part (the lines that cause the error):
bayesObj = bayesopt(...
objFcn, optVars, ...
"MaxTime", 1* 3600,...,
"IsObjectiveDeterministic",false,...
"UseParallel",false);
And the third part (the rest of the code):
bestIdx = bayesObj.IndexOfMinimumTrace(end);
fileName = bayesObj.UserDataTrace{bestIdx};
savedStruct = load(fileName);
valError = savedStruct.valError;
function ObjFcn = makeObjFcn(trData,vaData)
ObjFcn = @valErrorFun;
function [valError,cons,fileName] = valErrorFun(optVars)
% making layers
imageSize = [32 32 3];
numClasses = numel(unique(vaData));
layers = buildLayers(...
imageSize,...
optVars.initFilterNum,...
optVars.depth,...
numClasses);
miniBatchSize = 256;
validationFrequency = floor(numel(vaData)/miniBatchSize);
options = trainingOptions('sgdm', ...
'InitialLearnRate',optVars.initLearningRate, ...
'Momentum',optVars.momentum, ...
'MaxEpochs',60, ...
'LearnRateSchedule','piecewise', ...
'LearnRateDropPeriod',40, ...
'LearnRateDropFactor',0.1, ...
'MiniBatchSize',miniBatchSize, ...
'L2Regularization',optVars.l2Reg, ...
'Shuffle','every-epoch', ...
'Verbose',false, ...
'Plots','training-progress', ...
'ValidationData',vaData, ...
'ValidationFrequency',validationFrequency);
pixelRange = [-4 4];
imageAugmenter = imageDataAugmenter( ...
'RandXReflection',true, ...
'RandXTranslation',pixelRange, ...
'RandYTranslation',pixelRange);
datasource = augmentedImageDatastore(...
imageSize,...
trData,...
'DataAugmentation',imageAugmenter);
trainedNet = trainNetwork(datasource,layers,options);
%close(findall(groot,'Tag','NNET_CNN_TRAININGPLOT_FIGURE'))
YPredicted = classify(trainedNet,XValidation);
valError = 1 - mean(YPredicted == YValidation);
fileName = num2str(valError) + ".mat";
save(fileName,'trainedNet','valError','options')
cons = [];
end
end

Related

How to make a QoS prediction with an LSTM network in MATLAB

I am trying to make a QoS prediction on the QWS dataset, but I get the following error:
Error using trainNetwork (line 170)
Too many input arguments.
Error in lstm (line 63)
net = trainNetwork(x_train,y_train,layers,options);
Caused by:
    Error using trainNetwork>iParseInputArguments (line 326)
    Too many input arguments.
data = readtable('C:\Users\Etudiant FST\Documents\études\mini_pjt\d\qws1\qws1.txt');
%test_data = readtable('C:\Users\Etudiant FST\Documents\études\mini_pjt\d\qws2\qws2.txt');
data = data(:,1:10);
x = [];
y = [];
delta_x = 1;
delta_y = 1;
pas = 1;
while (height(data) >= delta_x + delta_y)
x = [x; data(1:delta_x,:)];
y = [y; data(delta_x + 1:delta_x + delta_y,:)];
data(1:pas,:) = [];
end
%numObservations = height(data);
%idxTrain = 1:floor(0.8*numObservations);
%idxTest = floor(0.8*numObservations)+1:numObservations;
%dataTrain = data(idxTrain,:);
%dataTest = data(idxTest,:);
%%for n = 1:numel(dataTrain)
%X = dataTrain{n};
% xt{n} = X(:,1:end-1);
% tt{n} = X(:,2:end);
%%end
height_x = height(x);
split = fix(height_x*0.8);
x_train = x(1:split,:);
x_test = x(split:height_x,:);
y_train = y(1:split,:);
y_test = y(split:height_x,:);
layers = [
sequenceInputLayer(10)
lstmLayer(128,'OutputMode','sequence')
fullyConnectedLayer(10)
regressionLayer];
options = trainingOptions('adam', ...
'MaxEpochs',maxEpochs, ...
'MiniBatchSize',miniBatchSize, ...
'InitialLearnRate',0.01, ...
'GradientThreshold',1, ...
'Shuffle','never', ...
'Plots','training-progress',...
'Verbose',0);
net = trainNetwork(x_train,y_train,layers,options);
I would like it to give me a prediction of the new QoS values from the old ones. Thank you.
As the error message suggests, MATLAB isn't able to detect the correct trainNetwork function to use (since the function is overloaded). This is because the correct overload is selected based on the number and types of the inputs passed to it.
If you look at the LSTM example in the documentation for trainNetwork, you will see that XTrain is a 270-by-1 cell array in which every cell contains an N-by-M array, while YTrain is a 270-by-1 categorical array.
Reshaping your x_train and y_train into those shapes and types should solve the problem. Everything else in the code seems okay to me.
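Since your task is sequence regression rather than classification, the responses would be numeric sequences instead of a categorical array. As a rough illustration, a conversion along these lines might work (the variable names and the one-row-per-sequence grouping are assumptions, not part of your code):
% Convert the table slices into the cell-array-of-sequences format that
% trainNetwork expects for LSTM layers: each cell holds one sequence as a
% numFeatures-by-sequenceLength numeric matrix.
numSeq  = height(x_train);                    % one sequence per table row (assumption)
XTrainC = cell(numSeq,1);
YTrainC = cell(numSeq,1);
for k = 1:numSeq
    XTrainC{k} = table2array(x_train(k,:))';  % 10-by-1 predictor sequence
    YTrainC{k} = table2array(y_train(k,:))';  % 10-by-1 response sequence
end
net = trainNetwork(XTrainC, YTrainC, layers, options);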

Transfer Learning for Regression in MATLAB

I am trying to implement a model that takes an image as input and gives a vector of 26 numbers. I am currently using VGG-16, through the following MATLAB code:
analyzeNetwork(net);
NUM_OUTPUT = 26;
layers = net.Layers;
%output = fullyConnectedLayer(NUM_OUTPUT, ...
% 'Name','output_layer', ...
% 'WeightLearnRateFactor',10, ...
% 'BiasLearnRateFactor',10);
layers = [
layers(1:38)
fullyConnectedLayer(NUM_OUTPUT)
regressionLayer];
%layers(1:67) = freezeWeights(layers(1:67));
miniBatchSize = 5;
validationFrequency = floor(numel(YTrain)/miniBatchSize);
options = trainingOptions('sgdm',...
'InitialLearnRate',0.001, ...
'ValidationData',{XValidation,YValidation},...
'Plots','training-progress',...
'Verbose',false);
net = trainNetwork(XTrain,YTrain,layers,options);
YPred = predict(net,XValidation);
predictionError = YValidation - YPred;
thr = 10;
numCorrect = sum(abs(predictionError) < thr);
numImagesValidation = numel(YValidation);
accuracy = numCorrect/numImagesValidation;
rmse = sqrt(mean(predictionError.^2));
The shapes of XTrain and YTrain are as follows:
XTrain: 224 x 224 x 3 x 140
YTrain: 26 x 140
Running the code above (it is only part of the code, not all of it), I get the following error:
Error using trainNetwork (line 170)
Number of observations in X and Y disagree.
I would appreciate it if somebody could help me figure out what the problem is, because as far as I know the number of samples in both is equal, and the remaining dimensions do not need to match.
Transpose YTrain to be 140x26.
Name your new layers, and wrap them in a layerGraph.
Regression can easily go unstable, so decrease the learning rate or increase the batch size if you get NaNs.
net = vgg16 ; % analyzeNetwork(net);
LAYERS_FREEZE_UNTIL=35;
LAYERS_COPY_UNTIL=38;
NUM_TRAIN_SAMPLES = size(YTrain,1);
NUM_OUTPUT = size(YTrain,2);
my_layers = layerGraph([
freezeWeights(net.Layers(1:LAYERS_FREEZE_UNTIL))
net.Layers(LAYERS_FREEZE_UNTIL+1:LAYERS_COPY_UNTIL)
fullyConnectedLayer(NUM_OUTPUT*2,'Name','my_fc1')
fullyConnectedLayer(NUM_OUTPUT,'Name','my_fc2')
regressionLayer('Name','my_regr')
]);
% figure; plot(my_layers), ylim([0.5,6.5])
% analyzeNetwork(my_layers);
MINI_BATCH_SIZE = 16;
options = trainingOptions('sgdm', ...
'MiniBatchSize',MINI_BATCH_SIZE, ...
'MaxEpochs',20, ...
'InitialLearnRate',1e-4, ...
'Shuffle','every-epoch', ...
'ValidationData',{XValidation,YValidation}, ...
'ValidationFrequency',floor(NUM_TRAIN_SAMPLES/MINI_BATCH_SIZE), ...
'Verbose',true, ...
'Plots','training-progress');
my_net = trainNetwork(XTrain,YTrain,my_layers,options);
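Note that, following the first point above, YTrain (and, by the same logic, YValidation) would need to be transposed before running this; a small assumed preparation step:
YTrain = YTrain';            % 140-by-26: one row per observation
YValidation = YValidation';  % same layout assumed for the validation responses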

Abandoned object detection using an SVM classifier. The error is: Error using svmtrain (line 254), Y and TRAINING must have the same number of rows

I am working on a project for abandoned object detection using an SVM in MATLAB, and I'm totally new to it.
I keep getting the following error:
Error using svmtrain (line 254)
Y and TRAINING must have the same number of rows.
Error in NewSVM (line 35)
SVMStruct = svmtrain(Data,class);
My Code:
imageOne = imread('D:\project\PlaneBackgound.jpg'); % Read a white image
[r,c] = size(imageOne);
imageTwo = imread('D:\d backup\my movie\sankalp 2014\IMG_6984.JPG'); % Read reference image
resize = imresize(imageTwo,[r c/3]); % resize the image
K = imsubtract(imageOne,resize);
imshow(K);
imageTwo = rgb2gray(imageTwo);
imageTwo = resize(imageTwo,[200 200]);
readImages = dir('E:\Matlab\bin\data set\*.jpg'); % Read all the data from the data set
for i = 1 : length(readImages) % read all the required images for the SVM, currently using only two images
filename = strcat('E:\Matlab\bin\data set\',readImages(i).name);
imageRead{i} = imread(filename);
end
for i = 1:2 % using a for loop to convert the images read into grayscale and modify the size and shape
currentimage = imageRead{i};
images{i} = currentimage;
images{i} = im2double(images{i});
images{i} = imresize(images{i},[200 200]);
images{i} = rgb2gray(images{i});
images{i} = reshape(images{i}', 1, size(images{i},1)*size(images{i},2));
end
d = size(images);
for ii=1:d
trainData(ii,:) = images(ii);
end
class = [1 -1];
Data = str2double(trainData);
SVMStruct = svmtrain(Data,class); % Here is where I get the error and cannot proceed any further
imageTwo = imresize(imageTwo, [200 200]);
imageTwo = reshape (imageTwo, 1, size(imageTwo,1)*size(imageTwo,2));
result = svmclassify(SVMStruct,imageTwo);

fminsearch on a function internally using matrices

I'm having trouble minimizing a rather complicated function:
% Current densities - psi is a 4x1 fourvector
j0 = @(psi) psi' * psi;
% Solutions
chi1 = @(n_,r_,kt_,theta_) ...
1/sqrt(2) * ...
[ exp(1i*n_*theta_) .* besselj(n_,kt_*r_); ...
exp(1i*(n_+1)*theta_) .* besselj(n_+1,kt_*r_) ];
chi2 = @(n_,r_,kt_,theta_) ...
1/sqrt(2) * ...
[ exp(1i*n_*theta_) .* besselj(n_,kt_*r_); ...
-exp(1i*(n_+1)*theta_) .* besselj(n_+1,kt_*r_) ];
uplus = @(n_,E_,m_,r_,kz_,kt_,theta_) ...
sqrt((E_+m_)/(4*m_)) * ...
[ chi1(n_,r_,kt_,theta_); ...
(kz_-1i*kt_)/(E_+m_) * chi2(n_,r_,kt_,theta_) ];
Here n, E, m, kz, and theta are all constants for our purposes. I need to fit j0 to a step function (1 for r = 1...10), with the model function being the four-vector psi, built by summing uplus over values of kt that are zeros of besselj(n, kt/10*r), so that besselj(0, kt*10) is zero. The problem is that fminsearch doesn't like my complicated setup:
uplus_reduced = @(r_,kt_) uplus(0,E,m,r_,kz,kt_,0);
error_function = @(r_,coeffs_) error_j0_stepfunction(r_,coeffs_,...
uplus_reduced);
coeffs(1,:) = fminsearch( @(r) error_function(r',coeffs(:,1)), r);
where error_j0_stepfunction is this:
function error = error_j0_stepfunction(r,coeffs,basisspinor)
% Zeros of BesselJ
Nzeros = 100;
lambda = [ besselzero(0,Nzeros,1)';
besselzero(1,Nzeros,1)';
besselzero(2,Nzeros,1)' ];
% calculate psi= sum over zeros
psi = zeros(4,length(r));
for k=1:length(coeffs)
size_psi = size(psi(:,:) )
size_coeffs = size(coeffs(k))
size_basisspinor = size( basisspinor(r(:)', lambda(1,k)/10))
psi(:,:) = psi(:,:) + coeffs(k) * ...
basisspinor(r(:), lambda(1,k)/10);
end;
% calculate density (j0)
density = zeros(1,length(r));
for k=1:length(r)
density(k) = j0(psi(:,k));
end;
% calculate square error
error = sum((1-density(:)).^2);
end
I hope I have been clear enough, and at the same time concise enough, for this to be answerable. Thanks for any help!
EDIT: The error I get is this (with output making it nonsensical to me):
size_psi = 4 1000
size_coeffs = 1 1
size_basisspinor = 4 1000
Error using +
Matrix dimensions must agree.
Error in error_j0_stepfunction (line 15)
psi(:,:) = psi(:,:) + coeffs(k) * ...
Error in @(r_,coeffs_)error_j0_stepfunction(r_,coeffs_,uplus_reduced)
Error in @(r)error_function(r',coeffs(:,1))
Error in fminsearch (line 191)
fv(:,1) = funfcn(x,varargin{:});
Error in dirac_stepfunction (line 17)
coeffs(1,:) = fminsearch( @(r) error_function(r',coeffs(:,1)), r);
Error in dirac (line 7)
dirac_stepfunction;
It should make enough sense to anyone reading the error and the code above. coeffs is a 20x3 matrix, and r is 1x1000 (or 1000x1).
Your call to basisspinor in the size statement has r(:)' as its first argument, while the second call to basisspinor has just r(:). I guess the first version was also intended for the second call.
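In other words, a minimal sketch of the suggested fix (everything else in error_j0_stepfunction stays unchanged):
% Use the row-vector form r(:)' in both places, so that basisspinor
% returns a 4-by-length(r) array matching the shape of psi.
psi(:,:) = psi(:,:) + coeffs(k) * ...
    basisspinor(r(:)', lambda(1,k)/10);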

How can I fix an undefined function error in MATLAB?

I'm trying to write the code for quadrature phase-shift keying (QPSK) with zero-forcing when N = 2, and I get an error.
Here is the code:
Modulation = 'QPSK'
Decode_Method = 'ZeroForcing'
switch Modulation
case {'QPSK'}
Symbols = [ 1+j 1-j -1+j -1-j ]';
end
Symbols = Symbols.';
nSymbols = length(Symbols);
SNR_Array = [0.3 0.7 1.2 2.5 5 6.2 10 15.4 22 45 75.7 100.0];
nSNR = length(SNR_Array);
Ntest = 20;
N = 2;
for iSNR = 1 : nSNR
SNR = SNR_Array(iSNR);
Nerror = 0;
for i = 1:Ntest
H = randn(N,N) + j*randn(N,N);
X = Symbols( ceil( nSymbols*rand(N,1) ) )';
Noise = (randn(N,1) + j*randn(N,1))/sqrt(2)/sqrt(SNR);
Y = H*X + Noise;
switch Decode_Method
case {'ZeroForcing'}
X_Decode = Zero_Forcing(Y,H,Symbols);
end
end
Nerror = Nerror + length( find( X ~= X_Decode) );
end
Symbol_Error_Rate(iSNR) = Nerror/Ntest/N;
figure(1)
loglog(SNR_Array, Symbol_Error_Rate,'b')
hold on
xlabel('SNR')
ylabel('Symbol Error Ratio')
title('Symbol Error Ratio for NxN MIMO System')
And the error is:
??? Undefined function or method 'Zero_Forcing' for input arguments of type 'double'.
Error in ==> Untitled2 at 33
X_Decode = Zero_Forcing(Y,H,Symbols);
How can I fix this error?
The error indicates that MATLAB cannot find the function Zero_Forcing. If you have a function of that name, you should make sure it's on the MATLAB path, that is, a directory MATLAB knows about. Otherwise, you should write the function. It seems rather important.
Also, you may want to not call your function 'Untitled2', but give it a more meaningful name.
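If the function does not exist yet, here is a minimal sketch of what a zero-forcing decoder could look like, under the assumption that it should equalize with the pseudo-inverse of the channel and then map each equalized sample to the nearest constellation symbol (this is a guess at the intended behaviour, not your original code):
function X_Decode = Zero_Forcing(Y, H, Symbols)
% Zero-forcing detection: invert the channel, then slice each
% equalized sample to the nearest constellation point.
X_ZF = pinv(H) * Y;                  % equalized receive vector (N x 1)
X_Decode = zeros(size(X_ZF));
for k = 1:length(X_ZF)
    [minDist, idx] = min(abs(X_ZF(k) - Symbols));  % nearest symbol
    X_Decode(k) = Symbols(idx);
end
end
Save it as Zero_Forcing.m in a folder on the MATLAB path (for example, the same folder as your script).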