Constant error in neural network, MatConvNet

Solved: Previously my dataset had around 1,000 images. I increased it to 50,000 and now the neural network learns and works.
I have created a convolutional neural network for recognizing three emotions from facial expressions (positive, neutral, negative). Somehow, my error function does not get any better (error image). Training and validation error stay constant for 100 epochs. What could be the reason?
Why is the error constant?
Here's my code:
function training(varargin)
setup ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
rngNum = 1; % rng number for random weight initialization, e.g., 1,2,3
num_fcHiddenNeuron =1024; % # neurons in the fully-connected hidden layer
prob_fcDropout = 0.5; % dropout probability in the fully-connected hidden layer,
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% input data for training deep CNNs
imdb1 = load(['trainingdata']) ;
imdb2 = load(['testdata']) ;
imdb.images.data = cat(4, imdb1.images.data, imdb2.images.data);
imdb.images.labels = cat(2, imdb1.images.labels, imdb2.images.labels);
imdb.images.set = cat(2, imdb1.images.set, imdb2.images.set);
imdb.meta = imdb1.meta;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
trainOpts.batchSize = 200 ;
trainOpts.numEpochs = 100 ;
trainOpts.gpus = [] ;
trainOpts.continue = true ;
trainOpts.learningRate = [0.004*ones(1,25), 0.002*ones(1,25), 0.001*ones(1,25), 0.0005*ones(1,25)];
trainOpts = vl_argparse(trainOpts, varargin);
%% Training Deep CNNs
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CNN configuration
net.layers = {} ;
% %
% % %% Conv1 - MaxPool1
rng(rngNum) %control random number generation
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.01*randn(3,3,1,32, 'single'), 0.1*ones(1, 32, 'single')}}, ...
'stride', 1, ...
'pad', 1, ...
'filtersLearningRate', 1, ...
'biasesLearningRate', 1, ...
'filtersWeightDecay', 1/5, ...
'biasesWeightDecay', 0) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'pool', ...
'method', 'max', ...
'pool', [2 2], ...
'stride', 2, ...
'pad', 0) ;
% %%% Conv2 - MaxPool2
rng(rngNum)
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.01*randn(3,3,32,32, 'single'), 0.1*ones(1, 32, 'single')}}, ...
'stride', 1, ...
'pad', 0, ...
'filtersLearningRate', 1, ...
'biasesLearningRate', 1, ...
'filtersWeightDecay', 1/5, ...
'biasesWeightDecay', 0) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'pool', ...
'method', 'max', ...
'pool', [2 2], ...
'stride', 2, ...
'pad', [1, 0, 1, 0]) ;
% %%% Conv3 - MaxPool3
rng(rngNum)
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.01*randn(3,3,32,64, 'single'), 0.1*ones(1, 64, 'single')}}, ...
'stride', 1, ...
'pad', 1, ...
'filtersLearningRate', 1, ...
'biasesLearningRate', 1, ...
'filtersWeightDecay', 1/5, ...
'biasesWeightDecay', 0) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'pool', ...
'method', 'max', ...
'pool', [2 2], ...
'stride', 2, ...
'pad', 0) ;
% %%% Fc Hidden
rng(rngNum)
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.001*randn(5,5,64,num_fcHiddenNeuron, 'single'), 0.01*ones(1, num_fcHiddenNeuron, 'single')}}, ...
'stride', 1, ...
'pad', 0, ...
'filtersLearningRate', 1, ...
'biasesLearningRate', 1, ...
'filtersWeightDecay', 1/5, ...
'biasesWeightDecay', 0) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'dropout', ...
'rate', prob_fcDropout) ;
%
% %%% Fc Output
rng(rngNum)
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{zeros(1,1,num_fcHiddenNeuron, 3, 'single'), zeros(1, 3, 'single')}}, ...
'stride', 1, ...
'pad', 0, ...
'filtersLearningRate', 1, ...
'biasesLearningRate', 1, ...
'filtersWeightDecay', 4, ...
'biasesWeightDecay', 0) ;
net.layers{end+1} = struct('type', 'softmaxloss') ;
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% starting to train deep CNN
[net,info] = cnn_train(net, imdb, getBatch(trainOpts), trainOpts, 'val', find(imdb.images.set == 2)) ;
net.layers(end) = [] ;
function fn = getBatch(opts)
% -------------------------------------------------------------------------
fn = @(x,y) getSimpleNNBatch(x,y) ;
end
% -------------------------------------------------------------------------
function [images, labels] = getSimpleNNBatch(imdb, batch)
% -------------------------------------------------------------------------
images = imdb.images.data(:,:,:,batch) ;
labels = imdb.images.labels(1,batch) ;
end

Related

Train the network with MATLAB and MatConvNet

I want to train my network using MATLAB and matconvnet-1.0-beta25.
My problem is regression and I use pdist as the loss function to get MSE.
The input data is 56*56*64*6000, the target data is 56*56*64*6000, and the network architecture is as follows:
opts.networkType = 'simplenn' ;
opts = vl_argparse(opts, varargin) ;
lr = [.01 2] ;
% Define network CIFAR10-quick
net.layers = {} ;
% Block 1
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.01*randn(5,5,64,32, 'single'), zeros(1, 32, 'single')}}, ...
'learningRate', lr, ...
'stride', 1, ...
'pad', 2) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.05*randn(5,5,32,16, 'single'), zeros(1,16,'single')}}, ...
'learningRate', .1*lr, ...
'stride', 1, ...
'pad', 2) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.01*randn(5,5,16,8, 'single'), zeros(1, 8, 'single')}}, ...
'learningRate', lr, ...
'stride', 1, ...
'pad', 2) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.05*randn(5,5,8,16, 'single'), zeros(1,16,'single')}}, ...
'learningRate', .1*lr, ...
'stride', 1, ...
'pad', 2) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.01*randn(5,5,16,32, 'single'), zeros(1, 32, 'single')}}, ...
'learningRate', lr, ...
'stride', 1, ...
'pad', 2) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{0.05*randn(5,5,32,64, 'single'), zeros(1,64,'single')}}, ...
'learningRate', .1*lr, ...
'stride', 1, ...
'pad', 2) ;
net.layers{end+1} = struct('type', 'relu') ;
% Loss layer
net.layers{end+1} = struct('type', 'pdist') ;
% Meta parameters
net.meta.inputSize = [56 56 64] ;
net.meta.trainOpts.learningRate = [0.0005*ones(1,30) 0.0005*ones(1,10) 0.0005*ones(1,5)] ;
net.meta.trainOpts.weightDecay = 0.0001 ;
net.meta.trainOpts.batchSize = 100 ;
net.meta.trainOpts.numEpochs = numel(net.meta.trainOpts.learningRate) ;
% Fill in default values
net = vl_simplenn_tidy(net) ;
I changed the getSimpleNNBatch(imdb, batch) function in ncnn_train (my renamed copy of the training script) as follows:
function [images, labels] = getSimpleNNBatch(imdb, batch)
images = imdb.images.data(:,:,:,batch) ;
labels = imdb.images.labels(:,:,:,batch) ;
if rand > 0.5, images=fliplr(images) ;
end
because my labels are multi-dimensional.
I also changed errorFunction in cnn_train from 'multiclass' to 'none':
opts.errorFunction = 'none' ;
and changed the error variable from:
% accumulate errors
error = sum([error, [...
sum(double(gather(res(end).x))) ;
reshape(params.errorFunction(params, labels, res),[],1) ; ]],2) ;
to:
% accumulate errors
error = sum([error, [...
mean(mean(mean(double(gather(res(end).x))))) ;
reshape(params.errorFunction(params, labels, res),[],1) ; ]],2) ;
My first question is: why is the third dimension of res(end).x in the command above 1 instead of 64? Its size is 56*56*1*100 (100 is the batch size).
Have I made a mistake?
Here are the results:
train: epoch 01: 2/ 40: 10.1 (27.0) Hz objective: 21360.722
train: epoch 01: 3/ 40: 13.0 (30.0) Hz objective: 67328685.873
...
train: epoch 01: 39/ 40: 29.7 (29.6) Hz objective: 5179175.587
train: epoch 01: 40/ 40: 29.8 (30.6) Hz objective: 5049697.440
val: epoch 01: 1/ 10: 87.3 (87.3) Hz objective: 49.512
val: epoch 01: 2/ 10: 88.9 (90.5) Hz objective: 50.012
...
val: epoch 01: 9/ 10: 88.2 (88.2) Hz objective: 49.936
val: epoch 01: 10/ 10: 88.1 (87.3) Hz objective: 49.962
train: epoch 02: 1/ 40: 30.2 (30.2) Hz objective: 49.650
train: epoch 02: 2/ 40: 30.3 (30.4) Hz objective: 49.704
...
train: epoch 02: 39/ 40: 30.2 (31.6) Hz objective: 49.739
train: epoch 02: 40/ 40: 30.3 (31.0) Hz objective: 49.722
val: epoch 02: 1/ 10: 91.8 (91.8) Hz objective: 49.687
val: epoch 02: 2/ 10: 92.0 (92.2) Hz objective: 49.831
...
val: epoch 02: 9/ 10: 92.0 (88.5) Hz objective: 49.931
val: epoch 02: 10/ 10: 91.9 (91.1) Hz objective: 49.962
train: epoch 03: 1/ 40: 31.7 (31.7) Hz objective: 49.014
train: epoch 03: 2/ 40: 31.2 (30.8) Hz objective: 49.237
...
Here is my network schema image.
The two inputs of pdist have size n*m*64*100, and as mentioned above, the output of pdist has the same height and width but a depth of one. As for the correctness of your error definition, you should debug and check the sizes and the definition carefully.
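A minimal sketch of how to check this yourself, assuming MatConvNet's vl_nnpdist (the function behind the 'pdist' layer type) is on the path; the random arrays here are only stand-ins for real network activations and targets:
% Hypothetical shapes matching the question (H x W x C x N); values are random.
x  = randn(56, 56, 64, 100, 'single') ;   % stand-in for the network output
x0 = randn(56, 56, 64, 100, 'single') ;   % stand-in for the regression targets
y  = vl_nnpdist(x, x0, 2) ;               % p = 2 distance, taken across channels
disp(size(y))                             % expected: 56 56 1 100 (depth collapses to 1)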

The same function behaves differently, why?

I have this function below which works perfectly, as I need:
function [pointsQRS, pointsP, pointsT] = VCG (pointsQRS,pointsP,pointsT)
global ax1 ax2 h
figure('Name','Vektorkardiogram','NumberTitle','off','Color',[0.8 0.8 0.8])
%% first axes
ax1=subplot(1,2,1);
set(ax1,'Position',[0.1,0.2,0.3,0.7])
title('Vektorkardiogram')
hold on
grid on
axis vis3d
view([0 10])
plotCurve
mArrow3([1.5 2 -1],[-0.5, 2, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, -0.5, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, 2, 1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
text(-0.5, 2, -1, 'Vx','FontSize',12);
text(1.5, -0.5, -1, 'Vy','FontSize',12);
text(1.5, 2, 1, 'Vz','FontSize',12);
%% second axes
ax2=subplot(1,2,2);
set(ax2,'Position',[0.6,0.2,0.3,0.7])
title('Vektorkardiogram')
hold on
grid on
axis vis3d
view([10 10])
plotCurve
function plotCurve
for i=2:size(pointsQRS,1)
if mod(i,2)==0
QRS=plot3(pointsQRS([i-1:i],1),pointsQRS([i-1:i],2),pointsQRS([i-1:i],3),'-g','LineWidth',1);
else
plot3(pointsQRS([i-1:i],1),pointsQRS([i-1:i],2),pointsQRS([i-1:i],3),'Color',[0 0 0],'LineWidth',1);
end
end
for i=2:size(pointsT,1)
if mod(i,2)==0
T=plot3(pointsT([i-1:i],1),pointsT([i-1:i],2),pointsT([i-1:i],3),'-r','LineWidth',1);
else
plot3(pointsT([i-1:i],1),pointsT([i-1:i],2),pointsT([i-1:i],3),'Color',[0 0 0],'LineWidth',1);
end
end
for i=2:size(pointsP,1)
if mod(i,2)==0
P=plot3(pointsP([i-1:i],1),pointsP([i-1:i],2),pointsP([i-1:i],3),'-b','LineWidth',1);
else
plot3(pointsP([i-1:i],1),pointsP([i-1:i],2),pointsP([i-1:i],3),'Color',[0 0 0],'LineWidth',1);
end
end
xlabel('Vx');ylabel('Vy');zlabel('Vz');
end
mArrow3([1.5 2 -1],[-0.5, 2, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, -0.5, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, 2, 1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
text(-0.5, 2, -1, 'Vx','FontSize',12);
text(1.5, -0.5, -1, 'Vy','FontSize',12);
text(1.5, 2, 1, 'Vz','FontSize',12);
%% Slider Rotace
S = uicontrol('Style','slider',...
'Position',[10 10 300 20],...
'Max',180,...
'Min',-180,...
'Value',0,...
'SliderStep',[1/360 1/360]);
LS=addlistener(S,'ContinuousValueChange',@slider_callback);
set(S,'UserData',LS)
end
function slider_callback(hObject,eventData)
global ax1 ax2
val = get(hObject,'value');
view(ax1,[val,10])
view(ax2,[val+10,10])
end
But when I changed the code and simplified it (in another function I only need pointsT), it gives me the error shown in the picture.
The simplified code is:
function pointsT = VCG_T (pointsT)
global ax1_T ax2_T h_T
figure('Name','Vektorkardiogram','NumberTitle','off','Color',[0.8 0.8 0.8])
%% first axes
ax1_T=subplot(1,2,1);
set(ax1_T,'Position',[0.1,0.2,0.3,0.7])
title('Vektorkardiogram')
hold on
grid on
axis vis3d
view([0 10])
plotCurve
mArrow3([1.5 2 -1],[-0.5, 2, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, -0.5, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, 2, 1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
text(-0.5, 2, -1, 'Vx','FontSize',12);
text(1.5, -0.5, -1, 'Vy','FontSize',12);
text(1.5, 2, 1, 'Vz','FontSize',12);
%% second axes
ax2_T=subplot(1,2,2);
set(ax2_T,'Position',[0.6,0.2,0.3,0.7])
title('Vektorkardiogram')
hold on
grid on
axis vis3d
view([10 10])
plotCurve
for i=2:size(pointsT,1)
if mod(i,2)==0
T=plot3(pointsT([i-1:i],1),pointsT([i-1:i],2),pointsT([i-1:i],3),'-r','LineWidth',1);
else
plot3(pointsT([i-1:i],1),pointsT([i-1:i],2),pointsT([i-1:i],3),'Color',[0 0 0],'LineWidth',1);
end
end
mArrow3([1.5 2 -1],[-0.5, 2, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, -0.5, -1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
mArrow3([1.5 2 -1],[1.5, 2, 1], 'stemWidth', 0.005,'color','red','facealpha',0.3);
text(-0.5, 2, -1, 'Vx','FontSize',12);
text(1.5, -0.5, -1, 'Vy','FontSize',12);
text(1.5, 2, 1, 'Vz','FontSize',12);
%% Slider Rotace
S = uicontrol('Style','slider',...
'Position',[10 10 300 20],...
'Max',180,...
'Min',-180,...
'Value',0,...
'SliderStep',[1/360 1/360]);
LS=addlistener(S,'ContinuousValueChange',@slider_callback);
set(S,'UserData',LS)
end
function slider_callback(hObject,eventData)
global ax1_T ax2_T
val = get(hObject,'value');
view(ax1_T,[val,10])
view(ax2_T,[val+10,10])
end
The picture of the error:
I have literally no idea why the simplified code doesn't work; I'm probably overlooking something.
Could you please give me a hint?
You have removed the line function plotCurve. This is an important one, since it defines a local function that the main function VCG calls. In your reduced example this function no longer exists because you removed its header. That's why you see this error.
Just put it back (before the first loop) and it should work.
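A minimal, self-contained sketch of the fix just described; only the plotCurve structure is shown here, and the arrows, labels, and slider from the original are deliberately omitted:
function pointsT = VCG_T(pointsT)
figure('Name','Vektorkardiogram','NumberTitle','off')
subplot(1,2,1); hold on; grid on; view([0 10]);  plotCurve
subplot(1,2,2); hold on; grid on; view([10 10]); plotCurve
    function plotCurve   % restored header: nested function, shares pointsT with VCG_T
        for i=2:size(pointsT,1)
            plot3(pointsT(i-1:i,1),pointsT(i-1:i,2),pointsT(i-1:i,3),'-r','LineWidth',1);
        end
    end
end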

MatConvNet nnloss error 'Index exceeds matrix dimensions'

I have made my own IMDB using a set of 51,000 images categorized into 43 different categories of road traffic signs. However, when I want to use my own IMDB to train the AlexNet network, I get an error which says: Index exceeds matrix dimensions.
Error in vl_nnloss (line 230)
t = - log(x(ci)) ;
Do you have an idea what I am doing wrong? I have checked through my IMDB, and the images, labels and sets have been appropriately created as specified in my code. Also, the image array is declared as type single and not uint8.
Here is my training code:
function [net, info] = alexnet_train(imdb, expDir)
run(fullfile(fileparts(mfilename('fullpath')), '../../', 'matlab', 'vl_setupnn.m')) ;
% some common options
opts.train.batchSize = 100;
opts.train.numEpochs = 20 ;
opts.train.continue = true ;
opts.train.gpus = [1] ;
opts.train.learningRate = [1e-1*ones(1, 10), 1e-2*ones(1, 5)];
opts.train.weightDecay = 3e-4;
opts.train.momentum = 0.;
opts.train.expDir = expDir;
opts.train.numSubBatches = 1;
% getBatch options
bopts.useGpu = numel(opts.train.gpus) > 0 ;
% network definition!
% MATLAB handle, passed by reference
net = dagnn.DagNN() ;
net.addLayer('conv1', dagnn.Conv('size', [11 11 3 96], 'hasBias', true, 'stride', [4, 4], 'pad', [0 0 0 0]), {'input'}, {'conv1'}, {'conv1f' 'conv1b'});
net.addLayer('relu1', dagnn.ReLU(), {'conv1'}, {'relu1'}, {});
net.addLayer('lrn1', dagnn.LRN('param', [5 1 2.0000e-05 0.7500]), {'relu1'}, {'lrn1'}, {});
net.addLayer('pool1', dagnn.Pooling('method', 'max', 'poolSize', [3, 3], 'stride', [2 2], 'pad', [0 0 0 0]), {'lrn1'}, {'pool1'}, {});
net.addLayer('conv2', dagnn.Conv('size', [5 5 48 256], 'hasBias', true, 'stride', [1, 1], 'pad', [2 2 2 2]), {'pool1'}, {'conv2'}, {'conv2f' 'conv2b'});
net.addLayer('relu2', dagnn.ReLU(), {'conv2'}, {'relu2'}, {});
net.addLayer('lrn2', dagnn.LRN('param', [5 1 2.0000e-05 0.7500]), {'relu2'}, {'lrn2'}, {});
net.addLayer('pool2', dagnn.Pooling('method', 'max', 'poolSize', [3, 3], 'stride', [2 2], 'pad', [0 0 0 0]), {'lrn2'}, {'pool2'}, {});
net.addLayer('conv3', dagnn.Conv('size', [3 3 256 384], 'hasBias', true, 'stride', [1, 1], 'pad', [1 1 1 1]), {'pool2'}, {'conv3'}, {'conv3f' 'conv3b'});
net.addLayer('relu3', dagnn.ReLU(), {'conv3'}, {'relu3'}, {});
net.addLayer('conv4', dagnn.Conv('size', [3 3 192 384], 'hasBias', true, 'stride', [1, 1], 'pad', [1 1 1 1]), {'relu3'}, {'conv4'}, {'conv4f' 'conv4b'});
net.addLayer('relu4', dagnn.ReLU(), {'conv4'}, {'relu4'}, {});
net.addLayer('conv5', dagnn.Conv('size', [3 3 192 256], 'hasBias', true, 'stride', [1, 1], 'pad', [1 1 1 1]), {'relu4'}, {'conv5'}, {'conv5f' 'conv5b'});
net.addLayer('relu5', dagnn.ReLU(), {'conv5'}, {'relu5'}, {});
net.addLayer('pool5', dagnn.Pooling('method', 'max', 'poolSize', [3 3], 'stride', [2 2], 'pad', [0 0 0 0]), {'relu5'}, {'pool5'}, {});
net.addLayer('fc6', dagnn.Conv('size', [6 6 256 4096], 'hasBias', true, 'stride', [1, 1], 'pad', [0 0 0 0]), {'pool5'}, {'fc6'}, {'conv6f' 'conv6b'});
net.addLayer('relu6', dagnn.ReLU(), {'fc6'}, {'relu6'}, {});
net.addLayer('fc7', dagnn.Conv('size', [1 1 4096 4096], 'hasBias', true, 'stride', [1, 1], 'pad', [0 0 0 0]), {'relu6'}, {'fc7'}, {'conv7f' 'conv7b'});
net.addLayer('relu7', dagnn.ReLU(), {'fc7'}, {'relu7'}, {});
net.addLayer('classifier', dagnn.Conv('size', [1 1 4096 10], 'hasBias', true, 'stride', [1, 1], 'pad', [0 0 0 0]), {'relu7'}, {'classifier'}, {'conv8f' 'conv8b'});
net.addLayer('prob', dagnn.SoftMax(), {'classifier'}, {'prob'}, {});
net.addLayer('objective', dagnn.Loss('loss', 'log'), {'prob', 'label'}, {'objective'}, {});
net.addLayer('error', dagnn.Loss('loss', 'classerror'), {'prob','label'}, 'error') ;
% -- end of the network
% initialization of the weights (CRITICAL!!!!)
initNet(net, 1/100);
% do the training!
info = cnn_train_dag(net, imdb, @(i,b) getBatch(bopts,i,b), opts.train, 'val', find(imdb.images.set == 3)) ;
end
function initNet(net, f)
net.initParams();
f_ind = net.layers(1).paramIndexes(1);
b_ind = net.layers(1).paramIndexes(2);
net.params(f_ind).value = 10*f*randn(size(net.params(f_ind).value), 'single');
net.params(f_ind).learningRate = 1;
net.params(f_ind).weightDecay = 1;
for l=2:length(net.layers)
% is it a convolution layer?
if(strcmp(class(net.layers(l).block), 'dagnn.Conv'))
f_ind = net.layers(l).paramIndexes(1);
b_ind = net.layers(l).paramIndexes(2);
[h,w,in,out] = size(net.params(f_ind).value);
net.params(f_ind).value = f*randn(size(net.params(f_ind).value), 'single');
net.params(f_ind).learningRate = 1;
net.params(f_ind).weightDecay = 1;
net.params(b_ind).value = f*randn(size(net.params(b_ind).value), 'single');
net.params(b_ind).learningRate = 0.5;
net.params(b_ind).weightDecay = 1;
end
end
end
% function on charge of creating a batch of images + labels
function inputs = getBatch(opts, imdb, batch)
%[227 by 227 by 3] image
images = imdb.images.data(:,:,:,batch) ;
labels = imdb.images.labels(1,batch) ;
if opts.useGpu > 0
images = gpuArray(images) ;
end
inputs = {'input', images, 'label', labels} ;
end
Your network is not correct. The conv1 layer must be [11 11 3 48]. If that doesn't work, check your network again; this error occurs because of errors in the network definition.
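For concreteness, a sketch of the change this answer suggests: only the filter bank size of conv1 changes from [11 11 3 96] to [11 11 3 48], which also matches the 48 input channels that conv2 expects in the question's code; the rest of the call is copied from the question and should be verified against your data.
net.addLayer('conv1', dagnn.Conv('size', [11 11 3 48], 'hasBias', true, 'stride', [4, 4], 'pad', [0 0 0 0]), {'input'}, {'conv1'}, {'conv1f' 'conv1b'});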

Using AlexNet for prediction on new data after training

I am using the Hands-on DL Tutorial (http://www.cvc.uab.es/~gros/index.php/hands-on-deep-learning-with-matconvnet/) to understand how convolutional neural networks (CNNs) work.
To start, I compiled MatConvNet and ran AlexNet with the following network structure:
net = dagnn.DagNN() ;
% special padding for CIFAR-10
net.addLayer('conv1', dagnn.Conv('size', [11 11 3 96], 'hasBias', true, 'stride', [4, 4], 'pad', [20 20 20 20]), {'input'}, {'conv1'}, {'conv1f' 'conv1b'});
net.addLayer('relu1', dagnn.ReLU(), {'conv1'}, {'relu1'}, {});
net.addLayer('lrn1', dagnn.LRN('param', [5 1 2.0000e-05 0.7500]), {'relu1'}, {'lrn1'}, {});
net.addLayer('pool1', dagnn.Pooling('method', 'max', 'poolSize', [3, 3], 'stride', [2 2], 'pad', [0 0 0 0]), {'lrn1'}, {'pool1'}, {});
net.addLayer('conv2', dagnn.Conv('size', [5 5 48 256], 'hasBias', true, 'stride', [1, 1], 'pad', [2 2 2 2]), {'pool1'}, {'conv2'}, {'conv2f' 'conv2b'});
net.addLayer('relu2', dagnn.ReLU(), {'conv2'}, {'relu2'}, {});
net.addLayer('lrn2', dagnn.LRN('param', [5 1 2.0000e-05 0.7500]), {'relu2'}, {'lrn2'}, {});
net.addLayer('pool2', dagnn.Pooling('method', 'max', 'poolSize', [3, 3], 'stride', [2 2], 'pad', [0 0 0 0]), {'lrn2'}, {'pool2'}, {});
net.addLayer('conv3', dagnn.Conv('size', [3 3 256 384], 'hasBias', true, 'stride', [1, 1], 'pad', [1 1 1 1]), {'pool2'}, {'conv3'}, {'conv3f' 'conv3b'});
net.addLayer('relu3', dagnn.ReLU(), {'conv3'}, {'relu3'}, {});
net.addLayer('conv4', dagnn.Conv('size', [3 3 192 384], 'hasBias', true, 'stride', [1, 1], 'pad', [1 1 1 1]), {'relu3'}, {'conv4'}, {'conv4f' 'conv4b'});
net.addLayer('relu4', dagnn.ReLU(), {'conv4'}, {'relu4'}, {});
net.addLayer('conv5', dagnn.Conv('size', [3 3 192 256], 'hasBias', true, 'stride', [1, 1], 'pad', [1 1 1 1]), {'relu4'}, {'conv5'}, {'conv5f' 'conv5b'});
net.addLayer('relu5', dagnn.ReLU(), {'conv5'}, {'relu5'}, {});
net.addLayer('pool5', dagnn.Pooling('method', 'max', 'poolSize', [3 3], 'stride', [2 2], 'pad', [0 0 0 0]), {'relu5'}, {'pool5'}, {});
net.addLayer('fc6', dagnn.Conv('size', [1 1 256 4096], 'hasBias', true, 'stride', [1, 1], 'pad', [0 0 0 0]), {'pool5'}, {'fc6'}, {'conv6f' 'conv6b'});
net.addLayer('relu6', dagnn.ReLU(), {'fc6'}, {'relu6'}, {});
net.addLayer('fc7', dagnn.Conv('size', [1 1 4096 4096], 'hasBias', true, 'stride', [1, 1], 'pad', [0 0 0 0]), {'relu6'}, {'fc7'}, {'conv7f' 'conv7b'});
net.addLayer('relu7', dagnn.ReLU(), {'fc7'}, {'relu7'}, {});
net.addLayer('classifier', dagnn.Conv('size', [1 1 4096 10], 'hasBias', true, 'stride', [1, 1], 'pad', [0 0 0 0]), {'relu7'}, {'classifier'}, {'conv8f' 'conv8b'});
net.addLayer('prob', dagnn.SoftMax(), {'classifier'}, {'prob'}, {});
net.addLayer('objective', dagnn.Loss('loss', 'log'), {'prob', 'label'}, {'objective'}, {});
net.addLayer('error', dagnn.Loss('loss', 'classerror'), {'prob','label'}, 'error') ;
I load the dataset (imdb_cifar10.mat) and train the network:
imdb_cifar10 = load('../data/imdb_cifar10.mat');
[net_alexnet, info] = alexnet_train(imdb_cifar10, 'results/cifar_10_experiment_1');
After 10 epochs, I have 10 net-epoch-x.mat files. I would then like to load one of these files to test on an image, but without success:
net = load('results/cifar_10_experiment_1/net-epoch-10.mat');
net = dagnn.DagNN.loadobj(net.net);
net.meta.classes.description = imdb_cifar10.meta.classes;
im = imread('../data/dog.jpg');
inference_classification(im, net);
Where:
function inference_classification(im, net)
im_ = single(im) ; % note: 0-255 range
im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
im_ = im_ - net.meta.normalization.averageImage;
% run the CNN
net.eval({'input', im_});
% obtain the CNN output
scores = net.vars(net.getVarIndex('prob')).value;
scores = squeeze(gather(scores));
% show the classification results
[bestScore, best] = max(scores);
figure(1) ; clf ; imagesc(im);
title(sprintf('%s (%d), score %.3f', net.meta.classes.description{best}, best, bestScore));
end
MATLAB showed that I have an error at im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
Error
I tried running the code with different versions of MatConvNet (e.g., matconvnet-1.0-beta16 and matconvnet-1.0-beta23) but the result is the same. Could you please give me some advice on solving this problem?
Thank you so much for your time.
Best regards,
An Nhien

MATLAB - histc with many edges vectors

Consider this :
a = [1 ; 7 ; 13];
edges = [1, 3, 6, 9, 12, 15];
[~, bins] = histc(a, edges)
bins =
1
3
5
Now I would like to have the same output, but with a different "edges" vector for each value of a, i.e. a matrix instead of a vector for edges. Example:
a = [1 ; 7 ; 13];
edges = [ 1, 3, 6 ; 1, 4, 15 ; 1, 20, 30];
edges =
1 3 6
1 4 15
1 20 30
indexes = theFunctionINeed(a, edges);
indexes =
1 % 1 inside [1, 3, 6]
2 % 7 inside [1, 4, 15]
1 %13 inside [1, 20, 30]
I could do this with histc inside a for loop, but I'm trying to avoid loops.
If you transform your arrays to cell arrays, you can try
a = {1 ; 7 ; 13};
edges = {[ 1, 3, 6 ];[ 1, 4, 15] ; [1, 20, 30]};
[~, indexes] = cellfun(@histc, a, edges, 'uniformoutput', false)
This results in
indexes =
[1]
[2]
[1]
Edit:
To transform your matrices into cell arrays you can use num2cell:
a = num2cell(a);
edges = num2cell(edges, 2);
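Putting the two steps together, a sketch of the whole pipeline; the cell2mat at the end is only there to turn the cell result back into a numeric column:
a = [1; 7; 13];
edges = [1 3 6; 1 4 15; 1 20 30];
[~, idxCell] = cellfun(@histc, num2cell(a), num2cell(edges, 2), 'uniformoutput', false);
indexes = cell2mat(idxCell)   % -> [1; 2; 1]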
You could also do:
a = [1; 7; 13];
edges = [1 3 6; 1 4 15; 1 20 30];
bins = sum(bsxfun(@ge, a, edges), 2)
The result:
>> bins
bins =
1
2
1
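On MATLAB R2016b or newer, implicit expansion gives an equivalent loop-free version without bsxfun; a sketch of the same idea:
a = [1; 7; 13];
edges = [1 3 6; 1 4 15; 1 20 30];
bins = sum(a >= edges, 2)   % implicit expansion: same result, [1; 2; 1]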