When I use fasterRCNNLayers to create a Faster R-CNN network from a custom network, it throws an error while upgrading the network to a Faster R-CNN network. I am using MATLAB R2022a Update 3.
I traced the error and found that the problem occurs when the classification layers are deleted and the Faster R-CNN layers are added. What are the restrictions on upgrading a network? I could not find any related clue in the documentation.
My CNN network code (cup_use_net_h.m) is as follows:
numTrainingFiles = 30;
[imdsTrain,imdsTest] = splitEachLabel(imds,numTrainingFiles,'randomize');
layers = [ ...
imageInputLayer([180 60 1])
convolution2dLayer(5,40)
reluLayer
maxPooling2dLayer(2,'Stride',2)
convolution2dLayer(5,10)
reluLayer
maxPooling2dLayer(5,'Stride',5)
fullyConnectedLayer(2,'Name','fc') %% fullyConnectedLayer(10)
softmaxLayer
classificationLayer];
options = trainingOptions('sgdm', ...
'MaxEpochs',100,...
'InitialLearnRate',1e-2, ...
'Verbose',true, ...
'Plots','training-progress');
net = trainNetwork(imdsTrain,layers,options);
save("cup_single_mark_net.mat","net");
My fasterRCNNLayers calling program (cup_faster_rcnn_c.m) is as follows:
% load("cup_single_mark_net.mat");
% trainingDataTable = objectDetectorTrainingData(gTruth);
load("cup_single_mark_net.mat");
singlenet = net;
inputImageSize = [450,400,1];
featureLayer = 'relu_2';
anchorBoxes = [180,60; 150,50];
numClasses = 2;
lgraph = fasterRCNNLayers(inputImageSize,numClasses,anchorBoxes, ...
singlenet,featureLayer)
analyzeNetwork(lgraph);
The error message is as follows:
>> cup_faster_rcnn_c
Error using nnet.cnn.LayerGraph>iValidateLayerName
Layer 'fc' does not exist.
Error in nnet.cnn.LayerGraph>iGetDestinationInformation (line 625)
iValidateLayerName( endLayerName, layerNames );
Error in nnet.cnn.LayerGraph/disconnectLayers (line 372)
iGetDestinationInformation(d, this.PrivateDirectedGraph.Nodes.Layers);
Error in vision.internal.cnn.RCNNLayers/insertOrReplaceROIPooling (line 717)
lgraph = lgraph.disconnectLayers(featureExtractionLayer, outLayers{ii});
Error in vision.internal.cnn.RCNNLayers/fastRCNNForNonSequentialNetworks (line 646)
[lgraph, numFiltersLastConvLayer] = insertOrReplaceROIPooling(this, lgraph, featureExtractionLayer);
Error in vision.internal.cnn.RCNNLayers.createFasterRCNN (line 165)
lgraph = fastRCNNForNonSequentialNetworks(obj, numClasses, lgraph, 'faster-rcnn', anchorBoxes, featureExtractionLayer);
Error in fasterRCNNLayers (line 174)
lgraph = vision.internal.cnn.RCNNLayers.createFasterRCNN(...
Error in cup_faster_rcnn_c (line 12)
lgraph = fasterRCNNLayers(inputImageSize,numClasses,anchorBoxes, ...
The featureLayer argument should be 'conv_2', not 'relu_2'.
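For reference, a minimal sketch of the corrected call, assuming the trained network kept its auto-assigned layer names ('conv_1', 'relu_1', ..., 'conv_2', ...) so that 'conv_2' is the second convolution layer:
% Sketch of the corrected call, following the suggestion above.
load("cup_single_mark_net.mat");                 % trained network saved by cup_use_net_h.m
inputImageSize = [450,400,1];
anchorBoxes    = [180,60; 150,50];
numClasses     = 2;
featureLayer   = 'conv_2';                       % feature extraction layer, per the answer above
lgraph = fasterRCNNLayers(inputImageSize,numClasses,anchorBoxes, ...
    net,featureLayer);
analyzeNetwork(lgraph);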
Related
I want to train an artificial neural network in MATLAB R2020a for an automatic voltage regulator. I already have a Simulink file that produces my output data, but whenever I run this code:
I = out.input';
T = out.output';
net=newff(minmax(I),[3,5,1],{'logsig','tansig','purelin'},'trainlm');
net = init(net); % Used to initialize the network (weight and biases)
net.trainParam.show =1; % The result of error (mse) is shown at each iteration (epoch)
net.trainParam.epochs = 10000; % Maximum limit of the network training iteration process (epoch)
net.trainParam.goal =1e-12; % Stopping criterion based on error (mse) goal
net=train(net,I,T); % Start training the network
I keep facing this error:
Unrecognized function or variable 'procInfo'.
Error in nnMex.netHints (line 143)
processingInfoArray = [processingInfoArray procInfo];
Error in nncalc.setup1>setupImpl (line 201)
calcHints = calcMode.netHints(net,calcHints);
Error in nncalc.setup1 (line 16)
[calcMode,calcNet,calcData,calcHints,net,resourceText] = setupImpl(calcMode,net,data);
Error in nncalc.setup (line 7)
[calcMode,calcNet,calcData,calcHints,net,resourceText] = nncalc.setup1(calcMode,net,data);
Error in network/train (line 361)
[calcLib,calcNet,net,resourceText] = nncalc.setup(calcMode,net,data);
Error in ann (line 8)
net=train(net,I,T); % Start training the network
I'm not sure where to start searching for this problem, but I think it might be a compiler issue.
I tried searching online, but all I got were suggestions to download compilers and toolboxes.
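Not a confirmed diagnosis, but newff has been obsolete for many releases, and errors deep inside nnMex sometimes disappear when the network is created with the current API instead. A minimal sketch of the same setup using feedforwardnet (layer sizes and training parameters copied from the code above; the transfer-function assignments are my reading of what the newff call intended):
% Same two-hidden-layer network built with the current API instead of newff.
I = out.input';                           % inputs from the Simulink run
T = out.output';                          % targets from the Simulink run
net = feedforwardnet([3 5], 'trainlm');   % hidden layers of 3 and 5 neurons, Levenberg-Marquardt
net.layers{1}.transferFcn = 'logsig';     % transfer functions as in the newff call
net.layers{2}.transferFcn = 'tansig';
net.layers{3}.transferFcn = 'purelin';
net.trainParam.show   = 1;                % report mse at every epoch
net.trainParam.epochs = 10000;            % maximum number of epochs
net.trainParam.goal   = 1e-12;            % mse goal
net = train(net, I, T);                   % train the network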
I have another question, about running NARX with big data. I tried increasing the hidden size to get a better model, but it seems that around 300 hidden units is the upper limit before memory errors start; with 1000 it clearly fails.
With a narxnet of:
net = narxnet(1:25,1:25,1000);
I get this error:
Error using zeros
Requested 1000x54373200 (405.1GB) array exceeds maximum array size preference
(377.4GB). This might cause MATLAB to become unresponsive.
Error in nnet.internal.configure.inputWeight (line 25)
net.IW{i,j} = zeros(newSize);
Error in nnet.internal.configure.input (line 42)
net = nnet.internal.configure.inputWeight(net,j,i,x);
Error in network/configure (line 244)
net = nnet.internal.configure.input(net,i,X{i});
Error in preparets (line 302)
net = configure(net,'input',xx(i,:),i);
With hidden size 600 I get an out-of-memory error instead. How can I fix this so that NARX works with big data? The call
net = narxnet(1:25,1:25,600);
produces the following:
Out of memory.
Error in normr (line 27)
xi(~isfinite(xi)) = 0;
Error in randnr>new_value_from_rows_cols (line 152)
x = normr(rands(rows,cols));
Error in randnr (line 98)
out1 = new_value_from_rows_cols(in1,in2);
Error in initnw>calcnw (line 287)
wDir = randnr(s,r);
Error in initnw>initialize_layer (line 212)
[w,b] = calcnw(range,net.layers{i}.size,active);
Error in initnw (line 101)
out1 = initialize_layer(in1,in2);
Error in initlay>initialize_network (line 155)
net = feval(initFcn,net,i);
Error in initlay (line 97)
out1 = initialize_network(in1);
Error in network/init (line 31)
net = feval(initFcn,net);
Error in network/configure (line 253)
net = init(net);
Error in preparets (line 302)
net = configure(net,'input',xx(i,:),i);
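For what it's worth, the 405.1 GB request in the first trace is the input-weight matrix IW{1,1} alone: narxnet allocates roughly hiddenSize x (numFeatures x numInputDelays) weights, so with 1000 hidden units, 25 input delays, and a very wide input series the weights by themselves cannot fit in memory. A small sketch to estimate this before calling narxnet (the variable name X for the external input series is my assumption):
% Rough estimate of the input-weight matrix narxnet will try to allocate.
% Assumes X is the external input series (features x timesteps) and the
% weights are stored as dense doubles (8 bytes each).
hiddenSize  = 1000;
inputDelays = 25;
numFeatures = size(X, 1);
bytesIW = hiddenSize * numFeatures * inputDelays * 8;
fprintf('IW{1,1} alone needs about %.1f GB\n', bytesIW / 2^30);
If that number lands anywhere near the 405.1 GB in the trace, reducing the number of input features, the delay-line length, or the hidden size is what actually shrinks the allocation.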
Also, the MATLAB code does not scale up with more CPUs: I get the same run time with 24 CPUs as with 120 CPUs.
I am trying to get familiar with image category classification using deep learning, so I am running the MATLAB example available at
http://uk.mathworks.com/help/vision/examples/image-category-classification-using-deep-learning.html
However, running this example gives me the following error:
Error using message/getString
Unable to load a message catalog 'nnet_cnn:layer:Layer'. Please check
the file location and format.
Error in nnet.cnn.layer.Layer/getVectorHeader (line 113)
header = getString( message( ...
Error in nnet.cnn.layer.Layer/displayNonScalarObject (line 86)
header = sprintf( ' %s\n', getVectorHeader( layers )
);
Error in cnn (line 150)
convnet.Layers
Moreover, if I skip past and ignore this error, I get the following error later for the line
trainingFeatures = activations(convnet, trainingSet, featureLayer, ...
'MiniBatchSize', 32, 'OutputAs', 'columns')
Undefined variable "nnet" or class "nnet.internal.cnngpu.convolveForward2D".
Error in nnet.internal.cnn.layer.Convolution2D/doForward (line 218)
Z = nnet.internal.cnngpu.convolveForward2D( ...
Error in nnet.internal.cnn.layer.Convolution2D/forwardNormal (line 195)
Z = this.doForward(X,this.Weights.Value,this.Bias.Value);
Error in nnet.internal.cnn.layer.Convolution2D/forward (line 98)
[Z, memory] = this.forwardNormal( X );
Error in nnet.internal.cnn.SeriesNetwork/activations (line 50)
output = this.Layers{currentLayer}.forward( output );
Error in SeriesNetwork/activations (line 269)
YChannelFormat = predictNetwork.activations(X, layerID);
Error in cnn (line 262)
trainingFeatures = activations(convnet, trainingSet, featureLayer, ...
Can someone please tell me the possible causes of this error and how to solve it?
Regards
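One thing worth checking (a guess rather than a confirmed diagnosis): the second failure is inside nnet.internal.cnngpu, the GPU code path, and early releases of this example ran the CNN only on a CUDA-enabled GPU via Parallel Computing Toolbox, while the missing message catalog in the first error often points at an incomplete or mismatched toolbox installation. A quick environment check:
% Quick environment check: installed toolboxes and visible CUDA GPUs.
ver                    % confirm Neural Network Toolbox and Parallel Computing Toolbox are listed
gpuDeviceCount         % number of CUDA-capable GPUs MATLAB can see
if gpuDeviceCount > 0
    gpuDevice          % details of the selected GPU (check ComputeCapability)
end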
Is it possible to make the MNIST example in MatConvNet work for two classes instead of 10? I changed the cnn_mnist_init.m file as follows to generate feature vectors for two classes:
net.layers{end+1} = struct('type', 'conv', ...
'weights', {{f*randn(1,1,500,2, 'single'), zeros(1,2,'single')}}, ...
'stride', 1, ...
'pad', 0) ;
But when I run cnn_train I get the following error:
Error in cnn_train>error_multiclass (line 222)
err(2,1) = sum(sum(sum(min(error(:,:,1:5,:),[],3)))) ;
Error in cnn_train>process_epoch (line 302)
error = sum([error, [...
Error in cnn_train (line 153)
[net, stats.train] = process_epoch(opts, getBatch, epoch, train, learningRate, imdb, net) ;
Error in original_image (line 40)
[net, info] = cnn_train(fold, net, imdb, @getBatch, ...
Error in main_original (line 13)
[imdb, net, info] = original_image(fold);
What did I do wrong?
The error in err(2,1) is probably caused by your error tensor having the wrong dimension. err(2,1) is the top-5 error, i.e. misclassification counted over the 5 best-scoring classes, but you only have two classes. Check the size of the tensor you feed into softmax; it should have dimensions [1, 1, 2 (number of classes), batch size].
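If the tensor shape is already correct and the failure is just the hard-coded 1:5 index quoted in the trace, one possible workaround (my assumption, not part of the answer above) is to clamp the top-k index in cnn_train>error_multiclass to the number of classes:
% In cnn_train>error_multiclass, replace the hard-coded top-5 line with a
% version clamped to the number of available classes:
topK = min(5, size(error,3));
err(2,1) = sum(sum(sum(min(error(:,:,1:topK,:),[],3)))) ;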
I am trying to run the following commands
x = simplecluster_dataset;
net = selforgmap([8 8]);
net = train(net,x);
view(net)
y = net(x);
classes = vec2ind(y);
from the SOM example in MATLAB R2015a, but it keeps showing me an error:
Error in selforgmap>create_network (line 110)
net = network(1,1,0,1,0,1);
Error in selforgmap (line 74)
net = create_network(param);
Error in nueralNet (line 2)
net = selforgmap([8 8]);
I am trying to implement self-organizing maps in MATLAB. Please help me out.
Thanks in advance!
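One common cause of selforgmap failing right at net = network(1,1,0,1,0,1) is that a local file shadows the toolbox's network class or selforgmap itself (for example a script or variable named network). This is an assumption, since the trace above does not include the underlying error message, but it is cheap to check:
% Check whether the built-in network class or selforgmap is shadowed by a
% local file earlier on the path.
which -all network
which -all selforgmap
If the first hit for either name is not inside the toolbox folders, rename or remove the shadowing file, run rehash, and try selforgmap([8 8]) again.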