requirement failed: A & B Dimension mismatch! : Multilayer Perceptron Pyspark

I built a pipeline with MultilayerPerceptronClassifier, but when I try to evaluate the results I get an error. Can anyone help me fix the problem?
I don't think there is a problem with the pipeline before the classifier, since I have used it with several other classifiers and it works. I have 3 labels to predict.
Error: An error occurred while calling o554.evaluate.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 127.0 failed 1 times, most recent failure: Lost task 0.0 in stage 127.0 (TID 123) (70c695f6a9e1 executor driver): org.apache.spark.SparkException: Failed to execute user defined function (ProbabilisticClassificationModel$$Lambda$4201/0x00000008417dd840: (struct<type:tinyint,size:int,indices:array<int>,values:array<double>>) => struct<type:tinyint,size:int,indices:array<int>,values:array<double>>)
at org.apache.spark.sql.errors.QueryExecutionErrors$.failedExecuteUserDefinedFunctionError(QueryExecutionErrors.scala:177)
at org.apache.spark.sql.errors.QueryExecutionErrors.failedExecuteUserDefinedFunctionError(QueryExecutionErrors.scala)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: requirement failed: A & B Dimension mismatch!
at scala.Predef$.require(Predef.scala:281)
at org.apache.spark.ml.ann.BreezeUtil$.dgemm(BreezeUtil.scala:42)
at org.apache.spark.ml.ann.AffineLayerModel.eval(Layer.scala:164)
at org.apache.spark.ml.ann.FeedForwardModel.forward(Layer.scala:508)
at org.apache.spark.ml.ann.FeedForwardModel.predictRaw(Layer.scala:561)
at org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.predictRaw(MultilayerPerceptronClassifier.scala:332)
at org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.predictRaw(MultilayerPerceptronClassifier.scala:274)
at org.apache.spark.ml.classification.ProbabilisticClassificationModel.$anonfun$transform$2(ProbabilisticClassifier.scala:121)
... 19 more
train, test, validation = df.randomSplit([0.7, 0.2, 0.1], 1234)
mlp = MultilayerPerceptronClassifier(labelCol='label',
                                     featuresCol='features',
                                     maxIter=100,
                                     layers=[11, 4, 5, 3],
                                     seed=1234)
stages.append(mlp)
pipeline = Pipeline(stages=stages)
model = pipeline.fit(train)
pred = model.transform(test)
accuracy = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy").evaluate(pred)
precision = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="weightedPrecision").evaluate(pred)
recall = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="weightedRecall").evaluate(pred)
f1 = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="f1").evaluate(pred)
print("Test Error = %g" % (1.0 - accuracy))
print("Accuracy = %g" % (accuracy))
print("Precision = %g" % (precision))
print("Recall = %g" % (recall))
print("F1 = %g" % (f1))

The problem most likely lies in your pipeline. It is hard to help without seeing the various algorithms you used before your MLP, but here is a general solution.
For example, if you have 11 feature columns and perform the following algorithms:
algorithm1 --> algorithm2 --> VectorAssembler --> MultilayerPerceptronClassifier
then after the VectorAssembler you could have thousands of features, e.g. (20000,[155,268,27...]), in which case your input layer needs 20000 nodes, not just the 11 of your initial feature columns.
So
layers = [11, 4, 5, 3] will throw the error:
Caused by: java.lang.IllegalArgumentException: requirement failed: A & B Dimension mismatch!
whereas
layers = [20000, 4, 5, 3] will be correct.
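If you are unsure of the assembled size, you can read it off the data before building the classifier. A minimal sketch, assuming the stages list and train split from the question, run before stages.append(mlp); n_features is a name introduced here for illustration:
from pyspark.ml import Pipeline
from pyspark.ml.classification import MultilayerPerceptronClassifier

# Fit only the preprocessing stages and inspect one assembled vector.
features_only = Pipeline(stages=stages).fit(train).transform(train)
n_features = len(features_only.first()["features"])  # the true input dimension

mlp = MultilayerPerceptronClassifier(labelCol='label',
                                     featuresCol='features',
                                     maxIter=100,
                                     layers=[n_features, 4, 5, 3],  # last entry = 3 classes
                                     seed=1234)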

Related

How to make a QoS prediction with an LSTM network in MATLAB

I am trying to make a QoS prediction on the QWS dataset, but I get the following error:
Error using trainNetwork (line 170)
Too many input arguments.
Error in lstm (line 63)
net = trainNetwork(x_train,y_train,layers,options);
Caused by:
Error using trainNetwork>iParseInputArguments (line 326)
Too many input arguments.
data = readtable('C:\Users\Etudiant FST\Documents\études\mini_pjt\d\qws1\qws1.txt');
%test_data = readtable('C:\Users\Etudiant FST\Documents\études\mini_pjt\d\qws2\qws2.txt');
data = data(:,1:10);
x = [];
y = [];
delta_x = 1;
delta_y = 1;
pas = 1;
while (height(data) >= delta_x + delta_y)
    x = [x; data(1:delta_x,:)];
    y = [y; data(delta_x + 1:delta_x + delta_y,:)];
    data(1:pas,:) = [];
end
%numObservations = height(data);
%idxTrain = 1:floor(0.8*numObservations);
%idxTest = floor(0.8*numObservations)+1:numObservations;
%dataTrain = data(idxTrain,:);
%dataTest = data(idxTest,:);
%%for n = 1:numel(dataTrain)
%X = dataTrain{n};
% xt{n} = X(:,1:end-1);
% tt{n} = X(:,2:end);
%%end
height_x = height(x);
split = fix(height_x*0.8);
x_train = x(1:split,:);
x_test = x(split:height_x,:);
y_train = y(1:split,:);
y_test = y(split:height_x,:);
layers = [
    sequenceInputLayer(10)
    lstmLayer(128,'OutputMode','sequence')
    fullyConnectedLayer(10)
    regressionLayer];
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'InitialLearnRate',0.01, ...
    'GradientThreshold',1, ...
    'Shuffle','never', ...
    'Plots','training-progress', ...
    'Verbose',0);
net = trainNetwork(x_train,y_train,layers,options);
I would like it to give me a prediction of the new QoS values from the old ones. Thank you.
As the error message suggests, MATLAB isn't able to select the correct trainNetwork overload to call (the function is overloaded, and the right overload is chosen based on the number and types of the inputs passed to it).
If you look at the LSTM example in the documentation for trainNetwork, you will see that XTrain is a 270-by-1 cell array, with every cell containing an N-by-M array, while YTrain is a 270-by-1 categorical array.
Shaping your x_train and y_train to these data shapes and types should solve the problem. Everything else in the code looks okay to me.

Unknown error when optimizing a CNN in MATLAB

I want to optimize my CNN as in this example, using Bayesian optimization and MATLAB 2019a. Instead of showing all my code in a single block, I split it into three parts to show which lines cause the error. The first part loads the data, divides it into training and validation sets, and defines the optimization variables. The second part is where the error occurs. The third part is the remainder of the code. Note that buildLayers is also a function I wrote, which builds the network layers.
My questions are: how can I fix the aforementioned error, and why did it occur?
The first part (loading the data):
folder = 'C:\Users\X';
imds = imageDatastore(folder, ...
    "IncludeSubfolders",true, ...
    "LabelSource","foldernames");
num = 20;
per = randperm(1000, num);
figure;
for i = 1:num
    subplot(4, 5, i);
    imshow(imds.Files{per(i)});
end
img = readimage(imds,1);
[trainData, validData] = splitEachLabel(imds,0.8);
% defining optimizable variables
optVars = [
    optimizableVariable('depth', [1 5], "Type","integer")
    optimizableVariable('initLearningRate', [0.001, 1], "Transform","log")
    optimizableVariable('initFilterNum', [8 32], "Type","integer")
    optimizableVariable('momentum', [0.7 0.98])
    optimizableVariable('l2Reg', [1e-10 1e-2],'Transform','log')]
objFcn = makeObjFcn(trainData,validData);
I get this error:
Array formation and parentheses-style indexing with objects of class
'matlab.io.datastore.ImageDatastore' is not allowed. Use objects of
class 'matlab.io.datastore.ImageDatastore' only as scalars or use a
cell array.
Error in unique>uniqueR2012a (line 191)
a = a(:);
Error in unique (line 103)
[varargout{1:nlhs}] = uniqueR2012a(varargin{:});
Error in main>makeObjFcn/valErrorFun (line 47)
numClasses = numel(unique(vaData));
Error in BayesianOptimization/callObjNormally (line 2560)
[Objective, ConstraintViolations, UserData] = this.ObjectiveFcn(conditionalizeX(this, X));
Error in BayesianOptimization/callObjFcn (line 467)
= callObjNormally(this, X);
Error in BayesianOptimization/runSerial (line 1989)
ObjectiveFcnObjectiveEvaluationTime, ObjectiveNargout] = callObjFcn(this, this.XNext);
Error in BayesianOptimization/run (line 1941)
this = runSerial(this);
Error in BayesianOptimization (line 457)
this = run(this);
Error in bayesopt (line 323) Results = BayesianOptimization(Options);
The second part (the lines that cause the error):
bayesObj = bayesopt(...
    objFcn, optVars, ...
    "MaxTime", 1*3600, ...
    "IsObjectiveDeterministic", false, ...
    "UseParallel", false);
And the third part (the rest of the code):
bestIdx = bayesObj.IndexOfMinimumTrace(end);
fileName = bayesObj.UserDataTrace{bestIdx};
savedStruct = load(fileName);
valError = savedStruct.valError;
function ObjFcn = makeObjFcn(trData,vaData)
    ObjFcn = @valErrorFun;
    function [valError,cons,fileName] = valErrorFun(optVars)
        % making layers
        imageSize = [32 32 3];
        numClasses = numel(unique(vaData));
        layers = buildLayers(...
            imageSize,...
            optVars.initFilterNum,...
            optVars.depth,...
            numClasses);
        miniBatchSize = 256;
        validationFrequency = floor(numel(vaData)/miniBatchSize);
        options = trainingOptions('sgdm', ...
            'InitialLearnRate',optVars.initLearningRate, ...
            'Momentum',optVars.momentum, ...
            'MaxEpochs',60, ...
            'LearnRateSchedule','piecewise', ...
            'LearnRateDropPeriod',40, ...
            'LearnRateDropFactor',0.1, ...
            'MiniBatchSize',miniBatchSize, ...
            'L2Regularization',optVars.l2Reg, ...
            'Shuffle','every-epoch', ...
            'Verbose',false, ...
            'Plots','training-progress', ...
            'ValidationData',vaData, ...
            'ValidationFrequency',validationFrequency);
        pixelRange = [-4 4];
        imageAugmenter = imageDataAugmenter( ...
            'RandXReflection',true, ...
            'RandXTranslation',pixelRange, ...
            'RandYTranslation',pixelRange);
        datasource = augmentedImageDatastore(...
            imageSize,...
            trData,...
            'DataAugmentation',imageAugmenter);
        trainedNet = trainNetwork(datasource,layers,options);
        %close(findall(groot,'Tag','NNET_CNN_TRAININGPLOT_FIGURE'))
        YPredicted = classify(trainedNet,XValidation);
        valError = 1 - mean(YPredicted == YValidation);
        fileName = num2str(valError) + ".mat";
        save(fileName,'trainedNet','valError','options')
        cons = [];
    end
end

CUDA_ERROR_ILLEGAL_ADDRESS when running Faster R-CNN in MATLAB

I'm running Faster R-CNN in MATLAB 2018b on Windows 10. I get a CUDA_ERROR_ILLEGAL_ADDRESS exception when I increase the number of training items or when I increase MaxEpochs.
Below is the information from my gpuDevice:
CUDADevice with properties:
Name: 'GeForce GTX 1050'
Index: 1
ComputeCapability: '6.1'
SupportsDouble: 1
DriverVersion: 9.2000
ToolkitVersion: 9.1000
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 4.2950e+09
AvailableMemory: 3.4635e+09
MultiprocessorCount: 5
ClockRateKHz: 1493000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
And this is my code
latest_index = 0;
for i = 1:6
    load(strcat('newDataset', int2str(i), '.mat'));
    len = length(vehicleDataset.imageFilename);
    for j = 1:len
        filename = vehicleDataset.imageFilename{j};
        latest_index = latest_index + 1;
        fulldata.imageFilename{latest_index} = filename;
        fulldata.vehicle{latest_index} = vehicleDataset.vehicle{j};
    end
end
trainingDataTable = table(fulldata.imageFilename', fulldata.vehicle');
trainingDataTable.Properties.VariableNames = {'imageFilename','vehicle'};
data.trainingDataTable = trainingDataTable;
trainingDataTable(1:4,:)
% Split data into a training and test set.
idx = floor(0.6 * height(trainingDataTable));
trainingData = trainingDataTable(1:idx,:);
testData = trainingDataTable(idx:end,:);
% Create image input layer.
inputLayer = imageInputLayer([32 32 3]);
% Define the convolutional layer parameters.
filterSize = [3 3];
numFilters = 64;
% Create the middle layers.
middleLayers = [
    convolution2dLayer(filterSize, numFilters, 'Padding', 1)
    reluLayer()
    convolution2dLayer(filterSize, numFilters, 'Padding', 1)
    reluLayer()
    maxPooling2dLayer(3, 'Stride', 2)
    ];
finalLayers = [
    fullyConnectedLayer(128)
    % Add a ReLU non-linearity.
    reluLayer()
    fullyConnectedLayer(width(trainingDataTable))
    % Add the softmax loss layer and classification layer.
    softmaxLayer()
    classificationLayer()
    ];
layers = [
    inputLayer
    middleLayers
    finalLayers
    ];
% Options for step 1.
optionsStage1 = trainingOptions('sgdm', ...
    'MaxEpochs', 2, ...
    'MiniBatchSize', 1, ...
    'InitialLearnRate', 1e-3, ...
    'CheckpointPath', tempdir);
% Options for step 2.
optionsStage2 = trainingOptions('sgdm', ...
    'MaxEpochs', 2, ...
    'MiniBatchSize', 1, ...
    'InitialLearnRate', 1e-3, ...
    'CheckpointPath', tempdir);
% Options for step 3.
optionsStage3 = trainingOptions('sgdm', ...
    'MaxEpochs', 2, ...
    'MiniBatchSize', 1, ...
    'InitialLearnRate', 1e-3, ...
    'CheckpointPath', tempdir);
% Options for step 4.
optionsStage4 = trainingOptions('sgdm', ...
    'MaxEpochs', 2, ...
    'MiniBatchSize', 1, ...
    'InitialLearnRate', 1e-3, ...
    'CheckpointPath', tempdir);
options = [
    optionsStage1
    optionsStage2
    optionsStage3
    optionsStage4
    ];
doTrainingAndEval = true;
if doTrainingAndEval
    % Set random seed to ensure example training reproducibility.
    rng(0);
    % Train Faster R-CNN detector. Select a BoxPyramidScale of 1.2 to allow
    % for finer resolution for multiscale object detection.
    detector = trainFasterRCNNObjectDetector(trainingData, layers, options, ...
        'NegativeOverlapRange', [0 0.3], ...
        'PositiveOverlapRange', [0.6 1], ...
        'BoxPyramidScale', 1.2);
    data.detector = detector;
else
    % Load pretrained detector for the example.
    detector = data.detector;
end
save mix_data data
if doTrainingAndEval
    % Run detector on each image in the test set and collect results.
    resultsStruct = struct([]);
    for i = 1:height(testData)
        % Read the image.
        I = imread(testData.imageFilename{i});
        % Run the detector.
        [bboxes, scores, labels] = detect(detector, I);
        % Collect the results.
        resultsStruct(i).Boxes = bboxes;
        resultsStruct(i).Scores = scores;
        resultsStruct(i).Labels = labels;
    end
    % Convert the results into a table.
    results = struct2table(resultsStruct);
    data.results = results;
    save mix_data data
else
    % Load results from disk.
    results = data.results;
end
% Extract expected bounding box locations from test data.
expectedResults = testData(:, 2:end);
% Evaluate the object detector using Average Precision metric.
[ap, recall, precision] = evaluateDetectionPrecision(results, expectedResults);
% Plot precision/recall curve
figure
plot(recall,precision)
xlabel('Recall')
ylabel('Precision')
grid on
title(sprintf('Average Precision = %.2f', ap))
First it prints the warning multiple times, then throws the exception below:
Warning: An unexpected error occurred during CUDA execution. The CUDA error was:
CUDA_ERROR_ILLEGAL_ADDRESS
In trainFasterRCNNObjectDetector (line 320)
In rcnn_trail (line 184)
Error using -
An unexpected error occurred during CUDA execution. The CUDA error was:
CUDA_ERROR_ILLEGAL_ADDRESS
Error in vision.internal.cnn.layer.SmoothL1Loss/backwardLoss (line 156)
idx = (X > -one) & (X < one);
Error in nnet.internal.cnn.DAGNetwork/computeGradientsForTraining/efficientBackProp (line 585)
dLossdX = thisLayer.backwardLoss( ...
Error in nnet.internal.cnn.DAGNetwork>@()efficientBackProp(i) (line 661)
@() efficientBackProp(i), ...
Error in nnet.internal.cnn.util.executeWithStagedGPUOOMRecovery (line 11)
[ varargout{1:nOutputs} ] = computeFun();
Error in nnet.internal.cnn.DAGNetwork>iExecuteWithStagedGPUOOMRecovery (line 1195)
[varargout{1:nargout}] = nnet.internal.cnn.util.executeWithStagedGPUOOMRecovery(varargin{:});
Error in nnet.internal.cnn.DAGNetwork/computeGradientsForTraining (line 660)
theseGradients = iExecuteWithStagedGPUOOMRecovery( ...
Error in nnet.internal.cnn.Trainer/computeGradients (line 184)
[gradients, predictions, states] = net.computeGradientsForTraining(X, Y,
needsStatefulTraining, propagateState);
Error in nnet.internal.cnn.Trainer/train (line 85)
[gradients, predictions, states] = this.computeGradients(net, X, response,
needsStatefulTraining, propagateState);
Error in vision.internal.cnn.trainNetwork (line 47)
trainedNet = trainer.train(trainedNet, trainingDispatcher);
Error in fastRCNNObjectDetector.train (line 190)
[network, info] = vision.internal.cnn.trainNetwork(ds, lgraph, opts, mapping,
checkpointSaver);
Error in trainFasterRCNNObjectDetector (line 410)
[stage2Detector, fastRCNN, ~, info(2)] = fastRCNNObjectDetector.train(trainingData, fastRCNN,
options(2), iStageTwoParams(params), checkpointSaver);
Error in rcnn_trail (line 184)
detector = trainFasterRCNNObjectDetector(trainingData, layers, options, ...
After talking to MATLAB support, apparently my GPU is not the "right" GPU for deep learning and neural networks.
However, I found that the actual issue was that Windows switched the GPU during the run. To fix this, I went to NVIDIA Control Panel > Program Settings and:
1. Selected MathWorks MATLAB
2. Under "Preferred graphics processor", chose my GPU card

GridSearchCV tuning KerasClassifier with callbacks error: ValueError: Found input variables with inconsistent numbers of samples

I am using sklearn's GridSearchCV to fine-tune the hyperparameters of a model in Keras, and I am adding callbacks to it.
Input Format: (1500, 3, 10, 10)
Output Format: (1500,)
Grid search code:
def Grid_Search_Training(model):
    # parameters grid
    epochs = [300]
    activations = ['relu', 'tanh']
    L2_lambda = [0.01, 0.001, 0.0001]
    batches = [16, 32, 64, 128]
    param_grid = dict(activation=activations, epochs=epochs, batch_size=batches, L2_lambda=L2_lambda)
    grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring='accuracy', cv=5)
    return grid
def run(grid_search=True):
    model = Model()
    plot_model(model, to_file='Model_plot.png', show_shapes=True, show_layer_names=True)
    # save layer names into a set, to visualize all layers' output in tensorboard
    embeddings_all_layer_names = set(layer.name for layer in model.layers if layer.name.startswith('tower_'))
    # train and save the model weights
    Model_weights_path = 'Model_weights.h5'
    checkpointer = ModelCheckpoint(Model_weights_path, monitor='val_loss', verbose=1, save_best_only=True)
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.0000001)
    tensorboard_log_dir = 'ModelLogs/{}'.format(time.time())
    tensorboard = TensorBoard(log_dir=tensorboard_log_dir, histogram_freq=1,
                              write_graph=True, write_images=True, embeddings_freq=1,
                              embeddings_layer_names=embeddings_all_layer_names, embeddings_metadata=None)
    callbacks_list = [checkpointer, reduce_lr, tensorboard]
    fit_params = dict(callbacks=callbacks_list)
    if grid_search:
        t0 = time.time()
        print incepModel().summary()
        model = KerasClassifier(build_fn=model, verbose=1)
        grid = Grid_Search_Training(model)
        print 'Start Training the model......'
        grid_result = grid.fit(X_train, y_train, **fit_params)
        print("Best acc Score: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
        t1 = time.time()
        t = t1 - t0
        print 'The GridSearch on CNN took %.2f mins.' % (round(t/60., 2))
        means = grid_result.cv_results_['mean_test_score']
        stds = grid_result.cv_results_['std_test_score']
        params = grid_result.cv_results_['params']
        for mean, stdev, param in zip(means, stds, params):
            print("%f (%f) with: %r" % (mean, stdev, param))
    else:
        history = model.fit(X_train, to_categorical(y_train), epochs=100, batch_size=64, validation_split=0.2, callbacks=callbacks_list)

X_train, X_test, y_train, y_test = read_split(data)
run(grid_search=True)
The error is:
grid_result = grid.fit(X_train, y_train, fit_params)
File "/Users/jd/anaconda2/lib/python2.7/site-packages/sklearn/model_selection/_search.py", line 615, in fit
X, y, groups = indexable(X, y, groups)
File "/Users/jd/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 229, in indexable
check_consistent_length(*result)
File "/Users/jd/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 204, in check_consistent_length
" samples: %r" % [int(l) for l in lengths])
ValueError: Found input variables with inconsistent numbers of samples: [1500, 1500, 1]
The code works well without callbacks, i.e. with no fit_params in grid_result = grid.fit(X_train, y_train, fit_params); then there is no error.
What causes this kind of error?
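A note on the traceback above: GridSearchCV.fit has the signature fit(X, y=None, groups=None, **fit_params), so a dict passed as a third positional argument is bound to groups, and the consistency check then sees lengths [1500, 1500, 1] (the dict has a single key, callbacks). A minimal sketch of the keyword-unpacked form, using the grid and fit_params objects from the question:
# Positional: the dict becomes the `groups` argument -> [1500, 1500, 1]
# grid_result = grid.fit(X_train, y_train, fit_params)
# Keyword-unpacked: the callbacks are forwarded to each underlying fit call
grid_result = grid.fit(X_train, y_train, **fit_params)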

How to set the number of iterations in RCNN, Fast RCNN or Faster RCNN?

I have trained R-CNN models on a custom dataset and got the expected results in the end. But I couldn't find where to set the number of iterations before starting the training process, and the training continues without any sign of when it is going to stop. Is there a way to set the number of iterations beforehand, so that training stops after the specified number of steps?
This is the code for training the R-CNN:
%%%%%%%%%%%%%%%%%%%%%% Define Inputs
imagePath = 'D:\Thesis\Data\VEDAI\vedai\train_images\';
sampleImage = '00000000.png';
objectClasses = {'car','truck','tractor','campingcar','van','other', 'pickup', 'boat', 'plane'};
imageTable = vedaiTrain;
smallestObjectSize = [32, 32, 3];
%%%%%%%%%%%%%%%%%%%%%% Calculations
numClassesPlusBackground = numel(objectClasses) + 1;
t = num2cell(smallestObjectSize);
[height, width, numChannels] = deal(t{:});
imageSize = [height width numChannels];
%%%%%%%%%%%%%%%%%%%%%% Network Layers
%%%%% inputLayer
inputLayer = imageInputLayer(imageSize);
%%%%% middleLayer
filterSize = [5 5];
numFilters = 32;
middleLayers = [
    convolution2dLayer(filterSize, numFilters, 'Padding', 2)
    reluLayer()
    maxPooling2dLayer(3, 'Stride', 2)
    convolution2dLayer(filterSize, numFilters, 'Padding', 2)
    reluLayer()
    maxPooling2dLayer(3, 'Stride', 2)
    convolution2dLayer(filterSize, 2 * numFilters, 'Padding', 2)
    reluLayer()
    maxPooling2dLayer(3, 'Stride', 2)
    ]
%%%%% finalLayer
finalLayers = [
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numClassesPlusBackground)
    softmaxLayer
    classificationLayer
    ]
Layers = [
    inputLayer
    middleLayers
    finalLayers
    ]
Layers(2).Weights = 0.0001 * randn([filterSize numChannels numFilters]);
%%%%%%%%%%%%%%%%%%%%%% training options
options = trainingOptions('sgdm', ...
    'Momentum', 0.9, ...
    'InitialLearnRate', 0.001, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', 0.1, ...
    'LearnRateDropPeriod', 8, ...
    'L2Regularization', 0.004, ...
    'MaxEpochs', 40, ...
    'MiniBatchSize', 128, ...
    'Verbose', true);
%%%%%%%%%%%%%%%%%%%%%% Train an R-CNN object detector
rcnn = trainRCNNObjectDetector(imageTable, Layers, options, ...
    'NegativeOverlapRange', [0 0.3], 'PositiveOverlapRange', [0.5 1]);
It keeps training until some stopping point that I don't know how it decides.
In the train_faster_rcnn_alt_opt.py file, set the max_iters = [80000, 40000, 80000, 40000] parameter to the number of iterations you want at each stage.
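For instance, a minimal sketch of that line in py-faster-rcnn's tools/train_faster_rcnn_alt_opt.py (the surrounding code may differ between versions, and the reduced values below are only an illustration):
# One entry per alternating-optimization stage:
# RPN stage 1, Fast R-CNN stage 1, RPN stage 2, Fast R-CNN stage 2
max_iters = [80000, 40000, 80000, 40000]
# e.g. halve each stage to stop training sooner:
max_iters = [40000, 20000, 40000, 20000]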