When I run go_calib_optim on the calibration data I get:
Error using -
Matrix dimensions must agree.
Error in compute_extrinsic_init (line 66)
Y = X_kk - (X_mean*ones(1,Np));
Error in comp_ext_calib (line 22)
[omckk,Tckk] = compute_extrinsic_init(x_kk,X_kk,fc,cc,kc,alpha_c);
Error in go_calib_optim_iter (line 293)
comp_ext_calib;
Error in go_calib_optim (line 56)
go_calib_optim_iter;
Error in calibration_script (line 176)
go_calib_optim;
What could be causing this? Alternatively, is this routine optional? It says it is the main optimization routine, but how essential is it? I can afford minor to medium errors in the calibration data.
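One thing worth checking (an assumption, not confirmed in the thread): the failing line centers the 3D grid points, and without implicit expansion (pre-R2016b MATLAB) it only works if X_kk is a 3 x Np matrix with the same number of points as the image points x_kk. A minimal sketch of the expected shapes:

```matlab
% Hypothetical shapes: the calibration toolbox expects world points as columns.
Np = 4;                          % number of grid points for this image
X_kk   = rand(3, Np);            % 3 x Np world coordinates (expected layout)
X_mean = mean(X_kk, 2);          % 3 x 1 centroid
Y = X_kk - X_mean*ones(1, Np);   % OK: both operands are 3 x Np
% An Np x 3 X_kk (points as rows), or an Np that disagrees with the number
% of image points in x_kk, reproduces "Matrix dimensions must agree".
```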
I'm getting this error and I have not found any relevant answer for EXECUTION_FAILED. Even though training starts, it is slow enough to suggest that the training process is not using the GPU. Details below, in case they help.
Specs I'm Using:
CPU = Core-i7 9th Gen Hexacore
RAM = 16GB
GPU = Nvidia GTX 1660Ti 6-GB
MATLAB = R2018b
Code:
options = trainingOptions('sgdm', ...
'MiniBatchSize',32, ...
'MaxEpochs',10, ...
'InitialLearnRate',1e-4, ...
'Shuffle','every-epoch', ...
'ValidationData',augimdsValidation, ...
'ValidationFrequency',3, ...
'Verbose',false, ...
'Plots','training-progress');
try
net.internal.cnngpu.reluForward(1);
catch ME
end
netTransfer = trainNetwork(augimdsTrain,layers,options);
Error Detail:
Warning: The CUDA driver must recompile the GPU libraries because your device is more recent than the
libraries. Recompiling can take several minutes. Learn more.
> In parallel.internal.gpu.selectDevice
In parallel.gpu.GPUDevice.current (line 44)
In gpuDevice (line 23)
In nnet.internal.cnn.util.isGPUCompatible (line 10)
In nnet.internal.cnn.util.GPUShouldBeUsed (line 17)
In nnet.internal.cnn.assembler.setupExecutionEnvironment (line 24)
In trainNetwork>doTrainNetwork (line 171)
In trainNetwork (line 148)
In viperMat (line 45)
Error using trainNetwork (line 150)
Unexpected error calling cuDNN: CUDNN_STATUS_EXECUTION_FAILED.
So, the slow performance was due to the larger batch size passed while training; reducing the batch size made it faster (but it is still no comparison to the Python libraries). As for the error, you can re-execute the code a few times until it goes away, or simply add the line below at the start to suppress the warning.
warning off parallel:gpu:device:DeviceLibsNeedsRecompiling
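Putting that together with an up-front device initialization (a sketch, assuming the recompilation warning is the only underlying issue):

```matlab
% Suppress the recompilation warning, then touch the GPU once before
% training so any slow first-time CUDA library recompilation happens here
% rather than in the middle of trainNetwork.
warning('off', 'parallel:gpu:device:DeviceLibsNeedsRecompiling');
gpuDevice(1);    % select GPU 1 and force initialization
```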
Hope this helps people having a similar issue.
MATLAB's unit test framework seems to print out a long call stack of internal methods of the framework itself. This results in an annoying flood of completely useless information if your test case raises lots of warnings. Is there a way to suppress the call stack but not the warning itself?
Example code:
classdef fooTest < matlab.unittest.TestCase
methods (Test)
function bar(testCase)
testCase.verifyEqual(0,0);
warning('!!!!!');
end
end
end
Running the test:
>> result = run(fooTest);
Running fooTest
Warning: !!!!!
> In fooTest/bar (line 7)
In matlab.unittest.TestRunner/evaluateMethodCore (line 790)
In matlab.unittest.TestRunner/evaluateMethodsOnTestContent (line 737)
In matlab.unittest.TestRunner/runTestMethod (line 1061)
In matlab.unittest.TestRunner/runTest (line 1015)
In matlab.unittest.TestRunner/repeatTest (line 441)
In matlab.unittest.TestRunner/runSharedTestCase (line 416)
In matlab.unittest.TestRunner/runTestClass (line 943)
In matlab.unittest.TestRunner/invokeTestContentOperatorMethod_ (line 838)
In matlab.unittest.plugins.TestRunnerPlugin/runTestClass (line 407)
In matlab.unittest.plugins.testrunprogress.ConciseProgressPlugin/runTestClass (line 61)
In matlab.unittest.plugins.TestRunnerPlugin/invokeTestContentOperatorMethod_ (line 696)
In matlab.unittest.TestRunner/evaluateMethodOnPlugins (line 696)
In matlab.unittest.TestRunner/runTestSuite (line 880)
In matlab.unittest.TestRunner/invokeTestContentOperatorMethod_ (line 838)
In matlab.unittest.plugins.TestRunnerPlugin/runTestSuite (line 250)
In matlab.unittest.plugins.FailureDiagnosticsPlugin/runTestSuite (line 106)
In matlab.unittest.plugins.TestRunnerPlugin/invokeTestContentOperatorMethod_ (line 696)
In matlab.unittest.plugins.TestRunnerPlugin/runTestSuite (line 250)
In matlab.unittest.plugins.DiagnosticsRecordingPlugin/runTestSuite (line 184)
In matlab.unittest.plugins.TestRunnerPlugin/invokeTestContentOperatorMethod_ (line 696)
In matlab.unittest.plugins.TestRunnerPlugin/runTestSuite (line 250)
In sltest.testmanager.plugins.TestManagerResultsPlugin/runTestSuite (line 60)
In matlab.unittest.plugins.TestRunnerPlugin/invokeTestContentOperatorMethod_ (line 696)
In matlab.unittest.TestRunner/evaluateMethodOnPlugins (line 696)
In matlab.unittest.TestRunner/run (line 288)
In matlab.unittest.TestSuite/run (line 543)
In matlab.unittest.internal.RunnableTestContent/run (line 48)
.
Done fooTest
__________
What version of MATLAB are you using? In more recent versions these stack frames are trimmed so that the framework's own frames are not included, but the relevant frames, from the test down into the code being tested, are still shown.
Your solution of turning off the stack frames entirely might be a good workaround for earlier versions, but it is a big hammer; more recent versions should give you less extraneous information while still providing the information that is more likely to be useful.
Also, I would certainly encourage you to aim to run your test code without warnings at all; they can be indicative of problems. In fact, you can configure your runner to be more strict and fail in the presence of these warnings to keep your testing clean. To do this, use the FailOnWarningsPlugin, or runtests(..., 'Strict', true). In the event that you do have a valid warning, you should be able to test against it using the verifyWarning method or the IssuesWarnings constraint, which works well with this workflow and does the right thing. Finally, if there is a case where you aren't testing for a warning but for some reason can't avoid it being issued, you can leverage the SuppressedWarningsFixture.
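The strict-runner workflow above can be sketched as follows (test and warning names are hypothetical):

```matlab
% Fail the suite on any unexpected warning, and assert on intentional ones.
import matlab.unittest.TestRunner
import matlab.unittest.plugins.FailOnWarningsPlugin

suite  = testsuite('fooTest');
runner = TestRunner.withTextOutput;
runner.addPlugin(FailOnWarningsPlugin);    % unexpected warning => failure
result = runner.run(suite);

% Inside a test, an expected warning is verified explicitly, e.g.:
%   testCase.verifyWarning(@() myCode(), 'MyPkg:myWarnId')
```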
Hope that helps,
Andy
I found it:
warning('off','backtrace')
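If you only want the backtrace off while the tests run, warning returns the previous state, so you can restore it afterwards (a small sketch):

```matlab
s = warning('off', 'backtrace');   % s holds the previous backtrace state
result = run(fooTest);
warning(s);                        % restore the original setting
```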
I'm working on a neural network project and my working medium is MATLAB. While running the following code:
net=train(net, feat_mat, gt_mat);
The neural network I've used is an FFNN with 3 hidden layers. The highest value in feat_mat is 255 and the lowest is 0; the highest value in gt_mat is 1 and the lowest is 0. feat_mat is a 5x423500 uint8 matrix and gt_mat is 1x423500 uint8.
I got the following error:
Error using bsxfun
Mixed integer class inputs are not supported.
Error in mapminmax.apply (line 6)
Error in nnet.mode.matlab.processInput (line 7)
Error in nnet.mode.matlab.processInputs (line 12)
Error in nncalc.preCalcData (line 16)
Error in nncalc.setup1>setupImpl (line 176)
Error in nncalc.setup1 (line 16)
Error in nncalc.setup (line 7)
Error in network/train (line 357)
I don't understand why this error is occurring. Any help would be appreciated. Thanks.
P.S.: I've searched Google and other questions on this site, but none of them are relevant to mine.
As detailed in the error, train relies on bsxfun, which doesn't support mixed integer classes.
Your inputs are uint8 arrays, i.e. mixed integers, so train falls over.
To get around this, simply convert the inputs to doubles:
net = train( net, double(feat_mat), double(gt_mat) );
I ran into a MATLAB problem; the error looks like this:
Error using pca
Too many input arguments.
Error in mdscale (line 413)
[~,score] = pca(Y,'Economy',false);
Error in demo_libsvm_test1 (line 140)
newCoor = mdscale(distanceMatrix,2);
When I debug my code step by step, the error comes from here:
distanceMatrix = pdist(heart_scale_inst,'euclidean'); (line 139)
newCoor = mdscale(distanceMatrix,2); (line 140)
Everything above these two lines runs fine. I do not know how to fix the problem. I use MATLAB R2014a. Can anyone help?
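No answer was recorded here, but one plausible cause (an assumption, not confirmed by the thread) is that a third-party pca.m somewhere on the path, for example one bundled with demo files, shadows the Statistics Toolbox pca that mdscale calls with the 'Economy' argument. You can check which function is actually being dispatched:

```matlab
% If the first hit is not inside toolbox/stats, a shadowing pca.m is the
% likely culprit; remove its folder from the path or rename the file.
which pca -all
```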
This is my code:
C=15*10^-6;
L=240*10^-3;
V=24;
R=10:0.01:40;
W=60:0.01:110;
I=V/(R.^2+((W*L)-(1/(W*C))).^2).^0.5;
axes(handles.axes3)
plot3(R,W,I)
And this is what the error message says:
Error using /
Matrix dimensions must agree.
Error in Homework9GUI>pushbutton3_Callback (line 110)
I=V/(R.^2+((W*L)-(1/G)).^2).^0.5;
You need to use ./ instead of /, that is, element-wise division instead of matrix division.
I=V./(R.^2+((W*L)-(1./(W*C))).^2).^0.5;
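Note that even with ./, R (3001 points) and W (5001 points) have different lengths, so R.^2 + ((W*L) - (1./(W*C))).^2 would still fail. One way around this (a sketch, using a coarser step than the original to keep the arrays small) is to evaluate I on a grid:

```matlab
C = 15e-6;  L = 240e-3;  V = 24;
R = 10:0.1:40;                        % 301 resistance values
W = 60:0.1:110;                       % 501 angular-frequency values
[Rg, Wg] = meshgrid(R, W);            % conformable 501 x 301 grids
I = V ./ sqrt(Rg.^2 + (Wg*L - 1./(Wg*C)).^2);   % element-wise everywhere
surf(Rg, Wg, I, 'EdgeColor', 'none')  % surface instead of a 3-D line
```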