Prediction array is empty in Mask R-CNN

I am working on Mask R-CNN and my prediction array is empty. I think it has to do with memory. This is the code where I call for the predictions:
model.eval()
with torch.no_grad():
    prediction = model([img.to(device)], [target])
When I am in debug mode and evaluate the expression model([img.to(device)], [target]), I get the following notification:
{RuntimeError}[enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 172738321924 bytes. Error code 12 (Cannot allocate memory)

But when I run the code as a whole I do not get any errors; the predictions are just empty.

Related

"MATLAB: corrupted double-linked list"

I recently started getting the error
MATLAB: corrupted double-linked list
about 90% of the time when running a moderately complex MATLAB model on a supercomputing cluster.
The model runs fine on my laptop [about 15 hours for a run; the cluster is used for parameter sweeps], and has done so for nearly 2 years.
The only difference in recent runs is that the output is more verbose and creates a large array that wasn't there previously (1.5 GB).
The general pattern for this array is that it is a 3D array, built by saving 2D slices of the model each timestep. The array is initialised outside of the timestepping loop, and slices are overwritten as the model progresses:
%** init
big_array = zeros(a, b, c);

%** loop
for i = 1:c
    %%%% DO MODEL %%%%

    %** save to array
    big_array(:,:,i) = modelSnapshot';
end
I have checked that the indexing of this array is correct (i.e. big_array(:,:,i) = modelSnapshot' has the correct dimensions/size).
Does anyone have any experience with this error and can point to solutions?
The only relevant results I can see on Google are for MATLAB's MEX-file functionality, which is not active in my model
(crashes are on MATLAB 2016a; the laptop runs 2014a).

Out of memory - default MATLAB database and code

I'm learning the NN toolbox with the MATLAB examples and I keep getting the error
Out of memory. Type HELP MEMORY for your options.

Error in test2 (line 10)
xTest = zeros(inputSize,numel(xTestImages));
Here is my simple code:
% Get the number of pixels in each image
imageWidth = 28;
imageHeight = 28;
inputSize = imageWidth * imageHeight;

% Load the test images
[xTestImages, outputs] = digittest_dataset;

% Turn the test images into vectors and put them in a matrix
xTest = zeros(inputSize, numel(xTestImages));
for i = 1:numel(xTestImages)
    xTest(:,i) = xTestImages{i}(:);
end
The code is written according to the
MathWorks example (but I'm trying to build my own custom network). I reinstalled MATLAB, set the Java heap to its maximum, cleaned up some disk space, and deleted the rest of the neural network. It is still not working. Any ideas how to fix this problem?
As written above, the line:
xTest = zeros(inputSize,numel(xTestImages)); % xTestImages is 1x5000
would yield a matrix with 28^2 * 5000 = 3.92e6 elements. Every element is double precision (8 bytes), hence the matrix would consume around 30 MB...
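As a quick sanity check, you can compute that footprint directly:

elements = 28^2 * 5000;             % 3.92e6 elements
megabytes = elements * 8 / 2^20     % 8 bytes per double -> about 29.9 MB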
You stated that the command memory shows the following:
Maximum possible array:               29 MB (3.054e+07 bytes) *
Memory available for all arrays:     467 MB (4.893e+08 bytes) **
Memory used by MATLAB:               624 MB (6.547e+08 bytes)
Physical Memory (RAM):              3067 MB (3.216e+09 bytes)
So the first line shows the limitation for ONE single array.
So a few things to consider:
I guess clear all or quitting some other running applications does not improve the situation!?
Do you use a 64- or 32-bit OS? And/or a 32- or 64-bit MATLAB?
Did you try to change the Java Heap Settings? https://de.mathworks.com/help/matlab/matlab_external/java-heap-memory-preferences.html
I know this won't fix the problem, but maybe it will help you to keep working in the meanwhile: you could create the matrix in single precision, which should work for your test case. Simply pass 'single' as the class argument when creating the matrix.
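A minimal sketch of that suggestion:

% Allocate in single precision: 4 bytes per element, half the memory of double
xTest = zeros(inputSize, numel(xTestImages), 'single');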
The out-of-memory error was caused by the Levenberg–Marquardt algorithm: it creates a huge Jacobian matrix for its calculations when the data set is big.
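If trainlm's Jacobian is indeed the culprit, one possible workaround (my assumption, not something confirmed in the post) is a training function that never forms the full Jacobian, such as scaled conjugate gradient:

% Hypothetical: assumes 'net' is a network object from the NN toolbox
net.trainFcn = 'trainscg';  % scaled conjugate gradient: gradient-only, no Jacobian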

ERROR SVMCLASSIFY MATLAB out of memory

I have this script in MATLAB:
struct = svmTraining(feature_train,class_final_train);
svmclassify(struct,feature_test);
but after 5 seconds the following message appears:
??? Error using ==> svmclassify at 117
An error was encountered during classification.
Out of memory. Type HELP MEMORY for your options.
Help me, Thanks
I was able to solve this same problem for myself by calling the svmclassify() function on successive subsets of the test data. For some reason it needs an enormous amount of memory if you give it a large array of test data.
So here is something that worked for me
numExemplars = size(testData,1);
chunkSize = 1000;
j = 1:chunkSize:numExemplars;
classifications = zeros(numExemplars,1); % initialize

for i = 1:length(j)-1
    index1 = j(i);
    index2 = j(i+1) - 1;
    fprintf('classifying exemplars %d to %d\n', index1, index2);
    chunk = testData(index1:index2,:);
    classifications(index1:index2) = svmclassify(SVM_struct, chunk);
end

% last bit of data
chunk = testData(j(end):numExemplars,:);
classifications(j(end):numExemplars) = svmclassify(SVM_struct, chunk);
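A slightly more compact variant (a sketch using the same SVM_struct and testData as above) folds the final partial chunk into the loop with min, so the trailing special case disappears:

for index1 = 1:chunkSize:numExemplars
    index2 = min(index1 + chunkSize - 1, numExemplars);  % clamp the last chunk
    classifications(index1:index2) = svmclassify(SVM_struct, testData(index1:index2,:));
end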
The error means you do not have enough memory available on your machine to carry out the classification.
Firstly, try repeating the commands with a freshly started MATLAB, without creating any more variables than necessary, and with no other applications running.
If that doesn't work, then essentially you will need to either work with a smaller dataset, or get more memory for your machine.

Matlab - Assignment into array causes error: "Maximum variable size allowed by the program is exceeded"

I am running a script which creates a lot of big arrays. Everything runs fine until the following lines:
% dist is a sparse matrix
inds = dist ~= 0;
inserts = exp(-dist(inds).^2/2*sig_dist);
dist(inds) = inserts;
The last line causes the error: ??? Maximum variable size allowed by the program is exceeded.
I don't understand how could the last line increase the variable size - notice I am inserting into the matrix dist only in places which were non-zero to begin with. So what's happening here?
I'm not sure why you are seeing that error. However, I suggest you use the MATLAB function spfun to apply a function to the non-zero elements in a sparse matrix. For example:
>> dist = sprand(10000,20000,0.001);
>> f = @(x) exp(-x.^2/2*sig_dist);
>> dist = spfun(f,dist)
MATLAB implements a "lazy copy-on-write" model. Let me explain with an example.
First, create a really large vector
x = ones(5*1e7,1);
Now, say we wanted to create another vector of the same size:
y = ones(5*1e7,1);
On my machine, this will fail with the following error
??? Out of memory. Type HELP MEMORY for your options.
We know that y will require 5*1e7*8 = 400000000 bytes ~ 381.47 MB (which is also confirmed by whos x), but if we check the amount of free contiguous memory left:
>> memory
Maximum possible array: 242 MB (2.540e+008 bytes) *
Memory available for all arrays: 965 MB (1.012e+009 bytes) **
Memory used by MATLAB: 820 MB (8.596e+008 bytes)
Physical Memory (RAM): 3070 MB (3.219e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
we can see that it exceeds the 242 MB available.
On the other hand, if you assign:
y = x;
it will succeed almost instantly. This is because MATLAB is not actually allocating another memory chunk of the same size as x, instead it creates a variable y that shares the same underlying data as x (in fact, if you call memory again, you will see almost no difference).
MATLAB will only try to make another copy of the data once one of the variables changes, thus if you try this rather innocent assignment statement:
y(1) = 99;
it will throw an error, complaining that it ran out of memory, which I suspect is what is happening in your case...
EDIT:
I was able to reproduce the problem with the following example:
% a large enough sparse matrix (you may need to adjust the size)
dist = sparse(1:3000000,1:3000000,1);
First, let's check the memory status:
» whos
  Name         Size                 Bytes  Class     Attributes
  dist      3000000x3000000      48000004  double    sparse
» memory
Maximum possible array: 394 MB (4.132e+008 bytes) *
Memory available for all arrays: 1468 MB (1.539e+009 bytes) **
Memory used by MATLAB: 328 MB (3.440e+008 bytes)
Physical Memory (RAM): 3070 MB (3.219e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
Say we want to apply a function to all non-zero elements:
f = @(X) exp(-X.^2 ./ 2);
Strangely enough, if you try to slice/assign, it will fail:
» dist(dist~=0) = f( dist(dist~=0) );
??? Maximum variable size allowed by the program is exceeded.
However, the following assignment does not throw an error:
[r,c,val] = find(dist);
dist = sparse(r, c, f(val));
I still don't have an explanation why the error is thrown in the first case, but maybe using the FIND function this way will solve your problem...
In general, reassigning the non-zero elements of a sparse matrix does not change its memory footprint. Call whos before and after the assignment to check:
dist = sparse(10, 10);
dist(1,1) = 99;
dist(6,7) = exp(1);
inds = dist ~= 0;
whos

dist(inds) = 1;
whos
Without a reproducible example it is hard to determine the cause of the problem. It may be that some intermediate assignment is taking place that isn't sparse. Or you have something specific to your problem that we can't see here.

BMP2AVI program in MATLAB

Hi,
I wrote a program that used to work (swear to god) and has stopped working. This code takes a series of BMPs and converts them into an AVI file. This is the code:
path4avi = 'C:/FadeOutMask/';    % don't forget the '/' at the end of the path
pathOfFrames = 'C:/FadeOutMask/';
NumberOfFiles = 1;
NumberOfFrames = 10;

% 1:1:(NumberOfFiles)
for i = 0:1:(NumberOfFiles-1)
    FileName = strcat(path4avi,'FadeMaskAsael',int2str(i),'.avi')   % the generated file
    aviobj = avifile(FileName,'compression','None');
    aviobj.fps = 10;
    for j = 0:1:(NumberOfFrames-1)
        Frame = strcat(pathOfFrames,'MaskFade',int2str(i*10+j),'.bmp')   % not a good name for the directory
        [Fa,map] = imread(Frame);
        imshow(Fa,map);
        F = getframe();
        aviobj = addframe(aviobj,F)
    end
    aviobj = close(aviobj);
end
And this is the error I get:
??? Error using ==> checkDisplayRange at 22
HIGH must be greater than LOW.
Error in ==> imageDisplayValidateParams at 57
common_args.DisplayRange = checkDisplayRange(common_args.DisplayRange,mfilename);
Error in ==> imageDisplayParseInputs at 79
common_args = imageDisplayValidateParams(common_args);
Error in ==> imshow at 199
[common_args,specific_args] = ...
Error in ==> ConverterDosenWorkd at 19
imshow(Fa,map);
For some reason I can't format it as code segments, sorry.
Thank you,
Ariel
Are the BMP images indexed? I think the map parameter only applies to images with indexed color maps.
The only way I am able to reproduce the error that you get is when map is a two-element vector where the first element is greater than the second. Note first that the function IMSHOW can be called with the following syntax:
imshow(I,[low high]);
Here, I is a grayscale image and low and high specify the display range for the pixel intensities. The extra argument is ignored when I is an RGB image, but even then the value of high must be greater than the value of low, or an error is thrown (the one you see above).
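A minimal way to reproduce that error (a sketch with made-up data):

I = rand(10);         % arbitrary grayscale image
imshow(I, [0.9 0.1])  % low > high: throws "HIGH must be greater than LOW"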
What's confusing is why map would be a two-element vector. When loading an image with IMREAD, the map output will either be empty (if the image is not an indexed image) or it will be an N-by-3 color map. I can't think of a situation where the built-in IMREAD would return a map argument with just 2 elements.
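Given that, a defensive version of the display call in your loop (a sketch reusing the Frame variable from your code) would check what IMREAD actually returned:

[Fa, map] = imread(Frame);
size(map)             % an indexed image yields an N-by-3 colormap; otherwise map is empty
if isempty(map)
    imshow(Fa);       % not indexed: display without a colormap
else
    imshow(Fa, map);
end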
Based on the fact that you said it was working, and now suddenly isn't, I would suggest first checking to see if you have inadvertently created an m-file somewhere with the name imread. Doing so could cause that new imread function to be called instead of the built-in one, giving you different outputs than you expect.
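You can check for such shadowing with WHICH:

which -all imread   % lists every imread on the path; anything ahead of the toolbox version shadows it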