I'm learning the Neural Network Toolbox with MATLAB examples, and I keep getting this error:
Out of memory. Type HELP MEMORY for your options.
Error in test2 (line 10)
xTest = zeros(inputSize,numel(xTestImages));
Here is my simple code:
% Get the number of pixels in each image
imageWidth = 28;
imageHeight = 28;
inputSize = imageWidth*imageHeight;
% Load the test images
[xTestImages, outputs] = digittest_dataset;
% Turn the test images into vectors and put them in a matrix
xTest = zeros(inputSize,numel(xTestImages));
for i = 1:numel(xTestImages)
    xTest(:,i) = xTestImages{i}(:);
end
The code is written according to the MathWorks example (but I'm trying to build my own custom network). I reinstalled MATLAB, set the Java heap to its maximum, freed up some disk space and deleted the rest of the neural network code. It still doesn't work. Any ideas how to fix this problem?
As written above, the line:
xTest = zeros(inputSize,numel(xTestImages)); % xTestImages is 1x5000
would yield a matrix of 28^2 * 5000 = 3.92e6 elements. Every element is double precision (8 bytes), hence the matrix would consume around 30 MB...
You stated that the command memory shows the following:
Maximum possible array:              29 MB (3.054e+07 bytes) *
Memory available for all arrays:    467 MB (4.893e+08 bytes) **
Memory used by MATLAB:              624 MB (6.547e+08 bytes)
Physical Memory (RAM):             3067 MB (3.216e+09 bytes)
So the first line shows the limitation for ONE single array.
So a few things to consider:
I guess clear all or quitting some other running applications does not improve the situation!?
Do you use a 64-bit or 32-bit OS? And is your MATLAB 32-bit or 64-bit?
Did you try to change the Java Heap Settings? https://de.mathworks.com/help/matlab/matlab_external/java-heap-memory-preferences.html
I know this won't fix the problem, but maybe it will help you to keep working in the meantime: you could create the matrix in single precision, which should work for your test case. Simply pass 'single' as an additional argument when creating the matrix.
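For example, sticking to the variables from the question:

% allocate xTest in single precision: 4 bytes per element instead of 8
xTest = zeros(inputSize, numel(xTestImages), 'single');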
The out-of-memory error was caused by the Levenberg–Marquardt algorithm: it creates a huge Jacobian matrix for its calculations when the data set is big.
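One common workaround (not mentioned in this thread) is to switch to a training function that does not build a Jacobian at all, for example trainscg. A minimal sketch, assuming a plain feedforward network rather than the exact setup above:

% use scaled conjugate gradient instead of the default Levenberg-Marquardt (trainlm)
net = feedforwardnet(10);          % the network size here is only illustrative
net.trainFcn = 'trainscg';         % no Jacobian is formed, so far less memory is needed
% net = train(net, xTrain, tTrain);
% Depending on the toolbox version, net.efficiency.memoryReduction can also
% make trainlm compute the Jacobian in chunks, trading speed for memory.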
Related
I recently asked how to extract VLAD from SIFT descriptors in VLFeat with Matlab here.
However, I am running up against memory limitations. I have 64GB RAM and 64GB Swap.
all_descr = single([sift_descr{:}]);
... produces a memory error:
Requested 128x258438583 (123.2GB) array exceeds maximum array size preference. Creation of arrays greater than this limit may take a
long time and cause MATLAB to become unresponsive. See array size limit or preference panel for more information.
What is the correct way to extract VLAD when we have a very large training dataset? For example, I could subset the SIFT descriptors before running vl_kmeans, like this:
all_descr = single([sift_descr{1:100000}]);
This works within memory, but how will it affect my VLAD features? Thank you.
The main issue here is not how to extract the VLAD for such a matrix: you calculate the VLAD vector of each image by looping over all images and computing the VLAD one by one, i.e. the memory problem doesn't appear there.
You run out of memory when you try to concatenate the SIFT descriptors of all images, in order to cluster them into visual words using a nearest neighbor search:
all_descr = single([sift_descr{:}]);
centroids = vl_kmeans(all_descr, 64);
The simplest way I can think of is to switch to MATLAB's own kmeans function, which is included in the Statistics and Machine Learning toolbox.
It comes with support for Tall Arrays, i.e. MATLAB's datatype for arrays which don't fit into the memory.
To do that, you can save the SIFT descriptors of each image into a CSV file using csvwrite:
for k = 1:size(filelist, 1)
    I = imread([repo filelist(k).name]);
    I = single(rgb2gray(I));
    [f,d] = vl_sift(I);
    csvwrite(sprintf('sift/im_%d.csv', k), single(d.'));
end
Then, you can load the descriptors as a Tall Array by using the datastore function and converting it to tall. As the result will be a table, you can use table2array to convert it to a tall array.
ds = datastore('sift/*.csv');
all_descriptors = table2array(tall(ds));
Finally, you can call the kmeans function on that and get your centroids:
[~, centroids] = kmeans(all_descriptors, 64);
Now you can proceed with calculating the VLAD vector as usual.
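If it helps, here is a rough sketch of that last step, assuming the VLFeat functions vl_kdtreebuild, vl_kdtreequery and vl_vlad together with the sift_descr cell array from the question; note that MATLAB's kmeans returns centroids as rows, so they are transposed into the column layout VLFeat expects:

% (if kmeans returned the centroids as a tall array, gather them first)
centers = single(centroids');                     % 128 x 64, one centroid per column
kdtree  = vl_kdtreebuild(centers);
numClusters = size(centers, 2);
vlad = cell(numel(sift_descr), 1);
for k = 1:numel(sift_descr)
    d  = single(sift_descr{k});                   % 128 x N descriptors of image k
    nn = vl_kdtreequery(kdtree, centers, d);      % nearest centroid per descriptor
    assignments = zeros(numClusters, size(d,2), 'single');
    assignments(sub2ind(size(assignments), double(nn), 1:size(d,2))) = 1;
    vlad{k} = vl_vlad(d, centers, assignments);   % VLAD encoding of image k
end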
I have a large stack of 800 16-bit grayscale images of 2048x2048 px. They are read from a single BigTIFF file and the whole stack barely fits into my RAM (8GB).
Now I need to do a median projection. That means I want to compute the median of each pixel across all 800 frames. The MATLAB median function fails because there is not enough memory left to make a copy of the whole array for the function call. What would be an efficient way to compute the median?
I have tried using a for loop to compute the median one pixel at a time, but this is still terribly slow.
Iterating over blocks, as @Shai suggests, may be the most straightforward solution. If you do have this problem frequently, you may want to consider converting the image to a mat-file, so that you can access the pixels as an n-d array directly from disk.
%# convert to mat file
matObj = matfile('dest.mat','Writable',true);
matObj.data(2048,2048,numSlices) = 0;
for t = 1:numSlices
matObj.data(:,:,t) = imread(tiffFile,'index',t);
end
%# load a block of the matfile to take median (run as part of a loop)
medianOfBlock = median(matObj.data(1:128,1:128,:),3);
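For completeness, the loop hinted at in the comment could look roughly like this (assuming the matObj created above holds a 2048x2048-by-numSlices stack; the 128-pixel block size is arbitrary):

%# assemble the full median projection block by block
blockSize = 128;
medianImage = zeros(2048, 2048);
for r = 1:blockSize:2048
    for c = 1:blockSize:2048
        rows = r:r+blockSize-1;
        cols = c:c+blockSize-1;
        medianImage(rows, cols) = median(matObj.data(rows, cols, :), 3);
    end
end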
I bet that the distributions of the individual pixel values over the stack (i.e. the histograms of the pixel jets) are sparse.
If that's the case, the amount of memory needed to keep all the pixel histograms is much less than 2K x 2K x 64k: you can use a compact hash map to represent each histogram, and update them loading the images one at a time. When all updates are done, you go through your histograms and compute the median of each.
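As a toy sketch of that idea for a single pixel, using containers.Map as the hash map (random 16-bit values stand in for the pixel's 800 samples, and the lower median is taken for simplicity):

h = containers.Map('KeyType','double','ValueType','double');
samples = randi([0, 65535], 1, 800);        % one pixel's values across the stack
for t = 1:numel(samples)
    v = samples(t);
    if isKey(h, v), h(v) = h(v) + 1; else, h(v) = 1; end
end
% median from the histogram: walk the sorted gray levels until half the count is reached
lvls = sort(cell2mat(keys(h)));
cnts = cell2mat(values(h, num2cell(lvls)));
csum = cumsum(cnts);
med  = lvls(find(csum >= numel(samples)/2, 1, 'first'));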
If you have access to the Image Processing Toolbox, MATLAB has a tool for handling large images called blockproc.
From the docs:
To avoid these problems, you can process large images incrementally: reading, processing, and finally writing the results back to disk, one region at a time. The blockproc function helps you with this process.
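A generic illustration of that pattern (the file name and the per-block operation are placeholders, not from the original question):

% process a large TIFF on disk block by block, without loading it whole
fun = @(blockStruct) mean2(blockStruct.data);             % any per-block operation
blockMeans = blockproc('largeImage.tif', [1024 1024], fun);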
I will try my best to help, even though I don't have an 800-frame TIFF stack or an 8GB computer; I want to see if my thinking can lead to a solution.
First, 800*2048*2048 pixels at 16 bits each is roughly 6.3 GB, not counting headers. With your 8GB RAM that is a tight fit, and other running programs may further fragment the contiguous memory. Anyway, let's treat the problem as MATLAB not being able to load the whole stack into memory at once.
As Jonas suggests, imread supports loading a TIFF image by index. It also supports a PixelRegion parameter, so you can also consider accessing parts of the image by this parameter if you want to utilize Shai's idea.
I came up with a median algorithm that doesn't use all the data at the same time; it merely scans through a sequence of unordered values, one at a time, but it does keep a set of 256 counters:
data = randi([0,255], 1, 800);          % toy stand-in for one pixel's 800 samples
bins = num2cell(zeros(256,1,'uint16')); % one counter per gray level
for ii = 1:800
    bins{data(ii)+1} = bins{data(ii)+1} + 1;
end
% clearvars data
s = cumsum(cell2mat(bins));
if any(s==400)
    % the 400th and 401st order statistics straddle the median: average them
    med = ( find(s==400, 1, 'first') + ...
            find(s>400, 1, 'first') ) /2 - 1;
else
    med = find(s>400, 1, 'first') - 1;
end
It's not very efficient, at least because it uses a for loop. But the benefit is that instead of keeping 800 raw values in memory, only 256 counters are kept; the counters need uint16, so they take roughly as much space as 512 raw 8-bit values. If you are confident that, for any pixel, the same gray level won't occur more than 255 times among the 800 samples, you can choose uint8 and hence halve that memory.
The above code is for one pixel. I'm still thinking how to expand it to a 2048x2048 version (see the sketch at the end of this answer), such as
for ii = 1:800
    img_data = randi([0,255], 2048, 2048);
    % (do stats stuff)
end
By doing so, for each iteration, you only need these kept in memory:
One frame of image;
A set of counters;
A few supplemental variables, with size comparable to one frame of image.
I use a cell array to store the counters. According to this post, a cell array can be pre-allocated while its elements are still stored non-contiguously in memory. That means the 256 counters (512*2048*2048 bytes in total, about 2 GB) can be stored as separate chunks, which is quite reasonable for your 8GB RAM. But obviously my sample code does not take advantage of this, since bins = num2cell(zeros(...)) first builds one contiguous array.
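For what it's worth, here is one way that expansion could look. This is only a rough sketch: it keeps the 256-level simplification of the toy example (the real data is 16-bit), stores the counters in a cell array as discussed, uses a random frame as a stand-in for imread(tiffFile,'index',ii), and returns the lower median instead of averaging the two middle samples:

nLevels = 256;                                   % simplification; the real data has 16-bit levels
nFrames = 800;
bins = cell(nLevels, 1);
for level = 1:nLevels
    bins{level} = zeros(2048, 2048, 'uint16');   % one counter plane per gray level
end
for ii = 1:nFrames
    img_data = randi([0,255], 2048, 2048);       % stand-in for imread(tiffFile,'index',ii)
    for level = 1:nLevels
        bins{level} = bins{level} + uint16(img_data == level-1);
    end
end
% walk the gray levels, accumulating counts; the first level at which a pixel's
% cumulative count reaches nFrames/2 is that pixel's (lower) median
csum = zeros(2048, 2048, 'uint16');
med  = NaN(2048, 2048);
for level = 1:nLevels
    csum = csum + bins{level};
    newlyDone = isnan(med) & (csum >= nFrames/2);
    med(newlyDone) = level - 1;
end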
So I'm trying to perform an STFT on a piano recording using MATLAB, but I get the following error.
Warning: Input arguments must be scalar.
In test3 at 35
??? Error using ==> zeros
Out of memory. Type HELP MEMORY for your options.
Error in ==> test3 at 35
song = cat(1,song,zeros(n_of_padding,1));
The code I've used is adapted from a sample found on the net.
clc;
clear all;
[song,FS] = wavread('c scale fast.wav');
song = sum(song,2);
song = song/max(abs(song));
wTime = 0.05;
ZP_exp = 1;
P_OL = 50;
% Number of STFT samples per STFT slice
N_window = floor(wTime*FS);
% Number of overlapping points
window_overlap = floor(N_window*(P_OL/100));
wTime = N_window/FS;
%size checking
%make sure there are integer number of windows if not zero pad until they are
L = size(song);
%determine the number of times-1 the overlapping window will fit the song length
N_of_windows = floor(L - N_window/(N_window - window_overlap));
%determine the remainder
N_of_points_left = L - (N_window + N_of_windows*(N_window - window_overlap));
%Calculate the number of points to zero pad
n_of_padding = (N_window - window_overlap) - N_of_points_left;
%append the zeros to the end of the song
song = cat(1,song,zeros(n_of_padding,1));
clear n_of_windows n_of_points_left n_of_padding
n_of_windows = floor((L - N_window)/(N_window - window_overlap))+1;
windowing = hamming(N_window);
N_padding = 2^(nextpow2(N_window)+ZP_exp);
parfor k = 1:N_of_windows
starting = (k-1)*(N_window -window_overlap) +1;
ending = starting+N_window-1;
%Define the Time of the window, i.e., the center of window
times(k) = (starting + ceil(N_window/2))/Fs;
%apply windowing function
frame_sample = music(starting:ending).*windowing;
%take FFT of sample and apply zero padding
F_trans = fft(frame_sample,N_padding);
%store FFT data for later
STFT_out(:,k) = F_trans;
end
Based on some assumptions I would reason that:
- n_of_padding should be smaller than N_window
- N_window is much smaller than FS
- FS is not too high (it is the sample rate of your recording, so at most a few tens of thousands)
- Your zeros matrix will not be huge
This should mean that the problem is not that you are creating a too large matrix, but that you already filled up the memory before this call.
How to deal with this?
First type dbstop if error
Run your code
When it stops check all variable sizes to see where the space has gone.
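In code those three steps look like this:

dbstop if error
test3        % run the script; execution stops at the error with the workspace intact
% then, at the K>> debug prompt:
whos         % list every variable with its size in bytes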
If you don't see anything strange (and the big storage is really needed) then you may be able to process your song in parts.
In line 35 you are trying to make an array that exceeds your available memory. Note that a 1-by-n array of zeros alone is n*8 bytes in size. This means that if you make such an array, call it x, and check it with whos('x'), like:
x = zeros(10000,1);
whos('x');
You will likely find that x is 80000 bytes. Maybe appending such an array to your song variable adds the last bytes that break the memory-camel's back. Using whos('variableName'), take the size of song just before line 35, separately add the size of zeros(n_of_padding,1), convert that to MB, and see if it exceeds the maximum possible array size reported by memory.
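For example, something along these lines would do that check (run it just before line 35):

s = whos('song');
extraBytes = n_of_padding * 8;            % zeros(n_of_padding,1) holds 8-byte doubles
neededMB = (s.bytes + extraBytes) / 2^20  % rough size of the enlarged song array
memory                                    % compare against "Maximum possible array"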
The most common cause of out-of-memory errors in MATLAB is that it is unable to allocate memory due to the lack of a large enough contiguous block. This article explains the various reasons that can cause an out-of-memory error in MATLAB.
The out-of-memory error often points to a faulty implementation of code that expands matrices on the fly (concatenating, out-of-range indexing). In such scenarios MATLAB creates a copy in memory, i.e. memory twice the size of the matrix is consumed with each such occurrence.
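As a small illustration of that point (not from the linked article), growing an array element by element forces repeated reallocations and copies, whereas preallocating avoids them:

n = 1e6;
a = [];
for k = 1:n
    a(end+1) = k;     % grows on the fly: MATLAB may reallocate and copy repeatedly
end
b = zeros(1, n);      % preallocated once
for k = 1:n
    b(k) = k;
end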
On Windows this problem can be alleviated to some extent by passing the /3GB /USERVA=3030 switch during boot, as explained here. This enables additional virtual memory to be addressed by the application (MATLAB in this case).
I am running a script which creates a lot of big arrays. Everything runs fine until the following lines:
%dist is a sparse matrix
inds=dist~=0;
inserts=exp(-dist(inds).^2/2*sig_dist);
dist(inds)=inserts;
The last line causes the error: ??? Maximum variable size allowed by the program is exceeded.
I don't understand how could the last line increase the variable size - notice I am inserting into the matrix dist only in places which were non-zero to begin with. So what's happening here?
I'm not sure why you are seeing that error. However, I suggest you use the Matlab function spfun to apply a function to the nonzero elements in a sparse matrix. For example:
>>dist = sprand(10000,20000,0.001);
f = @(x) exp(-x.^2/2*sig_dist);
>>dist = spfun(f,dist)
MATLAB implements a "lazy copy-on-write" model. Let me explain with an example.
First, create a really large vector
x = ones(5*1e7,1);
Now, say we wanted to create another vector of the same size:
y = ones(5*1e7,1);
On my machine, this will fail with the following error
??? Out of memory. Type HELP MEMORY for your options.
We know that y will require 5*1e7*8 = 400000000 bytes ~ 381.47 MB (which is also confirmed by whos x), but if we check the amount of free contiguous memory left:
>> memory
Maximum possible array: 242 MB (2.540e+008 bytes) *
Memory available for all arrays: 965 MB (1.012e+009 bytes) **
Memory used by MATLAB: 820 MB (8.596e+008 bytes)
Physical Memory (RAM): 3070 MB (3.219e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
we can see that it exceeds the 242 MB available.
On the other hand, if you assign:
y = x;
it will succeed almost instantly. This is because MATLAB is not actually allocating another memory chunk of the same size as x, instead it creates a variable y that shares the same underlying data as x (in fact, if you call memory again, you will see almost no difference).
MATLAB will only try to make another copy of the data once one of the variables changes, thus if you try this rather innocent assignment statement:
y(1) = 99;
it will throw an error, complaining that it ran out of memory, which I suspect is what is happening in your case...
EDIT:
I was able to reproduce the problem with the following example:
%# a large enough sparse matrix (you may need to adjust the size)
dist = sparse(1:3000000,1:3000000,1);
First, lets check the memory status:
» whos
Name Size Bytes Class Attributes
dist 3000000x3000000 48000004 double sparse
» memory
Maximum possible array: 394 MB (4.132e+008 bytes) *
Memory available for all arrays: 1468 MB (1.539e+009 bytes) **
Memory used by MATLAB: 328 MB (3.440e+008 bytes)
Physical Memory (RAM): 3070 MB (3.219e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
say we want to apply a function to all non-zero elements:
f = @(X) exp(-X.^2 ./ 2);
strangely enough, if you try to slice/assign then it will fail:
» dist(dist~=0) = f( dist(dist~=0) );
??? Maximum variable size allowed by the program is exceeded.
However the following assignment does not throw an error:
[r,c,val] = find(dist);
dist = sparse(r, c, f(val));
I still don't have an explanation why the error is thrown in the first case, but maybe using the FIND function this way will solve your problem...
In general, reassigning elements that are already nonzero does not change the memory footprint of a sparse matrix. Call whos before and after the assignment to check.
dist = sparse(10, 10);
dist(1,1) = 99;
dist(6,7) = exp(1);
inds = dist ~= 0;
whos
dist(inds) = 1;
whos
Without a reproducible example it is hard to determine the cause of the problem. It may be that some intermediate assignment is taking place that isn't sparse. Or you have something specific to your problem that we can't see here.
I'm conducting dimensionality reduction of a square matrix A. My issue is that I have trouble computing the eigenvalue decomposition of a 13000 x 13000 matrix A, i.e. [v d] = eigs(A). Because it's a sparse matrix, I get an 'out of memory' error on my machine with 4GB of RAM. I'm convinced it's not my PC's problem, since the memory is not used up when the eigs command is run. The help I found online had to do with ARPACK. I checked the recommended site, but there were a lot of files there and I don't know which to download, nor did I understand how to use it with MATLAB. Other advice says to use numerical methods, but I don't know which specific one to use. Any solution is welcome.
Error in ==> eigs>ishermitian at 1535
tf = isequal(A,A');
Error in ==> eigs>checkInputs at 479
issymA = ishermitian(A);
Error in ==> eigs at 96
[A,Amatrix,isrealprob,issymA,n,B,classAB,k,eigs_sigma,whch, ...
Error in ==> labcomp at 20
[vector lambda] = eigs(A)
Please can I get an explanation of these errors and how to correct them?
The reason you don't see the memory used up is that it isn't used up - MATLAB fails to allocate the needed amount of memory.
Although an array of 13000 x 13000 doubles (the default data type in MATLAB) is about 1.25 GB, that doesn't mean 4 GB of RAM is enough - MATLAB needs 1.25 GB of contiguous memory, otherwise it won't succeed in allocating your matrix. You can read more on memory problems in MATLAB here: http://www.mathworks.com/support/tech-notes/1100/1106.html
You can as a first step try using single precision:
[v d]=eigs(single(A));
You say
other advice says to use numerical methods
If you are doing it on the computer, it's numerical by definition.
If you don't want (or can't, due to memory constraints) to do it in MATLAB, you can look for a linear algebra library (ARPACK is just one of them) and have the calculation done outside of MATLAB.
First, if A is sparse, single(A) won't work. Single-precision sparse matrices are not implemented in MATLAB; see the comments here:
how to create a single float sparse matrix in mex files
The call to ishermitian may fail because you can't store two copies of your matrix (A and A'). Bypass this problem by commenting the line out and setting issymA to true or false, depending on whether your matrix is Hermitian.
If you find further problems with memory inside eigs, try to reduce its memory footprint by asking for fewer eigenvalues, eigs(A,1), or by reducing the maximum size of the basis (option p), which by default is twice the number of requested eigenvalues:
opts.p = 3
[x,d] = eigs(A,2,'LM',opts)