I am getting an Out of Memory error while trying to solve a certain linear equation (code below). Since I am used to coding in C, where you have full control over the objects you create, I am wondering whether I am using MATLAB inefficiently. Here is the relevant part of the code:
myData(n).AMatrix = sparse(fscanf(fid2, '%f', [2*M, 2*M]));
myData(n).AMatrix = transpose(myData(n).AMatrix);
%Read the covariance^2 matrix
myData(n).CovMatrix = sparse(fscanf(fid2, '%f', [2*M,2*M]));
myData(n).CovMatrix = reshape(myData(n).CovMatrix, [4*M*M,1]);
%Kronecker sum of A with itself
I=sparse(eye(2*M));
myData(n).AA=kron( I, myData(n).AMatrix)+kron( myData(n).AMatrix,I);
myData(n).AMatrix=[];
I=[];
%Solve (A+A)x = Vec(CovMatrix)
x=myData(n).CovMatrix\myData(n).AA;
When I try to run this code I get the error:
Error using \
Out of memory. Type HELP MEMORY for your options.
Error in COV (line 62)
x=myData(n).CovMatrix\myData(n).AA;
Before this piece of code I only open some files (which contain two 100x100 arrays of floats), so I don't think they contribute to this error. The element AMatrix is a 100 x 100 array, so the linear system in question has dimensions 10000 x 10000. Also, AA has a one-dimensional kernel; I don't know if this affects the numerical computation. Later I project the obtained solution onto the orthogonal complement of the kernel to get the "good" solution, but that comes after the error. For people who are familiar with it, this is just a solution of the Lyapunov equation AX + XA = Cov. The matrix A is sparse: it has four 50x50 subblocks, one of which is all zeros, one is the identity, one is diagonal, and one has fewer than 1000 non-zero elements. The matrix CovMatrix is diagonal with 50 non-zero elements on the diagonal.
The problem is that at the moment I can only do the calculations on a small personal computer with 2 GB of RAM and 2.5-6 GB of virtual memory. When I run memory in MATLAB it gives:
>> memory
Maximum possible array: 311 MB (3.256e+08 bytes) *
Memory available for all arrays: 930 MB (9.749e+08 bytes) **
Memory used by MATLAB: 677 MB (7.102e+08 bytes)
Physical Memory (RAM): 1931 MB (2.025e+09 bytes)
I am not very knowledgeable when it comes to memory, so I am open to even simple advice. Thanks.
Complex functions usually allocate temporary memory during computation. 10000 x 10000 is quite large if a temporary dense matrix of that size gets allocated along the way. You could try a few smaller problem sizes and find the upper limit of your current computer.
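For example, a minimal probing sketch along those lines, using synthetic stand-ins for AMatrix and CovMatrix and keeping every intermediate sparse (speye instead of sparse(eye(...))); adjust the sizes and sparsity to match the real data:
% Probe how large a sparse Kronecker-sum solve fits in memory (synthetic data).
for M = [10 25 50 100]
    n  = 2*M;
    A  = sprandn(n, n, min(1, 1000/n^2)) + speye(n);  % stand-in for AMatrix
    C  = spdiags(ones(n, 1), 0, n, n);                % stand-in for CovMatrix
    AA = kron(speye(n), A) + kron(A, speye(n));       % Kronecker sum, stays sparse
    b  = C(:);                                        % vec of the right-hand side
    try
        x = AA \ b;       % note the operand order: this solves AA*x = b
        fprintf('M = %4d (system %7d x %7d): solved\n', M, n^2, n^2);
    catch err
        fprintf('M = %4d: failed (%s)\n', M, err.message);
        break
    end
end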
Related
People have asked similar questions before, but none has a satisfactory answer. I'm trying to solve the Lindblad master equation, and the matrices I'm trying to simulate are of order 10000 x 10000. The problem is with the exponentiation of the matrix, which consumes a lot of RAM.
The MATLAB and Python expm() functions take around 20 s and 80 s respectively for a matrix of size 1000 x 1000. The code is shown below.
pd = makedist('Normal');
N = 1000;
r = random(pd ,[N, N]);
t0 = tic;
r = expm(r);
t_total = toc(t0);
The problem comes when I try to do the same for a matrix of size 10000 x 10000. Whenever I apply expm(), the RAM usage grows until it takes all the RAM and swap memory on my PC (I have 128 GB of RAM and a 64-core CPU), and it's the same in both MATLAB and SciPy. I don't understand what is taking so much RAM, how I can run expm() efficiently, or whether it is possible at all. Even if I could do it efficiently in another language it would be really helpful!
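For scale, a quick arithmetic check of the raw storage per dense matrix at these sizes; the temporaries allocated by the scaling-and-squaring/Padé algorithm behind expm come on top of this, and their exact number is implementation-dependent:
% Raw storage per dense N-by-N matrix (arithmetic only; no claim about how
% many internal copies expm makes).
for N = [1000 10000]
    realBytes    = N^2 * 8;     % real double
    complexBytes = N^2 * 16;    % complex double
    fprintf('N = %5d: %8.3f GB real, %8.3f GB complex per copy\n', ...
            N, realBytes/2^30, complexBytes/2^30);
end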
I am using the GPU for computation in MATLAB, and I keep getting an out of memory problem.
So I thought I could convert some of my variables from double, which is MATLAB's default type, to single. I did the following experiment:
A = gpuArray([1,2,3])
A =
1 2 3
whos A
Name Size Bytes Class
A 1x3 4 gpuArray
B = gpuArray(single([1,2,3]))
B =
1x3 gpuArray single row vector
1 2 3
whos B
Name Size Bytes Class
B 1x3 4 gpuArray
Now I am a little bit confused. On one hand, the display does show me that B is a 1x3 gpuArray single row vector. On the other hand, the whos command shows no difference between A and B.
I am wondering whether this double-to-single conversion will indeed help me reduce the memory usage of my GPU in MATLAB. Basically, my question is: when I move two variables from the CPU to the GPU, one double and the other single, do they consume the same amount of GPU memory? The whos command shows no difference.
Note the following:
A = gpuArray([1:1000])
whos A
Name Size Bytes Class Attributes
A 1x1000 4 gpuArray
Interesting! Only 4 bytes!
But this has an easy explanation: whos only gives you the size of the variable in CPU RAM. It's 4 bytes because it's just a memory address, not the data itself. The data is on the GPU, and it cannot "easily" be accessed by the CPU.
Answering your question: yes, single takes half the memory of double on the GPU.
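A quick way to confirm this on the device itself, assuming a Parallel Computing Toolbox setup; the sizes are arbitrary, and the measured differences may not be byte-exact because of how the GPU allocator works:
g  = gpuDevice;                                  % handle to the current GPU
m0 = g.AvailableMemory;                          % free device memory, in bytes
Ad = zeros(4000, 4000, 'double', 'gpuArray');    % 4000*4000*8 bytes = 128 MB
m1 = g.AvailableMemory;
As = zeros(4000, 4000, 'single', 'gpuArray');    % 4000*4000*4 bytes = 64 MB
m2 = g.AvailableMemory;
fprintf('double array used ~%.0f MB of GPU memory\n', (m0 - m1)/1e6);
fprintf('single array used ~%.0f MB of GPU memory\n', (m1 - m2)/1e6);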
I am trying to store a matrix of size 4 x 10^6, but MATLAB can't seem to do it when I run the code; it's as if it can't store a matrix of that size, or I should use another way to store it. The code is below:
matrix = [];
for j = 1 : 10^6
x = randn(4,1);
matrix = [matrix x];
end
The problem is that it keeps running for a long time and doesn't finish; however, when I remove the line matrix = [matrix x];, it finishes the loop very quickly. What I ultimately need is to have the matrix in a file so that I can use it wherever I need it.
Whether you can store it is determined by the amount of RAM you have available. If you store double values, as here, you need 64 bits per number. Thus, storing 4M values requires 4*10^6*64 = 256M bits, which is 32 MB of RAM.
A = rand(4,1e6);
whos A
Name Size Bytes Class Attributes
A 4x1000000 32000000 double
So you would only be unable to store this if you had less than 32 MB of RAM free.
The reason your code takes so long is that you grow your matrix inside the loop. The orange wiggles under the line matrix = [matrix x]; are not because the festive season is almost here, but because it is very bad practice to grow an array like this. As the Code Analyzer warning tells you: preallocate your matrix. You know how large it will be, so just initialise it as matrix = zeros(4,1e6); instead of growing it.
Of course in this case you can simply do matrix = randn(4,1e6), which is even faster than looping.
For more information about preallocation see the official MATLAB documentation, this question (which I answered), or this one.
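A minimal sketch that puts both pieces together, preallocation plus saving the result to a .mat file so it can be reused elsewhere (the file name is arbitrary):
matrix = zeros(4, 1e6);          % preallocate: one allocation instead of a million
for j = 1:1e6
    matrix(:, j) = randn(4, 1);  % fill column j in place
end
save('matrix.mat', 'matrix');    % write to disk (~32 MB of doubles)

% later, or in another script:
S = load('matrix.mat');          % S.matrix holds the stored 4-by-1e6 matrix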
I am developing a program to read a time series of NIfTI format images into a 4D matrix in MATLAB. There are about 60 images in the stack, and the program runs without problems until the 28th image (all the images are approximately the same size, with the same details). But after that the reading gets slower and slower.
In fact, the delay is accumulating.
I checked the program again and there are no open files. Everything looks fine.
Can someone give me some advice?
Size of current array (double)
Unless you are running on a machine with more than ~20 GB of RAM, your matrix simply becomes too large to handle.
To check the size of the first three dimensions of your matrix:
A = rand(512,512,160);
whos('A')
Output:
Name Size Bytes Class Attributes
A 512x512x160 335544320 double
Now multiply by 60 to obtain the size of your 4D matrix, and divide by 1024^3 to get GB:
335544320*60/1024^3 = 18.7500 GB
So yes, your matrix is most likely too large to handle efficiently/effectively.
A matrix exceeding your RAM forces MATLAB to use the swap file (HDD/SSD), which is orders of magnitude slower than random access memory (even if you have an SSD).
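If you want to check this against your own machine before allocating anything, a quick comparison sketch (the dimensions are the example ones used above, so substitute your actual image size; the memory function only exists on Windows):
dims        = [512 512 160 60];   % example stack dimensions, adjust to your data
neededBytes = prod(dims) * 8;     % doubles take 8 bytes per element
fprintf('Required for the full 4D double stack: %.2f GB\n', neededBytes/1024^3);

if ispc
    m = memory;                   % memory statistics (Windows only)
    fprintf('Largest possible array: %.2f GB\n', m.MaxPossibleArrayBytes/1024^3);
end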
Switch to different data types
If you do not require double precision, i.e. roughly 16 significant digits, you can always switch to fewer digits, i.e. single-precision floating point numbers. Doing this halves the size. You can reduce the size even further if the numbers are, for example, unsigned integers in the range 0-255. See the code below:
% Create doubles
A_double = rand(512,512,160);
S1=whos('A_double');
% Create floats
A_float = single(A_double);
S2=whos('A_float');
% Create unsigned int range 0-255
A_uint=uint8(randi(256,[512,512,160])-1);
S3=whos('A_uint');
fprintf('Size A_double is %4.2f GB\n',(S1.bytes*60)/1024^3)
fprintf('Size A_float is %4.2f GB\n',(S2.bytes*60)/1024^3)
fprintf('Size A_uint is %4.2f GB\n',(S3.bytes*60)/1024^3)
Output:
Size A_double is 18.75 GB
Size A_float is 9.38 GB
Size A_uint is 2.34 GB
Which may just fit inside your RAM. Make sure you do pre-allocate the memory first, i.e. create the full-size array up front with the zeros() function.
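A minimal sketch of that preallocation pattern for the image stack; the reader call is a placeholder (readOneVolume is hypothetical), so substitute whatever you currently use to load each NIfTI volume, and adjust the dimensions and class to your data:
nx = 512; ny = 512; nz = 160; nt = 60;       % assumed dimensions
stack = zeros(nx, ny, nz, nt, 'single');     % preallocate once; 'uint8' is smaller still

for t = 1:nt
    vol = readOneVolume(t);                  % placeholder for your NIfTI reader
    stack(:, :, :, t) = single(vol);         % write into the preallocated slot
end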
I create a 3560 x 3560 sparse matrix, A. I then create two 1 x 3560 vectors, S and T.
When I run the following code (which concatenates S and T as rows in A and afterwards also as columns in A)
A=[A;S;T];
S=[S 0 0];
T=[T 0 0];
A=[A, S', T'];
The last line produces an out of memory error.
I guess I am running out of memory because I have other variables stored, but it seems odd to me that adding two 3560-element vectors would be exactly the point at which I hit my limit, so I think (or, more accurately, wishfully think) that somehow the concatenations aren't being done in a smart way...
Am I right, or is there no hope (except for optimizing other pieces of my code)?
EDIT:
At the request of yoda, I am posting the full code.
Basically, it takes an N x N matrix of edge weights between the nodes of a graph and adds two vectors that will act as a source and a sink in a max-flow computation.
nbr_sim(nbr_sim<0.8)=0;
A=sparse(size(nbr_sim,1)+2,size(nbr_sim,2)+2);
nelements=size(nbr_sim,1);
A(nbr_sim>0)=nbr_sim(nbr_sim>0);
clear nbr_sim;
S=abs([1 0 0]*n);
T=abs([0 1 0]*n);
A(1:nelements,end-1)=S';
A(1:nelements,end)=T';
A(end-1,1:nelements)=S;
A(end,1:nelements)=T;
EDIT:
As you say you have used considerable resources before this operation, it is entirely likely that you are close to the tipping point at which MATLAB gives you an out of memory error.
Remember that when you grow matrices on the fly, either by concatenating or by indexing out of range, MATLAB creates a copy of the matrix in memory. So you're not just using up resources for that extra row, but for a copy of the entire matrix!
Here's an example on my machine where I try to grow a vector that's large enough to tip it over the memory limit.
clear
a=rand(2*10^9+1,1); %#create a large array
whos a
Name Size Bytes Class Attributes
a 2000000001x1 16000000008 double
%#Now repeat the same, but by growing the array by one element
clear
a=rand(2*10^9,1);
a=[a;0];
??? Error using ==> vertcat
Out of memory. Type HELP MEMORY for your options.
So you see that although MATLAB can create a matrix with 2*10^9+1 elements in one go, when you try to create an array of the same size by appending a single element to a 2*10^9-element vector, it runs out of memory.
If S and T are column vectors as you say, then A=[A;S;T] should give you an error:
??? Error using ==> vertcat
CAT arguments dimensions are not consistent.
So you must be doing something else. Concatenating will not change the sparseness of the matrix, i.e., it won't switch from sparse to full.
A=sprand(3560,3560,0.01); %#test matrices
S=rand(3560,1);
T=rand(3560,1);
B=[A,S,T]; %#join the columns
issparse(B)
ans =
1
Moreover, a 3560x3560 matrix of doubles is only ~97 MB, which shouldn't give you an "out of memory" error...
When dealing with large matrices:
For a full matrix, you'd better preallocate memory to avoid copying the data every time you extend it (see why).
The sparse case is more complicated, and extending can be even less efficient than for a full matrix, because the elements are stored in a compressed manner. Setting an "inner" entry may force a large part of that storage to be rewritten (have a look here).
So you'd better collect all the entries in advance and build the matrix with a single call to the sparse() function, rather than calling sparse() first and then padding in the data.
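A minimal sketch of that triplet-based construction, with synthetic indices and values standing in for the real edge weights; the two extra rows/columns leave room for the source and sink from the question:
n  = 3560;
nz = 1000;                          % assumed number of nonzero edge weights
i  = randi(n, nz, 1);               % row indices
j  = randi(n, nz, 1);               % column indices
v  = rand(nz, 1);                   % edge weights
A  = sparse(i, j, v, n + 2, n + 2); % built in one call; duplicate entries are summed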