Multiplication of matrices that cannot be loaded into memory at once - matlab

I've read some similar posts, while none of them actually tackled my problem.
I need to do a series of multiplication-like operations on two matrices A and B, specifically computing kernel matrices, on a Windows platform. The problem is that both A and B can be really large, say 20000-by-360000, while my server only provides 96 GB of memory. It is therefore infeasible to hold both in memory at the same time and do the calculation directly. Is there a good way to handle such a large multiplication efficiently? By the way, the result, which is 20000-by-20000, is much smaller than the multipliers and fits in memory comfortably.
Because I am on Windows, it may not be feasible to call functions like mmap2.
I wonder whether converting them to sparse matrices is a good option. However, that depends heavily on the properties of the data.
Another solution I have come up with is to partition the original matrices into blocks and do the calculation block by block.
Is there a better solution? Any practical suggestions would be much appreciated.
Best regards,
Peiyun

If I were you I'd look into the block processing function:
B = blockproc(filename,[M N],fun)
and use the Destination parameter to allow saving the results without overflowing your memory.
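As a complement, here is a minimal sketch of the block-by-block idea from the question, using matfile rather than blockproc. It assumes A and B (both 20000-by-360000) were saved with -v7.3 into A.mat and B.mat under those variable names, that the kernel is a plain inner product, and that the block size is just a starting point to tune against available memory:

% Accumulate K = A*B' (20000-by-20000) without ever loading A or B in full.
mA = matfile('A.mat');            % assumed file and variable names
mB = matfile('B.mat');
[n, d] = size(mA, 'A');           % n = 20000, d = 360000
blk = 20000;                      % columns per block; tune to available memory
K = zeros(n, n);                  % the result fits in memory
for c = 1:blk:d
    cols = c:min(c + blk - 1, d);
    Ab = mA.A(:, cols);           % read one column block of A from disk
    Bb = mB.B(:, cols);           % read the matching block of B
    K = K + Ab * Bb.';            % accumulate the partial product
end

Each block here is about 3.2 GB of doubles, so two blocks plus the result stay well within 96 GB; nonlinear kernels would replace the accumulation line accordingly.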

Related

Optimizing compression using HDF5/H5 in Matlab

Using Matlab, I am going to generate several data files and store them in H5 format as 20x1500xN, where N is an integer that can vary but is typically around 2300. Each file will have 4 different data sets with the same structure. Thus, I will quickly run into a storage problem. My two questions:
Is there any reason not to split the 4 different data sets, and instead just save them as one 4x20x1500xN array? I would prefer having them split, since they are different signal modalities, but if there is any computational/compression advantage to not having them separated, I will join them.
Using Matlab's built-in compression, I set Deflate=9 (and Datatype=single). However, I have now realized that using deflate multiplies my computational time by 5. I realize this could have something to do with my ChunkSize, which I just set to 20x1500x5 without any reasoning behind it. Is there a strategic way to optimize computational load with respect to deflation and compression time?
Thank you.
1- Splitting or merging? It won't make a difference for the compression itself, since compression is performed in independent blocks either way.
2- Your choice of chunk shape does indeed seem bad. ChunkSize determines the shape and size of each block that is compressed independently. The bad news is that each of your chunks is 600 kB, which is much larger than the L2 cache, so your CPU is likely twiddling its thumbs waiting for data to come in. Depending on the nature of your data and the access pattern you will use most (reading the whole array at once, random reads, sequential reads...), you may want to target the L1 or L2 cache size, or something in between. Here are some experiments done with a Python library that may serve you as a guide.
Once you have selected your chunk size (how many bytes each compression block will have), you have to choose a chunk shape. I'd recommend the shape that most closely fits your reading pattern if you are doing partial reads, or a fastest-axis-first fill if you want to read the whole array at once. In your case, that would be something like 1x1500x10, I think (the second axis being the fastest, the last one the second fastest, and the first the slowest; correct me if I am mistaken).
Lastly, keep in mind that the details depend heavily on the specific machine you run this on: the CPU, the quality and load of the hard drive or SSD, the speed of the RAM... so the fine tuning will always require some experimentation.
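As a rough illustration of those knobs (the file and dataset names are made up, and the ChunkSize and Deflate values are starting points to experiment with, not recommendations):

% One modality of 20x1500xN singles with a small, cache-friendlier chunk shape.
N = 2300;
data = rand(20, 1500, N, 'single');
h5create('signals.h5', '/modality1', [20 1500 N], ...
    'Datatype', 'single', ...
    'ChunkSize', [1 1500 10], ...   % 1*1500*10*4 bytes = 60 kB per chunk
    'Deflate', 4);                  % gzip level 0-9; lower levels compress faster
h5write('signals.h5', '/modality1', data);
h5disp('signals.h5');               % inspect the chunking and filter settings

Timing h5write for a few ChunkSize/Deflate combinations on your own machine is the quickest way to find the sweet spot.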

'Out of memory' in Matlab. A slow but permanent solution?

I am wondering whether my proposed solution to the 'Out of Memory' problem is impossible to implement. Here is my suggestion:
The idea is to seamlessly save huge matrices (say BIG = rand(10^6)) to the HDD as a .mat (-v7.3) file when it is not possible to keep them in memory, and to access them seamlessly whenever required. Then, when you use them like:
a = BIG(3678,2222);
s = size(BIG);
, it seamlessly does this behind the scenes:
m = matfile('BIG.mat');
a = m.BIG(3678,2222);
s = size(m,'BIG');
I know that speed is important, but suppose that I have enough time yet not enough memory. Also, it is better to write a memory-efficient program in the first place, but again suppose that I need to use someone else's function which can't be optimized. I actually have some more related questions: Can this be implemented using objects? Or does it require an infrastructural change in Matlab?
Seems to me like this is certainly possible, as this is essentially what many operating systems do in the form of paging.
Moreover, something similar is provided by MATLAB Distributed Computing Server. This allows you (among other things) to store the data for a single matrix on multiple machines, and access it seamlessly in the way that you propose.
IMHO, allowing data to be paged to file/swap should be a setting in MATLAB. Unfortunately, that is not how MATLAB's memory model works, and I suspect it would be very difficult for them to implement. Plus, with such a setting enabled, users would no longer be protected against silly mistakes like zeros(1e7) instead of zeros(1e7,1); it would simply seem to hang the system while MATLAB busily fills your entire drive with zeros.
Anyway, I think it is possible using MATLAB classes, but I wouldn't recommend it. Note that implementing a proper subsref and subsasgn is *ahem* challenging, and it is likely that you'll have to re-implement many algorithms (like mldivide). This will most likely mean you lose a great deal of performance; think factors in the thousands.
Here's an interesting and relevant paper I found while randomly googling around a bit.
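As a very rough sketch of the class-based idea (not recommended, per the caveats above; the class name, property names, and the restriction to plain ()-reads are all illustrative, and all of matfile's indexing limitations still apply):

classdef DiskBacked
    % Forwards BIG(i,j)-style reads to a MAT-file so the data stays on disk.
    properties (Access = private)
        m    % matfile handle
        var  % name of the variable inside the MAT-file
    end
    methods
        function obj = DiskBacked(matfileName, varName)
            obj.m = matfile(matfileName);
            obj.var = varName;
        end
        function out = subsref(obj, s)
            if strcmp(s(1).type, '()')
                out = obj.m.(obj.var)(s(1).subs{:});  % read directly from disk
            else
                out = builtin('subsref', obj, s);
            end
        end
        function sz = size(obj, varargin)
            sz = size(obj.m, obj.var, varargin{:});
        end
    end
end

Usage would be something like B = DiskBacked('BIG.mat','BIG'); a = B(3678,2222); and every other operation (subsasgn, mldivide, ...) would still have to be added by hand, which is exactly the performance and maintenance burden mentioned above.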
What could probably solve your problem is memory-mapped I/O (which MATLAB supports).
There, a file is mapped to a range of memory addresses, and all reads/writes to those addresses are actually reads/writes to the file.
This only reserves/blocks memory addresses; it does not consume physical memory.
However, I would only advise it with 64-bit MATLAB, since with 32-bit MATLAB the address space is simply not large enough to accommodate RAM for data, code for MATLAB and DLLs, and the memory-mapped I/O on top.
Check out the examples on the documentation page of memmapfile(), e.g.,
m = memmapfile('records.dat', ...
'Offset', 1024, ...
'Format', {'uint32' [4 10 18] 'x'});
A = m.Data(1).x;
whos A
  Name      Size          Bytes  Class
  A         4x10x18        2880  uint32 array
Note that accesses to m.Data(1).x are redirected to file I/O, i.e. no memory is consumed. So this provides efficient random access to parts of possibly very large data files residing on disk. Also note that more complex data structures, as in the example, can be realized.
It seems to me that this provides the "implementation with objects" you had in mind.
Unfortunately, this does not let you memory-map MAT-files directly, which would be really useful. That is probably difficult because of the compression.
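To memory-map a plain matrix rather than the record structure from the documentation example, the data first has to sit in a raw binary file. A minimal sketch (the 10000x10000 size is illustrative so the example actually runs; the 10^6-by-10^6 matrix from the question would need roughly 8 TB of disk):

% Write BIG once as raw doubles, then map it instead of loading it.
BIG = rand(10000);                        % illustrative size
fid = fopen('BIG.dat', 'w');
fwrite(fid, BIG, 'double');               % column-major, matching MATLAB's layout
fclose(fid);
clear BIG

m = memmapfile('BIG.dat', ...
    'Format', {'double', [10000 10000], 'BIG'}, ...
    'Writable', true);
a = m.Data(1).BIG(3678, 2222);            % element read via file I/O
m.Data(1).BIG(3678, 2222) = 42;           % in-place write back to the file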
Write a function:
function a = BIG(x,y)
% Read element (x,y) of the variable BIG stored in BIG.mat,
% without loading the whole matrix into memory.
m = matfile('BIG.mat');
a = m.BIG(x,y);
end
Every time you write BIG(a,b) the function is called.
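For completeness, a usage sketch run at the command prompt (the 10000x10000 size is just illustrative so the example fits in memory; inside a function, a variable named BIG would shadow the function for the whole scope):

% Save the matrix once with -v7.3 so matfile can read it partially.
BIG = rand(10000);
save('BIG.mat', 'BIG', '-v7.3');
clear BIG                      % from here on, BIG(x,y) resolves to the function above
a = BIG(3678, 2222);           % reads a single element from disk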

Matlab memory management

I am simulating a set of differential equations in Matlab, for which I will save a struct of at least 400 x 80000 x 24 doubles.
What, in your opinion, is the easiest way to control the memory load? Memory mapping, or a parallel process for memory checking, data writing and clearing? The program is single-threaded, but may eventually be rewritten for parallel computing.
Problem statement
Here are two possible situations; I think you are facing one of these:
Your data really is a block of m x n x z
Your data is not really a block of m x n x z
Solution for 1
If your data really has block form, it is likely best to store it in a single matrix.
Solution for 2
If your data is not a nice block, there are some choices to be made.
If your data is nearly a nice block (for example m x (0.99~1.01)n x z), still consider using a matrix. Think about padding the gaps with zeros or NaN values.
If your data is very much not a block (for example m x (0.01~100)n x z), consider using a more flexible data structure.
On flexible data structure usage
The trick to using data in a flexible way is to identify the big matrices (the ones that may vary in size) and keep those as regular matrices. In your case the data is about 400 x 80000 x 24, so you will definitely want the 80000 to be the dimension of a simple storage structure. The 24 and 400 are quite small, so we don't care much whether they are flexible or not.
Conclusion
The most efficient data structure, though not very flexible, is a single 400x80000x24 matrix.
The most flexible data structure that is still fairly efficient is a 400x24 cell array containing vectors of approximately 80000x1.
The most efficient data structure that is still fairly flexible is a 1x24 cell array containing matrices of approximately 400x80000; see the sketch below. As 24 is small, you could even use a struct for this in a meaningful way, but usually a cell array will make more sense.
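A sketch of that last option (sizes follow the question, so the full structure is roughly 400*80000*24*8 bytes, about 6 GB of doubles; the per-block size variation is just illustrative):

% 1x24 cell array, each cell holding a roughly 400x80000 matrix of doubles.
nBlocks = 24;
results = cell(1, nBlocks);
for k = 1:nBlocks
    nCols = 80000 + randi([-500 500]);  % each block may differ a bit in length
    results{k} = zeros(400, nCols);     % preallocate the big, regular dimension
end
blockSizes = cellfun(@(x) size(x, 2), results);  % inspect the per-block lengths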

2d matrix to a 3d matrix without using a loop

I have a 300x300 matrix. I need to make a 300x300x1024 matrix where each "slice" is the original 300x300 matrix. Is there any way to do this without a loop? I tried the following:
old=G;
for j=2:N;
G(:,:,j)=old;
end
where N is 1024, but I run out of memory.
Know any shorter routes?
Use repmat.
B = repmat(A,[m n p...])
produces a multidimensional array B composed of copies of A. The size of B is [size(A,1)*m, size(A,2)*n, size(A,3)*p, ...].
In your case,
G=repmat(old,[1 1 1024]);
will yield the result you want without the for loop. The memory issue is a completely different subject: a 300x300x1024 double matrix will "cost" you ~740 MB of memory, which is not a lot these days. Check your memory load before you try the repmat and see why you don't have these extra 700 MB free. Use memory and whos to see how much memory is available and which variables can be cleared.
You are likely running out of memory because you haven't pre-initialized your matrix.
If you do this first,
old = G;
G = zeros(size(old,1), size(old,2), 1024);
and then start the loop from 1 instead of 2, you will probably not run out of memory.
This works because you first set aside a block of memory large enough for the entire matrix. If you do not preallocate, MATLAB first sets aside enough memory for a 300x300x1 matrix. When you add the second slice, it allocates a new block for a 300x300x2 matrix and copies the data over, and so on for every slice, repeatedly reallocating and copying instead of reusing the memory set aside for the earlier, smaller matrices.
This happens often in MATLAB, so it is important never to grow your matrices inside a loop.
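Putting the two pieces together, a minimal sketch (G is the 300x300 matrix from the question):

old = G;                                   % keep the original 300x300 slice
N = 1024;
G = zeros(size(old, 1), size(old, 2), N);  % reserve the full ~740 MB block once
for j = 1:N
    G(:, :, j) = old;                      % fill in place, no reallocation
end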
The quick answer is no, you will need to loop.
You might be able to do something smart like block-copying your array's memory, but you didn't even give us a language to work with.
You will probably want to make sure each entry in your matrix is a minimum size: even at one byte per element you will require 92 MB, and if you are storing 64-bit values we're talking roughly 740 MB. If each entry is an object, the number will leap into the many-gigabyte range in no time. Bit packing may come in handy... but again, no idea what your other constraints are.
Edit: I updated your tags.
I'm not sure if I can help much, but doubles are 64 bits each, so at a bare minimum you're talking about 740 MB (already a struggle on a 32-bit OS). This could easily double if each cell involves one or two pointers to different memory locations (I don't know enough about MATLAB to tell you for sure).
If you're not running on a 64-bit machine with 8 GB of RAM, I don't think you have a chance. If you are, allocate all the memory you can to MATLAB and pray.
Sorry I can't be of more help; maybe someone else knows more tricks.

Where do I find the memory requirements of a MATLAB function?

I have a 3D array of values (0 or 1), which is very large (approx 2300x2300x11). I want to fit a surface to these values using, for example, interp3, but when I try, MATLAB runs out of memory. Thus, I've decided to reduce the size of my array enough for MATLAB to accommodate it in memory.
Now, the smaller I make the reduced array, the worse my results will be (the surface fitting is part of a measurement process with high precision requirements), so I want to reduce the array as little as possible.
Is there any way to determine beforehand how much memory a certain array size will demand and how much memory is available, and then use this information to shrink the array just enough to avoid out-of-memory exceptions, but no more?
I don't know the answer to this, but I wonder if you can have your cake and eat it, too.
If your data set is too big, why not do a piecewise fit? Do it in chunks rather than omitting data points.
Or be smarter about how you omit data points. You want them in areas of high curvature - where your data is changing fastest. Leave out points in areas far away from the action, where nothing interesting is happening. You might have to do a fit, look at the surface, add and remove more points and try again.
It might be an iterative process, but I'll bet you'll be able to get a nice fit with a little luck and effort.
You can look at the maximum array sizes that are supported on different platforms. In general, if you have a PxQxR 3D array of doubles, the size of your array in bytes is P*Q*R*8. For your array, that comes to ~444 MB; see the sketch below. You can also try reducing it to single precision using single(A); single uses 4 bytes per element, so it cuts the size of your array by a factor of 2.
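A quick sketch of that estimate (the numbers match the 2300x2300x11 array in the question):

P = 2300; Q = 2300; R = 11;
bytesAsDouble = P * Q * R * 8    % ~4.66e8 bytes, i.e. roughly 444 MB as doubles
bytesAsSingle = P * Q * R * 4    % half of that after converting with single()
A = ones(P, Q, R, 'single');     % allocate directly in single precision
whos A                           % confirm the actual memory footprint
memory                           % (Windows only) report available MATLAB memory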
I haven't really poked into the inner workings of interp3, but the exact memory requirements will depend on the interpolation option you choose. So you could first try converting the array to single and see if that works. If not, try with 80% (or 90%) of the number of rows and columns. That way you keep a good chunk of the original array, but the memory requirement is only 64% (81%) of the original.
If that doesn't help, duffymo's suggestion is what you should be looking into.