Maximum array size in MATLAB? - matlab

I'm writing a MATLAB program that will generate a matrix with 1 million rows and an unknown number of columns (at most 1 million).
I tried pre-allocating this matrix:
a=zeros(1000000,1000000)
but I received the error:
"Maximum variable size allowed by the program is exceeded."
I have a feeling that not pre-allocating this matrix will seriously slow the code down.
This made me curious: What is the maximum array size in MATLAB?
Update: I'm going to look into sparse matrices, because the result I am aiming for in this particular problem will be a matrix consisting mostly of zeros.

Take a look at this page; it lists the maximum sizes: Max sizes
It looks to be on the order of a few hundred million. Note that the matrix you're trying to create here has 1e6 * 1e6 = 1e12 elements. This is many orders of magnitude greater than the maximum sizes listed there, and you also do not have that much RAM on your system.
My suggestion is to look into a different algorithm for what you are trying to accomplish.
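For a sense of scale, here is the back-of-the-envelope arithmetic in MATLAB (using the sizes from the question):
nElements   = 1e6 * 1e6;      % 1e12 elements for a full 1e6-by-1e6 matrix
bytesNeeded = nElements * 8;  % 8 bytes per double -> 8e12 bytes, roughly 8 TB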

To find out the real maximum array size (Windows only), use the command user = memory. user.MaxPossibleArrayBytes shows how many bytes of contiguous RAM are free. Divide that by the number of bytes per element of your array (8 for doubles) and you know the maximum number of elements you can preallocate.
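A minimal sketch of that calculation (Windows only; the variable names are just illustrative):
user = memory;                            % only available on Windows
bytesFree  = user.MaxPossibleArrayBytes;  % largest contiguous block currently free, in bytes
maxDoubles = floor(bytesFree / 8);        % 8 bytes per double element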
Note that, as woodchips said, MATLAB may have to copy your array (if you pass it by value to a subfunction, for example). In my experience, about 75% of the maximum possible array size is usually available several times over.

The Limits
There are two different limits to be aware of:
Maximum array size (in terms of number of elements) allowed by MATLAB, regardless of current memory availability.
Current bytes available for a single array -- the (current) maximum possible array size in bytes.
The first limit is what causes "Maximum variable size allowed by the program is exceeded", not the second limit. However, the second one is also a practical limit of which you must be aware!
Checking the Limits
The maximum number of elements allowed for an array is checked as follows:
>> [~,maxsize] = computer
maxsize =
2.8147e+14
According to the documentation for the computer command, this returns:
maximum number of elements allowed in a matrix on this version of MATLAB
This is a static MATLAB limit on the number of elements, not affected by the state of the computer (hardware specs and current memory usage). And at over 2 petabytes for a double array of that length, it is also far beyond the memory of any computer I am aware of!
On the other hand, the largest practical array size that you can create at any given moment can be checked by the memory command:
>> memory
Maximum possible array: 35237 MB (3.695e+10 bytes) *
Memory available for all arrays: 35237 MB (3.695e+10 bytes) *
Memory used by MATLAB: 9545 MB (1.001e+10 bytes)
Physical Memory (RAM): 24574 MB (2.577e+10 bytes)
* Limited by System Memory (physical + swap file) available.
As the message says, these values are based on actual current memory availability, taking into account both physical memory and the swap file (collectively, virtual memory).
If needed, these values can be accessed programmatically via m = memory;.
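For example, the values reported above can be read from the returned struct (field names as in the memory documentation):
m = memory;
m.MaxPossibleArrayBytes    % largest single array MATLAB could allocate right now, in bytes
m.MemAvailableAllArrays    % memory available for all arrays, in bytes
m.MemUsedMATLAB            % memory currently used by the MATLAB process, in bytes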
Adjusting the Limits
The first limit (the hard limit) was fixed up until R2015a; since then it can be changed, but only lowered to a fraction of system memory, through the Workspace preference "MATLAB array size limit" (Preferences > MATLAB > Workspace).
You can't increase it beyond your system limits.
The second limit obviously has no "setting" in MATLAB, since it is based on available memory and computer configuration. Aside from adding RAM, there's not a lot you can do: (1) run pack to consolidate workspace memory and perform "garbage collection", although this may only help on certain platforms, and (2) increase the page file size so that other processes can swap out and leave MATLAB more physical memory. But be cautious when relying on the page file, as your computer may become unresponsive if page-file thrashing happens.

In older versions of Matlab that don't include the memory command, you can use:
feature memstats
Physical Memory (RAM):
In Use: 738 MB (2e2c3000)
Free: 273 MB (11102000)
Total: 1011 MB (3f3c5000)
Page File (Swap space):
In Use: 1321 MB (529a4000)
Free: 1105 MB (45169000)
Total: 2427 MB (97b0d000)
Virtual Memory (Address Space):
In Use: 887 MB (37723000)
Free: 1160 MB (488bd000)
Total: 2047 MB (7ffe0000)
Largest Contiguous Free Blocks:
1. [at 4986b000] 197 MB ( c585000)
2. [at 3e1b9000] 178 MB ( b2a7000)
3. [at 1f5a0000] 104 MB ( 6800000)
4. [at 56032000] 77 MB ( 4d3e000)
5. [at 68b40000] 70 MB ( 4660000)
6. [at 3a320000] 54 MB ( 3610000)
7. [at 63568000] 45 MB ( 2d48000)
8. [at 35aff000] 40 MB ( 2821000)
9. [at 60f86000] 37 MB ( 25ca000)
10. [at 6f49d000] 37 MB ( 25b3000)
======= ==========
842 MB (34ac0000)
ans =
207114240
You can't suppress the output, but it returns the largest memory block available (207,114,240 bytes / 8 = 25,889,280 doubles).
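Since feature is undocumented, the following is only a sketch, but capturing the return value should give you that largest block directly (the report is still printed):
blockBytes = feature('memstats');    % largest contiguous free block, in bytes
maxDoubles = floor(blockBytes / 8);  % number of double elements that fit in that block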

Related

Virtual Memory page table growth

When processes are allowed to grow larger than memory, page tables also grow very large. How could we organize page tables and TLB to keep access times as quick as possible for codes with good locality? For example, assume physical memory is 512K, each page is 1K, and a TLB of size 128. If we assume most processes are 256K or less, then we could allocate a fixed-size page table with 256 entries. Now in the unexpected case, where the page table grows larger than 256 entries, how should we organize it? What implications does your design have on average access time and on the maximum virtual memory size of a program?
The solution used on x86 is to have "sparse" page tables, that is there isn't a full table to contain a mapping for each page. Rather a two level mechanism is used:
The virtual memory is 4 GB large. A single page has size 4 KB. Using a one level approach would thus require a table of 4 GB / 4 KB = 1024 * 1024 entries. If an entry consumed 4 bytes, then every process would need 4 MB just to store its table.
Using a two level approach we have a page directory with 1024 entries, each of size 4 bytes (making it fit perfectly into a single 4 KB page). Thus each entry in that directory manages 4 GB / 1024 = 4 MB. If (and only if) there should be a mapping of some pages of virtual memory to physical memory in that 4 MB range, then the entry points to an instance of another structure, a page table. That contains 1024 entries, too, so each one manages 4 MB / 1024 = 4 KB, i.e. exactly one page.
If there's a process that just needs a single page to operate, then using the single level approach we need 4 MB to store its virtual memory configuration. Using the two level mechanism described above, we need 4 KB for the page directory and 4 KB for the page table containing the mapping for that single page. Thus only 8 KB are used to store the virtual memory configuration.
If the process needs additional memory at runtime, and if that memory is at a (virtual) address not within the 4 MB range managed by its page table, then a second page table needs to be provided, increasing the memory used to store the mappings by another 4 KB.
Using this two level approach slightly increases access times for pages not in the TLB, because the memory management unit needs to access two memory locations (the page directory, and afterwards the respective page table) to be able to compute the physical address.
The TLB is unaffected by this: It stores mappings of single pages. How these mappings have been established isn't relevant to its operation.
Let's apply this to the example configuration you gave above:
A single page is 1 KB in size. Most processes, as you said, will use 256 KB of memory or less. But we want to be able to have processes using more virtual memory.
If we choose to have the last level handle a full 256 KB, then we have
256 KB / 1 KB = 256 entries. Assuming a 32 bit architecture, this in turn means we can have each entry with size of 4 byte (to hold an address). 256 entries * 4 Byte = 1 KB and thus a full page. Nice.
To be able to handle more virtual memory than 256 KB we add another layer. Because it's easy, we let this level use tables with 256 entries (4 bytes each), too, so that such a table fits exactly into a page.
This gives us a virtual memory of 256 * 256 KB = 64 MB. A virtual address in that system would then be 26 bits long:
DDDDDDDDTTTTTTTTPPPPPPPPPP
D := Index to page directory, highest level.
8 bit to be able to index 256 entries.
T := Index to page table, lower level.
8 bit to be able to index 256 entries.
P := Offset inside page.
10 bit to be able to address 1024 bytes.
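To make the split concrete, here is a small MATLAB sketch that decomposes such a 26-bit virtual address into the three fields (the address value is purely hypothetical):
va         = hex2dec('2ABCDE');               % example 26-bit virtual address
pageOffset = bitand(va, 2^10 - 1);            % lowest 10 bits: offset within the 1 KB page
tableIdx   = bitand(bitshift(va, -10), 255);  % next 8 bits: index into the page table
dirIdx     = bitshift(va, -18);               % top 8 bits: index into the page directory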
A process using less than 256 KB then needs 2 KB to manage its memory configuration. Each additional 256 KB of virtual memory needed adds another 1 KB of configuration memory.
Assuming the TLB can hold 128 entries (your question is a bit unclear here), it would need 128 * (16 + X - 10) bits, where X is the number of bits used to address physical memory. (This depends on the actual implementation; I was thinking of 16 bits per entry to store the indices of the paging structures plus the upper bits of the physical address, not counting the 10-bit offset.)
I hope this answers your question. An actual implementation will need to make design choices based on a lot of constraints.

'Out of memory' error in Matlab

I know this is quite a common problem, but all the solutions I have tried have failed.
Basically I want to train a big neural network and I obtain 'Out of memory' error.
My training set is a 729 x 3456 matrix of doubles and the neural network is a so-called 'autoencoder' with layers of these sizes
3456 - 4000 - 2000 - 1000 - 300 - 1000 - 2000 - 4000 - 3456
In my code, first of all I do
net = feedforwardnet([layer1, layer2, layer3, layer4, layer3, layer2, layer1], 'trainscg');
net = configure(net, Dtrain', Dtrain');
where I use the 'trainscg' training function because I read that it is the one that uses the least memory.
Then I initialize the weights and biases according to some values (which I have already calculated), set the 'transferFcn' and start training.
I tried cleaning the workspace as much as possible and I also tried to put
net.efficiency.memoryReduction = 4;
before training, since I read it can help. I still get 'Out of memory', even if I increase the value to 60.
Here is the output of the command 'memory', executed when the workspace contains just the training set and four numbers (the size of the layers)
>> memory
Maximum possible array: 4508 MB (4.727e+09 bytes) *
Memory available for all arrays: 4508 MB (4.727e+09 bytes) *
Memory used by MATLAB: 1927 MB (2.020e+09 bytes)
Physical Memory (RAM): 8080 MB (8.472e+09 bytes)
* Limited by System Memory (physical + swap file) available.
What else can I do to solve the problem?
You can check the data type of your data and the memory used by that data type, and use the one with the minimum memory requirement. For example, a double takes 8 bytes whereas a single takes 4 bytes for the same number. You can use the whos command to check the memory usage.
Also, you can check the performance of your system in Task Manager before running the code. Maybe some other process on your system is taking most of the memory and can be stopped.
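A minimal sketch of the data-type check and conversion, assuming the training matrix from the question is called Dtrain:
whos Dtrain               % reports size, bytes and class of the variable
Dtrain = single(Dtrain);  % 4 bytes per element instead of 8, halving the footprint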

Matlab: your opinion about a small memory issue working with matrix

I have a small question regarding MATLAB memory consumption.
My Architecture:
- Linux OpenSuse 12.3 64bit
- 16 GB of RAM
- Matlab 2013a 64 bit
I handle a matrix of double with size: 62 x 11969100 (called y)
When I try the following:
a = bsxfun(@minus, y, -1)
or simply
a = minus(y, -1)
I get an OUT OF MEMORY error (in both cases).
I've just computed the RAM space allocated for the matrix:
62 x 11969100 x 8 = 5.53 GB
Where am I wrong?!
Thanks a lot!
I'm running on Win64, with 16GB RAM.
Starting with a fresh MATLAB, with only a couple of other inconsequential applications open, my baseline memory usage is about 3.8GB. When I create y, that increases to 9.3GB (9.3-3.8 = 5.5GB, about what you calculate). When I then run a = minus(y, -1), I don't run out of memory, but it goes up to about 14.4GB.
You wouldn't need much extra memory to have been taken away (1.6GB at most) for that to cause an out of memory error.
In addition, when MATLAB stores an array, it requires a contiguous block of memory to do so. If your memory was a little fragmented - perhaps you had a couple of other tiny variables that happened to be stored right in the middle of one of those 5.5GB blocks - you would also get an out of memory error (you can sometimes avoid that issue with the use of pack).
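A sketch of that pack suggestion (it has to be run from the command prompt, not from inside a function):
pack   % saves workspace variables to a temporary file, clears them, and reloads them into contiguous memory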
The output of memory on windows platform:
>> memory
Maximum possible array: 2046 MB (2.145e+009 bytes) *
Memory available for all arrays: 3226 MB (3.382e+009 bytes) **
Memory used by MATLAB: 598 MB (6.272e+008 bytes)
Physical Memory (RAM): 3561 MB (3.734e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
The output of computer on linux/mac:
>> [~,maxSize] = computer
maxSize =
2.814749767106550e+14 % Max. number of elements in a single array
with some hacks (found here):
>> java.lang.Runtime.getRuntime.maxMemory
ans =
188416000
>> java.lang.Runtime.getRuntime.totalMemory
ans =
65011712
>> java.lang.Runtime.getRuntime.freeMemory
ans =
57532968
As you can see, aside from memory limitations per variable, there are also limitations on total storage for all variables. This is no different on Windows or Linux.
The important thing to note is that for example on my Windows machine, it is impossible to create two 1.7GB variables, even though I have enough RAM, and neither is limited by maximum variable size.
Since carrying out the minus operation will assign a result of equal size to a new variable (a in your case, or ans when not assigning to anything), there need to be at least two of these humongous things in memory.
My guess is you run into the second limit of total memory space available for all variables.
bsxfun is vectorized for efficiency. Typically vectorized solutions require more than just minimal memory.
You could try using repmat, or if that does not work, a simple for loop.
In general I believe the for loop will require the least memory.
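A minimal sketch of the loop approach, operating on y in place so that no second ~5.5 GB result array has to be allocated (note that, unlike a = minus(y, -1), this overwrites y itself):
for k = 1:size(y, 1)
    y(k, :) = y(k, :) + 1;   % same result as minus(y, -1), one row at a time
end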

Out of memory on a rather small matrix

I am doing motion detection on a fairly small video: 56 frames of 288x384 RGB. I keep two copies of it, so it should amount to about 40 MB tops, including my other variables.
Now, this line gives me an out of memory error
output = uint8(zeros(this.videoHeight,2.*this.videoWidth,3,size(this.originalFrames,4)));
typing memory reports
>> memory
Maximum possible array: 202 MB (2.114e+08 bytes) *
Memory available for all arrays: 863 MB (9.045e+08 bytes) **
Memory used by MATLAB: 527 MB (5.526e+08 bytes)
Physical Memory (RAM): 3071 MB (3.220e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
>>
I'm new to MATLAB, but not totally new to programming. What am I not understanding?
EDIT
So I did some disp'ing:
disp(this.videoHeight)
disp(2.*this.videoWidth)
disp(size(this.originalFrames,4))
produces:
288
768
54
So, it is actually smaller than I suggested...
You should use
zeros(..., 'uint8')
rather than
uint8(zeros(...))
to avoid creating the array in double-precision first, and then copying it to a uint8 array.
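For the sizes printed in the question, the difference is roughly a ~287 MB double temporary versus a ~36 MB uint8 array allocated directly (a sketch using those dimensions):
bad  = uint8(zeros(288, 768, 3, 54));    % allocates ~287 MB of doubles first, then copies to uint8
good = zeros(288, 768, 3, 54, 'uint8');  % allocates the ~36 MB uint8 array directly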
I haven't looked in detail but I would be surprised if there weren't quite a bit of overhead imposed by Matlab. You're probably using a lot more memory than you might suspect.
Try dialing down the number of frames you process to see if that fixes the problem.

How can we handle large matrices in matlab (larger than 10000x10000)

In my program I am faced with some matrices that are larger than 10000x10000.
I cannot transpose or invert them; how can this problem be overcome?
??? Error using ==> ctranspose
Out of memory. Type HELP MEMORY for your options.
Error in ==> programname1 at 70
B = cell2mat(C(:,:,s))';
Out of memory. Type HELP MEMORY for your options.
Example 1: Run the MEMORY command on a 32-bit Windows system:
>> memory
Maximum possible array: 677 MB (7.101e+008 bytes) *
Memory available for all arrays: 1602 MB (1.680e+009 bytes) **
Memory used by MATLAB: 327 MB (3.425e+008 bytes)
Physical Memory (RAM): 3327 MB (3.489e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
Example 2: Run the MEMORY command on a 64-bit Windows system:
>> memory
Maximum possible array: 4577 MB (4.800e+009 bytes) *
Memory available for all arrays: 4577 MB (4.800e+009 bytes) *
Memory used by MATLAB: 330 MB (3.458e+008 bytes)
Physical Memory (RAM): 3503 MB (3.674e+009 bytes)
==============================================================================
memory
% Maximum possible array: 1603 MB (1.681e+009 bytes) *
% Memory available for all arrays: 2237 MB (2.346e+009 bytes) **
% Memory used by MATLAB: 469 MB (4.917e+008 bytes)
% Physical Memory (RAM): 3002 MB (3.148e+009 bytes)
I have used sparse for C.
B = cell2mat(C);
clear C %# to reduce the allocated RAM
P=B\b;
Name Size Bytes Class Attributes
B 5697x5697 584165092 double sparse, complex
C 1899x1899 858213576 cell
b 5697x1 91152 double complex
==============================================================================
??? Error using ==> mldivide
Out of memory. Type HELP MEMORY for your options.
Error in ==> programname at 82
P=B\b;
==============================================================================
Edit: 27.05.11
Name Size Bytes Class Attributes
C 997x997 131209188 cell
B 2991x2991 71568648 single complex
Bdp 2991x2991 143137296 double complex
Bsparse 2991x2991 156948988 double sparse, complex
Bdp=double(B);
Bsparse=sparse(Bdp);
I used single precision, which gave the same accuracy as double precision.
It's better, am I right?
A few suggestions:
If possible, as @yoda suggested, use sparse matrices
Do you really need the inverse? If you're solving a linear system (Ax=b), you should use MATLAB's backslash operator.
If you really need huge dense matrices, you can harness the memory of several machines using distributed arrays and MATLAB distributed computing server.
3 GB isn't a lot when each matrix you have is 600 MB all by itself. If you can't make the algorithmic changes, you need 64-bit MATLAB on a 64-bit OS, with a lot more RAM. It's the only way to get a lot of memory. Notice that with 3 GB, MATLAB only has 2.2 GB, and the largest chunk is 1.5 GB - that's only two of your matrices.
MATLAB has an easy way of handling huge matrices on the order of 1000000x1000000.
Such matrices are usually sparse, and it is not necessary to allocate RAM for the zero-valued elements.
So you should just use this command:
A = sparse(1000000,1000000); % define a 1000000-by-1000000 all-zero sparse matrix
Then you can set the diagonal nonzero elements by commands like "spdiags".
See this Link : http://www.mathworks.nl/help/matlab/ref/spdiags.html
Note that you cannot use the "inv" command to invert matrix A, because "inv" builds a full (dense) matrix and uses a lot of RAM (probably producing an "out of memory" error).
To solve an equation like A*X=B, you can use X=A\B.
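A minimal sketch of that workflow (the diagonal values are only illustrative):
n = 1000000;
A = spdiags(ones(n,1) * [-1 2 -1], -1:1, n, n);  % sparse tridiagonal matrix, no dense storage needed
b = ones(n, 1);
x = A \ b;                                       % solves A*x = b without ever forming inv(A)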