Variable allocation and RAM - cpu-architecture

Does a number assigned to a variable always fit in the allocated amount of RAM?

When a variable is initialized, it is placed on the stack, not the heap. Though both the stack and the heap are part of memory, we usually only talk about allocation with respect to the heap. This is because the stack is controlled entirely by the program running at the time, and no calls to the OS are required to push anything onto it.
All that being said, there is a maximum size the stack can grow to, and once we grow past that size, we get the aptly named "stack overflow". So, yes, there is a point at which creating another variable will not fit, but "allocation" is the wrong term to describe it.
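To make the distinction concrete, here is a minimal C sketch (the variable names are just for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    void example(void) {
        int on_stack = 42;                        /* lives in the current stack frame; no OS call needed */
        int *on_heap = malloc(sizeof *on_heap);   /* explicitly allocated from the heap */
        if (on_heap == NULL) {
            fprintf(stderr, "heap allocation failed\n");
            return;
        }
        *on_heap = 42;
        free(on_heap);                            /* heap memory must be released explicitly */
    }                                             /* on_stack is reclaimed automatically when the frame is popped */

    int main(void) {
        example();
        return 0;
    }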

Related

Stacks grow downward and heaps grow upward; what happens if they meet?

This picture can be found in Operating System Concepts, at the beginning of Chapter 9. The virtual address space ranges from 0 to max. My question is:
what will decide the max value? Is it fixed?
what will happen if the hole between the stack and the heap is filled and one or both of them want to keep growing?
I know that my question may be a duplicate, but I've read a lot of threads and I still cannot find my answer. Thanks in advance!
Keep in mind that what you are seeing is a very simplified diagram of what happens. First of all, the underlying hardware sets a maximum logical address range.
Some part of that range will be reserved (either through hardware or software, depending upon the processor) for the operating system. The remaining addresses are for the user address space.
So what you are looking at is a conceptual view of a user address space. This can be further limited by system parameters and process quotas.
what will decide the max value? Is it fixed?
Thus MAX is a combination of hardware limits, operating system address allocation, system parameters, and process quotas. It is, therefore, not necessarily fixed.
what will happen if the hole between the stack and the heap is filled and one or both of them want to keep growing?
First of all, remember this diagram is only conceptual. One simplification is that the valid addresses within the address space need not be contiguous; there could be holes. Second, the memory layout is usually controlled by the linker. The "text" and the "data" can be reversed or even interleaved.
The blue "hole" will generally be unallocated (invalid) memory pages. Some OS's do not grow the stack. It is preallocated by the linked. In a multi-threaded system, there could be multiple stacks (another simplification of the diagram) and there are often multiple heaps.
As various functions map pages into the logical address space, the blue area shrinks. If it goes to zero, the next attempt to map pages will fail.
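As a rough sketch of that failure mode in C (not something you should run to completion on a real machine; on Linux with overcommit enabled the loop may run for a very long time):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t mib = 0;
        /* Request 1 MiB blocks until no more pages can be mapped
           into the process's logical address space. */
        while (malloc((size_t)1 << 20) != NULL)
            mib++;
        /* malloc returned NULL: the attempt to map more pages failed. */
        printf("mapping failed after %zu MiB\n", mib);
        return 0;
    }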

MATLAB and clearing the swap space

In debugging mode I stop at some breakpoint and do some matrix manipulation in order to test the program. These manipulations are computationally expensive, so MATLAB uses the swap space on my Linux system. Then, after the program continues running, the swap space is almost full, so MATLAB crashes. Is there a way I could clear the swap while at the breakpoint? Doing clear all and clear classes has an effect only on RAM, but does not affect the swap.
You can't. Swap isn't special, so just work through this as a regular out-of-memory issue. If you free up memory, you'll indirectly free up the swap that's being used to back it (or avoid having to use swap to supplement it).
Swap space is just an OS-managed backing store for virtual memory. From a normal program's point of view, swap is RAM (just slow RAM), and you don't manage it separately. (Well... you can "wire" pages to prevent them from being swapped out, or use OS APIs to directly manipulate swap, but those are low-level, platform-specific details below the level of malloc, not exposed to you as a Matlab M-code programmer, and not what you want to do here.) If your Matlab program runs out of memory, that means it has used up or fragmented its process's virtual memory, not that something is wrong with your swap space in particular. (Unless there's a low-level bug somewhere.)
When this happens, you may need to look elsewhere in your Matlab program (e.g. in global variables, figure handle properties, or other levels of the function call stack) to find additional data that hasn't been cleared yet, or just restart the Matlab process to fix memory fragmentation (which can happen if your code fills up the memory with lots of small arrays).
Like @siliconwafer suggests, memory, whos, and feature memstats are good tools for debugging this. And if you're stopped inside the debugger, realize that you can't actually clear everything until you dbquit out of it.
Doing large matrix operations inside the debugger is not necessarily a recoverable operation: if you've modified arrays held in local variables in the stack frame(s) you're working on, but there are still copies of them held in other variables or frames, Matlab's copy-on-write mechanism needs to hold on to both copies of the arrays, and you might be out of luck for that run of the program if you hit your RAM limits.
If clear all and clear classes after exiting the debugger are not recovering enough memory for you, that smells like either memory fragmentation or a C-level memory leak (like in a MEX file). In either case, you need to restart Matlab to resolve it. Avoid the use of large cellstr arrays or other arrays-of-small-arrays to reduce fragmentation. And take a good hard look at your C code if you're using any custom MEX functions.
Or you just might not have enough memory to do the operations you're doing.

Dynamic allocation in kernel space

I have been trying to allocate space using malloc in kernel space for a driver I am working on (using malloc is a constraint here; I am not allowed to allocate space in any other manner), but if I try to allocate "too many" elements (~500 instances of a very small struct), only a fraction of the space I required is actually allocated.
Reducing the number of allocated elements did work for me with no problems. Does dynamic allocation in kernel space have limits which might be causing the behavior I am seeing?
malloc() is a user-space library function. You cannot use it in kernel space. There is a function called kmalloc() which is used to allocate memory in kernel space.
You can also use vmalloc(). I suggest you read the thread What is the difference between vmalloc and kmalloc? for some clarification on vmalloc() and kmalloc().
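For instance, a hedged sketch of what the allocation might look like in a driver (the struct and function names are placeholders, not from the original question):

    #include <linux/slab.h>     /* kmalloc, kfree */
    #include <linux/vmalloc.h>  /* vmalloc, vfree */
    #include <linux/errno.h>

    struct my_elem {
        int id;                 /* stand-in for the question's small struct */
    };

    static struct my_elem *elems;

    static int alloc_elems(size_t count)
    {
        /* kmalloc returns physically contiguous memory and can fail
           (or be capped) for large requests. */
        elems = kmalloc(count * sizeof(*elems), GFP_KERNEL);
        if (elems)
            return 0;

        /* vmalloc only needs virtually contiguous pages, so large
           allocations are more likely to succeed. Memory from kmalloc
           must be freed with kfree(), memory from vmalloc with vfree(). */
        elems = vmalloc(count * sizeof(*elems));
        return elems ? 0 : -ENOMEM;
    }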
Also, I suggest searching SO for your question before asking, because someone has already asked this here.

Why doesn't Matlab use swap instead of throwing "Out of memory"?

I was wondering why Matlab doesn't use swap, but instead throws the error "Out of memory".
Shouldn't Matlab just slow down instead of throwing an "Out of memory"?
Is this Java related?
added:
I know "out of memory" means it's out of contiguous memory. Doesn't swap have contiguous memory, or? I'm confused...
It is not about MATLAB. What happens when you try to allocate more memory than exists in your hardware is OS-specific behavior.
On Linux, by default the OS will 'optimistically' allocate almost anything you want, i.e. swap space is also counted as allocatable memory. You will get what you want - no OOM error, but slow computations with swap-allocated data. This 'feature' is called overcommit. You can change this behavior by modifying the overcommit settings in Linux (have a look e.g. here for a brief summary).
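A small C sketch of what this looks like under default Linux overcommit settings (the 64 GiB figure is illustrative; touching all the pages instead of just the first megabyte could invoke the OOM killer):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t huge = (size_t)64 << 30;  /* 64 GiB, far more than typical RAM */
        char *p = malloc(huge);
        /* Under overcommit this malloc usually *succeeds*: only address
           space is reserved; no physical pages are committed yet. */
        printf("malloc(64 GiB) %s\n", p ? "succeeded" : "failed");
        if (p) {
            /* Pages get backed by RAM or swap only when first written. */
            memset(p, 1, (size_t)1 << 20);  /* touch just the first MiB */
            free(p);
        }
        return 0;
    }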
Overcommit is probably not the best way to solve larger problems in MATLAB, since the entire OS starts to work really slowly. It definitely cannot be compared to optimized 'out-of-core' implementations that consciously use the hard disk in computations.
This is how it is on Linux. I do not know how to change the memory allocation behavior on Windows, but I doubt you really want to do that. You need more RAM.
And you are confusing things: swap has nothing to do with contiguous memory. Memory allocated by the OS is 'virtual memory', which is contiguous in the virtual address space regardless of whether the underlying physical pages are contiguous.
Edit: For transparent 'out-of-core' operations on large matrices using disk space as extra memory, you might want to have a look at the VVAR File Exchange project. This class pretends to be a usual MATLAB class, but it operates on an underlying HDD file. Note that the usual array size limitations of MATLAB still apply.

Maximum array size in Objective-C on iPhone?

I have a VERY large array (96,000 elements of type GLfloat). It was previously 24,000 elements, until I made a couple of changes. Now I'm getting a crash. I haven't done much to debug it yet, but when I noticed how ridiculously large one of my arrays was getting I thought it might be worth looking into. So, my only question is whether 96,000 elements (or 384,000 bytes) is too much for a single array?
That should be fine on the heap, but you should avoid allocations of that size on the stack. So malloc/free or new[]/delete[] is what you should use to create and destroy an array of that size.
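A minimal sketch of the heap version in C (assuming the usual iOS OpenGL ES header for GLfloat):

    #include <stdlib.h>
    #include <OpenGLES/ES1/gl.h>  /* defines GLfloat on iOS */

    GLfloat *make_vertices(void)
    {
        /* ~384 KB on the heap; an automatic array this size
           risks overflowing the stack */
        GLfloat *vertices = malloc(96000 * sizeof(GLfloat));
        if (vertices == NULL) {
            /* low-memory case: handle gracefully instead of crashing */
            return NULL;
        }
        return vertices;  /* caller must free(vertices) when done */
    }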
If the device has low memory, you can expect requests for large amounts of memory to occasionally return NULL. There are applications (such as photo/image processing) which request allocations of tens of megabytes -- many times greater than your 384 KB allocation.
There is no upper bound on the size of an array, save the amount of available RAM on the device.
I don't think it's too big. Some image resources would take up that much or more contiguous space without problems. For example, a 400x400 px image would take about 160,000 * 4 = 640,000 bytes of memory. I think the problem is somewhere else.