MATLAB not throwing OutOfMemoryExceptions or Maximum variable size errors

In a previous version of MATLAB (7.6), I used to get OutOfMemoryErrors, which I found kind of annoying. But since I upgraded to 7.11, for some reason it no longer throws those errors.
This means that when I accidentally try to make a variable that's way too large, the MATLAB shell will try to create the variable and bring my machine to a halt.
I'd really like to have these errors get thrown, so that I can exit out gracefully or debug my code, but I can't find the solution anywhere.
Possibly useful details:
I'm using OSX 10.5 on a 64-bit machine, with 4GB of RAM.
In MATLAB 7.6:
>> rand(50000);
??? Error using ==> rand
Maximum variable size allowed by the program is exceeded.
In MATLAB 7.11:
>> rand(50000);
(hang)

Between versions 7.6 and 7.11, the Macintosh version of MATLAB switched from a 32-bit application to a 64-bit one. So now, instead of running out of address space, MATLAB thrashes.

MATLAB doesn't hang. It's just paging, which takes forever. Try assigning a large array, open Activity Monitor, and watch the 'Virtual Memory' figure grow and grow.
If you reduce the page file size on your system, you can avoid that issue.
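If you want the old fail-fast behaviour back, one workaround is to check the requested size yourself before allocating. A minimal sketch, assuming double precision and a byte limit you choose yourself (guardedRand and maxBytes are made up for illustration, not MATLAB settings):

function A = guardedRand(n, maxBytes)
% An n-by-n matrix of doubles needs n^2 * 8 bytes; refuse to
% allocate it, with a catchable error, rather than start paging.
requested = n^2 * 8;
if requested > maxBytes
    error('guardedRand:tooLarge', ...
        'Requested %.0f bytes exceeds the %.0f byte limit.', ...
        requested, maxBytes);
end
A = rand(n);
end

With this, guardedRand(50000, 2^31) throws immediately instead of bringing the machine to a halt.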

Stop execution when RAM is filled (i.e. avoid writing to Disk)

I have this problem:
I run some large calculations before going to sleep (or work).
When I return, sometimes RAM is already full and the program has started writing to disk, which is a problem since the computer becomes almost unresponsive; also, the "Interrupt the current operation" button doesn't stop mserver.exe from executing a task.
Ten minutes after I pressed "Interrupt the current operation", the task was still running.
Not to mention that calculations are probably 100 or even 1000 times slower once it starts using disk instead of RAM (so it's pointless anyway).
Another problem is that I was unable to save some variables to a file: in Maple I couldn't type anything while mserver.exe was executing a task, and after I killed the mserver.exe process I was still unable to save those variables, since Maple commands don't work when the connection to the kernel is lost.
So, my question: can I make it so that mserver.exe won't use the disk at all (I mean from within Maple alone, not by disabling the page file in Windows) and just stops execution automatically when RAM is full (just like Classic Maple does when it hits the 2GB limit)?
It would also be nice to be able to keep Maple from using too much of the processor, say up to 75% or so, so that I could work on that computer without problems.
You might experiment with a few of the options available for specifying limits on the Maple (kernel, mserver) engine.
In particular,
--init-reserve-mem=memorysize
(or, possibly, the -T option). See here for more detail:
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=maple
On Linux/OSX you could pass that in a call to the maple script that launches Maple. On MS-Windows you could add that to the command string/Property in the launcher (icon).
You might try setting it to a fraction of your total RAM, e.g. 50-75%, and see how it goes. Presumably you'll have some other processes running.
As far as restricting CPU use goes, that's more of an OS issue. On Linux/OSX you could use the system's nice facility. I don't know what's available on MS-Windows (built-in or 3rd party). You might be able to set the priority of the running mserver process from the Task Manager. Or you might look at something like the START facility:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/start

Ever-increasing memory usage in NetLogo headless BehaviorSpace

I'm trying to run a NetLogo model in BehaviorSpace, in headless mode, on a Linux server.
My NetLogo version is 5.3.1 (the 64-bit version).
The server has 32 cores and 64 GB of RAM.
I'm setting -Xmx to 3072m.
After a few runs (~300) the memory usage is so high that I get a Java heap space error.
Surprisingly, the memory usage grows steadily, as if no flush-like function were called between runs. And it gets to a point it shouldn't reach if I understand things correctly (for example, with 15 parallel threads it reaches 64000 MB and beyond, when it should stay around 15 * 3072 = 46080 MB).
I'm using ca at setup, so I thought everything was supposed to be flushed out between runs. I'm not opening any files from the code (I use the standard BehaviorSpace output, in table format, not spreadsheet), and I'm not using any extensions.
I'm kind of puzzled here. Is there something in the BehaviorSpace-specific parameterization I should look into that keeps track of variables, turtles, etc. between runs? I couldn't find such a thing.
Could someone help me?
Thanks a lot!
Thomas

Fortran Input Files from Mac OS to XP

I recently got some Fortran code that ran successfully on Mac OS. The code, along with its input files, was later sent to me to compile. I used exactly the same code and the same input files, but an "array bounds exceeded" error appeared. I am using CVF 6.6 on Windows XP.
I wanted to know the following things:
Is this a compiler or OS problem?
Should I arrange for a Mac OS machine to get them compiled?
After much searching on the internet, I think the wise thing to do is to make my data "format free". But I don't know how to do that when my data is a time series with time in one column and voltage in the second.
The error message array bounds exceeded always (I think) indicates that your code has tried to access an array element outside the bounds of an array, for example element 25 of an array with 24 elements. This can only occur at run-time, and your compiler/run-time will only spot it if, when compiling, you enable the compiler option(s) for array bounds checking; your compiler documentation will tell you what those options are.
The error message should have been accompanied by some more information telling you where in the program the error occurred and the index of the out-of-bounds array access.
Given that your source code and your input data are identical, how could this have occurred? Since you compiled the program on two different platforms, your compilations cannot have been identical; it is entirely possible that array bounds checking is switched off on your Mac and switched on on your Windows PC.
Fortran programs may execute apparently successfully despite accessing out-of-bounds array elements. If the memory address of element 25 of a 24-element array holds a meaningful value, and that address is within your program's space, the computation is likely to continue. It is also likely to be useless, but you can go for many years before finding that out.
I suggest that you go back to the Mac, recompile with array bounds checking, and run again to see what happens.
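For example (myprog.f90 is a placeholder file name; check each compiler's manual for the exact option spelling): CVF's command-line driver df accepts /check:bounds, and if the Mac build used g77 or gfortran, the analogous flag is -fbounds-check:

df /check:bounds myprog.f90
gfortran -fbounds-check -o myprog myprog.f90

With the check enabled, the run aborts at the offending access and reports the index, instead of silently reading whatever happens to be at that address.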
It's also possible that the routines which read your file find a different number of values on XP and on the Mac; I suspect that could be caused by different line-ending characters, or even by whether or not the input file has a newline at the end. Check this too.

Matlab MCR program errors corrupt terminal

I have compiled a MATLAB routine using the MCR and deployed it to other computers without having MATLAB installed on them. So far, so good. But of course, the routine is not completely error-free, particularly the GUI part. The problem is that when the MCR tries to write the error message to the terminal, it seems to corrupt the terminal so that nothing is legible any more - not even the prompt. Sometimes I also get an extra window, vaguely resembling the MATLAB editor window, full of illegible ASCII characters. Does anyone know what is causing this, or how to avoid it?
My first attempt was a big try-catch block around everything, but whatever it is still seems to get through. The catch block just tries to divert the error to an errordlg rather than the command prompt:
catch e
    % e.stack is a struct array of frames; take the first (innermost)
    % one, since e.stack.name alone splices every frame name together
    errordlg({e.message; ['in: ', e.stack(1).name]})
end
MATLAB Compiler does not support command window functions.
Peter Webb explains on Loren's blog:
Certain MATLAB functions cannot be deployed because they act on
objects that are not present in a deployed application. For example,
since deployed applications have no command window, functions that
modify the command window can't be deployed.
So, you probably need to get rid of any function that prints to the command window.
Also, you can check out the mccExcludedFiles.log file.
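One way to keep diagnostics while avoiding the command window is to gate output on isdeployed (a real MATLAB Compiler predicate). The logmsg helper below is a made-up illustration, not part of any API:

function logmsg(msg)
% In a deployed application there is no command window, so append
% messages to a log file; in a normal session fall back to disp.
if isdeployed
    fid = fopen(fullfile(tempdir, 'app.log'), 'a');
    fprintf(fid, '%s\n', msg);
    fclose(fid);
else
    disp(msg);
end
end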

Need help debugging a minidump with WinDbg

I've read a lot of similar questions, but I can't seem to find an answer to exactly what my problem is.
I've got a set of minidumps from a 32-bit application that was running on 64-bit Windows 2008. The 32-bit Visual Studio on my 32-bit Vista Business machine wouldn't touch them at all, so I've been trying to open them in WinDbg.
I don't have the EXACT corresponding .pdb files (we only started saving them AFTER this particular release), but I have .pdbs built by the same machine with the same code. I also have access to the exact executable that created the minidumps.
I found a nifty little application called ChkMatch that can make .pdbs match an executable... the only difference (according to ChkMatch) was age, so I matched my newer .pdbs to the original executable.
However, when I load it in WinDbg, it still says that it is a "mismatched pdb"; then, since I had set .symopt+0x40, it tries to load them anyway. I then get the warning:
*** WARNING: Unable to verify checksum for myexe.exe
I ran !lmi myexe and saw that the checksum of the executable was indeed zero. From poking around a bit, I've found that the executable should have been built with the /release flag to have a checksum. That's all well and good, but I can't exactly go back in time and rebuild (if I did though, I'd definitely save the original .pdbs :-P ).
Is there anything I can do here? Seems a little ridiculous I can't make things match here at least enough to get a call stack.
You don't need the checksum to get a call stack - this warning can be safely ignored.
To get the stack, you need to issue the stack command (any variant of k).
If the minidumps are any good (i.e. describe an actual fault), you should first try the auto analysis !analyze -v, which will get you started.
Come back when you have exhausted your expertise :o)
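For instance, a minimal session might look like this (output omitted; the prompt is WinDbg's own):

0:000> !analyze -v
0:000> kb

!analyze -v runs the automatic fault analysis, and kb is the variant of k that also prints the first few stack-based parameters for each frame.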
If you're working with minidumps then you have to set your image path (Ctrl+I) to point to a location containing the images referenced in the dump. The trouble with minidumps is that they don't contain any code or data from the executables on the target, so you have to supply them yourself.
-scott
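For example, the command-line equivalent of the Ctrl+I dialog, combined with the symbol option mentioned in the question (both folder paths below are assumptions; substitute your own):

0:000> .sympath C:\symbols
0:000> .exepath C:\release\build
0:000> .symopt+0x40
0:000> .reload /f

.sympath and .exepath set the symbol and image search paths, 0x40 is SYMOPT_LOAD_ANYTHING (load mismatched pdbs anyway), and .reload /f forces the module list to be re-read against those paths.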