I'm trying to run a NetLogo model in BehaviorSpace, in headless mode, on a Linux server.
My NetLogo version is 5.3.1 (the 64-bit version).
The server has 32 cores with 64 GB of RAM.
I'm setting -Xmx to 3072m.
After a few runs (~300) the memory usage is so high that I get a Java heap space error.
Surprisingly, the memory usage grows steadily, as if no flush-like function were called between runs. And it gets to a point it shouldn't reach if I understand things correctly (for example, with 15 parallel threads it reaches 64,000 MB and beyond, when it should stay around 15 * 3072 = 46,080 MB).
I'm calling ca at setup, so I thought everything was supposed to be flushed out between runs. I'm not opening any file from the code (I use the standard BehaviorSpace output, in table format, not spreadsheet), and I'm not using any extension.
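For reference, the invocation looks roughly like this (model, experiment, and output file names are placeholders):
netlogo-headless.sh --model myModel.nlogo --experiment myExperiment --table results.csv --threads 15
The -Xmx3072m value is set in the Java options inside the netlogo-headless.sh script.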
I'm kind of puzzled here. Is there something in the BehaviorSpace-specific parameterization I should look into that keeps track of variables, turtles, etc. between runs? I couldn't find such a thing.
Could someone help me?
Thanks a lot !
Thomas
I have this problem:
I run some large calculations before going to sleep (or work).
When I return, sometimes RAM is already full and the program has started writing to disk, which is a problem: the computer then becomes almost unresponsive, and the "Interrupt the current operation" button doesn't stop mserver.exe from executing its task.
This is what I saw 10 minutes after I pressed the "Interrupt the current operation" button: the task was still running.
Not to mention that calculations are probably 100 or even 1000 times slower once it starts using the disk instead of RAM (so it's pointless anyway).
Another problem is that I was unable to save some variables to a file, since Maple doesn't accept any input while mserver.exe is executing a task, and after I killed the mserver.exe process I still couldn't save those variables, because Maple commands don't work once the connection to the kernel is lost.
So, my question: can I make it so that mserver.exe won't use the disk at all (I mean from Maple alone, not by disabling the page file in Windows) and just stops execution automatically when RAM is full (just like Classic Maple does when it hits its 2 GB limit)?
Also, it would be nice to be able to limit how much processor Maple uses, for example to 75% or so, so that I could work on that computer without problems.
You might experiment with a few of the options available for specifying limits on the Maple (kernel, mserver) engine.
In particular,
--init-reserve-mem=memorysize
(or, possibly, the -T option). See here for more detail:
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=maple
On Linux/OSX you could pass that in a call to the maple script that launches Maple. On MS-Windows you could add that to the command string/Property in the launcher (icon).
You might try setting it to a fraction of your total RAM, e.g. 50-75%, and see how it goes. Presumably you'll have some other processes running.
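For example, on Linux/OSX the option can simply be appended to the maple launch command (keeping the memorysize placeholder from the help page above):
maple --init-reserve-mem=memorysize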
As far as restricting the CPU use goes, that's more of an OS issue. On Linux/OSX you could use the system's nice facility. I don't know what's available on MS-Windows (built-in or third-party). You might be able to set the priority of the running mserver process from the Task Manager. Or you might look at something like the START facility:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/start
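As a rough sketch (program and script names are illustrative): on Linux/OSX, nice lowers the scheduling priority, and on Windows, START can launch at below-normal priority, with /affinity optionally restricting it to a subset of cores:
nice -n 10 maple myscript.mpl
start "" /belownormal /affinity 7 maplew.exe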
I am new at ladder/Grafcet programming for PLCs.
I have a Windows application of my own that will write to an OMRON PLC's memory (D registers). The idea is to fill blocks of memory that will trigger some outputs (ladder programming).
So imagine, for example, a memory block of 5 words (D0000 to D0004). The outputs will be triggered by the contents of these 5 words.
My idea is to have one simple ladder program to "run" blocks of memory. So each 5-word memory block will contain "instructions" to activate my outputs.
I thought: maybe I can implement something like a "program counter" concept, where the program counter points to the first 5 words and copies/moves their contents to a general location in memory that drives the contacts of the ladder program. Then, after the execution of the first 5 words, the program counter points to the next 5-word block and copies its contents again, the ladder program executes its "instructions", and this repeats for an undefined number of 5-word blocks.
I am not sure if I have made my idea clear. Is there a way to implement this using PLC ladder logic?
Or is there any other way to implement such a thing?
Keep in mind the idea is to have blocks of memory (composed of a fixed number of words), and each memory block will have in its bits the configuration necessary to trigger the required outputs (using the same ladder diagram/program).
Any help or better ideas will be much appreciated.
Thank you very much
This is to be used with an OMRON CJ2M PLC.
You're thinking too hard about this. A PLC is a state machine, not a procedural processor. Just route the bits directly to the outputs they need to control.
For example, if bit 0 of D1234 should control CIO output 1.00, then:
D1234.00                       1.00
----| |------------------------()
and if bit 12 of D1234 should control CIO 2.15:
D1234.12                       2.15
----| |------------------------()
etc.
In a previous version of MATLAB (7.6), I used to get OutOfMemoryErrors that I thought were kind of annoying. But since I upgraded to 7.11, for some reason it's not throwing the errors anymore.
This means that when I accidentally try to make a variable that's way too large, the MATLAB shell will try to create the variable and bring my machine to a halt.
I'd really like to have these errors get thrown, so that I can exit out gracefully or debug my code, but I can't find the solution anywhere.
Possibly useful details:
I'm using OS X 10.5 on a 64-bit machine, with 4 GB of RAM.
In MATLAB 7.6:
$ rand(50000);
??? Error using ==> rand
Maximum variable size allowed by the program is exceeded.
In MATLAB 7.11:
$ rand(50000);
(hang)
Between version 7.6 and 7.11, the Macintosh version of MATLAB switched from a 32-bit application to a 64-bit application. So now, instead of running out of address space, MATLAB thrashes.
MATLAB doesn't hang. It's just paging, which takes forever. Try assigning a large array, open the Activity Monitor, and watch the 'Virtual Memory' figure grow and grow.
If you reduce the page file size on your system, you can avoid that issue.
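Failing that, a crude guard inside MATLAB itself at least fails fast instead of thrashing; a sketch, where the 3 GB cap is just an illustrative number:
n = 50000;
bytesNeeded = n^2 * 8;          % a full double matrix, 8 bytes per element
if bytesNeeded > 3 * 2^30       % illustrative self-imposed cap
    error('Refusing to allocate %d bytes', bytesNeeded);
end
A = rand(n);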
Is there a way to save a compiled version of my perl scripts?
Or a way to do a JavaScript-style "compile" where you just remove comments, whitespace, etc.?
You're trying to optimize in the wrong place. If you are running scripts in a web/CGI environment, there is no need to take a compile hit every time the script is executed. The scripts should be running persistently, which you can do with Apache mod_perl, FastCGI, or a number of newer technologies and frameworks such as Plack and Catalyst. If you are more specific about your needs, you will discover that there are a number of options available to you.
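As a minimal illustration of the persistent style, here is a sketch of a PSGI app (saved as app.psgi and served with something like plackup; the handler body is just a placeholder). The file is compiled once, and only the coderef runs per request:
use strict;
use warnings;

my $app = sub {
    my $env = shift;
    return [ 200, [ 'Content-Type' => 'text/plain' ], [ "hello\n" ] ];
};

$app;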
Do you realize that JavaScript is minified to save bandwidth, not startup time or runtime? And that the practice of minifying JavaScript started in the days of dial-up connections?
Sure, there was a time when interpreted programs were often minified like that, but back then typical CPUs were Z80s and 8086s running at 4-8 MHz, using loads of cycles to execute a single instruction. To illustrate: my Athlon XP-M 2400 is ~10,000 times faster than my 8 MHz 8086 for CPU-bound programs.
Try the Perl compiler: to C with B::C, or to bytecode with B::Bytecode (similar to Python's .pyc).
http://search.cpan.org/dist/B-C/perlcompile.pod
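Roughly, assuming the perlcc frontend from the B::C distribution is installed (script names are placeholders):
perlcc -o myscript myscript.pl          # compile to a native executable via B::C
perlcc -B -o myscript.plc myscript.pl   # compile to bytecode via B::Bytecode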
You could use PPI to strip out comments and POD.
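A minimal sketch of that approach (file names are placeholders):
use PPI;

# load the script, drop comments and POD, and save a stripped copy
my $doc = PPI::Document->new('myscript.pl') or die "could not parse script";
$doc->prune('PPI::Token::Comment');
$doc->prune('PPI::Token::Pod');
$doc->save('myscript.stripped.pl') or die "could not save stripped script";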
Perl::Squish is the "minifier" you're looking for. Caveat: It's not going to help you at all. You're trying to optimize on the wrong end.
If you're doing this for fun, you might want to check out the Parrot VM.
If not... see my comment. ;)
SOLVED: see Edit 2
Hello,
I've been writing a Perl program to handle automatic upgrading of local (proprietary) programs (for the company I work for).
Basically, it runs via cron and unfortunately has a memory leak (or something similar). The problem is that the leak only happens when I'm not looking (i.e., when run via cron, not from the command line).
My code does not contain any circular (or other) references, so the commonly cited tools will not help me (Devel::Cycle, Devel::Peek).
How would I go about figuring out what is using so much memory that the kernel kills it?
Basically, the code SFTPs into a server (using backticks, i.e. `sftp ...`), calls OpenSSL to verify the file, then SFTPs more if more files are needed, and installs them (untars them).
I have seen delays (~15 sec) before the first SFTP session, but it has never used so much memory as to be killed (in my presence).
If I can't sort this out, I'll need to re-write in a different language, and that will take precious time.
Edit: The following message is printed out by the kernel which led me to believe it was a memory leak:
[100023.123] Out of memory: kill process 9568 (update.pl) score 325406 or a child
[100023.123] Killed Process 9568 (update.pl)
I don't believe it is an issue with cron, because of the stalling (for ~15 sec, sometimes) when running it via the command line. Also, there are no environment variables used (at least not by what I've written; maybe underlying things use them?).
Edit 2: I found the issue myself, with help from the comment below by mobrule (in response to this question). It turns out that the script was called from a (non-root) user's crontab just once a day, and that lack of root privileges caused a special infinite-loop situation.
Sorry guys, I feel kinda stupid for not finding this before, but thanks.
mobrule, if you submit your comment as an answer, I will accept it, as it led to me finding the problem.
End Edits
Thanks,
Brian
P.S. I may be able to post small snippets of code, but not the whole thing due to company policy.
You could try using Devel::Size to profile some of your objects. E.g., in the main:: scope (the .pl file itself), do something like this:
use Devel::Size qw(total_size);

no strict 'refs';   # needed for the symbolic reference $$varname below
foreach my $varname (qw(varname1 varname2)) {
    # $$varname looks up the package scalar whose name is stored in $varname
    print "size used for variable $varname: " . total_size($$varname) . "\n";
}
Compare the actual size used to what you think is a reasonable value for each object. Something suspicious might pop out immediately (e.g. a cache that is massively bloated beyond anything that sounds reasonable).
Other things to try:
Eliminate bits of functionality one at a time to see if suddenly things get a lot better; I'd start with the use of any external libraries
Is the bad behaviour localized to just one particular machine, or one particular operating system? Move the program to other systems to see how its behaviour changes.
(In a separate installation) try upgrading to the latest Perl (5.10.1), and also upgrade all your CPAN modules
How do you know that it's a memory leak? I can think of many other reasons why the OS would kill a program.
The first question I would ask is "Does this program always work correctly from the command line?". If the answer is "No" then I'd fix these issues first.
On the other hand if the answer is "Yes", I would investigate all the differences between having the program executed under cron and from the command line to find out why it is misbehaving.
If it is run by cron, shouldn't it die after each iteration? If that is the case, it's hard for me to see how a memory leak would be a big deal...
Are you sure it is the script itself, and not the child processes, that is using the memory? Perhaps it ends up creating a whole lot of ssh sessions, instead of doing a bunch of work in one session?
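One quick way to check is to have the script log its own and its children's resident set sizes while it runs; a rough sketch (Linux ps options assumed, log path is a placeholder):
# log the script's own RSS, then the RSS of any direct children
system("ps -o pid,rss,args -p $$ >> /tmp/update-mem.log");
system("ps -o pid,rss,args --ppid $$ >> /tmp/update-mem.log");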