I'm stuck on a problem on our live server.
I have a Perl script that runs roughly 15 to 18 hours a day and creates 100+ subprocesses every day. In one place it runs a product command (the same command we run on the command line of the Solaris box), triggered with backticks inside the Perl code.
It looks like the backticked command gets skipped or fails randomly.
For example, if I need to run it for 50 customers, 2 or 3 of them fail at random.
I can't find any evidence that the command was ever triggered.
Since this is a live server, we can't make significant code changes until we are sure about the problem.
Here is the code:
my $comm = "inventory -noX customer1";   # sample command given here
my $newLogFile = "<path where the log file gets created>";   # to capture the command output here
my $piddy = `$comm 2>&1 > $newLogFile`;
Is it happening because of the backticks? I am really not sure :(.
I have also tried various kinds of analysis (memory, CPU, disk space, adding librtld_db.so to LD_LIBRARY_PATH, etc.) but had no luck. The perl is 64-bit. What else can I do? :(
I suspect you are not checking for errors (and perl doesn't make that easy to do correctly for backticks).
Consider using IPC::System::Simple's capture in place of your backticks/qx.
As its doc says, "If there's an error, it will die with a detailed description of what went wrong."
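As a rough sketch (using the sample command from the question; the log path here is a hypothetical placeholder), the backticks could be replaced with something like:

use strict;
use warnings;
use IPC::System::Simple qw(capturex);

my $newLogFile = "/tmp/inventory_customer1.log";   # hypothetical log path

# capturex() bypasses the shell and dies with a detailed message if the
# command cannot be started or exits non-zero. It captures STDOUT only;
# the child's STDERR still goes to the parent's STDERR.
my $output = eval { capturex("inventory", "-noX", "customer1") };
if ($@) {
    warn "inventory failed: $@";
}
else {
    open my $fh, '>', $newLogFile or die "Cannot open $newLogFile: $!";
    print {$fh} $output;
    close $fh;
}

That way a skipped or failed run can no longer pass silently.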
It shouldn't fail just because of the backticks; however, because they spawn a new process, that process may periodically be subject to failure due to system conditions (e.g. system load). Backticks are really a "fire and forget" method and should never be used for anything critical in a production environment. As previously suggested, there are far more detailed ways to manage spawning external processes.
If the command's output is being lost due to buffering, you might try turning off buffering, but keep an eye on it for performance degradation (it's usually not significant).
Buffering can be turned off for an entire script by adding this near the top:
$|=1;
When calling external commands, I use system from IPC::System::Simple or open3 from IPC::Open3.
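For completeness, a minimal open3 sketch, again using the question's sample command and with error handling kept deliberately simple:

use strict;
use warnings;
use IPC::Open3 qw(open3);
use Symbol qw(gensym);

my $err = gensym();    # separate handle so STDERR is not merged into STDOUT
my $pid = open3(my $in, my $out, $err, 'inventory', '-noX', 'customer1');
close $in;             # nothing to send on the child's STDIN

my @stdout = <$out>;   # fine for modest output; huge output on both handles can deadlock
my @stderr = <$err>;
waitpid($pid, 0);
my $status = $? >> 8;
warn "command exited with status $status: @stderr" if $status;

For commands that produce a lot of output on both handles at once, a higher-level module such as IPC::Run is easier to get right.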
I'm using a few system() commands in my Perl script that runs on Linux.
The commands I run with the system() function output their data to a log which I then parse to decide what to do next.
I noticed that sometimes the code that parses the log file (which comes after the system() call) doesn't seem to see the final version of the log.
For example, I search for a "test pass" phrase in the log file - and the script doesn't find it even though when I open the log it is there.
Another example - I try to delete the folder where the log was placed, but it won't let me because it's "Not empty". When I try to delete it manually afterwards, it is deleted without errors.
(These examples happen every now and then, but most of the time they don't)
It looks like some kind of "timing" problem to me. How can I solve it?
If you want to be safe and are on Linux, call system("sync"); after every command that writes to the disk and before reading from the disk. That forces the OS to flush everything still buffered to the filesystem and only returns afterwards. That way you can be sure that, once it finishes, everything you wrote to files has actually arrived there.
But be aware that this may be overkill in many situations. There are reasons for those buffers, and manually calling sync all the time is most likely not the fastest way of doing things. See also http://linux.die.net/man/8/sync
If, for example, you have something else you could do between the writing and the reading, like some calculations or whatever, that would likely be enough, and you would not waste time telling the OS that you know better how and when it has to do its job. ^^
But if perfect efficiency is not your main concern (and you should not be using Perl if it were), calling system("sync"); after everything that modifies files and before accessing those files is probably okay and safe.
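A rough sketch of that advice; the command name and log path are hypothetical placeholders:

use strict;
use warnings;

# Run the external command, then ask the kernel to flush filesystem buffers
# before reading the log it produced.
system("run_tests > /tmp/test.log 2>&1") == 0
    or warn "run_tests exited with status " . ($? >> 8);
system("sync") == 0
    or warn "sync exited with status " . ($? >> 8);

open my $log, '<', '/tmp/test.log' or die "Cannot open /tmp/test.log: $!";
my $passed = grep { /test pass/ } <$log>;
close $log;
print $passed ? "test pass found\n" : "test pass NOT found\n";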
I'm trying to find a way to start a Perl process under the debugger, but have it just start running automatically without stopping at the first statement and requiring me to enter 'c' to start it. This module is run by a larger system, and I need to be able to periodically (based on external conditions) interrupt the program with an interrupt signal, examine some data structures, and have it continue.
Obviously, I can have the supervising program start my process using "perl -d myProcess", but how do I get it to just run without the initial break? Does anybody know how to make this happen?
Many thanks.
Thanks. That was a big hint. I see several options including "NonStop".
It looks like using the line PERLDB_OPTS="NonStop" perl -d myprog.pl & does the trick. Then I just kill -INT <pid> and fg it to bring it up in the debugger. Afterwards I 'c' to continue execution and bg it so it keeps running.
You can also add this option to the .perldb config file, which the debugger reads from the current directory or your home directory. The format of the config file is a bit unusual and not so well documented, so here it is:
DB::parse_options("NonStop=1");    # run without stopping at the first statement
Other useful options:
DB::parse_options("dumpDepth=3");  # limit how deep the x command dumps nested structures
DB::parse_options("PrintRet=0");   # don't print the return value after an r command
$DB::deep = 1000;                  # raise the recursion depth at which the debugger interrupts the program
I have this Perl script for monitoring a folder in Linux.
To continuously check for any updates to the directory, I have a while loop that sleeps for 5 minutes between successive iterations:
while (1) {
    ...
    sleep 300;
}
Nobody on my other question suggested using cron for scheduling instead of the loop.
This while construct, without any break, looks ugly to me compared to submitting a cron job using crontab:
0 */5 * * * ./myscript > /dev/null 2>&1
Is cron the right choice? Are there any advantages of using the while loop construct?
Are there any better ways of doing this except the loop and cron?
Also, I'm using a 2.6.9 kernel build.
The only reasons I have ever used the while solution are that I needed my code to run more than once a minute or that it needed to respond immediately to an external event, neither of which appears to be the case here.
My thinking is usually along the lines of: cron has been tested by millions and millions of people over decades so it's at least as reliable as the code I've just strung together.
Even in situations where I've used while, I've still had a cron job to restart my script in case of failure.
My advice would be to simply use cron. That's what it's designed for. And, as an aside, I rarely redirect the output to /dev/null; that makes it too hard to debug. Usually I simply redirect to a file in the /tmp file system so that I can see what's going on.
You can append as long as you have an automated clean-up procedure and you can even write to a more private location if you're worried about anyone seeing stuff in the output.
The bottom line, though, is that a rare failure can't be analysed if you're throwing away the output. If you consider your job to be bug-free then, by all means, throw the output away but I rarely consider my scripts bug-free, just in case.
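For example, instead of throwing the output away, the crontab entry from the question could append to a log (the log path is just a placeholder):

0 */5 * * * ./myscript >> /tmp/myscript.log 2>&1

Remember to clean the log file up periodically if you go this route.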
Why don't you make the build process that puts the build into the directory do the notification? (See SO 3691739 for where that comes from!)
Having cron run the program is perfectly acceptable - and simpler than a permanent loop with a sleep, though not by much.
Against a cron solution, since the process is a simple one-shot, you can't tell what has changed since the last time it was run - there is no state. (Or, more accurately, if you provide state - via a file, probably - you are making life much more complex than running a single script that keeps its state internally.)
Also, stopping the notification service is less obvious. If there's a single process hanging around, you kill it and the notifications stop. If the notifications are run by cron, then you have to know that they're run out of a crontab, know whose crontab it is, and edit that entry in order to stop it.
You should also consider persuading your company to upgrade to a version of Linux where the inotify mechanism is available.
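If inotify is available, a watcher takes only a few lines with Linux::Inotify2. A minimal sketch, with a placeholder directory path:

use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "Unable to create inotify object: $!";

# Watch for files created in, or moved into, the directory.
$inotify->watch("/path/to/watched/dir", IN_CREATE | IN_MOVED_TO, sub {
    my $event = shift;
    print "New file: ", $event->fullname, "\n";   # react to the new build here
});

1 while $inotify->poll;   # poll() blocks until events arrive and runs the callbacks

This removes both the sleep loop and the cron entry: the process just blocks until the kernel says something changed.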
If you go for the loop instead of cron and want your job to run at regular intervals, note that sleep(300) tends to drift (consider the execution time of the rest of your script).
I suggest using a construct like this:
use constant DELAY => 300;

my $next = time();
while (1) {
    $next += DELAY;
    ...;                      # placeholder for the actual work
    sleep($next - time());    # sleep until the next scheduled tick rather than a fixed 300 seconds
}
Yet another alternative is the 'anacron' utility.
If you don't want to use cron, http://upstart.ubuntu.com/ can be used to babysit processes.
Or you can use watch, whichever is easier.
I have a Perl program based on IO::Async, and it sometimes just exits after a few hours/days without printing any error message whatsoever. There's nothing in dmesg or /var/log either. STDOUT/STDERR both have autoflush(1) set, so data shouldn't be lost in buffers. It doesn't actually return from IO::Async::Loop->loop_forever - a print I put there just to make sure of that never gets triggered.
Now, one way would be to keep peppering the program with more and more print statements and hope one of them gives me a clue. Is there a better way to find out what was going on in a program that made it exit or silently crash?
One trick I've used is to run the program under strace or ltrace (or attach to the process using strace). Naturally that was under Linux. Under other operating systems you'd use ktrace or dtrace or whatever is appropriate.
A trick I've used for programs which only exhibit sparse issues over days or weeks, and then only on a handful among hundreds of systems, is to direct the output from my tracer to a FIFO and have a custom program keep only the last 10K lines in a ring buffer, with handlers on SIGPIPE and SIGHUP that dump the current buffer contents to a file. (It's a simple program, but I don't have a copy handy and I'm not going to re-write it tonight; my copy was written for internal use and is owned by a former employer.)
The ring buffer allows the program to run indefinitely without fear of running systems out of disk space; we usually only need a few hundred, or at most a couple thousand, lines of the trace in such cases.
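A minimal sketch of that ring-buffer helper (not the original program, just the idea): it reads trace lines from STDIN, keeps only the most recent 10K, and dumps them to a file on SIGHUP or SIGPIPE.

use strict;
use warnings;

my $max  = 10_000;                     # number of trace lines to retain
my $dump = "/tmp/trace-ring.$$.log";   # hypothetical dump location
my @ring;

my $dumper = sub {
    open my $fh, '>', $dump or die "Cannot write $dump: $!";
    print {$fh} @ring;
    close $fh;
};
$SIG{HUP}  = $dumper;
$SIG{PIPE} = $dumper;

while (my $line = <STDIN>) {
    push @ring, $line;
    shift @ring if @ring > $max;      # drop the oldest line once the buffer is full
}
$dumper->();                          # final dump when the tracer stops writing

It would be fed from the tracer through a pipe or the FIFO mentioned above.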
If you are capturing STDERR, you could start the program as perl -MCarp::Always foo_prog. Carp::Always forces a stack trace on all errors.
A sudden exit without any error message is possibly a SIGPIPE. Traditionally SIGPIPE is used to stop things like the cat command in the following pipeline:
cat file | head -10
It doesn't usually result in anything being printed either by libc or perl to indicate what happened.
Since in an IO::Async-based program you'd not want to silently exit on SIGPIPE, my suggestion would be to put somewhere in the main file of the program a line something like
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };
which will at least alert you to the fact. If instead you use Carp::croak without the \n, you might even be lucky enough to get the file/line number of the syswrite, etc. that caused the SIGPIPE.
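A sketch of that croak variant:

use Carp;
$SIG{PIPE} = sub { Carp::croak("Aborting on SIGPIPE") };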
SOLVED see Edit 2
Hello,
I've been writing a Perl program to handle automatic upgrading of local (proprietary) programs (for the company I work for).
Basically, it runs via cron and unfortunately has a memory leak (or something similar). The problem is that the leak only happens when I'm not looking (i.e. when it runs via cron, not from the command line).
My code does not contain any circular (or other) references, so the commonly cited tools will not help me (Devel::Cycle, Devel::Peek).
How would I go about figuring out what is using so much memory that the kernel kills it?
Basically, the code SFTPs into a server (using backticks to run sftp ...), calls OpenSSL to verify the file, then SFTPs more files if more are needed and installs them (untars them).
I have seen delays (~15 sec) before the first SFTP session, but it has never used so much memory as to be killed (in my presence).
If I can't sort this out, I'll need to re-write in a different language, and that will take precious time.
Edit: The following message is printed out by the kernel which led me to believe it was a memory leak:
[100023.123] Out of memory: kill process 9568 (update.pl) score 325406 or a child
[100023.123] Killed Process 9568 (update.pl)
I don't believe it is an issue with cron, because of the stalling (for ~15 sec, sometimes) when running it via the command line. Also, there are no environment variables used (at least by what I've written; maybe underlying things use them?).
Edit 2: I found the issue myself, with help from the comment below by mobrule (in response to this question). It turns out that the script was called from the crontab of a (non-root) user just once a day, and that (the non-root privileges) caused a special infinite-loop situation.
Sorry guys, I feel kinda stupid for not finding this before, but thanks.
mobrule, if you submit your comment as an answer, I will accept it, as it led me to finding the problem.
End Edits
Thanks,
Brian
P.S. I may be able to post small snippets of code, but not the whole thing due to company policy.
You could try using Devel::Size to profile some of your objects. e.g. in the main:: scope (the .pl file itself), do something like this:
use Devel::Size qw(total_size);

foreach my $varname (qw(varname1 varname2)) {
    # $$varname is a symbolic reference, so this only works for package
    # variables and needs "no strict 'refs'" if the script uses strict.
    print "size used for variable $varname: " . total_size($$varname) . "\n";
}
Compare the actual size used to what you think is a reasonable value for each object. Something suspicious might pop out immediately (e.g. a cache that is massively bloated beyond anything that sounds reasonable).
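If the suspect variables are lexicals (my), a variant that passes references directly avoids the symbolic-reference trick; the structure names here are hypothetical stand-ins for whatever your script accumulates:

use strict;
use warnings;
use Devel::Size qw(total_size);

my %seen_files;          # hypothetical: e.g. files already processed
my @pending_downloads;   # hypothetical: e.g. queued SFTP transfers

printf "seen_files:        %d bytes\n", total_size(\%seen_files);
printf "pending_downloads: %d bytes\n", total_size(\@pending_downloads);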
Other things to try:
Eliminate bits of functionality one at a time to see if suddenly things get a lot better; I'd start with the use of any external libraries
Is the bad behaviour localized to just one particular machine, or one particular operating system? Move the program to other systems to see how its behaviour changes.
(In a separate installation) try upgrading to the latest Perl (5.10.1), and also upgrade all your CPAN modules
How do you know that it's a memory leak? I can think of many other reasons why the OS would kill a program.
The first question I would ask is "Does this program always work correctly from the command line?". If the answer is "No" then I'd fix these issues first.
On the other hand if the answer is "Yes", I would investigate all the differences between having the program executed under cron and from the command line to find out why it is misbehaving.
If it is run by cron, shouldn't it die after each iteration? If that is the case, it's hard for me to see how a memory leak would be a big deal...
Are you sure it is the script itself, and not the child processes, that is using the memory? Perhaps it ends up creating a whole lot of ssh sessions instead of doing a bunch of stuff in one session?