Perl processes disappear after some days - perl

I have a Perl file (test.pl) that runs in a recurring manner. The purpose of the file is to send emails from a DB. Following is the code in test.pl:
sub send_mail {
    $db->connect();
    # Some DB operations #
    # Send mail #
    $db->disconnect();
    sleep(5);
    send_mail();
}
send_mail();
I am executing 5 instances of this file, as below:
perl test.pl >> /var/www/html/emailerrorlog/error1.log 2>&1 &
perl test.pl >> /var/www/html/emailerrorlog/error2.log 2>&1 &
perl test.pl >> /var/www/html/emailerrorlog/error3.log 2>&1 &
perl test.pl >> /var/www/html/emailerrorlog/error4.log 2>&1 &
perl test.pl >> /var/www/html/emailerrorlog/error5.log 2>&1 &
If I execute the command ps -ef | grep perl | grep -v grep, I can see the 5 instances of the above-mentioned Perl file.
The script works perfectly for some days. But then the Perl processes start to disappear one by one, until after some days all of them are gone. At that point ps -ef | grep perl | grep -v grep shows no processes, and there is no error output in the log files.
So, what could make the Perl processes disappear? How can I debug it? Where can I see the Perl error log? The same issue occurs on both CentOS and Red Hat Linux.
Does anyone have an idea?

I'm not 100% sure that this is the problem, but it would probably help to avoid recursion in a permanently running process. Each recursive call to send_mail adds a frame to the call stack, so stack use slowly increases and the process is eventually killed when the stack size limit is reached.
Try something like this instead:
sub send_mail {
    $db->connect();
    # Some DB operations #
    # Send mail #
    $db->disconnect();
}

while (1) {
    send_mail();
    sleep(5);
}
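Since you also report empty log files when a process vanishes, it may help to log fatal errors explicitly before the process dies. A minimal sketch, reusing the send_mail sub from above (the timestamped STDERR line is just one way to do it):

use strict;
use warnings;

while (1) {
    # Trap fatal errors so the reason for a crash is recorded
    # in the redirected error log before the process exits.
    eval {
        send_mail();
        1;
    } or do {
        print STDERR scalar(localtime) . " send_mail failed: $@";
    };
    sleep(5);
}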

Related

Output of perl debugger to file (Windows)

I tried to list all the subroutines of a script with the Perl debugger and put the results in an external file, but it didn't work.
My code:
perl -d -S myscript.pl > results.txt
-S = list all subroutines
-d = debug perl script
S isn't supposed to be used as a command-line switch here. Running perl -d will start a debugger session, and one of the commands you can use at its prompt is S.
Example:
$ perl -d tmp/splithttpdconf.pl
Loading DB routines from perl5db.pl version 1.28
Editor support available.
Enter h or `h h' for help, or `man perldebug' for more help.
main::(tmp/splithttpdconf.pl:6): my $basedir = shift;
DB<1> S main::
main::BEGIN
main::debug
main::splitconf
DB<2>
In order to get the kind of output you want, you can use the profiler module Devel::DProf instead. It'll output profiler info into a file which can be read by the dprofpp program. Here's an example to get the list of subroutines:
perl -d:DProf perlscript.pl; dprofpp -T
If you only want the subroutines within your own script, and not those loaded from other modules, add a grep to it, e.g.:
perl -d:DProf perlscript.pl; dprofpp -T | grep main::
Though for the particular question of knowing what subroutines exist in a given program, provided you use a consistent coding style, it'd probably be easier to just grep for "sub.*{" to start with.
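For example, assuming subroutine definitions start at the beginning of a line, something as simple as this lists them along with their line numbers:

grep -n '^sub ' myscript.pl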
In your home directory, create a file called .perldb with the following contents:
parse_options("NonStop=1 LineInfo=results.txt AutoTrace=1 frame=2");
And then run the command
perl -d myscript.pl
If you want to list the entire program, subroutines included, as Perl sees it before it runs:
perl -MO=Deparse myscript.pl

Pass parameter to a Perl script executed through qsub

Hi, I would like to pass a parameter to my Perl script, which is executed through qsub.
So I run:
qsub -l nodes=node01 -v "i=500" Test.pl
In Test.pl I try to read the i parameter in several ways:
use Getopt::Long;
$result = GetOptions ("i" => \$num);
open(FILE,">/data/home/FILEout.txt");
print FILE "$num\n";
print FILE "$ARGV[0]";
close(FILE);
Unfortunately, the Perl script's output file is always empty.
Do you have any suggestions? Where am I going wrong? Help please.
According to all the documentation I can find, -v sets an environment variable, so you'd use $ENV{i} to get 500. (Check your own documentation to confirm.)
If you wanted to actually pass an arg to your script, you could try using
qsub ... Test.pl -i=500
But based on my web search, that might only work for some versions of qsub. Others would require that you make a helper script (say Test.sh)
#!/bin/sh
Test.pl "-i=$i"
along with the command
qsub ... -v 'i=500' Test.sh
If qsub ... Test.pl ...args... is supported and you can change your script, the simplest solution is
qsub ... Test.pl 500
and
my ($i) = @ARGV;
I finally found a solution that works with PBS Professional 10.4.
There are two ways to solve it.
The first one is the following:
echo "perl /path/to/Test.pl -i 500" | qsub -l nodes=node06
The second one is to use
qsub -l nodes=node06 -v i=500 Test.pl
and read the parameter in Test.pl through $ENV{i}.
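For the second approach, Test.pl could read the variable like this (a minimal sketch; the output path is taken from the question):

#!/usr/bin/perl
use strict;
use warnings;

# qsub -v i=500 exports i into the job's environment
die "environment variable 'i' not set\n" unless defined $ENV{i};
my $num = $ENV{i};

open(my $fh, '>', '/data/home/FILEout.txt') or die "open: $!";
print $fh "$num\n";
close($fh);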

How is this bash script resulting in an infinite loop?

From some Googling (I'm no bash expert by any means) I was able to put together a bash script that allows me to run a test suite and output a status bar at the bottom while it runs. It typically takes about 10 hours, and the status bar tells me how many tests passed and how many failed.
It works great sometimes; however, occasionally I run into an infinite loop, which is bad (mmm-kay?). Here's the code I'm using:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_test_suite 2>&1) | tee out.txt |
while IFS=read -r line;
do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c FAILED out.txt)${WHITE}\r" 2>&1;
done
What happens when I encounter the bug is I'll have an error message repeat infinitely, causing the log file (out.txt) to become some multi-megabyte monstrosity (I think it got into the GB's once). Here's an example error that repeats (with four lines of whitespace between each set):
warning caused by MY::Custom::Perl::Module::TEST_FUNCTION
print() on closed filehandle GEN3663 at /some/CPAN/Perl/Module.pm line 123.
I've tried taking out the 2>&1 redirect, and I've tried changing while IFS=read -r line; to while read -r line;, but I keep getting the infinite loop. What's stranger is that this happens most of the time, but there have been times when the long test suite finished without any problems.
EDIT:
The reason I'm writing this is to upgrade from a black & white test suite to a color-coded test suite (hence the ANSI codes). Previously, I would run the test suite using
run_test_suite > out.txt 2>&1 &
watch 'grep -c FAILED out.txt; grep -c passed out.txt; tail -20 out.txt'
Running it this way gets the same warning from Perl, but prints it to the file and moves on rather than getting stuck in an infinite loop. Using watch also prints escape sequences like [32m literally rather than actually rendering the text as green.
I was able to fix the Perl errors, and the bash script seems to work well now after a few modifications. However, it seems this would be a safer way to run the test suite in case something like that were to happen in the future:
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
run_full_test > out.txt 2>&1 &
tail -f out.txt | while IFS= read line; do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
There are some downsides to this. Mainly, if I hit Ctrl-C to stop the test, it appears to have stopped, but run_full_test is actually still running in the background and I need to remember to kill it manually. Also, when the test is finished, tail -f is still running. In other words, there are two processes running here and they are not in sync.
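Both loose ends can be tied up with a trap plus GNU tail's --pid option; this is a sketch under the assumption that GNU coreutils is available (--pid makes tail exit once the watched process dies):

#!/bin/bash
run_full_test > out.txt 2>&1 &
test_pid=$!
# Kill the background test if this script exits (e.g. via Ctrl-C)
trap 'kill "$test_pid" 2>/dev/null' EXIT
# tail terminates by itself once run_full_test finishes
tail -f --pid="$test_pid" out.txt | while IFS= read -r line; do
    printf "%s\n" "$line"
done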
Here is the original script, slightly modified, which addresses those problems, but isn't foolproof (i.e. can get stuck in an infinite loop if run_full_test has issues):
#!/bin/bash
WHITE="\033[0m"
GREEN="\033[32m"
RED="\033[31m"
(run_full_test 2>&1) | tee out.txt | while IFS= read line; do
    printf "%$(tput cols)s\r" " ";
    printf "%s\n" "$line";
    printf "${WHITE}Passing Tests: ${GREEN}$(grep -c passed out.txt)\t" 2>&1;
    printf "${WHITE}Failed Tests: ${RED}$( grep -c 'FAILED!!' out.txt)${WHITE}\r" 2>&1;
done
The bug is in your script. That's not an I/O error; it's an illegal-argument error, and it happens when the variable you provide as a handle isn't a handle at all, or is one that you've already closed.
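A minimal sketch that reproduces the warning from the question (the file name is just an example):

use strict;
use warnings;

open(my $fh, '>', '/tmp/demo.txt') or die "open: $!";
close($fh);
print $fh "hello\n";    # warns: print() on closed filehandle $fh at ...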
Writing to a broken pipe results in the process being killed by SIGPIPE or in print returning false with $! set to EPIPE.
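Here is a sketch of that broken-pipe behavior (head -n 1 stands in for any reader that exits early):

use strict;
use warnings;
use Errno qw(EPIPE);
use IO::Handle;

$SIG{PIPE} = 'IGNORE';    # without this, the first failed write kills us
open(my $out, '|-', 'head -n 1') or die "open: $!";
$out->autoflush(1);

# head exits after one line, so later writes hit a broken pipe
for (1 .. 1000) {
    unless (print $out "line $_\n") {
        print STDERR "print failed: $!\n" if $! == EPIPE;
        last;
    }
}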

How to check if a Perl script doesn't have any compilation errors?

I am calling many Perl scripts in my Bash script (sometimes from csh also).
At the start of the Bash script I want to put a test which checks if all the Perl scripts are devoid of any compilation errors.
One way of doing this would be to actually call each Perl script from the Bash script and grep for "compilation error" in the piped log file, but this becomes messy, as different Perl scripts are called at different points in the code. So I want to do the check at the very start of the Bash script.
Is there a way to check if the Perl script has no compilation error?
Beware!!
Using the below command to check compilation errors in your Perl program can be dangerous.
$ perl -c yourperlprogram
Randal has written a very nice article on this topic which you should check out
Sanity-checking your Perl code (Linux Magazine Column 91, Mar 2007)
Quoting from his article:
Probably the simplest thing we can tell is "is it valid?". For this, we invoke perl itself, passing the compile-only switch:
perl -c ourprogram
For this operation, perl compiles the program, but stops just short of the execution phase. This means that every part of the program text is translated into the internal data structure that represents the working program, but we haven't actually executed any code. If there are any syntax errors, we're informed, and the compilation aborts.
Actually, that's a bit of a lie. Thanks to BEGIN blocks (including their layered-on cousin, the use directive), some Perl code may have been executed during this theoretically safe "syntax check". For example, if your code contains:
BEGIN { warn "Hello, world!\n" }
then you will see that message, even during perl -c! This is somewhat surprising to people who consider "compile only" to mean "executes no code". Consider the code that contains:
BEGIN { system "rm", "-rf", "/" }
and you'll see the problem with that argument. Oops.
Apart from perl -c program.pl, it's also better to check for warnings by adding the -w switch to the compile-only check:
perl -wc program.pl
For details see: http://www.perl.com/pub/2004/08/09/commandline.html
I use the following part of a bash function for larger Perl projects:

# foreach perl app in the src/perl dir
while read -r dir ; do
    echo -e "\n"
    echo "start compiling $dir ..."
    cd $product_instance_dir/src/perl/$dir
    # run the autoloader utility
    find . -name '*.pm' -exec perl -MAutoSplit -e 'autosplit($ARGV[0], $ARGV[1], 0, 1, 1)' {} \;
    # foreach perl file check the syntax by setting the correct INC dirs
    while read -r file ; do
        perl -MCarp::Always -I `pwd` -I `pwd`/lib -wc "$file"
        ret=$?
        # run perltidy inline
        # perltidy -b "$file"
        # sleep 3
        test $ret -ne 0 && break 2
    done < <(find "." -type f \( -name "*.pl" -or -name "*.pm" \))
    test $ret -ne 0 && break
    echo "stop compiling $dir ..."
    echo -e "\n\n"
    cd $product_instance_dir
done < <(ls -1 "src/perl")
When you need to check for errors/warnings before running, but your file depends on multiple other files, you can add the -I option:
perl -I /path/to/dependency/lib -c /path/to/file/to/check
Edit: from man perlrun
Directories specified by -I are prepended to the search path for modules (@INC).
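Putting the pieces together, the up-front check the question asks for might look like this at the top of the Bash script (a sketch: the script names and lib path are placeholders):

#!/bin/bash
for script in foo.pl bar.pl baz.pl; do
    if ! perl -I /path/to/dependency/lib -wc "$script"; then
        echo "compile check failed for $script" >&2
        exit 1
    fi
done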

Perl: Reading from a 'tail -f' pipe via STDIN

There were a number of other threads like this, but the usual conclusion was something like "install File::Tail". But I'm on an old box that we're decommissioning, and I just want to write a one-liner to monitor a log. I tried installing File::Tail, but the environment for CPAN just isn't working, and I don't want to take the time to figure out what the problem is.
I just want a basic script that parses out an IP address and keeps a count of it for me. For some reason, though, even this simple test doesn't work:
$ tail -f snmplistener.log|grep IPaddress |perl -ne 'print "LINE: $_\n";'
I think it has something to do with output buffering, but I've always been a bit fuzzy on how that works. How can I get this one-liner working?
tail -f doesn't generally buffer output, but grep probably does. Move the "grep" functionality into your Perl one-liner:
tail -f snmplistener.log | perl -ne 'print "LINE: $_\n" if /IPaddress/'
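A side note, not from the original answer: if you pipe the one-liner's output onward into yet another command, Perl's stdout becomes block-buffered as well; turning on autoflush with $| = 1 avoids the same stall (capture.log is just a placeholder):

tail -f snmplistener.log | perl -ne 'BEGIN { $| = 1 } print "LINE: $_\n" if /IPaddress/' | tee capture.log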
From man grep:
--line-buffered
    Use line buffering on output. This can cause a performance penalty.
so:
tail -f /log/file.txt | grep --line-buffered SomePattern | perl ...
Or without using tail at all:
perl -e 'open(my $h, "<", $ARGV[0]) or die "open: $!"; while (1) { /IPaddress/ and print "LINE: $_" for <$h>; sleep 1 }' snmplistener.log
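Expanded into a small script that also keeps the per-IP count the question mentions; this is a sketch (the naive IPv4 regex and the once-per-second report are assumptions), using the seek trick from perlfaq5 to clear the EOF flag:

#!/usr/bin/perl
use strict;
use warnings;

my %count;
open(my $h, '<', 'snmplistener.log') or die "open: $!";
seek($h, 0, 2);                   # start at the end of the file, like tail -f
while (1) {
    while (my $line = <$h>) {
        $count{$1}++ if $line =~ /(\d{1,3}(?:\.\d{1,3}){3})/;
    }
    seek($h, 0, 1);               # reset the EOF flag so appended data is seen
    sleep 1;
    print "$_: $count{$_}\n" for sort keys %count;
}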