Perl system call throws strange error

I just wrote a Perl script that restarts a list of services on a Linux server. It's intended to run as a cron job. When I execute the script, though, I keep getting this error:
root@www:~/scripts# ./ws_restart.pl
* Stopping web server apache2 [ OK ]
sh: Syntax error: "(" unexpected
* Stopping MySQL database server mysqld [ OK ]
sh: Syntax error: "(" unexpected
The calls being used to do this are:
system("/etc/init.d/apache2 stop");
system("/etc/init.d/mysql stop");
I can paste the entire script code if needed, but I figured that this is the source of the problem and just need to know how to stop it.
Any ideas?
Here's the entire script:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
my $old_pids = {};
my $post_stop_ids = {};
my @services = qw/apache2 mysql solr/;
my $app_dir = '/home/grip/apps/eventfinder';
# collect existing pids then kill services
foreach my $service (@services) {
    # gather up existing pids
    $old_pids->{$service} = [ get_pids_by_process($service) ];
    # issue stop command to each service
    set_service_state($service, 'stop');
    # attempt to regather same ids
    $post_stop_ids->{$service} = [ get_pids_by_process($service) ];
    # kill any rogue ids left over
    kill_rogue_procs($post_stop_ids->{$service});
    # give each kill time to finish
    sleep(5);
}
# attempt to restart killed services
foreach my $service (@services) {
    # issue start command to each service
    set_service_state($service, 'start');
    # Let's give each service enough time to crawl outta bed.
    # I know how much I hate waking up
    sleep(5);
}
# wait for it!...wait for it! :P
# Pad an extra 5 seconds to give solr enough time to come up before we reindex
sleep(5);
# start the reindexing process of solr
system("cd $app_dir ; RAILS_ENV=production rake reindex_active");
# call it a day...phew!
exit 0;

sub kill_rogue_procs {
    my @ids = shift;
    # check if we still have any rogue processes that failed to die
    # if so, kill them now.
    if(scalar @ids) {
        foreach my $pid (@ids) {
            system("kill $pid");
        }
    }
}

sub set_service_state {
    my ($proc, $state) = @_;
    if($proc eq 'apache2') {
        system("/etc/init.d/apache2 $state");
    } elsif($proc eq 'mysql') {
        system("/etc/init.d/mysql $state");
    } elsif($proc eq 'solr') {
        system("cd $app_dir ; RAILS_ENV=production rake sunspot:solr:$state");
    }
}

sub get_pids_by_process {
    my $proc = shift;
    my @proc_ids = ();
    open(PSAE, "/bin/ps -ae | grep $proc |") || die("Couldn't run command");
    while(<PSAE>) {
        push @proc_ids, $_ =~ /(\d{1,5})/;
    }
    close PSAE;
    return @proc_ids;
}

Actually, I'd be more suspicious of what's in @ids in kill_rogue_procs. It's the result of a ps followed by a grep, so it might have bogus values if ps doesn't return any results or if the pid isn't 5 digits long.
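For illustration, here is one way the pid gathering could be tightened up so only genuine process IDs come back; this is a minimal sketch, and the bracket trick on the first character (also suggested below) plus the anchored pid capture are assumptions rather than part of the original script:

sub get_pids_by_process {
    my $proc = shift;
    # turn "apache2" into "[a]pache2" so the grep doesn't match itself
    (my $pattern = $proc) =~ s/^(.)/[$1]/;
    my @proc_ids;
    open(my $psae, '-|', "/bin/ps -ae | grep $pattern")
        or die "Couldn't run command: $!";
    while (<$psae>) {
        # the pid is the first numeric field on each ps line
        push @proc_ids, $1 if /^\s*(\d+)\s/;
    }
    close $psae;
    return @proc_ids;
}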

This is wrong:
sub kill_rogue_procs {
    my @ids = shift;
    # check if we still have any rogue processes that failed to die
    # if so, kill them now.
    if(scalar @ids) {
From what you're passing to this sub, @ids will always contain a single array reference, so (scalar @ids) will always be true. It also means you end up passing something like the following to sh:
kill ARRAY(0x91b0768)
You want something like (if the arrayref is empty, there's nothing to loop over anyway):
my $ids = shift;
...
for my $pid (@$ids) {
    kill SIGTERM => $pid;
Or instead of the loop:
kill SIGTERM => @$ids;
Also, there is no need to call system to kill a process.
To this, I'd add the last line, so you don't grep the grep process itself:
sub get_pids_by_process {
    my $proc = shift;
    $proc =~ s/^(.)/[$1]/;

As sh is raising the errors, I'm pretty sure one of the parameters to system is being expanded to something unexpected. I'd print all parameters just prior to passing them to system for a quick debug.
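A minimal sketch of that kind of debugging, with a throwaway wrapper (the sub name is just an illustrative assumption):

sub debug_system {
    my @args = @_;
    # show exactly what will be handed to the shell before running it
    warn "about to run: @args\n";
    return system(@args);
}

debug_system("/etc/init.d/apache2 stop");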

Related

Managing parallel processes

I am starting multiple bash scripts from a Perl script and I want to monitor them and log their behavior.
I know that I can tell whether a process is still running with kill 0, $pid, and I can get the exit code from $?, but when launching multiple scripts in the background I can't relate the values of $? to the processes that produced them.
How can I launch those scripts in parallel, but get the exit code from each of them? I need something like proc_get_status from PHP.
Sorry for not providing the code from the beginning.
I stripped down the code so that the important things are visible.
use warnings;
use strict;
use IPC::Open3;
use IO::Handle;
my $timeLimit = 60*60; # some time limit not to be crossed
my $startTime = time();
my @commands  = ();    # fill up with commands to be executed
my @processes = ();
foreach my $cmd (@commands) {
    my $stdout = IO::Handle->new;
    my $stderr = IO::Handle->new;
    my $pid = open3(undef, $stdout, $stderr, $cmd);
    push @processes, {"pid" => $pid, "out" => $stdout, "err" => $stderr, "cmd" => $cmd};
}
do {
    if (time() - $startTime > $timeLimit) {
        kill 2, $_->{pid} foreach (@processes);
        @processes = ();
        last;
    } else {
        for (my $i = 0; $i < @processes; $i++) {
            unless (kill 0, $processes[$i]{pid}) {
                # if it's not running, I would like to check the exit code and log it from here on.
                # also remove it from the array, that's why I used for and not foreach, so I can use splice later.
            }
        }
    }
} while (@processes > 0);
You have already hit upon the insight of storing background job data in mini-objects. Take the next step and try a full-featured parallelization package like Forks::Super. You can create background process objects that you can then query for their status and exit code. Forks::Super supports process timeouts and an open3-like interface.
use Forks::Super;
$Forks::Super::MAX_PROC = 10; # optional, block while 10 jobs already running
...
foreach my $cmd (@commands) {
    my $job = fork {
        cmd      => $cmd,        # run $cmd in background process
        child_fh => 'out,err',   # child STDOUT,STDERR available to parent
        timeout  => $timeLimit   # kill the job after $timeLimit seconds
    };
    push @processes, $job;
}
while (@processes) {
    sleep 5;
    foreach my $job (@processes) {
        if ($job->is_complete) {
            $job->wait;
            my $exit_code = $job->status;
            my $output = $job->read_stdout;
            my $error  = $job->read_stderr;
            # ... log status, output, error, $job->{cmd}, etc. ...
            $job->dispose; # close filehandles and other clean up
        }
    }
    @processes = grep { !$_->is_reaped } @processes;
}
You can use wait and waitpid to get the status of individual children. The perlipc documentation gives a few examples in the section on "Signals".
Since you're using IPC::Open3, the Synopsis also has an example of using waitpid():
my ($wtr, $rdr, $err);
use Symbol 'gensym'; $err = gensym;
my $pid = open3($wtr, $rdr, $err,
                'some cmd and args', 'optarg', ...);
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
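Applied to the question's setup, the same waitpid idea can tie each exit code back to its command. A minimal sketch, assuming the commands write little enough that the unread pipes don't fill up (a real version would also drain the out/err handles); the %jobs bookkeeping hash is an illustrative addition:

use IPC::Open3;
use Symbol 'gensym';

my %jobs;    # pid => { cmd => ..., out => ..., err => ... }
for my $cmd (@commands) {
    my ($in, $out);          # autovivified by open3
    my $err = gensym;        # STDERR needs an explicit glob
    my $pid = open3($in, $out, $err, $cmd);
    $jobs{$pid} = { cmd => $cmd, out => $out, err => $err };
}

# reap children as they finish and log each exit code
while (%jobs) {
    my $pid = waitpid(-1, 0);    # block until any child exits
    last if $pid <= 0;
    my $status = $? >> 8;
    my $job = delete $jobs{$pid};
    print "$job->{cmd} exited with $status\n";
}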
First, take a look at Perl's fork() function. This would be the typical way I do things like this. There's a good explanation with examples here.
An easy-to-use forking module is provided by Parallel::ForkManager.
There are also Perl's interpreter-based threads, which are a bit lower-level, harder to use, and spawn threads rather than forking processes.
Another possible way is with GNU Parallel. parallel is a very powerful tool to run commands in parallel. You can easily run and manage multiple commands and scripts with it, and it has a --joblog option which might be helpful for you.
All of these approaches provide ways to get the exit code of the sub-processes. In the end, the best choice depends on your current implementation which you did not provide.
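As a rough sketch of the Parallel::ForkManager route (the run_on_finish callback arguments are the ones the module documents; the logging line is only a placeholder):

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(10);   # at most 10 children at once

# called in the parent each time a child is reaped
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident) = @_;
    print "job '$ident' (pid $pid) finished with exit code $exit_code\n";
});

for my $cmd (@commands) {
    $pm->start($cmd) and next;   # parent: start() returns the child's pid
    system($cmd);                # child: run the script
    $pm->finish($? >> 8);        # child exits; the code reaches run_on_finish
}
$pm->wait_all_children;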

Perl script is not killing tcl process

I have a Perl script that is not killing my Tcl process. Here is what the process looks like:
UID 14439 10897 0 13:55:44 pts/26 0:00 /prod/bin/wish /prod/bin/q
Here is my perl script:
#!/usr/bin/env perl
#
# Simple kill script that allows users to kill specific processes
# using sudo
use BPub::Platform qw(solaris);
# Assume a taint mode type environment
$ENV{PATH} = '/opt/csw/bin:/usr/bin:/bin';
use strict;
use Getopt::Long;
if (!@ARGV) {
    print "Usage: bkill [-9] pid <pid...>\n";
    exit 1;
}
my $dashnine;
GetOptions('9' => \$dashnine);
# If the process name isn't in this hash, then you don't get to kill
# it
my %allowed_process_names = (
    'dtsession' => 1,
    'wish'      => 1,
    'compose'   => 1,
);
# True if the given process name is one that we can kill
sub allowed_to_kill {
    return $allowed_process_names{$_[0]};
}
# Takes a pid and returns the process name
sub process_name {
    my ($pid) = @_;
    my $out = `/usr/ucb/ps -c $pid | /bin/grep $pid | /bin/cut -b 25-1000`;
    if ($?) {
        print "Error running ps command: $!\n";
        exit 2;
    }
    chomp($out);
    return $out;
}
foreach my $pid (@ARGV) {
    # tainted, remember?
    if ($pid =~ /^(\d+)$/) {
        my $safe_pid = $1;
        my $name = process_name($safe_pid);
        if (allowed_to_kill($name)) {
            my @cmd = "/usr/bin/kill";
            push (@cmd, '-9') if $dashnine;
            push @cmd, $pid;
            print "@cmd\n";
            print "Killing $name ($pid)\n";
            system(@cmd);
        } else {
            print "Not allowed to kill $safe_pid: $name.\n";
        }
    } else {
        print "Invalid pid: must be a number: $pid\n";
    }
}
When I run the script using sudo bkill PID, I get an error message saying:
bpub-xserver7-prod^UID118-> sudo bkill 14439
Password:
Not allowed to kill 14439: wish8.0.
Is there something I can do better in this script? How can I fix this problem I'm having getting rid of the Tcl process?
The program name your error message is emitting is wish8.0. So the simplest fix (and most strict) is to add 'wish8.0' => 1 to %allowed_process_names.
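That is, something like this (the extra entry is the only change):

my %allowed_process_names = (
    'dtsession' => 1,
    'wish'      => 1,
    'wish8.0'   => 1,   # matches what ps reports on this box
    'compose'   => 1,
);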
Or you could make your search less strict by doing something like:
my @allowed_process_names = qw(dtsession wish compose);

sub allowed_to_kill {
    return scalar grep { index($_[0], $_) == 0 } @allowed_process_names;
}
Which will find any string that starts with any of the allowed process names. (change to >= 0 or != -1 to allow it to match anywhere)
You could also use a regular expression instead. The following ones will match any program that starts with the provided program names. If you remove the leading caret, it would instead match them anywhere in the string.
my $allowed_process_names = qr/^(?:dtsession|wish|compose)/;
sub allowed_to_kill {
    return $_[0] =~ /$allowed_process_names/;
}
or if you want to build up the regular expression programmatically:
my @allowed_process_names = qw(dtsession wish compose);
my $allowed_process_names = join('|', map quotemeta, @allowed_process_names);
$allowed_process_names = qr/^(?:$allowed_process_names)/;
# If the process name isn't in this hash, then you don't get to kill
# it
my %allowed_process_names = (
    'dtsession' => 1,
    'wish'      => 1,
    'compose'   => 1,
);
...
my $out = `/usr/ucb/ps -c $pid | /bin/grep $pid | /bin/cut -b 25-1000`;
...
Not allowed to kill 14439: wish8.0.
Your ps pipeline is identifying the process as wish8.0 instead of wish or /prod/bin/wish. You will need to add an entry for "wish8.0" to this %allowed_process_names hash in order to use this perl script to kill this process.
If you were to look at /prod/bin/wish, you might find that it's a symbolic link to a file named wish8.0.

How can Perl's `system` proceed without waiting for completion?

In Perl, system("command") will wait until the command is completed. Is there a way to make it wait only for 20 seconds? One scenario is like the following:
The command is an infinite loop and won't finish, so it will freeze and the program can't proceed. What I want is to let the program not be blocked by the command.
I know Ruby has a way to do this. Does Perl have a solution?
Thanks,
=Y
Use alarm:
eval {
    local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required
    alarm 20;
    system("<Your command>");
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n"; # propagate unexpected errors
    # timed out
}
else {
    # didn't
}
#!/usr/bin/perl -w
use strict;
use 5.010;

my $foo = 123;
my $pidChild = fork(); # All objects before this fork statement will be copied.

given ($pidChild)
{
    when (!defined($_)) {
        die "Cannot fork: $!";
    }
    when ($_ == 0) {
        # The child process goes here.
        $foo = 456;                  # This is a duplicate.
        system 'subprocess options'; # Or: exec 'subprocess options';
    }
    default {
        # The original process goes here.
        waitpid($_, 0); # Whether to wait or not is up to you.
        say $foo;       # Output: 123
    }
}
If Inter-Process Communication (IPC) is needed, the built-in function pipe can be used before the call to fork to create two handles, one for input and another for output; they'll be shared by the original process and the subprocess.
There's surely more than one way to do IPC. The built-in function open, the open2 subroutine offered by the module IPC::Open2, and open3 offered by IPC::Open3 can all run a subprocess asynchronously.
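For instance, a piped open is a very small way to start a subprocess without blocking on it; this is just a sketch, and the sleep/echo command stands in for the real program:

# start the child; the parent gets a read handle on its STDOUT right away
open(my $child_out, '-|', 'sleep 20 && echo done')
    or die "Cannot start subprocess: $!";

# ... the parent can do other work here ...

# reading (or closing) the handle is what eventually waits for the child
my @output = <$child_out>;
close $child_out;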

Looping through data provided by Net::SSH2

I am using the following lib: Net::SSH2.
I can connect to my device and get the output OK in most cases. Below is the relevant code:
sub logIntoDevice
{
    my $self = shift;
    my ($ssh2) = @_;
    if(! $ssh2->connect($self->deviceIP))
    {
        say "Failed to connect to device:",$self->deviceIP;
        $ssh2->disconnect();
        exit 0;
    }
    if(! $ssh2->auth_password($self->usr, $self->pass))
    {
        say "Authentication Fail to ",$self->deviceIP;
        $ssh2->disconnect();
        exit 0;
    }
    my $channel = $ssh2->channel();
    $channel->blocking(0);
    $channel->shell();
    return $channel;
}

sub sendCmd
{
    my $self = shift;
    my ($cmd,$channel) = @_;
    my @cmdOutput;
    print $channel "$cmd\n";
    while (<$channel>)
    {
        chomp $_;
        push(@cmdOutput, $_);
    }
    return @cmdOutput;
}
Below are the commands I send to the subs. They work fine and the output is written to the file OK.
$self->writeToFile($fileName,$self->sendCmd("show clock",$channel));
$self->writeToFile($fileName,$self->sendCmd("\n",$channel));
$self->writeToFile($fileName,$self->sendCmd("dir",$channel));
with the exception of when I send the following command:
$self->writeToFile($fileName,$self->sendCmd("sh run",$channel));
The output of the command on the device, using PuTTY, is:
sh run
Building configuration...
Current configuration : 16575 bytes
!
!
!Last configuration change at 16:37:19 CET+1 Sat Mar 15 2014
.....
but in the log file all you see is
sh run
Building configuration...
So the issue is that the blank lines after the "Building configuration..." output make the while (<$channel>) loop think it has reached the end of the output.
My issue is that I can't figure out a way to loop through the data without using a while loop.
UPDATE
OK, I've come up with this solution, but it seems very clunky. There must be a better way of doing this.
sub sendCmd
{
    my $self = shift;
    my ($cmd,$channel) = @_;
    my @cmdOutput;
    my $currPrompt;
    # get prompt. i am sure there is a better way!!! just cant figure it out
    print $channel "\n";
    while (<$channel>)
    {
        $currPrompt = $_;
    }
    print $channel "$cmd\n";
    while(42)
    {
        my $inerOutput;
        while (<$channel>)
        {
            chomp $_;
            $inerOutput = $_;
            push(@cmdOutput, $_);
        }
        if($inerOutput ne $currPrompt)
        {
            sleep(7);
        }
        else
        {
            last;
        }
    }
    return @cmdOutput;
}
I don't think your issue is blank lines. Most likely, the issue is that you are using non-blocking mode, and it takes time for the device to perform the command. So you are getting an empty line (or an undef) after "Building configuration..." is read because there is no extra output produced yet.
I would use Net::SSH2's poll method with a timeout, which will let you know when there is something to read. If "sh run" takes substantially longer than other commands you are issuing, your sendCmd method needs to be aware of this, and allow more time to pass before it decides no more output is coming its way.
Alternatively, you can (as is the custom when using, for example, Net::Telnet) wait for more output until you see the prompt, whatever the prompt is for the device in question,
and then you will know that the command has finished its execution.
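A rough sketch of that wait-for-the-prompt alternative, keeping the existing non-blocking channel; the 30-second budget and the prompt regex are illustrative guesses for the device in question:

sub sendCmd {
    my ($self, $cmd, $channel) = @_;
    my @cmdOutput;
    print $channel "$cmd\n";

    my $deadline = time() + 30;   # overall budget for this command
    while (time() < $deadline) {
        while (<$channel>) {
            chomp;
            push @cmdOutput, $_;
            # stop as soon as the device prompt comes back
            return @cmdOutput if /[>#]\s*$/;
        }
        # nothing available yet in non-blocking mode; wait a little and retry
        sleep 1;
    }
    return @cmdOutput;
}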
Net::SSH2->poll is deprecated as a result of the libssh2_poll deprecation.

Initiating Non-waiting Background Process in Perl

I have a Perl script that needs to kick off another process in the background and exit without waiting for the other script to finish. There are lots of threads on StackOverflow that cover how to wait in Perl or how to not wait for other programming languages, but I can't seem to find the right answer for Perl.
I've read up quite a bit and thought I was doing the right things but none of my attempts seem to be working correctly. Here are all the variations I've tried so far to no avail:
system(qq|perl /util/script.pl $id|);
system(qq|perl /util/script.pl $id &|);
exec(qq|perl /util/script.pl $id|);
exec(qq|perl /util/script.pl $id &|);
With each of these the parent process continues to wait for the child to finish before exiting. Please let me know what I'm doing wrong and the right way to fork the background process.
Thanks in advance for your help!
Full code to help with debugging. Note that the API->Function() calls are object-oriented modules our code base uses for specific functions such as database interactions, etc.:
sub Add {
    my $self = shift;
    my $domain = shift;
    if(!$self->IsValid($domain)) { return; }
    my $SQL = qq| insert into domains set added=NOW(),domain=| . API->Database->Quote($domain);
    my $sth = API->DatabaseQuery($SQL);
    $SQL = qq| select last_insert_id() |;
    $sth = API->DatabaseQuery($SQL);
    my $id = $sth->fetchrow_array();
    my $command = qq|perl /home/siteuser/util/new.pl $id &|;
    system($command);
    return {'id'=>$id,'domain'=>$domain};
}
The first one is designed to work that way: system executes the command and waits for it to finish.
The last two are also designed that way: exec specifically is designed to never return. It basically replaces the parent process with the child process.
However, the second one should do the trick: the command launched from the system call is a shell, which is given your string to execute. Since the string ends with "&", the shell will launch your command as a background process and finish its own execution right after that launch.
Can you please post more code illustrating how #2 didn't work?
Also, see what happens if you try backticks or qx:
my $output = qx|perl /util/script.pl $id &|;
print $output;
Also, as a way of reducing unknowns, can you please run the following and tell me what prints:
my $output = qx|(echo "AAAAAAA"; /bin/date; sleep 5; /bin/date; echo "BBBBBBB") &|;
print $output;
Are you calling fork() before calling system or exec?
my $pid = fork();
if (defined($pid) && $pid==0) {
    # background process
    my $exit_code = system( $command );
    exit $exit_code >> 8;
}

my $pid = fork();
if (defined($pid) && $pid==0) {
    # background process
    exec( $command );
    # doesn't^H^H^H^H^H^H shouldn't return
}
You need to disassociate the child from the parent.
See perldoc -q daemon. Or Proc::Daemon
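For example, a minimal Proc::Daemon sketch along those lines (exec_command reuses the question's own script path; the options shown are the module's documented ones):

use Proc::Daemon;

# fork a fully detached child that runs the background script,
# then continue immediately in the parent
my $daemon = Proc::Daemon->new(
    work_dir     => '/',
    exec_command => "perl /home/siteuser/util/new.pl $id",
);
my $child_pid = $daemon->Init;

# Init returns the detached child's pid in the parent (and 0 in the child),
# so the parent can log it and return without waiting.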
Using fork is a good way to background processes:
my $pid = fork;
die "fork failed" unless defined $pid;
if ($pid == 0) {
    # child process goes here
    do '/util/script.pl';
    exit;
}
# parent process continues here
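One extra detail if the parent keeps running after the fork: unless it reaps the child (or tells the OS it never will), the finished child lingers as a zombie. Assuming the parent really doesn't care about the child's exit status, a common way to avoid that is:

$SIG{CHLD} = 'IGNORE';   # auto-reap children; their exit status is discarded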