Why is exit not always called in Perl? - perl

I have a Perl script for setting up a remote host. This is its interrupt handler, in case something goes wrong:
sub interrupt
{
    $SIG{'__DIE__'} = '';
    my $signal = shift;
    print STDERR "Error $SELF_NAME: Bootstrapping of host '$REMOTE_HOST' is interrupted with error '$signal', deleting remote temporary directory $REMOTE_TEMP_DIR.\n";
    remote_exec("perl -e \"use File::Path; rmtree('$REMOTE_TEMP_DIR');\"", $REMOTE_HOST, $REMOTE_EXEC_METHOD, $REMOTE_EXEC_PORT, $USERNAME, $PASSWORD, 0, 1);
    exit 1;
}
This handler is always called when there is a need, so I can see the error about the interrupted bootstrapping on STDERR. But exit 1; is not called and the script returns with exit_code = 0. BUT if I add the line print STDERR "After remote_exec and before exit"; between the last two lines of my handler, it works fine (i.e. it returns with exit_code = 1).
Note: remote_exec just calls system($COMMAND) inside as I'm testing it on a local host.
UPDATE
Adding some details about how the script is being called:
I run the script from my C++ program, which tracks its standard output and error and checks its exit status; when the exit status is not equal to 0 it prints an error. So when I add the extra line in my script between the system call and the exit, I can see the error my C++ program prints, but without that extra line the C++ program reports that the script exited successfully, which means that the exit status is 0.

You didn't actually demonstrate the problem, so I had to guess at how to demonstrate it, and failed.
$ perl -e'
sub interrupt {
    $SIG{"__DIE__"} = "";
    my $signal = shift;
    print STDERR "...\n";
    system("echo ...");
    exit 4;
}
$SIG{INT} = \&interrupt;
<>;
'
^C...
...
$ echo $?
4
(I used 4 because it's more distinctive than 1.)
What do you even mean by "not called"? You seem to indicate the program did exit as a result of the interrupt, which means it got called.
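For reference, what a caller (here the C++ program) observes as the exit status can be reproduced from Perl itself. This is a minimal sketch of my own, with a throwaway one-liner standing in for the bootstrap script:

```perl
use strict;
use warnings;

# Sketch: run a child script (a one-liner stands in for it) and unpack
# the exit status the same way a C++ caller's waitpid would see it.
system($^X, '-e', 'exit 1');      # stand-in for the bootstrap script
my $exit_code = $? >> 8;          # high byte of $? holds the exit code
print "script exited with $exit_code\n";
```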

Related

Perl : implementing socket programming ( system() never returns)

My aim: implement socket programming such that the client tries connecting to the server; if the server is not installed on the remote machine, the client (host) transfers a tar file and a Perl script to the server (target) machine. This Perl script untars the folder and runs a script (the server Perl script). Now the problem is: this server script has to run forever (serving multiple clients) until the machine restarts or something untoward happens.
So the script runs properly, but since it is continuously running, control doesn't go back to the client, which will again try to connect to the server (on some predefined socket). Basically I want to somehow run the server but bring control back to my host, which is the client in this case.
here is the code :
my $sourcedir = "$homedir/host_client/test.tar";
my $sourcedir2 = "$homedir/host_client/sabkuch.pl";
my $remote_path = "/local/home/hanmaghu";
# Main subroutines
my $ssh = Net::OpenSSH->new( $hostmachine, user => $username, password => $password );
$ssh->scp_put($sourcedir, $sourcedir2, $remote_path)
    or die "scp failed \n" . $ssh->error;
# test() is similar to system() in perl openssh package
my $rc = $ssh->test('perl sabkuch.pl');
# check if test function returned or not -> this is never executed
if ($rc == 1) {
    print "test was ok , server established \n";
}
else {
    print "return from test = $rc \n";
}
exit;
The other script which invokes our server script is :
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar');
exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
#system('perl utpsm_lts_server.pl');
# Tried with system but in both cases it doesn't return,
# this xxx_server.pl is my server script
exit;
The server script is :
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
#flush after every write
$| =1;
my $socket = new IO::Socket::INET (
    LocalHost => '0.0.0.0',
    LocalPort => '7783',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1
);
die "cannot create socket $! \n" unless $socket;
print "server waiting for client on port $socket->LocalPort \n";
while (1)
{
    # waiting for new client connection
    my $client_socket = $socket->accept();
    # get info about new connected client
    my $client_address = $client_socket->peerhost();
    my $client_port = $client_socket->peerport();
    print "connection from $client_address:$client_port \n";
    # read upto 1024 characters from connected client
    my $data = "";
    $client_socket->recv($data,1024);
    print "rceeived data = $data";
    # write response data to the connected client
    $data = "ok";
    $client_socket->send($data);
    # notify client response is sent
    shutdown($client_socket,1);
}
$socket->close();
Please help me figure out how to execute this. In terms of design this is what I want, but I'm having this issue in the implementation; can I do it some other way, with a workaround?
In short, your 'driver' sabkuch.pl starts the server using exec -- which never returns. From the exec documentation:
The "exec" function executes a system command and never returns; ...
(Emphasis from the quoted documentation.) Once an exec is used, the program running in the process is replaced by the other program, see exec wiki. If that server keeps running the exit you have there is never reached, that is unless there are errors. See Perl's exec linked above.
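A tiny demonstration of that behavior (my own sketch, not part of the question's code): nothing after a successful exec runs, so the demo forks first to keep itself alive:

```perl
use strict;
use warnings;

# Sketch: a successful exec replaces the process image, so the die below
# can only run if exec itself fails. Fork first so the demo survives.
my $pid = fork;
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    exec($^X, '-e', 'print "exec-ed program ran\n"');
    die "only reached if exec failed: $!";
}
waitpid($pid, 0);
print "parent saw exit code ", $? >> 8, "\n";
```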
So your $ssh->test() will block forever (well, until the server does exit somehow). You need a non-blocking way to start the server. Here are some options
Run the driver in the background
my $rc = $ssh->test('perl sabkuch.pl &');
This starts a separate subshell and spawns sabkuch.pl in it, then returns control and test can complete. The sabkuch.pl runs exec and thus turns into the other program (the server), to run indefinitely. See Background processes in perlipc. Also see it in perlfaq8, and the many good links there. Note that there is no need for perl ... if sabkuch.pl can be made executable.
See whether Net::OpenSSH has a method to execute commands so that it doesn't block.
One way to 'fire-and-forget' is to fork and then exec in the child, while the parent can then do what it wants (exit in this case). Then there is more to consider. Plenty of (compulsory) information is found in perlipc, while examples abound elsewhere as well (search for fork and exec). This should not be taken lightly as errors can lead to bizarre behavior and untold consequences. Here is a trivial example.
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar') == 0 or die "Error with system(...): $!";

my $pid = fork;
die "Can't fork: $!" if not defined $pid;

# Two processes running now. For child $pid is 0, for parent large integer
if ($pid == 0) {    # child, parent won't get into this block
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl');
    die "exec should've not returned: $!";
}
# Can only be parent here since child exec-ed and can't get here. Otherwise,
# put parent-only code in else { } and wait for child or handle $SIG{CHLD}

# Now parent can do what it needs to...
exit;  # in your case
Normally a concern when forking is to wait for children. If we'd rather not, this can be solved by double-forking or by handling SIGCHLD (see waitpid as well), for example. Please study perlfaq8 linked above, Signals in perlipc, docs for all calls used, and everything else you can lay your hands on. In this case the parent should by all means exit first and the child process is then re-parented by init and all is well. The exec-ed process gets the same $pid but since cd will trigger the shell (sh -c cd) the server will eventually run with a different PID.
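If waiting is not wanted at all, one option is to tell the OS to reap children automatically. A minimal sketch of my own, with a short-lived one-liner standing in for the server:

```perl
use strict;
use warnings;

# Sketch: with $SIG{CHLD} set to 'IGNORE', exited children are reaped
# automatically and the parent never needs to call waitpid.
$SIG{CHLD} = 'IGNORE';
my $pid = fork;
die "Can't fork: $!" unless defined $pid;
if ($pid == 0) {
    # Stand-in for the long-running server
    exec($^X, '-e', 'sleep 1') or die "exec failed: $!";
}
print "parent continues immediately, child pid $pid\n";
```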
With system('command &') we need not worry about waiting for a child.
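The non-blocking effect of command & is easy to see locally. In this sketch (my own, under the assumption of a POSIX shell) sleep 3 stands in for the long-running server; its output is redirected so the backgrounded job doesn't hold the terminal:

```perl
use strict;
use warnings;

# Sketch: the sub-shell backgrounds the job, so system() returns at once
# instead of blocking for the full three seconds.
my $start = time;
system('sleep 3 >/dev/null 2>&1 &');   # sh forks the job and exits
my $elapsed = time - $start;
print "system() returned after ${elapsed}s\n";
```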
This is related only to your direct question, not the rest of the shown code.
Well, I figured the best way would be to fork a child process and have the parent exit, so the child can go on running server.pl forever.
But it still is not working; please let me know where in this code I am going wrong:
#!/usr/bin/perl
use strict;
use warnings;
system('tar -xvf test.tar');
my $child_pid = fork;
if (!defined $child_pid) {
    print "couldn't fork \n";
}
else {
    print "in child , now executing \n";
    exec('cd utpsm_run_automation && perl utpsm_lts_server.pl')
        or die "can't run server.pl in sabkuch child \n";
}
The output: my script still hangs, and the print statement "in child , now executing" runs twice; I don't understand why.
I work mostly in assembly language, so all of this is new to me.
Help will be appreciated.

Unable to capture stderr to a variable while using openssh - perl

I want to capture, into a variable, the standard error displayed on the host machine after $ssh->capture.
For example, when I try:
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new($host);
my $out=$ssh->capture("cd /home/geek");
$ssh->error and
die "remote cd command failed: " . $ssh->error;
The output is:
child exited with code 1 at ./change_dir.pl line 32
I am not able to see what the error is; I get "no such file or directory" on the terminal. I want to capture that same "no such file or directory" into $out.
Example 2:
my ($stdout,$stderr)=$ssh->capture("cd /home/geek");
if($stderr)
print"Error = $stderr";
else
print "$stdout"
i see "Error=" printed but does not seee that $stderr on the screen.
i see $stdout is printed on success but print $stderr does not get printed only"Error= " gets printed.
When an error occurs it is most likely not going to be in STDOUT, and if it is in STDERR you are not catching that. You need to get to the application's exit code, in the following way. (Given the update to the question which I only see now: See the end for how to get STDERR.)
After the capture method you want to examine $? for errors (see Net-OpenSSH). Unpack that to get to the exit code returned by what was actually run by $ssh, and then look in that application's docs to see what that code means
my $exit_code = $?;
if ($exit_code) {
    my $app_exit = $exit_code >> 8;
    warn "Error, bit-shift \$? --> $app_exit";
}
The code to investigate is $app_exit.
An example. I use zip in a project and occasionally catch the error of 3072 (that is the $?). When that's unpacked as above I get 12, which is zip's actual exit. I look up its docs and it nicely lists its exit codes and 12 means Nothing to update. That's the design decision for zip, to exit with 12 if it had no files to update in the archive. Then that exit gets packaged into a two-byte number (in the upper byte), and that is returned and so it is what I get in $?.
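That packing is easy to reproduce without zip; in this sketch a child that exits with 12 stands in for it:

```perl
use strict;
use warnings;

# Sketch: a child exiting with 12 shows up as 12 << 8 == 3072 in $?,
# and shifting down by 8 bits recovers the application's exit code.
system($^X, '-e', 'exit 12');
my $raw  = $?;         # 3072
my $code = $raw >> 8;  # 12
print "raw=$raw unpacked=$code\n";
```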
Failure modes in general, from system in Perl docs
if ($? == -1) { warn "Failed to execute -- " }
elsif ($? & 127) {
    my $msg = sprintf("\tChild died with signal %d, %s coredump -- ",
                      ($? & 127), ($? & 128) ? 'with' : 'without');
    warn $msg;
}
else {
    my $msg = sprintf("\tChild exited with value %d -- ", $? >> 8);
    warn $msg;
}
The actual exit code $? >> 8 is supplied by whatever ran and so its interpretation is up to that application. You need to look through its docs and hopefully its exit codes are documented.
Note that $ssh->error seems designed for this task. From the module's docs
my $output = $ssh->capture({ timeout => 10 }, "echo hello; sleep 20; echo bye");
$ssh->error and warn "operation didn't complete successfully: ". $ssh->error;
The printed error needs further investigation. Docs don't say what it is, but I'd expect the unpacked code discussed above (the question update indicates this). Here $ssh only runs a command and it doesn't know what went wrong. It merely gets back the command's exit code, to be looked at.
Or, you can modify the command to get the STDERR on the STDOUT, see below.
The capture method is an equivalent of Perl's backticks (qx). There is a lot on SO on how to get STDERR from backticks, and Perl's very own FAQ has that nicely written up in perlfaq8. A complication here is that this isn't qx but a module's method and, more importantly, it runs on another machine. However, the "output redirection" method should still work without modifications. The command (run by $ssh) can be written so that its STDERR is redirected to its STDOUT.
$cmd_all_output = 'your_whole_command 2>&1';
$ssh->capture($cmd_all_output);
Now you will get the error that you see at the terminal ("no such file or directory") printed on STDOUT, so it will wind up in your $stdout. Note that one must use sh shell syntax, as above. There is a bit more to it, so please look it up (but this should work as it stands). Most of the time it is the same message as in the exit code description.
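The same redirection can be tried locally with plain backticks (which is what capture is an equivalent of). A sketch of my own, using a one-liner that only writes to STDERR:

```perl
use strict;
use warnings;

# Sketch: without 2>&1 the message would go to the terminal; with it,
# the shell folds STDERR into STDOUT before backticks capture anything.
my $out = `$^X -e 'print STDERR "oops"' 2>&1`;
print "captured: [$out]\n";
```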
The check that you have in your code is good, the first line of defense: One should always check $? when running external commands, and for this the command to run need not be touched.

perl "kill 0" command works via .pl script, and command line, but not via browser

maddened here: the code in my .pl script runs successfully via the command line in these ways:
perl 5_pidfiletest.pl
perl -e 'do "5_pidfiletest.pl"'
and, also the heart of the script, here, runs fine via command line:
perl -e 'my $pid=10020; my $running = $pid ? kill 0, $pid : undef;print $running;'
NOTE: it reads a pid file [I hard-code the file with the number of a running process with id 10020]. It reads the number, then checks if the process is running. If it is, it leaves the file alone. If the process with that number is not running, it creates another .pid file with a new number.
NOTE: I must run the script as the same user as the owner of the process.
Yet when I call the .pl file via $.ajax [$.ajax({ url: 'cgi-bin/perlFuncs.pl', ...], all the code in "perlFuncs.pl" runs fine, except the "kill 0" ignores the process it is testing.
code for .pl script:
my $pidfile = '/opt/myid/logs/testCron.pid';
if ( -e $pidfile ) {
    open( FH, "<$pidfile" );
    my $pid = <FH>;
    close FH;

    #for testing file-exists, hard code $pid to a running process, say "top"
    $pid = 10020; #hard code, so that it will look for process 10020["top"]
    warn "Pid file Exists with pid: $pid";

    ##### HEART OF THE CODE #####
    my $running = $pid ? kill 0, $pid : undef;
    if ($running) {
        warn "Pid file Exists with pid: $pid and process was still running";
    } else {
        warn "Pid file Exists with pid: $pid, but appears process is no longer running, so creating another pid.";
        open( my $fh, "+>$pidfile" ) or warn("[CRITICAL] Fatal - Can not create pidfile $pidfile");
        warn "Pid file created $$";
        print $fh $$;
        close $fh;
    }
} else {
    warn "Pid file DOES NOT exist, going to create";
    open( my $fh, "+>$pidfile" ) or warn("[CRITICAL] Fatal - Can not create pidfile $pidfile");
    warn "Pid file created $$";
    print $fh $$;
    close $fh;
    warn "### >>> END OF PID CHECK <<< ###";
}
THAT code in a .pl file by itself runs fine, detects the process 10020 running.
When it is called via the $.ajax URL, it does not properly "kill 0" the process. NOTE: "kill 0" is just a test; it does not kill the job. Typically, if one does not have permission to send a signal, kill will also ignore the process as if it were not running. YET the code calling the .pl file via $.ajax runs as the same owner as the process. The $.ajax code:
function getData() {                          //function getData
    $.ajax({
        url: 'cgi-bin/perlFuncs.pl',          //ajax call to cgi-bin/perlFuncs.pl
        data: ({ ajaxAction: 'getDBInfo' }),  //call handle is getDBInfo
        dataType: 'json',
        type: 'POST',
        success: function(data) {
Whoever can solve this one, name your price! My troubleshooting matrix and deadline are running out of options! [Thanks for your time!!]
Your Perl script is running as a CGI, so it was run by fork+pipe+exec from an Apache child process (or some other HTTPD), which means it will have inherited signal handlers from the parent HTTP server process which are likely blocking or handling the signal.
EDIT: Oops, I failed to notice you are using kill 0. This argument may not apply.
Even so, anytime I've run into issues like this, I look into the mechanics of executing a CGI, and look at traits of the parent process.
Secondly, it isn't clear that you've stated that your target process is running as the same user as the HTTP server (commonly "apache" or "nobody" for example). Unless you've configured setuid scripts, or are running suEXEC, or otherwise overridden it in your httpd.conf, your CGI will run as the default "apache" user, regardless of the owner of the file. Check your httpd.conf file and you'll find something like:
User apache
Group apache
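The kill 0 semantics described above are easy to check. A sketch of my own: signal 0 delivers nothing, it only reports whether a signal could be delivered, and a permission error (EPERM, the different-user CGI case) makes it return false just like a dead pid does:

```perl
use strict;
use warnings;

# Sketch: kill with signal 0 sends nothing; it returns the number of
# processes that could be signaled. Our own pid is always signalable.
my $ok = kill 0, $$;
print "can signal self: $ok\n";   # 1

# For a pid we cannot signal (dead, or owned by another user such as
# "apache"), kill 0 returns 0 and $! is set to ESRCH or EPERM.
```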

Perl : Cancel a batch process, ran using a system call, in a thread, from main

Hello pips,
I have a Tkx gui which runs a batch file using button.
The batch file is executed in a different thread because I want the GUI to remain usable. I want to implement a cancel button to cancel the execution of the batch file.
I tried sending a KILL signal, but it only terminates the thread and not the batch file. Below is the code for the run and cancel subroutines.
Oh and I am not allowed to edit the batch file.
my $t1;

sub runbutton {
    $bar->g_grid();
    $bar->start();
    $t1 = threads->create(sub {
        local $SIG{'KILL'} = sub { threads->exit };
        system("timer.bat");
    });
    $t1->set_thread_exit_only(1);
    my $start = time;
    my $end = time;
    while ($t1->is_running()) {
        $end = time();
        $mytext = sprintf("%.2f\n", $end - $start);
        Tkx::update();
    }
    $bar->stop();
    $bar->g_grid_forget();
    $b4->g_grid_forget();
}

sub cancelbutton {
    $t1->kill('KILL')->detach();
}
You are running this on windows, since you say 'batch'?
I believe you'll have to 'identify' and 'kill' the process using OS-specific tools, e.g. pslist/pskill (sysinternals)
I suspect that the Perl thread is waiting for system to return before the 'KILL' signal handler gets a chance to run.
I would suggest using Win32::Process to start the batch file as a separate process and then have a variable to signal the process should be terminated. When the variable is set, the thread can kill the process and then exit.
Here is the small test case I used to check it out, using Win32::Process to create and kill a batch file as a separate process, with ActiveState Perl version 5.16.1:
use strict;

# Modules needed for items
use Win32::Process;
use Win32;

# Subroutine to format the last error message which occurred
sub ErrorReport
{
    print Win32::FormatMessage( Win32::GetLastError() );
}

print "Starting Process\n";

# Declare a scalar for the process object
my $ProcessObj;

# Create the process to run the batch file
Win32::Process::Create($ProcessObj,
                       "C:\\Users\\Glenn\\temp.bat",
                       "C:\\Users\\Glenn\\temp.bat",
                       0,
                       NORMAL_PRIORITY_CLASS,
                       ".") || ErrorReport();

print "Process Started\n";

# Sleep for a few seconds to let items start running
sleep(2);

# Kill the process with a -9
# $ProcessObj->Kill(0) does not seem to work to stop the
# process. Kill will kill the process with the process id
kill -9, $ProcessObj->GetProcessID();

# Done with efforts
print "Complete\n";
If you are using Windows 7, you will need to run as administrator to allow the process to be created, otherwise you will get an Access Denied message when trying to create the process.

How can I display the execution status of the command "ssh-copy-id" from Perl?

I have a Perl script which does this: It generates a ssh authentication key on my system and then copies this key to a remote Linux system for passwordless ssh connects. The code is as below:
# Generate an rsa key and store it in the given file
system("ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa 1>/dev/null");
# Copy the generated key to a remote system whose username
# is stored in variable $uname and IP address is stored in variable $ip
system("ssh-copy-id -i /root/.ssh/id_rsa.pub $uname\@$ip 2>&1 1>/dev/null");
The problem I have is this: the ssh-copy-id command takes quite some time to copy the key to the remote system, so when this Perl script is run it appears as if the script has hung.
Therefore, I want to display a "progress message": while the copy is in progress, display "SSH authentication key copy in progress"; if the copy failed, display "Failed to copy"; and if it succeeded, display "Copy succeeded".
How do I go about doing this?
One more question (based on Chas's answer):
I tried the code as Chas suggested:
use POSIX ":sys_wait_h";    # for WNOHANG

die "could not fork: $!" unless defined(my $pid = fork);
#child execs ssh-copy-id (the random exit is reached only if exec fails)
unless ($pid) {
    print "Connecting to server ";
    exec "ssh-copy-id -i /root/.ssh/id_rsa.pub $uname\@$ip 2>&1 1>/dev/null";
    exit int rand 255;
}
#parent waits for child to finish
$| = 1;
print "waiting: ";
my @throbber = qw/ . o O o . /;
until ($pid = waitpid(-1, WNOHANG)) {
    #get the next frame
    my $frame = shift @throbber;
    #display it
    print $frame;
    #put it at the end of the list of frames
    push @throbber, $frame;
    #wait a quarter second
    select undef, undef, undef, .25;
    #backspace over the frame
    print "\b";
}
The problem is this:
Now ssh-copy-id asks for a password while connecting to the remote server, so the "throbber" output (i.e. the circle of varying size that gets displayed) appears after the password prompt, which looks weird. This is how it looks:
CURRENT OUTPUT
Connecting to remote server o0O0o #This is the throbber. The output doesn't exactly look like this but I can't print dynamically changing output, can I
Password:o0O0oXXXXXo0O0o #You get it right, the throbber unnecessarily comes at the Password prompt too
THE OUTPUT THAT I WANT:
Connecting to remote server o0O0o #The throbber should be displayed HERE ONLY, NOWHERE ELSE
Password:XXXXX
Any ideas, anyone?
It is fairly simple to fork off another process to do work for you while the main process lets the user know things haven't stopped happening:
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ":sys_wait_h";
die "could not fork: $!" unless defined(my $pid = fork);
#child sleeps then exits with a random exit code
unless ($pid) {
    #your code replaces this code
    #in this case, it should probably just be
    #exec "ssh-copy-id -i /root/.ssh/id_rsa.pub $uname\@$ip 2>&1 1>/dev/null";
    #as that will replace the child process with ssh-copy-id
    sleep 5;
    exit int rand 255;
}
#parent waits for child to finish
$| = 1;
print "waiting: ";
my @throbber = qw/ . o O o . /;
until ($pid = waitpid(-1, WNOHANG)) {
    #get the next frame
    my $frame = shift @throbber;
    #display it
    print $frame;
    #put it at the end of the list of frames
    push @throbber, $frame;
    #wait a quarter second
    select undef, undef, undef, .25;
    #backspace over the frame
    print "\b";
}
#exit code is in bits 8 - 15 of $?, so shift them down to 0 - 7
my $exit_code = $? >> 8;
print "got exit code of $exit_code\n";
system() returns the exit status of the program as returned by the wait call; that value is 0 when the command succeeds.
Thus you can see code like this:
system( 'ls' ) and die "Unable to call ls: $?";
which is very unintuitive ;-)
I normally do the following:
my $status = system( 'ls' );
die "Unable to call ls: $?" if $status;
However if you look at the perldoc you see a nicer alternative:
system( 'ls' ) == 0
or die "Unable to call ls: $?"
I would go with this method. But it would be amiss of me not to mention the method suggested in the Perl Best Practices book:
use Perl6::Builtins qw( system );
system 'ls' or die "Unable to run ls: $?";
However note the PBP recommendation list points you away from this towards using autodie:
use autodie qw( system );
eval { system 'ls' };
die "Unable to run ls: $@" if $@;
So this is probably the canonical way going forward.
I don't think it's possible in an easy way (without resorting to forking, timers and whatnot) to add a progress bar to a single external command run via system() that doesn't generate output.
On the other hand, I think the reason your ssh-copy-id takes a long time to complete is an improper DNS setup (check the sshd logs on the server side for a clue). The ssh server probably tries to resolve the reverse mapping for the client IP and times out. Fixing that will probably speed things up quite a lot.
As for your messages: can't you just print before running the system() command, and use system's return value to print the completion message (as some of the other answers already suggested)?
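A minimal sketch of that last suggestion (my own, with the POSIX 'true' command standing in for ssh-copy-id):

```perl
use strict;
use warnings;

# Sketch: announce first, run the command, then report based on system's
# return value ('true' is a stand-in for the real ssh-copy-id command).
$| = 1;                          # flush so the progress line shows first
print "SSH authentication key copy in progress\n";
my $status = system('true');
print $status == 0 ? "Copy succeeded\n" : "Failed to copy\n";
```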