Perl 5.10 - POSIX Signals ignored unless received during sleep() call? - perl

I'm running into a bit of an issue when it comes to how Perl handles POSIX signals. Namely, Perl seems to ignore the signals unless they're received during a call to sleep().
For example, the following code works fine:
#!/usr/bin/perl
$SIG{PIPE} = sub { print STDERR "WARNING: Received SIGPIPE\n"; exit(1); };
while (1) { print "Waiting on signal...\n"; sleep(10); }
When using the above SIGPIPE handler in another script which reads from an Oracle database, the subroutine never seems to get called.
#!/usr/bin/perl
use DBI;
$SIG{PIPE} = sub { print STDERR "WARNING: Received SIGPIPE\n"; exit(1); };
my $db = "redacted";
my $user = "redacted";
my $pass = "redacted";
my $table = "redacted";
my $ora = DBI->connect("dbi:Oracle:" . $db, $user, $pass);
my $sql = "SELECT * FROM " . $table;
my $query = $ora->prepare($sql);
$query->execute();
while (my @row = $query->fetchrow_array()) {
print(join('|', @row) . "\n");
}
if ( $DBI::err ) { print STDERR "ERROR: Unload terminated due to error\n"; }
I'm sending SIGPIPE to both scripts in the same manner (kill -s PIPE <pid>), but only the first script responds to it. The second script simply carries on. No message, no exit, nothing.
How can I rectify this situation?

Setting up the signal handler before the DBI calls somehow leads to it being ignored once some of the DBI methods have run. The solution was to move the signal-handler assignment to just before the processing loop, after the call to execute:
#!/usr/bin/perl
use DBI;
# SIGPIPE handler used to be here
my $db = "redacted";
my $user = "redacted";
my $pass = "redacted";
my $table = "redacted";
my $ora = DBI->connect("dbi:Oracle:" . $db, $user, $pass);
my $sql = "SELECT * FROM " . $table;
my $query = $ora->prepare($sql);
$query->execute();
$SIG{PIPE} = sub { print STDERR "WARNING: Received SIGPIPE\n"; exit(1); };
while (my @row = $query->fetchrow_array()) {
print(join('|', @row) . "\n");
}
if ( $DBI::err ) { print STDERR "ERROR: Unload terminated due to error\n"; }
I'm not exactly sure why it fixes the issue, but it does.

It is likely that the DBI driver being used to communicate with the database is written in XS. XS code that may block for a long time has to be carefully written to cooperate with signals and perl's "safe signals" delivery system. It is possible that the DB driver you are using doesn't take account of this, in which case a handler installed beforehand won't fire while the driver is blocking.
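If the driver is blocking inside XS, one commonly used workaround is to bypass Perl's deferred ("safe") signal delivery with POSIX::sigaction, which installs the handler at the C level so it fires immediately. A minimal, self-contained sketch (it sends SIGPIPE to itself rather than to a blocked DBI call):

```perl
use strict;
use warnings;
use POSIX qw(sigaction SIGPIPE);

my $got = 0;

# Install an "unsafe" handler directly with sigaction(2), bypassing
# perl's deferred-signal queue (see perlipc, "Deferred Signals").
sigaction(SIGPIPE, POSIX::SigAction->new(sub { $got = 1 }))
    or die "sigaction failed: $!";

kill 'PIPE', $$;    # deliver SIGPIPE to this process
print "handler ran: $got\n";    # handler ran: 1
```

The trade-off is that a handler installed this way can interrupt perl at unsafe points, which is exactly what safe signals exist to prevent, so keep such handlers minimal.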

Related

$^S Doesn't Catch Eval Deaths in Perl

I override dying in Perl for the FastCGI implementation of my code and include a test for $^S in the override:
$SIG{__DIE__} = sub {
return if $^S;
say STDERR 'Contents of $^S:' . $^S;
&SAFARI::Core::safariErrorLogWriter('Dying from error.');
};
sub safariErrorLogWriter {
my $message = shift;
return if $^S;
my ($file,$line,$id) = id(2);
return if ($file =~ /^\(eval/);
my $datestring = localtime();
my $ipAddress = ($ENV{'REMOTE_ADDR'} // 'Local') . ': ';
$message = $ipAddress . $datestring . ': ' . $message . '; At file: ' . $file . '; line: ' . $line . '; id: ' . $id . "\n";
state $moduleFileHomeDir = require File::HomeDir;
my $filePath = File::HomeDir->my_home . "/safari_error_log";
open(my $DATA, '>>', $filePath) || CORE::die "Couldn't open $filePath: $!";
print $DATA $message;
close($DATA);
print STDERR $message;
}
The result in each case shows $^S as empty, as would be expected since the routine returns upon $^S being true:
The result:
Local: Sat Jul 31 12:00:57 2021: Dying from error.; At file: /usr/local/lib64/perl5/CryptX.pm; line: 14; id:
However, during the normal course of loading modules return if ($file =~ /^\(eval/); is evaluated as true several times, seemingly indicating $^S is not giving the proper result. Several evals slip by both that test and $^S, for example, the one shown above when loading CryptX.pm, which is performing this eval on line 14:
eval { require Cpanel::JSON::XS }
Is there anything that would cause $^S to be inaccurate? Is there a better way to avoid getting evals in the mix here?
I tried to create a minimally reproducible example, but it doesn't seem to exhibit the same behavior, so there must be something else that is messing something up in the much larger code base. I guess that changes my question to: "What could alter the behavior such that $^S doesn't work as expected?"
This works as expected:
#!/usr/bin/perl
Core::encodedSessionArray;
package Core;
$SIG{__DIE__} = sub { say STDERR "The result: " . $^S; return if $^S; &Core::safariErrorLogWriter('Dying from error.'); };
sub safariErrorLogWriter {
my $message = shift;
return if $^S;
my ($file,$line,$id) = id(2);
state $evalRegEx = qr#^\(eval#;
return if ($file =~ /$evalRegEx/);
my $datestring = localtime();
my $ipAddress = ($ENV{'REMOTE_ADDR'} // 'Local') . ': ';
$message = $ipAddress . $datestring . ': ' . $message . '; At file: ' . $file . '; line: ' . $line . '; id: ' . $id . "\n";
state $moduleFileHomeDir = require File::HomeDir;
my $filePath = File::HomeDir->my_home . "/safari_error_log";
open(my $DATA, '>>', $filePath) || CORE::die "Couldn't open $filePath: $!";
print $DATA $message;
close($DATA);
print STDERR $message;
}
sub _makeIpKeyCryptObject {
my $ipAddress = $ENV{'REMOTE_ADDR'};
$ipAddress =~ s/\.//g;
# Make a 16 byte key out of the IP address info.
my $key = substr(sprintf("%016d", $ipAddress), 0, 16);
state $moduleCryptModeCBCEasy = require Crypt::Mode::CBC::Easy;
return Crypt::Mode::CBC::Easy->new(key => $key);
}
sub encodedSessionArray {
my $self = shift;
my $params = shift;
$params->{'sessionId'} = 0 unless $params->{'sessionId'};
$params->{'uid'} = '0' unless $params->{'uid'};
my $crypt = $self->_makeIpKeyCryptObject;
my $encrypted = $crypt->encrypt(($params->{'sessionId'}, $params->{'uid'},time()));
$encrypted =~ s/\n/\\n/g;
return $encrypted;
}
1;
For reference, the $^S variable shows
Current state of the interpreter.
$^S State
--------- -------------------------------------
undef Parsing module, eval, or main program
true (1) Executing an eval
false (0) Otherwise
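The middle row of that table can be checked with a few self-contained lines (a sketch; the __DIE__ handler only records what it sees):

```perl
use strict;
use warnings;

my $seen;
local $SIG{__DIE__} = sub { $seen = $^S };    # record $^S at die time

eval { die "boom\n" };
print "\$^S during die inside eval: $seen\n";    # 1
```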
The error message shown in the question, apparently triggered when CryptX.pm died inside of an eval statement, is printed from a sub called from the __DIE__ handler. In my tests $^S is 1 in a situation like that, but my tests aren't what is happening there.
When in your code does this happen -- when loading CryptX? What other code gets involved? Does your handler get that die or something re-thrown along the way (not from an eval)? The CryptX loads C code first. Basics:
use warnings;
use strict;
use feature 'say';
$SIG{__DIE__} = sub {
say "in __DIE__ handler, \$^S = $^S. call a sub";
handler(@_)
};
sub handler {
print "in handler(), got: @_";
say "\$^S = $^S"
}
eval { require NoMod }; # note: in this namespace, not in another package
say "done";
This prints (my @INC suppressed)
in __DIE__ handler, $^S = 1. call a sub
in handler(), got: Can't locate NoMod.pm in @INC (@INC contains:...) at... line 15.
$^S = 1
done
But if a die is thrown from an eval in another package then my handler isn't triggered.† That appears to be the case in your code -- but then how does that error handling get triggered? This is an additional big complication with what is shown.
Altogether I wouldn't conclude that $^S is wrong but rather that we don't know what is going on, much as stated in the second part of the question.
The question also says
...during the normal course of loading modules return if ($file =~ /^\(eval/); is evaluated as true several times,...
(the quoted return... statement is in a sub shown to be called out of a __DIE__ handler)
It is mentioned in comments that the sub id, whose return value is assigned to $file, comes from CGI::Carp and is much like caller. So $file may actually hold the (misnamed) name of a sub that's been called? In case of a match that would presumably be an eval -- but we don't know how closely that id mimics caller. And id(2) is presumably a frame level in the call stack -- are we still in eval execution at that frame? This all matters but is unclear. (And why 2?)
But above all note what the docs say by the end of %SIG in perlvar
Having to even think about the $^S variable in your exception handlers is simply wrong. $SIG{__DIE__} as currently implemented invites grievous and difficult to track down errors. Avoid it and use an END{} or CORE::GLOBAL::die override instead.
I'd recommend heeding that advice. Here is an article from Effective Perler on it.
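For comparison, a minimal CORE::GLOBAL::die override might look like this (a sketch: the override has to be installed in a BEGIN block so it is in place before any calls to die are compiled, and it must delegate to CORE::die so exceptions still propagate):

```perl
use strict;
use warnings;

BEGIN {
    # Log every die, then hand off to the real built-in.
    *CORE::GLOBAL::die = sub {
        warn "logged: @_";
        CORE::die(@_);
    };
}

eval { die "oops\n" };
print "caught: $@";    # caught: oops
```

Unlike $SIG{__DIE__}, this override is not invoked re-entrantly during exception unwinding, which is the class of surprises the perlvar warning is about.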
A few more notes
Pass @_ from the __DIE__ handler to the next sub so it can see the error
The & in front of SAFARI::Core::safariErrorLogWriter seems unneeded for that sub. It doesn't affect this discussion, but I don't see why you'd need it there
The ( caller(LEVEL) )[7] element says whether the frame comes from a require. Could be useful here
† Unless it's defined in a BEGIN block, along with subs it uses. But then this is a bad idea since then all code after that point gets affected, libraries included

Append resulted data to scalar variable in Parallel::ForkManager Perl

I have code which works as expected, but I'm having difficulty storing the output of each executed command from $result = $ssh->capture($command_to_execute);. The script uses the Parallel::ForkManager module to run the commands on different hosts, with different files as input parameters.
Once command execution is done, I want the output stored in $result, appending each host's result to the same variable so I can process the accumulated values at the end. I am using .= to append the resulting data to $result, but it doesn't seem to work.
Pasting my code here for reference:
.
.
.
my $result;
my $pm = Parallel::ForkManager->new(5);
DATA_LOOP:
foreach my $n (1..$num_buckets) {
my $pid = $pm->start and next DATA_LOOP;
$command_to_execute = $my_command." ".$Files{$n};
my $ssh = SSH_Connection( $list_of_ips[$n-1], 'username', 'pass' );
$result = $ssh->capture($command_to_execute);
$result .= "Result from File:$Files{$n} and Host:$list_of_ips[$n-1] is $result\n";
print "Result: INSIDE: $result";
$pm->finish;
}
$pm->wait_all_children;
print "Result: OUTSIDE: $result";
print "Done\n";
sub SSH_Connection {
my ( $host, $user, $passwd ) = @_;
my $ssh = Net::OpenSSH->new($host,
user => $user,
password => $passwd,
master_opts => [-o => "StrictHostKeyChecking=no"]
);
$ssh->error and die "Couldn't establish SSH connection: ". $ssh->error;
return $ssh;
}
print "Result: INSIDE: $result"; prints each host's result as the children run, but print "Result: OUTSIDE: $result"; comes out empty, when it should contain the combined results gathered inside the loop.
As shown in the documentation of Parallel::ForkManager, to get a result from a child, you need to supply a reference to the result as another parameter to finish.
$pm->finish(0, [$Files{$n}, $list_of_ips[$n-1], $result]);
Use run_on_finish to gather the results:
my $result;
$pm->run_on_finish( sub {
my ($pid, $exit_code, $ident, $exit_signal, $core_dump, $single_result) = @_;
$result .= "Result from File: $single_result->[0] and Host: $single_result->[1]"
. " is $single_result->[2]\n";
});
Each time you run $pm->start, you are forking a new process to run the code until $pm->finish. This forked process cannot affect the parent process in any way except by the mechanism Parallel::ForkManager provides to send data back to the parent. This mechanism is described at https://metacpan.org/pod/Parallel::ForkManager#RETRIEVING-DATASTRUCTURES-from-child-processes.
$pm->run_on_finish(sub {
my ($pid, $exit_code, $ident, $exit_signal, $core_dump, $data) = @_;
my $result = $$data;
...
});
DATA_LOOP:
foreach my $n (1..$num_buckets) {
my $pid = $pm->start and next DATA_LOOP;
...
$pm->finish(0, \$result);
}
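The reason the child's assignment never reaches the parent is plain fork semantics: each child works on a copy of every variable, and writes to the copy never propagate back. A minimal sketch of the same trap without Parallel::ForkManager:

```perl
use strict;
use warnings;

my $result = '';
my $pid = fork() // die "fork failed: $!";
if ($pid == 0) {
    $result .= "child data";    # modifies only the child's copy
    exit 0;
}
waitpid($pid, 0);
print "parent sees: '$result'\n";    # parent sees: ''
```

Parallel::ForkManager's finish/run_on_finish mechanism exists precisely to ferry data across this process boundary (it serializes the structure passed to finish and hands it to the parent's callback).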
Forking is in fact not needed for these operations if you are willing to restructure a bit. Net::OpenSSH can provide commands that can be managed simultaneously by an event loop such as IO::Async::Loop, thus all Perl operations will occur in the same process (but not necessarily in the order they appear). Since IO::Async::Loop->run_process returns a Future, Future::Utils provides a way to manage concurrency of these commands.
use strict;
use warnings;
use Net::OpenSSH;
use IO::Async::Loop;
use Future::Utils 'fmap_concat';
my $loop = IO::Async::Loop->new;
my $future = fmap_concat {
my $n = shift;
...
my $remote_command = $ssh->make_remote_command($command_to_execute);
return $loop->run_process(command => $remote_command, capture => ['stdout'])
->transform(done => sub { "Result from File:$Files{$n} and Host:$list_of_ips[$n-1] is $_[0]\n"; });
} foreach => [1..$num_buckets], concurrent => 5;
my @results = $future->get;
There is a lot of flexibility to how the individual and overall (returned by fmap) Futures are managed, but by default any failure to execute a process will result in the whole Future failing immediately (causing get to throw an exception) and any nonzero exit will be ignored.

How to die Correctly on Soap::Lite in perl

I have written a perl script that connects using Soap::Lite and collect data from a web-service and update a database. This works well, until the password gets locked out and I get a server 500 error which is where my question comes in. How do I let the Soap::Lite query die when it does not make a successful connection, so it does not continue with the rest of the script?
.....
my $host = "hostname";
my $user = "user";
my $pass = "pass";
$soap_proxy = "https://" . $user . ":" . $pass . "@" . $host . ":8090/services/ApiService";
$uri = "http://api.config.common.com";
$client = new SOAP::Lite
uri => $uri,
proxy => $soap_proxy,
autotype => 0;
my $soap_respons = $client->getSomething();
....
I have tried the usual or die $!, but that does not die the way other calls do, and the rest of the script still runs.
According to the SOAP::Lite examples on CPAN, you could use:
if ($@) {
die $@;
}
But I do not know where to put this. I tried placing it directly below my $soap_respons assignment, but it still does not die.
SOAP::Lite queries will give a fault with a faultstring result if errors occur, something like this should work.
die $soap_respons->faultstring if ($soap_respons->fault);
print $soap_respons->result, "\n";
You could set the on_fault callback. This way you wouldn't have to check every response.
$client->on_fault(sub { die($_[1]) });

Displaying a portion of the configuration (--More)

I get an error when I try to talk to my switch.
use Net::OpenSSH;
use warnings;
use Expect;
my $password = 'admin';
my $enable = '';
my $ip = '192.16.25.39';
my $username='user';
my $ssh = Net::OpenSSH->new("$username:$password\@$ip", timeout => 200);
$ssh->error and die "unable to connect to remote host: ". $ssh->error;
my $output = $ssh->capture({stdin_data => "enable\n"."admin%\n"."show vlan"."\n"});
if ($output) {print $output . ' ';}
my $line;
print "\n";
# closes the ssh connection
$ssh->close();
I have tried this with the Expect module:
use Net::OpenSSH;
if ($output) {
print $output . ' ';
my $expect = Expect->init($output);
$expect->raw_pty(1);
#$expect->debug(2);
my $debug and $expect->log_stdout(1);
while(<$pty>) {
print "$. $_ "
}
}
which produces this error:
Can't bless non-reference value at /usr/local/share/perl5/Expect.pm line 202 (#1)
(F) Only hard references may be blessed. This is how Perl "enforces" encapsulation of objects. See perlobj.
Uncaught exception from user code:
Can't bless non-reference value at /usr/local/share/perl5/Expect.pm line 202.
at /usr/local/share/perl5/Expect.pm line 202.
Expect::exp_init("Expect", "\x{d}\x{a}witch>enable\x{d}\x{a}password:\x{d}\x{a}switch#show vlan\x{d}\x{a}\x{d}\x{a}VLA"...) called at b.pl line 19
This might be a better approach to your problem. There is a Net::Telnet::Cisco module that simplifies a lot of the interaction with the remote router. Apparently you can first set up an encrypted SSH connection with Net::OpenSSH and then use the filehandle from that connection to start a Net::Telnet::Cisco session.
So I think something like this would be more promising than trying to use Net::OpenSSH directly:
use Net::OpenSSH;
use Net::Telnet::Cisco;
my $password = 'admin';
my $enable = '';
my $ip = '192.16.25.39';
my $username='user';
my $ssh = Net::OpenSSH->new("$username:$password\@$ip", timeout => 200);
my ($pty, $pid) = $ssh->open2pty({stderr_to_stdout => 1})
or die "unable to start remote shell: " . $ssh->error;
my $cisco = Net::Telnet::Cisco->new(
-fhopen => $pty,
-telnetmode => 0,
-cmd_remove_mode => 1,
-output_record_separator => "\r");
my @vlan = $cisco->cmd("show vlan");
I am not familiar with the ins and outs of configuring Cisco devices, so you'll have to take it from here, but this looks to me like a much easier route to get what you need. If the output still pauses at --More--, sending terminal length 0 as the first command should disable the pager for the session so show vlan comes back in one piece.

Flush INET Socket response data with BLOCKING enabled

I am making a program that interfaces with Teamspeak, and I have an issue where the responses received do not match the commands sent. I run the program multiple times and get different results each time, when they should be the same, because the responses fall out of sync.
my $buf = '';
use IO::Socket;
my $sock = new IO::Socket::INET (
PeerAddr => 'localhost'
,PeerPort => '10011'
,Proto => 'tcp'
,Autoflush => 1
,Blocking => 1
,Timeout => 10
);
sub ExecuteCommand{
print $sock $_[0]."\n";
$sock->sysread($buf,1024*10);
return $buf;
};
ExecuteCommand("login ${username} ${password}");
ExecuteCommand("use sid=1");
ExecuteCommand("clientupdate client_nickname=Idle\\sTimer");
my $client_list = ExecuteCommand("clientlist");
Each command executes properly; however, the server likes to return extra lines, so a single sysread is not enough and I have to issue another. Responses are at most 512 bytes, so they aren't being cut off. If I try running sysread multiple times to flush the socket, the program hangs when there is nothing left to read.
Each response ends with "error id=0 msg=ok".
How would I be able to read all the data that comes out, even if it's multiple lines? Or just be able to flush it all out so I can move onto the next command without having to worry about old data?
So you want to read until you find a line starting with error. In addition to doing that, the following buffers anything extra read since it's part of the next response.
sub read_response {
my ($conn) = @_;
my $fh = $conn->{fh};
our $buf; local *buf = \($conn->{buf}); # alias
our $eof; local *eof = \($conn->{eof}); # alias
$buf = '' if !defined($buf);
return undef if $eof;
while (1) {
if ($buf =~ s/\A(.*?^error[^\n]*\n)//ms) {
return $1;
}
my $rv = sysread($fh, $buf, 64*1024, length($buf));
if (!$rv) {
if (defined($rv)) {
$eof = 1;
return undef;
} else {
die "Can't read response: $!\n";
}
}
}
}
my $conn = { fh => $sock };
... send command ...
my $response = read_response($conn);
...
... send command ...
my $response = read_response($conn);
...
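The terminator regex at the heart of read_response can be exercised on its own against an in-memory buffer (a sketch, assuming the same "error ..." line ends each response):

```perl
use strict;
use warnings;

my $buf = "line1\nline2\nerror id=0 msg=ok\nstart-of-next-response\n";

# Peel one complete response -- everything up to and including the
# line beginning with "error" -- off the front of the buffer.
my $response;
if ($buf =~ s/\A(.*?^error[^\n]*\n)//ms) {
    $response = $1;
}
print "response:\n$response";
print "leftover: $buf";
```

Anything after the "error" line stays in the buffer, which is what lets the next call return the next response intact even when reads straddle message boundaries.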
I changed my ExecuteCommand subroutine to include a check for "error id=[0-9]+", which always appears at the end of a Teamspeak 3 server response.
sub ExecuteCommand{
print $sock $_[0]."\n";
my $response = "";
while (1){
$sock->sysread($buf,1024*10);
$response .= $buf;   # append before testing so the final chunk isn't lost
last if($buf =~ /error id=([0-9]+)/);
};
return $response;
};