Socket timeout using Net::SMPP as ESME Receiver - perl

I have a simple script that should bind to an SMSC and listen for incoming messages. The problem I'm having is that it will time out if it doesn't receive any messages.
Here is the script:
#!/usr/bin/perl
use Net::SMPP;
use Data::Dumper;

$Net::SMPP::trace = 1;

$smpp = Net::SMPP->new_receiver('--removed--',
    port      => '--removed--',
    system_id => '--removed--',
    password  => '--removed--',
) or die;

while (1)
{
    $pdu = $smpp->read_pdu() or die;
    print "Received #$pdu->{seq} $pdu->{cmd}: " . Net::SMPP::pdu_tab->{$pdu->{cmd}}{cmd} . "\n";
    print "From: $pdu->{source_addr}\nTo: $pdu->{destination_addr}\nData: $pdu->{data}\n";
    print "Message: $pdu->{short_message}\n\n";
}
Here's the error I'm getting:
premature eof reading from socket at /usr/lib/perl5/site_perl/5.8.8/Net/SMPP.pm line 2424.
$VAR1 = undef;
And here's the relevant sub from SMPP.pm:
sub read_hard {
    my ($me, $len, $dr, $offset) = @_;
    while (length($$dr) < $len+$offset) {
        my $n = length($$dr) - $offset;
        eval {
            local $SIG{ALRM} = sub { die "alarm\n" };   # NB: \n required
            alarm ${*$me}{enquire_interval} if ${*$me}{enquire_interval};
            warn "read $n/$len enqint(${*$me}{enquire_interval})" if $trace>1;
            while (1) {
                $n = $me->sysread($$dr, $len-$n, $n+$offset);
                next if $! =~ /^Interrupted/;
                last;
            }
            alarm 0;
        };
        if ($@) {
            warn "ENQUIRE $@" if $trace;
            die unless $@ eq "alarm\n";   # propagate unexpected errors
            $me->enquire_link();          # Send a periodic ping
        } else {
            if (!defined($n)) {
                warn "error reading header from socket: $!";
                ${*$me}{smpperror} = "read_hard I/O error: $!";
                ${*$me}{smpperrorcode} = 1;
                return undef;
            }
            #if ($n == 0) { last; }
            if (!$n) {
                warn "premature eof reading from socket";
                ${*$me}{smpperror} = "read_hard premature eof";
                ${*$me}{smpperrorcode} = 2;
                return undef;
                #return 0;
            }
        }
    }
    #warn "read complete";
    return 1;
}
In the sub, the branch being hit is the one where $n is 0 or undef.
My guess is that the socket is timing out and disconnecting. How can I keep the listener up indefinitely?
In addition, this listener blocks while waiting for a pdu. Is there a way to listen without blocking?
I'm a Telecom Engineer who does programming on the side, and I've gone through all the material I could find but couldn't find an answer.

It looks as if the sysread() call simply returns 0. It can do that only if the connection is known to be disconnected. Since your side did not disconnect or time out, I would deduce that the remote side disconnected. If a timeout had occurred on your side, you would not have been able to see the premature eof... message.
So you are already 'keeping the listener up indefinitely', since you do not set the enquire_interval option.
Regarding 'Is there a way to listen without blocking?': the DESCRIPTION section of the module's documentation describes asynchronous mode at the end: 'Module can also be used asynchronously by specifying async=>1 to the constructor.' You then have to implement the data polling yourself.
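For example (a minimal sketch, untested against a live SMSC; it leans on the fact that Net::SMPP is derived from IO::Socket::INET, so the bound handle can be polled with IO::Select):

use Net::SMPP;
use IO::Select;

my $smpp = Net::SMPP->new_receiver('--removed--',
    port      => '--removed--',
    system_id => '--removed--',
    password  => '--removed--',
    async     => 1,
) or die;

my $sel = IO::Select->new($smpp);
while (1) {
    if ($sel->can_read(5)) {    # poll with a 5-second timeout
        my $pdu = $smpp->read_pdu() or last;
        # handle the PDU here
    }
    else {
        # nothing arrived yet - do other periodic work here
    }
}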

Have you tried setting a parameter for the enquire_link (SMPP ping) interval?
On your new_receiver, check whether the "enquire_interval" parameter exists and set it to 15 seconds, for example, as in the sketch below.
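Applied to the script from the question (a sketch, keeping the placeholders):

my $smpp = Net::SMPP->new_receiver('--removed--',
    port             => '--removed--',
    system_id        => '--removed--',
    password         => '--removed--',
    enquire_interval => 15,   # send an enquire_link ping after 15 s of silence
) or die;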
I have tried it with the new_transceiver() method, and it works.

my $smpp = Net::SMPP->new_transceiver(
    $self->host,
    port              => $self->port,
    system_id         => $self->user,
    password          => $self->password,
    smpp_version      => $self->version,
    interface_version => $self->interface_version,
    enquire_interval  => $self->timeout,
    addr_ton          => $self->addr_ton,
    addr_npi          => $self->addr_npi,
    source_addr       => $self->source_addr,
    source_addr_ton   => $self->source_addr_ton,
    source_addr_npi   => $self->source_addr_npi,
    dest_addr_ton     => $self->dest_addr_ton,
    dest_addr_npi     => $self->dest_addr_npi,
    system_type       => $self->system_type,
    facilities_mask   => $self->facilities_mask,
) or die "Could not connect to " . $self->host . ": $!";   # note: method calls don't interpolate inside strings
Net::SMPP then handles the enquire_link ping automatically.
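That matches the read_hard() source quoted in the question: when enquire_interval is set, the ALRM timer fires during an idle period and the module sends enquire_link() to keep the bind alive, which stops many SMSCs from dropping an idle connection (the remote close is what sysread later reports as the premature eof).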

I am also receiving the same premature-eof error.
As for the blocking: you can fork (or spawn a thread) so that the listener and the rest of your program run in parallel, as sketched below. There is no way around the blocking read itself.
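A minimal fork sketch (the connection details are placeholders; the child owns the SMPP bind, so its blocking read_pdu() loop never stalls the parent):

use Net::SMPP;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: run the blocking receive loop
    my $smpp = Net::SMPP->new_receiver('--removed--',
        port      => '--removed--',
        system_id => '--removed--',
        password  => '--removed--',
    ) or die "bind failed";
    while (my $pdu = $smpp->read_pdu()) {
        # handle the PDU (e.g. hand it off via a pipe, queue, or database)
    }
    exit 0;
}

# parent: free to do other work here
waitpid($pid, 0);   # reap the child when it exits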

Related

PERL Net::SFTP::Foreign autodie=>0 then 1

I'm writing a script that retrieves some files automatically once a day from an SFTP server.
The problem is that this SFTP server is not very reliable, and sometimes the client has to retry a couple of times before the session opens successfully.
I chose Net::SFTP::Foreign for various reasons (especially because it uses the system's native ssh command).
I wrote a loop that retries opening the SFTP session 3 times before giving up.
My problem:
I want to keep autodie => 1 because it automatically handles non-recoverable errors for all the methods used later in the code.
But autodie => 1 prevents me from trapping any error during session opening (Net::SFTP::Foreign->new), which makes the retry part useless.
Here is the part of the code I wrote; autodie is set to 0 so that the retry part works (but I want autodie => 1).
Is it possible to open the SFTP connection with autodie => 0 so that the retry part actually works, and then switch to autodie => 1 to get the automatic handling of non-recoverable errors?
Any help would be much appreciated :)
use Net::SFTP::Foreign;

print "Opening SFTP session...\n";
my $j = 1;
my $sftp_max_retry = 5;

while (1) {
    $sftp = do {
        local $SIG{TERM} = 'IGNORE';   # used to avoid the message "Killed by signal 15."
        Net::SFTP::Foreign->new(
            host      => "some_host_unavailable",
            port      => 22,
            user      => "some_user",
            password  => "some_pwd",
            autodie   => 0,
            timeout   => 10,
            autoflush => 1,
        );
    };
    if ($sftp->error) {
        if ($j > $sftp_max_retry) {
            print "Opening SFTP failed, maximum retry reached!\n";
            exit 2;
        }
        print "Opening SFTP session (retry $j/$sftp_max_retry)...\n";
        sleep $sftp_retry_loop;
        $j++;
    }
    else {
        print "\nConnection successful\n";
        last;
    }
}
You can wrap the connection attempt in an eval block and set autodie to 1.
This should work:
use Net::SFTP::Foreign;

print "Opening SFTP session...\n";
my $j = 1;
my $sftp_max_retry = 5;
my $sftp_retry_loop = 10;   # seconds to wait between retries (value assumed; undefined in the original)
my $sftp;

while (1) {
    eval {
        $sftp = do {
            local $SIG{TERM} = 'IGNORE';   # used to avoid the message "Killed by signal 15."
            Net::SFTP::Foreign->new(
                host      => "some_host_unavailable",
                port      => 22,
                user      => "some_user",
                password  => "some_pwd",
                autodie   => 1,
                timeout   => 10,
                autoflush => 1,
            );
        };
    };
    if ($@) {
        if ($j > $sftp_max_retry) {
            print "Opening SFTP failed, maximum retry reached!\n";
            exit 2;
        }
        print "Opening SFTP session (retry $j/$sftp_max_retry)...\n";
        sleep $sftp_retry_loop;
        $j++;
    }
    else {
        print "\nConnection successful\n";
        last;
    }
}
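The same idea can be factored into a small retry helper (a sketch; the retry() sub and its arguments are illustrative, not part of Net::SFTP::Foreign):

use Net::SFTP::Foreign;

# hypothetical helper: run a code block, retrying up to $max times if it dies
sub retry {
    my ($max, $delay, $code) = @_;
    for my $attempt (1 .. $max) {
        my $result = eval { $code->() };
        return $result unless $@;
        warn "Attempt $attempt/$max failed: $@";
        sleep $delay if $attempt < $max;
    }
    die "Giving up after $max attempts\n";
}

my $sftp = retry(5, 10, sub {
    Net::SFTP::Foreign->new(
        host     => "some_host_unavailable",
        user     => "some_user",
        password => "some_pwd",
        autodie  => 1,
        timeout  => 10,
    );
});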

Better way to handle Perl sockets to read/write to an active process

First of all, I'd thank you guys in advance for not offering a workaround as a solution (although it would be cool to know other ways to do it). I was setting up the tg-master project (Telegram for the CLI) to be used by a check_mk alert plugin. I found out that telegram runs as a stdin/stdout process, so I thought it would be cool to "glue" onto it, and I wrote the following two pieces of code with a lot of building blocks from blogs and CPAN. They already work (I need to handle broken pipes sometimes), but I was wondering whether sharing this could bring some new ideas from the experts.
As you can see, my code relies on an eval with a die for reading from the spawned process, and I know that's not the best way to do it. Any suggestions? :D
Thank you guys
Server
use strict;
use IO::Socket::INET;
use IPC::Open2;
use POSIX;

our $pid;

use sigtrap qw/handler signal_handler normal-signals/;

sub signal_handler {
    print "what a signal $!\nlets kill $pid\n";
    kill 'SIGKILL', $pid;
    #die "Caught a signal $!";
}

# auto-flush on socket
$| = 1;

# creating a listening socket
my $socket = new IO::Socket::INET(
    LocalHost => '0.0.0.0',
    LocalPort => '7777',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1,
);
die "cannot create socket $!\n" unless $socket;
print "server waiting for client connection on port 7777\n";

my ( $read_proc, $write_proc );
my ( $uid, $gid ) = ( getpwnam "nagios" )[ 2, 3 ];
POSIX::setgid($gid);   # GID must be set before UID!
POSIX::setuid($uid);
$pid = open2( $read_proc, $write_proc, '/usr/bin/telegram' );

# flush first messages
eval {
    local $SIG{ALRM} = sub { die "Timeout" };   # alarm handler
    alarm(1);
    while (<$read_proc>) { }
};

while (1) {
    my $client_socket  = $socket->accept();
    my $client_address = $client_socket->peerhost();
    my $client_port    = $client_socket->peerport();
    print "connection from $client_address:$client_port\n";

    # read until \n
    my $data = "";
    $data = $client_socket->getline();

    # write to spawned process stdin the line we got in $data
    print $write_proc $data;
    $data = "";

    eval {
        local $SIG{ALRM} = sub { die "Timeout" };   # alarm handler
        alarm(1);
        while (<$read_proc>) {
            $client_socket->send($_);
        }
    };

    # notify client that response has been sent
    shutdown( $client_socket, 1 );
}
$socket->close();
Client
echo "contact_list" | nc localhost 7777
or
echo "msg user#12345 NAGIOS ALERT ... etc" | nc localhost 7777
or
some other perl script =)
If you are going to implement a script that both reads and writes from/to different handles, consider using select (the one documented as select RBITS,WBITS,EBITS,TIMEOUT). That way you avoid alarm plus a signal handler inside eval for timeouts, and you end up with a single loop in which all the work happens.
Here is an example of a program that reads from both a process opened with open2 and a network socket, without using alarm at all:
use strict;
use warnings;
use IO::Socket;
use IPC::Open2;

use constant MAXLENGTH => 1024;

my $socket = IO::Socket::INET->new(
    Listen    => SOMAXCONN,
    LocalHost => '0.0.0.0',
    LocalPort => 7777,
    Reuse     => 1,
);

# accepting just one connection
print "waiting for connection...\n";
my $remote = $socket->accept();
print "remote client connected\n";

# simple example of the program writing something
my $pid = open2(my $localread, my $localwrite, "sh -c 'while : ; do echo boom; sleep 1 ; done'");

for ( ; ; ) {
    # clean up vectors for select
    my $rin = '';
    my $win = '';
    my $ein = '';

    # wait for a possibility to read from these two descriptors
    vec($rin, fileno($localread), 1) = 1;
    vec($rin, fileno($remote), 1) = 1;

    # now wait
    select($rin, $win, $ein, undef);

    # check which one is ready; read with sysread, not <>, as the select doc warns
    if (vec($rin, fileno($localread), 1)) {
        print "read from local process: ";
        sysread($localread, my $data, MAXLENGTH);
        print $data;
    }
    if (vec($rin, fileno($remote), 1)) {
        print "read from remote client: ";
        sysread($remote, my $data, MAXLENGTH);
        print $data;
    }
}
In real production code you will need to carefully check the errors returned by the various functions (socket creation, open2, accept, and select), as sketched below.
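For instance (a sketch of the kind of checks meant; the messages are illustrative):

my $socket = IO::Socket::INET->new(
    Listen    => SOMAXCONN,
    LocalHost => '0.0.0.0',
    LocalPort => 7777,
    Reuse     => 1,
) or die "cannot create listening socket: $!\n";

# ... and later, inside the loop:
my $nfound = select($rin, $win, $ein, undef);
die "select failed: $!\n" if $nfound == -1;

my $n = sysread($localread, my $data, MAXLENGTH);
die "read error: $!\n" unless defined $n;
warn "local process closed its end\n" if $n == 0;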

Flush INET Socket response data with BLOCKING enabled

I am making a program that interfaces with TeamSpeak, and I have an issue where the responses received do not match the commands sent. I run the program multiple times, and each time I get different results when they should be the same, because the responses are out of sync.
my $buf = '';
use IO::Socket;

my $sock = new IO::Socket::INET (
    PeerAddr  => 'localhost',
    PeerPort  => '10011',
    Proto     => 'tcp',
    Autoflush => 1,
    Blocking  => 1,
    Timeout   => 10,
);

sub ExecuteCommand {
    print $sock $_[0]."\n";
    $sock->sysread($buf, 1024*10);
    return $buf;
}

ExecuteCommand("login ${username} ${password}");
ExecuteCommand("use sid=1");
ExecuteCommand("clientupdate client_nickname=Idle\\sTimer");
my $client_list = ExecuteCommand("clientlist");
Each command is executed properly; however, the server likes to return extra lines, so a single sysread is not enough and I have to execute another. Responses are at most 512 bytes, so they aren't being cut off. If I try to run sysread multiple times in an attempt to flush the socket, the program just hangs when there is nothing left to read.
Each response ends with "error id=0 msg=ok".
How can I read all the data that comes back, even if it spans multiple lines? Or just flush it all out so I can move on to the next command without having to worry about stale data?
So you want to read until you find a line starting with error. In addition to doing that, the following also buffers anything extra that was read, since it's part of the next response.
sub read_response {
    my ($conn) = @_;
    my $fh = $conn->{fh};
    our $buf; local *buf = \($conn->{buf});   # alias
    our $eof; local *eof = \($conn->{eof});   # alias

    $buf = '' if !defined($buf);
    return undef if $eof;

    while (1) {
        if ($buf =~ s/\A(.*?^error[^\n]*\n)//ms) {
            return $1;
        }
        my $rv = sysread($fh, $buf, 64*1024, length($buf));
        if (!$rv) {
            if (defined($rv)) {
                $eof = 1;
                return undef;
            } else {
                die "Can't read response: $!\n";
            }
        }
    }
}

my $conn = { fh => $sock };

... send command ...
my $response = read_response($conn);
...

... send command ...
my $response = read_response($conn);
...
I changed my ExecuteCommand subroutine to include a check for error id=[0-9]{1,}, which is always at the end of a response from TeamSpeak 3 servers.
sub ExecuteCommand {
    print $sock $_[0]."\n";
    my $response = "";
    while (1) {
        $sock->sysread($buf, 1024*10);
        last if $buf =~ /error id=([0-9]{1,})/;   # note: the final error line is not appended to $response
        $response .= $buf;
    }
    return $response;
}
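A slightly more defensive variant (a sketch) also keeps the final error line in the returned response and guards against a failed or empty sysread:

sub ExecuteCommand {
    my ($cmd) = @_;
    print $sock "$cmd\n";
    my $response = "";
    while (1) {
        my $n = $sock->sysread(my $chunk, 1024*10);
        die "read failed: $!" unless defined $n;
        last unless $n;                        # server closed the connection
        $response .= $chunk;
        last if $response =~ /error id=\d+/;   # end-of-response marker
    }
    return $response;
}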

Perl: UDP Packet receive timeout handling with processes

Now I am designing a GUI for my sensor network with Perl/Tk.
The main problem is that sometimes my low-power lossy network suffers from packet loss. In that case my Perl program gets stuck, for example in the handshake. The handshakes run in different processes. I would like to alert the user when a packet is not received, and close the thread.
Are there any solutions for implementing a timeout that kills the process?
edit:
It works in the main loop, but in a specific process it doesn't: the program stops running and the terminal prints "Alarm clock".
The half-working result is:
#!/usr/bin/perl
use Thread;
use IO::Socket::IP;
use Net::IP;
use Time::Out qw(timeout);
use warnings;

$conn_timeout = 2;

$th1 = Thread->create(\&thr1);

while (1)
{
    sleep(2);
    print "main proc\n";
}

sub thr1 {
    print "thread started\n";
    $temp = '2001:4428:29a::280:e103:1:57a8';
    $ip = Net::IP::ip_expand_address($temp, 6);
    $tempsock = IO::Socket::IP->new(
        PeerAddr => $ip,
        PeerPort => '52525',
        Proto    => 'udp',
    ) or die "Cannot construct socket, IP address: $ip - error message: - $@";
    print "Socket opened successfully for $ip on port 52525\n";

    $SIG{ALRM} = sub { print "detaching...\n"; thr1->detach(); };   # NB: \n required
    alarm $conn_timeout;
    $tempsock->send("asdasd");
    $tempsock->recv($tempdata, 16);
    alarm 0;
}
edit 2:
alarm cannot be used in threads, so I have to measure the timeout in the connection itself.
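One way to do that without alarm (a sketch reusing $tempsock and $conn_timeout from the snippet above) is to poll the socket with IO::Select before calling recv:

use IO::Select;

my $sel = IO::Select->new($tempsock);
$tempsock->send("asdasd");
if ($sel->can_read($conn_timeout)) {
    $tempsock->recv(my $tempdata, 16);
    print "reply received: $tempdata\n";
} else {
    warn "no reply within $conn_timeout seconds\n";
    # alert the user and drop out of the thread here
}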
You can use alarm and capture the resulting exception with eval. alarm's original intent was socket connections, but it can be used for anything that takes a while to run:

eval {
    local $SIG{ALRM} = sub { die "alarm\n" };   # NB: \n required
    alarm $timeout;
    $nread = sysread SOCKET, $buffer, $size;
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";   # propagate unexpected errors
    # timed out
}
else {
    # didn't time out
}

In Perl, how can I verify that every child of Parallel::ForkManager finishes its job?

I'm working on a Perl program that uses Net::FTP and Parallel::ForkManager. Within each of the child processes I create with ForkManager, I call a number of Net::FTP methods. Unfortunately, these sporadically fail, I believe due to connectivity issues.
In the Net::FTP docs, they make it clear that you can/should handle failed method calls like so:

$ftp = Net::FTP->new("some.host.name", Debug => 0)
    or die "Cannot connect to some.host.name: $@";

This works fine for detecting errors, but it kills my child process in ForkManager, which makes it very difficult to ensure that each child runs to completion or tries again until it succeeds.
What I'm trying to do is: if a Net::FTP method fails, both warn (with a message similar to the die message above) and return 0 from within a subroutine. My thinking is that this will let me reconnect to the FTP server and try again without killing my child process. Like so (this is just a code snippet):
foreach my $page (sort (keys %pages)) {
    my $pid = $pm1->start($page) and next;
    my $ok;
    my $attempts = 1;
    while (!($ok)) {
        print "Attempts on $page: $attempts\n";
        $ok = ftp_server_process($page);
        $attempts++;
    }
    $pm1->finish;
}
With the related subroutine:
sub ftp_server_process {
    my $ftp = Net::FTP->new("some.ip", Debug => 0, Passive => 1, BlockSize => 1048576)
        or warn "Cannot connect to some.ip for page $page: $@" and return 0;
    $ftp->login("username", "password")
        or warn "Cannot login to some.ip\n", $ftp->message and return 0;
    $ftp->binary
        or warn "opening binary mode failed\n", $ftp->message and return 0;
    $ftp->cwd($ftp_input_folder)
        or warn "changing directory failed\n", $ftp->message and return 0;
    $ftp->put($pages{$page}{"ftp_upload_name"})
        or warn "putting page $page failed\n", $ftp->message and return 0;
    $ftp->quit;
    return 1;
}
Is this a reasonable way to approach the problem? Is the object->method or warn "a message" and return 0; syntax correct, or is there an issue I'm missing? It seems to be working well, but it feels shaky, and I wonder if there's a more established pattern for making sure every child process survives until the job is done.
Any suggestions are welcome. Thanks!
It's simpler to die and catch the exception. (This assumes ftp_server_process is changed to die on failure, e.g. Net::FTP->new(...) or die "Cannot connect: $@";, instead of warning and returning 0.)
for my $page (sort keys %pages) {
    $pm->start($page) and next;
    my $attempts_left = 3;
    LOOP: {
        if (!eval { ftp_server_process($page); 1 }) {
            warn $@;
            if (--$attempts_left) {
                warn "Retrying...\n";
                redo;
            } else {
                warn "Aborting.\n";
                $pm->finish(1);
            }
        }
    }
    $pm->finish(0);
}
If you so desire, you can even keep track in the parent process of which pages failed:
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $page, $signal) = @_;
    if ($exit_code || $signal) {
        print "Couldn't put page $page: ";
        if ($exit_code) {
            print "Exited with $exit_code\n";
        } else {
            print "Killed by signal $signal\n";
        }
    }
});
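One caveat worth noting: register the run_on_finish callback before the forking loop starts. Parallel::ForkManager runs the callback in the parent as children are reaped, which can happen during start() (when the process limit is hit) as well as in wait_all_children.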