I'm writing a script that automatically retrieves some files from an SFTP server once a day.
The problem is that this SFTP server is not very reliable, and sometimes the client has to retry a couple of times before the session opens successfully.
I chose Net::SFTP::Foreign for different reasons (especially because it uses the system's native ssh command).
I wrote a loop that retries opening the SFTP session 3 times before giving up.
My problem:
I want to keep autodie => 1 because it automatically handles non-recoverable errors for all the methods used later in the code.
But autodie => 1 prevents me from trapping any error during the session opening (Net::SFTP::Foreign->new), so the retry part is useless.
Here is the part of the code I wrote; autodie is set to 0 so that the retry part works (but I want autodie => 1).
Is it possible to open the SFTP connection with autodie => 0 so that the retry part actually works, and then switch to autodie => 1 to get automatic handling of non-recoverable errors?
Any help would be much appreciated :)
use Net::SFTP::Foreign;

print "Opening SFTP session...\n";

my $sftp;
my $j = 1;
my $sftp_max_retry  = 5;
my $sftp_retry_loop = 10;   # retry delay in seconds (value assumed; not shown in the original snippet)

while (1) {
    $sftp = do {
        local $SIG{TERM} = 'IGNORE';   # used to avoid the message "Killed by signal 15"
        Net::SFTP::Foreign->new(
            host      => "some_host_unavailable",
            port      => 22,
            user      => "some_user",
            password  => "some_pwd",
            autodie   => 0,
            timeout   => 10,
            autoflush => 1,
        );
    };
    if ($sftp->error) {
        if ($j > $sftp_max_retry) {
            print "Opening SFTP failed, maximum retry reached!\n";
            exit 2;
        }
        print "Opening SFTP session (retry $j/$sftp_max_retry)...\n";
        sleep $sftp_retry_loop;
        $j++;
    }
    else {
        print "\nConnection successful\n";
        last;
    }
}
You can wrap your connection in an eval block and set autodie to 1.
This should work:
use Net::SFTP::Foreign;

print "Opening SFTP session...\n";

my $j = 1;
my $sftp_max_retry  = 5;
my $sftp_retry_loop = 10;   # retry delay in seconds (value assumed; not shown in the original snippet)
my $sftp;

while (1) {
    eval {
        $sftp = do {
            local $SIG{TERM} = 'IGNORE';   # used to avoid the message "Killed by signal 15"
            Net::SFTP::Foreign->new(
                host      => "some_host_unavailable",
                port      => 22,
                user      => "some_user",
                password  => "some_pwd",
                autodie   => 1,
                timeout   => 10,
                autoflush => 1,
            );
        };
    };
    if ($@) {
        if ($j > $sftp_max_retry) {
            print "Opening SFTP failed, maximum retry reached!\n";
            exit 2;
        }
        print "Opening SFTP session (retry $j/$sftp_max_retry)...\n";
        sleep $sftp_retry_loop;
        $j++;
    }
    else {
        print "\nConnection successful\n";
        last;
    }
}
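The same idea can also be packed into a small helper sub so the retry policy lives in one place. This is only a sketch of that variant, not part of the original answer; the host and credentials are the placeholders from the question, and the 10-second retry delay is an arbitrary example value.

use strict;
use warnings;
use Net::SFTP::Foreign;

# Try to open an autodie-enabled session up to $max_retry times.
# Exits with status 2 if every attempt fails.
sub open_sftp_with_retries {
    my ($max_retry, $retry_delay) = @_;
    for my $attempt (1 .. $max_retry) {
        my $sftp = eval {
            local $SIG{TERM} = 'IGNORE';   # avoid "Killed by signal 15"
            Net::SFTP::Foreign->new(
                host     => "some_host_unavailable",   # placeholder
                port     => 22,
                user     => "some_user",               # placeholder
                password => "some_pwd",                # placeholder
                autodie  => 1,
                timeout  => 10,
            );
        };
        return $sftp if defined $sftp;
        warn "Opening SFTP session failed (attempt $attempt/$max_retry): $@";
        sleep $retry_delay if $attempt < $max_retry;
    }
    print "Opening SFTP failed, maximum retry reached!\n";
    exit 2;
}

my $sftp = open_sftp_with_retries(5, 10);
print "Connection successful\n";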
Related
While calling $sftp->disconnect(), the connection is not getting closed and the Perl script hangs until I manually kill the process.
Below is the code showing how we create the SFTP connection:
my %sftp_args = ( user => $username, autodie => 1, stderr_discard => 1, more => qw(-v),
                  timeout => $timeout_secs, ssh_cmd => $SSH_PATH );
my $sftp = Net::SFTP::Foreign->new($remote_host, %sftp_args);
When we call the disconnect method, the script hangs:
$sftp->disconnect();
I tried putting the disconnect in an eval under an alarm, but it still does not come back:
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };
    alarm 25;
    my $return = $sftp->disconnect();
    alarm 0;
};
my $exception = $@;
msg("Error Dump" . Dumper($exception));
Below is the error I am getting in my nohup.out file:
bash: line 1: 27860 Alarm clock sftp_connection.pl
After doing some analysis of the Net::SFTP::Foreign module, I was able to find the solution. The module has a known issue; below are the details I found in its perldoc:
On some operating systems, closing the pipes used to communicate with the slave SSH process does not terminate it, and a workaround has to be applied. If you find that your scripts hang when the $sftp object gets out of scope, try setting $Net::SFTP::Foreign::dirty_cleanup to a true value.
Following the advice above, I made the change in my application and now it is working fine:
my %sftp_args = ( user => $username, autodie => 1, stderr_discard => 1,
                  timeout => $timeout_secs, ssh_cmd => $SSH_PATH, dirty_cleanup => 1 );
my $sftp = Net::SFTP::Foreign->new($remote_host, %sftp_args);
return $sftp;
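The perldoc quoted above also mentions the package variable form; setting it globally before opening connections should have the same effect as the dirty_cleanup => 1 constructor argument. A minimal sketch:

use Net::SFTP::Foreign;

# Global equivalent of the per-connection dirty_cleanup => 1 argument.
$Net::SFTP::Foreign::dirty_cleanup = 1;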
First of all, I'd like to thank you guys in advance for not offering a workaround as a solution (although it would be cool to know other ways to do it). I was setting up the tg-master project (Telegram for CLI) to be used by a check_mk alert plugin. I found out that telegram runs as a stdin/stdout process, so I thought it would be cool to "glue" it, and with a lot of building blocks from blogs and CPAN I wrote the two pieces of code below. They already work (I still need to handle broken pipes sometimes), but I was wondering whether sharing this could bring some new ideas from experts.
As you can see, my code relies on an eval with a die when reading from the spawned process, and I know that is not the best way to do it. Any suggestions? :D
Thank you guys
Server
use strict;
use IO::Socket::INET;
use IPC::Open2;
use POSIX;

our $pid;

use sigtrap qw/handler signal_handler normal-signals/;

sub signal_handler {
    print "what a signal $!\nlets kill $pid\n";
    kill 'SIGKILL', $pid;
    #die "Caught a signal $!";
}

# auto-flush on socket
$| = 1;

# creating a listening socket
my $socket = new IO::Socket::INET(
    LocalHost => '0.0.0.0',
    LocalPort => '7777',
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 1
);
die "cannot create socket $!\n" unless $socket;
print "server waiting for client connection on port 7777\n";

my ( $read_proc, $write_proc );
my ( $uid, $gid ) = ( getpwnam "nagios" )[ 2, 3 ];
POSIX::setgid($gid);    # GID must be set before UID!
POSIX::setuid($uid);

$pid = open2( $read_proc, $write_proc, '/usr/bin/telegram' );

# flush first messages
eval {
    local $SIG{ALRM} = sub { die "Timeout" };    # alarm handler
    alarm(1);
    while (<$read_proc>) { }
};

while (1) {
    my $client_socket  = $socket->accept();
    my $client_address = $client_socket->peerhost();
    my $client_port    = $client_socket->peerport();
    print "connection from $client_address:$client_port\n";

    # read until \n
    my $data = "";
    $data = $client_socket->getline();

    # write to spawned process stdin the line we got in $data
    print $write_proc $data;
    $data = "";

    eval {
        local $SIG{ALRM} = sub { die "Timeout" };    # alarm handler
        alarm(1);
        while (<$read_proc>) {
            $client_socket->send($_);
        }
    };

    # notify client that response has been sent
    shutdown( $client_socket, 1 );
}
$socket->close();
Client
echo "contact_list" | nc localhost 7777
or
echo "msg user#12345 NAGIOS ALERT ... etc" | nc localhost 7777
or
some other perl script =)
If you are going to implement a script that both reads from and writes to different handles, consider using select (the one defined as select RBITS,WBITS,EBITS,TIMEOUT in the documentation). That way you completely avoid using alarm with a signal handler in an eval to handle a timeout, and you only have one loop with all of the work happening inside it.
Here is an example of a program that reads from both a process opened with open2 and a network socket, not using alarm at all:
use strict;
use warnings;
use IO::Socket;
use IPC::Open2;

use constant MAXLENGTH => 1024;

my $socket = IO::Socket::INET->new(
    Listen    => SOMAXCONN,
    LocalHost => '0.0.0.0',
    LocalPort => 7777,
    Reuse     => 1,
);

# accepting just one connection
print "waiting for connection...\n";
my $remote = $socket->accept();
print "remote client connected\n";

# simple example of the program writing something
my $pid = open2(my $localread, my $localwrite, "sh -c 'while : ; do echo boom; sleep 1 ; done'");

for ( ; ; ) {
    # cleanup vectors for select
    my $rin = '';
    my $win = '';
    my $ein = '';

    # will wait for a possibility to read from these two descriptors
    vec($rin, fileno($localread), 1) = 1;
    vec($rin, fileno($remote), 1) = 1;

    # now wait
    select($rin, $win, $ein, undef);

    # check which one is ready. Read with sysread, not <>, as the select doc warns
    if (vec($rin, fileno($localread), 1)) {
        print "read from local process: ";
        sysread($localread, my $data, MAXLENGTH);
        print $data;
    }
    if (vec($rin, fileno($remote), 1)) {
        print "read from remote client: ";
        sysread($remote, my $data, MAXLENGTH);
        print $data;
    }
}
In real production code you will need to carefully check for errors returned by the various functions (socket creation, open2, accept, and select).
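As a rough illustration of what that checking could look like for select and sysread, the inner part of the for loop in the example above might become something like the fragment below. This is only a sketch; $rin, $localread, and MAXLENGTH are the names from that example.

# select returns -1 and sets $! on error; the documented idiom copies the
# bit vector so select can modify the copy.
my $nready = select(my $rout = $rin, undef, undef, undef);
die "select failed: $!" if $nready == -1;

if (vec($rout, fileno($localread), 1)) {
    # sysread returns undef on error and 0 on end-of-file.
    my $n = sysread($localread, my $data, MAXLENGTH);
    die "read from local process failed: $!" unless defined $n;
    last if $n == 0;    # the child closed its output
    print "read from local process: $data";
}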
use strict;
use warnings;
use IO::Socket;
use IO::Select;

my $read_select  = IO::Select->new();
my $write_select = IO::Select->new();

my $socket = IO::Socket::INET->new(
    LocalHost => '127.0.0.1',
    LocalPort => '5556',
    Proto     => 'tcp',
    Listen    => 50,
    Reuse     => 1,
) or die "Could not create socket: $!";

print "Socket Created . Waiting for connection ...\n";

## poll to accept new connection or to receive data from a connection
$read_select->add($socket);
print "Added socket to read list ";

my $reade;
my $newconn;
my @read;
my @write;

while (1) {
    @read = $read_select->can_read();
    foreach my $reade (@read) {
        if ($reade == $socket) {
            print "New conn received";
            my $newconn = $reade->accept();
            $write_select->add($newconn);
        }
        else {
            print "data received";
        }
    }
}

@write = $write_select->can_write();
foreach my $write (@write) {
    $write->send("got ur data");
}
I am trying to poll for connections using a select statement. Why is it that when I use an infinite loop, no connection is accepted? It works fine without the while(1).
I think you are being bitten by I/O buffering here. Perl buffers all input and output. It generally doesn't print to the terminal until it has received an entire line.
Your code is probably working with the while(1), but you can't see the output of your debug print statements because the output to the terminal is being buffered. Once you get to the second time through the loop, $read_select->can_read() blocks forever, so you never see the output of the print statements.
You can probably fix this just by adding \n to the end of each print statement. Another option is setting $| = 1;. This disables buffering. See perlvar's discussion of $| for more information on buffering.
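For instance, either of these near the top of the script should make the debug output show up immediately (the second form needs IO::Handle loaded for the method call):

# Option 1: disable output buffering for the currently selected handle (STDOUT).
$| = 1;

# Option 2: the object-style spelling of the same thing.
use IO::Handle;
STDOUT->autoflush(1);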
I have a simple script that should bind to an SMSC and listen for incoming messages. The problem I'm having is that it will time out if it doesn't receive any messages.
Here is the script:
#!/usr/bin/perl

use Net::SMPP;
use Data::Dumper;

$Net::SMPP::trace = 1;

$smpp = Net::SMPP->new_receiver('--removed--',
    port      => '--removed--',
    system_id => '--removed--',
    password  => '--removed--',
) or die;

while (1)
{
    $pdu = $smpp->read_pdu() or die;
    print "Received #$pdu->{seq} $pdu->{cmd}:" . Net::SMPP::pdu_tab->{$pdu->{cmd}}{cmd} . "\n";
    print "From: $pdu->{source_addr}\nTo: $pdu->{destination_addr}\nData: $pdu->{data}\n";
    print "Message: $pdu->{short_message}\n\n";
}
Here's the error I'm getting:
premature eof reading from socket at /usr/lib/perl5/site_perl/5.8.8/Net/SMPP.pm line 2424.
$VAR1 = undef;
And here's the relevant sub from SMPP.pm:
sub read_hard {
    my ($me, $len, $dr, $offset) = @_;
    while (length($$dr) < $len+$offset) {
        my $n = length($$dr) - $offset;
        eval {
            local $SIG{ALRM} = sub { die "alarm\n" };  # NB: \n required
            alarm ${*$me}{enquire_interval} if ${*$me}{enquire_interval};
            warn "read $n/$len enqint(${*$me}{enquire_interval})" if $trace>1;
            while (1) {
                $n = $me->sysread($$dr, $len-$n, $n+$offset);
                next if $! =~ /^Interrupted/;
                last;
            }
            alarm 0;
        };
        if ($@) {
            warn "ENQUIRE $@" if $trace;
            die unless $@ eq "alarm\n";   # propagate unexpected errors
            $me->enquire_link();          # Send a periodic ping
        } else {
            if (!defined($n)) {
                warn "error reading header from socket: $!";
                ${*$me}{smpperror} = "read_hard I/O error: $!";
                ${*$me}{smpperrorcode} = 1;
                return undef;
            }
            #if ($n == 0) { last; }
            if (!$n) {
                warn "premature eof reading from socket";
                ${*$me}{smpperror} = "read_hard premature eof";
                ${*$me}{smpperrorcode} = 2;
                return undef;
                #return 0;
            }
        }
    }
    #warn "read complete";
    return 1;
}
In the sub, the branch it hits is the one where $n is 0 or undef.
My guess is that the socket is timing out and disconnecting. How can I keep the listener up indefinitely?
In addition, this listener blocks while waiting for a pdu. Is there a way to listen without blocking?
I'm a Telecom Engineer who does programming on the side, and I've gone through all the material I could find but couldn't find an answer.
It looks as if the sysread() call simply returns 0. It can only do that if the connection is known to be disconnected. Since your side did not disconnect or time out, I would deduce that the remote side disconnected. If a timeout had occurred on your side, you would not have seen the "premature eof" message.
So you are already keeping the listener up indefinitely, since you do not set the enquire_interval option.
Regarding "Is there a way to listen without blocking?": the DESCRIPTION section describes asynchronous mode at the end. The module can also be used asynchronously by specifying async => 1 to the constructor; you then have to implement the data polling yourself.
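A rough, unverified sketch of what such polling could look like, assuming the Net::SMPP handle can be registered with IO::Select (it is documented as an IO::Socket::INET subclass) and that read_pdu can be called once the socket is readable. The host, port, credentials, and the 5-second poll interval are placeholders; check the behaviour of read_pdu in async mode against the Net::SMPP docs.

use strict;
use warnings;
use Net::SMPP;
use IO::Select;

# async => 1 puts the module in asynchronous mode, so data polling is up to us.
my $smpp = Net::SMPP->new_receiver('smsc.example.net',   # placeholder host
    port      => 2775,                                    # placeholder port
    system_id => 'some_user',
    password  => 'some_pwd',
    async     => 1,
) or die "bind failed";

my $sel = IO::Select->new($smpp);

while (1) {
    if ($sel->can_read(5)) {                   # wait up to 5 seconds for traffic
        my $pdu = $smpp->read_pdu() or last;   # undef here likely means the peer closed
        print "Received #$pdu->{seq} $pdu->{cmd}\n";
    }
    else {
        # nothing arrived within 5 seconds; do other periodic work here
    }
}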
Have you tried setting a parameter for the enquire link (SMPP ping) timeout?
On your new_receiver, check whether the enquire_interval parameter exists and set it to 15 seconds, for example.
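Something along these lines, keeping the placeholders from the question as-is (only a sketch):

# Ask Net::SMPP to send an enquire_link "ping" every 15 seconds so the
# bind is not dropped as idle.
my $smpp = Net::SMPP->new_receiver('--removed--',
    port             => '--removed--',
    system_id        => '--removed--',
    password         => '--removed--',
    enquire_interval => 15,
) or die;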
I have tried with the new_transceiver() method, and it works.
my $smpp = Net::SMPP->new_transceiver(
    $self->host,
    port              => $self->port,
    system_id         => $self->user,
    password          => $self->password,
    smpp_version      => $self->version,
    interface_version => $self->interface_version,
    enquire_interval  => $self->timeout,
    addr_ton          => $self->addr_ton,
    addr_npi          => $self->addr_npi,
    source_addr       => $self->source_addr,
    source_addr_ton   => $self->source_addr_ton,
    source_addr_npi   => $self->source_addr_npi,
    dest_addr_ton     => $self->dest_addr_ton,
    dest_addr_npi     => $self->dest_addr_npi,
    system_type       => $self->system_type,
    facilities_mask   => $self->facilities_mask,
) or die "Could not connect to " . $self->host . ": $!";
It (Net::SMPP) handles enquire link automatically.
I am also receiving the same premature-termination error.
Regarding blocking, you can fork or spawn a thread so that both run in parallel, as sketched below; there is no way around the blocking read itself.
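A minimal sketch of the fork variant, assuming $smpp is an already-bound receiver as in the question (the PDU handling in the child is only an illustration):

# Fork a child that sits in the blocking read_pdu() loop while the
# parent keeps doing other work in parallel.
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: block on the SMSC connection
    while (my $pdu = $smpp->read_pdu()) {
        print "From: $pdu->{source_addr} Message: $pdu->{short_message}\n";
    }
    exit 0;
}

# parent: other processing can go here ...
waitpid($pid, 0);   # reap the child when it exits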
I've bumped into a strange problem. I wrote a little daemon in Perl which binds to a port on a server.
On the same server there's a LAMP stack running, and the client for my Perl daemon is a PHP file that opens a socket to the daemon, pushes some info, and then closes the connection. In the Perl daemon I log each connection to a log file for later use.
My biggest problem is the following: between the moment the PHP script finishes its execution and the moment the daemon logs the connection there are 15-20 seconds.
PHP Client:
$sh = fsockopen("127.0.0.1", 7890, $errno, $errstr, 30);
if (!$sh)
{
    echo "$errstr ($errno)<br />\n";
}
else
{
    $out = base64_encode('contents');
    fwrite($sh, $out);
    fclose($sh);
}
Perl daemon (just the socket part)
#!/usr/bin/perl

use strict;
use warnings;
use Proc::Daemon;
use Proc::PID::File;
use IO::Socket;
use MIME::Base64;
use Net::Address::IP::Local;

MAIN:
{
    # setup some vars to be used down...

    if (Proc::PID::File->running())
    {
        exit(0);
    }

    my $sock = new IO::Socket::INET(
        LocalHost => $ip,
        LocalPort => $port,
        Proto     => 'tcp',
        Listen    => SOMAXCONN,
        Reuse     => 1);
    $sock or die "no socket :$!";

    my ($new_sock, $c_addr, $buf);

    for (;;)
    {
        # setup log file
        open(LH, ">>".$logs);
        print "SERVER started on $ip:$port \n";
        print LH "SERVER started on $ip:$port \n";

        while (($new_sock, $c_addr) = $sock->accept())
        {
            my ($client_port, $c_ip) = sockaddr_in($c_addr);
            my $client_ipnum = inet_ntoa($c_ip);
            my $client_host  = gethostbyaddr($c_ip, AF_INET);

            my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime time;
            $year += 1900;
            $mon  += 1;

            print "$year-$mon-$mday $hour:$min:$sec [".time()."] - got a connection from: [$client_ipnum]";

            open(AL, ">>".$accessLog);
            print AL "$year-$mon-$mday $hour:$min:$sec [".time()."] - got a connection from: [$client_ipnum]\n";
            close AL;

            while (defined ($buf = <$new_sock>))
            {
                print "contents:", decode_base64($buf), " \n";
                open(FH, ">".$basepath."file_" . time() .".txt")
                    or warn "Can't open ".$basepath."file_".time().".txt for writing: $!";
                print FH decode_base64($buf);
                close FH;
            }
        }
        close LH;
    }
}
What am I doing so wrong that there is a 20-second gap between PHP closing the socket after writing to it and the Perl script logging the connection? Any ideas?
Be gentle, I'm new to Perl :)
$new_sock is not closed explicitly, and so is not closed until the next accept call. This might cause some things to hang until timeouts are hit. (I am not sure whether the close happens on entry to accept or on exit from it.)
Also, you are using the "<>" operator to read data from a socket. What happens if there are no newlines in the input?
The best way to see what is actually happening is to run the process under "strace -e trace=network" and try to match up the network system calls with the Perl and PHP statements.
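To illustrate the first two points, the inner part of the accept loop could read raw bytes instead of lines and close the client socket explicitly. This is only a sketch, reusing $new_sock and decode_base64 from the question; the 4096-byte chunk size is an arbitrary choice.

# Read whatever bytes arrive until the client closes its end
# (no newline required), then close the client socket explicitly
# instead of leaving it open until the next accept().
my $buf = '';
while (sysread($new_sock, my $chunk, 4096)) {
    $buf .= $chunk;
}
print "contents: ", decode_base64($buf), "\n";
close $new_sock;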
I am not seeing any call to flush the buffer; could you check whether the delay disappears when flushing after logging?
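For example, the access-log handle could be switched to a lexical filehandle with autoflush turned on right after the open. This is only a sketch of where such a call could go, reworking the AL open from the question:

use IO::Handle;    # provides autoflush() on filehandles

open(my $al, '>>', $accessLog) or warn "Can't open $accessLog: $!";
$al->autoflush(1);    # push each log line out immediately instead of buffering it
print {$al} "$year-$mon-$mday $hour:$min:$sec [" . time() . "] - got a connection from: [$client_ipnum]\n";
close $al;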