I am trying to download a huge file from an FTP link using Perl, and the download takes a long time. I got:
Timeout at C:/Strawberry/perl/lib/Net/FTP.pm
What does this mean and how can I solve it?
Thanks
Solution:
Thanks @Chris Doyle
I changed the timeout value in my own Perl file, not in FTP.pm itself.
Thanks
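For reference, a minimal sketch of that kind of change (the 1800-second value is purely illustrative, not a recommendation):
use Net::FTP;

# A larger Timeout (in seconds) gives a huge download room to finish.
# Net::FTP puts the connection error in $@ when new() fails.
my $ftp = Net::FTP->new($myhost, Timeout => 1800)
    or die "Cannot connect to $myhost: $@";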
You can increase the timeout, but be aware that if the timeout is reached again and your server/client get out of sync, it may throw the same error you got the first time.
It seems the real issue is a lack of error handling in your Perl script, though.
Surely you have something like this in your Perl script:
my $ftp = Net::FTP->new( $myhost, Timeout => 10, Debug => 1 );
...
$ftp->get($myfile) or print "Got an error";
$ftp->quit();
Please note this message comes from .../perl/lib/Net/FTP.pm. Since FTP.pm is a third-party module (a kind of library) you are using to reach the FTP server, I suggest you not touch it, to avoid portability issues later on.
Normally the timeout is reached inside FTP.pm and control falls through to the or print "Got an error" condition, but in some cases the server/client simply get out of sync and FTP.pm throws an unhandled exception.
This exception will NOT reach the or print "Got an error" condition, so you need to catch it and handle it, as you would in any other language.
Here you can use eval to wrap up the code, catch the exception, and handle it as you need.
For example:
my $ftp = Net::FTP->new( $myhost, Timeout => 10, Debug => 1 );
...
eval {$ftp->get($myfile) or print("Can't get file $myfile") };
if ($@ =~ /Timeout/) {
print "Got a timeout issue: $@";
}
$ftp->quit();
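If you prefer named try/catch blocks over raw eval, the same handling can be written with the Try::Tiny module (just a sketch, assuming Try::Tiny is installed):
use Try::Tiny;

try {
    $ftp->get($myfile) or print "Can't get file $myfile";
}
catch {
    # inside catch, the exception text is in $_
    if (/Timeout/) {
        print "Got a timeout issue: $_";
    }
};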
Related
I have an issue with web application programming with Firebird.
I'm using mod_perl and Apache::DBI with Firebird.
I'm also using CGI::Session for session handling.
CGI::Session uses an already connected $dbh with Firebird.
AutoCommit is ON.
Every (multiple) SQL execution is wrapped in an eval {} statement.
For example,
$dbh = DBI->connect("dbi:Firebird:db=$DBSERVER:/home/cdbs/xxnet.fdb;
ib_charset=UTF8;ib_dialect=3",$DBUSER,$DBPASS,{
AutoCommit=>1,
LongReadLen=>8192,
RaiseError=>1
});
eval {
    $dbh->begin_work();
    my $sql = "SELECT * FROM SAMPLETABLE";
    my $st = $dbh->prepare($sql);
    $st->execute();
    while (my $R = $st->fetchrow_hashref()) {
        ...
    }
    $st->finish();
};
warn $@ if $@;
if ($@) {
    $dbh->rollback();
} else {
    $dbh->commit();
}
When an exception is raised in the eval section, the warn statement runs and the code tries to roll back the transaction.
Error messages are logged in error_log; after that, $dbh gets stuck -- CGI::Session returns no data.
I thought the 'warn' statement somehow included the 'rollback', so I tried commenting out the $dbh->rollback() statement. That looks good.
Elsewhere, I use 'warn' to log debug messages -- like print STDERR $@ --
and httpd gets stuck in the same way as above.
With Oracle (DBD::Oracle), I didn't see this situation.
Please tell me which part is BAD for using transactions with Firebird?
warn? DBD::Firebird?
Thanks.
Yasuto,
I am using the Net::OpenSSH module to connect to hosts using the (async => 1) option.
How is it possible to trap connection errors for the hosts that fail to connect? I do not want the error to appear in the terminal; instead it should be stored in a data structure, since I want to finally format all the data in a CGI script. When I run the script, the hosts that have a connection problem throw errors in the terminal. The code then executes further and tries to run commands on the disconnected hosts. I want to isolate the disconnected hosts.
my (%ssh, %ls); # Code copied from CPAN Net::OpenSSH
my @hosts = qw(host1 host2 host3 host4);
# multiple connections are established in parallel:
for my $host (@hosts) {
    $ssh{$host} = Net::OpenSSH->new($host, async => 1);
    $ssh{$host}->error and die "no remote connection"; # <--- doesn't work here! :-(
}
# then to run some command in all the hosts (sequentially):
for my $host (@hosts) {
    $ssh{$host}->system('ls /');
}
The $ssh{$host}->error and die "no remote connection" check doesn't work.
Any help will be appreciated.
Thanks
You run async connections, so the program continues its work and does not wait until the connection is established.
After new with the async option you try to check the error, but it is not defined yet, because the connection is still in progress and there is no error information.
As I understand it, you need to wait after the first loop until the connection process has a result.
Try to use ->wait_for_master(0);
From the documentation: if a false value is given, it will finalize the connection process and wait until the multiplexing socket is available.
It returns a true value after the connection has been successfully established. False is returned if the connection process fails or if it has not yet completed (then, the "error" method can be used to distinguish between both cases).
for my $host (@hosts) {
    $ssh{$host} = Net::OpenSSH->new($host, async => 1);
}
for my $host (@hosts) {
    unless ($ssh{$host}->wait_for_master(0)) {
        # check $ssh{$host}->error here; for example, delete $ssh{$host}
    }
}
# Do work here
I haven't tested this code.
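A slightly fuller sketch of the same idea (equally untested; the %errors hash is just one way to collect the failures for your CGI output):
my (%ssh, %errors);
for my $host (@hosts) {
    $ssh{$host} = Net::OpenSSH->new($host, async => 1);
}
for my $host (@hosts) {
    unless ($ssh{$host}->wait_for_master(0)) {
        # remember why the host failed, then drop it
        $errors{$host} = $ssh{$host}->error;
        delete $ssh{$host};
    }
}
# run commands only on the hosts that actually connected
for my $host (sort keys %ssh) {
    $ssh{$host}->system('ls /');
}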
PS: Sorry for my English. Hope it helps you.
I'm writing a small script to monitor whether anyone attempts to access certain ports on my Linux box (CentOS 6) using Perl 5.10.1. I'm getting back blank entries for my peerhost request and I'm not sure why. It sounds like it may be a failure in the IO::Socket module (http://snowhare.com/utilities/perldoc2tree/example/IO/Socket.html) but I'm not really sure. Any insight would be much appreciated.
EDIT:
Since I enabled strict and warnings, I'm getting an 'uninitialized value $display' warning in the cases where I thought the entry was blank.
#! /usr/bin/perl
use strict;
use warnings;
use IO::Socket;
use Sys::Syslog qw( :DEFAULT setlogsock);
use threads;
my @threads = ();
my @ports = (88, 110, 389);
main(\@ports);
sub main
{
my $ports=shift;
setlogsock('unix');
openlog($0,'','user');
for my $port (@{$ports}) {
push #threads, threads->create(\&create_monitor, $port );
}
$_->join foreach #threads;
closelog;
# wait for all threads to finish
}
sub create_monitor{
my $LocalPort=shift;
my $sock = new IO::Socket::INET (
LocalPort => $LocalPort,
Proto => 'tcp',
Listen => 1,
Reuse => 1,
) or die "Could not create socket: $!\n";
while(1)
{
my $peer_connection= $sock->accept;
my $display = $peer_connection->peerhost();
my $message="Connection attempt on port $LocalPort from $display";
#syslog('info', $message);
print $message."\n";
}
}
NOTE - it is intentional that this script never finishes. I'll eventually wrap it in an init script and run it as a service.
Perl's accept() reports errors like most other functions; for accept() the indicator is a false return value, see also here.
So when you get an undefined result, there was an error in the accept() call. The error from accept is saved in the errno variable ($!).
The same is true for peerhost() (see here). It can also fail and return an error code.
If you run only the above code and nothing else, then accept() probably fails because you reach your system's connection limit (you should close the connections). See rlimit() to find out how that number can be increased.
One case where peerhost() fails may be that the remote connection was already closed.
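Putting that together, the accept loop could check both calls defensively, something like this sketch (the 'unknown' fallback is my own placeholder):
while (1) {
    my $peer_connection = $sock->accept;
    unless ($peer_connection) {
        warn "accept() failed on port $LocalPort: $!";
        next;
    }
    # peerhost() can return undef, e.g. when the peer disconnected already
    my $display = $peer_connection->peerhost() // 'unknown';
    print "Connection attempt on port $LocalPort from $display\n";
    close $peer_connection;   # free the descriptor instead of leaking it
}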
Solution
As reported by @limulus in the answer I accepted, this was a bug in Net::HTTPS version 6.00. Always be wary of fresh .0 releases. Here's the relevant diff between the buggy and fixed version of that module:
D:\Opt\Perl512.32 :: diff lib\Net\HTTPS.pm site\lib\Net\HTTPS.pm
6c6
< $VERSION = "6.00";
---
> $VERSION = "6.02";
75,78c75,80
< # The underlying SSLeay classes fails to work if the socket is
< # placed in non-blocking mode. This override of the blocking
< # method makes sure it stays the way it was created.
< sub blocking { } # noop
---
> if ($SSL_SOCKET_CLASS eq "Net::SSL") {
> # The underlying SSLeay classes fails to work if the socket is
> # placed in non-blocking mode. This override of the blocking
> # method makes sure it stays the way it was created.
> *blocking = sub { };
> }
Original question
Relevance: It is annoying to see your HTTPS client block indefinitely because the connection endpoint is unreliable.
This experiment is easy to set up and replay at home. You just need two things: a tarpit to trap an incoming client, and a Perl script. The tarpit can be set up using netcat:
nc -k -l localhost 9999 # on Linux, for multiple requests
nc -l -p 9999 localhost # on Cygwin, for one request only
Then, point the script to this tarpit:
use strict;
use LWP::UserAgent;
use HTTP::Request::Common;
print 'LWP::UserAgent::VERSION ', $LWP::UserAgent::VERSION, "\n";
print 'IO::Socket::SSL::VERSION ', $IO::Socket::SSL::VERSION, "\n";
my $ua = LWP::UserAgent->new( timeout => 5, keep_alive => 1 );
$ua->ssl_opts( timeout => 5, Timeout => 5 ); # Yes - see note below!
my $rsp = $ua->request( GET 'https://localhost:9999' );
if ( $rsp->is_success ) {
print $rsp->as_string;
} else {
die $rsp->status_line;
}
What is this going to do? Well, connect to the port opened by NetCat, and then ... hang. Indefinitely. At least in terms of developer time. I mean it might time out after ten minutes or two hours, but I haven't checked; the specified timeout doesn't take effect, not on Linux, and not on Windows (Win32, haven't checked Cygwin).
Versions used:
LWP::UserAgent::VERSION 6.02
IO::Socket::SSL::VERSION 1.44
# on Linux
LWP::UserAgent::VERSION 6.02
IO::Socket::SSL::VERSION 1.44
# on Win32
Now for the timeout and Timeout parameters. The former is the name of the parameter for LWP::UA, the latter is the name for IO::Socket::SSL, used via LWP::Protocol::https. (Incidentally, why is metacpan HTTPS? Well, at least it's not a tarpit.) I am somehow hoping to have these parameters passed along :)
Just so you know, keep_alive doesn't have anything to do with the timeout not working, I verified that empirically. :)
Anyway, before digging deeper, does anyone know what's going on here and how to make the timeout work with HTTPS? Hard to believe I'm the first person running into this.
This is a result of the Net::HTTPS module overriding the blocking method of IO::Socket with a noop. Upgrading to the latest Net::HTTP package should fix this.
The timeout (and Timeout) options apply only to the connection -- how many seconds will LWP::UserAgent wait while connecting -- they are not for setting a timeout on the whole transaction.
You'll want to use Perl's alarm with a $SIG{ALRM} handler to timeout the whole transaction. See perldoc -f alarm or perlipc.
local $SIG{ALRM} = sub { die "SSL timeout\n" };
my $ua = LWP::UserAgent->new( timeout => 5, keep_alive => 1 );
$ua->ssl_opts( timeout => 5, Timeout => 5 );
eval {
alarm(10);
my $rsp = $ua->request( GET 'https://localhost:9999' );
if ( $rsp->is_success ) {
print $rsp->as_string;
} else {
die $rsp->status_line;
}
};
alarm(0);
if ($@) {
if ($@ =~ /SSL timeout/) {
warn "request timed out";
} else {
die "error in request: $@";
}
}
(tested on Linux. Alarms can be a bit more cantankerous in Windows/Cygwin)
I asked this question on PerlMonks, and received an answer to the effect that:
The underlying IO::Socket::INET does not support non-blocking sockets
on Win32, thus non-blocking IO::Socket::SSL is not supported on Win32,
which means also, that timeouts don't work (because they are based on
non-blocking). See also http://www.perlmonks.org/?node_id=378675
http://cpansearch.perl.org/src/SULLR/IO-Socket-SSL-1.60/README.Win32
The PerlMonks post pointed to is from 2004. I'm not sure the information still applies; after all, I've seen the timeout work on Windows, just not via SSL.
I'm not a Perl programmer but need to debug an error. I'm using the Net::SFTP::Foreign package.
When I attempt to get files, the following call fails:
$sftp->get(source, destination) or do { print "something went wrong."}
This line prints "something went wrong." What I would like is to find out WHAT went wrong! How can I extract the reason for the failure?
By the way, this script has been working for months without an error. The script is very reliable, I just don't know how to capture the reason for failure.
$sftp->get(source, destination) or warn "get() failed with " . $sftp->error . "\n";
$sftp->get($source, $destination)
or print "something went wrong: " . $sftp->error . "\n";