I have a fairly simple Perl script which uses the LWP::UserAgent module to follow URLs through redirection to find the final destination URL, which it then stores in our MySQL database. The problem is that from time to time the script reports warnings that look like this:
Day too big - 25592 > 24855
Sec too small - 25592 < 74752
Sec too big - 25592 > 11647
The warnings don't provide any other details as to why this is happening or which module is causing the issue, but I'm pretty sure it has to do with LWP::UserAgent.
I'm initializing the agent using the following code:
use LWP::UserAgent;
my $ua = LWP::UserAgent->new(cookie_jar => {}, requests_redirectable => []);
$ua->agent('Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:9.9.9.9) Gecko/20079999 Firefox/2.0.0.1');
$ua->timeout(10);
I searched online and the only result I found was the following thread, which was never resolved: http://www.mail-archive.com/libwww@perl.org/msg06515.html. The thread author thought that these warnings were somehow related to cookie dates being captured by the LWP::UserAgent module.
The warning doesn't seem to be affecting the script, but I would appreciate any help in better understanding what might be causing this issue, and advice on how to resolve it or at least suppress the warning messages. Thanks in advance for your help!
If upgrading is not an option for you, you can, of course, always filter out the warnings using a local $SIG{__WARN__} handler.
{
    local $SIG{__WARN__} = sub {
        warn @_ unless $_[0] =~ m(^.* too (?:big|small));
    };
    # your code here.
}
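To check the filter without LWP in the mix, here is a self-contained sketch: the warning text is copied from the question, and the @passed array merely stands in for perl's default warn behaviour.

```perl
use strict;
use warnings;

# Collect whatever the handler lets through, so we can inspect it.
my @passed;
{
    local $SIG{__WARN__} = sub {
        # Suppress only the cookie-date warnings; pass everything else on.
        return if $_[0] =~ m(^.* too (?:big|small));
        push @passed, $_[0];
    };
    warn "Day too big - 25592 > 24855\n";   # filtered out
    warn "something else went wrong\n";     # kept
}
print "kept: $_" for @passed;   # prints "kept: something else went wrong"
```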
It looks like you need to upgrade to the most recent version of LWP. See its Changes file:
2009-10-06 Release 5.833
Gisle Aas (5):
    Deal with cookies that expire far into the future [RT#50147]
    Deal with cookies that expire at or before epoch [RT#49467]
Related
In this oversimplified script I'm doing a GET request with Net::Async::HTTP using IO::Async::Loop::EV:
use Modern::Perl '2017';
use IO::Async::Loop::EV;
use Net::Async::HTTP;
use Future;
my $loop = IO::Async::Loop::EV->new;
my $http = Net::Async::HTTP->new(max_redirects => 0);
$loop->add($http);
my $f = $http->GET('https://www.perl.org')
    ->then(sub {
        my $response = shift;
        printf STDERR "got resp code %d\n", $response->code;
        return Future->done;
    });
$http->adopt_future($f->else_done);
$loop->run;
I get this warning a couple of times:
EV: error in callback (ignoring): Can't call method "sysread" on an undefined value at .../IO/Async/Stream.pm line 974
I get this warning when using IO::Async::Loop::Event too (again in IO::Async::Stream, at line 974).
For non-secure (http) links, however, all looks good, so something's probably wrong with IO::Async::SSL. (I tried this on different machines, with different OSes, and still get those warnings.)
Why am I getting this warning multiple times? Does it occur on your machines too?
It seems this warning is specific to the IO::Async::Loop::EV implementation. If you just
use IO::Async::Loop;
my $loop = IO::Async::Loop->new;
then it appears to work just fine. Unless you're picking that for a specific purpose it's best to avoid it and just let the IO::Async::Loop->new magic constructor find a good one.
Additionally, rather than ending the script with
$loop->run
you could instead use
$f->get
so that it performs a blocking wait until that Future is complete, but then exits cleanly afterwards, rather than having to be aborted with <Ctrl-C>.
I've raised this as a bug against IO::Async::Loop::EV at https://rt.cpan.org/Ticket/Display.html?id=124030
Initial Question
I am trying to use Perl to make a POST to a remote server. I am using an alarm to set a hard timeout. When the alarm triggers, I am getting what I would consider to be strange behaviour.
This is the code:
eval {
    local $SIG{ALRM} = sub {
        die("Foobar");
    };
    alarm 25;
    my $userAgent = LWP::UserAgent->new( keep_alive => 1 );
    $answer = $userAgent->post( ... );
    # During a timeout, I expect that this code will not run. However,
    # it does, and it prints "Foobar".
    $m = $answer->message();
    print $m;
    alarm 0;
};
alarm 0;
print "Done";
Put a sleep (or a breakpoint) on the server, so it will not respond and the alarm will trigger. When the alarm triggers, this will be printed:
Foobar
Done
My expectation was that this should print:
Done
Key questions:
Why is this happening? Am I using some kind of anti-pattern? Is using alarm a bad idea because underlying libraries may use it as well, and they may conflict?
What is the right way to solve this problem?
Appendix 1 - I know there is another method...
I know that I should be using:
$userAgent->timeout( ... );
And actually, I am. However, I would like to set a hard timeout as well, so that I can ensure I will spend at most 25 seconds waiting on the request. Since the timeout set via $userAgent->timeout( ... ) is reset each time the client gets something back from the server, it is not reliable enough on its own.
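One pattern that sidesteps the surprise above is to record the timeout in a flag set by the ALRM handler, instead of relying on the handler's die propagating out (LWP wraps requests in its own eval, which can swallow that die). This is a core-Perl sketch under that assumption; slow_request() is a hypothetical stand-in for $userAgent->post( ... ):

```perl
use strict;
use warnings;

# Hypothetical stand-in for the blocking HTTP call.
sub slow_request { sleep 10; return "response" }

sub with_hard_timeout {
    my ($seconds, $code) = @_;
    my ($result, $timed_out) = (undef, 0);
    eval {
        # The flag records the timeout even if an inner eval eats the die.
        local $SIG{ALRM} = sub { $timed_out = 1; die "timeout\n" };
        alarm $seconds;
        $result = $code->();
        alarm 0;
    };
    alarm 0;    # always clear the alarm, whatever happened inside the eval
    return $timed_out ? undef : $result;
}

my $answer = with_hard_timeout(1, \&slow_request);
print defined $answer ? "got: $answer\n" : "request timed out\n";
```

Callers then branch on definedness rather than on what eval left in $@, so a library-internal eval cannot fake a success.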
Appendix 2 - Environment Info
@bolav mentioned that on his system he could not reproduce the issue, so I guess it is possible that it is system dependent.
OS:
cat /etc/redhat-release
CentOS release 6.6 (Final)
Perl Version:
This is perl, v5.6.1 built for i686-linux
Appendix 3 - Answers on SO suggesting to use this method
https://stackoverflow.com/a/15900249/251589
This is hard to explain in a few sentences. I have spent the last 5 days trying to figure this out, so now I'm asking here as a last resort. I am trying to run a pool physics library with a tournament server, built by the Stanford University Computational Billiards Group, available at http://www.stanford.edu/group/billiards/
It provides a tournament server using apache2, postgresql and perl. I have been able to enable functions like logging in or creating simple matches, but the communication with AI clients does not work. The client is a C++ application I'm running in a terminal; it's fetching commands from the server and printing this error:
XMLRPC error: Unable to transport XML to server and get XML response back. HTTP response code is 500, not 200
And the apache2 error.log is printing this:
[Thu May 17 21:30:17 2012] [error] [client 127.0.0.1] Premature end of script headers: api.pl
I tried running it with CGI::Debug but didn't get any more output. I have been able to identify the line it fails at with some debug outputs (one after every line :) ), and it's in this function, on the line between the print STDERRs:
package Pool::Rules::GameState;

sub addToDb {
    my $self=shift;
    my $timeleft=shift || $self->timeLeft();
    my $timeleft_opp=shift || $self->timeLeftOpp();
    #print STDERR "uh uh uh".$self." ".$self->timeLeft()." ".$self->timeLeftOpp()." ".$timeleft." ".$timeleft_opp."\n";
    my $playingSolids = ($self->isOpenTable() ? undef : ($self->playingSolids()?1:0));
    # print STDERR "addToDb: X".$self->isOpenTable()."X Y".$self->playingSolids()."Y Z".$playingSolids."Z\n";
    $Pool::dB::dbh->do('INSERT INTO states (turntype,cur_player_started,playing_solids,timeleft,timeleft_opp,gametype) VALUES (?,?,?,?,?,?)',{},
        $self->getTurnType(),$self->curPlayerStarted()?1:0,$playingSolids,$timeleft,$timeleft_opp,$self->gameType()) or print STDERR $DBI::errstr;
    my $stateid=$Pool::dB::dbh->selectrow_array('SELECT lastval()');
    $self->tableState()->addToDb($stateid);
    return $stateid;
}
It's only simple getter methods, and still it just stops executing. The code of the getter methods was generated by SWIG. The first print line prints this:
uh uh uhPool::Rules::GameState=HASH(0x99fa99c) 0 0 00:10:00 00:10:00
Which seems to be okay, though it's curious that the $self-> getters only print 0. Since the library is from Stanford University and actual competitions have been held with it, I find it hard to believe that this is an error in the code. It just seems to be a bit too old to work (I had to make some adjustments in other places to make it work with more recent versions of libraries; e.g. I added a <cstddef> include in another file unrelated to this perl issue). I first tried making it run on current versions, which failed. Then I tried to install it on the versions the readme.txt says it was tested with (Ubuntu 8.10, Perl 5.10 and so on), but at some point my Ubuntu 8.10 installation always died and I had to reinstall (I tried this ~4 times; the gnome-terminal always segfaulted). So now I'm back on Ubuntu 12 trying to make it work. I don't know much about perl, only the little I have been able to pick up over the last few days trying to get this to run.
Does anyone have an idea what could be triggering this kind of behavior? Is anyone aware of any compatibility issues that may be related to this? If you need any additional information just ask and I'll provide it.
Thank you for your help!
I'm referring to this question, but didn't want to post there, as it was half a year ago and it's already answered.
I think that I need to set the alarm within the thread, because it is listening for a connection (sockets) and I don't know what time to set for the alarm until the client sends a command.
Short context: a client sends a command which orders my script to run a self-written perl module. This module needs to be killed if it runs longer than it should. This "should" is very specific and will be written in the config file for each module.
I tried the alarm within a simple perl script and it worked quite well - even with my own message.
I am able to let the alarm quit the script, but it does not give me a message at all.
Used this example until I noticed that it may be different with threads.
Then I tried Thread::alarm($time), but as I started with perl about 3 weeks ago I wasn't able to implement it correctly (it just does nothing; it does not even end the program).
Do you need any code to help or is there a site with examples that I could use and which I just did not find?
Have you already tried AnyEvent?
AnyEvent lets you set up watchers acting as timers:
# one-shot or repeating timers
my $w = AE::timer $seconds, 0, sub { ... }; # executed only once
my $w = AE::timer $seconds, $interval, sub { ... }; # executed every $interval
$seconds could be defined during the config phase, at thread start.
In callbacks you may use the same code that kills the program.
AnyEvent also has its own logging framework, AnyEvent::Log, which logs nothing by default, but you can enable some logging to see if it suits your needs regarding messages.
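If adding AnyEvent is not an option, the same per-module time limit can be sketched with core Perl by running the work in a forked child and killing it from the parent once the configured limit passes. run_with_timeout() and its one-second polling are illustrative choices, not code from the question:

```perl
use strict;
use warnings;
use POSIX ":sys_wait_h";   # for WNOHANG

# Run $task in a child process; kill it if it outlives $timeout seconds.
sub run_with_timeout {
    my ($timeout, $task) = @_;
    my $pid = fork // die "fork failed: $!";
    if ($pid == 0) {           # child: do the potentially long work
        $task->();
        exit 0;
    }
    for my $elapsed (1 .. $timeout) {
        sleep 1;                                  # coarse 1s polling
        return "finished" if waitpid($pid, WNOHANG) == $pid;
    }
    kill 'TERM', $pid;         # over the limit: terminate the worker
    waitpid($pid, 0);          # reap it so no zombie is left behind
    return "killed after ${timeout}s";
}

print run_with_timeout(2, sub { sleep 10 }), "\n";  # prints "killed after 2s"
```

The timeout passed in can come straight from the per-module config file the question mentions.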
(Update to answer Jonathan Leffler's question below):
We're running Perl 5.8.7 and Oracle 11.1.0.7.0.
Due to the company's policy, developers have no arbitrary control in regard to software upgrade. Giving the proposal to the upper management takes months to be followed up (if approved) - I guess it's not a surprisingly odd situation for several other companies too.
I inherited the program from someone who left the company, and found the warning about "issuing rollback() ..." in the application log file. The actual problem, "maximum open cursors exceeded", was found after I ran DBI_TRACE=2=/tmp/trace.log program_name.pl.
Looking at the numbers in $dbh->{ActiveKids}, $dbh->{Kids}, and $dbh->{CachedKids}, I assume the maximum number of open cursors is 50, as the error happens after it reaches 50.
Our legacy production codes are using these modules:
DBI - 1.48
Ima::DBI - 0.33
Class::DBI - 0.96
Class::DBI::Oracle - 0.51
DBD::Oracle - 1.16
For some odd policy reason, upgrading the module to a newer version is not possible :(
The application relies on using CDBI to handle relationships on a large number of tables. A simplified snippet of the code is as below:
JOB:
foreach my $job (@jobs) {
    my @records = $job->record;
    RECORD:
    foreach my $record (@records) {
        my @datas = $record->data;
        DATA:
        foreach my $data (@datas) {
            ....
        }
    }
}
where each element of @jobs, $record, and $data is an object mapped to a table, and the innermost loop calls several other triggers.
Somewhere after several loops I'm getting an Oracle error, "maximum open cursors exceeded", and then I get the error from CDBI: issuing rollback() for database handle being DESTROY'd without explicit disconnect().
I can work around it by undef-ing the DBI CachedKids in the outermost loop, with:
# somewhere during initialization
$self->{_this_dbh} = __PACKAGE__->db_Main();
....
JOB:
foreach my $job (@jobs) {
    RECORD: ....
    DATA: ....
    $self->{_this_dbh}->{CachedKids} = undef;
}
Is that the proper way to do it?
Or does CDBI support a way to clear a statement handle, the same way as DBI's $sth->finish()?
Thanks.
At some point, you will have to explain why you cannot upgrade to more nearly current versions of the software. You didn't mention which version of Perl you are using, or which version of Oracle; somehow, I suspect that it is neither 5.10.1 nor 11gR2.
Current versions:
Class::DBI 3.0.17
Class::DBI::Oracle 0.51
DBI 1.609 (version 1.48 is from 2005)
DBD::Oracle 1.23 (version 1.16 is from 2004)
Ima::DBI 0.35
What changed recently? Why are you suddenly finding problems in a piece of software that was, presumably, very stable? Is this new code?
With plain DBI, when you undef a statement handle (by having it go out of scope, for example), then the resources associated with it are released - more or less noisily. However, there is enough infrastructure between Class::DBI and DBI that it is hard to tell how this might map.
Have you worked out what the limit on open cursors actually is?
Have you worked out whether you've opened enough cursors to actually exceed that limit?
Have you tried running with DBI_TRACE set in the environment? A value such as 3 will tell you a fair amount about what is going on - maybe too much. It would show whether cursors are being released properly or not.
Have you tried reducing the number of tables manipulated in a single session?
Have you considered disconnecting and reconnecting between manipulating tables?
Is there a way to get to the statement handle corresponding to the Class::DBI abstractions, so that you can in fact execute $sth->finish()?
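For that last point, this is what the explicit-finish() discipline looks like in plain DBI. A sketch assuming DBD::SQLite and an in-memory database purely for illustration (whether Class::DBI 0.96 exposes the underlying $sth at the right moment is exactly the open question); with DBD::Oracle, the finish() call is what lets the driver release its cursor:

```perl
use strict;
use warnings;
use DBI;

# Illustrative schema; assumes DBD::SQLite is installed.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT)');
$dbh->do('INSERT INTO jobs (name) VALUES (?)', {}, $_) for qw(a b c);

my $sth = $dbh->prepare_cached('SELECT name FROM jobs ORDER BY id');
$sth->execute;
my @names;
while (my ($name) = $sth->fetchrow_array) {
    push @names, $name;
}
$sth->finish;       # release the cursor as soon as fetching is done
$dbh->disconnect;   # explicit disconnect avoids the rollback() warning
print "@names\n";   # prints "a b c"
```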