Issues with LWP logging into website - perl
I am new to LWP, so thanks in advance for any help. I am writing a small Perl script to log into a website and download a file. The process works perfectly well in a browser but not through LWP. With a browser the process is:

1. Log into the website via authentication (username, password)
2. Upon successful login, the website loads another page
3. One can then access the Downloads page and download the file

If one is not logged in and tries to access the download page, the website loads the Registration page to create a login instead.

The URL and user/pass are real, so you can try this on the website with the details in the code.

With the script, however, I get a success code but the website does not allow access to steps 2 or 3. Instead of downloading the file, the Registration page gets downloaded. I suspect this means that the login is not working from the script.

All help in making this work will be greatly appreciated. Code below:
#!/usr/bin/perl -w
use strict;
use warnings;
use LWP::Simple;
use LWP::UserAgent;
use HTTP::Cookies;
use HTTP::Request;
use WWW::Mechanize;

my $base_url = "http://www.eoddata.com/default.aspx";
my $username = 'xcytt';
my $password = '321pass';

# create a cookie jar on disk
my $cookies = HTTP::Cookies->new(
    file     => 'cookies1.txt',
    autosave => 1,
);

my $http = LWP::UserAgent->new();
$http->cookie_jar($cookies);

my $login = $http->post(
    'http://www.eoddata.com/default.aspx',
    Content => [
        username => $username,
        password => $password,
    ],
);

# check if the login succeeded
if ( $login->is_success ) {
    print "The response from server is " . $login->status_line . "\n\n";
    print "The headers in the response are \n" . $login->headers()->as_string() . "\n\n";
    print "Logged in Successfully\n\n";
    print "Printing cookies after successful login\n\n";
    print $http->cookie_jar->as_string() . "\n";

    my $url = "http://www.eoddata.com/Data/symbollist.aspx?e=NYSE";
    print "Now trying to download " . $url . "\n\n";

    # make the request to download the file
    my $file_req = HTTP::Request->new( 'GET', $url );
    print "Printing cookies before file download request\n\n";
    print $http->cookie_jar->as_string() . "\n";
    my $get_file = $http->request($file_req);

    # check the request status
    if ( $get_file->is_success ) {
        print "The response from server is " . $get_file->status_line . "\n\n";
        print "The headers in the response are " . $get_file->headers()->as_string() . "\n\n";
        print "Downloaded $url, saving it to file ...\n\n";
        open my $fh, '>', 'tmp_NYSE.txt' or die "ERROR: $!\n";
        print $fh $get_file->decoded_content;
        close $fh;
    } else {
        print "File Download failure\n";
    }
} else {
    print "Login Error\n";
}
Output from the script:
The response from server is 200 OK
The headers in the response are
Cache-Control: private
Date: Sun, 12 Oct 2014 17:43:47 GMT
Server: Microsoft-IIS/7.5
Content-Length: 39356
Content-Type: text/html; charset=utf-8
Client-Date: Sun, 12 Oct 2014 17:43:48 GMT
Client-Peer: 64.182.238.14:80
Client-Response-Num: 1
Link: <styles/jquery-ui-1.10.0.custom.min.css>; rel="stylesheet"; type="text/css"
Link: <styles/main.css>; rel="stylesheet"; type="text/css"
Link: <styles/button.css>; rel="stylesheet"; type="text/css"
Link: <styles/nav.css>; rel="stylesheet"; type="text/css"
Link: </styles/colorbox.css>; rel="stylesheet"; type="text/css"
Link: </styles/slides.css>; rel="stylesheet"; type="text/css"
Set-Cookie: ASP.NET_SessionId=cjgm4oscl1xmlzwnzql4gcns; path=/; HttpOnly
Title: End of Day Stock Quote Data and Historical Stock Prices
X-AspNet-Version: 4.0.30319
X-Meta-Description: Free end of day stock market data and historical quotes for many of the world's top exchanges including NASDAQ, NYSE, AMEX, TSX, OTCBB, FTSE, SGX, HKEX, and FOREX.
X-Meta-Keywords: metastock eod,free eod,free eod data,eod download,stock,exchange,data,historical stock quotes,free,historical share prices,download,day,end,prices,market,chart,NYSE,NASDAQ,AMEX,FTSE,FOREX,ASX,SGX,NZSE,tsx stock,stock share prices,stock ticker symbol,daily prices,daily stock,historic stock price,stock futures
X-Meta-Verify-V1: cT9ZK5uSlR3GrcasqgUh7Yh3fnuRGsRY1IRvE85ffa0=
X-Powered-By: ASP.NET
Logged in Successfully
Printing cookies after successful login
Set-Cookie3: ASP.NET_SessionId=cjgm4oscl1xmlzwnzql4gcns; path="/"; domain=www.eoddata.com; path_spec; discard; HttpOnly; version=0
Now trying to download http://www.eoddata.com/Data/symbollist.aspx?e=NYSE
Printing cookies before file download request
Set-Cookie3: ASP.NET_SessionId=cjgm4oscl1xmlzwnzql4gcns; path="/"; domain=www.eoddata.com; path_spec; discard; HttpOnly; version=0
The response from server is 200 OK
The headers in the response are Cache-Control: private
Date: Sun, 12 Oct 2014 17:43:48 GMT
Server: Microsoft-IIS/7.5
Content-Length: 49880
Content-Type: text/html; charset=utf-8
Client-Date: Sun, 12 Oct 2014 17:43:49 GMT
Client-Peer: 64.182.238.14:80
Client-Response-Num: 1
Link: <styles/jquery-ui-1.10.0.custom.min.css>; rel="stylesheet"; type="text/css"
Link: <styles/main.css>; rel="stylesheet"; type="text/css"
Link: <styles/button.css>; rel="stylesheet"; type="text/css"
Link: <styles/nav.css>; rel="stylesheet"; type="text/css"
Title: Member Registration
X-AspNet-Version: 4.0.30319
X-Meta-Description: Register now for Free end of day stock market data and historical quotes for many of the world's top exchanges including NASDAQ, NYSE, AMEX, TSX, OTCBB, FTSE, ASX, SGX, HKEX, and FOREX.
X-Meta-Keywords: metastock eod,free eod,free eod data,eod download,stock,exchange,data,historical stock quotes,free,download,day,end,prices,market,chart,NYSE,NASDAQ,AMEX,FTSE,FOREX,ASX,SGX,NZSE,tsx stock,stock share prices,stock ticker symbol,daily prices,daily stock,historic stock price
X-Powered-By: ASP.NET
Downloaded http://www.eoddata.com/Data/symbollist.aspx?e=NYSE, saving it to file ...
The headers from the browser are:
http://www.eoddata.com/myaccount/default.aspx
GET /Data/symbollist.aspx?e=NYSE HTTP/1.1
Host: www.eoddata.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:32.0) Gecko/20100101 Firefox/32.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: ASP.NET_SessionId=uvnqhzpzco1wpe300egm4hqj; __utma=264658075.1162754774.1412987203.1413069850.1413137050.4; __utmc=264658075; __utmz=264658075.1412987203.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _cb_ls=1; _chartbeat2=DMtSRyBOnGNFDptR86.1412466246942.1413137060190.10011111; _chartbeat_uuniq=3; EODDataAdmin=D838F9AA985E247A47493320CC8DC14950FA6CE49C6E1079DCFA95F632CEA7A2A6A691B352C544D41D0C208077D0C23897C9EA6EF0FE9221833A7131C334A657A48F5001BF2EBDE073D98BE4FD5719943AAC94D7C3DAA5A422FD575C663C337C93D5046AF3F7987998EDD60347531460FC54DEC81394352D9EDA00B7C954CC3304BC7D4C30D1F3A82C0EE58B890E0765; __utmb=264658075.2.10.1413137050; __utmt=1
Connection: keep-alive
HTTP/1.1 200 OK
Cache-Control: private
Transfer-Encoding: chunked
Content-Type: text/plain; charset=utf-8
Server: Microsoft-IIS/7.5
Content-Disposition: attachment;filename=NYSE.txt
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 12 Oct 2014 18:05:24 GMT
A snippet of the downloaded file, which is NOT the output I want, is below. Note that the title is "Member Registration" instead of the data file I am expecting:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><link rel="stylesheet" href="styles/jquery-ui-1.10.0.custom.min.css" type="text/css" /><link rel="stylesheet" href="styles/main.css" type="text/css" /><link rel="stylesheet" href="styles/button.css" type="text/css" /><link rel="stylesheet" href="styles/nav.css" type="text/css" />
<script src="../scripts/jquery-1.9.0.min.js" type="text/javascript"></script>
<script src="../scripts/jquery-ui-1.10.0.custom.min.js" type="text/javascript"></script>
<script type="text/javascript"> var _sf_startpt = (new Date()).getTime()</script>
<meta name="keywords" content="metastock eod,free eod,free eod data,eod download,stock,exchange,data,historical stock quotes,free,download,day,end,prices,market,chart,NYSE,NASDAQ,AMEX,FTSE,FOREX,ASX,SGX,NZSE,tsx stock,stock share prices,stock ticker symbol,daily prices,daily stock,historic stock price" />
<meta name="description" content="Register now for Free end of day stock market data and historical quotes for many of the world's top exchanges including NASDAQ, NYSE, AMEX, TSX, OTCBB, FTSE, ASX, SGX, HKEX, and FOREX." />
<title>
Member Registration
</title></head>
Most of those use statements are unnecessary, as LWP will generally pull in any modules that it needs.
If you are using LWP::UserAgent then you certainly don't need LWP::Simple or WWW::Mechanize, and if you pass cookie_jar => {} to the constructor, LWP will create an in-memory HTTP::Cookies object for you.
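For instance, the script's preamble could be trimmed down to something like this (a minimal sketch; cookie_jar => {} gives the in-memory jar mentioned above, so the on-disk cookies1.txt file is no longer needed):

use strict;
use warnings;
use LWP::UserAgent;

# An in-memory cookie jar is enough for a single session
my $http = LWP::UserAgent->new( cookie_jar => {} );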
The problem is most likely that the HTML that you are fetching from the web site contains JavaScript code that modifies it after it is retrieved. LWP won't emulate that for you, so the page remains just as it was sent from the web site.
There is no good solution to this, but WWW::Mechanize::Firefox allows you to drive an installed Firefox browser from Perl code, and will do what you need.
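Here is a minimal sketch of that approach, assuming WWW::Mechanize::Firefox is installed and Firefox is running with the MozRepl extension that the module talks to:

use strict;
use warnings;
use WWW::Mechanize::Firefox;

# Drive a real Firefox instance, so any JavaScript on the page runs normally
my $mech = WWW::Mechanize::Firefox->new();
$mech->get('http://www.eoddata.com/default.aspx');

# The content reflects the DOM after the browser has executed the scripts
print $mech->title, "\n";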
Your login code isn't logging you in: the data you are posting doesn't resemble the input that the login form expects.
Using WWW::Mechanize's mech-dump to examine the contents of the form at http://www.eoddata.com/default.aspx shows the following:
POST http://www.eoddata.com/default.aspx [aspnetForm]
ctl00_tsm_HiddenField= (hidden readonly)
__VIEWSTATE=/wEPDwUJNTgzMTIzMjMyD2QWAmYPZBYCAgMPZBYCAgcPZBYCAh0PZBYEAgMPZBYCAgcPDxYCHgRUZXh0ZWRkAgcPDxYCHgdWaXNpYmxlaGRkGAEFHl9fQ29udHJvbHNSZXF1aXJlUG9zdEJhY2tLZXlfXxYBBRpjdGwwMCRjcGgxJGxnMSRjaGtSZW1lbWJlcuq72b0jSSSEoSOAcZlLZzWMmsYqjOMTbPl/Op1ToVKf (hidden readonly)
__VIEWSTATEGENERATOR=CA0B0334 (hidden readonly)
__PREVIOUSPAGE=72Ep8BrmYqNbOSb65afxljULshovHpRLBJcMC0funBrM2g0qkkpORQb_wqNsu_2SbA5JbxbwNkpXlR_SZWwgPwwbGdBP4YGDoNJCDtPRQS81 (hidden readonly)
__EVENTVALIDATION=/wEdAAvsaJw1zF2h8PWbp8tJHjaFx+CzKn9gssNaJswg1PWksJd223BvmKj73tdq9M98Zo0JWPh42opnSCw9zAHys7YwDyn98qMl4Da8RNKOYtjmMtj1Nek/A8Dky1WNDflwB7GO1vgbcIR7aON1c4Cm5wJw0r2yvex8d7TohORX6QMo1j8IRvmRE3IYRPV0S4fj4csX1838LMsOJxqMoksh8zNIRuOmXf1pY8AyXSwvWgp1mYRx4mHFI6oep3qpPKhhA22Mc6tB5KOFIqkGgyvucIby (hidden readonly)
ctl00$Menu1$s1$txtSearch= (text)
ctl00$Menu1$s1$btnSearch=Search (submit)
ctl00$cph1$btns1=CLICK HERE (submit)
ctl00$cph1$btns2=CLICK HERE (submit)
ctl00$cph1$btns3=CLICK HERE (submit)
ctl00$cph1$lg1$txtEmail= (text)
ctl00$cph1$lg1$txtPassword= (password)
ctl00$cph1$lg1$chkRemember=<UNDEF> (checkbox) [*<UNDEF>/off|on]
ctl00$cph1$lg1$btnLogin=Login (submit)
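For reference, a dump like the one above can be reproduced with the mech-dump utility that ships with WWW::Mechanize:

mech-dump --forms http://www.eoddata.com/default.aspx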
Your POST request needs to set the appropriate fields from the form above to log in to the server successfully, unless there is documentation somewhere that specifically says the method you are using to log in is valid (I did not search the website to check this).
I cheated somewhat and created a valid login request using data from Chrome's inspector panel (rather than using WWW::Mechanize to populate the form or creating the request myself). With this, I was able to log in and download the file:
my $resp = $http->post(
    'http://www.eoddata.com/default.aspx',
    Content => 'ctl00_tsm_HiddenField=&__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=%2FwEPDwUJNTgzMTIzMjMyD2QWAmYPZBYCAgMPZBYCAgcPZBYCAh0PZBYEAgMPZBYCAgcPDxYCHgRUZXh0ZWRkAgcPDxYCHgdWaXNpYmxlaGRkGAEFHl9fQ29udHJvbHNSZXF1aXJlUG9zdEJhY2tLZXlfXxYBBRpjdGwwMCRjcGgxJGxnMSRjaGtSZW1lbWJlcuq72b0jSSSEoSOAcZlLZzWMmsYqjOMTbPl%2FOp1ToVKf&__VIEWSTATEGENERATOR=CA0B0334&__PREVIOUSPAGE=72Ep8BrmYqNbOSb65afxljULshovHpRLBJcMC0funBrM2g0qkkpORQb_wqNsu_2SbA5JbxbwNkpXlR_SZWwgPwwbGdBP4YGDoNJCDtPRQS81&__EVENTVALIDATION=%2FwEdAAvsaJw1zF2h8PWbp8tJHjaFx%2BCzKn9gssNaJswg1PWksJd223BvmKj73tdq9M98Zo0JWPh42opnSCw9zAHys7YwDyn98qMl4Da8RNKOYtjmMtj1Nek%2FA8Dky1WNDflwB7GO1vgbcIR7aON1c4Cm5wJw0r2yvex8d7TohORX6QMo1j8IRvmRE3IYRPV0S4fj4csX1838LMsOJxqMoksh8zNIRuOmXf1pY8AyXSwvWgp1mYRx4mHFI6oep3qpPKhhA22Mc6tB5KOFIqkGgyvucIby&ctl00%24Menu1%24s1%24txtSearch=&ctl00%24cph1%24lg1%24txtEmail=xcytt&ctl00%24cph1%24lg1%24txtPassword=321pass&ctl00%24cph1%24lg1%24btnLogin=Login',
);

if ( $resp->is_success ) {
    my $get_file = $http->get("http://www.eoddata.com/Data/symbollist.aspx?e=NYSE");
}
Dumping the contents of $get_file gave me the list of symbols and company names as expected.
You can use WWW::Mechanize to fill in the form fields, or you can scrape the form input values from http://www.eoddata.com/default.aspx (particularly the hidden fields, which change on every page load) and then create a POST request using those values and your login credentials, as sketched below.
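Here is a rough sketch of that scraping approach using HTML::Form, which ships with LWP. The field names come from the mech-dump output above, and the whole thing should be treated as an outline rather than a tested script:

use strict;
use warnings;
use LWP::UserAgent;
use HTML::Form;

my $ua = LWP::UserAgent->new( cookie_jar => {} );

# Fetch the login page so we get fresh values for the hidden fields
# (__VIEWSTATE and friends change on every page load)
my $page = $ua->get('http://www.eoddata.com/default.aspx');
my ($form) = HTML::Form->parse($page);

# Fill in the credentials; the hidden inputs keep their scraped values
$form->value( 'ctl00$cph1$lg1$txtEmail'    => 'xcytt' );
$form->value( 'ctl00$cph1$lg1$txtPassword' => '321pass' );

# click() builds the POST request, including the submit button's name/value
my $resp = $ua->request( $form->click('ctl00$cph1$lg1$btnLogin') );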
Also note that it is perfectly possible to get a successful response from the server without performing the action (e.g. login) that you intended. Redirects and pages saying "Login failed" will both be counted as a success by LWP::UserAgent.
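A cheap guard is to inspect the content as well as the status. This sketch assumes the "Member Registration" title seen in the output above is a reliable marker of a failed login, which would need checking against the real site:

# A 200 response is not proof of a successful login
if ( $resp->is_success && $resp->decoded_content !~ /Member Registration/ ) {
    print "Logged in\n";
}
else {
    die "Login appears to have failed\n";
}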
In case anyone is still interested in this problem, I have taken another look at it and found that it is quite workable using just LWP. However, the facilities of WWW::Mechanize make it much simpler to work with HTML forms.
Here's a program that logs in to the page using the credentials provided. Being an ASP page it has dreadful input names: for instance, the names of the username and password fields and the login button are ctl00$cph1$lg1$txtEmail, ctl00$cph1$lg1$txtPassword, and ctl00$cph1$lg1$btnLogin respectively. I have used the HTML::Form methods directly to locate these input fields using regular expressions, which I think makes the code much clearer.
I have displayed the title of the HTML page that is reached after logging in to demonstrate that it is working.
use strict;
use warnings;
use WWW::Mechanize;

my $base_url = 'http://www.eoddata.com/default.aspx';
my $username = 'xcytt';
my $password = '321pass';

my $mech = WWW::Mechanize->new;
$mech->get($base_url);

my $form   = $mech->form_id('aspnetForm');
my @inputs = $form->inputs;

my ($email) = grep $_->name =~ /Email/,    @inputs;
my ($pass)  = grep $_->name =~ /Password/, @inputs;
my ($login) = grep $_->name =~ /Login/,    @inputs;

$email->value($username);
$pass->value($password);

$mech->click_button(value => 'Login');

print $mech->title, "\n";
output
EODData - My Download
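To finish the original task, the program above could be extended with the download step. This is an untested sketch, but save_content is a standard WWW::Mechanize method:

# After logging in, fetch the NYSE symbol list and save it to disk
$mech->get('http://www.eoddata.com/Data/symbollist.aspx?e=NYSE');
$mech->save_content('tmp_NYSE.txt');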
Related
Should Rest API return only structured data? What if I have to get html?
I'm trying to develop a super simple web page with a REST API. In most examples, they suggest getting a resource like below.

request:

GET /dogs/1

response:

HTTP/1.1 200 OK
Content-Type: application/json

{ "id": 1, "name": "bity" }

The path /dogs/1 represents the structured data of dog 1. Then which path should I use for the HTML of the dog 1 detail view?

request:

GET ????

response:

HTTP/1.1 200 OK
Content-Type: text/html

<html>
<head></head>
<body>
<div>name is bity</div>
</body>
</html>
Then which path should I use for the html

The same. This is called "content negotiation": the Accept request header indicates which representation of the same resource the client desires to receive.

See also:
REST – Content Negotiation
RFC 7231, section 3.1.1.1, "Media type"
Software Engineering: Is it OK to return HTML from a JSON API?
Should I pass a full webpage as html/text for a rest GET call
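To make the mechanism concrete, here is a minimal PSGI sketch of content negotiation in Perl; the dispatch on the Accept header and the use of JSON::PP are illustrative choices, not something prescribed by the answer:

use strict;
use warnings;
use JSON::PP qw(encode_json);

# One resource, two representations, selected by the Accept request header
my $app = sub {
    my $env = shift;
    my %dog = ( id => 1, name => 'bity' );

    if ( ( $env->{HTTP_ACCEPT} // '' ) =~ m{application/json} ) {
        return [ 200,
                 [ 'Content-Type' => 'application/json' ],
                 [ encode_json( \%dog ) ],
        ];
    }
    return [ 200,
             [ 'Content-Type' => 'text/html' ],
             [ "<html><body><div>name is $dog{name}</div></body></html>" ],
    ];
};

$app;    # save as dogs.psgi and run with: plackup dogs.psgi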
CGI returns OK for 403
I wrote a simple script to serve a custom HTTP error 403 page and I use the following code:

use CGI qw/:standard/;

print header(
    '-Status'        => 403,
    '-Type'          => 'text/html; charset=utf-8',
    '-Cache-Control' => 'private, no-cache, no-store, must-revalidate, max-age=0',
    '-Pragma'        => 'no-cache',
);
...
print $html;

I expected the system to return the Forbidden status text automatically in the HTTP header. Unfortunately it returns 403 OK instead of 403 Forbidden. The text phrase is more likely added by the browser. Sure, I can explicitly add the status text using '-Status' => '403 Forbidden', but I would still like to know why this isn't done automatically, and why I am getting the OK status instead... Is there a way to force Perl to add the default (English) status text for a selected response code?
Chrome is the culprit here. You can verify by running your snippet on the command line, which outputs the following headers:

Status: 403
Pragma: no-cache
Cache-control: private, no-cache, no-store, must-revalidate, max-age=0
Content-Type: text/html; charset=utf-8

Notice the status is plain old 403. CGI.pm does not know about the reason phrases recommended by the HTTP spec. Nor should it: they are merely recommendations (not defaults), and you can change them to whatever you want without affecting the protocol (403 Go away, anyone?). According to the standard, clients are not required to even look at the reason phrase. So no, unless you modify CGI.pm, there is no way to force Perl to add a reason phrase. Even if you do provide a reason phrase, browsers can do what they wish with it (although most browsers will probably behave sanely).
I've been looking a long time for this, and basically it's very simple: see https://serverfault.com/questions/121735/how-to-return-404-from-apache2-cgi-program

In short, change

use CGI qw/:standard/;
print header(
    '-Status' => 403,
    '-Type'   => 'text/html; charset=utf-8',
    ...

to

print "Status: 403 Forbidden\r\n";
print "Content-Type: text/html\r\n\r\n";
print "<h1>403 Forbidden!</h1>";
How to make Fake HTTP response objects for testing in perl
I have written a Perl script that feeds data into a web service. I have some system tests for the script that check that I can interact with the web service, and these work just fine, but I do not want to be running system tests when I make small changes; I want to run unit tests.

So far I have written a subclass of my importer that intercepts the web requests before it actually calls the URL in question and tests that all the inputs are of the right type and form. This works fine in all cases except where the script needs to read the response for instructions and then proceed to the next steps. My problem is that I cannot fake a response object. I've tried using HTTP::Response->new, but it keeps complaining about bad header arguments.

How do I best FAKE a response object?
There is no need to mock the HTTP::Response object. They are easy to construct, at least as easy as mocking would be, and less likely to introduce bugs into the tests. You need to read the documentation and not just guess at usage.

You can construct them in code, of course, but what I've done in the past more than once is just save the output of curl or a stringified request that was made against an application and parse it back into an object. Try playing around with these:

use warnings;
use strict;
use HTTP::Response;

my $response = HTTP::Response->new(204);
print $response->as_string;

my $other = HTTP::Response->parse(join "", <DATA>);
print $other->decoded_content, $/;

__DATA__
HTTP/1.1 200 OK
Cache-Control: public, max-age=53
Content-Type: text/html; charset=utf-8
Expires: Wed, 06 Jul 2011 19:13:54 GMT
Last-Modified: Wed, 06 Jul 2011 19:12:54 GMT
Vary: *
Date: Wed, 06 Jul 2011 19:12:59 GMT
Content-Length: 198121

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Stack Overflow</title>
</head>
<body class="home-page">
<blockquote>O HAI!</blockquote>
</body>
</html>
You may be looking for mock objects, in this case a mock LWP object? See Test::Mock::LWP on CPAN. Its documentation shows usage like this:

use Test::Mock::LWP;

# Setup fake response content and code
$Mock_response->mock( content => sub { 'foo' } );
$Mock_resp->mock( code => sub { 201 } );

# Validate args passed to request constructor
is_deeply $Mock_request->new_args, \@expected_args;

# Validate request headers
is_deeply [ $Mock_req->next_call ],
          [ 'header', [ 'Accept', 'text/plain' ] ];

# Special User Agent Behaviour
$Mock_ua->mock( request => sub { die 'foo' } );

If you search CPAN for Test::Mock, there are quite a few modules for mocking/faking objects for testing.
Why doesn't Safari set the cookies from my Perl CGI script?
I have a Perl-based website that attempts to set a number of cookies on the user's first visit, and I just noticed that Safari has stopped setting all but the first cookie that is passed. On the first visit two cookies should be set, 'location' and 'referrer'. In IE and Firefox the cookies are being set correctly, but Safari is only setting the 'location' cookie. I tried changing the names, values, etc., and the conclusion I've come to is that Safari is just setting the first of the two cookies.

Here is the code that is setting the cookies:

# Add location cookie if necessary
if ( !$query->cookie('location') && $user_location ) {
    my $cookie = $query->cookie(
        -name    => 'location',
        -value   => qq|$user_location|,
        -domain  => ".domain.com",
        -path    => '/',
        -expires => '+1Y',
    );
    push( @cookies, $cookie );
}

# Add referrer if first visit
if ( !$query->cookie('referrer') ) {
    if ( $ENV{'HTTP_REFERER'} ) {
        my $cookie = $query->cookie(
            -name    => 'referrer',
            -value   => $ENV{'HTTP_REFERER'},
            -domain  => ".domain.com",
            -path    => '/',
            -expires => '+3M',
        );
        push( @cookies, $cookie );
    } else {
        my $cookie = $query->cookie(
            -name    => 'referrer',
            -value   => 'unknown',
            -domain  => ".domain.com",
            -path    => '/',
            -expires => '+3M',
        );
        push( @cookies, $cookie );
    }
}

if ( scalar(@cookies) ) {
    print $query->header( -cookie => \@cookies );
}

Here is what I get when I try to access the website with curl:

curl -so /dev/null -D - http://domain.com

HTTP/1.1 200 OK
Date: Thu, 18 Feb 2010 20:19:17 GMT
Server: Apache/2.0.63 (Unix) mod_ssl/2.0.63 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.8 mod_perl/2.0.4 Perl/v5.8.8
Set-Cookie: location=Dallas; domain=.domain.com; path=/; expires=+1Y
Set-Cookie: referrer=unknown; domain=.domain.com; path=/; expires=Wed, 19-May-2010 20:19:20 GMT
Transfer-Encoding: chunked
Content-Type: text/html; charset=ISO-8859-1

Any ideas? I'm at a loss as to what I can do to resolve this issue, since it seems that my script is passing them correctly. Thanks in advance for any insights or ideas you might have!
Look at the expires date on the first cookie header: it's a literal +1Y instead of the actual standard datestamp that it should be. My guess is that your version of Safari is choking on this and simply refuses to parse the remaining cookie headers. To set a one-year expiration date, the correct syntax is -expires => '+1y' (note the lowercase 'y').
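Applied to the code in the question, the first cookie would then be built like this (same call, only the expires value changed):

my $cookie = $query->cookie(
    -name    => 'location',
    -value   => $user_location,
    -domain  => '.domain.com',
    -path    => '/',
    -expires => '+1y',    # lowercase 'y' so CGI.pm expands it to a real date
);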
Try upgrading CGI.pm (run cpan CGI). I had a similar problem with cookies that was solved by a CGI.pm upgrade.
A bit late for an answer, but better late than never: a simple way, without having to reinstall/update CGI.pm, is to specify the date you want your cookie to expire using DateTime.pm:

my $cookie = CGI->new->cookie(
    -name    => 'cookie_name',
    -value   => 'value',
    -domain  => $ENV{'HTTP_HOST'},
    -expires => ( ( DateTime->now->set_time_zone('local') )
                  ->add( months => 1 )
                  ->strftime("%a, %d %b %Y %I:%M:%S GMT") ),
    -path    => '/',
);

There I've got a cookie that will last for one month. I tested it on Safari under XP and it works fine. Hope this helps.
Can gmail's random-signatures-from-an-RSS-feed be used for truly dynamic signatures?
This is a new Gmail Labs feature that lets you specify an RSS feed to grab random quotes from to append to your email signature. I'd like to use that to generate signatures programmatically based on parameters I pass in, the current time, etc. (For example, I have a script in Pine that appends the current probabilities of McCain and Obama winning, fetched from Intrade's API. See below.) But it seems Gmail caches the contents of the URL you specify. Any way to control that, or does anyone know how often Gmail looks at the URL?

ADDED: Here's the program I'm using to test this. This file lives at http://kibotzer.com/sigs.php. The no-cache header idea, taken from http://mapki.com/wiki/Dynamic_XML, seems not to help.

<?php
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");               // Date in the past
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");  // HTTP/1.1
header("Cache-Control: no-store, no-cache, must-revalidate");
header("Cache-Control: post-check=0, pre-check=0", false);      // HTTP/1.0
header("Pragma: no-cache");
// XML header
header("content-type:text/xml");
?>
<!DOCTYPE rss PUBLIC "-//Netscape Communications//DTD RSS 0.91//EN"
  "http://my.netscape.com/publish/formats/rss-0.91.dtd">
<rss version="0.91">
<channel>
  <title>Dynamic Signatures</title>
  <link>http://kibotzer.com</link>
  <description>Blah blah</description>
  <language>en-us</language>
  <pubDate>26 Sep 2008 02:15:01 -0000</pubDate>
  <webMaster>dreeves@kibotzer.com</webMaster>
  <managingEditor>dreeves@kibotzer.com (Daniel Reeves)</managingEditor>
  <lastBuildDate>26 Sep 2008 02:15:01 -0000</lastBuildDate>
  <image>
    <title>Kibotzer Logo</title>
    <url>http://kibotzer.com/logos/kibo-logo-1.gif</url>
    <link>http://kibotzer.com/</link>
    <width>120</width>
    <height>60</height>
    <description>Kibotzer</description>
  </image>
  <item>
    <title>Dynamic Signature 1 (<?php echo gmdate("H:i:s"); ?>)</title>
    <link>http://kibotzer.com</link>
    <description>This is the description for Signature 1 (<?php echo gmdate("H:i:s"); ?>)</description>
  </item>
  <item>
    <title>Dynamic Signature 2 (<?php echo gmdate("H:i:s"); ?>)</title>
    <link>http://kibotzer.com</link>
    <description>This is the description for Signature 2 (<?php echo gmdate("H:i:s"); ?>)</description>
  </item>
</channel>
</rss>

--
http://ai.eecs.umich.edu/people/dreeves  -  -  search://"Daniel Reeves"

Latest probabilities from Intrade...
42.1% McCain becomes president (last trade 18:07 FRI)
57.0% Obama becomes president (last trade 18:34 FRI)
17.6% US recession in 2008 (last trade 16:24 FRI)
16.1% Overt air strike against Iran in '08 (last trade 17:39 FRI)
You might be able to do something on the client side: take a look at this Greasemonkey script, which randomly adds a signature. Since it's under your control, and not Google's, you can control whether it caches or not.
Try setting the Cache-Control: no-cache and Pragma: no-cache HTTP headers. If Google's signature code honors either of these headers then you'll be in luck.