I have written a Perl script that checks a list of URLs by sending a GET request to each.
Now, let's say that one of these URLs points to a very large file, for instance one bigger than 100 MB.
When a request is sent to download this file using this:
my $mech = WWW::Mechanize->new();
my $url  = "http://somewebsitename.com/very_big_file.txt";
$mech->get($url);
Once the GET request is sent, it will start downloading the file. I want to be able to cancel the download using WWW::Mechanize. How can I do that?
I checked the documentation of this Perl module here:
http://metacpan.org/pod/WWW::Mechanize
However, I could not find a method which would help me do this.
Thanks.
Aborting a GET request
Using the :content_cb option, you can provide a callback function to get() that will be executed for each chunk of response content received from the server. You can set* the chunk size (in bytes) using the :read_size_hint option. These options are documented in LWP::UserAgent (get() in WWW::Mechanize is just an overridden version of the same method in LWP::UserAgent).
The following request will be aborted after reading 1024 bytes of response content:
use WWW::Mechanize;

sub callback {
    my ($data, $response, $protocol) = @_;
    die "Too much data";
}

my $mech = WWW::Mechanize->new;
my $url  = 'http://www.example.com';
$mech->get($url, ':content_cb' => \&callback, ':read_size_hint' => 1024);
print $mech->response()->header('X-Died');
Output:
Too much data at ./mechanize line 12.
Note that the die in the callback does not cause the program itself to die; it simply sets the X-Died header in the response object. You can add the appropriate logic to your callback to determine under what conditions a request should be aborted.
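For example, to abort only once a certain amount of data has arrived, the callback can keep a running total. A sketch; $received and $limit are names I made up for illustration:
my $received = 0;
my $limit    = 100 * 1024 * 1024;    # 100 MB, as in the question

sub callback {
    my ($data, $response, $protocol) = @_;
    $received += length $data;
    die "Exceeded $limit bytes" if $received > $limit;
}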
Don't even fetch URL if content is too large
Based on your comments, it sounds like what you really want is to never send a GET request in the first place if the content is too large. This is quite different from aborting a request midway through: you can fetch the Content-Length header with a HEAD request and act on its value:
my $mech = WWW::Mechanize->new;
my @urls = qw(http://www.example.com http://www.google.com);

foreach my $url (@urls) {
    $mech->head($url);
    if ($mech->success) {
        my $length = $mech->response()->header('Content-Length') // 0;
        next if $length > 1024;
        $mech->get($url);
    }
}
Note that according to the HTTP spec, applications should set the Content-Length header. This does not mean that they will (hence the default value of 0 in my code example).
* According to the documentation, the protocol module "will try to read data from the server in chunks of this size," but I don't think it's guaranteed.
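As an alternative worth knowing about: LWP::UserAgent (and therefore WWW::Mechanize, which inherits from it) also provides a max_size attribute that aborts the transfer once more than the given number of bytes of content has been received, setting the Client-Aborted response header. A minimal sketch:
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();
$mech->max_size(1024);    # stop reading content after roughly 1 KB
$mech->get('http://www.example.com');
print "Aborted early\n" if $mech->response()->header('Client-Aborted');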
Related
I want to send an XML request using HTTP streaming, where the transfer encoding is "chunked". Currently I am using LWP::UserAgent to send the XML transaction.
use HTTP::Request::Common qw( POST );
use LWP::UserAgent;
use Data::Dumper;

my $userAgent = LWP::UserAgent->new;
my $starttime = time();
my $response  = $userAgent->request(POST $url,
    Content_Type      => 'application/xml',
    Transfer_Encoding => 'Chunked',
    Content           => $xml);
print "Response" . Dumper($response);
But I am getting HTTP status code 411 Length Required, which means: "client error response code indicates that the server refuses to accept the request without a defined Content-Length header".
How can we handle this when sending a chunked request?
LWP::UserAgent's API isn't designed to send a stream, but it is able to do so with minimal hacking.
use strict;
use warnings qw( all );

use HTTP::Request::Common qw( POST );
use LWP::UserAgent qw( );

my $ua = LWP::UserAgent->new();

# Don't provide any content.
my $request = POST('http://stackoverflow.org/junk',
    Content_Type => 'application/xml',
);

# POST() insists on adding a Content-Length header.
# We need to remove it to get a chunked request.
$request->headers->remove_header('Content-Length');

# Here's where we provide the stream generator.
my $buffer = "abc\n";
$request->content(sub {
    return undef if !length($buffer);                # Return undef when done.
    return substr($buffer, 0, length($buffer), '');  # Return a chunk of data otherwise.
});

my $response = $ua->request($request);
print($response->status_line);
Using a proxy (Fiddler), we can confirm that this does indeed send a chunked request.
There's no point in using a chunked request if you already have the entire document handy, as in the example you gave. Instead, let's say you wanted to upload the output of some external tool as it produced its output. To do that, you could use the following:
open(my $pipe, '-|:raw', 'some_tool')
    or die("Can't spawn some_tool: $!\n");

$request->content(sub {
    my $rv = sysread($pipe, my $buf, 64*1024);
    die $! if !defined($rv);   # Read error.
    return undef if !$rv;      # EOF reached; end the request body.
    return $buf;               # Send this chunk.
});
But I am getting HTTP status code 411 Length Required.
Not all servers understand a request with a chunked payload, even though this is standardized in HTTP/1.1 (but not in HTTP/1.0). For example, nginx has only supported chunked request bodies since version 1.3.9 (2012); see Is there a way to avoid nginx 411 Content-Length required errors?. If the server does not understand a request with chunked encoding, there is nothing you can do from the client side, i.e. you simply cannot use chunked transfer encoding. If you have control over the server, make sure that it actually supports it.
I've also never seen browsers send such requests, probably because they cannot guarantee that the server will support them. I've only seen them used by mobile apps where the server and the app are managed by the same party, so support for chunked requests can be guaranteed.
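If chunked requests are off the table for a given server, the only real fallback is to buffer the whole body first so that a Content-Length header can be sent as usual. A sketch, reusing the $userAgent, $url and $xml names from the question:
use HTTP::Request::Common qw( POST );

# No Transfer_Encoding => 'Chunked' here: with a complete body,
# POST() adds the Content-Length header for us.
my $response = $userAgent->request(POST $url,
    Content_Type => 'application/xml',
    Content      => $xml,
);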
I was using an open source tool called SimTT which takes the URL of a table tennis league and then calculates the probable results (e.g. the ranking of teams and players). Unfortunately the page moved to a different website.
I downloaded the source and fixed the parsing of the page, but currently I can only download the page manually and then read it from a file.
Below you can find an excerpt of my code to retrieve the page. It prints success, but the response is empty. Unfortunately I'm not very familiar with Perl and web techniques, but in Wireshark I could see that one of the last things sent was a new session key. I'm not sure, though, whether the problem is related to cookies, SSL or something like that.
It would be very nice if someone could help me get access. I know that there are some people out there who would like to use the tool.
So here's the code:
use LWP::UserAgent ();
use HTTP::Cookies;

my $ua = LWP::UserAgent->new(keep_alive => 1);
$ua->agent('Mozilla/5.0');
$ua->cookie_jar({});

my $request = HTTP::Request->new('GET', 'https://www.mytischtennis.de/clicktt/ByTTV/18-19/ligen/Bezirksoberliga/gruppe/323819/mannschaftsmeldungen/vr');
my $response = $ua->request($request);

if ($response->is_success) {
    print "Success: ", $response->decoded_content;
}
else {
    die $response->status_line;
}
Either there is some rudimentary anti-bot protection at the server, or the server is misconfigured or otherwise broken. It looks like it expects an Accept-Encoding header in the request, which LWP by default does not send. The value of this header does not really seem to matter, i.e. the server will send the content compressed with gzip if the client claims to support it, but it will send uncompressed data if the client offers only a compression method which is unknown to the server.
With this knowledge one can change the code like this:
my $request = HTTP::Request->new('GET',
    'https://www.mytischtennis.de/clicktt/ByTTV/18-19/ligen/Bezirksoberliga/gruppe/323819/mannschaftsmeldungen/vr',
    [ 'Accept-Encoding' => 'foobar' ]
);
With this simple change the code currently works for me. Note that it might break at any time if the server setup changes, i.e. other workarounds might then be needed.
I'm using WWW::Mechanize to fetch a web page that includes a Google Maps widget that receives a constant stream of data from a single response of type text/event-stream.
That kind of response is essentially a never-ending response from the server that continuously returns updated data for the widget to work with.
I'm trying to find out how to read that exact response from Perl. Using something like:
my $mech = WWW::Mechanize->new;
# Do some normal GET and POST requests to authenticate and set cookies for the session
# Now try to get that text/event-stream response
$mech->get('https://some.domain.com/event_stream_page');
But that doesn't work because the response never ends.
How can I make that request and start reading the response and do something with that data every time the server updates the stream?
Found a way to do this, using a handler from LWP, from which WWW::Mechanize inherits:
$mech->add_handler(
    'response_data',
    sub {
        my ($response, $ua, $h, $data) = @_;

        # Your chunk of response is now in $data, do what you need.

        # If you plan on reading an infinite stream, it's a good idea to
        # clean the response so it doesn't grow infinitely too!
        $response->content(undef);

        # Important: return a true value if you want to keep reading the response!
        return 1;
    },
);
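Building on that, for a text/event-stream response specifically you can buffer the chunks and peel off complete events, which are delimited by a blank line. A sketch under that assumption; handle_event() is a hypothetical function standing in for your own processing:
my $buffer = '';
$mech->add_handler(
    'response_data',
    sub {
        my ($response, $ua, $h, $data) = @_;
        $buffer .= $data;

        # SSE events are delimited by a blank line.
        while ($buffer =~ s/\A(.*?)\n\n//s) {
            handle_event($1);    # hypothetical: process one complete event
        }

        $response->content(undef);
        return 1;
    },
);
$mech->get('https://some.domain.com/event_stream_page');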
I am currently writing a Perl script that converts a file to WebM/Ogg/MP4 format and then sends it back to the user as an embedded video. It all works, except that I cannot signal the end of the file, so the HTML5 video player does not know where the end is and cannot use the file correctly (e.g. seeking to a specific time, or knowing when the video has ended; right now it just stops and you can't do anything with the video anymore).
Start-code:
elsif ($path =~ /^\/((\w|\d){11})\.webm$/ig) {
    print "HTTP/1.0 200 OK\r\n";
    $handler = \&resp_youtubemovie;
    $handler->($cgi, $1);
}
Function to send the webm file:
sub resp_youtubemovie {
    my $cgi       = shift;
    my $youtubeID = shift;
    return if !ref $cgi;

    open(movie, "<$youtubeID.webm");
    local($/);
    $movie = <movie>;
    close(movie);

    print "Content-type: movie/webm\n";
    print $movie;
}
I've already tried a while loop with a buffer, but that doesn't work either. I've also tried changing the HTTP status code to 206 Partial Content, because I saw in Wireshark that some other video streaming websites use it, but it didn't matter. So, does anyone have an idea how to open a movie file and stream it correctly?
Rather than doing this by hand, a framework like Dancer can take care of this. This will save you many, many, many headaches. It also allows you to take advantage of the Plack/PSGI superglue which figures out how to talk to web servers for you.
use Dancer;

get qr{/(\w{11}\.webm)$}i => sub {
    my ($video_file) = splat;
    return send_file(
        $video_file,
        streaming => 1,
    );
};
Using Dancer routes, you should be able to adapt your existing code pretty easily, especially if it's a big if/elsif chain matching against various paths. Dancer does a very good job making simple things simple, and it also gives you a huge amount of control over the exact HTTP response if you need it.
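For instance, each branch of the if/elsif chain can become its own route (the paths here are hypothetical, just to show the shape):
get '/status' => sub {               # hypothetical: was elsif ($path eq '/status')
    return 'OK';
};

get qr{/(\w{11}\.ogg)$}i => sub {    # hypothetical second media route
    my ($video_file) = splat;
    return send_file($video_file, streaming => 1);
};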
A few notes...
The content-type for webm is video/webm, which may be the source of your problems. Dancer should just get it right. If not, you can tell send_file the content type explicitly.
(\w|\d){11} is better written as \w{11} since \w includes \d.
You must use the 206 Partial Content HTTP status, and you must also send:
An Accept-Ranges: bytes header.
A Content-Range: bytes 0-2048/123456 header giving the starting and ending byte index of the content, followed by the total byte length of the content. The client will send you the byte ranges it wants in the request header. The client may send multiple byte ranges in a single request, in which case you'd also need to send the content with multipart boundaries.
Finally, to get back to your question, if the client requests a byte range that isn't satisfiable, then you send a 416 HTTP status and close the connection.
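Here's a rough sketch of handling a single-range request in the style of your hand-rolled server. It assumes $range holds the value of the request's Range header and that a single bytes=start-end range was requested; a real server would also need to handle multi-range and malformed headers:
my $file = "$youtubeID.webm";
my $size = -s $file;
my ($start, $end) = $range =~ /bytes=(\d+)-(\d*)/;
$end = $size - 1 if !defined $end || !length $end;

if ($start >= $size) {
    print "HTTP/1.0 416 Range Not Satisfiable\r\n\r\n";
    return;
}

open(my $fh, '<:raw', $file) or die $!;
seek($fh, $start, 0);
read($fh, my $chunk, $end - $start + 1);
close($fh);

my $length = $end - $start + 1;
print "HTTP/1.0 206 Partial Content\r\n";
print "Content-Type: video/webm\r\n";
print "Accept-Ranges: bytes\r\n";
print "Content-Range: bytes $start-$end/$size\r\n";
print "Content-Length: $length\r\n";
print "\r\n";
print $chunk;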
In my application, I fetch web pages periodically using LWP. Is there any way to check whether a page has changed between two consecutive fetches (other than explicitly comparing the content)? Is there any signature (say, a CRC) generated at lower protocol layers which can be extracted and compared against older signatures to detect changes?
There are two possible approaches. One is to use a digest of the page, e.g.
use strict;
use warnings;
use Digest::MD5 'md5_hex';
use LWP::UserAgent;

# fetch the page, etc.

my $digest = md5_hex $response->decoded_content;

if ( $digest ne $saved_digest ) {
    # the page has changed.
}
Another option is to use an HTTP ETag, if the server provides one for the resource requested. You can simply store it and then set your request headers to include an If-None-Match field on subsequent requests. If the server ETag has remained the same, you'll get a 304 Not Modified status and an empty response body. Otherwise you'll get the new page. (And new ETag.) See Entity Tags in RFC2616.
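A minimal sketch of the ETag round trip with LWP; how you persist $etag between runs is up to you:
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;

# $etag was saved from a previous response (undef on the first run).
my $response = $ua->get($url, 'If-None-Match' => $etag);

if ($response->code == 304) {
    # Not Modified: the cached copy is still current.
}
elsif ($response->is_success) {
    $etag = $response->header('ETag');    # save for the next request
    # ... process the new content
}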
Of course, the server could be lying, and sending the same ETag even though the content has changed. There's no way to know unless you look.
You should use the If-Modified-Since request header, noting the gotchas in the RFC. You send this header with the request. If the server supports it and thinks the content is newer, it sends it to you. If it thinks you have the most recent version, it returns a 304 with no message body.
However, as other answers have noted, the server doesn't have to tell you the truth, so you're sometimes stuck downloading the content and checking yourself. Many dynamic things will always claim to have new content because many developers have never thought about supporting basic HTTP things in their web apps.
For the LWP bits, you can create a single request with an extra header:
use HTTP::Request;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;

my $request = HTTP::Request->new( GET => $url );
$request->header( 'If-Modified-Since' => $time );

$ua->request( $request );
For all requests, you can set a request handler:
$ua->add_handler(
    request_send => sub {
        my ($request, $ua, $h) = @_;
        # ... look up $time from a local store
        $request->header( 'If-Modified-Since' => $time );
        return;   # returning a true value here would short-circuit the request
    }
);
However, LWP can do most of this for you with mirror if you want to save the files:
$ua->mirror( $url, $filename );
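mirror() sends If-Modified-Since based on the saved file's modification time and returns the response, so you can tell whether anything was actually fetched:
my $response = $ua->mirror( $url, $filename );
if ( $response->code == 304 ) {
    # Not Modified: the file on disk is still current.
}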