I am currently writing a Perl script that converts a file to WebM/Ogg/MP4 format and then sends it back to the user as an embedded video. It all works, except that I cannot signal the end of the file, so the HTML5 video player doesn't know where the end is and can't use the file properly (seeking to a specific time, knowing when the video has ended, and so on; right now it just stops and you can't do anything with the video anymore).
Starting code:
elsif ($path =~ /^\/((\w|\d){11})\.webm$/ig) {
    print "HTTP/1.0 200 OK\r\n";
    $handler = \&resp_youtubemovie;
    $handler->($cgi, $1);
}
The function that sends the WebM file:
sub resp_youtubemovie {
    my $cgi       = shift;
    my $youtubeID = shift;
    return if !ref $cgi;
    open(movie, "<$youtubeID.webm");
    local($/);
    $movie = <movie>;
    close(movie);
    print "Content-type: movie/webm\n";
    print $movie;
}
I've already tried a while loop with a buffer, but that doesn't work either. I've also tried changing the HTTP status code to 206 Partial Content, because in Wireshark I saw that some other video streaming websites use it, but it didn't matter. So, does anyone have an idea how to open a movie file and stream it correctly?
Rather than doing this by hand, a framework like Dancer can take care of it. This will save you many, many, many headaches. It also allows you to take advantage of the Plack/PSGI superglue, which figures out how to talk to web servers for you.
use Dancer;

get qr{/(\w{11}\.webm)$}i => sub {
    my ($video_file) = splat;
    return send_file(
        $video_file,
        streaming => 1,
    );
};
Using Dancer routes, you should be able to adapt your existing code pretty easily, especially if it's a big if/elsif chain matching against various paths. Dancer does a very good job of making simple things simple, and it also gives you a huge amount of control over the exact HTTP response if you need it.
A few notes...
The Content-Type for WebM is video/webm, which may be the source of your problems. Dancer should just get it right. If not, you can tell send_file the content type explicitly, as in the line below.
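For instance (exact option names may vary by Dancer version, so check your send_file docs):

# Explicit content type, in case Dancer's MIME detection guesses wrong.
send_file( $video_file, streaming => 1, content_type => 'video/webm' );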
(\w|\d){11} is better written as \w{11} since \w includes \d.
You must use the 206 Partial Content HTTP status, and you must also send:
An Accept-Ranges: bytes header.
A Content-Range: bytes 0-2048/123456 header, giving the starting and ending byte index of the content followed by the total byte length of the content. The client sends the byte ranges it wants in the request header. The client may send multiple byte ranges in a single request, in which case you'd also need to send the content as a multipart response with part boundaries.
Finally, to get back to your question, if the client requests a byte range that isn't satisfiable then you send a 416 HTTP status and close the connection.
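If you do stay with the hand-rolled server, a rough sketch of that byte-range dance looks like this (hedged: single range, happy path only; $fh and $range_header are assumptions, standing in for the open file handle and the request's Range header value from your server loop):

# Single-range, happy-path sketch. $fh is an open handle on the
# .webm file; $range_header holds the request's Range value.
binmode $fh;
my $size = -s $fh;
if (defined $range_header && $range_header =~ /bytes=(\d+)-(\d*)/) {
    my $start = $1;
    my $end   = length $2 ? $2 : $size - 1;
    if ($start >= $size) {
        # Range not satisfiable: 416 and close the connection.
        print "HTTP/1.0 416 Range Not Satisfiable\r\n";
        print "Content-Range: bytes */$size\r\n\r\n";
    }
    else {
        my $len = $end - $start + 1;
        print "HTTP/1.0 206 Partial Content\r\n";
        print "Content-Type: video/webm\r\n";
        print "Accept-Ranges: bytes\r\n";
        print "Content-Range: bytes $start-$end/$size\r\n";
        print "Content-Length: $len\r\n\r\n";
        seek $fh, $start, 0;
        read $fh, my $buf, $len;
        print $buf;
    }
}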
Related
I am building a web server using Apache and Perl CGI which processes the POST requests sent to it. The processing part requires me to get the completely unprocessed data from the request and verify its signature.
The client sends two different kinds of POST requests: one with the Content-Type set to application/json, and the other with Content-Type application/x-www-form-urlencoded.
I was able to fetch the application/json data using $cgi->param('POSTDATA'). But if I do the same for the application/x-www-form-urlencoded data, i.e. $cgi->param('payload'), I get the data already decoded. I want the data in its original URL-encoded format, i.e. the unprocessed data exactly as it is sent out by the client.
I am doing this for verifying requests sent out by Slack.
To handle all cases, including those where the Content-Type is multipart/form-data, read (and put back) the raw data before CGI does.
use strict;
use warnings;

use IO::Handle;
use IO::Scalar;

STDIN->blocking(1);  # ensure we read everything

my $cgi_raw = '';
{
    local $/;
    $cgi_raw = <STDIN>;

    # Put the raw data back so CGI.pm can read it as usual.
    my $s;
    tie *STDIN, 'IO::Scalar', \$s;
    print STDIN $cgi_raw;
    tied(*STDIN)->setpos(0);
}

use CGI qw(:standard);
...
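From there, verifying Slack's signature against the raw body might look like this (a sketch: hmac_sha256_hex comes from Digest::SHA, and $signing_secret is your app's signing secret, assumed to be defined elsewhere):

use Digest::SHA qw(hmac_sha256_hex);

my $timestamp = $ENV{HTTP_X_SLACK_REQUEST_TIMESTAMP};
my $their_sig = $ENV{HTTP_X_SLACK_SIGNATURE};
my $base      = "v0:$timestamp:$cgi_raw";
my $my_sig    = 'v0=' . hmac_sha256_hex($base, $signing_secret);
# Compare $my_sig to $their_sig (ideally with a constant-time compare).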
I'm not sure which Perl module can handle it all for you, but here is a basic rundown.
Your HTML form should submit to a .cgi file (or any other handler which is properly defined).
The raw request is something similar to this:
POST / HTTP/1.1
User-Agent: Mozilla/5.0
Content-Length: 69
Host: 127.0.0.1
(more headers depending on the situation, then a single blank line)
(message body containing the data)
username=John&password=123J (example)
See https://en.wikipedia.org/wiki/List_of_HTTP_header_fields for a full list of header fields.
What happens is that this data is made available via CGI (the mechanism, not the Perl module CGI.pm) through environment variables and STDIN: header fields are passed via environment variables, and the message body via STDIN.
In Perl, you can read those environment variables through %ENV: http://perldoc.perl.org/Env.html
And this to read stdin: https://perlmaven.com/read-from-stdin
From there on, you can process as needed.
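A bare-bones sketch of that flow (hedged: no sanitizing shown, which the warning below covers):

#!/usr/bin/perl
use strict;
use warnings;

# Header fields arrive as environment variables...
my $method = $ENV{REQUEST_METHOD} // '';
my $type   = $ENV{CONTENT_TYPE}   // '';
my $length = $ENV{CONTENT_LENGTH} // 0;

# ...and the message body arrives on STDIN.
my $body = '';
read(STDIN, $body, $length) if $method eq 'POST' && $length;

print "Content-Type: text/plain\r\n\r\n";
print "Got $length bytes of $type\n";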
BE CAREFUL when reading any of these. You can be sent malformed information, like 100 GB of data in one of the HTTP headers or in the message body, which can wreak havoc on you, or dangerous input for system calls, etc. Sanitizing is necessary before passing the data to other places.
I was using an open source tool called SimTT which takes the URL of a table tennis league page and then calculates the probable results (e.g. the ranking of teams and players). Unfortunately, the page moved to a different URL.
I downloaded the source and repaired the parsing of the page, but currently I'm only able to download the page manually and then read it from a file.
Below you can find an excerpt of my code to retrieve the page. It prints success, but the response is empty. Unfortunately, I'm not very familiar with Perl and web techniques, but in Wireshark I could see that one of the last things sent was a new session key. I'm not sure, though, whether the problem is related to cookies, SSL, or something like that.
It would be very nice if someone could help me get access. I know that there are some people out there who would like to use the tool.
So here's the code:
use LWP::UserAgent ();
use HTTP::Cookies;

my $ua = LWP::UserAgent->new(keep_alive => 1);
$ua->agent('Mozilla/5.0');
$ua->cookie_jar({});

my $request = HTTP::Request->new('GET', 'https://www.mytischtennis.de/clicktt/ByTTV/18-19/ligen/Bezirksoberliga/gruppe/323819/mannschaftsmeldungen/vr');
my $response = $ua->request($request);

if ($response->is_success) {
    print "Success: ", $response->decoded_content;
}
else {
    die $response->status_line;
}
Either there is some rudimentary anti-bot protection at the server, or the server is misconfigured or otherwise broken. It looks like it expects an Accept-Encoding header in the request, which LWP by default does not send. The value of this header does not really seem to matter, i.e. the server will send the content compressed with gzip if the client claims to support it, but it will send uncompressed data if the client offers only a compression method which is unknown to the server.
With this knowledge one can change the code like this:
my $request = HTTP::Request->new('GET',
'https://www.mytischtennis.de/clicktt/ByTTV/18-19/ligen/Bezirksoberliga/gruppe/323819/mannschaftsmeldungen/vr',
[ 'Accept-Encoding' => 'foobar' ]
);
With this simple change the code currently works for me. Note that this might break at any time if the server setup changes, i.e. it might then need other workarounds.
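As a variant of the same idea, the header can also be installed once on the user agent itself, so every request carries it:

my $ua = LWP::UserAgent->new(keep_alive => 1);
$ua->agent('Mozilla/5.0');
$ua->default_header('Accept-Encoding' => 'foobar');
# All requests made through $ua now include the header.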
I have a raw email (MIME multipart), and I want to display this on a website (e.g. in an iframe, with tabs for the HTML part and the plain text part, etc.). Are there any CPAN modules or Template::Toolkit plugins that I can use to help me achieve this?
At the moment, it's looking like I'll have to parse the message with Email::MIME, then iterate over all the parts, and write a handler for all the different mime types.
It's a long shot, but I'm wondering if anyone has done all this already? It's going to be a long and error prone process writing handlers if I attempt it myself.
Thanks for any help.
I actually dealt with this problem just a few months ago. I added an email feature to the product I work on, both sending and receiving. The first part was sending reminders to users, but we didn't want to manage the bounce-backs for our customer admins, so we decided to have a message inbox where the admins could see bounces and replies without us, and could deal with adjusting email addresses if they needed to.
Because of this, we accept all email that is sent to an inbox we watch. We use VERP to associate an email with a user, and store the entire email as-is in the database. Then, when the admin requests to see the email, we have to parse it.
My first attempt was very similar to an earlier answer: if one of the parts is HTML, show it; if it's text, show it; otherwise, show the original, raw email. This broke down real fast with a few emails not generated by sendmail. Outlook, Exchange, and a few other email systems don't do that; they use multiparts to send the email. After a lot of digging and cussing, I discovered that the problem doesn't appear to be well documented. With the help of looking through MHonArc and reading the RFCs (RFC 2045 and RFC 2046), I settled on the solution below. I decided against using MHonArc, since I couldn't easily reuse its parsing and display functionality. I wouldn't say this is perfect, but it's been good enough that we use it.
First, take the message and use Email::MIME to parse it. Then call a function called get_part with the array of parts Email::MIME gives you with ->parts().
get_part, for each part it is passed, decodes the content type, looks it up in a hash, and, if there is an entry, calls the function associated with that content type. If the decoder was able to give us something, it goes onto a result array.
The last piece of the puzzle is that decoder hash. Basically, it defines the content types I can deal with:
text/html
text/plain
message/delivery-status, which is actually also plain text
multipart/mixed
multipart/related
multipart/alternative
The non-multipart sections I return as-is. With mixed, related, and alternative, I merely call get_parts on that MIME node and return the results. Because alternative is special, it has some extra code after calling get_parts: it will return only the HTML if it has an HTML part, or only the text if it has a text part. If it has neither, it won't return anything valid.
The advantage of the hash of valid content types is that I can easily add logic for more parts as needed. And by the time get_parts is done, you should have an array of all the content you care about.
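A condensed sketch of that shape (hedged: the handler bodies are illustrative stand-ins, not the production code described above, and the multipart/alternative handling is simplified):

use Email::MIME;

# Dispatch table: content type => handler. The multipart handlers
# recurse back into get_parts.
my %decoder;
%decoder = (
    'text/plain'              => sub { $_[0]->body },
    'text/html'               => sub { $_[0]->body },
    'message/delivery-status' => sub { $_[0]->body },
    'multipart/mixed'         => sub { get_parts($_[0]->parts) },
    'multipart/related'       => sub { get_parts($_[0]->parts) },
    'multipart/alternative'   => sub {
        # Simplified: alternative parts run least- to most-preferred,
        # so keep only the last one we managed to decode.
        my @inner = get_parts($_[0]->parts);
        return @inner ? $inner[-1] : ();
    },
);

sub get_parts {
    my @results;
    for my $part (@_) {
        # Strip parameters such as "; charset=utf-8" from the type.
        (my $type = lc($part->content_type || 'text/plain')) =~ s/\s*;.*//s;
        my $handler = $decoder{$type} or next;
        push @results, grep { defined } $handler->($part);
    }
    return @results;
}

# Usage: my @content = get_parts(Email::MIME->new($raw_email)->parts);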
One more item I should mention: as part of this, we created a separate domain that actually serves these messages. The main domain an admin works on will refuse to serve the message and redirect the browser to our user-content domain. That second domain serves only user content. This helps the browser properly sandbox the content away from our main domain. See the same-origin policy (http://en.wikipedia.org/wiki/Same_origin_policy).
It doesn't sound like a difficult job to me:
use Email::MIME;

my $parsed = Email::MIME->new($message);
my @parts = $parsed->parts; # These will be Email::MIME objects, too.

print <<EOF;
<html><head><title>!</title></head><body>
EOF

for my $part (@parts) {
    my $content_type = $part->content_type;
    if ($content_type eq "text/plain") {
        print "<pre>", $part->body, "</pre>\n";
    }
    elsif ($content_type eq "text/html") {
        print $part->body;
    }
    # Handle some more cases here
}

print <<EOF;
</body></html>
EOF
Reuse existing complete software. The MHonArc mail-to-HTML converter has excellent MIME support.
I am using WWW::Mechanize and currently handling HTTP responses with the 'Content-Encoding: gzip' header in my code by first checking the response headers and then using IO::Uncompress::Gunzip to get the uncompressed content.
However, I would like to do this transparently, so that WWW::Mechanize methods like form(), links(), etc. work on and parse the uncompressed content. Since WWW::Mechanize is a subclass of LWP::UserAgent, I would prefer to use the LWP::UA handlers to do this.
While I have been partly successful (I can print the uncompressed content for example), I am unable to do this transparently in a way that I can call
$mech->forms();
In summary: How do I "replace" the content inside the $mech object so that from that point onwards, all WWW::Mechanize methods work as if the Content-Encoding never happened?
I would appreciate your attention and help.
Thanks
WWW::Mechanize::GZip, I think.
It looks to me like you can replace it by using the $res->content( $bytes ) method.
By the way, I found this stuff by looking at the source of LWP::UserAgent, then HTTP::Response, then HTTP::Message.
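Putting that together, a response_done handler might look like this (a sketch, untested across LWP/Mechanize versions; the idea is to rewrite the response inside the handler, before Mechanize parses it):

$mech->add_handler(
    response_done => sub {
        my ($response, $ua, $handler) = @_;
        if (($response->header('Content-Encoding') // '') eq 'gzip') {
            # Decode the body (bytes only, no charset handling) and
            # make the response look like it was never compressed.
            my $plain = $response->decoded_content(charset => 'none');
            $response->content($plain);
            $response->remove_header('Content-Encoding');
            $response->header('Content-Length' => length $plain);
        }
    },
);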
It is built into UserAgent, and thus Mechanize. One MAJOR caveat to save you some hair:
To debug, make sure you check for the error in $@ after the call to decoded_content.
$html = $r->decoded_content;
die $@ if $@;
Better yet, look through the source of HTTP::Message and make sure all the supporting packages are there.
In my case, decoded_content returned undef while content was raw binary, and I went on a wild goose chase. UserAgent will set the error flag on failure to decode, but Mechanize will just ignore it (it doesn't check or log the incident as its own error/warning).
In my case $@ said: "Can't find IO/HTML.pm .. It was eval'ed".
After having to dive into the source, I found out that the built-in decoding process is long, meticulous, and arduous, covering just about every scenario and making tons of guesses (thank you, Gisle!).
If you are paranoid, explicitly set the default header to be used with every request at new():
my $browser = WWW::Mechanize->new(
    default_headers => HTTP::Headers->new(
        'Accept-Encoding' => scalar HTTP::Message::decodable()
    )
);
I have a client/server system that performs communication using XML transferred using HTTP requests and responses with the client using Perl's LWP and the server running Perl's CGI.pm through Apache. In addition the stream is encrypted using SSL with certificates for both the server and all clients.
This system works well, except that periodically the client needs to send really large amounts of data. An obvious solution would be to compress the data on the client side, send it over, and decompress it on the server. Rather than implement this myself, I was hoping to use Apache's mod_deflate's "Input Decompression" as described here.
The description warns:
If you evaluate the request body yourself, don't trust the Content-Length header! The Content-Length header reflects the length of the incoming data from the client and not the byte count of the decompressed data stream.
So if I provide a Content-Length value which matches the compressed data size, the data is truncated. This is because mod_deflate decompresses the stream, but CGI.pm only reads to the Content-Length limit.
Alternatively, if I try to outsmart it and override the Content-Length header with the decompressed data size, LWP complains and resets the value to the compressed length, leaving me with the same problem.
Finally, I attempted to hack the part of LWP which does the correction. The original code is:
# Set (or override) Content-Length header
my $clen = $request_headers->header('Content-Length');
if (defined($$content_ref) && length($$content_ref)) {
    $has_content = length($$content_ref);
    if (!defined($clen) || $clen ne $has_content) {
        if (defined $clen) {
            warn "Content-Length header value was wrong, fixed";
            hlist_remove(\@h, 'Content-Length');
        }
        push(@h, 'Content-Length' => $has_content);
    }
}
elsif ($clen) {
    warn "Content-Length set when there is no content, fixed";
    hlist_remove(\@h, 'Content-Length');
}
And I changed the push line to:
push(@h, 'Content-Length' => $clen);
Unfortunately, this causes a problem where the content (truncated or not) doesn't even reach my CGI script.
Has anyone made this work? I found this which does compression on a file before uploading, but not compressing a generic request.
Although you said you didn't want to do the compression yourself, there are lots of Perl modules which will do both sides for you, Compress::Zlib for example.
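For example, the client side might look something like this (a rough sketch; the URL is a placeholder, and the server still has to gunzip the body itself or via mod_deflate):

use LWP::UserAgent;
use Compress::Zlib qw(memGzip);

# Compress the XML payload in memory and label it as gzip.
my $gzipped = memGzip($xml);
my $ua = LWP::UserAgent->new;
my $res = $ua->post(
    'https://example.com/upload',   # placeholder URL
    'Content-Type'     => 'application/xml',
    'Content-Encoding' => 'gzip',
    Content            => $gzipped,
);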
I have a cheat (with a .NET part of the company): I get the XML passed in as a separate POST parameter, and can then handle it as if it were a string, rather than faffing about with SOAP-like stuff.
I don't think you can change the Content-Length like that. It would confuse Apache, because mod_deflate wouldn't know how much compressed data to read. What about having the client add an X-Uncompressed-Length header, and then use a modified version of CGI.pm that uses X-Uncompressed-Length (if present) instead of Content-Length? (Actually, you probably don't need to modify CGI.pm. Just set $ENV{'CONTENT_LENGTH'} to the appropriate value before initializing the CGI object or calling any CGI functions.)
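A sketch of that environment fix-up on the server side (hedged: X-Uncompressed-Length is the hypothetical header suggested above, not a standard one):

# Fix CONTENT_LENGTH before CGI.pm reads the body; require (not use)
# keeps CGI from loading at compile time, before the fix-up runs.
if (my $real_len = $ENV{HTTP_X_UNCOMPRESSED_LENGTH}) {
    $ENV{CONTENT_LENGTH} = $real_len;
}
require CGI;
my $cgi = CGI->new;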
Or, use a lower-level module that uses the bucket brigade to tell how much data to read.
I am not sure if I am following what you want, but I have a custom GET/POST module that I use to do some non-standard stuff. The code below will read in anything sent via POST, from STDIN.
read(STDIN, $query_string, $ENV{'CONTENT_LENGTH'});
Instead of using $ENV's value, use yours. I hope this helps, and sorry if it doesn't.