I am building a web server using Apache and Perl CGI which processes the POST requests sent to it. The processing part requires me to get the completely unprocessed data from the request and verify its signature.
The client sends two different kinds of POST requests: one with the Content-Type set to application/json, and the other with Content-Type application/x-www-form-urlencoded.
I was able to fetch the application/json data using $cgi->param('POSTDATA'). But if I do the same for the application/x-www-form-urlencoded data, i.e. $cgi->param('payload'), I get the data already decoded. I want the data in its original URL-encoded format, i.e. the unprocessed data exactly as the client sent it.
I am doing this for verifying requests sent out by Slack.
To handle all cases, including those where the Content-Type is multipart/form-data, read (and put back) the raw data before CGI.pm does.
use strict;
use warnings;
use IO::Handle;
use IO::Scalar;

STDIN->blocking(1);    # make sure we read everything

my $cgi_raw = '';
{
    local $/;                        # slurp mode
    $cgi_raw = <STDIN>;              # grab the raw request body

    # Put the data back by tying STDIN to an in-memory scalar,
    # so CGI.pm can still read and parse it as usual.
    my $s;
    tie *STDIN, 'IO::Scalar', \$s;
    print STDIN $cgi_raw;
    tied(*STDIN)->setpos(0);
}

use CGI qw(:standard);
...
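With $cgi_raw in hand, the signature check for Slack can be done with Digest::SHA. This is a minimal sketch assuming Slack's standard v0 signing scheme; the SLACK_SIGNING_SECRET environment variable is an assumption about how the secret is made available (e.g. via SetEnv in Apache).

use Digest::SHA qw(hmac_sha256_hex);

# Request headers show up as environment variables under CGI.
my $timestamp = $ENV{HTTP_X_SLACK_REQUEST_TIMESTAMP} // '';
my $their_sig = $ENV{HTTP_X_SLACK_SIGNATURE}         // '';
my $secret    = $ENV{SLACK_SIGNING_SECRET}           // '';  # assumed to be configured

# Slack signs the string "v0:<timestamp>:<raw body>" with HMAC-SHA256.
my $base_string = "v0:$timestamp:$cgi_raw";
my $my_sig      = 'v0=' . hmac_sha256_hex($base_string, $secret);

die "Slack signature mismatch\n" unless $my_sig eq $their_sig;

In production you would also want to reject stale timestamps and use a constant-time comparison.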
I'm not sure which Perl module can handle it all for you, but here is a basic rundown.
Your HTML form should submit to a .cgi file (or any other handler which is properly defined).
The raw request looks something like this:
POST /handler.cgi HTTP/1.1
User-Agent: Mozilla/5.0
Content-Length: 27
Host: 127.0.0.1
(more headers depending on the situation, then a single blank line)
(message body containing the data)
username=John&password=123J (example)
(See https://en.wikipedia.org/wiki/List_of_HTTP_header_fields for a list of header fields.)
This data is made available to your program by the CGI mechanism itself (not the Perl CGI module, CGI.pm) through environment variables and STDIN: the header fields are passed as environment variables and the message body is passed on STDIN.
In Perl, you can read those environment variables via the %ENV hash (the Env module, http://perldoc.perl.org/Env.html, offers another interface).
And this explains reading STDIN: https://perlmaven.com/read-from-stdin
From there on, you can process the data as needed; a bare-bones sketch follows below.
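A bare-bones sketch without CGI.pm (variable names are illustrative, not taken from any particular module):

#!/usr/bin/perl
use strict;
use warnings;

# Header fields arrive as environment variables...
my $content_type   = $ENV{CONTENT_TYPE}   // '';
my $content_length = $ENV{CONTENT_LENGTH} // 0;

# ...and the message body arrives on STDIN.
my $body = '';
read STDIN, $body, $content_length if $content_length;

print "Content-Type: text/plain\r\n\r\n";
print "Got $content_length bytes of $content_type:\n$body\n";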
BE CAREFUL when reading any of these. You can be sent malformed input, such as 100 GB of "valid" data in one of the HTTP headers or in the message body, which can wreak havoc on your script, trigger dangerous system calls, and so on. Sanitizing the data is necessary before passing it anywhere else.
I need a simple CGI based Perl script to receive a POST (directly, not from another HTML page) with Content-Type being application/x-www-form-urlencoded and to echo back
I received: (encoded string)
(and if possible)
decoded, the string is: (decoded string)
I am new to CGI Perl, and this is a one-off request for testing a product (I'm a sysadmin, not a programmer). I intend to learn Perl more deeply in the future, but in this case I'm hoping for a gimme.
To start off, I will quickly skim some of the basics.
Following is the module to load for a Perl CGI application:
use CGI;
To create CGI object:
my $web = CGI->new;
Make sure you set and print the HTTP headers to the output stream before writing any CGI data to it; otherwise you will end up with a 500 error.
To print the header (with the default content type, or with an explicit one):
print $web->header();
print $web->header('application/x-www-form-urlencoded');
To receive posted data, for example a parameter named POSTDATA as in
http://example.com?POSTDATA=helloworld
you may use the param() function:
my $data = $web->param('POSTDATA');
The scalar $data will be set to "helloworld".
It is advisable to check that $web->param('POSTDATA') is defined before you assign it to a scalar.
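Putting it together, a sketch of the requested echo script might look like the following. Note that CGI.pm has already decoded application/x-www-form-urlencoded parameters by the time param() returns them, so the "encoded" line below is reconstructed by re-encoding with URI::Escape rather than being the literal bytes the client sent:

#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use URI::Escape qw(uri_escape);

my $q = CGI->new;
print $q->header('text/plain');

# Re-encode each name=value pair to approximate the original encoded string.
my @pairs;
for my $name ($q->param) {
    push @pairs, uri_escape($name) . '=' . uri_escape(scalar($q->param($name)) // '');
}
my $encoded = join '&', @pairs;
my $decoded = join ', ', map { "$_=" . (scalar($q->param($_)) // '') } $q->param;

print "I received: $encoded\n";
print "decoded, the string is: $decoded\n";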
I am currently writing a Perl script that converts a file to webm/ogg/mp4 format and then sends it back to the user as an embedded video. It all works, except that I cannot signal the end of the file, so the HTML5 video player does not know where the end is and cannot use the file properly (for example seeking to a specific time, or knowing when the video has ended; right now playback just stops and you can no longer do anything with the video).
Start-code:
elsif ($path =~ /^\/((\w|\d){11})\.webm$/ig) {
    print "HTTP/1.0 200 OK\r\n";
    $handler = \&resp_youtubemovie;
    $handler->($cgi, $1);
}
Function to send webm file
sub resp_youtubemovie {
    my $cgi       = shift;
    my $youtubeID = shift;
    return if !ref $cgi;

    open(movie, "<$youtubeID.webm");
    local($/);
    $movie = <movie>;
    close(movie);

    print "Content-type: movie/webm\n";
    print $movie;
}
I've already tried a while loop with a buffer, but that doesn't work either. I also tried changing the HTTP status code to 206 Partial Content, because Wireshark showed other video streaming websites using it, but that didn't matter. So, does anyone have an idea how to open a movie file and stream it correctly?
Rather than doing this by hand, a framework like Dancer can take care of this. This will save you many, many, many headaches. It also allows you to take advantage of the Plack/PSGI superglue which figures out how to talk to web servers for you.
use Dancer;

get qr{/(\w{11}\.webm)$}i => sub {
    my ($video_file) = splat;
    return send_file(
        $video_file,
        streaming => 1,
    );
};
Using Dancer routes, you should be able to adapt your existing code pretty easily, especially if it's a big if/elsif chain matching against various paths. Dancer does a very good job of making simple things simple, and it also gives you a huge amount of control over the exact HTTP response if you need it.
A few notes...
The content-type for webm is video/webm which may be the source of your problems. Dancer should just get it right. If not you can tell send_file the content type explicitly.
(\w|\d){11} is better written as \w{11} since \w includes \d.
You must use the 206 Partial Content HTTP status, and you must also send:
An Accept-Ranges: bytes header.
A Content-Range: bytes 0-2048/123456 header, where you send the starting and ending byte index of the content followed by the total byte length of the content. The client sends the byte ranges it wants in the request header. The client may send multiple byte ranges in a single request, in which case you'd also need to send the content with multipart boundaries.
Finally, to get back to your question, if the client requests a byte range that isn't satisfiable then you send a 416 HTTP status and close the connection.
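For illustration, here is a rough, hand-rolled sketch (not production code, and not the Dancer approach above) of what a 206 response for a single requested byte range could look like, assuming the range has already been parsed from the client's Range header:

# Hypothetical helper: $fh is an open filehandle on the .webm file,
# ($start, $end) the requested byte range, $total the file size in bytes.
sub send_partial_content {
    my ($fh, $start, $end, $total) = @_;
    my $length = $end - $start + 1;

    print "HTTP/1.0 206 Partial Content\r\n";
    print "Content-Type: video/webm\r\n";
    print "Accept-Ranges: bytes\r\n";
    print "Content-Range: bytes $start-$end/$total\r\n";
    print "Content-Length: $length\r\n";
    print "\r\n";                                  # blank line ends the headers

    seek $fh, $start, 0 or die "seek failed: $!";
    read $fh, my $chunk, $length;
    print $chunk;
}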
I'm writing a small Perl tool which should help me speed up some steps of a blind SQL injection attack (it's an ethical tool; it's my job).
My script sends HTTP requests whose URLs are already URL-encoded with hex values (%xx).
The problem is that my request appears to be encoded a second time when I use HTTP::Request to send it to the web server.
I use this kind of code:
use LWP::UserAgent;
use HTTP::Request;

my $ua      = LWP::UserAgent->new;
my $httpreq = HTTP::Request->new(GET => 'http://192.168.0.1/lab/sqli.php?id=1%20and%20(select%20ascii(substring(user,3,1))%20from%20mysql.user%20limit%201)>100%23');
my $res     = $ua->request($httpreq);
How can I disable the perl URL encoding inside my request?
HTTP::Request does not modify the provided URL.
Any URL encoding must be done before the URL is assembled (it is actually the URL components that get encoded), so HTTP::Request expects the encoding to already be done.
>perl -MHTTP::Request -e"print HTTP::Request->new(GET => 'http://192.168.0.1/lab/sqli.php?id=1%20and%20(select%20ascii(substring(user,3,1))%20from%20mysql.user%20limit%201)>100%23')->as_string;"
GET http://192.168.0.1/lab/sqli.php?id=1%20and%20(select%20ascii(substring(user,3,1))%20from%20mysql.user%20limit%201)%3E100%23
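In other words, if you build such URLs yourself, escape each component once and then assemble the result. A minimal sketch using URI::Escape (just one of several ways to do this; the payload string is only an example):

use strict;
use warnings;
use URI::Escape qw(uri_escape);
use LWP::UserAgent;
use HTTP::Request;

# Escape the query value yourself, exactly once, then assemble the URL.
my $payload = '1 and (select ascii(substring(user,3,1)) from mysql.user limit 1)>100#';
my $url     = 'http://192.168.0.1/lab/sqli.php?id=' . uri_escape($payload);

my $ua  = LWP::UserAgent->new;
my $res = $ua->request(HTTP::Request->new(GET => $url));
print $res->status_line, "\n";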
I'm trying to write a Perl CGI script to handle XML-RPC requests, in which an XML document is sent as the body of an HTTP POST request.
The CGI.pm module does a great job at extracting named params from an HTTP request, but I can't figure out how to make it give me the entire HTTP request body (i.e. the XML document in the XML-RPC request I'm handling).
If not CGI.pm, is there another module that would be able to parse this information out of the request? I'd prefer not to have to extract this information "by hand" from the environment variables. Thanks for any help.
You can get the raw POST data by using the special parameter name POSTDATA.
my $q = CGI->new;
my $xml = $q->param( 'POSTDATA' );
Alternatively, you could read STDIN directly instead of using CGI.pm, but then you lose all the other useful stuff that CGI.pm does.
The POSTDATA trick is documented in the excellent CGI.pm documentation.
Right, one could use POSTDATA, but that only works if the request Content-Type has not been set to 'multipart/form-data'.
If it is set to 'multipart/form-data', CGI.pm does its own content processing and POSTDATA is not initialized.
So, other options include $cgi->query_string and/or $cgi->Dump.
$cgi->query_string returns the contents of the POST re-encoded in GET format (param=value&...), but there doesn't seem to be a way to simply get the contents of the POST body on STDIN exactly as the client passed them in.
So to get the actual content of the standard input of a POST request, if modifying CGI.pm is an option for you, you could change it around line 620 to save the content of @lines somewhere in a variable, such as:
$self->{standard_input} = join '', @lines;
And then access it through $cgi->{standard_input}.
I have a client/server system that performs communication using XML transferred using HTTP requests and responses with the client using Perl's LWP and the server running Perl's CGI.pm through Apache. In addition the stream is encrypted using SSL with certificates for both the server and all clients.
This system works well, except that periodically the client needs to send really large amounts of data. An obvious solution would be to compress the data on the client side, send it over, and decompress it on the server. Rather than implement this myself, I was hoping to use Apache's mod_deflate's "Input Decompression" as described here.
The description warns:
If you evaluate the request body yourself, don't trust the Content-Length header! The Content-Length header reflects the length of the incoming data from the client and not the byte count of the decompressed data stream.
So if I provide a Content-Length value which matches the compressed data size, the data is truncated. This is because mod_deflate decompresses the stream, but CGI.pm only reads to the Content-Length limit.
Alternatively, if I try to outsmart it and override the Content-Length header with the decompressed data size, LWP complains and resets the value to the compressed length, leaving me with the same problem.
Finally, I attempted to hack the part of LWP which does the correction. The original code is:
# Set (or override) Content-Length header
my $clen = $request_headers->header('Content-Length');
if (defined($$content_ref) && length($$content_ref)) {
    $has_content = length($$content_ref);
    if (!defined($clen) || $clen ne $has_content) {
        if (defined $clen) {
            warn "Content-Length header value was wrong, fixed";
            hlist_remove(\@h, 'Content-Length');
        }
        push(@h, 'Content-Length' => $has_content);
    }
}
elsif ($clen) {
    warn "Content-Length set when there is no content, fixed";
    hlist_remove(\@h, 'Content-Length');
}
And I changed the push line to:
push(@h, 'Content-Length' => $clen);
Unfortunately this causes some problem where content (truncated or not) doesn't even get to my CGI script.
Has anyone made this work? I found this which does compression on a file before uploading, but not compressing a generic request.
Although you said you didn't want to do the compression yourself, there are plenty of Perl modules which will handle both sides for you; Compress::Zlib, for example.
I use a cheat (with the .NET part of the company) where the XML is passed in as a separate POST parameter; I can then handle it as a plain string rather than faffing about with SOAP-like machinery.
I don't think you can change the Content-Length like that. It would confuse Apache, because mod_deflate wouldn't know how much compressed data to read. What about having the client add an X-Uncompressed-Length header, and then use a modified version of CGI.pm that uses X-Uncompressed-Length (if present) instead of Content-Length? (Actually, you probably don't need to modify CGI.pm. Just set $ENV{'CONTENT_LENGTH'} to the appropriate value before initializing the CGI object or calling any CGI functions.)
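A hedged sketch of that idea (the X-Uncompressed-Length header name is just the one proposed above; under CGI it shows up as HTTP_X_UNCOMPRESSED_LENGTH):

use strict;
use warnings;
use CGI;

# If the client sent the uncompressed size, let CGI.pm read that many
# bytes of the body (which mod_deflate has already decompressed).
if (my $uncompressed = $ENV{HTTP_X_UNCOMPRESSED_LENGTH}) {
    $ENV{CONTENT_LENGTH} = $uncompressed;
}

my $q = CGI->new;   # CGI->new consults CONTENT_LENGTH when reading the POST body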
Or, use a lower-level module that uses the bucket brigade to tell how much data to read.
I am not sure if I am following what you want, but I have a custom GET/POST module that I use for some non-standard things. The code below will read in anything sent via POST on STDIN.
read(STDIN, my $query_string, $ENV{'CONTENT_LENGTH'});
Instead of using the value from $ENV{'CONTENT_LENGTH'}, use your own (for example, the uncompressed length suggested above). I hope this helps, and sorry if it doesn't.