I'm writing a small Perl tool to help me speed up some processes during a blind SQL injection attack (it's an ethical tool; it's my job).
My script manages HTTP requests that are already URL-encoded with hex values (%xx).
As a result, my request appears to be encoded twice when I use HTTP::Request to send it to the web server.
I use this kind of code:
my $ua = LWP::UserAgent->new;
my $httpreq = HTTP::Request->new(GET => 'http://192.168.0.1/lab/sqli.php?id=1%20and%20(select%20ascii(substring(user,3,1))%20from%20mysql.user%20limit%201)>100%23');
my $res = $ua->request($httpreq);
How can I disable the perl URL encoding inside my request?
HTTP::Request does not modify the provided URL.
Any URL encoding must be done before the URL is assembled (it is actually the URL components that get encoded), so HTTP::Request expects the encoding to already be done. Existing %xx escapes are left untouched; only characters that are illegal in a URL, such as the bare > in your query string, get escaped:
>perl -MHTTP::Request -e"print HTTP::Request->new(GET => 'http://192.168.0.1/lab/sqli.php?id=1%20and%20(select%20ascii(substring(user,3,1))%20from%20mysql.user%20limit%201)>100%23')->as_string;"
GET http://192.168.0.1/lab/sqli.php?id=1%20and%20(select%20ascii(substring(user,3,1))%20from%20mysql.user%20limit%201)%3E100%23
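A quick way to convince yourself of this, assuming LWP/URI are installed: build the request exactly as in the question and print the URI it will actually send. The existing %xx escapes survive untouched; only the bare > gets escaped.

```perl
use strict;
use warnings;
use HTTP::Request;

# Build the request exactly as in the question and inspect the URI.
my $url = 'http://192.168.0.1/lab/sqli.php?id=1%20and%20(select%20ascii(substring(user,3,1))%20from%20mysql.user%20limit%201)>100%23';
my $req  = HTTP::Request->new(GET => $url);
my $sent = $req->uri->as_string;
print "$sent\n";
# %20 and %23 are left as-is (no double encoding); only > becomes %3E.
```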
I am building a web server using Apache and Perl CGI which processes the POST requests sent to it. The processing part requires me to get the completely unprocessed data from the request and verify its signature.
The client sends two different kinds of POST requests: one with the Content-Type set to application/json, and the second with Content-Type application/x-www-form-urlencoded.
I was able to fetch the application/json data using $cgi->param('POSTDATA'). But if I do the same for the application/x-www-form-urlencoded data, i.e. $cgi->param('payload'), the data I get has already been decoded. I want the data in its original URL-encoded form, i.e. the unprocessed data exactly as sent by the client.
I am doing this to verify requests sent out by Slack.
To handle all cases, including those when Content-Type is multipart/form-data, read (and put back) the raw data, before CGI does.
use strict;
use warnings;
use IO::Handle;
use IO::Scalar;

STDIN->blocking(1);  # make sure the read below gets everything

my $cgi_raw = '';
{
    local $/;                        # slurp mode
    $cgi_raw = <STDIN>;              # grab the raw request body
    my $s;
    tie *STDIN, 'IO::Scalar', \$s;   # replace STDIN with an in-memory handle
    print STDIN $cgi_raw;            # put the raw data back
    tied(*STDIN)->setpos(0);         # rewind so CGI can read it
}

use CGI qw(:standard);
...
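Since the question mentions verifying requests from Slack: with the raw body captured in $cgi_raw as above, the signature check can be sketched with Digest::SHA. The header names and v0 signing scheme below follow Slack's published scheme, but verify them against Slack's current documentation.

```perl
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

# Slack v0 scheme: HMAC-SHA256 over "v0:<timestamp>:<raw body>" with the
# app's signing secret, compared against the X-Slack-Signature header.
sub slack_signature {
    my ($secret, $timestamp, $raw_body) = @_;
    return 'v0=' . hmac_sha256_hex("v0:$timestamp:$raw_body", $secret);
}

# In the CGI script, after the raw read shown above:
#   my $expected = slack_signature($signing_secret,
#       $ENV{HTTP_X_SLACK_REQUEST_TIMESTAMP}, $cgi_raw);
#   my $ok = $expected eq ($ENV{HTTP_X_SLACK_SIGNATURE} // '');
```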
I'm not sure which Perl module can handle it all for you, but here is a basic rundown.
Your HTML form should submit to a .cgi file (or any other handler which is properly defined).
The raw request looks something like this:

POST / HTTP/1.1
User-Agent: Mozilla/5.0
Content-Length: 27
Host: 127.0.0.1
(more headers depending on the situation, then a single blank line)
(message body containing the data)
username=John&password=123J (example)

(See https://en.wikipedia.org/wiki/List_of_HTTP_header_fields for the standard header fields.)
What happens is that this data is made available to the CGI program (not the CGI Perl module, aka CGI.pm) through environment variables and STDIN: the header fields are passed via environment variables and the message body via STDIN.
In Perl, you can read those environment variables through %ENV (see http://perldoc.perl.org/Env.html)
and read STDIN as described here: https://perlmaven.com/read-from-stdin
From there on, you can process as needed.
BE CAREFUL when reading any of these. You can be sent malformed information, like 100 GB of "valid" data in one of the HTTP headers or in the message body, which can wreak havoc on your program, trigger dangerous system calls, etc. Sanitizing is necessary before passing the data anywhere else.
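A minimal sketch of that rundown, assuming a standard CGI environment (header fields in %ENV, message body on STDIN); the 1 MB cap is an arbitrary illustration of the sanity checking just mentioned:

```perl
use strict;
use warnings;

# read_raw_body($fh, $length): read exactly $length bytes of message
# body from a handle, refusing absurd lengths up front.
sub read_raw_body {
    my ($fh, $length) = @_;
    die "Body too large\n" if $length > 1_000_000;   # arbitrary cap
    my $body = '';
    read $fh, $body, $length if $length;
    return $body;
}

# In a real CGI script the server provides the environment variables:
#   my $type = $ENV{CONTENT_TYPE}   // '';
#   my $body = read_raw_body(\*STDIN, $ENV{CONTENT_LENGTH} // 0);
```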
I was using an open source tool called SimTT which fetches the URL of a table tennis league and then calculates the probable results (e.g. ranking of teams and players). Unfortunately the page moved to a different URL.
I downloaded the source and repaired the parsing of the page, but currently I'm only able to download the page manually and then read it from a file.
Below you can find an excerpt of my code to retrieve the page. It prints "Success", but the response is empty. Unfortunately I'm not very familiar with Perl and web techniques, but in Wireshark I could see that one of the last things sent was a new session key. I'm not sure whether the problem is related to cookies, SSL or something like that.
It would be very nice if someone could help me get access. I know there are people out there who would like to use the tool.
So here's the code:
use LWP::UserAgent ();
use HTTP::Cookies;
my $ua = LWP::UserAgent->new(keep_alive => 1);
$ua->agent('Mozilla/5.0');
$ua->cookie_jar({});

my $request = HTTP::Request->new('GET', 'https://www.mytischtennis.de/clicktt/ByTTV/18-19/ligen/Bezirksoberliga/gruppe/323819/mannschaftsmeldungen/vr');
my $response = $ua->request($request);

if ($response->is_success) {
    print "Success: ", $response->decoded_content;
}
else {
    die $response->status_line;
}
Either there is some rudimentary anti-bot protection at the server, or the server is misconfigured or otherwise broken. It looks like it expects an Accept-Encoding header in the request, which LWP does not send by default. The value of this header does not really seem to matter, i.e. the server will send the content compressed with gzip if the client claims to support it, but it will send uncompressed data if the client offers only a compression method unknown to the server.
With this knowledge one can change the code like this:
my $request = HTTP::Request->new('GET',
    'https://www.mytischtennis.de/clicktt/ByTTV/18-19/ligen/Bezirksoberliga/gruppe/323819/mannschaftsmeldungen/vr',
    [ 'Accept-Encoding' => 'foobar' ]
);
With this simple change the code currently works for me. Note that it might stop working at any time if the server setup changes, i.e. it might then need other workarounds.
I'm trying to pass a URL as a query string so it can be read by another website and then used:
www.example.com/domain?return_url=/another/domain
Gets returned as:
www.example.com/domain?return_url=%2Fanother%2Fdomain
Is there a way this URL can be read and parsed by the other application with the escaped characters?
The only way I can think of is to encode it somehow so it comes out like this:
www.example.com/domain?return_url=L2Fub3RoZXIvZG9tYWlu
so the other application can then decode and use it?
https://www.base64encode.org/
www.example.com/domain?return_url=%2Fanother%2Fdomain
This is called URL encoding. Not because you put a URL in it, but because it encodes characters that have a special meaning in a URL.
The %2F corresponds to a slash /. You've probably also seen %20 before, which is a space.
Putting a complete URI into a URL parameter of another URI is totally fine:
http://example.org?url=http%3A%2F%2Fexample.org%2Ffoo%3Fbar%3Dbaz
The application behind the URL you are calling needs to be able to understand URL encoding, but that is a trivial thing. Typical web frameworks and interfaces to the web (like CGI.pm or Plack in Perl) will do it for you. You should not have to care about it at all.
To URL-encode something in Perl, you have several options.
You could use the URI module to create the whole URI including the URL encoded query.
use URI;
my $u = URI->new("http://example.org");
$u->query_form( return_url => "http://example.org/foo/bar?baz=qrr");
print $u;
__END__
http://example.org?return_url=http%3A%2F%2Fexample.org%2Ffoo%2Fbar%3Fbaz%3Dqrr
This seems like the natural thing to do.
You could also use the URI::Encode module, which gives you a uri_encode function. That's useful if you want to encode strings without building a URI object.
use URI::Encode qw(uri_encode uri_decode);

my $data    = '/another/domain';   # example value
my $encoded = uri_encode($data);
my $decoded = uri_decode($encoded);
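For illustration, here is a similar round-trip using URI::Escape, which ships in the same URI distribution and also works on plain strings; the example value is taken from the question:

```perl
use strict;
use warnings;
use URI::Escape qw(uri_escape uri_unescape);

my $data    = '/another/domain';       # example value from the question
my $encoded = uri_escape($data);       # escapes the slashes as %2F
my $decoded = uri_unescape($encoded);  # restores the original
print "$encoded\n$decoded\n";
```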
All of this is a normal part of how the web works. There is no need to do Base 64 encoding.
The correct way would be to uri-encode the second hop as you do in your first example. The URI and URI::QueryParam modules make this nice and easy:
To encode a URI, you simply create a URI object on your base url. Then add any query parameters that you want. (NOTE: they will be automatically uri-encoded by URI::QueryParam):
use strict;
use warnings;
use feature qw(say);
use URI;
use URI::QueryParam;
my $u = URI->new('http://www.example.com/domain');
$u->query_param_append('return_url', 'http://yahoo.com');
say $u->as_string;
# http://www.example.com/domain?return_url=http%3A%2F%2Fyahoo.com
To receive this url and then redirect to return_url, you simply create a new URI object then pull off the return_url query parameter with URI::QueryParam. (NOTE: again URI::QueryParam automatically uri-decodes the parameter for you):
my $u = URI->new(
'http://www.example.com/domain?return_url=http%3A%2F%2Fyahoo.com'
);
my $return_url = $u->query_param('return_url');
say $return_url;
# http://yahoo.com
I need a simple CGI-based Perl script to receive a POST (directly, not from another HTML page) with Content-Type application/x-www-form-urlencoded and to echo back
I received: (encoded string)
and, if possible,
decoded, the string is: (decoded string)
I am new to CGI and Perl, and this is a one-off request for testing a product (I'm a sysadmin, not a programmer). I intend to learn Perl more deeply in the future, but in this case I'm hoping for a gimme.
To start off, I will quickly skim some of the basics.
The module to use for a Perl CGI application is:
use CGI;
To create CGI object:
my $web = CGI->new;
Make sure you set and then write the HTTP headers to the output stream before flushing any CGI data to it. Otherwise you will end up with a 500 error.
To set the headers:
print $web->header();
print $web->header('application/x-www-form-urlencoded');
To receive POST data from a client, say for example
http://example.com?POSTDATA=helloworld
you may use the param() function:
my $data = $web->param('POSTDATA');
The scalar $data would then be set to "helloworld".
It is advisable to check that $web->param('POSTDATA') is defined before you assign it to a scalar.
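Putting this together, here is a minimal sketch of the requested echo script. One caveat: as another answer on this page notes, CGI.pm may process an application/x-www-form-urlencoded body itself rather than exposing POSTDATA, so this sketch reads the raw body straight from STDIN; uri_unescape from the URI::Escape module (an extra dependency) handles the decoding step.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use URI::Escape qw(uri_unescape);

# decode_form($s): reverse x-www-form-urlencoded encoding,
# where '+' means space and %xx are hex escapes.
sub decode_form {
    my ($s) = @_;
    $s =~ tr/+/ /;
    return uri_unescape($s);
}

# Read the raw POST body directly, before any CGI processing.
my $length = $ENV{CONTENT_LENGTH} // 0;
my $raw    = '';
read STDIN, $raw, $length if $length;

print "Content-Type: text/plain\r\n\r\n";
print "I received: $raw\n";
print "decoded, the string is: ", decode_form($raw), "\n";
```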
I'm trying to write a Perl CGI script to handle XML-RPC requests, in which an XML document is sent as the body of an HTTP POST request.
The CGI.pm module does a great job at extracting named params from an HTTP request, but I can't figure out how to make it give me the entire HTTP request body (i.e. the XML document in the XML-RPC request I'm handling).
If not CGI.pm, is there another module that would be able to parse this information out of the request? I'd prefer not to have to extract this information "by hand" from the environment variables. Thanks for any help.
You can get the raw POST data by using the special parameter name POSTDATA.
my $q = CGI->new;
my $xml = $q->param( 'POSTDATA' );
Alternatively, you could read STDIN directly instead of using CGI.pm, but then you lose all the other useful stuff that CGI.pm does.
The POSTDATA trick is documented in the excellent CGI.pm documentation.
Right, one could use POSTDATA, but that only works if the request Content-Type has not been set to 'multipart/form-data'.
If it is set to 'multipart/form-data', CGI.pm does its own content processing and POSTDATA is not initialized.
So, other options include $cgi->query_string and/or $cgi->Dump.
The $cgi->query_string method returns the contents of the POST in GET format (param=value&...), and there doesn't seem to be a way to simply get the contents of the POST STDIN as they were passed in by the client.
So, to get the actual content of the standard input of a POST request, if modifying CGI.pm is an option for you, you could modify it around line 620 to save the contents of @lines in a variable, such as:
$self->{standard_input} = join '', @lines;
And then access it through $cgi->{standard_input}.