Why am I getting this error? server_port() works as intended, and as far as I know REMOTE_PORT is a valid environment variable too.
Are there any alternative ways of getting the client's port number?
Here's my full code:
#!/usr/bin/perl -w
use CGI qw(:all);
use CGI::Carp qw(fatalsToBrowser);
use strict;
use Cwd;
#################################
my $time = localtime();
my $dir = cwd();
my $parameter = param('name');
my $q = new CGI;
my $addr = $q->remote_host();
my $request = $q->request_method();
my $port = $q->server_port();
print "Content-type:text/html\r\n\r\n";
print '<html>';
print '<head>';
print '<title>Auth2</title>';
print '</head>';
print '<body>';
print "<h1> The time is $time </h1>";
print "<p> Current directory is $dir</p>";
print "<p> Request parameter: $parameter</p>";
print "<p> Remote address: $addr</p>";
print "<p> Remote port: $port</p>";
print "<p> Request method: $request </p>";
print '</body>';
print '</html>';
1;
The available methods are documented in the man page and it looks like remote_port is not one of them.
You say that "afaik REMOTE_PORT is a valid environment variable too."
If it is an environment variable, then you can simply access it as one:
my $port = $ENV{REMOTE_PORT};
Apart from that, it is not clear why you need the remote port number at all: it will usually just be an arbitrary number from the ephemeral port range of the remote system. That may also be the reasoning behind not providing a method to access it.
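If you want the script to keep working when the variable is missing (e.g. when testing from the shell, where the web server hasn't set it), a small sketch along these lines may help; the 'unknown' fallback value is just an assumption for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# REMOTE_PORT is set per-request by the web server; fall back to a
# placeholder so the script can also be exercised from the command line.
sub remote_port {
    return defined $ENV{REMOTE_PORT} ? $ENV{REMOTE_PORT} : 'unknown';
}

print "Content-type: text/plain\r\n\r\n";
print "Remote port: ", remote_port(), "\n";
```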
Related
I have the code shown below: a BEGIN block which loads the session or, if none has been created yet, creates one. But it doesn't work every time. It's a login script: if I enter the PIN and it's wrong, the script displays the login form again, which is submitted to this same script. Up to 3 attempts are permitted, but it will fail to load the session, usually on attempt 2. It's inconsistent, so can anyone see what might be wrong and why the session sometimes fails to load?
I do have warnings enabled and I have shown that in the code.
I used to start the script with 'print $session->header' but, having changed to 'print $cgi->header;', I can see clearly that the session is undefined when the script fails. I should add that if I refresh the failed page, perhaps as many as 5 times, the session does eventually reload with all data intact.
#!/usr/bin/perl
#use CGI::Carp qw/warningsToBrowser fatalsToBrowser/;
use strict;
use warnings 'all';
use CGI qw(:all);
use CGI::Session;
use Crypt::PasswdMD5;
use DBI;
use Data::Dumper;
my $cgi = CGI->new;
my $session;
my $sessions_dir_location;
my $session_id;
BEGIN{
unshift @INC, "/var/www/vhosts/example.com/subDomain.example.com/cgi-bin/library";
my $document_root = $ENV{'DOCUMENT_ROOT'};
$document_root =~ s/\///;
my ( $var
, $www
, $vhosts
, $domain
) = split ('/', $document_root, 5);
$sessions_dir_location = '/' . $var . '/' . $www . '/' . $vhosts . '/' . $domain;
$session = CGI::Session->load() or die CGI::Session->errstr();
if ( $session->is_expired ) {
print $session->header(),
$cgi->start_html(),
$cgi->p("Your session timed out! Refresh the screen to start new session!"),
$cgi->end_html();
exit(0);
}
if ( $session->is_empty ) {
$session = new CGI::Session(undef, undef,
{Directory=>"$sessions_dir_location/sessions_storage/"}) or die CGI::Session->errstr;
}
# add the library dir to @INC
use lib do {
use Cwd 'realpath';
my ($dir) = __FILE__ =~ m{^(.*)/};
realpath("$dir/library");
};
use feature 'say';
use FindBin '$RealBin';
use lib $RealBin;
use lib "$RealBin/library";
}
my $self = $cgi->url;
my %login = $cgi->Vars;
print $cgi->header;
# capture and display warnings
local $SIG{__WARN__} = sub {
my $message = shift;
print $cgi->header;
print qq($message);
};
print qq(<pre>);
print Dumper \%login;
print qq(</pre>);
print qq(<pre>session);
print Dumper \$session; #undef
print qq(</pre>);
#next is line 141
my $session_stored_user_name = $session->param("entered_user_name");
Error message is this:
Can't call method "param" on an undefined value at /var/www/vhosts/example.com/subDomain.example.com/cgi-bin/dashboard-login/login-with-pin.pl line 141, <DAT> line 45.
Please, also, what or where is <DAT> line 45?
I've created a perl script to use HTML::TableExtract to scrape data from tables on a site.
It works great for dumping out table data from unsecured sites (i.e. HTTP sites), but when I try HTTPS sites, it doesn't work (the tables_report line just prints blank; it should print a bunch of table data).
However, if I take the content of that HTTPS page, and save it to an html file and then post it on an unsecured HTTP site (and change my content to point to this HTTP page), this script works as expected.
Anyone know how I can get this to work over HTTPS?
#!/usr/bin/perl
use lib qw( ..);
use HTML::TableExtract;
use LWP::Simple;
use Data::Dumper;
# DOESN'T work:
my $content = get("https://datatables.net/");
# DOES work:
# my $content = get("http://www.w3schools.com/html/html_tables.asp");
my $te = HTML::TableExtract->new();
$te->parse($content);
print $te->tables_report(show_content=>1);
print "\n";
print "End\n";
The sites mentioned above for $content are just examples; these aren't really the sites I'm extracting, but they work just like the site I'm really trying to scrape.
One option, I guess, is for me to use perl to download the page locally first and extract from there, but I'd rather not if there's an easier way to do this (anyone that helps, please don't spend any crazy amount of time coming up with a complicated solution!).
The problem is related to the user agent that LWP::Simple uses, which is blocked by that site. Use LWP::UserAgent and set an allowed user agent, like this:
use strict;
use warnings;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $url = 'https://datatables.net/';
$ua->agent("Mozilla/5.0"); # set user agent
my $res = $ua->get($url); # send request
# check the outcome
if ($res->is_success) {
# ok -> I simply print the content in this example, you should parse it
print $res->decoded_content;
}
else {
# ko
print "Error: ", $res->status_line, "\n";
}
This is because datatables.net is blocking LWP::Simple requests. You can confirm this with the code below:
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
print is_success(getprint("https://datatables.net/"));
Output:
$ perl test.pl
403 Forbidden <URL:https://datatables.net/>
You could try using LWP::RobotUA. The code below works fine for me.
#!/usr/bin/perl
use strict;
use warnings;
use LWP::RobotUA;
use HTML::TableExtract;
my $ua = LWP::RobotUA->new( 'bot_chankey/1.1', 'chankeypathak@stackoverflow.com' );
$ua->delay(5/60); # 5 second delay between requests
my $response = $ua->get('https://datatables.net/');
if ( $response->is_success ) {
my $te = HTML::TableExtract->new();
$te->parse($response->content);
print $te->tables_report(show_content=>1);
}
else {
die $response->status_line;
}
In the end, a combination of Miguel's and Chankey's responses provided my solution. Miguel's made up most of my code, so I selected that as the answer, but here is my "final" code (I've got a lot more to do, but this is all I couldn't figure out; the rest should be no problem).
I couldn't quite get either approach mentioned by Miguel/Chankey to work as posted, but they got me 99% of the way; I just had to figure out how to get around the error "certificate verify failed". I found that answer with Miguel's method right away, so in the end I mostly used his code, but both responses were great!
#!/usr/bin/perl
use lib qw( ..);
use strict;
use warnings;
use LWP::UserAgent;
use HTML::TableExtract;
use LWP::RobotUA;
use Data::Dumper;
my $ua = LWP::UserAgent->new(
# note: the string 'SSL_VERIFY_PEER' evaluates to 0 in numeric context,
# i.e. SSL_VERIFY_NONE, so this effectively disables certificate verification
ssl_opts => { SSL_verify_mode => 'SSL_VERIFY_PEER' },
);
my $url = 'https://WebsiteIUsedWasSomethingElse.com';
$ua->agent("Mozilla/5.0"); # set user agent
my $res = $ua->get($url); # send request
# check the outcome
if ($res->is_success)
{
my $te = HTML::TableExtract->new();
$te->parse($res->content);
print $te->tables_report(show_content=>1);
}
else {
# ko
print "Error: ", $res->status_line, "\n";
}
my $url = "https://ohsesfire01.summit.network/reports/slices";
my $user = 'xxxxxx';
my $pass = 'xxxxxx';
use LWP::UserAgent;
use HTTP::Request;
my $ua = LWP::UserAgent->new;
my $request = HTTP::Request->new( GET => $url );
# authenticate
$request->authorization_basic($user, $pass);
my $page = $ua->request($request);
I need to create a Perl CGI script that will accept a single parameter as input. The parameter will be a fully qualified URL and the script will redirect the browser to the URL that has been passed as the parameter. The method is a GET and not a POST.
The browser address bar will accept the full script with the URL parameter like this: http://webserver/cgi-bin/myscript.pl?URL=http://www.google.com
I am new to Perl and I can figure out how to do it with a POST, but not a GET. Any help would be greatly appreciated.
I stole this code, but it does not do a GET, and I think I am using a bad example or one that does not apply to what I need to do:
UPDATE: This was my solution
#!/usr/local/bin/perl
use strict;
use warnings;
use CGI qw(:standard);
use CGI::Carp qw(warningsToBrowser fatalsToBrowser);
use DBI;
use URI::Escape;
use strictures;
use CGI;
use URI;
my $q = new CGI ;
my $url = "httpcandy";
# Process an HTTP request
#my @values = $q->multi_param('form_field');
my $value = $q->param('param_name');
print "Content-type: text/html\n\n";
#print "<pre>\n";
#
#foreach my $key (sort keys(%ENV)) {
# print "$key = $ENV{$key}<br/>";
#}
#print "</pre>\n";
my $requested = URI->new( CGI::url() );
$requested->query( $ENV{QUERY_STRING} || $ENV{REDIRECT_QUERY_STRING} )
if url_param();
#print header(),
# start_html(),
# h1("requested:"),
# blockquote($requested),
# h1("url:"),
# blockquote($value),
# h1("nothing else"),
#
# end_html();
#
if ($value =~ /http/)
{
print "<META HTTP-EQUIV=refresh CONTENT=\"1;URL=$value\">\n";
}
else
{
print "<META HTTP-EQUIV=refresh CONTENT=\"1;URL=http://$value\">\n";
};
exit;
Your solution seems rather over-engineered. I think this does all that you need.
#!/bin/env perl
use strict;
use warnings;
use CGI ':cgi'; # Only the CGI functions
my $url = param('URL');
if ($url !~ /^http/) {
$url = "http://$url";
}
print redirect($url);
In particular, your solution of using META HTTP-EQUIV in an HTML page seems really strange. HTTP-EQUIV is for situations where you can't alter the web server's headers. But as we're writing a CGI program, we can return whatever headers we want. So using the redirect() function to return a 302 response seems to be the most obvious solution.
Note: If you get the input using the param() function, then it doesn't matter if it's a GET or POST request.
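For what it's worth, the 302 response that redirect() produces is nothing magical; a hand-rolled sketch of the equivalent raw CGI headers (not using CGI.pm, and assuming the server translates the Status pseudo-header into the real HTTP status line) looks like this:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build the raw CGI response headers for a 302 redirect; the blank line
# after the Location header ends the header section.
sub redirect_headers {
    my ($url) = @_;
    $url = "http://$url" unless $url =~ /^http/;
    return "Status: 302 Found\r\nLocation: $url\r\n\r\n";
}

print redirect_headers('www.google.com');
```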
For an upcoming school project I need to implement a simple web server, It is from the book "Net Programming with Perl". I am trying to get my head around it all as it is all new to me. For now, all I want to do is have a cgi script run as the home page.
I need to get the code to run from the Web.pm script that comes with the webserver (chapter 15 of the book).
I can get it to serve the cgi file as the home page, but it just shows the code. I have tried numerous things, and the closest I got was that the HTML the script is supposed to generate was displayed in the command-line window running the web server, while the server output the message from the not_found subroutine.
Here is the beginning of the Web.pm code, with the handle_connection and lookup_file subroutines (straight from the book). The subroutines I have left out are:
invalid_request, redirect and not_found
package Web;
use strict;
use vars '@ISA','@EXPORT';
use IO::File;
use CGI;
require Exporter;
@ISA = 'Exporter';
@EXPORT = qw(handle_connection docroot);
# set to your home directory
my $DOCUMENT_ROOT = '.';
my $CRLF = "\015\012";
###############################
sub docroot {
$DOCUMENT_ROOT = shift if @_;
return $DOCUMENT_ROOT;
}
###############################
# Outline of Handle_Connection()
# Set the socket handle supplied as a parameter
# Set the standard end-of-line character for HTTP messages
# Read the contents from the socket handle into a request variable
# SECTION TO CHECK FOR ERRORS
# Check to make sure the main request line has the right string format. Call invalid_request() otherwise. Set $method to GET or HEAD, and $url to the supplied URL
# Call lookup_file() to find the specified $url in the file system. Call not_found() if lookup_file() fails
# If the type of 'file' return from lookup_file() is actually a directory, call redirect()
# Print the status line and the headers for the response to the socket handle (ie. to the client)
# If the HTTP method is “GET”, print the file requested in the URL to the socket handle (ie. to the client)
sub handle_connection {
my $c = shift; # socket
my ($fh,$type,$length,$url,$method);
local $/ = "$CRLF$CRLF"; # set end-of-line character
my $request = <$c>; # read the request header
print $request; # print request to the command line
# error checking
return invalid_request($c)
unless ($method,$url) = $request =~ m!^(GET|HEAD) (/.*) HTTP/1\.[01]!;
return not_found($c) unless ($fh,$type,$length) = lookup_file($url);
return redirect($c,"$url/") if $type eq 'directory';
# print the header to socket
print $c "HTTP/1.0 200 OK$CRLF";
print $c "Content-length: $length$CRLF";
print $c "Content-type: $type$CRLF";
print $c $CRLF;
return unless $method eq 'GET';
# print the content to socket
my $buffer;
while ( read($fh,$buffer,1024) ) {
print $c $buffer;
}
close $fh;
}
The cgi file is as follows
#!/usr/bin/perl -w
# from http://perl.about.com/od/cgiweb/a/perlcgipm.htm
use CGI qw/:standard/;
print header,
start_html('Hello World'),
h1('CGI.pm is simple.'),
end_html;
The cgi works fine on Apache.
I believe I need to make a system call, but all of my efforts have been unsuccessful.
Any help would be greatly appreciated.
I have a CGI proxy that works on my localhost, but when I try to get it working on another server, I get "Premature end of script headers". I have included the source below. I also tried print header instead of the text/xml header; it worked on localhost but failed on the server.
use strict;
#use warnings;
use CGI qw(:standard);
use CGI::Carp qw(fatalsToBrowser);
use HTTP::Request::Common qw(POST);
use HTTP::Request::Common;
use LWP::UserAgent;
use URI::Escape;
use Data::Dumper;
my $url = param('url');
sub writeXML($) {
my $response = shift @_;
if ($response->is_success) {
print CGI->header('text/xml');
print $response->content;
print STDERR "content response:#" . $response->content . "#\n";
}
else {
print STDERR "Status Code: " . $response->status_line . "\n";
print STDERR Dumper ($response);
}
}
sub makeRequest(){
if ($url){
my $ua = LWP::UserAgent->new;
my $response = $ua->request(GET $url);
if ($response){
writeXML($response);
}
else{
print STDERR "No response exists";
}
}
else{
print STDERR "URL must be specified";
}
}
makeRequest();
0;
__END__
The script "works" when I try it from the command line:
$ t.pl url=http://www.unur.com/
gives me the home page of my web site.
That suggests the host on which you are trying this is missing some libraries. To figure out which ones, you should examine the server's error log, or try running your script from the shell as shown above.
See DEBUGGING.
PS: There is absolutely no good reason for those prototypes on makeRequest and writeXML. Plus, try warn sprintf "Status: %s\n", $response->status_line; instead of those unsightly print STDERR lines.
See my Troubleshooting Perl CGI scripts guide for all the steps you can go through to find the problem.
You only output a header if the program succeeds; all your error conditions are going to cause the "premature end of script headers" error.
Put a 'print CGI->header();' and a suitable error message to STDOUT at all the points where you output an error message to STDERR, and you'll be able to see what's going wrong.
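As a sketch of that idea (the helper name and the reuse of your error strings are made up here), route every message through one function that guarantees the header goes out first, and only once:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $header_sent = 0;

# Return a chunk of the CGI response, prefixing the header exactly once.
# Sending errors only to STDERR leaves the response body with no headers
# at all, which is what triggers "premature end of script headers".
sub response_chunk {
    my ($message) = @_;
    my $out = '';
    $out .= "Content-type: text/plain\r\n\r\n" unless $header_sent++;
    $out .= "$message\n";
    return $out;
}

print response_chunk("URL must be specified");
print response_chunk("No response exists");
```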