Perl LWP does not work

I'm using Padre as my IDE with Strawberry Perl on Windows 7 Pro.
I'm trying to write a Perl script that fetches a text file from a website and then reads/copies it.
But I can't get LWP to work, even for the simplest LWP command.
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
getprint('http://www.perlmeme.org') or die 'Unable to get page';
exit 0;
I keep getting this error message:
500 can't connect to proxy.sn.no:8001 (Bad hostname)
eg 500 can't connect to (Bad hostname) http://www.perlmeme.org
I've been googling around, used Microsoft Fixit to reset ports, etc., but I still can't make it work: http://www.justskins.com/forums/lwp-connect-failing-bad-119421.html
Can anyone help me out here? Been stuck for many hours :(
Edit:
--1 foreach my $key (keys %ENV) { print "$key: $ENV{$key}\n" if $key =~ m/proxy/i; }
Yes, it prints out FTP_PROXY and HTTP_PROXY, both set to http://proxy.sn.no:8001/
That's the proxy setting I got from this help thread: How do I install a module? Strawberry Perl issues
I already had the proxy problem before trying the configuration from that thread, and it was still there afterwards.
--2 I'm not expecting any proxy to be used on my end. I just want the Perl script to connect to the website and retrieve a text document.
--3 ping had 0% loss. (I can only post two hyperlinks in this post)
--4 I'm using Windows.

LWP will honor the http_proxy environment variable and try to use it as an HTTP proxy. Check with env | grep http_proxy on Unix.
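If the variable points at a proxy you can't actually reach (as with proxy.sn.no:8001 here, which was copied from an example), one option is to clear the proxy variables before LWP::Simple builds its user agent, or to remove HTTP_PROXY/FTP_PROXY from the Windows environment variables dialog; on Windows you can inspect them with set http_proxy at a command prompt. A minimal sketch of the in-script approach (the BEGIN block just makes sure the variables are gone before LWP::Simple loads):
#!/usr/bin/perl
use strict;
use warnings;

# Drop any inherited proxy settings so requests go directly to the target
# host instead of through the unreachable proxy.
BEGIN {
    delete $ENV{$_} for qw(HTTP_PROXY http_proxy FTP_PROXY ftp_proxy);
}

use LWP::Simple;

getprint('http://www.perlmeme.org') or die 'Unable to get page';
exit 0;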

Related

Perl LWP::Simple - A file or directory in the path name does not exist

I have two AIX hosts, both with same version of Perl installed (5.10.1), both with LWP::Simple module installed. I run a few Perl one-liners to download files from IBM's RAM.
They use LWP::Simple to first verify the module exists, then get the header for a resource (a file) at some URL to obtain file size, then finally download the file. It works fine on host #1, but on host #2 I get a couple different errors.
perl -e 'eval { require LWP::Simple; } or die 0;'
This one runs without issue on both. I've tested it dies as expected on a machine without LWP::Simple module installed.
perl -e 'use LWP::Simple; my @info = head("<URL here>"); die("-1 \$!") unless @info; print "$info[1]\n";'
On the problem host, I get "A file or directory in the path name does not exist" here, which I find weird given this is (I think?) a Unix error, but I can't reproduce this on the working host (#1) even by giving a garbage URL for this command.
perl -e 'use LWP::Simple; my $localfile = "<target file name here>"; my $url = "<URL here>"; my $status = getstore($url, $localfile); die "-1 \$!" unless is_success($status);'
Here I get a 501 error when I run on the problem host.
I reformatted these statements for clarity; it's actually invoked from a C# application via SSH.NET. Please pardon typos; the upshot is it works absolutely fine on AIX host #1 and not at all on AIX host #2.
Anyone have ideas how to further troubleshoot?
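One way to get more detail than LWP::Simple's bare return values, assuming LWP::UserAgent is available on both hosts (it ships with libwww-perl), is to make the same request through LWP::UserAgent and print the full status line; a 501 from LWP often means a protocol scheme it can't handle, and the status line will say so. The URL below is a placeholder:
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $url = 'http://example.com/some/file';   # placeholder URL

my $ua  = LWP::UserAgent->new;
my $res = $ua->head($url);

# status_line carries both the code and the reason, e.g. "501 Protocol
# scheme 'xyz' is not supported", which is more informative than a bare die.
print $res->status_line, "\n";
if ($res->is_success) {
    print "Content-Length: ", $res->header('Content-Length'), "\n";
}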

Why isn't this perl cgi script redirecting?

I have a perl cgi script that is exactly the following:
#!/usr/bin/perl -w
use CGI;
$query = new CGI;
print $query->redirect("http://www.yahoo.com");
At the command line things look OK:
$perl test.pl
Status: 302 Moved
Location: http://www.yahoo.com
When I load it in the browser, http://localhost/cgi-bin/test.pl, the request gets aborted, and depending on the browser I get various messages:
Connection reset by server.
No data received.
The only research I could find on this issue, stated that a common problem is printing some data or header before the redirect call, but I am clearly not doing that here.
I'm hosting it from a QNX box with the default slinger server.
The code works fine on my machine; check the following:
Check the error logs, e.g. tail /var/log/http/error_log
Do the chmod/chown permissions match other working CGI scripts? Compare using ls -l
Does printing the standard hello world work? Change your print statement to
print $query->header(), 'Hello World';
Add the following for better errors
use warnings;
use diagnostics;
use CGI::Carp 'fatalsToBrowser';
At the command line, slinger will report some basic usage options. For logging you need both syslogd running and the -d option enabled in slinger, i.e.
slinger -d &
Then look in /var/log/syslog for errors.
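Putting those suggestions together, a minimal test script for the QNX box might look like this (a sketch; adjust the shebang to wherever perl lives there):
#!/usr/bin/perl
use strict;
use warnings;
use diagnostics;
use CGI;
use CGI::Carp 'fatalsToBrowser';

my $query = CGI->new;

# If this renders in the browser, CGI execution itself is fine and the
# problem is specific to the redirect response.
print $query->header(), 'Hello World';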

Why does SSL Web access work as root in an interactive shell, but not as user `apache` in a post-commit script?

I have a Perl program, intended to be run from a subversion post-commit script, which needs to connect to a HTTPS based Web API.
When I test the program from an interactive shell, as root, it works just fine.
When it runs from the post-commit script, it errors out, and the response from LWP is along the lines of "500 Connect failed".
There's some evidence that when run from the post-commit script, SSL isn't enabled, because when I set $ENV{HTTPS_DEBUG} = 1; and run it as root, I see debug output such as
SSL_connect:before/connect initialization
but from the post-commit script, none of the SSL debug info is printed.
The post-commit script runs as user apache.
I'm running 64-bit CentOS.
It's been years since I've done any Unix work, so I'm not sure what the next steps are to get SSL working in this case.
The difference in environments makes me suspicious. As with cron jobs, it may be that the environment, the @INC path, or the perl interpreter itself is sufficiently different that it can't find Crypt::SSLeay or whatever else you're using for SSL support.
As a troubleshooting step, try using this program in both your shell and in the post-commit hook to see if there is an environment difference between the two. This will dump several runtime variables that show what perl knows about its environment to a tempfile.
#!/usr/bin/perl
use Data::Dumper;
use File::Temp qw( tempfile );
use strict;
use warnings;
my $tempdir = '/tmp'; # Change this if necessary.
my( $fh, $fname ) = tempfile( "tempXXXXXX", DIR => $tempdir, UNLINK => 0 );
print $fh Data::Dumper->Dump( [ \@INC, \%INC, $^X, $0, $], \@ARGV, \%ENV ],
    [ qw( @INC %INC ^X 0 ] @ARGV %ENV ) ] );
close( $fh );
# Change this if the post-commit hook doesn't pass stdout back to you.
print "Wrote data to $fname.\n";
__END__
If they differ substantially, your next step would be to make the environment in the post-commit hook the same as under your shell, e.g. adding a use lib qw( /path/to/where/ssl/modules/are/installed ); line to your script's use section, by setting PERL5LIB, using the full path to a different Perl interpreter, or whatever is appropriate. See perldoc perlvar for a description of some of the variables, if you're not familiar with them.
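For example, if the dump shows the post-commit environment is missing the directory that holds your SSL support modules, a sketch of the fix (the path here is hypothetical) is:
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical path; use whatever directory the interactive-shell dump
# showed for Crypt::SSLeay or the other SSL modules.
use lib '/usr/local/lib64/perl5';

use LWP::UserAgent;

my $ua       = LWP::UserAgent->new;
my $response = $ua->get('https://example.com/api');   # placeholder URL
print $response->status_line, "\n";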
It's not a Perl issue. Perl is doing only what it is told. Figure out what is saying "http:" vs "https:" above Perl, and sort that out. You don't need to "configure Perl... to use SSL".
I had to run this:
setsebool httpd_can_network_connect=on
to allow the httpd process to make network connections.
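You can check whether that boolean is already on with getsebool, and pass -P so the change survives a reboot (standard SELinux commands, not specific to this setup):
getsebool httpd_can_network_connect
setsebool -P httpd_can_network_connect=on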

Why doesn't my Perl CGI script work?

I really do not get how to run a Perl file. I have uploaded my .pl to the cgi-bin and chmodded it to 755. Then when I go to run the file I just get a 500 Internal Server Error.
/cgi-bin/helloworld.pl
#!/usr/bin/perl
print 'hello world';
Any ideas as to what I am doing wrong?
Read the official Perl CGI FAQ.
That'll answer this, and many other questions you may have.
For example: "My CGI script runs from the command line but not the browser. (500 Server Error)"
Hope this helps!
You probably need something like
print "Content-type: text/html\n\n";
before your print statement. Take a look at http://httpd.apache.org/docs/2.0/howto/cgi.html#troubleshoot
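In other words, a minimal working version of helloworld.pl would be something like this (a sketch, assuming the server is otherwise set up to run it):
#!/usr/bin/perl
# The header must be followed by a blank line (hence the second \n);
# without it the server typically reports a 500 error.
print "Content-type: text/html\n\n";
print 'hello world';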
It would help to know what server you are using, and the exact error message that's showing up in the server's logs. I'd guess that, if you are using Apache, you'll see something like "Premature end of script headers".
Look into using CGI::Carp to output fatal errors to the browser. use CGI::Carp qw(fatalsToBrowser);
Also, please definitely do use the CGI module to output any needed information such as headers/HTML/whatever. Printing it all by hand is the wrong way to do it.
EDIT: You will also definitely be able to check an error log of some sort.
Perhaps you need my Troubleshooting Perl CGI scripts
First, find out the path to perl on that system and make sure the shebang line is correct. Giving more information about the system and the web server would also help others diagnose.
Then, try:
#!/path/to/perl/binary
use strict;
use warnings;
$| = 1;
use CGI qw( :standard );
print header('text/plain'), "Hello World\n";
Make sure that you can run the script from a shell prompt, without invoking it through Perl. In other words, you should be able to go to your cgi-bin directory and type:
./helloworld.pl
and get output. If that doesn't work, fix that. In looking at the output, the first line must be:
Content-Type: text/html
(Or text/plain or some other valid MIME type.)
If that's not the case, fix that.
Then you must have an empty line before the body of your page is printed. If there's no empty line, your script won't work as a CGI script. So your total output should look like this:
Content-Type: text/html

hello world
If you can run your script and that's the output, then there's something weird going on. If Apache is not logging the error to an error_log file somewhere, then maybe there's some problem with it.
Did you enable Apache to serve .pl files as CGI scripts? Check your Apache config file, or (quick but not guaranteed test) try changing the file extension to .cgi. Also, make sure your shebang line (#!) is at the very top. Finally, check that the line endings are Unix if your server is Linux. And yes, test it from the command line, and use strict; for better feedback on potential errors.
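If the server is Apache and .pl isn't mapped to CGI yet, the relevant configuration looks something like this (a sketch; the directory path is hypothetical and must match your cgi-bin):
<Directory "/var/www/cgi-bin">
    Options +ExecCGI
    AddHandler cgi-script .cgi .pl
</Directory>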

Why do I need to explicitly output the HTTP header for IIS but not Apache?

I am trying to set up apache instead of IIS because IIS needlessly crashes all the time, and it would be nice to be able to have my own checkout of the source instead of all of us editing a common checkout.
In IIS we must do something like this at the beginning of each file:
use CGI;
my $input = new CGI();
print "HTTP/1.0 200 OK";
print $input->header();
whereas with apache we must leave off the 200 OK line. The following works with both:
use CGI;
my $input = new CGI();
print $input->header('text/html','200 OK');
Can anyone explain why? And I was under the impression that the CGI module was supposed to figure out these kinds of details automatically...
Thanks!
Update: brian is right, nph fixes the problem for IIS, but it is still broken for Apache. I don't think it's worth it to have conditionals all over the code so I will just stick with the last method, which works with and without nph.
HTTP and CGI are different things. The Perl CGI module calls what it does an "HTTP header", but it's really just a CGI header for the server to fix up before it goes back to the client. They look a lot alike, which is why people get confused, and the CGI.pm docs don't help by calling them the wrong thing.
Apache fixes up the CGI headers to make them into HTTP headers, including adding the HTTP status line and anything else it might need.
If your webserver isn't fixing up the header for you, it's probably expecting a "non-parsed header" script, where you take responsibility for the entire header. To do that in CGI.pm, you have to add the -nph option to your call to header, and you have to make the complete header yourself, including headers such as Expires and Last-Modified. See the docs under Creating a Standard HTTP Header. You can turn on NPH in three ways:
use CGI qw(-nph)
CGI::nph(1)
print header( -nph => 1, ...)
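For example, with the third form the script takes over the whole response and has to supply the status itself; a minimal sketch:
#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

# With -nph the server passes the output through untouched, so the script
# provides the status line instead of relying on the server to add it.
print header(-nph => 1, -status => '200 OK', -type => 'text/html');
print "<p>hello</p>\n";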
Are you using an older version of IIS? CGI.pm used to turn on the NPH feature for you automatically for IIS, but now that line is commented out in the source in CGI.pm:
# This no longer seems to be necessary
# Turn on NPH scripts by default when running under IIS server!
# $NPH++ if defined($ENV{'SERVER_SOFTWARE'}) && $ENV{'SERVER_SOFTWARE'}=~/IIS/;
I'm still experiencing this problem with ActivePerl 5.14 running under IIS 7 via ISAPI. The ActivePerl 5.10 FAQ claims the problem is fixed (the 5.14 FAQ doesn't even address the issue), but it doesn't appear to be, and setting the registry key they suggest has no effect in this environment.
Using $ENV{PERLXS} eq 'PerlIS' to detect ISAPI and turn on NPH per the aforementioned FAQ seems to work. I hacked my CGI.pm to add the final two lines below under the old IIS handler:
# This no longer seems to be necessary
# Turn on NPH scripts by default when running under IIS server!
# $NPH++ if defined($ENV{'SERVER_SOFTWARE'}) && $ENV{'SERVER_SOFTWARE'}=~/IIS/;
# Turn on NPH scripts by default when running under IIS server via ISAPI!
$NPH++ if defined($ENV{'PERLXS'}) && $ENV{'PERLXS'} eq 'PerlIS';
I had a similar problem with Perl (it was a DOS/Unix/Mac newline thing!)
binmode(STDOUT);
my $CRLF = "\r\n";   # "\015\012"; CR = ^M = \x0D, LF = ^J = \x0A
print "HTTP/1.0 200 OK", $CRLF if $0 =~ m/nph-/o;
print "Content-Type: text/plain" . $CRLF;
print $CRLF;
print "OK !\n";