I am trying to download a webpage through a proxy connection with the following code:
use LWP::Simple qw(get);
my $url = 'https://www.random-site.com';
my $html = get $url or die "sorry, can't";
I get the obvious error sorry, can't.
The code works on a normal connection, but not through the proxy, and even with the Hideman program it still doesn't get through. What would be a better approach in this situation? Am I using the wrong module?
Note the LWP::Simple documentation:
The user agent created by this module will identify itself as "LWP::Simple/#.##" and will initialize its proxy defaults from the environment (by calling $ua->env_proxy).
Then, note env_proxy:
Load proxy settings from *_proxy environment variables.
So, set HTTPS_PROXY in the environment.
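As a minimal sketch (the proxy address 127.0.0.1:8080 is a placeholder, not something from the question), set the variable before the first request is made. LWP::UserAgent's env_proxy is the same lookup LWP::Simple performs internally, so you can verify the effect directly:

```perl
use strict;
use warnings;
use LWP::UserAgent;

# Placeholder proxy address; substitute your real proxy host and port.
$ENV{HTTPS_PROXY} = 'http://127.0.0.1:8080';

my $ua = LWP::UserAgent->new;
$ua->env_proxy;    # same lookup LWP::Simple performs for its internal agent

# The proxy is now registered for the https scheme:
print $ua->proxy('https'), "\n";
```

With LWP::Simple itself no user agent object is exposed, so there is nothing else to call: set the variable (in the shell, or in the script before the request) and use get as before.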
Related
I am using Mojo::UserAgent to fetch a site from behind a proxy, defined using HTTP_PROXY and HTTPS_PROXY. Below is an example of the code:
my $rs = $ua->insecure(1)->get($mysite);
if ($rs->res->is_success) {
    ...
} else {
    print "Problem with fetching $mysite\n";
    print $rs->res->error->{message};
}
I am getting this error:
SSL connect attempt failed error:14077419:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert access denied
When I use curl on the same machine, I get the results as expected.
Any idea how to solve this issue?
From the SSL error, it looks like your network is actively refusing to let you through.
Defining the environment variables HTTP_PROXY and HTTPS_PROXY is fine; however, you need to tell Mojo::UserAgent to use them (unlike cURL, which automatically looks them up by default).
Add this line to your code before you run the query:
$ua->proxy->detect;
See the Mojo::UserAgent::Proxy documentation.
If you are looking for a pure Perl solution without using environment variables, you can configure the proxy manually, directly in your code, like so:
$ua->proxy
->http('http://127.0.0.1:8080')
->https('http://127.0.0.1:8080');
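A combined sketch of the two approaches (the proxy address is a placeholder): detect picks up the environment variables, while the explicit form overrides them:

```perl
use strict;
use warnings;
use Mojo::UserAgent;

my $ua = Mojo::UserAgent->new;

# Pick up HTTP_PROXY / HTTPS_PROXY (and NO_PROXY) from the environment ...
$ua->proxy->detect;

# ... or configure the proxy explicitly, overriding the environment:
$ua->proxy->http('http://127.0.0.1:8080')->https('http://127.0.0.1:8080');

print $ua->proxy->https, "\n";
```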
I have a Perl script that uses WWW::Mechanize to connect to a site over
https, and that script just stopped working the other day. The status
and error message I get back are 500 and "Can't connect to
jobs.illinois.edu:443". The URL I'm trying to connect to is
https://jobs.illinois.edu/. I can connect from my browser (Firefox).
My platform is Linux -- up-to-date Arch Linux. I can also connect
(using WWW::Mechanize) to other https sites.
I tried using LWP::UserAgent, and the behavior is the same.
I'm using ssl_opts => { SSL_version => 'TLSv1' }; I don't remember why
I added that -- it may have been necessary to get it working at some
point.
Any ideas on how to fix this, or how I might get more information as
to what the problem is? Are there other ssl options I can try?
I have a feeling there was some slight configuration change on the
site that led to this problem -- maybe some SSL-protocol version
change or something like that. (I don't think I updated anything
on my machine in between the times it worked and stopped working.)
Thanks.
Here's sample code that fails:
#!/usr/bin/perl
use strict;
use warnings;
use constant AJB_URL => 'https://jobs.illinois.edu/academic-job-board';
use WWW::Mechanize;
my $mech = WWW::Mechanize->new( ssl_opts => { SSL_version => 'TLSv1' } );
$mech->get( AJB_URL );
It returns:
Error GETing https://jobs.illinois.edu/academic-job-board: Can't connect to jobs.illinois.edu:443 at ./test2.pl line 12.
... that script just stopped working the other day.
In most cases this is caused by a server-side or client-side change, but I assume you did not make any changes on the client side.
Calling your code with perl -MIO::Socket::SSL=debug4... gives:
DEBUG: ...SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Looking at the SSLLabs report you see two trust paths, where one requires an extra download. The root-CA "USERTrust RSA Certification Authority" for the first trust path is not installed on my system (Ubuntu 14.04), and I guess it is not installed on yours (no information about your OS is known, so just guessing). This means the second trust chain will be used and the relevant Root-CA "AddTrust External CA Root" is also installed on my system. Unfortunately this trust chain is missing an intermediate certificate ("Extra download"), so the verification fails.
To fix the problem, find the missing certificate, which should match the fingerprint 2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e, and use it as the CA file:
$ENV{PERL_LWP_SSL_CA_FILE} = '2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e.pem';
Looking at the certificate you see that it was issued on 22 May 2015, i.e. three days ago. This explains why the problem happened just now.
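Both ways of pointing LWP at the extra CA file can be sketched as follows (the .pem path is a placeholder named after the fingerprint; WWW::Mechanize accepts the same ssl_opts, since it subclasses LWP::UserAgent):

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $ca_file = '2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e.pem';  # placeholder path

# Option 1: via the environment, read by LWP's https backend:
$ENV{PERL_LWP_SSL_CA_FILE} = $ca_file;

# Option 2: per user agent, without touching the environment:
my $ua = LWP::UserAgent->new(
    ssl_opts => { SSL_ca_file => $ca_file },
);

print $ua->ssl_opts('SSL_ca_file'), "\n";
```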
I am working with Web::ID and have some questions.
From the FAQ for Web::ID:
How can I use WebID in Perl?
[...]
Otherwise, you need to use Web::ID directly. Assuming you've configured your web server to request a client certificate from the browser, and you've managed to get that client certificate into Perl in PEM format, then it's just:
my $webid = Web::ID->new(certificate => $pem);
my $uri = $webid->uri;
And you have the URI.
Anyway, I'm stuck at the "get that client certificate into Perl" part.
I can see the client certificate is being passed along to the script by examining the %ENV environment variable. But I am still unsure how to actually process it the way Web::ID does, such as examining the SAN.
According to the documentation of mod_ssl you will find the PEM encoded client certificate in the environment variable SSL_CLIENT_CERT, so all you need is to call
my $webid = Web::ID->new(certificate => $ENV{SSL_CLIENT_CERT});
However, Apache does not set the SSL_CLIENT_CERT environment variable by default. This is for performance reasons - setting a whole bunch of environment variables before spawning your Perl script (via mod_perl, or CGI, or whatever) is wasteful if your Perl script doesn't use them, so it only sets a small set of environment variables by default. You need to configure Apache correctly to tell it you want ALL DA STUFFZ. In particular you want something like this in .htaccess, or your virtual host config, or server config file:
SSLOptions +StdEnvVars +ExportCertData
While you're at it, you also want to make sure Apache is configured to ask clients to present a certificate. For that you want something like:
SSLVerifyClient optional_no_ca
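Together, in a virtual host context, the two directives might look like this (a sketch; certificate paths and the rest of the TLS setup are omitted):

```apache
<VirtualHost *:443>
    SSLEngine on
    # ask the browser for a client certificate without requiring a known CA
    SSLVerifyClient optional_no_ca
    # export the SSL_* variables, including SSL_CLIENT_CERT, to scripts
    SSLOptions +StdEnvVars +ExportCertData
</VirtualHost>
```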
All this is kind of covered in the documentation for Web::ID but not especially thoroughly.
If I enable Catalyst::Plugin::Session or Catalyst::Plugin::Authentication, I'm no longer able to add settings via myapp.conf, myapp_local.conf, or MyApp.pm. I've searched extensively and haven't been able to find any documentation of why this might be occurring. Is this a feature or a bug? I'm on the latest version of Catalyst and the Session plugins available via the FreeBSD ports tree. I tested this on Debian, and the same issue occurs.
p5-Catalyst-Runtime-5.90016
p5-Catalyst-Plugin-Session-0.35
p5-Catalyst-Plugin-Session-State-Cookie-0.17
p5-Catalyst-Plugin-Session-Store-FastMmap-0.16
I'm running the development server. The plugins are being loaded as follows in MyApp.pm:
use Catalyst qw/
ConfigLoader
Static::Simple
Authorization::Roles
Authentication
Session
Session::Store::FastMmap
Session::State::Cookie
/;
Attempts to set config values fail as long as the Session or Auth plugins are enabled. The only exception to this is the 'name' variable.
__PACKAGE__->config(
name => 'Will be accessible via "name"',
foo => 'Will not be accessible via c.foo if plugins are loaded',
disable_component_resolution_regex_fallback => 1,
enable_catalyst_header => 1, # Send X-Catalyst header
);
I can see ConfigLoader and myapp.conf being loaded in the server debug output. Since this is a pretty basic setup that many users probably use, I'm assuming I'm missing something fairly obvious. Neither the plugin documentation nor any number of other sources I've looked up mention anything about this, unless I just completely missed it.
Update: I thought the fact that I was running this via the development server might have been an issue, so I made a deployment via Apache/FastCGI, but it didn't make a difference.
Sorry, I need 50 reputation to make a comment, so I will ask here:
Can you add this code right before your __PACKAGE__->setup(); in MyApp.pm?
my $conf = __PACKAGE__->config;
use DDP; p $conf;
exit;
If you don't have Data::Printer, you can do:
my $conf = __PACKAGE__->config;
use Data::Dumper; print STDERR Dumper $conf;
exit;
This will show your current config.
I think you are misunderstanding how it works. Since you say that foo 'Will not be accessible via c.foo if plugins are loaded': the value should be accessible via $c->config->{foo} in Perl code, or c.config.foo if you are using Template::Toolkit.
I am running Ubuntu 12.04 and am trying to record HTTP requests with Perl's HTTP::Recorder module. I am following the instructions here: http://metacpan.org/pod/HTTP::Recorder
I have the following perl script running:
#!/usr/bin/perl
use strict;
use warnings;

use HTTP::Proxy;
use HTTP::Recorder;

my $proxy = HTTP::Proxy->new();

# create a new HTTP::Recorder object
my $agent = HTTP::Recorder->new();

# set the log file (optional)
$agent->file("/tmp/myfile");

# set HTTP::Recorder as the agent for the proxy
$proxy->agent( $agent );

# start the proxy
$proxy->start();
And I have changed Firefox's settings so that it uses port 8080 on localhost as the proxy.
When I try to visit a website with firefox, however, I get the following error:
Content Encoding Error
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
I'm not sure what to do. When I visit:
http://http-recorder
(where my recorded activity is supposed to show up), I do see that GET requests are being logged. For example, if I try to visit Google:
$agent->get('http://www.google.com');
Edit: I should also mention that Ubuntu is running inside of VirtualBox; not sure if that is messing with anything.