I am using Mojo::UserAgent to fetch a site behind a proxy, which is defined using HTTP_PROXY and HTTPS_PROXY.
Below is an example of the code:
my $rs = $ua->insecure(1)->get($mysite);
if ($rs->res->is_success) {
    ...
} else {
    print "Problem with fetching $mysite\n";
    print $rs->res->error->{message};
}
I am getting this error:
SSL connect attempt failed error:14077419:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert access denied
When I use curl on the same machine, I get the results as expected.
Any idea how to solve this issue?
From the SSL error, it looks like your network is actively refusing to let you through.
Defining the environment variables HTTP_PROXY and HTTPS_PROXY is fine; however, you need to tell Mojo::UserAgent to use them (unlike curl, which looks them up automatically by default).
Add this line to your code before you run the query:
$ua->proxy->detect;
See the Mojo::UserAgent::Proxy documentation.
If you are looking for a pure Perl solution without using environment variables, you can configure the proxy manually, directly in your code, like this:
$ua->proxy
->http('http://127.0.0.1:8080')
->https('http://127.0.0.1:8080');
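Putting the pieces together, here is a minimal sketch of the whole flow; the target URL is a placeholder, and the short timeouts are just there to fail fast:

```perl
use strict;
use warnings;
use Mojo::UserAgent;

my $ua = Mojo::UserAgent->new;
$ua->connect_timeout(5)->request_timeout(10);

# Pick up HTTP_PROXY, HTTPS_PROXY and NO_PROXY from the environment
$ua->proxy->detect;

my $tx = $ua->get('https://example.com/');    # placeholder URL
if (my $err = $tx->error) {
    # Covers both connection errors and HTTP error responses
    warn "Problem fetching the site: $err->{message}\n";
}
else {
    print $tx->res->body;
}
```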
Related
I want to access a website whose certificate cannot be verified (the hostname is not correct, and I cannot change or update the certificate on the server my application points to). I'm using Mojo::UserAgent to make the request. How would I go about ignoring this and continue connecting to the website?
I've seen that there is no option for this.
I don't want to use LWP::UserAgent.
I've done it using WWW::Curl and WWW::Curl::Easy, but I want to clean up the code using Mojo::UserAgent (as used in my entire application).
hostname not correct ... How would I go about ignoring this and continue connecting to the website?
It is a very bad idea to abandon validation entirely just because the hostname does not match the certificate. Why use TLS at all then?
A much better way is to know up front which certificate you expect and verify that you get exactly that one. This can easily be done with the SSL_fingerprint option. Unfortunately, Mojo::UserAgent does not offer a way to set connection-specific arguments, so you need to set it immediately before the connection and reset it before you make other connections:
use IO::Socket::SSL 1.980;
IO::Socket::SSL::set_client_defaults(
    SSL_fingerprint => 'sha256$55a5dfaaf...'  # single quotes: the $ must not interpolate
);
... use Mojo::UserAgent to connect ...
IO::Socket::SSL::set_client_defaults();  # reset the defaults
For more information about how to use this option and how to get the fingerprint, see Certificate error in Perl.
Another way, in case only the hostname is bad, is to use the SSL_verifycn_name option to specify the hostname you expect inside the certificate:
IO::Socket::SSL::set_client_defaults(
    SSL_verifycn_name => 'foo.example.com',
);
Another way is the set_args_filter_hack function, which is intended to deal with modules that set strange defaults or don't let the user set their own values:
my $hostname = undef;
IO::Socket::SSL::set_args_filter_hack(
    sub {
        my ($is_server, $args) = @_;
        $args->{SSL_verifycn_name} = $hostname if $hostname;
    }
);
...
$hostname = 'foo.example.com';
... do something with Mojo::UserAgent ...
$hostname = undef;
This way you can adapt the settings for each SSL handshake.
For more information see the documentation of IO::Socket::SSL, especially the part about common usage errors. That section also documents what you should do instead of disabling validation entirely when some part of the certificate is wrong.
'SSL connect attempt failed error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol'
curl ... SSL connection using TLS_RSA_WITH_RC4_128_MD5
My guess is that what you are facing here is unrelated to certificate validation. Given that this server is using the very old cipher RC4-MD5, I assume the server can only handle SSL 3.0. This version has been disabled for a while in IO::Socket::SSL for security reasons. To explicitly use this insecure version temporarily:
IO::Socket::SSL::set_client_defaults(
SSL_version => 'SSLv3'
);
Mojo::UserAgent uses IO::Socket::SSL for SSL/TLS support, so you can disable server certificate verification using
IO::Socket::SSL::set_defaults(
SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE,
);
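For example, a sketch that applies this before creating the agent; note that set_defaults changes the behaviour of every TLS connection in the process, so use it with care:

```perl
use strict;
use warnings;
use IO::Socket::SSL;
use Mojo::UserAgent;

# Disable certificate verification process-wide (use with care!)
IO::Socket::SSL::set_defaults(
    SSL_verify_mode => IO::Socket::SSL::SSL_VERIFY_NONE(),
);

my $ua = Mojo::UserAgent->new;
# ... use $ua as usual; server certificates will no longer be verified ...
```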
This is an old question, but Mojolicious is alive and kicking, and I've battled with this recently. Directly from the documentation:
my $bool = $ua->insecure;
$ua = $ua->insecure($bool);
Do not require a valid TLS certificate to access HTTPS/WSS sites, defaults to the value of the MOJO_INSECURE environment variable.
# Disable TLS certificate verification for testing
say $ua->insecure(1)->get('https://127.0.0.1:3000')->result->code;
In my application, $bool is set from a configuration file, so I can switch verification back on where we need it.
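For illustration, a sketch of that pattern; the $config hashref and its insecure key are made-up names standing in for whatever your configuration loader returns:

```perl
use strict;
use warnings;
use Mojo::UserAgent;

# Assume $config was loaded from a configuration file;
# the 'insecure' key is a made-up name for this sketch
my $config = { insecure => 1 };

my $ua = Mojo::UserAgent->new;
$ua->insecure($config->{insecure} ? 1 : 0);
```

Keeping the flag in configuration means certificate verification can be re-enabled per environment without touching code.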
I have a Perl script that uses WWW::Mechanize to connect to a site over https, and that script just stopped working the other day. The status and error message I get back are 500 and "Can't connect to jobs.illinois.edu:443". The URL I'm trying to connect to is https://jobs.illinois.edu/. I can connect from my browser (Firefox). My platform is Linux -- up-to-date Arch Linux. I can also connect (using WWW::Mechanize) to other https sites.
I tried using LWP::UserAgent, and the behavior is the same.
I'm using ssl_opts => { SSL_version => 'TLSv1' }; I don't remember why I added that -- it may have been necessary to get it working at some point.
Any ideas on how to fix this, or how I might get more information as to what the problem is? Are there other SSL options I can try?
I have a feeling there was some slight configuration change on the site that led to this problem -- maybe some SSL protocol version change or something like that. (I don't think I updated anything on my machine in between the times it worked and stopped working.)
Thanks.
Here's sample code that fails:
#!/usr/bin/perl
use strict;
use warnings;
use constant AJB_URL => 'https://jobs.illinois.edu/academic-job-board';
use WWW::Mechanize;
my $mech = WWW::Mechanize->new( ssl_opts => { SSL_version => 'TLSv1' } );
$mech->get( AJB_URL );
It returns:
Error GETing https://jobs.illinois.edu/academic-job-board: Can't connect to jobs.illinois.edu:443 at ./test2.pl line 12.
... that script just stopped working the other day.
In most cases this is caused by server-side or client-side changes, but I assume that you did not make any changes on the client side.
Calling your code with perl -MIO::Socket::SSL=debug4... gives:
DEBUG: ...SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Looking at the SSLLabs report, you see two trust paths, one of which requires an extra download. The root CA "USERTrust RSA Certification Authority" for the first trust path is not installed on my system (Ubuntu 14.04), and I guess it is not installed on yours either. This means the second trust chain will be used, and its root CA "AddTrust External CA Root" is installed on my system. Unfortunately, this trust chain is missing an intermediate certificate ("Extra download"), so verification fails.
To fix the problem, find the missing certificate, which should match the fingerprint 2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e, and use it:
$ENV{PERL_LWP_SSL_CA_FILE} = '2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e.pem';
Looking at the certificate you see that it was issued on 22 May 2015, i.e. three days ago. This explains why the problem happened just now.
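A sketch of how that might look in the original script, assuming the missing certificate has been downloaded and saved under the fingerprint-based filename used above; autocheck and timeout are set only so the sketch fails gracefully:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

use constant AJB_URL => 'https://jobs.illinois.edu/academic-job-board';

# Point LWP at the locally saved certificate (file must exist beforehand)
$ENV{PERL_LWP_SSL_CA_FILE} = '2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e.pem';

my $mech = WWW::Mechanize->new( autocheck => 0, timeout => 5 );
$mech->get( AJB_URL );
```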
tl;dr: How do I capture stderr from within a script to get a more specific error, rather than relying on the generic error from Net::OpenSSH?
I have a tricky problem I'm trying to resolve. Net::OpenSSH only works with protocol version 2, but we have a number of devices on the network that only support version 1. I'm trying to find an elegant way of detecting whether the remote end is the wrong version.
When connecting to a version 1 device, the following message shows up on the stderr
Protocol major versions differ: 2 vs. 1
However, the error returned by Net::OpenSSH is as follows:
unable to establish master SSH connection: bad password or master process exited unexpectedly
This particular error is too general and doesn't point specifically to a protocol version difference. I need to handle protocol differences by switching over to another library, and I don't want to do that for every connection error.
We use a fairly complicated process that was originally wired for telnet-only access. We load a "comm" object that then determines things like the type of router. That comm object invokes Net::OpenSSH to pass in the commands.
Example:
my $sshHandle = eval { $commsObject->go($router) };
my $sshError = $sshHandle->{ssh}->error;
if ($sshError) {
$sshHandle->{connect_error} = $sshError;
return $sshHandle;
}
The protocol error shows up on stderr here:
$args->{ssh} = eval {
Net::OpenSSH->new(
$args->{node_name},
user => $args->{user},
password => $args->{tacacs},
timeout => $timeout,
master_opts => [ -o => "StrictHostKeyChecking=no" ]
);
};
What I would like to do is pass in the stderr protocol error instead of the generic error passed back by Net::OpenSSH. I would like to do this within the script, but I'm not sure how to capture stderr from within a script.
Any ideas would be appreciated.
Capture the master stderr stream and check it afterwards.
See here how to do it.
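One way to do it, sketched with Net::OpenSSH's master_stderr_fh constructor option; the host and credentials are placeholders:

```perl
use strict;
use warnings;
use Net::OpenSSH;
use File::Temp;

# Redirect the master process stderr to a temp file we can inspect
my $stderr_fh = File::Temp->new;

my $ssh = Net::OpenSSH->new(
    'router.example.com',                       # placeholder host
    user             => 'admin',                # placeholder credentials
    password         => 'secret',
    timeout          => 10,
    master_stderr_fh => $stderr_fh,
    master_opts      => [ -o => 'StrictHostKeyChecking=no' ],
);

if ($ssh->error) {
    seek $stderr_fh, 0, 0;
    my $stderr = do { local $/; <$stderr_fh> };
    if ($stderr =~ /Protocol major versions differ/) {
        # SSH v1 device: fall back to a v1-capable library here
    }
}
```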
Another approach you can use is just to open a socket to the remote SSH server. The first thing it sends back is its version string. For instance:
$ nc localhost 22
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-8
^C
From that information you should be able to infer if the server supports SSH v2 or not.
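A sketch of that probe; the host and port are whatever device you want to test, and note that a server willing to speak both versions announces itself as "SSH-1.99":

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Extract the protocol version from an SSH banner line
sub parse_ssh_banner {
    my ($banner) = @_;
    return $banner =~ /^SSH-(\d+(?:\.\d+)?)-/ ? $1 : undef;
}

# Connect and read the server's banner (the first line it sends)
sub ssh_protocol_version {
    my ($host, $port) = @_;
    my $sock = IO::Socket::INET->new(
        PeerAddr => $host,
        PeerPort => $port || 22,
        Timeout  => 5,
    ) or return;
    my $banner = <$sock>;    # e.g. "SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-8"
    close $sock;
    return parse_ssh_banner($banner // '');
}
```

A "1.99" result means the server accepts both protocol versions, so only a plain "1.x" banner requires falling back to a v1 library.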
Finally, if you also need to talk to SSH v1 servers, the development version of my other module Net::SSH::Any is able to do it using the OS native SSH client, though it establishes a new SSH connection for every command.
use Net::SSH::Any;
my $ssh = Net::SSH::Any->new($args->{node_name},
user => $args->{user},
password => $args->{tacacs},
timeout => $timeout,
backends => 'SSH_Cmd',
strict_host_key_checking => 0);
Update: in response to Bill's comment below on the issue of sending multiple commands over the same session:
The problem with sending commands over the same session is that you have to talk to the remote shell, and there isn't a reliable, generic way to do that, as every shell does things differently; this is especially true of network equipment shells, which are quite automation-unfriendly.
Anyway, there are several modules on CPAN that try to do this, implementing a handler for every kind of shell (or OS). For instance, check Oliver Gorwits's modules Net::CLI::Interact, Net::Appliance::Session and Net::Appliance::Phrasebook. The phrasebook approach seems quite suitable.
I am running Ubuntu 12.04 and am trying to record HTTP requests with Perl's HTTP::Recorder module. I am following the instructions here: http://metacpan.org/pod/HTTP::Recorder
I have the following perl script running:
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Proxy;
use HTTP::Recorder;
my $proxy = HTTP::Proxy->new();
# create a new HTTP::Recorder object
my $agent = HTTP::Recorder->new;
# set the log file (optional)
$agent->file("/tmp/myfile");
# set HTTP::Recorder as the agent for the proxy
$proxy->agent( $agent );
# start the proxy
$proxy->start();
I have also changed my Firefox settings so that it uses port 8080 on localhost as the proxy.
When I try to visit a website with firefox, however, I get the following error:
Content Encoding Error
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
I'm not sure what to do. Also, when I visit:
http://http-recorder
(where my recorded activity is supposed to show up), I do see that GET requests are being logged. For example, if I try to visit Google:
$agent->get('http://www.google.com');
Edit: I should also mention that Ubuntu is running inside VirtualBox; I'm not sure if that is messing with anything.
Recently I was forced to switch from SVN to TFS.
I'm trying to get this working with TEE on our RedHat box.
Any action seems to end with something like this:
user#rh: tf -map $/XX/XX . -workspace:app-job -server:http://tfs.domain.com:8080/tfs/TFS2008/ -profile:TFS1_PRF_C
Password:
An error occurred: Proxy URL 'incache.domain.com:8080' does not contain a valid hostname.
Could someone help with that?
Your question is a little vague about what you expect to happen here (are you supposed to be using an HTTP proxy to access your TFS server, or is the problem that it is wrongly assuming you have one?)
I'm going to assume that you do not need an HTTP proxy to access your internal TFS server, since in most corporate environments the proxy is used to get outside the network, not inside. By default, the Team Explorer Everywhere CLC does try to use your system HTTP proxy; however, this is configurable in your connection profile.
In order to override your default system HTTP proxy for that profile, you can set the profile property httpProxyIgnoreGlobal to true:
tf profile -edit -boolean:httpProxyIgnoreGlobal=true TFS1_PRF_C