Process X.509 client certificates in Perl

I am working with Web::ID and have some questions.
From the FAQ for Web::ID:
How can I use WebID in Perl?
[...]
Otherwise, you need to use Web::ID directly. Assuming you've configured your web server to request a client certificate from the browser, and you've managed to get that client certificate into Perl in PEM format, then it's just:
my $webid = Web::ID->new(certificate => $pem);
my $uri = $webid->uri;
And you have the URI.
Anyway, I'm stuck at the ".. get that client certificate into Perl .." part.
I can see the client certificate is being passed along to the script by examining the %ENV environment variable. But I am still unsure how to actually process it the way Web::ID does, e.g. examining the SAN (Subject Alternative Name).

According to the documentation of mod_ssl, you will find the PEM-encoded client certificate in the environment variable SSL_CLIENT_CERT, so all you need to do is call
my $webid = Web::ID->new(certificate => $ENV{SSL_CLIENT_CERT});
However, Apache does not set the SSL_CLIENT_CERT environment variable by default. This is for performance reasons - setting a whole bunch of environment variables before spawning your Perl script (via mod_perl, or CGI, or whatever) is wasteful if your Perl script doesn't use them, so it only sets a small set of environment variables by default. You need to configure Apache correctly to tell it you want ALL DA STUFFZ. In particular you want something like this in .htaccess, or your virtual host config, or server config file:
SSLOptions +StdEnvVars +ExportCertData
While you're at it, you also want to make sure Apache is configured to ask clients to present a certificate. For that you want something like:
SSLVerifyClient optional_no_ca
All this is kind of covered in the documentation for Web::ID but not especially thoroughly.
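Putting that together, a minimal CGI-style sketch might look like this (assuming Apache exports SSL_CLIENT_CERT as described above; the output formatting is just illustrative, not part of Web::ID's API):
#!/usr/bin/perl
# Minimal sketch: read the PEM certificate Apache exported and hand it to Web::ID.
use strict;
use warnings;
use Web::ID;

my $pem = $ENV{SSL_CLIENT_CERT};

print "Content-Type: text/plain\n\n";
if ($pem) {
    my $webid = Web::ID->new(certificate => $pem);
    print "WebID URI: ", $webid->uri, "\n";
}
else {
    print "No client certificate was presented.\n";
}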

Related

Why getting SSLCertVerificationError ... self signed certificate in certificate chain - from one machine but not another?

I am trying to test an API on my site. The tests work just fine from one machine, but running the code from a different machine results in the SSLCertVerificationError - which is odd because the site has an SSL cert and is NOT self signed.
Here is the core of my code:
async def device_connect(basename, start, end):
    url = SERVER_URL
    async with aiohttp.ClientSession() as session:
        post_tasks = []
        # prepare the coroutines that post
        for x in range(start, end):
            myDevice = {'test': 'this'}
            post_tasks.append(do_post(session, url, myDevice))
        # now execute them all at once
        await asyncio.gather(*post_tasks)

async def do_post(session, url, data):
    async with session.post(url, data=data) as response:
        x = await response.text()
I tried (just for testing) to set 'verify=False' or trust_env=True, but I continue to get the same error. On the other computer, this code runs fine and no trust issue results.
That error text is somewhat misleading. OpenSSL, which Python uses, has dozens of error codes that indicate different ways certificate validation can fail, including:
X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN -- the peer's cert can't be chained to a root cert in the local truststore; the chain received from the peer includes a root cert, which is self-signed (because root certs must be self-signed), but that root is not locally trusted
Note this is not talking about the peer/leaf cert; if that is self signed and not trusted, there is a different error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT which displays as just 'self signed certificate' without the part about 'in certificate chain'.
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY (displays in text as 'unable to get local issuer certificate') -- the received chain does not contain a self-signed root and the peer's cert can't be chained to a locally trusted root
In both these cases the important info is the peer's cert doesn't chain to a trusted root; whether the received chain includes a self-signed root is less important. It's kind of like if you go to your doctor and after examination in one case s/he tells you "you have cancer, and the weather forecast for tomorrow is a bad storm" or in another case "you have cancer, but the weather forecast for tomorrow is sunny and pleasant". While these are in fact slightly different situations, and you might conceivably want to distinguish them, you need to focus on the part about "you have cancer", not tomorrow's weather.
So, why doesn't it chain to a trusted root? There are several possibilities:
1. the server is sending a cert chain with a root that SHOULD be trusted, but machine F is using a truststore that does not contain it. Depending on the situation, it might be appropriate to add that root cert to the default truststore (affecting at least all python apps unless specifically coded otherwise, and often other types of programs like C/C++ and Java also) or it might be better to customize the truststore for your application(s) only; or it might be that F is already customized wrongly and just needs to be fixed.
2. the server is sending a cert chain that actually uses a bad CA, but machine W's truststore has been wrongly configured (again either as a default or customized) to trust it.
3. machine F is not actually getting the real server's cert chain, because its connection is 'transparently' intercepted by something. This might be something authorized by an admin of the network (like an IDS/IPS/DLP or captive portal) or of machine F (like antivirus or other 'endpoint security'), or it might be something very bad like malware or a thief or spy; or it might be in a gray area like some ISPs (trying to) intercept connections and insert advertisements (at least in data likely to be displayed to a person, like web pages and emails, but these can't always be distinguished).
4. the (legit) server is sending different cert chains to F (bad) and W (good). This could be intentional, e.g. because W is on a business' internal network while F is coming in from the public net; however you describe this as 'my site' and I assume you would know if it intended to make distinctions like this. OTOH it could be accidental; one fairly common cause is that many servers today use SNI (Server Name Indication) to select among several 'certs' (really cert chains and associated keys); if F is too old it might not be sending SNI, causing the server to send a bad cert chain. Or, some servers use different configurations for IPv4 vs IPv6; F could be connecting over one of these and W the other.
To distinguish these, and determine what (if anything) to fix, you need to look at what certs are actually being received by both machines.
If you have (or can get) OpenSSL on both, do openssl s_client -connect host:port -showcerts. For OpenSSL 1.1.1 up (now common), to omit SNI add -noservername; for older versions, to include SNI add -servername host. Add -4 or -6 to control the IP version, if needed. This will show subject and issuer names (s: and i:) for each received cert; if any are different, and especially the last, look at #3 or #4. If the names are the same, compare the whole base64 blobs to make sure they are entirely the same (it could be a well-camouflaged attacker). If they are the same, look at #1 or #2.
Alternatively, if policy and permissions allow, get network-level traces with Wireshark or a more basic tool like tcpdump or snoop. In a development environment this is usually easy; if either or both machine(s) are production, or in a supplier, customer/client, or partner environment, maybe not. Check SNI in the ClientHello, and in TLS1.2 (or lower, but nowadays lower is usually discouraged or prohibited) look at the Certificate message received; in Wireshark you can drill down to any desired level of detail. If both your client(s) and server are new enough to support TLS1.3 (and you can't configure them to downgrade), the Certificate message is encrypted and Wireshark won't be able to show you the contents unless you can get at least one of your endpoints to export the session secrets in SSLKEYLOGFILE format.
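Since the rest of this page is Perl-centric, here is a rough Perl sketch (not from the original answer; the host name is a placeholder) for dumping the subject and issuer of the certificate a given machine actually receives, so you can compare what F and W see without openssl s_client:
# Rough sketch: show what certificate this machine receives from the server.
# example.com is a placeholder; verification is disabled only so the handshake
# completes even when validation would normally fail.
use strict;
use warnings;
use IO::Socket::SSL;

my $sock = IO::Socket::SSL->new(
    PeerHost        => 'example.com',
    PeerPort        => 443,
    SSL_verify_mode => SSL_VERIFY_NONE,
) or die "connect/handshake failed: $!, $IO::Socket::SSL::SSL_ERROR";

print "subject: ", $sock->peer_certificate('subject'), "\n";
print "issuer:  ", $sock->peer_certificate('issuer'),  "\n";
print $sock->dump_peer_certificate();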

Postgresql : SSL certificate error unable to get local issuer certificate

In PostgreSQL, whenever I execute an API URL over a secure connection with a query like the one below
select *
from http_get('https://url......');
I get an error
SSL certificate problem: unable to get local issuer certificate
For this I have already placed an SSL folder in my Azure database installation directory at the following path
C:\Program Files\PostgreSQL\9.6\ssl\certs
What should I do to get rid of this? Is there any SSL extension available, or do I require configuration changes or any other effort?
Please let me know the possible solutions for it.
A few questions...
First, are you using this contrib module: https://github.com/pramsey/pgsql-http ?
Is the server that serves https://url....... using a self-signed (or invalid) certificate?
If the answer to those two questions is "yes" then you may not be able to use that contrib module without some modification. I'm not sure how limited your access is to PostgreSQL in Azure, but if you can install your own C-based contrib modules there is some hope...
pgsql-http only exposes certain CURLOPT values (see: https://github.com/pramsey/pgsql-http#curl-options), which are settable with http_set_curlopt().
For endpoints using self-signed certificates, the CURLOPT you would want to patch in support for, in order to ignore SSL errors, is CURLOPT_SSL_VERIFYPEER.
If there are other issues like SSL/TLS protocol or cipher mismatches, there are other CURLOPTs that can be patched-in, but those also are not available without customization of the contrib module.
I don't think anything in your
C:\Program Files\PostgreSQL\9.6\ssl\certs
folder has any effect on the http_get() functionality.
If you don't want to get your hands dirty compiling and installing custom contrib modules, you can create an issue on the github page of the maintainer and see if it gets picked up.
You might also take a peek at https://github.com/pramsey/pgsql-http#why-this-is-a-bad-idea because the author of the module makes several very good points to consider.

Perl script using WWW::Mechanize to connect to https site just started failing

I have a Perl script that uses WWW::Mechanize to connect to a site over
https, and that script just stopped working the other day. The status
and error message I get back are 500 and "Can't connect to
jobs.illinois.edu:443". The URL I'm trying to connect to is
https://jobs.illinois.edu/. I can connect from my browser (Firefox).
My platform is Linux -- up-to-date Arch Linux. I can also connect
(using WWW::Mechanize) to other https sites.
I tried using LWP::UserAgent, and the behavior is the same.
I'm using ssl_opts => { SSL_version => 'TLSv1' }; I don't remember why
I added that -- it may have been necessary to get it working at some
point.
Any ideas on how to fix this, or how I might get more information as
to what the problem is? Are there other ssl options I can try?
I have a feeling there was some slight configuration change on the
site that led to this problem -- maybe some SSL-protocol version
change or something like that. (I don't think I updated anything
on my machine in between the times it worked and stopped working.)
Thanks.
Here's sample code that fails:
#!/usr/bin/perl
use strict;
use warnings;
use constant AJB_URL => 'https://jobs.illinois.edu/academic-job-board';
use WWW::Mechanize;
my $mech = WWW::Mechanize->new( ssl_opts => { SSL_version => 'TLSv1' } );
$mech->get( AJB_URL );
It returns:
Error GETing https://jobs.illinois.edu/academic-job-board: Can't connect to jobs.illinois.edu:443 at ./test2.pl line 12.
... that script just stopped working the other day.
Which in most cases is caused by server-side or client-side changes. But I assume that you did not make any changes on the client side.
Calling your code with perl -MIO::Socket::SSL=debug4... gives:
DEBUG: ...SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Looking at the SSLLabs report you see two trust paths, where one requires an extra download. The root CA "USERTrust RSA Certification Authority" for the first trust path is not installed on my system (Ubuntu 14.04), and I guess it is not installed on yours either (no information about your OS is given, so I'm just guessing). This means the second trust chain will be used, and the relevant root CA "AddTrust External CA Root" is also installed on my system. Unfortunately this trust chain is missing an intermediate certificate ("Extra download"), so the verification fails.
To fix the problem, find the missing root-CA which should match the fingerprint 2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e and use it:
$ENV{PERL_LWP_SSL_CA_FILE} = '2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e.pem';
Looking at the certificate you see that it was issued on 22 May 2015, i.e. three days ago. This explains why the problem happened just now.
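For reference, a rough sketch of wiring an extra CA certificate into WWW::Mechanize via ssl_opts instead of the environment variable (the .pem path is a placeholder for whatever bundle you assemble):
#!/usr/bin/perl
# Sketch: point IO::Socket::SSL at a CA bundle that contains the missing certificate.
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new(
    ssl_opts => {
        SSL_ca_file => '/path/to/missing-ca-and-friends.pem',  # placeholder path
        # consider dropping the SSL_version pin from the original script unless
        # you know the server requires it
    },
);

$mech->get('https://jobs.illinois.edu/academic-job-board');
print $mech->status, "\n";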

Perl SOAP::WSDL accessing HTTPS Unauthorized error

I'm trying to generate a Perl library to connect to a WebService. This webservice is in an HTTPS server and my user has access to it.
I've executed wsdl2perl.pl several times, with different options, and it always fails with the message: Unauthorized at /usr/lib/perl5/site_perl/5.8.8/SOAP/WSDL/Expat/Base.pm line 73.
The thing is, when I don't give my user/pass as arguments, it doesn't even ask for them.
I've read SOAP::WSDL::Manual::Cookbook (http://search.cpan.org/~mkutter/SOAP-WSDL-2.00.10/lib/SOAP/WSDL/Manual/Cookbook.pod) and done what it says about HTTPS: Crypt::SSLeay is installed, and both SOAP::WSDL::Transport::HTTP and SOAP::Transport::HTTP are modified.
Can you give any hint about what may be going wrong?
Can you freely access the WSDL file from your web browser?
Can someone else in your network access it without any problems?
Maybe the web server hosting the WSDL file requires Basic or some other kind of Authentication...
Unless you have to, I don't recommend using Perl as a web service client. Perl does support the SOAP protocol, but its support does not seem very standard: the documentation is not very clear, the feature support is sometimes limited, and bugs crop up here and there.
So, if you have to use wsdl2perl, you can use Komodo to step through the code and find out what is happening; that is what I used to do when using Perl as a web service client. Behind HTTPS is SSL, so if your SSL setup uses certificate-based authorization, you have to set up your certificate path and the list of trusted server certificates. It is worth testing with Firefox on Linux first: you can configure Firefox's certificate path and trusted certificate list, and if Firefox can communicate with your web service server successfully, then it is time to debug your Perl client.
To debug situations with Perl and SOAP, interpose a web proxy so you can see exactly what data is being passed and what response comes back from the server. You were getting a 401 Not authorized, I expect, but there may be more detail in the server response.
Both Fiddler http://docs.telerik.com/fiddler and Charles proxy https://www.charlesproxy.com/ can do this.
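As a rough sketch (the proxy address is Fiddler's default listener and just an example, not from the original answer), with the Crypt::SSLeay stack mentioned in the question you would typically route the traffic through the debugging proxy via an environment variable:
# Route LWP/Crypt::SSLeay HTTPS requests through a local debugging proxy
# (Fiddler or Charles); 127.0.0.1:8888 is Fiddler's default listener.
$ENV{HTTPS_PROXY} = 'http://127.0.0.1:8888/';
# With the newer LWP::Protocol::https/IO::Socket::SSL stack you would instead
# configure it on the user agent, e.g. $ua->proxy(['http', 'https'] => 'http://127.0.0.1:8888/').
# Either way, the proxy's root certificate has to be trusted (or verification
# relaxed) while debugging, since the proxy re-signs the server certificate.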
The error message you quote seems to be from this line :
die $response->message() if $response->code() ne '200';
and in the HTTP world, Unauthorized is clearly error code 401, which means the website asks for a username and password (most probably; some websites may "hijack" this error code to cater for other conditions, like a filter on the source IP).
Do you have them?
If so, you can
after wsdl2perl has run, find in the created files where set_proxy() is called and change the URL in there to include the username and password, like this: ...->set_proxy('http://USERNAME:PASSWORD@www.example.com/...')
or, in your code, after instantiating the SOAP::WSDL object, call service(SERVICENAME) on it (for each service you have defined in your WSDL file), which gives you a new object; on that object call transport() to access the underlying transport object, on which you can call proxy() with the URL formatted as above (yes, it is proxy() here and set_proxy() above); or you call credentials() instead of proxy() and pass it 4 strings (see the sketch after this list):
'HOSTNAME:PORT'
the realm, as given by the webserver but I think you can put anything
the username
the password
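A rough sketch of that second approach, following the steps described above; the WSDL URL, service name, host, realm, and credentials are all placeholders, and the exact accessors depend on your SOAP::WSDL version:
# Hypothetical sketch only: names and values below are placeholders.
use SOAP::WSDL;

my $soap = SOAP::WSDL->new( wsdl => 'https://www.example.com/service?wsdl' );

# service() hands back an object whose transport() exposes the LWP-style
# credentials() call: 'HOST:PORT', realm, username, password.
my $service = $soap->service('ExampleService');
$service->transport->credentials(
    'www.example.com:443',
    'Example Realm',
    'USERNAME',
    'PASSWORD',
);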

haproxy - which configuration files

I have an HAProxy install which was configured by someone who left the company. It runs on Ubuntu 10.04 and it seems to use 3 configuration files in the directory /etc/haproxy
haproxy.cfg
haproxy.http.cfg
haproxy.https.cfg
I don't see the point of the haproxy.https.cfg file, as I believe (in our configuration) everything could be configured from a single haproxy.http.cfg file, but when I remove that httpS file HAProxy complains bitterly and refuses to run. My question:
Is this the standard configuration HAProxy uses? If not, I can't find a reference to the "S" file anywhere; can anyone suggest how HAProxy concludes it should use it?
Thanks
The very answer to your question: your haproxy is simply launched with those three config files (-f haproxy.cfg -f haproxy.http.cfg -f haproxy.https.cfg, maybe from /etc/init.d/haproxy, but mileage varies depending on your distribution).
If you remove the file, of course it will complain.
This is not particularly standard, but it isn't bad either; it helps structure the configuration rather than having one very long file.
The task of the .https version will almost certainly be to redirect the HTTPS traffic towards a service that can handle HTTPS (usually stunnel or nginx), since haproxy cannot terminate SSL connections itself (stunnel has to be patched; see the haproxy page).
If you want, you can merge those files into one or two; just find out how haproxy is launched (check init.d, or let us know which distribution) and adjust it accordingly.
I believe that it is only /etc/haproxy/haproxy.cfg that is used by default.
This may be of use to you (1.4 configuration reference):
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt