LetsEncrypt on multiple HAProxy instances across servers - haproxy

Looking at the instructions here: https://certbot.eff.org/lets-encrypt/ubuntubionic-haproxy
I'm in a situation where I have two HAProxy instances, each in a Docker container, on different machines. The domain names are the same; this is done for redundancy.
Googling "multiple letsencrypt" or "multiple certbot" just leads to solutions for creating certificates for many domains at the same time.
That is fine for subdomains, but it doesn't explain what I'm expected to do if I have more than one server running HAProxy.
Run certbot on one server only, then copy the files over? If so, what about renewing the certificate? Can it no longer be automated?
Also, because of the URLs, certain subdomains will go to one server or the other, but both must be able to serve all the URLs.
Or does this situation call for a different approach entirely? Should I use manual mode, generate the certificates, and then update them manually?
Thanks for any help.

Eventually I found a solution: you can start certbot on a custom port with --http-01-port, as described here: https://eff-certbot.readthedocs.io/en/stable/using.html.
If all your HAProxy instances detect the incoming challenge path "/.well-known/acme-challenge", you can have them forward those requests to that host/port combination, so all challenges end up at certbot.
Then you just need a way to distribute the resulting certificate to the other servers.
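For reference, a minimal sketch of that setup. The port (8888), the internal hostname cert1.internal, the domain names and file paths are assumptions for illustration, not values from the question:

    # On the machine where certbot runs (standalone mode on a custom, non-privileged port):
    certbot certonly --standalone --http-01-port 8888 -d example.com -d www.example.com

    # In each haproxy.cfg frontend, detect the ACME challenge path and send it to certbot:
    #   acl is_acme path_beg /.well-known/acme-challenge/
    #   use_backend acme_challenge if is_acme
    #
    #   backend acme_challenge
    #       server certbot cert1.internal:8888

    # Afterwards, build the combined cert+key PEM that HAProxy expects and copy it to the other hosts:
    cat /etc/letsencrypt/live/example.com/fullchain.pem \
        /etc/letsencrypt/live/example.com/privkey.pem > /etc/haproxy/certs/example.com.pem
    scp /etc/haproxy/certs/example.com.pem haproxy2:/etc/haproxy/certs/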

I would suggest going with getssl, a "simple" Bash script that takes care of:
- deploying the challenge file to all the required nodes, in the right place, and even reloading the remote node's web server
- deploying/copying the generated SSL certificate files to the remote nodes as well
It can use SSH, SFTP or FTPS to transfer files. You can then add a cron job to execute getssl every day; it will renew the certificate and distribute it when done (a config option lets you control when the certificate is renewed).
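As a rough illustration, such a setup could look like the sketch below. The hostnames and paths are assumptions, and the option names should be double-checked against the getssl documentation for the version you install:

    # ~/.getssl/example.com/getssl.cfg (excerpt):
    #   SANS="www.example.com"
    #   ACL=('ssh:haproxy1:/var/www/.well-known/acme-challenge'
    #        'ssh:haproxy2:/var/www/.well-known/acme-challenge')
    #   DOMAIN_CERT_LOCATION="ssh:haproxy1:/etc/haproxy/certs/example.com.crt"
    #   RELOAD_CMD="ssh haproxy1 'systemctl reload haproxy'"
    #   RENEW_ALLOW="30"

    # daily cron entry: check all configured domains, renew and redistribute when due
    0 4 * * * /usr/local/bin/getssl -u -a -q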

Related

LetsEncrypt SSL Certificates with multi domains and multi subdomains

We've been using a PositiveSSL Multi-Domain Cert for some years, and that's been working fine. Under that Cert, we have, for instance:
domain1:
mail.domain1.com
www.domain1.com
domain1.com
domain2:
mail.domain2.fr
www.domain2.fr
domain2.fr
etc., with a total of 5 different domains.
Now, since we're going to expand our domain base and the current cert is expiring, we're looking closely at Let's Encrypt.
Before I get into this, however, I'd like to know a couple of things:
(1) Does every subdomain (mail., www., etc.), as well as its respective main domain, have to be listed in the certificate? I'm mainly asking because (a) that was my original understanding, and (b) the verification stage with Let's Encrypt will differ (preferred-challenges=dns instead of the default Apache-based challenge), which will lead me to add DNS records for each domain/subdomain.
(2) If that is indeed needed (and if I have no choice but to use preferred-challenges=dns), do the DNS records still have to be present at the time of the next cert renewal (i.e. < 90 days)? I'm asking because the last time I left the DNS records in place after creation, the mail server could no longer be reached once DNS had propagated. I'm pretty sure that was due to my own bad setup, but it's a risk I'd rather not take.
(3) If I'm missing something here or you have better advice, let me know.
At https://websocket.email I am using the Subject Alternative Name (SAN) mechanism to handle api.websocket.email etc. I did not have to configure DNS records and used HTTP challenges. The exact way you would do this depends on your ACME client; mine had the option listed under a config section called "alternative names".
Edit: To clarify, I needed DNS records to point all my subdomains to the same server. I am using this ACME client - https://man.openbsd.org/acme-client.conf.5 - with the alternative names option set. When getting my certificates I could see, in my HTTP logs, Let's Encrypt fetching a single challenge file per domain to prove I own it.
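For what it's worth, the relevant part of acme-client.conf(5) looks roughly like this sketch; the key and certificate paths are assumptions, and only the hostnames come from the answer above:

    authority letsencrypt {
        api url "https://acme-v02.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-privkey.pem"
    }

    domain websocket.email {
        alternative names { api.websocket.email }
        domain key "/etc/ssl/private/websocket.email.key"
        domain full chain certificate "/etc/ssl/websocket.email.fullchain.pem"
        sign with letsencrypt
    }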
In case someone has a similar issue or plan, this is what I've learned during this process:
Every domain name that the certificate will be used for has to be included. Let’s Encrypt will validate each of the domain names in the certificate.
If using DNS-01 validation, all of them will require DNS TXT records. The domains also have to be re-validated every 90 days (or whatever renewal interval you choose), which means adding a new TXT record each time; leaving the old TXT records from a previous validation will not work, because the value changes each time.
There are also HTTP-01 and TLS-SNI-01 challenge types (the latter has since been deprecated by Let's Encrypt). If you already have an Apache instance running, I recommend using HTTP-01.
If you don't have a webserver on the machine that runs mail.domain1.com, you can still use HTTP-01. Certbot has a "standalone" mode in which it starts a small, purpose-built webserver to answer the HTTP-01 challenges and provision a certificate.
To use HTTP or TLS-SNI validation on a non-web server, you would run something like: certbot certonly --standalone -d mail.example.com. You still need port 80 or 443 open in your firewall for this method, but you need no server running on those ports. If you only want to open one port, you can specify the challenge type explicitly, e.g. --preferred-challenges http to use port 80 or --preferred-challenges tls-sni to use port 443.
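To make the last point concrete, a short sketch using the placeholder hostname from the answer (given the TLS-SNI-01 deprecation, only the HTTP-01 variant remains relevant in practice):

    # HTTP-01 challenge; certbot listens on port 80 itself, no webserver needed
    certbot certonly --standalone --preferred-challenges http -d mail.example.com

    # renewals can then be automated, e.g. via a daily cron entry:
    #   30 3 * * * certbot renew --quiet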

How to propagate truststore updates in a cluster using Wildfly?

I have an application running on Wildfly 10 in a domain setup with more than 10 machines. Clients consume REST web services using SSL authentication; since we will be adding clients on a daily basis, it is important to be able to propagate changes to the truststore to the whole server group.
It's not an option to centralize the truststore in one machine due to concurrency levels.
I would like to know if there is a way to achieve this using the CLI or any other alternatives.
Thanks in advance!
Given that Wildfly does not support reloading the truststore at runtime (see https://access.redhat.com/solutions/482133), you would copy the truststore file to all servers (by hand, by script, or with Puppet/Ansible/your DevOps tool of choice) and use the CLI to restart the affected server groups in the domain.
See also https://github.com/wildfly/quickstart/tree/10.x/helloworld-war-ssl for an example of implementing SSL auth. Basically, all clients get a certificate from your own CA, which you add to the truststore once. Then use RBAC for the authorization.
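A minimal sketch of that workflow, assuming a domain-mode layout with a server group called main-server-group, the truststore under the domain configuration directory, and the listed hostnames (all of these names are assumptions):

    # copy the updated truststore to every host
    for host in host1 host2 host3; do
        scp application.truststore "$host:/opt/wildfly/domain/configuration/"
    done

    # restart the affected server group from the domain controller CLI
    /opt/wildfly/bin/jboss-cli.sh --connect \
        --command="/server-group=main-server-group:restart-servers"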

How to update SSL certificate on EC2 instances

Here is my dilemma. We currently run quite a few servers on the AWS EC2 service. Before my time, the server images were configured with the SSL certificate baked in. Now the certificate is about to expire and we need to replace the old one with the new one. I have read the AWS documentation on uploading a new certificate to IAM, but it is very confusing. Is there any way, for example using PowerShell commands, to upload the new certificate to the existing servers?
Thanks in advance.
If the expiring certificates are on existing instances and NOT on an Elastic Load Balancer, then you need to update each server individually, on that server.
It is not an IAM-type server certificate.
So you need to touch each server and upgrade it. If you have AMIs for each server, you may need to create new AMIs after upgrading the certificate.
See Install certificate with PowerShell on remote server for some suggestion on PowerShell methods of installing a certificate file remotely.
Depending on your budget, you could consider using an ELB even for a single instance and installing the SSL cert there. In the long run it is easier to manage certs at the ELB level than at the server/AMI level.
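If you do move the certificate to the load balancer, the switch can be scripted with the AWS CLI along these lines (a sketch assuming an Application Load Balancer and ACM; the ARNs are placeholders):

    # import the renewed certificate into ACM
    aws acm import-certificate \
        --certificate fileb://cert.pem \
        --private-key fileb://privkey.pem \
        --certificate-chain fileb://chain.pem

    # point the HTTPS listener at the new certificate
    aws elbv2 modify-listener \
        --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
        --certificates CertificateArn=arn:aws:acm:...:certificate/...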

Are/can SSL certificates be specific to the service (e.g. server uses different certificate for HTTPS than for SMTP/TLS)

I can't work out a definitive answer on this, but from searching I found two links which seem to indicate that a server (in this case MS Exchange, as per the links) can have a different certificate in place for HTTPS than for secure SMTP/TLS.
http://technet.microsoft.com/en-GB/library/bb851505(v=exchg.80).aspx
https://www.sslshopper.com/article-how-to-use-ssl-certificates-with-exchange-2007.html
I have an issue which no one has been able to help with here, and this question is a follow-on: I am coming to suspect that my first problem is that my machine trusts the HTTPS certificate but not the one being used for SMTP/TLS. What I'm asking now is: is that even possible?
Going through the diagnostic steps here shows me that the certificates in use when I access my mail server's web interface over HTTPS are fully trusted. However, the debug output of my C# process reports a completely different certificate, issued by one of our servers to itself (the server on which Exchange is installed).
So... does anyone know whether I'm thinking along the right lines: is it possible that when I make an HTTPS connection I get one certificate, and when I use the .NET SMTP client I get a completely different certificate (from exactly the same address, but, I assume, a different port)?
Is it possible that when I make an HTTPS connection I get one certificate, and when I use the .NET SMTP client I get a completely different certificate (from exactly the same address, but, I assume, a different port)?
Yes, you can have a different certificate for each listening socket on the machine; that is, SMTP and HTTPS can use different certificates. On a machine with multiple hostnames you could even have multiple different certificates on a single socket, distinguished by the hostname (using SNI).
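One quick way to confirm this from the client side is to look at what each port actually serves (a sketch; the hostname and the submission port 587 are assumptions, your server may use port 25 instead). If the fingerprints differ, the two services are using different certificates:

    # certificate presented on HTTPS (port 443)
    openssl s_client -connect mail.example.com:443 -servername mail.example.com </dev/null 2>/dev/null |
        openssl x509 -noout -subject -issuer -fingerprint

    # certificate presented on SMTP after STARTTLS (port 587)
    openssl s_client -connect mail.example.com:587 -starttls smtp </dev/null 2>/dev/null |
        openssl x509 -noout -subject -issuer -fingerprint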

Installing Wildcard SSL Certificate on Azure VM

I'm developing an application on an Azure VM and would like to secure it using the wildcard SSL certificate that I'm already using with my main domain. The SSL cert works with any *.mydomain.com, and the application on the Azure VM is accessible through myapplication.cloudapp.net.
Based on the research I've done, a CNAME should be the best option (I can't use an A record, since we need to shut down the VMs every week and turn them back on the next week, and we will lose the IP addresses).
My two questions are:
How can I have myapplication.cloudapp.net be shown as subdomain.mydomain.com?
Will doing that make it possible for wildcard SSL certificate to be used for Azure application too?
How can I have myapplication.cloudapp.net be shown as subdomain.mydomain.com?
This is just CNAME forwarding, combined with ensuring that the appropriate SSL certificate is installed on the server.
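Concretely, the DNS side is a single record, and the result can be verified along these lines (a sketch using the names from the question):

    # DNS record (zone-file notation):
    #   subdomain.mydomain.com.  IN  CNAME  myapplication.cloudapp.net.

    # verify the CNAME and check which certificate the endpoint presents
    dig +short CNAME subdomain.mydomain.com
    openssl s_client -connect subdomain.mydomain.com:443 -servername subdomain.mydomain.com </dev/null 2>/dev/null |
        openssl x509 -noout -subject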
Will doing that make it possible for wildcard SSL certificate to be used for Azure application too?
Well, as you're already exposing the application through the VM, this should happen seamlessly.
Just a word of caution: you mention that you're using the certificate on the main domain, but you haven't mentioned where you're using it. Be aware that, out of the box, you can only assign one SSL certificate per HTTPS endpoint. You can enable multiple SSL certificates on an endpoint for Azure / IIS using Server Name Indication (SNI), either directly or automatically. If you do take this route, remember to configure your SNI bindings first and then apply the default binding - it gets messed up otherwise.