SSL application load balancer on AWS WITHOUT a custom domain - aws-cloudformation

Is it possible to give an application load balancer on AWS an SSL certificate, allowing only HTTPS connections, if I don't want to use a custom domain?
I'm currently developing some internal dashboard applications, so I have no need or desire for a domain name attached to them.
I can only dig up info and tutorials on creating a certificate in CloudFormation when a domain is also being pointed at the LB.

The SSL certificate has to have a valid DNS name associated with it in order to work. You need to request a certificate via ACM and then attach that to the ELB. You can configure the ELB to only have an HTTPS listener to force secure communication.
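For example, a minimal CloudFormation sketch of that setup could look like the following (dashboard.example.com, DashboardLoadBalancer and DashboardTargetGroup are placeholders for your own names and resources, not anything from the question):
Resources:
  DashboardCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: dashboard.example.com   # must be a DNS name you control
      ValidationMethod: DNS               # stack creation pauses until the validation CNAME exists
  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref DashboardLoadBalancer    # ALB assumed to be defined elsewhere in the template
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: !Ref DashboardCertificate
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref DashboardTargetGroup  # target group assumed to be defined elsewhere
Declaring only the 443 listener (and no port-80 listener) is what limits the ALB to HTTPS.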

Probably not.
It's not generally kosher to issue an SSL certificate to an IP address, and since all *.compute.amazonaws.com-style DNS names are floating and could be reassigned at any moment, CAs damn well won't issue one for them either. (The same goes for Let's Encrypt, by the way: you have to have a DNS name of your own, not one issued by the provider.)
Just give your internal service a DNS name, be it something like mydashboard.internal.mycompany.com or whatever; it'll be easier to access, too.
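If that zone lives in Route 53, the name is just one more resource in the same stack - again only a sketch, with the zone, record name and DashboardLoadBalancer reference as assumptions:
DashboardDnsRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: internal.mycompany.com.   # note the trailing dot
    Name: mydashboard.internal.mycompany.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt DashboardLoadBalancer.DNSName
      HostedZoneId: !GetAtt DashboardLoadBalancer.CanonicalHostedZoneID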

Related

Keycloak internal and external link

I understand that this question has been asked and discussed in different forms before. However, I still haven't found clear guidelines on how to handle the situation.
Our Keycloak setup has multiple Keycloak replicas and sits behind a load balancer without a fixed IP, in a separate infrastructure, so our DNS records look like:
CNAME keycloak.acme.com public-lb.acme.com
And public-lb.acme.com forwards requests to the individual Keycloak instances.
One of our end-user applications is located in a completely different infrastructure with strict access controls. The end-user application is built in Java and uses the Keycloak integration org.keycloak:keycloak-servlet-filter-adapter. We do not have any custom adapters and simply follow the "standard" configuration:
{
"auth-server-url" : "https://keycloak.acme.com",
..
However, this does not work, since the keycloak.acme.com IP address has to be whitelisted in that "special" infrastructure, so validation requests from the application inside the "special" infrastructure never reach Keycloak. And we cannot whitelist the IP, because the IP of our load balancer public-lb.acme.com is not fixed and changes over time.
We have a "tunnel" between the Keycloak infrastructure and that "special" infrastructure with a dedicated IP CIDR range, which is whitelisted.
Hence we have created a special internal load balancer that sits in the tunnel's CIDR range and forwards requests to the Keycloak replicas. Unfortunately, that internal load balancer does not have a fixed IP address either, and it can change over time.
Since we do not have a fixed IP address, is the only correct method to add a DNS record inside the "special" infrastructure pointing to the internal load balancer? Something like:
CNAME keycloak.acme.com internal-lb.acme.com
Or are there any alternative solutions? I understand the historical reasons behind this.

How to create a Trusted CA Signed certificate for Service Fabric

All the documentation for Service Fabric mentions that for a production cluster you should use an X509 certificate from a trusted CA with the common name of the cluster address. The problem is I can't find any documentation on the process of obtaining the certificate. As far as I can tell, to obtain a certificate you need to prove you are who you say you are, and to do so you either need to own the domain or expose some sort of file at the specified address.
The problem is that the URL of the cluster is on a domain owned by Microsoft, and my cluster is not exposed to the outside world as a website. Am I missing something? Do I have to create a web service and expose it just to obtain a certificate?
You can use a free solution like Let's Encrypt; for this it's not required to own the domain (specifically, to control the DNS records). They also provide the option to respond to an HTTP-based challenge as proof of control.
To kick off the process, the agent asks the Let’s Encrypt CA what it needs to do in order to prove that it controls example.com. The Let’s Encrypt CA will look at the domain name being requested and issue one or more sets of challenges. These are different ways that the agent can prove control of the domain. For example, the CA might give the agent a choice of either: provisioning a DNS record under example.com, or provisioning an HTTP resource under a well-known URI on https://example.com/
An easy way to get started with Let's Encrypt is by using Certbot.
Certbot needs to run on the host behind the domain so that it can respond to the HTTP challenge, which results in a certificate being issued for your specific cluster endpoint.
Maybe this sample project helps.

Setting up SSL on EC2 instances (FQDN)

Sorry if the question is too basic.
I want to set up SSL on my EC2 instance, let us say foo.us-west1.compute.amazonaws.com. And I want to create a record set of type A to point a subdomain at my EC2 instance, let us say sub.foo.com. When I am creating an SSL certificate, which of the two domains do I have to use?
I'll be saving the certificate and key on the EC2 instance and configuring Nginx to use them.
The subject of the certificate must match the domain in the URL used to access the site. Neither the IP address nor a DNS alias (CNAME) matters - all that matters is the domain name used in the URL, since that is what the client uses to verify the certificate.
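So in this case the certificate should be issued for sub.foo.com, the name users actually type, and that name just needs to point at the instance. A hypothetical Route 53 record set for it (the address is a placeholder for your instance's Elastic IP):
SubdomainRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: foo.com.   # note the trailing dot
    Name: sub.foo.com.         # the name the certificate is issued for
    Type: A
    TTL: "300"
    ResourceRecords:
      - "203.0.113.10"         # placeholder for the instance's Elastic IP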

HTTPS for local IP address

I have a gadget[*] that connects to the user's WiFi network and responds to commands over a simple REST interface. The user uses a web app to control this gadget. The web app is currently served over HTTP, and the app's JavaScript makes AJAX calls to the gadget's local IP address to control it. This scheme works well and I have no issues with it.
[*] By "gadget" I mean an actual, physical IoT device that the user buys and installs within their home, and configures to connect to their home WiFi network
Now, I want to serve this web app over HTTPS. I have no issue setting up HTTPS on the hosting side. The problem is that the browser now blocks access to the gadget (since the gadget's REST API is over HTTP and not HTTPS).
The obvious solution is to have the gadget serve its REST API over HTTPS. But how? It has a local IP address, and no one will issue a certificate for it. (Even if they did, I'd have to buy a boatload of certificates, one for each possible local IP address.) I could round-trip via the cloud (by adding additional logic on my server side to accept commands from the web app and forward them to the gadget over another connection), but this would increase latency.
Is there a way around this problem? One possibility that I have in mind is to:
Get a wildcard certificate (say, *.mydomain.com)
Run my own DNS that maps sub-domains to a local IP address following a pattern (For example, 192-168-1-123.mydomain.com would map to 192.168.1.123)
Use the wild-card certificate in all the gadgets
My web app could then make AJAX calls to https://192-168-1-123.mydomain.com instead of http://192.168.1.123 and latencies would remain unaffected aside from the initial DNS lookup
Would this work? It's an expensive experiment to try out (wildcard certificates cost ~$200) and running a DNS server seems like a lot of work. Plus I find myself under-qualified to think through the security implications.
Perhaps there's already a service out there that solves this problem?
While this is a pretty old question, there is still no out-of-the-box solution for it today.
Just as @Jaffa-the-cake posted in a comment, you can lean on how Plex did it, which Filippo Valsorda explained in his blog:
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
This is very similar to what you proposed yourself. You don't even need a wildcard certificate, but you can generate certificates on-the-fly using Let's Encrypt. (You can still use wildcard certificates, if you want, which Let's Encrypt supports now, too.)
Just yesterday I did a manual proof-of-concept for that workflow, which can be automated with the following steps:
Write a Web Service that can create DNS entries for individual devices dynamically and generate matching certificates via Let's Encrypt - this is pretty easy using certbot and e.g. Google Cloud DNS. I guess Azure, AWS and others have similar offerings, too. When you use certbot's DNS plugins, you don't even need to have an actual web server running on port 80/443.
On your local device, contact that Web Service to generate a unique DNS entry (e.g. ..yourdns.com) and a certificate for that domain
Use that certificate in your local HTTPS server
Browse to that domain instead of your local IP
Now you will have an HTTPS connection to your local server, using a local IP, but a publicly resolved DNS entry.
The downside is that this does not work offline from arbitrary clients. And you need to think of a good security concept to create trust between the client that requests a DNS entry and certificate, and your web service that will generate those.
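To make the DNS half of that concrete: the record created per device simply maps the dashed hostname to the gadget's LAN address - shown here, purely as an illustration, as a Route 53 record set (the workflow above would create it through a DNS provider API, and every name is a placeholder):
DeviceRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: yourdns.com.
    Name: 192-168-1-123.yourdns.com.   # matches the certificate issued for this device
    Type: A
    TTL: "300"
    ResourceRecords:
      - "192.168.1.123"                # the gadget's private LAN address
Public DNS is allowed to return private addresses like this, although some routers' DNS rebinding protection may refuse to resolve them.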
BTW, do you mind sharing what kind of gadget it is that you are building?
If all you want is to access the device APIs through the web browser, a simple solution would be to proxy all requests to the device through your web server. This way, even self-signed certs for the devices won't be a problem. The only problem, though, is that the server would have to be on the same network as your devices.
If you are not on the same network, you can write a simple browser plugin (Chrome) to send the API requests to the IoT device, but then the dependency on the app/plugin will be clumsy.

Installing Wildcard SSL Certificate on Azure VM

I'm developing an application on an Azure VM and would like to secure it using the wildcard SSL certificate that I'm already using with my main domain. The SSL cert works with any *.mydomain.com, and the application on the Azure VM is accessible through myapplication.cloudapp.net.
Based on the research that I've done, a CNAME should be the best option (I can't use an A record, since we need to shut down the VMs every week and turn them back on the next week, and we lose the IP addresses).
My two questions are:
How can I have myapplication.cloudapp.net be shown as subdomain.mydomain.com?
Will doing that make it possible for wildcard SSL certificate to be used for Azure application too?
How can I have myapplication.cloudapp.net be shown as subdomain.mydomain.com?
Yes - this is just the CNAME forwarding and ensuring that the appropriate SSL certificate is installed on the server.
Will doing that make it possible for wildcard SSL certificate to be used for Azure application too?
Well, as you're already exposing the application through the VM, this should happen seamlessly.
Just a word of caution: you mention that you're using the certificate on the main domain, but haven't mentioned where you're using it. Be aware that, out of the box, you can only assign one SSL certificate per HTTPS endpoint. You can enable multiple SSL certificates on an endpoint for Azure / IIS using Server Name Indication (SNI), which can be enabled directly or automatically. If you do take this route, remember to configure your SNI bindings first, then apply the default binding - it kinda screws up otherwise.