Valid HTTPS certificate with Docker and Traefik - docker-compose

I'm trying to create a local website using Docker Compose and Traefik. I was able to get the HTTPS domain working, but the certificate Traefik generates is not valid.
This is the configuration of my services
You can access the code here: https://gist.github.com/jdeg/6b9cd5283d71edf3304ab9d0a9cce75d
What is the correct way to create a valid certificate with Docker Compose and Traefik?
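For reference, a minimal docker-compose sketch of one common approach: Traefik v2 with a Let's Encrypt certificate resolver. The domain, email address, and whoami backend are placeholders, and Let's Encrypt only issues browser-trusted certificates for publicly resolvable domains; for a purely local hostname you would point Traefik at a locally trusted certificate (e.g. one generated with mkcert) instead.

version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      # read routing rules from Docker container labels
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      # ACME (Let's Encrypt) resolver; email and storage path are placeholders
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - 443:443
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro
  whoami:
    # placeholder backend; replace with your own service
    image: traefik/whoami
    labels:
      - traefik.http.routers.whoami.rule=Host(`example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le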

Related

Host name does not match the certificate subject in deployment

I am facing an issue with the following error in a Kubernetes deployment for the HTTPS certificate:
Error: Host name does not match the certificate subject provided by the peer (CN=customer.endpoint.com)
My application is reachable via a network IP address and port number, and that IP is dynamic because it belongs to the pods. So how do we alias customer.endpoint.com to avoid the above issue?
To access your application, you first have to create a Service for it. Read more here: kubernetes-services.
Then you have to create a TLS certificate for a Kubernetes Service accessed through DNS.
Please take a look at tls-certificates. That documentation shows how to set up certificates properly.
The flow looks like this (a condensed sketch follows the list):
1. Create a Service to expose your app, for example of type ClusterIP.
Remember that choosing this value makes the Service reachable only from within the cluster; it is the default ServiceType.
2. Download and install CFSSL - source: pkg-cfssl.
3. Create a Certificate Signing Request.
4. Create a Certificate Signing Request object to send to the Kubernetes API.
5. Get the Certificate Signing Request approved.
6. Download the certificate and use it.
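A condensed sketch of steps 1-5, following the tls-certificates documentation; my-svc, my-namespace, the ports, and the signer name example.com/serving are placeholders:

# 1. ClusterIP Service exposing the app (names and ports are examples)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-namespace
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
EOF

# 3. Generate a key plus CSR for the Service's DNS name with CFSSL
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": ["my-svc.my-namespace.svc.cluster.local"],
  "CN": "my-svc.my-namespace.svc.cluster.local",
  "key": {"algo": "ecdsa", "size": 256}
}
EOF

# 4. Wrap the CSR in a CertificateSigningRequest object for the Kubernetes API
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(base64 < server.csr | tr -d '\n')
  signerName: example.com/serving
  usages: ["digital signature", "key encipherment", "server auth"]
EOF

# 5. Approve the request, then download the issued certificate
kubectl certificate approve my-svc.my-namespace
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt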

Keycloak Admin Console requires HTTPS when connected remotely (Should I disable SSL)

I am connecting to Keycloak remotely, and when I try to open the Admin Console I get an error saying HTTPS is required.
One website says that I should run this: "update REALM set ssl_required='NONE' where id = 'master';"
But I do not know the consequences of doing this. Will it make the setup insecure? And can I reverse it?
Thank you
(And if I ran Keycloak in a Docker image, would this problem be solved?)
The Admin Console uses the OpenID Connect protocol, which needs HTTPS to be secure, so it isn't a good idea to disable SSL in a production environment.
Running Keycloak in a container doesn't solve your problem by itself, but it provides a user-friendly way to generate a self-signed certificate - see Keycloak Docker HTTPS required.
The secure option is to obtain a valid TLS certificate and use it in your Keycloak instance, so that you have a valid, secure TLS/HTTPS connection.
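As for reversing it: ssl_required is just a column on the REALM table, so the change can be reverted with another update. Assuming the realm originally had Keycloak's default value (EXTERNAL, which requires HTTPS only for non-local requests), something like:

update REALM set ssl_required='EXTERNAL' where id = 'master';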

How to enable HTTPS for Keycloak in a JHipster-generated project?

I am trying to enable HTTPS for Keycloak in a JHipster-generated project. The JHipster doc (https://www.jhipster.tech/security/) says: "In production, it is required by Keycloak that you use HTTPS. There are several ways to achieve this, including using a reverse proxy or load balancer that will manage HTTPS. We recommend that you read the Keycloak HTTPS documentation to learn more about this topic." And the Keycloak doc has a step that says "First, you must edit the standalone.xml, standalone-ha.xml, or host.xml file".
Sounds reasonable, right? But when installing and running the Keycloak server on a Mac, that configuration file lives under /opt/jboss/keycloak..., whereas when running Keycloak within the JHipster-generated project (using the nice and easy command 'docker-compose -f src/main/docker/keycloak.yml up'), I find that there is no such folder /opt/jboss/... Either I did something wrong, or the file lives somewhere else, such as inside the Docker container. So the question is: how should we enable HTTPS on the Keycloak shipped with a JHipster-generated project?
I would very much appreciate any help from the community. Thanks!
Expose HTTPS port 8443 of your Keycloak container and you will get self-signed HTTPS, e.g.:
ports:
- 443:8443
Plus, use volumes if you have your own TLS certificate, e.g.:
volumes:
- /path/my-cert.crt:/etc/x509/https/tls.crt
- /path/my-cert.key:/etc/x509/https/tls.key
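Put together, the keycloak service entry in src/main/docker/keycloak.yml would look roughly like the sketch below; the image line is a placeholder for whatever tag your generated file already pins, and the certificate paths are the same hypothetical ones as above:

services:
  keycloak:
    image: jboss/keycloak  # placeholder: keep the image/tag from your generated file
    ports:
      - 443:8443  # 8443 serves self-signed HTTPS out of the box
    volumes:
      # only needed when bringing your own certificate
      - /path/my-cert.crt:/etc/x509/https/tls.crt
      - /path/my-cert.key:/etc/x509/https/tls.key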

Hostname verification failed in OpenShift when integrating an external service using an external domain name

I want to call a REST service running outside OpenShift via a Service and an external domain name. This works perfectly with an http:// request. The mechanism is described in the documentation: https://docs.okd.io/latest/dev_guide/integrating_external_services.html#saas-define-service-using-fqdn
However, the external service is secured with HTTPS, and in that case I get the following exception:
Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=.xxx, O=xxx, L=xxx, ST=GR, C=CH); nested exception is javax.net.ssl.SSLPeerUnverifiedException: Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=.xxx, O=xxx, L=xxx, ST=GR, C=CH)
The exception is clear to me, because we use the Service name from OpenShift, and this name does not correspond to the original host name in the certificate. Currently I see three possibilities to solve this issue:
Add the name of the OpenShift Service to the certificate
Deactivate hostname verification before calling the external REST service
Configure OpenShift (I don't know whether this is possible)
Has anybody solved this or a similar issue?
We currently use OpenShift v3.9. We are running a simple Spring Boot application in a pod, accessing REST services outside OpenShift.
Any hint will be appreciated.
Thank you
Markus
Taking your three options in order:
Adding the OpenShift Service name to the certificate: ugly, and it might cost you extra $$.
Deactivating hostname verification: defeats the purpose of TLS.
Instead, on Kubernetes 1.10 and earlier you can use an ExternalName Service, which you can also use with OpenShift. You can also use a Kubernetes Ingress with TLS; this is documented for OpenShift as well.
Hope it helps!
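For illustration, an ExternalName Service is just a DNS alias; a minimal sketch, with external-test-service and customer.endpoint.com as placeholder names:

apiVersion: v1
kind: Service
metadata:
  name: external-test-service
spec:
  type: ExternalName
  # the cluster DNS answers lookups of this Service with a CNAME to this host
  externalName: customer.endpoint.com

No proxying or port mapping happens; the Service only maps the in-cluster name onto the external host at the DNS level.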

Authenticating Gcloud Kubernetes with just a Service Account kubectl token?

Following: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-2-use-the-token-option
I want to be able to connect to our GKE clusters with the right project/cluster context.
Normally one would use gcloud and log in with a browser or with a JSON key file.
Is it possible to authenticate with just a service account token that you can feed into kubectl (without using gcloud)?
I cannot get the above documentation working; it never seems to connect me, as I get:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I am never able to connect outside of a local context.
Is it even possible to connect to GKE clusters using nothing but a service account token?
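For what it's worth, the 'connection to the server localhost:8080 was refused' message usually means kubectl has no cluster configured at all: a token by itself is not enough, since kubectl also needs the API server endpoint and its CA certificate. A sketch of wiring that up by hand, where the endpoint, CA file, and token file are placeholders you would extract from your cluster and service account:

# point kubectl at the cluster's API endpoint and CA (placeholders)
kubectl config set-cluster my-gke-cluster \
  --server=https://CLUSTER_ENDPOINT \
  --certificate-authority=ca.crt \
  --embed-certs=true

# register the service account's bearer token
kubectl config set-credentials sa-user --token="$(cat token.txt)"

# tie cluster and user together in a context, then test it
kubectl config set-context sa-context --cluster=my-gke-cluster --user=sa-user
kubectl config use-context sa-context
kubectl get pods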