I am facing an issue with the below error in a Kubernetes deployment for the HTTPS certificate.
Error: Host name does not match the certificate subject provided by the peer (CN=customer.endpoint.com)
My application is reached via the pod's network IP address and port number, and the network IP is dynamic for pods. So how do we alias customer.endpoint.com to avoid the above issue?
To access your application, you first have to create a Service for it. Read more here: kubernetes-services.
Then you have to create a TLS certificate for a Kubernetes Service accessed through DNS.
Please take a look at tls-certificates. In that documentation you will find how to properly set up certificates.
The flow will look like this (a rough sketch of the commands follows the list):
1. Create a Service to expose your app, for example of type ClusterIP.
Remember that choosing this value makes the Service reachable only from within the cluster; this is the default ServiceType.
2. Download and install CFSSL - source: pkg-cfssl.
3. Create a Certificate Signing Request
4. Create a Certificate Signing Request object to send to the Kubernetes API
5. Get the Certificate Signing Request Approved
6. Download the Certificate and use it
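A hedged sketch of the whole flow, assuming a Deployment named my-app in the default namespace serving on port 8443; the names my-svc, customer.endpoint.com as the alias, and the example.com/serving signer are placeholders, and the CSR API group and signerName requirements vary with your Kubernetes version:

```bash
# Sketch only; adjust names, namespaces, and ports to your environment.

# 1. Expose the app inside the cluster (ClusterIP is the default type).
kubectl expose deployment my-app --name=my-svc --port=443 --target-port=8443

# 2. CFSSL is assumed to be installed (cfssl and cfssljson on the PATH).

# 3. Create a CSR that covers the in-cluster DNS name and your alias.
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.default.svc.cluster.local",
    "customer.endpoint.com"
  ],
  "CN": "customer.endpoint.com",
  "key": {"algo": "ecdsa", "size": 256}
}
EOF

# 4. Wrap the CSR in a CertificateSigningRequest object (the v1 API
#    requires a signerName; example.com/serving is a placeholder).
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.default
spec:
  request: $(base64 < server.csr | tr -d '\n')
  signerName: example.com/serving
  usages: ["digital signature", "key encipherment", "server auth"]
EOF

# 5. Approve the request (requires suitable RBAC permissions).
kubectl certificate approve my-svc.default

# 6. Once issued, download the certificate for use in your pod.
kubectl get csr my-svc.default -o jsonpath='{.status.certificate}' \
  | base64 --decode > server.crt
```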
Related
On Google Kubernetes Engine (GKE) you can use the cloud.google.com/app-protocols annotation on a Service to specify which protocol is used on a given port (HTTP or HTTPS); see the docs.
When you create an External HTTP(S) Ingress, it will use this protocol between the Ingress and the Service.
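A minimal sketch of such a Service (the name my-service, the port name my-https-port, and the backend port 8443 are assumptions):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Tell the GKE load balancer to speak HTTPS to this named port.
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS"}'
spec:
  type: NodePort            # GKE Ingress backends are typically NodePort
  selector:
    app: my-app
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
EOF
```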
How do I set things up so that the Service uses a certificate that is actually trusted by the Ingress?
Does it just trust any certificate signed by the Cluster Root CA? Manage tls in a cluster suggests you need to include the pod IP address in the CSR - does that mean generating the CSR and waiting for the signed certificate to be created should be part of my container startup process?
It turns out that when the "GKE Ingress for HTTP(S) Load Balancing" uses HTTPS to connect to the Service, it accepts any valid certificate (even a self-signed one) without further configuration.
Apparently it does not use TLS to protect against MITM attacks here (which I guess might be reasonable).
So I currently have a self-managed certificate, but I want to switch to a Google-managed certificate. The Google docs say to keep the old certificate active while the new one is provisioned. But when I try to create a Google-managed certificate for the same Ingress IP, I get the following error: Invalid value for field 'resource.IPAddress': 'xx.xxx.xx.xx'. Specified IP address is in-use and would result in a conflict.
How can I keep the old certificate active, like it tells me to, if it won't let me start provisioning a certificate for the same ingress?
This can happen if two load balancers are sharing the same IP address (source). Most likely you would have to detach that IP, or add another IP and then swap once the certificate has been provisioned. It's difficult to tell from the error message alone without knowing which command was issued.
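One hedged way to do the swap without creating a second load balancer on the same IP is to attach the new managed certificate to the existing target HTTPS proxy alongside the old one. The resource names below (new-managed-cert, old-cert, my-https-proxy, example.com) are placeholders, and flag availability can vary by gcloud version:

```bash
# Create the Google-managed certificate (it provisions asynchronously).
gcloud compute ssl-certificates create new-managed-cert \
    --domains=example.com --global

# Attach both certificates to the existing proxy; the old one keeps
# serving until the managed one finishes provisioning.
gcloud compute target-https-proxies update my-https-proxy \
    --ssl-certificates=old-cert,new-managed-cert --global

# Check provisioning status before removing the old certificate.
gcloud compute ssl-certificates describe new-managed-cert --global
```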
All the documentation for Service Fabric mentions that for a production cluster you should use an X.509 certificate from a trusted CA with the common name of the cluster address. The problem is that I can't find any documentation on the process of obtaining the certificate. As far as I can tell, to create a certificate you need to prove you are who you say you are, and to do so you either need to own the domain or expose some sort of file at the specified address.
The problem is that the URL of the cluster is on a domain owned by Microsoft, and my cluster is not exposed to the outside world as a website. Am I missing something? Do I have to create a web service and expose it just to create a certificate?
You can use a free solution like Let's Encrypt; for this it's not required to own the domain (specifically, to control the DNS records). They also provide the option to respond to an HTTP-based challenge as proof of control.
To kick off the process, the agent asks the Let's Encrypt CA what it needs to do in order to prove that it controls example.com. The Let's Encrypt CA will look at the domain name being requested and issue one or more sets of challenges. These are different ways that the agent can prove control of the domain. For example, the CA might give the agent a choice of either: provisioning a DNS record under example.com, or provisioning an HTTP resource under a well-known URI on https://example.com/
An easy way to get started with Let's Encrypt is by using Certbot.
It needs to run on the host the domain points to, so it can respond to the HTTP challenge, which results in a certificate being issued for your specific cluster endpoint.
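A minimal sketch with Certbot's standalone mode, assuming the cluster address (mycluster.westus.cloudapp.azure.com below is a placeholder) resolves to a machine where you can answer the challenge on port 80:

```bash
# Runs a temporary web server on port 80 to answer the HTTP-01 challenge,
# then writes the certificate and key under /etc/letsencrypt/live/.
sudo certbot certonly --standalone -d mycluster.westus.cloudapp.azure.com
```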
Maybe this sample project helps.
I want to call a REST service running outside OpenShift via a Service and an external domain name. This works perfectly with an http:// request. The mechanism is described in the documentation: https://docs.okd.io/latest/dev_guide/integrating_external_services.html#saas-define-service-using-fqdn
However, the external service is secured with https, and in this case I get the following exception:
Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=.xxx, O=xxx, L=xxx, ST=GR, C=CH); nested exception is javax.net.ssl.SSLPeerUnverifiedException: Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=.xxx, O=xxx, L=xxx, ST=GR, C=CH)
The exception is clear to me because we use the Service name from OpenShift, which does not correspond to the original host name in the certificate. So currently I see three possibilities to solve this issue:
Add the name of the OpenShift Service to the certificate
Deactivate hostname verification before calling the external REST service
Configure OpenShift (don't know this is possible)
Has anybody solved this or a similar issue?
We currently use OpenShift v3.9, running a simple Spring Boot application in a pod that accesses REST services outside OpenShift.
Any hint will be appreciated.
Thank you
Markus
Taking your three possibilities in order:
Adding the name of the OpenShift Service to the certificate: ugly, and might cost you extra $$.
Deactivating hostname verification: defeats the purpose of TLS.
Configuring OpenShift: on Kubernetes 1.10 and earlier you can use an ExternalName Service, which also works with OpenShift. You can also use a Kubernetes Ingress with TLS; this is documented for OpenShift as well. A sketch of the ExternalName variant follows.
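A sketch of the ExternalName approach (api.partner.example.com is a placeholder for the real external host):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: external-test-service
spec:
  # Cluster DNS returns a CNAME to the external host instead of a cluster IP.
  type: ExternalName
  externalName: api.partner.example.com
EOF
```

One caveat: TLS hostname verification is done against the name the client dials, so for verification to pass the client may still need to request the external FQDN directly, or the certificate must include the Service name.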
Hope it helps!
I have a Kubernetes cluster running in high-availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application fails with an "x509: certificate signed by unknown authority" error for the API server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue by using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
If this is not possible in your setup, set the clusters[].cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
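For example, to bypass verification for a single kubectl call:

```bash
# Disables server certificate validation for this invocation only;
# use it for debugging, since it removes protection against MITM attacks.
kubectl --insecure-skip-tls-verify get nodes
```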
If you are trying to reach the API from within a pod, you can use the secrets mounted via the service account. By default, the CA certificate and a signed token are mounted to /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This helps you solve the unknown certificate authority error and provides an easy way to authenticate against your API servers at the same time.
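For example, from inside a pod (10.100.0.1 being the API service IP mentioned in the question, and assuming curl is available in the image):

```bash
# The default service account's CA bundle and token are mounted here.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")

# Validate the server against the cluster CA and authenticate with the token.
curl --cacert "$SA_DIR/ca.crt" \
     -H "Authorization: Bearer $TOKEN" \
     https://10.100.0.1/api
```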