How can I overcome the x509 signed by unknown certificate authority error when using the default Kubernetes API Server virtual IP?

I have a Kubernetes cluster running in high-availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an x509 signed by unknown certificate authority message for the API Server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue by using the default service account tokens.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?

It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
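If you control how those certificates are generated, a minimal openssl sketch (file names, CA paths, and the DNS entries are assumptions; 10.100.0.1 is the service VIP from the question) could look like:

    # Config listing the service VIP and the usual in-cluster names as SANs
    cat > apiserver-san.cnf <<EOF
    [req]
    req_extensions = v3_req
    distinguished_name = dn
    [dn]
    [v3_req]
    subjectAltName = @alt_names
    [alt_names]
    DNS.1 = kubernetes
    DNS.2 = kubernetes.default
    DNS.3 = kubernetes.default.svc.cluster.local
    IP.1 = 10.100.0.1
    EOF
    # Key + CSR, then sign with the cluster CA so each master keeps the SANs
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
      -out apiserver.csr -subj "/CN=kube-apiserver" -config apiserver-san.cnf
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -out apiserver.crt -days 365 \
      -extensions v3_req -extfile apiserver-san.cnf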
If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
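For example (the cluster name is whatever your kubeconfig already uses; note this disables server certificate verification entirely, so treat it as a stop-gap):

    # Persist the setting for one cluster entry in your kubeconfig
    kubectl config set-cluster my-cluster --insecure-skip-tls-verify=true
    # Or skip verification for a single invocation
    kubectl --insecure-skip-tls-verify get pods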
If you are trying to reach the API from within a pod you could use the secrets mounted via the service account. By default, if you use the default service account, the CA certificate and a signed token are mounted at /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This solves the unknown certificate authority error and gives you an easy way to authenticate against your API servers at the same time.
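For example, from inside any pod (a minimal sketch; 10.100.0.1 is the service IP from the question, and the paths are the standard in-pod mounts):

    SA=/var/run/secrets/kubernetes.io/serviceaccount
    # Verify the server against the mounted CA and authenticate with the token
    curl --cacert $SA/ca.crt \
      -H "Authorization: Bearer $(cat $SA/token)" \
      https://10.100.0.1/api/v1/namespaces/default/pods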

Related

Purchase SSL certificate for kubernetes cluster

My service (with no ingress) is running in an Amazon EKS cluster, and I was asked to provide a CA-signed cert for a third party that consumes the API hosted in the service. I have tried provisioning my cert using the certificates.k8s.io API, but I believe it is still self-signed. Is there a CA that provides certification for services in a Kubernetes cluster?
Yes. Certificates created using the certificates.k8s.io API are signed by a dedicated CA. It is possible to configure your cluster to use the cluster root CA for this purpose, but you should never rely on this; do not assume that these certificates will validate against the cluster root CA.
Refer to the Certificate Signing Request Process documentation.
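For reference, a CertificateSigningRequest object looks roughly like this (the name and signerName are placeholders; whichever signer you pick, its CA won't necessarily be one your third party already trusts, which is the caveat above):

    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: my-svc.my-namespace
    spec:
      request: <base64-encoded PEM CSR>
      signerName: example.com/serving  # placeholder; use a signer your cluster supports
      usages:
      - digital signature
      - key encipherment
      - server auth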

Vault Kubernetes Authentication

I have my own hosted Kubernetes cluster where I store my secrets in Vault. To give my microservices access to the secrets managed by Vault, I want to authenticate the microservices via their service accounts. The problem I'm facing is that Vault rejects the service account JWTs with the following error:
apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority
The service account tokens are signed with Kubernetes' own CA; I did not replace this with Vault's PKI solution. Is it possible to configure Vault to trust my Kubernetes CA certificate and therefore the JWTs?
This kind of error can be caused by a recent change to Service Account Issuer Discovery in Kubernetes 1.21.
In order to mitigate this issue, there are a couple of options that you can choose from based on your expectations:
Manually create a service account and token secret and mount it in the pod, as mentioned in this GitHub post (a sketch follows below the list).
Disable issuer validation, as mentioned in another GitHub post.
Downgrade the cluster to version 1.20.
There are also a couple of external blog articles about this on banzaicloud.com and particule.io.
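As a sketch of the first option (all names are examples): create a dedicated service account plus a long-lived token secret, and point Vault's Kubernetes auth config at that token:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vault-auth
    ---
    # Long-lived token secret explicitly bound to the service account above
    apiVersion: v1
    kind: Secret
    metadata:
      name: vault-auth-token
      annotations:
        kubernetes.io/service-account.name: vault-auth
    type: kubernetes.io/service-account-token

The token and CA cert from this secret can then be supplied to Vault (for example as token_reviewer_jwt and kubernetes_ca_cert in the Kubernetes auth method config) so the TokenReview calls validate again.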

Self-signed certificates ok for kubernetes validating webhooks?

I'm trying to understand the security implications for using self-signed certificates for a Kubernetes validating webhook.
If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https. When the Kubernetes api-server receives a request that matches the configuration for a validating webhook, it'll first check with the validating webhook server over https. If your validating webhook server lives on the Kubernetes cluster (is not external) then this traffic is all internal to a Kubernetes cluster. If this is the case is it problematic that the cert is self-signed?
If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https.
Basically yes.
If your validating webhook server lives on the Kubernetes cluster (is not external) then this traffic is all internal to a Kubernetes cluster. If this is the case is it problematic that the cert is self-signed?
If the issuing process is handled properly and in a secure manner, self-signed certs shouldn't be a problem at all. Compare with this example.
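As an illustration, a common pattern (names are placeholders) is to mint a throwaway CA, issue the serving cert for the webhook Service's in-cluster DNS name, and hand the CA to the api-server via the webhook configuration's caBundle field; that caBundle is precisely what makes the self-signed chain trustworthy:

    # Throwaway CA
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout ca.key -out ca.crt -subj "/CN=webhook-ca"
    # Serving key + CSR for the webhook Service DNS name
    openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr \
      -subj "/CN=my-webhook.my-namespace.svc"
    # Sign it, keeping the DNS name as a SAN
    openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out tls.crt -days 365 \
      -extfile <(printf "subjectAltName=DNS:my-webhook.my-namespace.svc")
    # base64-encoded ca.crt then goes into the ValidatingWebhookConfiguration:
    #   webhooks[].clientConfig.caBundle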

Host name does not match the certificate subject in deployment

I'm facing the issue below in a Kubernetes deployment that serves an HTTPS certificate.
Error: Host name does not match the certificate subject provided by the peer (CN=customer.endpoint.com)
My application is reached via a network IP address and port, and the pod IPs are dynamic. So how do we alias customer.endpoint.com to avoid the above issue?
To access your application, you first have to create a Service for it. Read more here: kubernetes-services.
Then you have to create a TLS certificate for a Kubernetes service accessed through DNS.
Please take a look at tls-certificates. That documentation shows how to properly set up the certificates.
The flow will look like this (a condensed command sketch follows the list):
1. Create a Service to expose your app - for example of type ClusterIP. Remember that choosing this value makes the Service only reachable from within the cluster; this is the default ServiceType.
2. Download and install CFSSL - source: pkg-cfssl.
3. Create a certificate signing request.
4. Create a CertificateSigningRequest object to send to the Kubernetes API.
5. Get the CertificateSigningRequest approved.
6. Download the certificate and use it.
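Condensed, that flow might look like this (service and namespace names are examples taken from the documentation):

    # Steps 2-3: key + CSR, with the in-cluster DNS name (and any external
    # name, e.g. customer.endpoint.com) listed as hosts
    cat <<EOF | cfssl genkey - | cfssljson -bare server
    {
      "hosts": ["my-svc.my-namespace.svc.cluster.local", "customer.endpoint.com"],
      "CN": "my-svc.my-namespace.svc.cluster.local",
      "key": {"algo": "ecdsa", "size": 256}
    }
    EOF
    # Step 4: wrap server.csr in a CertificateSigningRequest object (see the
    # manifest sketched in the EKS answer above) and submit it
    kubectl apply -f csr.yaml
    # Step 5: approve it
    kubectl certificate approve my-svc.my-namespace
    # Step 6: download the issued certificate
    kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
      | base64 --decode > server.crt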

Fabric access with client certificate auth fails

We're using a Service Fabric secure cluster and need a client certificate for CI/CD tools.
I've created both the cluster primary certificate and a client certificate with this script https://gist.github.com/kagarlickij/d63a4061a1066d3a85abcc658f0856f5
so both have been uploaded to the same Key Vault and both have been installed to the local keystore on my machine.
I've added client certificate to my Fabric security settings (Authentication type = Admin client, Authorization method = Certificate thumbprint).
The problem is that I can connect to the Fabric cluster (I'm using Connect-ServiceFabricCluster in PowerShell) with the cluster primary certificate but can't with the client certificate.
I'm getting this error: Connect-ServiceFabricCluster : FABRIC_E_SERVER_AUTHENTICATION_FAILED: 0x800b0109
Please advise what can be done.
Based on this link, the corresponding error text for 0x800b0109 is:
A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
You're using a self-signed certificate as the client cert. I'm not sure that's supported, as explained in the Service Fabric security documentation; moreover, you'll have to make sure the SSL certificate has been added to your local store.
Client X.509 certificates
Client certificates typically are not issued by a third-party CA. Instead, the Personal store of the current user location typically contains client certificates placed there by a root authority, with an Intended Purposes value of Client Authentication. The client can use this certificate when mutual authentication is required.
Note: All management operations on a Service Fabric cluster require server certificates. Client certificates cannot be used for management.
I had the same issue managing my cluster through PowerShell. I only had one cert on the cluster (the one Azure generates when creating the cluster), and I believe it is a client cert since I have to select it in my browser when managing the cluster.
Ultimately I had to add the self-signed cert to my Root certificate store (in addition to my Personal store, where I already had it) to get the PowerShell module to stop complaining about it.
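For what it's worth, a PowerShell sketch of both steps (the endpoint, file name, and thumbprints are placeholders):

    # Import the self-signed cert into the current user's Root store
    Import-Certificate -FilePath .\cluster-cert.cer `
        -CertStoreLocation Cert:\CurrentUser\Root
    # Connect using the client certificate by thumbprint
    Connect-ServiceFabricCluster `
        -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" `
        -X509Credential `
        -ServerCertThumbprint "<cluster cert thumbprint>" `
        -FindType FindByThumbprint -FindValue "<client cert thumbprint>" `
        -StoreLocation CurrentUser -StoreName My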