We currently have a 2-master, 2-worker node cluster on Kubernetes v1.13.4. The cluster is down because the kubelet certificate located at /var/lib/kubelet/pki/kubelet.crt has expired and the kubelet service is not running. On checking the kubelet logs I get the following error:
E0808 09:49:35.126533 55154 bootstrap.go:209] Part of the existing bootstrap client certificate is expired: 2019-08-06 22:39:23 +0000 UTC
The ca.crt and apiserver-kubelet-client.crt certificates are still valid. We are unable to renew the kubelet.crt certificate manually using kubeadm-config.yaml. Can someone please provide the steps to renew the certificate?
We have tried setting the --rotate-certificates flag and also using kubeadm-config.yaml, but since we are on v1.13.4 the kubeadm --config flag is not available.
Since you mentioned that only kubelet.crt has expired and apiserver-kubelet-client.crt is still valid, you can try to renew it with the kubeadm alpha certs renew command, as described in the documentation.
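For what that looks like in practice, here is a minimal sketch on a v1.13 control plane (hedged: the subcommand names changed across kubeadm releases, so list what your binary supports first; also note this renews the certificates under /etc/kubernetes/pki, while the kubelet's own certificates under /var/lib/kubelet/pki are handled separately, as discussed further down):
$ kubeadm alpha certs renew --help     # lists the certificates this kubeadm version can renew
$ kubeadm alpha certs renew apiserver  # renew a single certificate by name
# then restart the affected control-plane components so they pick up the new certs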
A second way to renew kubeadm certificates is to upgrade the cluster version, as described in this article.
You can also try kubeadm init phase certs all, as explained in this Stack Overflow case.
Let me know if that helped. If not, please provide more information and logs.
I did a
kubeadm alpha certs renew
but after that, no pods get started. When starting from a Deployment, kubectl get pod doesn't even list the pod; when explicitly starting a pod, it is stuck in Pending.
What am I missing?
Normally I would follow a pattern to debug such issues:
1. Check that all the certificate files were rotated by kubeadm, e.g. sudo openssl x509 -in /etc/kubernetes/ssl/apiserver.crt -text (on stock kubeadm installs the certificates live under /etc/kubernetes/pki).
2. Make sure all the control plane services (api-server, controller, scheduler, etc.) have been restarted to use the new certificates.
3. If [1] and [2] are okay, you should be able to run kubectl get pods.
4. Now check the certificates for kubelet and make sure you are not hitting https://github.com/kubernetes/kubeadm/issues/1753
5. Make sure kubelet is restarted to use the new certificate.
I think of control-plane certificate expiry (not being able to run kubectl) and kubelet certificate expiry (node status NotReady; you should see certificate attempts from the node in the api-server logs) separately, so I can quickly tell which might be broken.
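As a concrete starting point, a quick sketch for checking the expiry dates on both sides (the paths are kubeadm defaults; adjust if your distro keeps them under /etc/kubernetes/ssl):
$ sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates
$ sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates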
The internal IP address of a single-node Kubernetes cluster has changed, and kubelet is no longer starting correctly.
Therefore I've started changing the configuration of the following files:
~/.kube/config
/etc/kubernetes/*.conf
I've added the new IP address to these files.
After this step, I got an error saying that the X509 certificate is not valid for the new IP.
In order to solve this issue, I’ve done the following steps:
Stop kubelet and delete all old cert files from /etc/kubernetes/pki and /etc/kubernetes/pki/etcd
kubeadm init phase certs apiserver --apiserver-advertise-address --apiserver-cert-extra-sans
kubeadm init phase certs apiserver-kubelet-client
kubeadm init phase certs front-proxy-ca
kubeadm init phase certs front-proxy-client
kubeadm init phase certs apiserver-etcd-client
kubeadm init phase certs etcd-ca
kubeadm init phase certs etcd-healthcheck-client
kubeadm init phase certs etcd-peer
kubeadm init phase certs etcd-server
kubeadm init phase kubeconfig all --apiserver-advertise-address
kubeadm init phase certs renew all
copied /etc/kubernetes/admin.conf to ~/.kube and renamed it to config
kubeadm init phase kubelet-start
The problem is that I still get an error saying that the connection to the new IP was refused. I believe it's due to a wrong certificate, but the apiserver.crt file seems correct when I compare it to the original certificate.
I tried the same approach on a machine running locally, and there I got kubelet to start correctly and kubectl to work again.
Can anyone point me to what I’m doing wrong?
Thank you
The solution was to update the manifest files located under /etc/kubernetes/manifests and ensure that the new host IP is set there as well.
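A quick way to find any manifests still pointing at the old address (the IP below is a placeholder; kubelet watches /etc/kubernetes/manifests and recreates the static pods when the files change):
$ sudo grep -rn "192.168.1.10" /etc/kubernetes/manifests/
# edit the matches (kube-apiserver.yaml, etcd.yaml, ...) to use the new IP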
Currently I am using a script to renew Kubernetes certificates before they expire, but this is a manual process: I have to monitor expiration dates carefully and run the script beforehand. What's the recommended way to update all control plane certificates automatically without upgrading the control plane? Do kubelet's --rotate* flags rotate the certificates of all components (e.g. the controller manager), or are they just for kubelet? PS: the Kubernetes cluster was created with kubeadm.
Answering the following question:
What's the recommended way to update all control plane certificates automatically without upgrading the control plane?
According to the k8s docs, the best practice is to use "Automatic certificate renewal" together with regular control plane upgrades:
Automatic certificate renewal
This feature is designed for addressing the simplest use cases; if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure.
Note: It is a best practice to upgrade your cluster frequently in order to stay secure.
-- Kubernetes.io: Administer cluster: Kubeadm certs: Automatic certificate renewal
Why this is the recommended way:
From a best-practices standpoint you should be upgrading your control plane anyway, to patch vulnerabilities, add features, and stay on a supported version.
Each control-plane upgrade will renew the certificates as described (defaults to true):
$ kubeadm upgrade apply --help
--certificate-renewal Perform the renewal of certificates used by component changed during upgrades. (default true)
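In other words, a routine upgrade such as the following renews the control-plane certificates as a side effect, since the flag defaults to true (the version is a placeholder, not a recommendation):
$ sudo kubeadm upgrade apply v1.21.1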
You can also check the expiration of the control-plane certificates by running:
$ kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf May 30, 2022 13:36 UTC 364d no
apiserver May 30, 2022 13:36 UTC 364d ca no
apiserver-etcd-client May 30, 2022 13:36 UTC 364d etcd-ca no
apiserver-kubelet-client May 30, 2022 13:36 UTC 364d ca no
controller-manager.conf May 30, 2022 13:36 UTC 364d no
etcd-healthcheck-client May 30, 2022 13:36 UTC 364d etcd-ca no
etcd-peer May 30, 2022 13:36 UTC 364d etcd-ca no
etcd-server May 30, 2022 13:36 UTC 364d etcd-ca no
front-proxy-client May 30, 2022 13:36 UTC 364d front-proxy-ca no
scheduler.conf May 30, 2022 13:36 UTC 364d no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca May 28, 2031 13:36 UTC 9y no
etcd-ca May 28, 2031 13:36 UTC 9y no
front-proxy-ca May 28, 2031 13:36 UTC 9y no
A side note!
kubelet.conf is not included in the list above because kubeadm configures kubelet for automatic certificate renewal.
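You can confirm that on a node by checking the kubelet configuration that kubeadm writes (path and field name are the kubeadm defaults):
$ sudo grep rotateCertificates /var/lib/kubelet/config.yaml
rotateCertificates: true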
As can be seen from the expiration output above, by default:
Client certificates generated by kubeadm expire after 1 year.
CAs created by kubeadm are set to expire after 10 years.
There are other features that allow you to rotate the certificates in a "semi-automatic" way.
You can opt for manual certificate renewal with:
$ kubeadm certs renew
where you can renew the specified certificate (or all of them) in one command:
$ kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
Please take a careful look at the output:
You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
As pointed out, you will need to restart the control-plane components so they use the new certificates, but remember:
$ kubectl delete pod -n kube-system kube-scheduler-ubuntu will not work, because the control-plane components run as static pods: the object you see in the API is only a mirror pod, and deleting it does not restart the container.
You will need to restart the docker container responsible for the component:
$ docker ps | grep -i "scheduler"
8c361562701b 38f903b54010 "kube-scheduler --au…" 11 minutes ago Up 11 minutes k8s_kube-scheduler_kube-scheduler-ubuntu_kube-system_dbb97c1c9c802fa7cf2ad7d07938bae9_5
b709e8fb5e6c k8s.gcr.io/pause:3.4.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-ubuntu_kube-system_dbb97c1c9c802fa7cf2ad7d07938bae9_0
$ docker restart 8c361562701b (example)
As pointed out in the links below, kubelet can automatically renew its certificate (kubeadm configures the cluster in a way that this option is enabled):
Kubernetes.io: Configure Certificate Rotation for the Kubelet
Github.com: Kubernetes: Kubeadm: Issues: --certificate-renewal true doesn't renew kubelet.conf
Depending on the version used in your environment, this could be disabled. In the newest kubeadm-managed versions of k8s this option is enabled by default, to my knowledge.
Please keep in mind that before you start any Kubernetes node/control-plane update/upgrade, you should read the "Urgent Upgrade Notes" specific to your k8s version (example):
Github.com: Kubernetes: CHANGELOG: 1.21: Urgent upgrade nodes
Defining a fully automatic way of certificate rotation could go either way, but you can use the commands already mentioned to automate the process: put the script (which you already have) in cron so that it fires periodically and renews the certificates.
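A minimal sketch of such a cron job, assuming a kubeadm cluster with a static-pod control plane and a root crontab; the schedule and the manifest-bounce trick used to restart the static pods are illustrative, not an official recipe:
#!/bin/bash
# /etc/cron.monthly/renew-k8s-certs (hypothetical path)
set -e
kubeadm certs renew all
# restart the static control-plane pods by briefly moving their manifests away
mkdir -p /etc/kubernetes/manifests.off
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.off/
sleep 30   # give kubelet time to tear the static pods down
mv /etc/kubernetes/manifests.off/*.yaml /etc/kubernetes/manifests/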
As of Kubernetes 1.8, kubelet certificate rotation is available. You can read about it here: https://kubernetes.io/docs/tasks/tls/certificate-rotation/
I've recently upgraded with kubeadm, which I expect to rotate all certificates, and for good measure, I also ran kubeadm init phase certs all, but I'm not sure what steps are required to verify that the certs are all properly in place and not about to expire.
I've seen an SO answer reference that kubeadm init phase kubeconfig all is required in addition, but I cannot find anything in the kubeadm documentation telling me that it needs to be used in conjunction with phase certs.
What do I need to do to make sure that the cluster will not encounter expired certificates?
I've tried verifying by connecting to the secure local port (10250, the kubelet's serving port): echo -n | openssl s_client -connect localhost:10250 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not, which gives me expirations next month.
While openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text and openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -text yield dates over a year in advance.
These conflicting dates certainly have me concerned that I will find myself like many others with expired certificates. How do I get in front of that?
Thank you for any guidance.
In essence, kubeadm init phase certs all regenerates all your certificates, including ca.crt (the Certificate Authority). Since the Kubernetes components (kubelet, kube-scheduler, kube-controller-manager) use certificate-based authentication to connect to the kube-apiserver, you will also have to regenerate pretty much all of their kubeconfigs by running kubeadm init phase kubeconfig all.
Keep in mind that you will have to regenerate kubelet.conf on all your nodes, since it also needs to be updated to connect to the kube-apiserver with the new ca.crt. Also, make sure you add all the hostnames/IP addresses that your kube-apiserver is going to serve on to the kubeadm init phase certs all command (--apiserver-cert-extra-sans).
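For example (the hostname and IP below are placeholders):
$ kubeadm init phase certs all --apiserver-cert-extra-sans=mycluster.example.com,10.0.0.5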
Most likely you are not seeing the updated certs when connecting through openssl because you haven't restarted the Kubernetes components, in particular the kube-apiserver. You will have to restart your kube-apiserver, kube-scheduler, kube-controller-manager, etc. (or all of their replicas if you are running a multi-master control plane). You will also have to restart the kubelets on all your nodes.
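One way to confirm the running kube-apiserver has actually picked up the new certificate is to compare what it serves on the wire with what is on disk (port and path are kubeadm defaults):
$ echo | openssl s_client -connect localhost:6443 2>/dev/null | openssl x509 -noout -dates
$ sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates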
A month later, I've learned a little more and wanted to update this question for those who follow behind me.
I filed an issue on Kubernetes requesting more information on how the kubeadm upgrade process automatically updates certificates. The documentation on Kubernetes says:
Note: kubelet.conf is not included in the list above because kubeadm configures kubelet for automatic certificate renewal.
After upgrading, I did not see an automatic cert renewal for the kubelet. I was then informed that:
the decision on when to rotate the certificate is non-deterministic and it may happen 70 - 90% of the total lifespan of the certificate to prevent overlap on node cert rotations.
They also provided the following process, which resolved my last outstanding certificate rotation:
sudo mv /var/lib/kubelet/pki /var/lib/kubelet/pki-backup
sudo systemctl restart kubelet
# the pki folder should be re-created.
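After the restart you can verify that kubelet re-bootstrapped its certificates (the file names are the kubeadm defaults):
$ ls /var/lib/kubelet/pki
# expect a fresh kubelet-client-current.pem plus kubelet.crt/kubelet.key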
I'm a bit confused by this, because it was working for days without issue.
I used to be able to join nodes to my cluster without issue. I would run the below on the master node:
kubeadm init .....
After that, it would generate a join command and token to issue to the other nodes I wanted to join. Something like this:
kubeadm join --token 99385f.7b6e7e515416a041 192.168.122.100
I would run this on the nodes, and they would join without issue. The next morning, all of a sudden this stopped working. This is what I see when I run the command now:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.122.100:9898/cluster-info/v1/?token-id=99385f"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.122.100:6443]
[bootstrap] Trying to connect to endpoint https://192.168.122.100:6443
[bootstrap] Detected server version: v1.6.0-rc.1
[bootstrap] Successfully established connection with endpoint "https://192.168.122.100:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
failed to request signed certificate from the API server [cannot create certificate signing request: the server could not find the requested resource]
It seems like the node I'm trying to join does successfully connect to the API server on the master node, but for some reason, it now fails to request a certificate.
Any thoughts?
For me,
sudo service kubelet restart
didn't work.
What I did was the following:
Copied the contents of /etc/kubernetes/* from the master node into the slave nodes at the same location, /etc/kubernetes.
Then I tried the "kubeadm join ..." command again. This time the nodes joined the cluster without any complaint.
I think this is a temporary hack, but worked!
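On newer kubeadm releases, a cleaner first step when a join stops working overnight is to rule out token expiry (the default token TTL is 24 hours) and mint a fresh join command on the master:
$ kubeadm token list                          # check whether the old token is still valid
$ kubeadm token create --print-join-command   # prints a ready-to-run kubeadm join line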
OK, I just stopped and started kubelet on the master node as shown below, and things started working again:
sudo service kubelet stop
sudo service kubelet start
EDIT:
This only seemed to work one time for me.