Kubernetes (Azure VM) change node IP

The internal IP address of a Kubernetes single node has changed and now kubelet isn’t starting correctly anymore.
Therefore I've started changing the configuration of the following files:
~/.kube/config
/etc/kubernetes/*.conf
I've added the new IP address to these files.
After this step, I got an error saying that the X509 certificate is not valid for the new IP.
In order to solve this issue, I’ve done the following steps:
Stop kubelet and delete all old cert files from /etc/kubernetes/pki and /etc/kubernetes/pki/etcd
kubeadm init phase certs apiserver --apiserver-advertise-address --apiserver-cert-extra-sans
kubeadm init phase certs apiserver-kubelet-client
kubeadm init phase certs front-proxy-ca
kubeadm init phase certs front-proxy-client
kubeadm init phase certs apiserver-etcd-client
kubeadm init phase certs etcd-ca
kubeadm init phase certs etcd-healthcheck-client
kubeadm init phase certs etcd-peer
kubeadm init phase certs etcd-server
kubeadm init phase kubeconfig all --apiserver-advertise-address
kubeadm init phase certs renew all
copied /etc/kubernetes/admin.conf to ~/.kube and renamed it to config
kubeadm init phase kubelet-start
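For reference, one way to confirm that the regenerated serving certificate really contains the new address (the path shown is the kubeadm default):
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
# the new node IP should appear among the "IP Address" entries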
The problem is that I still get an error saying that the connection to the new IP was refused. I believe it's due to a wrong certificate, but the apiserver.crt file seems correct when I compare it to the original certificate.
I tried the same approach on a machine running locally, and there I got kubelet to start correctly and kubectl to work again.
Can anyone point me to what I’m doing wrong?
Thank you

The solution was to update the manifest files located under /etc/kubernetes/manifests and ensure that the new host IP is set there as well.
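A minimal sketch of that manifest update, with placeholder old/new addresses:
# find static pod manifests that still reference the old address
grep -R "10.0.0.4" /etc/kubernetes/manifests/
# replace it with the new address (adjust both IPs to your environment)
sudo sed -i 's/10.0.0.4/10.0.0.5/g' /etc/kubernetes/manifests/*.yaml
# kubelet watches this directory and recreates the static pods; a restart forces an immediate pick-up
sudo systemctl restart kubelet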

Related

No pods started after "kubeadm alpha certs renew"

I did a
kubeadm alpha certs renew
but after that, no pods get started. When starting from a Deployment, kubectl get pod doesn't even list the pod, when explicitly starting a pod, it is stuck on Pending.
What am I missing?
Normally I would follow a pattern to debug such issues starting with:
1. Check that all the certificate files were rotated by kubeadm, e.g. sudo openssl x509 -in /etc/kubernetes/ssl/apiserver.crt -text -noout.
2. Make sure all the control plane services (api-server, controller-manager, scheduler etc.) have been restarted to use the new certificates.
3. If [1] and [2] are okay you should be able to do kubectl get pods.
4. Now check the certificates for kubelet and make sure you are not hitting https://github.com/kubernetes/kubeadm/issues/1753.
5. Make sure kubelet is restarted to use the new certificate.
I think about certificate expiry for the control plane (symptom: not being able to use kubectl) and for the kubelet (symptoms: node status not ready, certificate attempts from the node visible in the api-server logs) separately, so I can quickly tell which one might be broken.
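A rough, hedged sketch of that split check (the paths below are the kubeadm defaults; the steps above use /etc/kubernetes/ssl, which some installers prefer):
# control-plane certificates
for c in /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt; do
  echo "== $c"; sudo openssl x509 -in "$c" -noout -enddate
done
# kubelet client certificate (the rotated file maintained by kubelet itself, if rotation is enabled)
sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate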

How to backup and restore a kubernetes master node?

There is a single-master k8s node that I need to back up and restore on a different server with different IP addresses. I googled this topic and found a solution -
https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/
Everything looked easy, so I followed the instructions and got a copy of the certificates and a snapshot of the etcd database. Then I used the second script to restore the node on a different server. It did not go well this time: I got a bunch of errors about mismatches between the certificates and the server's local IP addresses.
As far as I understand, when a Kubernetes cluster is initialized, it creates a set of certificates tied to the original server's IP addresses, so I cannot just back it up and restore it somewhere else.
So, how to backup a k8s master node and restore it?
Make sure that you add an extra flag to the kubeadm init command (--ignore-preflight-errors=DirAvailable--var-lib-etcd) to acknowledge that you want to use the pre-existing etcd data.
Do the following steps:
replace the IP address in all config files in /etc/kubernetes
back up /etc/kubernetes/pki
identify certs in /etc/kubernetes/pki that have the old IP address as an alt name - 1st step
delete both the cert and key for each of them (for me it was just apiserver and etcd/peer)
regenerate the certs using kubeadm alpha phase certs - 2nd step
identify configmaps in the kube-system namespace that reference the old IP - 3rd step
manually edit those configmaps
restart kubelet and docker (to force all containers to be recreated) - see the restart commands after the snippets below
1.
/etc/kubernetes/pki# for f in $(find -name "*.crt"); do openssl x509 -in $f -text -noout > $f.txt; done
/etc/kubernetes/pki# grep -Rl '12\.34\.56\.78' .
./apiserver.crt.txt
./etcd/peer.crt.txt
/etc/kubernetes/pki# for f in $(find -name "*.crt"); do rm $f.txt; done
2.
/etc/kubernetes/pki# rm apiserver.crt apiserver.key
/etc/kubernetes/pki# kubeadm alpha phase certs apiserver
...
/etc/kubernetes/pki# rm etcd/peer.crt etcd/peer.key
/etc/kubernetes/pki# kubeadm alpha phase certs etcd-peer
...
3.
$ kubectl -n kube-system get cm -o yaml | less
...
$ kubectl -n kube-system edit cm ...
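The final restart step from the list above could look like this (docker is shown because the answer assumes a docker-based runtime):
systemctl restart docker    # forces all containers to be recreated
systemctl restart kubelet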
Take a look here: master-backup.
UPDATE:
While replacing the master node and changing its IP, you cannot contact the api-server to change the configmaps in step 4. Moreover, if you have a single-master k8s cluster, the connection from the worker nodes will be interrupted until the new master is up.
To keep the connection between master and worker nodes during master replacement, you would have to create an HA cluster.
The certificate is signed for {your-old-IP-here}, so secure communication can't then happen with {your-new-ip-here}.
You can add more IPs to the certificate beforehand though...
The api-server certificate is signed for the hostname kubernetes, so you can add that as an alias for the new IP address in /etc/hosts and then do kubectl --server=https://kubernetes:6443 ....
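A hedged sketch of that /etc/hosts trick (the IP is a placeholder for your new api-server address):
echo "10.0.0.5 kubernetes" | sudo tee -a /etc/hosts
kubectl --server=https://kubernetes:6443 get nodes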

Renewing Kubernetes cluster certificates

We currently have a 2-master, 2-worker node cluster on Kubernetes v1.13.4. The cluster is down because the kubelet certificate located in /var/lib/kubelet/pki/kubelet.crt has expired and the kubelet service is not running. On checking the kubelet logs I get the following error:
E0808 09:49:35.126533 55154 bootstrap.go:209] Part of the existing bootstrap client certificate is expired: 2019-08-06 22:39:23 +0000 UTC
The following certificates are valid: ca.crt, apiserver-kubelet-client.crt. We are unable to renew the kubelet.crt certificate manually using kubeadm-config.yaml. Can someone please provide the steps to renew the certificate?
We have tried setting the --rotate-certificates property and also using kubeadm-config.yaml, but since we are on v1.13.4 the kubeadm --config flag is not present.
On checking the kubelet logs I get the following error
E0808 09:49:35.126533 55154 bootstrap.go:209] Part of the existing bootstrap client certificate is expired: 2019-08-06 22:39:23 +0000 UTC
Since you mentioned that only kubelet.crt has expired and apiserver-kubelet-client.crt is valid, you can try to renew it with the command kubeadm alpha certs renew, as described in the documentation.
A second way to renew kubeadm certificates is to upgrade the cluster version, as in this article.
You can also try using kubeadm init phase certs all. It was explained in this Stack Overflow case.
Let me know if that helped. If not, provide more information and logs.
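For the quoted kubelet bootstrap error specifically, a pattern that has worked for others (it also appears in a later answer below) is to move the kubelet's certificate directory aside so it re-bootstraps; treat this as a hedged sketch and keep the backup:
sudo systemctl stop kubelet
sudo mv /var/lib/kubelet/pki /var/lib/kubelet/pki-backup
sudo systemctl start kubelet
# kubelet should re-create /var/lib/kubelet/pki, provided its bootstrap credentials are still accepted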

Is phase kubeconfig required after phase certs in kubeadm?

I've recently upgraded with kubeadm, which I expect to rotate all certificates, and for good measure, I also ran kubeadm init phase certs all, but I'm not sure what steps are required to verify that the certs are all properly in place and not about to expire.
I've seen an SO answer stating that kubeadm init phase kubeconfig all is required in addition, but I cannot find anything in the Kubernetes kubeadm documentation telling me that it needs to be used in conjunction with phase certs.
What do I need to do to make sure that the cluster will not encounter expired certificates?
I've tried verifying by connecting to the secure local port: echo -n | openssl s_client -connect localhost:10250 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not, which gives me expirations next month.
While openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text and openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -text yield dates over a year in advance.
These conflicting dates certainly have me concerned that I will find myself like many others with expired certificates. How do I get in front of that?
Thank you for any guidance.
In essence, kubeadm init phase certs all regenerates all your certificates, including your ca.crt (Certificate Authority). Kubernetes components (kubelet, kube-scheduler, kube-controller-manager) use certificate-based authentication to connect to the kube-apiserver, so you will also have to regenerate pretty much all of those configs by running kubeadm init phase kubeconfig all.
Keep in mind that you will have to regenerate kubelet.conf on all your nodes, since it also needs the new ca.crt to connect to the kube-apiserver. Also, make sure you add all the hostnames/IP addresses that your kube-apiserver is going to serve on to the kubeadm init phase certs all command (--apiserver-cert-extra-sans).
Most likely the reason you are not seeing the updated certs when connecting through openssl is that you haven't restarted the Kubernetes components, in particular the kube-apiserver. So you will have to restart your kube-apiserver, kube-scheduler, kube-controller-manager, etc. (on every control-plane node if you are running a multi-master setup). You will also have to restart the kubelets on all your nodes.
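One common way (hedged sketch) to force the kubeadm static pods to pick up the new certificates is to move their manifests out of the watched directory and back, then re-check what the api-server serves:
sudo mkdir -p /tmp/manifests && sudo mv /etc/kubernetes/manifests/*.yaml /tmp/manifests/
sleep 20                                   # give kubelet time to tear the pods down
sudo mv /tmp/manifests/*.yaml /etc/kubernetes/manifests/
# once the api-server is back, confirm it serves the regenerated certificate
echo -n | openssl s_client -connect localhost:6443 2>/dev/null | openssl x509 -noout -dates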
A month later, I've learned a little more and wanted to update this question for those who follow behind me.
I filed an issue on Kubernetes requesting more information on how the kubeadm upgrade process automatically updates certificates. The documentation on Kubernetes says:
Note: kubelet.conf is not included in the list above because kubeadm configures kubelet for automatic certificate renewal.
After upgrading, I did not see an automatic cert renewal for the kubelet. I was then informed that:
the decision on when to rotate the certificate is non-deterministic and it may happen 70 - 90% of the total lifespan of the certificate to prevent overlap on node cert rotations.
They also provided the following process, which resolved my last outstanding certificate rotation:
sudo mv /var/lib/kubelet/pki /var/lib/kubelet/pki-backup
sudo systemctl restart kubelet
# the pki folder should be re-created.
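After the restart, the regenerated kubelet certificates can be inspected like this (file names are the kubelet defaults):
sudo ls /var/lib/kubelet/pki
sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates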

Using kubeadm why would you want to manually generate certs?

I'm trying to follow this tutorial.
What would be the advantage of generating the certs yourself instead of depending on kubeadm?
If you create the certs yourself, does the auto-rotation happen after setting up the cluster with kubeadm?
Thanks!
No major advantage. kubeadm does the same: it generates self-signed certs. The only minor advantage is that you can add some custom values to the CSR, such as a City, Organization, etc.
Not really.
There's a kubelet certificate rotation flag --rotate-certificates that needs to be enabled.
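On kubeadm-provisioned nodes this usually surfaces as the rotateCertificates field in the kubelet config rather than a raw command-line flag; a quick, hedged way to check a node (default config path assumed):
grep -i rotate /var/lib/kubelet/config.yaml
# expected when rotation is enabled:
# rotateCertificates: true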
There's also certificate rotation on the masters, and kubeadm can help with that using these commands:
mkdir /etc/kubernetes/pkibak
mv /etc/kubernetes/pki/* /etc/kubernetes/pkibak
rm /etc/kubernetes/pki/*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=x.x.x.x,x.x.x.x
systemctl restart docker
If you'd like to regenerate the admin.conf file, you can also use kubeadm:
$ kubeadm init phase kubeconfig admin \
--cert-dir /etc/kubernetes/pki \
--kubeconfig-dir /tmp/.
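The regenerated file then has to replace whatever kubeconfig kubectl is using, for example:
$ cp /tmp/admin.conf ~/.kube/config
$ kubectl get nodes    # sanity check with the new credentials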
I am creating all the certs by myself; the reason behind that is:
The Kubernetes cluster we use might not be updated every year, so we need certificates with a longer expiry. Our applications don't support random docker restarts, and we did not want to use the kubeadm phase command to regenerate the certificates and restart docker. Hence we created all the certificates with a 5-year expiry and provided them to kubeadm, and it is working fine. Now we don't have to worry about our certificates expiring every year.
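A hedged sketch of one part of that approach: kubeadm will reuse an existing ca.crt/ca.key from its certificate directory instead of generating new ones, so a longer-lived CA can be created up front (the leaf certificates would likewise need to be pre-created, otherwise they keep kubeadm's default one-year lifetime):
# hypothetical 5-year CA placed where kubeadm expects it, before running kubeadm init
sudo mkdir -p /etc/kubernetes/pki
sudo openssl genrsa -out /etc/kubernetes/pki/ca.key 2048
sudo openssl req -x509 -new -nodes -key /etc/kubernetes/pki/ca.key \
  -subj "/CN=kubernetes" -days 1825 -out /etc/kubernetes/pki/ca.crt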
No, kubeadm doesn't provide automatic rotation of certificates; this is the reason we needed longer-lived certificates in the first place.
Hope this helps.