Using kubeadm why would you want to manually generate certs? - kubernetes

I'm trying to follow this tutorial.
What would be the advantage of generating the certs yourself instead of depending on kubeadm?
If you create the certs yourself, does auto-rotation still happen after setting up the cluster with kubeadm?
Thanks!

No major advantage. kubeadm does the same thing: it generates self-signed certs. The only minor advantage is that you could add some custom values to the CSR, such as a City, Organization, etc.
Not really.
There's a kubelet certificate rotation flag --rotate-certificates that needs to be enabled.
There's also certificate rotation on the masters, and kubeadm can help with that using these commands:
mkdir /etc/kubernetes/pkibak
mv /etc/kubernetes/pki/* /etc/kubernetes/pkibak
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=x.x.x.x,x.x.x.x
systemctl restart docker
If you'd like to regenerate the admin.conf file, you can also use kubeadm:
$ kubeadm init phase kubeconfig admin \
--cert-dir /etc/kubernetes/pki \
--kubeconfig-dir /tmp/.
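To illustrate the point above about adding custom values to the CSR: a key and CSR with custom subject fields (City, Organization) could be generated with openssl roughly like this. The file paths and subject values here are purely illustrative, not the names kubeadm itself uses:

```shell
# Illustrative paths and subject values; adjust to your environment
openssl genrsa -out /tmp/custom.key 2048
openssl req -new -key /tmp/custom.key -out /tmp/custom.csr \
  -subj "/C=US/L=Example City/O=Example Org/CN=kube-apiserver"
# Inspect the subject that ended up in the CSR
openssl req -in /tmp/custom.csr -noout -subject
```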

I am creating all the certs myself. The reason behind that is:
The Kubernetes clusters we use might not be updated every year, so we need certificates with a longer expiry. Our applications don't tolerate random Docker restarts, and we were not willing to use the kubeadm phase command to regenerate the certificates and restart Docker. Hence we created all the certificates with a 5-year expiry, provided them to kubeadm, and it is working fine. Now we don't have to worry about our certificate expiry every year.
No, kubeadm doesn't provide automatic rotation of certificates; that is exactly why we needed longer-lived certificates in the first place.
Hope this helps.
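As a sketch of this long-expiry approach, a self-signed CA valid for roughly 5 years could be created with openssl along these lines (the paths and CN are illustrative, not the exact file names kubeadm expects in its --cert-dir):

```shell
# Self-signed CA valid for ~5 years (1825 days); names are illustrative
openssl genrsa -out /tmp/ca.key 2048
openssl req -x509 -new -nodes -key /tmp/ca.key -days 1825 \
  -subj "/CN=kubernetes-ca" -out /tmp/ca.crt
# Confirm the expiry date
openssl x509 -in /tmp/ca.crt -noout -enddate
```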

Related

No pods started after "kubeadm alpha certs renew"

I did a
kubeadm alpha certs renew
but after that, no pods get started. When starting from a Deployment, kubectl get pod doesn't even list the pod; when explicitly starting a pod, it is stuck in Pending.
What am I missing?
Normally I would follow a pattern to debug such issues:
1. Check that all the certificate files were rotated by kubeadm, e.g. openssl x509 -in /etc/kubernetes/ssl/apiserver.crt -text.
2. Make sure all the control plane services (api-server, controller-manager, scheduler, etc.) have been restarted so they use the new certificates.
3. If [1] and [2] are okay, you should be able to run kubectl get pods.
4. Now check the certificates for the kubelet and make sure you are not hitting https://github.com/kubernetes/kubeadm/issues/1753.
5. Make sure the kubelet is restarted to use the new certificate.
I think of control plane certificate expiry (not being able to use kubectl) and kubelet certificate expiry (node status NotReady; you should see certificate attempts from the node in the api-server logs) separately, so I can quickly tell which might be broken.
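The first debugging step, checking expiry across all the certificate files, can be scripted. A small sketch, assuming the certificates live under a single pki directory (it may be /etc/kubernetes/ssl or /etc/kubernetes/pki depending on the setup):

```shell
# Print the notAfter date of every certificate in the pki directory
PKI_DIR=${PKI_DIR:-/etc/kubernetes/pki}
for crt in "$PKI_DIR"/*.crt; do
  [ -e "$crt" ] || continue   # skip if the glob matched nothing
  printf '%s: ' "$crt"
  openssl x509 -in "$crt" -noout -enddate
done
```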

How to set custom dir to generate certs in minikube

Using kubeadm we can use --cert-dir to use the custom dir to save and store the certificates.
--cert-dir The path where to save and store the certificates. (default "/etc/kubernetes/pki")
How can we set the custom dir in minikube?
Because kubeadm is the default bootstrapper underlying minikube, it is possible to pass special kubeadm command line parameters to minikube via the --extra-config flag.
The target configuration, with the desired effect of changing the certificate directory via the cert-dir option, might look like:
$ sudo minikube start --vm-driver=none --extra-config=kubeadm.cert-dir="/$CERTS_PATH"
However, when I ran the above command, I received the following error:
😄 minikube v1.2.0 on linux (amd64)
💡 Sorry, the kubeadm.cert-dir parameter is currently not supported
by --extra-config
From minikube help guide:
Valid kubeadm parameters: ignore-preflight-errors, dry-run,
kubeconfig, kubeconfig-dir, node-name, cri-socket,
experimental-upload-certs, certificate-key, rootfs, pod-network-cidr
This breaks my plan for a ready-made solution, as I haven't found any other method to achieve it.
I will dig further and share my progress, though...

hostname invalid when executing "kubeadm alpha certs xxx "

When I try to check my k8s certs expiration states, I run the following command:
kubeadm alpha certs check-expiration
which ended up with:
name: Invalid value: "alpha_53_116": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I was wondering why it needs to check the node's hostname? As the hostname of my master node couldn't be altered, is there any way to solve this problem?
complement:
OS: Centos 7.4
kubeadm version: 1.15.0
As you pointed out in your question, the problem is with your node name.
According to the documentation for kubeadm alpha certs:
The command shows expiration/residual time for the client certificates
in the /etc/kubernetes/pki folder and for the client certificate
embedded in the KUBECONFIG files used by kubeadm (admin.conf,
controller-manager.conf and scheduler.conf).
The mentioned files can be found in /etc/kubernetes. You can also check the kubeadm init configuration using kubeadm config print init-defaults.
Those files contain your hostname, which is invalid in kubeadm/kubernetes.
In short, since kubeadm alpha certs is based on the KUBECONFIG files and the pki folder, the name will not pass validation because of the "_" character.
Unfortunately it is a naming-rule issue, so there is no workaround short of renaming the node.
Please keep in mind that kubeadm alpha contains experimental sub-commands, so this behaviour might change in the future.
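For reference, a candidate node name can be checked locally against the DNS-1123 subdomain rule, using the same regex shown in the error message (bash sketch):

```shell
# Returns success if the name is a valid DNS-1123 subdomain
is_dns1123() {
  [[ "$1" =~ ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ ]]
}
is_dns1123 "example.com" && echo "example.com: valid"
is_dns1123 "alpha_53_116" || echo "alpha_53_116: invalid (underscore not allowed)"
```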

Is phase kubeconfig required after phase certs in kubeadm?

I've recently upgraded with kubeadm, which I expect to rotate all certificates, and for good measure, I also ran kubeadm init phase certs all, but I'm not sure what steps are required to verify that the certs are all properly in place and not about to expire.
I've seen an SO answer referencing that kubeadm init phase kubeconfig all is required in addition, but I cannot find anything in the Kubernetes kubeadm documentation telling me that it needs to be used in conjunction with phase certs.
What do I need to do to make sure that the cluster will not encounter expired certificates?
I've tried verifying by connecting to the secure local port: echo -n | openssl s_client -connect localhost:10250 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not, which gives me expirations next month.
While openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text and openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -text yield dates over a year in advance.
These conflicting dates certainly have me concerned that I will find myself like many others with expired certificates. How do I get in front of that?
Thank you for any guidance.
In essence, kubeadm init phase certs all regenerates all your certificates, including ca.crt (the Certificate Authority). The Kubernetes components (kubelet, kube-scheduler, kube-controller-manager) use certificate-based authentication to connect to the kube-apiserver, so you will also have to regenerate pretty much all of their kubeconfigs by running kubeadm init phase kubeconfig all.
Keep in mind that you will have to regenerate kubelet.conf on all your nodes, since it also needs to be updated to connect to the kube-apiserver with the new ca.crt. Also, make sure you pass all the hostnames/IP addresses that your kube-apiserver will serve on to the kubeadm init phase certs all command (--apiserver-cert-extra-sans).
Most likely you are not seeing the updated certs when connecting through openssl because you haven't restarted the Kubernetes components, in particular the kube-apiserver. You will have to restart your kube-apiserver, kube-scheduler, kube-controller-manager, etc. (or all of their replicas if you are running a multi-master control plane). You will also have to restart the kubelets on all your nodes.
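One common way to restart the control plane components, assuming kubeadm's default static-pod setup, is to move the manifests away and back so the kubelet tears the pods down and recreates them. This is a sketch, not something to run blindly on a production master:

```shell
# Assumes the control plane runs as static pods from this directory
MANIFEST_DIR=${MANIFEST_DIR:-/etc/kubernetes/manifests}
if [ -d "$MANIFEST_DIR" ]; then
  mv "$MANIFEST_DIR" "${MANIFEST_DIR}-stopped"
  sleep 20   # give the kubelet time to stop the pods
  mv "${MANIFEST_DIR}-stopped" "$MANIFEST_DIR"
fi
# The kubelet itself still needs a separate restart:
# systemctl restart kubelet
```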
A month later, I've learned a little more and wanted to update this question for those who follow behind me.
I filed an issue on Kubernetes requesting more information on how the kubeadm upgrade process automatically updates certificates. The documentation on Kubernetes says:
Note: kubelet.conf is not included in the list above because kubeadm configures kubelet for automatic certificate renewal.
After upgrading, I did not see an automatic cert renewal for the kubelet. I was then informed that:
the decision on when to rotate the certificate is non-deterministic and it may happen 70 - 90% of the total lifespan of the certificate to prevent overlap on node cert rotations.
They also provided the following process, which resolved my last outstanding certificate rotation:
sudo mv /var/lib/kubelet/pki /var/lib/kubelet/pki-backup
sudo systemctl restart kubelet
# the pki folder should be re-created.

K8s - add kubeadm to existing cluster

I have a cluster that was created a long time ago without kubeadm (maybe it was kubespray, but the configuration for that is also lost).
Is there any way to add nodes to that cluster, attach kubeadm to the current configuration, or extend the cluster without kubespray erasing it?
If kubeadm was used to generate the original cluster, then you can log into the master and run kubeadm token generate. This will generate an API token for you. With this API token your worker nodes will be able to perform an authenticated CSR against your master to request joining. You can follow this guide from there to add a new node with the command kubeadm join.
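On reasonably recent kubeadm versions there is also a convenient shortcut (assuming kubeadm is installed on the master): kubeadm token create --print-join-command creates a fresh token and prints the complete join command in one step. The sketch below guards the call so it degrades gracefully where kubeadm is absent:

```shell
# Run on the control-plane node; guarded so the sketch fails gracefully
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm token create --print-join-command
  # prints something like:
  # kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
else
  echo "kubeadm not found on this machine"
fi
```

Run the printed command on the new worker node to join it to the cluster.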