Joining cluster takes forever - kubernetes

I have set up my master node and I am trying to join a worker node as follows:
kubeadm join 192.168.30.1:6443 --token 3czfua.os565d6l3ggpagw7 --discovery-token-ca-cert-hash sha256:3a94ce61080c71d319dbfe3ce69b555027bfe20f4dbe21a9779fd902421b1a63
However the command hangs forever in the following state:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Since this is just a warning, why does it actually fail?
edit: I noticed the following in my /var/log/syslog
Mar 29 15:03:15 ubuntu-xenial kubelet[9626]: F0329 15:03:15.353432 9626 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Unit entered failed state.

First, if you want to see more detail when your worker joins the master, use:
kubeadm join 192.168.1.100:6443 --token m3jfbb.wq5m3pt0qo5g3bt9 --discovery-token-ca-cert-hash sha256:d075e5cc111ffd1b97510df9c517c122f1c7edf86b62909446042cc348ef1e0b --v=2
Using the above command I could see that my worker could not establish a connection with the master, so I just stopped the firewall:
systemctl stop firewalld
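If you would rather keep firewalld running, opening just the ports kubeadm needs should also work; a rough sketch (6443 for the API server and 10250 for the kubelet are the defaults, adjust if yours differ):
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload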

This can be solved by creating a new token with the following command:
kubeadm token create --print-join-command
and using the join command it prints to join the other nodes to the cluster.
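If you are not sure whether the original token simply expired (tokens expire after 24 hours by default), you can check before creating a new one; a quick sketch:
kubeadm token list                           # shows existing tokens and their expiry
kubeadm token create --print-join-command    # prints a fresh, complete join command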

The problem had to do with kubeadm not installing a CNI-compatible networking solution out of the box; without it, the Kubernetes nodes/master are unable to establish any form of communication. The following Ansible task addressed the issue:
- name: kubernetes.yml --> Install Flannel
  shell: kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
  become: yes
  environment:
    KUBECONFIG: "/etc/kubernetes/admin.conf"
  when: inventory_hostname in (groups['masters'] | last)
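Once the manifest has been applied, a quick way to confirm the CNI came up and the nodes can now talk to each other (a sketch; with this manifest version the Flannel pods run in kube-system):
kubectl -n kube-system get pods -o wide   # the kube-flannel-ds-* pods should be Running
kubectl get nodes                         # nodes should move from NotReady to Ready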

I got the same warning on CentOS 7, but in my case the join command completed without problems, so it was indeed just a warning.
> [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
> [preflight] Reading configuration from the cluster...
> [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
> [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
As the official documentation mentions, there are a couple of common issues that make init hang (I guess the same applies to the join command); the one relevant here is that the default cgroup driver configuration for the kubelet differs from the one used by Docker. Check the system log file (e.g. /var/log/messages) or examine the output of journalctl -u kubelet for a cgroup driver mismatch error.
First try the steps from the official documentation, and if that does not work please provide more information so we can troubleshoot further.
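If the mismatch does turn out to be the problem, one common fix is to switch Docker to the systemd cgroup driver rather than changing the kubelet; a sketch, assuming /etc/docker/daemon.json does not already contain other settings you need to preserve:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
docker info | grep -i cgroup   # should now report the systemd driver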

I had a bunch of k8s deployment scripts that broke recently with this same error message... it looks like Docker changed its install. Try this:
previous install:
apt-get install docker-ce
updated install:
apt-get install docker-ce docker-ce-cli containerd.io
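After installing the new package set, a quick sanity check before retrying the join (a sketch):
docker --version
systemctl is-active docker containerd   # both should print "active"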

How is /var/lib/kubelet/config.yaml created?
Regarding the /var/lib/kubelet/config.yaml: no such file or directory error.
Below are the steps that should occur on the worker node in order for that file to be created.
1) The creation of the /var/lib/kubelet/ folder. It is created when the kubelet service is installed, as mentioned here:
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
2) The creation of config.yaml. The kubeadm join flow has to complete: when you run kubeadm join, kubeadm uses the Bootstrap Token credential to perform a TLS bootstrap, which fetches the credential needed to download the kubelet-config-1.X ConfigMap and writes it to /var/lib/kubelet/config.yaml.
After a successful execution you should see the logs below:
.
.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
.
.
So, after these 2 steps you should have /var/lib/kubelet/config.yaml in place.
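A quick way to verify that both steps actually completed on the worker node (a sketch; the paths are the ones from the logs above):
ls -ld /var/lib/kubelet                                                 # folder from step 1
ls -l /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env   # files from step 2
systemctl status kubelet --no-pager                                     # should be active once the join succeeded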
Failure of the kubeadm join flow
In your case, it seems that the kubeadm join flow failed, which might happen for multiple reasons such as a bad iptables configuration, ports that are already in use, a container runtime that is not installed properly, etc., as described here and here.
As far as I know, the fact that no networking CNI-compatible solution was in place should not affect the creation of /var/lib/kubelet/config.yaml:
A) Under the kubeadm preflight checks we can see which issues will cause the join phase to fail.
B) I also tested this by removing the CNI solution I was using (Calico), then running kubeadm reset and kubeadm join again: no errors appeared in the kubeadm logs (I got the successful execution logs mentioned above) and /var/lib/kubelet/config.yaml was created properly.
(*) Of course the cluster can't function in this state; I just wanted to emphasize that I think the problem was one of the options mentioned in A.
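A few of the blockers mentioned above (ports already in use, swap, bridge/iptables settings) can be checked quickly before re-running the join; a sketch:
sudo ss -tlnp | grep 10250                        # the kubelet port should be free before joining
swapon --show                                     # should print nothing; otherwise run: sudo swapoff -a
cat /proc/sys/net/bridge/bridge-nf-call-iptables  # should be 1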

Related

Running Kubeadm from source build

Attempting to deploy a k8s master node using kubeadm from a fork of the Kubernetes repository, branch release-1.19. What configuration is necessary ahead of running kubeadm init {opts...}?
The kubeadm guide recommends installing kubeadm, kubectl and kubelet using apt. The guide states that, following installation, "The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do."
From a local repository I'm compiling the Kubernetes binaries (kubeadm, kubectl and kubelet) using the 'make all' method, then scp'ing them to the master node at /usr/local/bin with exec perms.
Executing kubeadm init fails since the kubelet is not running/configured. However, initialising the required kubelet.service from the kubelet binary seems to require the certs (ca.pem) and configs (kubelet.config.yaml) that I assumed kubeadm generates, so there is a chicken-and-egg situation between kubeadm and the kubelet.
The question then is, what additional configurations does the apt installation complete for initialising the kubelet.service?
Is there a minimal config & service template kubelet can be started with ahead of kubeadm init?
Does kubeadm replace the certs used by the pre-initialised kubelet?
Any help/direction would be hugely appreciated. Online docs/threads for building from source are sparse.
For anyone searching, I found the solution to this:
Install dependencies through apt: apt-transport-https, conntrack, socat, ipset
Move kubelet, kubeadm, kubectl binaries to /usr/local/bin and give exec perms
Write systemd kubelet.service file to /etc/systemd/system
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Write the kubelet drop-in file to /etc/systemd/system/kubelet.service.d
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Build the CNI plugins
https://github.com/containernetworking/plugins
i.e. for Linux, run build_linux.sh
Copy the CNI plugin binaries to /opt/cni
Start Kubelet
systemctl daemon-reload
systemctl enable kubelet --now
systemctl start kubelet
Now kubeadm init can run
In short, this initialises the kubelet.service systemd unit with some default/minimal configs before kubeadm init; kubeadm init then modifies those configs when it runs.
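Before running kubeadm init you can confirm that the kubelet started from the hand-built binary and is crash-looping as the guide describes (a sketch):
systemctl status kubelet --no-pager      # should show the service repeatedly restarting
journalctl -u kubelet -n 20 --no-pager   # expect "failed to load Kubelet config file ..." until kubeadm init runs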

How can I rename master nodes in a HA kubernetes cluster?

I have a kubernetes cluster with 3 master nodes. They are named master-1, master-2 and master-3. I would like to rename them as control-plane-n.
I could not find a clear procedure for this. The closest one is how to rename a node in a cluster, so I just tried that. Here is what I did (my hosts are running Ubuntu 18.04 and Kubernetes v1.16.2):
On master-1:
kubectl drain master-3 --ignore-daemonsets
kubectl delete node master-3
Run "kubeadm token create --print-join-command" and copy the output
On master-3:
sudo kubeadm reset
sudo hostnamectl set-hostname control-plane-3
Modify /etc/cloud/cloud.cfg to set preserve_hostname to true
Reboot the VM
Paste in the join command from master-1, with --control-plane option added
Here is the log I got:
sudo kubeadm join 172.22.19.188:6443 --control-plane --token nxxzby.zsfdx86e7cv1rq0e --discovery-token-ca-cert-hash sha256:553366c2f91fd3abffe3e3d1c39d9314e2d73e8a6181f4da9938a8e24fd77456
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/data/kubernetes/pki"
error execution phase control-plane-prepare/certs: error creating PKI assets: failed to write or validate certificate "apiserver": certificate apiserver is invalid: x509: certificate is valid for master-3, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not control-plane-3
To see the stack trace of this error execute with --v=5 or higher
How can I proceed? Or is there a better approach?
Thanks in advance for any idea or suggestion you can offer.
Based on zerkms' comment, you can create a fourth node with the proper name, join it, and then remove one of the old nodes from the cluster.
Repeating this three times, you end up with all nodes having the desired names.
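For reference, a sketch of the commands involved (the IP is the one from the question's log; the token, hash and certificate key are placeholders printed by the first two commands):
# on an existing control-plane node
kubeadm token create --print-join-command
sudo kubeadm init phase upload-certs --upload-certs      # prints a certificate key
# on the new node (e.g. control-plane-3)
sudo kubeadm join 172.22.19.188:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key>
# once it is Ready, drain and delete one of the old masters
kubectl drain master-3 --ignore-daemonsets
kubectl delete node master-3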

Kubelet config yaml is missing when restart work node docker service

When I restart the Docker service on the worker node, the kubelet logs on the master node report a 'no such file' error.
# in work node
# systemctl restart docker service
# in master node
# journalctl -u kubelet
# failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Arghya is right but I would like to add some info you should be aware of:
You can execute kubeadm init phase kubelet-start to only invoke a particular step that will write the kubelet configuration file and environment file and then start the kubelet.
After performing a restart there is a chance that swap has been re-enabled. Make sure to run swapoff -a to turn it off.
If you encounter any token validation problems, simply run kubeadm token create --print-join-command and then do the join process with the provided info. Remember that tokens expire after 24 hours by default.
If you wish to know more about kubeadm init phase you can find it here and here.
Please let me know if that helped.
You might have done kubeadm reset, which cleans up all the files.
Just do kubeadm reset --force to reset the node, then kubeadm init on the master node and kubeadm join on the worker node thereafter.
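A sketch of that sequence, assuming only the worker node needs to be reset and rejoined (if the master was also reset, run kubeadm init there first):
# on the worker node
sudo kubeadm reset --force
# on the master node
kubeadm token create --print-join-command
# back on the worker node, paste the command it printed, e.g.
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>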

The connection to the server xxxx:6443 was refused - did you specify the right host or port?

I followed this to install Kubernetes on my cloud.
When I run command kubectl get nodes I get this error:
The connection to the server localhost:6443 was refused - did you specify the right host or port?
How can I fix this?
If you only followed the docs mentioned, it means that you have only installed kubeadm, kubectl and kubelet.
If you want to run kubeadm properly you need to do 3 more steps.
1. Install docker
Install the Docker Ubuntu version. If you are using another system, choose it from the left-hand menu.
Why:
If you do not install Docker, you will receive an error like the one below:
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
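A minimal way to get Docker in place on Ubuntu before re-running the init (a sketch; the docker.io package from the Ubuntu repositories is one option, the Docker docs linked above cover docker-ce):
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
docker --version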
2. Initialization of kubeadm
You have installed kubeadm and Docker properly, but now you need to initialize kubeadm. Docs can be found here.
In short version you have to run command
$ sudo kubeadm init
After initialization you will be instructed to run commands like:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
and a token to join other VMs to the cluster. It looks like:
kubeadm join 10.166.XX.XXX:6443 --token XXXX.XXXXXXXXXXXX \
--discovery-token-ca-cert-hash sha256:aXXXXXXXXXXXXXXXXXXXXXXXX166b0b446986dd05c1334626aa82355e7
If you want to run some special action in the init phase, please check these docs.
3. Change node status to Ready
After the previous step you will be able to execute
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-kubeadm NotReady master 4m29s v1.16.2
But your node will be in NotReady status. If you describe it with $ kubectl describe node you will see an error:
Ready False Wed, 30 Oct 2019 09:55:09 +0000 Wed, 30 Oct 2019 09:50:03 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It means that you have to install one of the CNIs. A list of them can be found here.
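For example, installing Flannel (the same manifest used earlier in this thread; Calico and the others work just as well):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
kubectl get nodes   # the node should switch to Ready once the CNI pods are up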
EDIT
Also, one more thing comes to mind: sometimes after you turn the VM off and on you need to restart the kubelet and Docker services. You can do that with:
$ service docker restart
$ systemctl restart kubelet
Hope it helps.
Looks like the kubeconfig file is missing. Did you copy the admin.conf file to ~/.kube/config?
Verify whether any proxies are set, such as "http_proxy" or "https_proxy" (these are mostly set as environment variables). If so, remove the proxies and it should work for you.
I did the following 2 steps; kubectl works now.
$ service docker restart
$ systemctl restart kubelet

Unable to find CGroups details in 10-kubeadm.conf file

I was trying to set up a Kubernetes cluster based on the documentation: https://kubernetes.io/docs/tasks/tools/install-kubeadm/
I install kubeadm by running:
yum install -y kubeadm
I was about to update the 10-kubeadm.conf file as mentioned in the doc, but the file looks completely different; it looks like this: https://github.com/kubernetes/kubernetes/blob/master/build/rpms/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
It does not have a cgroup driver variable. So in this case, how should we proceed with the installation?
First of all, ensure that besides kubeadm you have also installed kubelet and kubectl. If not, install them:
yum install -y kubelet kubectl
Check whether Docker has been started with the systemd cgroup driver:
docker info | grep -i cgroup
Modify your 10-kubeadm.conf file and add a new line:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Additionally, you have to add the $KUBELET_CGROUP_ARGS variable to the ExecStart line.
And as a final step, reload the systemd manager configuration and restart the kubelet service as described here:
systemctl daemon-reload && service kubelet restart
UPDATE
Since version 1.11, Kubernetes automatically detects the right cgroup driver, so you can simply skip the step about setting the cgroup driver.
That is from the changelog:
kubeadm now detects the Docker cgroup driver and starts the kubelet with the matching driver. This eliminates a common error experienced by new users when the Docker cgroup driver is not the same as the one set for the kubelet due to different Linux distributions setting different cgroup drivers for Docker, making it hard to start the kubelet properly. (#64347, @neolit123)