Kubernetes install in offline mode error: timeout at wait-control-plane

I am new to Kubernetes and am trying to install it with kubeadm in offline mode, using Kubernetes v1.24.1. After bringing all the images onto my Oracle Linux 8 server and making the necessary changes suggested by the official guide, my kubeadm init command runs into a timeout with the following error:
[root@hhpsoscn0001 ~]# kubeadm init --ignore-preflight-errors all --config Configfile.yaml
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[WARNING Port-10250]: Port 10250 is in use
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Running the kubelet status command shows me this:
[root@hhpsoscn0001 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2022-06-24 08:32:31 UTC; 7min ago
Docs: https://kubernetes.io/docs/
Main PID: 155445 (kubelet)
Tasks: 20 (limit: 202900)
Memory: 47.3M
CGroup: /system.slice/kubelet.service
└─155445 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.141148 155445 kubelet.go:2419] "Error getting node" err="node \"hhpsoscn0001\" not foun>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.242350 155445 kubelet.go:2419] "Error getting node" err="node \"hhpsoscn0001\" not foun>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.342806 155445 kubelet.go:2419] "Error getting node" err="node \"hhpsoscn0001\" not foun>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.381220 155445 controller.go:144] failed to ensure lease exists, will retry in 7s, error>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.443850 155445 kubelet.go:2419] "Error getting node" err="node \"hhpsoscn0001\" not foun>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: I0624 08:39:51.539027 155445 kubelet_node_status.go:70] "Attempting to register node" node="hhpsoscn00>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.539440 155445 kubelet_node_status.go:92] "Unable to register node with API server" err=>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.544727 155445 kubelet.go:2419] "Error getting node" err="node \"hhpsoscn0001\" not foun>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.645092 155445 kubelet.go:2419] "Error getting node" err="node \"hhpsoscn0001\" not foun>
Jun 24 08:39:51 hhpsoscn0001 kubelet[155445]: E0624 08:39:51.745630 155445 kubelet.go:2419] "Error getting node" err="node \"hhpsoscn0001\" not foun>
and the ps x | grep kubelet command shows me this:
[root@hhpsoscn0001 ~]# ps x | grep kubelet
156080 ? Ssl 0:10 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=docker-hub.m-net.vv/k8s/pause:3.7
156734 pts/4 S+ 0:00 grep --color=auto kubelet
Can somebody please tell me what is wrong?

It looks like you have tried to run kubeadm init multiple times.
Try this:
kubeadm reset --force
This resets the bootstrap state on the node; once that is done, a fresh kubeadm init should come up. From the provided logs, it looks like you either changed the hostname after the first init, or the running kubelet was still bootstrapped from a previous init.
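A minimal sketch of that recovery sequence, reusing the config file from your question (the iptables flush is optional, just to clear state left behind by earlier attempts):
kubeadm reset --force
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # optional: clear leftover iptables rules
hostnamectl status                                                          # confirm the hostname the kubelet will register under
kubeadm init --ignore-preflight-errors all --config Configfile.yaml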

Related

unable to initialize kubernetes cluster

I was learning to set up a single-node Kubernetes cluster but was not able to initialise it. I've done some research and applied a few fixes, but still couldn't get my kubelet running. Can someone help? Thanks a lot!
My environment (Ubuntu 64-bit, VMware Workstation 17 Player):
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
The errors are shown below.
liming@liming-virtual-machine-3:/$ sudo kubeadm init
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local liming-virtual-machine-3] and IPs [10.96.0.1 192.168.2.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [liming-virtual-machine-3 localhost] and IPs [192.168.2.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [liming-virtual-machine-3 localhost] and IPs [192.168.2.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
liming@liming-virtual-machine-3:~$ systemctl status kubelet
× kubelet.service - kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor pres>
Active: failed (Result: exit-code) since Mon 2022-12-26 17:05:54 +08; 2h >
Main PID: 3223 (code=exited, status=217/USER)
CPU: 1ms
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: kubelet.service: Schedule>
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: Stopped kubelet.
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: kubelet.service: Start re>
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: kubelet.service: Failed w>
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: Failed to start kubelet.
The fixes I've applied beforehand are shown below.
$ iptables -F
$ swapoff -a
$ free -m
$ kubeadm reset
$ kubeadm init
I've also set the Docker cgroup driver to systemd:
docker info |grep -i cgroup
Cgroup Driver: systemd
Cgroup Version: 2
cgroupns
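For reference, a minimal sketch (assuming kubeadm has already written /var/lib/kubelet/config.yaml, as in the init output above) of how the runtime and kubelet cgroup drivers can be compared:
docker info | grep -i 'cgroup driver'               # runtime side
grep -i cgroupdriver /var/lib/kubelet/config.yaml   # kubelet side, written by kubeadm during init
sudo journalctl -u kubelet -f                       # watch the kubelet while it restarts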
Below is the kubelet log:
liming@liming-virtual-machine-3:~$ journalctl -xeu kubelet
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ Automatic restarting of the unit kubelet.service has been scheduled, as the>
░░ the configured Restart= setting for the unit.
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: Stopped kubelet.
░░ Subject: A stop job for unit kubelet.service has finished
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A stop job for unit kubelet.service has finished.
░░
░░ The job identifier is 3203 and the job result is done.
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: kubelet.service: Start re>
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: kubelet.service: Failed w>
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ The unit kubelet.service has entered the 'failed' state with result 'exit-c>
Dec 26 17:05:54 liming-virtual-machine-3 systemd[1]: Failed to start kubelet.
░░ Subject: A start job for unit kubelet.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit kubelet.service has finished with a failure.
░░
░░ The job identifier is 3203 and the job result is failed.
Swap should already be disabled, as shown below.
liming@liming-virtual-machine-3:~$ systemctl list-units --type=swap --state=active
UNIT LOAD ACTIVE SUB DESCRIPTION
0 loaded units listed.
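For completeness, a minimal sketch (assuming swap is declared in /etc/fstab) of keeping swap off across reboots:
sudo swapoff -a                             # turn swap off now
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab   # comment out swap entries so it stays off after a reboot
free -m                                     # the Swap line should report 0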

kubeadm v1.18.2 with crio version 1.18.2 failing to start master node from private repo on Centos7 / RH7

Description
I am relatively new to Kubernetes. I can run my cluster with the default socket (/var/run/dockershim.sock), but when I tried the CRI-O socket to pull the images from my private repo, I noticed the pull speed was not even comparable.
I am trying to configure all my nodes to use the crio.socket, but I am failing to launch the master node with this socket.
I followed both the Kubernetes documentation (Configuring each kubelet in your cluster using kubeadm) and the cri-o git documentation.
Unfortunately I am not able to get it working, as kubeadm seems to be ignoring the private repo flag.
Steps to reproduce the issue:
Launch a master node (prime) with the following init (using a private repo):
kubeadm init \
--upload-certs \
--cri-socket=/var/run/crio/crio.sock \
--node-name=my_node_name \
--image-repository=my.private.repo \
--pod-network-cidr=10.96.0.0/16 \
--kubernetes-version=v1.18.2 \
--control-plane-endpoint=ip:6443 \
--apiserver-cert-extra-sans=ip \
--apiserver-advertise-address=ip
Run as root or with sudo: journalctl -xeu crio -f
Observe the sample logs below (CRI-O running in debug or info mode)
Describe the results you received:
Sample of logs from crio in debug mode:
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043499089+02:00" level=debug msg="Trying to access \"k8s.gcr.io/pause:3.2\"" file="docker/docker_image_src.go:68"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043547722+02:00" level=debug msg="Credentials not found" file="config/config.go:123"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043576124+02:00" level=debug msg="Using registries.d directory /etc/containers/registries.d for sigstore configuration" file="docker/lookaside.go:51"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043706369+02:00" level=debug msg=" Using \"default-docker\" configuration" file="docker/lookaside.go:169"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043736378+02:00" level=debug msg=" No signature storage configuration found for k8s.gcr.io/pause:3.2" file="docker/lookaside.go:174"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043769424+02:00" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/k8s.gcr.io" file="tlsclientconfig/tlsclientconfig.go:21"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.043858410+02:00" level=debug msg="GET https://k8s.gcr.io/v2/" file="docker/docker_client.go:516"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.046154250+02:00" level=debug msg="Ping https://k8s.gcr.io/v2/ err Get \"https://k8s.gcr.io/v2/\": dial tcp 10.254.3.15:443: connect: connection refused (&url.Error{Op:\"Get\", URL:\"https://k8s.gcr.io/v2/\", Err:(*net.OpError)(0xc00084d5e0)})" file="docker/docker_client.go:708"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.046239456+02:00" level=debug msg="GET https://k8s.gcr.io/v1/_ping" file="docker/docker_client.go:516"
Jun 30 20:03:45 hostname crio[6693]: time="2020-06-30 20:03:45.048653448+02:00" level=debug msg="Ping https://k8s.gcr.io/v1/_ping err Get \"https://k8s.gcr.io/v1/_ping\": dial tcp 10.254.3.15:443: connect: connection refused (&url.Error{Op:\"Get\", URL:\"https://k8s.gcr.io/v1/_ping\", Err:(*net.OpError)(0xc0006b0690)})" file="docker/docker_client.go:735"
Describe the results you expected:
Launching node with the use of crio socket
Additional information you deem important (e.g. issue happens only occasionally):
If I launch the node using the default socket e.g.:
# kubeadm init \
--upload-certs \
--cri-socket=/var/run/dockershim.sock \
--node-name=my_node_name \
--image-repository=my.private.repo \
--pod-network-cidr=10.96.0.0/16 \
--kubernetes-version=v1.18.2 \
--control-plane-endpoint=ip:6443 \
--apiserver-cert-extra-sans=ip \
--apiserver-advertise-address=ip
W0630 20:24:33.223266 29033 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0630 20:24:35.839949 29033 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0630 20:24:35.841420 29033 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.003647 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
key
[mark-control-plane] Marking the node hostname as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hostname as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: token
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join ip:6443 --token token \
--discovery-token-ca-cert-hash sha256:hash \
--control-plane --certificate-key key
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join ip:6443 --token token \
--discovery-token-ca-cert-hash sha256:hash
If I launch the node with crio socket:
# kubeadm init \
--upload-certs \
--cri-socket=/var/run/crio/crio.sock \
--node-name=my_node_name \
--image-repository=my.private.repo \
--pod-network-cidr=10.96.0.0/16 \
--kubernetes-version=v1.18.2 \
--control-plane-endpoint=ip:6443 \
--apiserver-cert-extra-sans=ip \
--apiserver-advertise-address=ip
W0630 20:32:33.827957 2916 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hostname kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.96.134.57 10.96.134.57 10.96.134.57]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hostname localhost] and IPs [10.96.134.57 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hostname localhost] and IPs [10.96.134.57 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0630 20:32:37.829806 2916 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0630 20:32:37.830826 2916 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I can see that localhost is listening on port 10248:
# curl -sSL http://localhost:10248/healthz
ok
Sample of crio socket (as described in documentation):
# curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info | jq
* About to connect() to localhost port 80 (#0)
* Trying /var/run/crio/crio.sock...
* Failed to set TCP_KEEPIDLE on fd 3
* Failed to set TCP_KEEPINTVL on fd 3
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to localhost (/var/run/crio/crio.sock) port 80 (#0)
> GET /info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Tue, 30 Jun 2020 18:36:35 GMT
< Content-Length: 240
<
{ [data not shown]
100 240 100 240 0 0 144k 0 --:--:-- --:--:-- --:--:-- 234k
* Connection #0 to host localhost left intact
{
"storage_driver": "overlay2",
"storage_root": "/var/lib/containers/storage",
"cgroup_driver": "systemd",
"default_id_mappings": {
"uids": [
{
"container_id": 0,
"host_id": 0,
"size": 4294967295
}
],
"gids": [
{
"container_id": 0,
"host_id": 0,
"size": 4294967295
}
]
}
}
Output of kubelet status
# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2020-06-30 20:39:49 CEST; 6s ago
Docs: https://kubernetes.io/docs/
Main PID: 8502 (kubelet)
Tasks: 15
Memory: 20.1M
CGroup: /system.slice/kubelet.service
└─8502 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=hostname
Jun 30 20:39:55 hostname kubelet[8502]: I0630 20:39:55.369441 8502 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Jun 30 20:39:55 hostname kubelet[8502]: I0630 20:39:55.399015 8502 kubelet_node_status.go:70] Attempting to register node hostname
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.403707 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.503871 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.604115 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.704324 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.769448 8502 kubelet_node_status.go:92] Unable to register node "hostname" with API server: Post https://ip:6443/api/v1/nodes: dial tcp ip:6443: connect: connection refused
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.805779 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:55 hostname kubelet[8502]: E0630 20:39:55.906014 8502 kubelet.go:2267] node "hostname" not found
Jun 30 20:39:56 hostname kubelet[8502]: E0630 20:39:56.007272 8502 kubelet.go:2267] node "hostname" not found
From the little that I know, the network errors are not relevant, since I have not yet launched the network container, so those errors are expected at this point.
Output of crio --version:
# crio --version
crio version 1.18.2
Version: 1.18.2
GitCommit: 7f261aeebffed079b4475dde8b9d602b01973d33
GitTreeState: clean
BuildDate: 2020-06-18T21:05:27Z
GoVersion: go1.14
Compiler: gc
Platform: linux/amd64
Linkmode: static
Output of kubelet --version:
# kubelet --version
Kubernetes v1.18.2
Output of the Linux OS version:
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.8 (Maipo)
Additional environment details (AWS, VirtualBox, physical, etc.):
The installation is on a bare-metal node.
Sample of the kubelet extra-args file:
# cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m
Update: I have raised a ticket on GitHub, Kubernetes v1.18.2 with crio version 1.18.2 failing to sync with kubelet on RH7 #3915. It looks like there is a bug, as CRI-O does not honour the remote repository and keeps trying to pull from the default registry k8s.gcr.io. I will update the ticket as soon as I have more information.
So the problem is not exactly a bug in CRI-O, as we (and the CRI-O dev team) initially thought; rather, quite a bit of configuration needs to be applied if you want to use CRI-O as the CRI for Kubernetes together with a private repo.
I will not repeat the full CRI-O configuration here, as it is already documented in the ticket I raised with the team:
Kubernetes v1.18.2 with crio version 1.18.2 failing to sync with kubelet on RH7
#3915.
The first thing to configure is the container registries from which the images will be pulled:
$ cat /etc/containers/registries.conf
[[registry]]
prefix = "k8s.gcr.io"
insecure = false
blocked = false
location = "k8s.gcr.io"
[[registry.mirror]]
location = "my.private.repo"
CRI-O recommends passing this configuration as a flag to the kubelet (haircommander/cri-o-kubeadm), but for me it was not working with only that configuration.
I went back to the Kubernetes manual, which recommends not passing the flag to the kubelet but instead putting the setting into /var/lib/kubelet/config.yaml at run time. For me this is not possible, as the node needs to start with the CRI-O socket and not any other socket (ref: Configure cgroup driver used by kubelet on control-plane node).
So I managed to get it up and running by passing this setting in my kubeadm config file, a sample of which is below:
$ cat /tmp/config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: node.name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
controlPlaneEndpoint: 1.2.3.4:6443
imageRepository: my.private.repo
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.85.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
Then the user can simply start the master / worker node with the flag --config <file.yml> and the node will launch successfully, as sketched below.
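A minimal sketch (the CRI socket, node name and private repo all come from the config file above):
kubeadm init --config /tmp/config.yaml --upload-certs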
I hope all of this information helps someone else.

kube init: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

I've tried many things to get it to work. I've disabled my proxy settings (removed all the environment variables) and tried with docker, containerd and crio. I've also tried serviceSubnet: "11.96.0.0/12" and authorization-mode: "None". Below are some related details and logs. Any help would be appreciated.
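For reference, a sketch of how the proxy variables can be cleared in the shell that runs kubeadm (the names match the env dump below):
unset http_proxy https_proxy ftp_proxy ftps_proxy no_proxy
env | grep -i proxy    # should print nothing afterwards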
Environment:
ftps_proxy=http://proxy:3128
XDG_SESSION_ID=5
HOSTNAME=my-hostname
SHELL=/bin/bash
TERM=xterm-256color
HISTSIZE=1000
SYSTEMCTL_SKIP_REDIRECT=1
USER=root
http_proxy=http://proxy:3128
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
SUDO_UID=68247485
ftp_proxy=http://proxy:3128
USERNAME=root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/root
LANG=en_US.UTF-8
https_proxy=http://proxy:3128
SHLVL=1
SUDO_COMMAND=/usr/bin/su
HOME=/root
LC_TERMINAL_VERSION=3.3.11
no_proxy=***REDACTED****
LOGNAME=root
LESSOPEN=||/usr/bin/lesspipe.sh %s
SUDO_GID=39999
LC_TERMINAL=iTerm2
_=/usr/bin/env
Kubernetes version (use kubectl version):
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:45:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration:
from cat /proc/cpuinfo:
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Platinum 8167M CPU @ 2.00GHz
stepping : 4
microcode : 0x1
cpu MHz : 1995.315
cache size : 16384 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke md_clear
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips : 3990.63
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
OS (e.g. from /etc/os-release):
NAME="Oracle Linux Server"
VERSION="7.8"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.8"
PRETTY_NAME="Oracle Linux Server 7.8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:8:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.8
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.8
Kernel (e.g. uname -a):
Linux ********* REDACTED ********* 4.14.35-2020.el7uek.x86_64 #2 SMP Fri May 15 12:40:03 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
Others:
Output from KUBECONFIG=/etc/kubernetes/admin.conf kubectl get po -A is Unable to connect to the server: Forbidden
Output from: tail -n 100 /var/log/messages | grep kubelet is:
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.245860 16845 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node info: node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.268437 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.367850 16845 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.368580 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.468741 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.568945 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.669102 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.769265 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.869423 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:49 my-host kubelet: E0702 02:31:49.969613 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.069779 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.169952 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.270162 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.370314 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.470518 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.570690 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.670844 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.771025 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.871242 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:50 my-host kubelet: E0702 02:31:50.971404 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.071568 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.171749 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.271907 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.372112 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.472280 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.572449 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.672617 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.769715 16845 event.go:269] Unable to write event: 'Patch https://10.41.11.150:6443/api/v1/namespaces/default/events/my-host.161de4f886249d98: Forbidden' (may retry after sleeping)
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.772793 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.872998 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.911040 16845 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://10.41.11.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/my-host?timeout=10s: Forbidden
Jul 2 02:31:51 my-host kubelet: E0702 02:31:51.973186 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:52 my-host kubelet: E0702 02:31:52.073314 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:52 my-host kubelet: E0702 02:31:52.173498 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:52 my-host kubelet: E0702 02:31:52.273690 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:52 my-host kubelet: E0702 02:31:52.373853 16845 kubelet.go:2267] node "my-host" not found
Jul 2 02:31:52 my-host kubelet: E0702 02:31:52.474005 16845 kubelet.go:2267] node "my-host" not found
What happened?
I ran kubeadm init and got this instead:
kubeadm init --v=5
I0702 02:19:47.181576 16698 initconfiguration.go:103] detected and using CRI socket: /run/containerd/containerd.sock
I0702 02:19:47.181764 16698 interface.go:400] Looking for default routes with IPv4 addresses
I0702 02:19:47.181783 16698 interface.go:405] Default route transits interface "ens3"
I0702 02:19:47.181863 16698 interface.go:208] Interface ens3 is up
I0702 02:19:47.181909 16698 interface.go:256] Interface "ens3" has 1 addresses :[10.41.11.150/28].
I0702 02:19:47.181929 16698 interface.go:223] Checking addr 10.41.11.150/28.
I0702 02:19:47.181939 16698 interface.go:230] IP found 10.41.11.150
I0702 02:19:47.181949 16698 interface.go:262] Found valid IPv4 address 10.41.11.150 for interface "ens3".
I0702 02:19:47.181958 16698 interface.go:411] Found active IP 10.41.11.150
I0702 02:19:47.182015 16698 version.go:183] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
W0702 02:19:47.660545 16698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
I0702 02:19:47.660897 16698 checks.go:577] validating Kubernetes and kubeadm version
I0702 02:19:47.660931 16698 checks.go:166] validating if the firewall is enabled and active
I0702 02:19:47.670323 16698 checks.go:201] validating availability of port 6443
I0702 02:19:47.670487 16698 checks.go:201] validating availability of port 10259
I0702 02:19:47.670518 16698 checks.go:201] validating availability of port 10257
I0702 02:19:47.670552 16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0702 02:19:47.670567 16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0702 02:19:47.670578 16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0702 02:19:47.670587 16698 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0702 02:19:47.670597 16698 checks.go:432] validating if the connectivity type is via proxy or direct
I0702 02:19:47.670632 16698 checks.go:471] validating http connectivity to first IP address in the CIDR
I0702 02:19:47.670654 16698 checks.go:471] validating http connectivity to first IP address in the CIDR
I0702 02:19:47.670662 16698 checks.go:102] validating the container runtime
I0702 02:19:47.679912 16698 checks.go:376] validating the presence of executable crictl
I0702 02:19:47.679978 16698 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0702 02:19:47.680030 16698 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0702 02:19:47.680065 16698 checks.go:649] validating whether swap is enabled or not
I0702 02:19:47.680141 16698 checks.go:376] validating the presence of executable conntrack
I0702 02:19:47.680166 16698 checks.go:376] validating the presence of executable ip
I0702 02:19:47.680190 16698 checks.go:376] validating the presence of executable iptables
I0702 02:19:47.680216 16698 checks.go:376] validating the presence of executable mount
I0702 02:19:47.680245 16698 checks.go:376] validating the presence of executable nsenter
I0702 02:19:47.680270 16698 checks.go:376] validating the presence of executable ebtables
I0702 02:19:47.680292 16698 checks.go:376] validating the presence of executable ethtool
I0702 02:19:47.680309 16698 checks.go:376] validating the presence of executable socat
I0702 02:19:47.680327 16698 checks.go:376] validating the presence of executable tc
I0702 02:19:47.680343 16698 checks.go:376] validating the presence of executable touch
I0702 02:19:47.680365 16698 checks.go:520] running all checks
I0702 02:19:47.690210 16698 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0702 02:19:47.691000 16698 checks.go:618] validating kubelet version
I0702 02:19:47.754775 16698 checks.go:128] validating if the service is enabled and active
I0702 02:19:47.764254 16698 checks.go:201] validating availability of port 10250
I0702 02:19:47.764336 16698 checks.go:201] validating availability of port 2379
I0702 02:19:47.764386 16698 checks.go:201] validating availability of port 2380
I0702 02:19:47.764435 16698 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0702 02:19:47.772992 16698 checks.go:838] image exists: k8s.gcr.io/kube-apiserver:v1.18.5
I0702 02:19:47.782489 16698 checks.go:838] image exists: k8s.gcr.io/kube-controller-manager:v1.18.5
I0702 02:19:47.790023 16698 checks.go:838] image exists: k8s.gcr.io/kube-scheduler:v1.18.5
I0702 02:19:47.797925 16698 checks.go:838] image exists: k8s.gcr.io/kube-proxy:v1.18.5
I0702 02:19:47.805928 16698 checks.go:838] image exists: k8s.gcr.io/pause:3.2
I0702 02:19:47.814148 16698 checks.go:838] image exists: k8s.gcr.io/etcd:3.4.3-0
I0702 02:19:47.821926 16698 checks.go:838] image exists: k8s.gcr.io/coredns:1.6.7
I0702 02:19:47.821971 16698 kubelet.go:64] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0702 02:19:47.952580 16698 certs.go:103] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [my-host kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.41.11.150]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0702 02:19:48.880369 16698 certs.go:103] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0702 02:19:49.372445 16698 certs.go:103] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [my-host localhost] and IPs [10.41.11.150 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [my-host localhost] and IPs [10.41.11.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0702 02:19:50.467723 16698 certs.go:69] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0702 02:19:50.617181 16698 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0702 02:19:50.763578 16698 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0702 02:19:51.169983 16698 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0702 02:19:51.328280 16698 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0702 02:19:51.469999 16698 manifests.go:91] [control-plane] getting StaticPodSpecs
I0702 02:19:51.470375 16698 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0702 02:19:51.470394 16698 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0702 02:19:51.470400 16698 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0702 02:19:51.476683 16698 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0702 02:19:51.476735 16698 manifests.go:91] [control-plane] getting StaticPodSpecs
W0702 02:19:51.476802 16698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0702 02:19:51.477044 16698 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0702 02:19:51.477062 16698 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0702 02:19:51.477068 16698 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0702 02:19:51.477095 16698 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0702 02:19:51.477101 16698 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0702 02:19:51.478030 16698 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0702 02:19:51.478061 16698 manifests.go:91] [control-plane] getting StaticPodSpecs
W0702 02:19:51.478146 16698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0702 02:19:51.478368 16698 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0702 02:19:51.479022 16698 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0702 02:19:51.479773 16698 local.go:72] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0702 02:19:51.479799 16698 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.502800 seconds
I0702 02:20:05.985260 16698 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0702 02:20:05.998189 16698 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
I0702 02:20:06.006321 16698 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I0702 02:20:06.006340 16698 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "my-host" as an annotation
[kubelet-check] Initial timeout of 40s passed.
timed out waiting for the condition
Error writing Crisocket information for the control-plane node
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runUploadKubeletConfig
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/uploadconfig.go:129
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
error execution phase upload-config/kubelet
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/anago-v1.18.5-rc.1.1+d0eb837f519592/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
It sounds like you may need to clean up your node. The log indicates that kubeadm cannot communicate with etcd, which may be caused by existing iptables rules or by hostnames not matching. You can try:
sudo swapoff -a
sudo kubeadm reset
sudo rm -rf /var/lib/cni/
sudo systemctl daemon-reload
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
And then re-run kubeadm init.
A similar issue is described here.
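Before re-running init, it can also help to confirm that the node's hostname resolves consistently and that etcd and the API server answer their health checks once the static pods come up. A minimal sketch, assuming the default kubeadm certificate paths and that etcdctl is installed on the host (kubeadm does not install it):
hostname -f                               # should match the node name kubeadm registers
getent hosts "$(hostname)"                # the hostname should resolve to the expected IP
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key endpoint health
curl -k https://127.0.0.1:6443/healthz    # the API server should return "ok"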
I use the following script to completely remove an existing Kubernetes cluster, including any running Docker containers:
sudo kubeadm reset
sudo apt purge kubectl kubeadm kubelet kubernetes-cni -y
sudo apt autoremove
sudo rm -fr /etc/kubernetes/; sudo rm -fr ~/.kube/; sudo rm -fr /var/lib/etcd; sudo rm -rf /var/lib/cni/
sudo systemctl daemon-reload
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# remove all running docker containers
docker rm -f `docker ps -a | grep "k8s_" | awk '{print $1}'`
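The last step above only applies to Docker-based nodes. The original question uses containerd (/run/containerd/containerd.sock), so a rough equivalent there, assuming crictl is installed, would be:
# stop and remove every pod sandbox the kubelet created under containerd
CRI=unix:///run/containerd/containerd.sock
sudo crictl --runtime-endpoint "$CRI" pods -q | xargs -r sudo crictl --runtime-endpoint "$CRI" stopp
sudo crictl --runtime-endpoint "$CRI" pods -q | xargs -r sudo crictl --runtime-endpoint "$CRI" rmp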

kubeadm join failure, error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition

When joining the node:
sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2
The output:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
And systemctl status kubelet:
node#node:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 26716 (kubelet)
Tasks: 16 (limit: 1111)
CGroup: /system.slice/kubelet.service
└─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml -
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347 26716 kubelet.go:2244] node "node" not found
Regarding the Unauthorized errors, I checked on the master with kubeadm token list and the token is valid.
So what's the problem? Thanks a lot.
Please verify the pre- and post-installation steps here:
Please also verify that your services are enabled and running, and check your Docker environment:
sudo systemctl enable docker
sudo systemctl enable kubelet
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
Are the results the same if you run the init command with --ignore-preflight-errors=all?
For more details, please also use "journalctl -u kubelet".
With more details from your logs, please also take a look at "github - kubeadm/issues" here:
Please provide more details about your environment so the issue can be recreated, and share any additional findings.
Could you also perform another test and run kubeadm init on your worker node, in the same way as on the first node (in short, create a second master node), just to verify your working environment?
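Since the kubelet log shows Unauthorized during the TLS bootstrap, it is also worth ruling out a stale or expired bootstrap token by generating a fresh join command on the master and repeating the join with it. A minimal sketch (the token and hash placeholders come from the freshly printed command, not from the original post):
# on the master: print a ready-to-use join command with a new token and the current CA hash
sudo kubeadm token create --print-join-command
# on the worker: undo the partial join, then run the printed command
sudo kubeadm reset
sudo kubeadm join 172.16.7.101:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<new-hash>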

kubeadm init 1.9 hangs with vsphere cloud provider

kubeadm init seems to be hanging since I started using the vsphere cloud provider. I followed the instructions from here - has anybody got it working with 1.9?
root#master-0:~# kubeadm init --config /tmp/kube.yaml
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "master-0" could not be reached
[WARNING Hostname]: hostname "master-0" lookup master-0 on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.11.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Master OS details
root#master-0:~# uname -r
4.4.0-21-generic
root#master-0:~# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
root#master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
kubelet service seems to be running fine
root#master-0:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Mon 2018-01-22 11:25:00 UTC; 13min ago
Docs: http://kubernetes.io/docs/
Main PID: 4270 (kubelet)
Tasks: 13 (limit: 512)
Memory: 37.6M
CPU: 11.626s
CGroup: /system.slice/kubelet.service
└─4270 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
journalctl -f -u kubelet shows some connection-refused errors, probably because the pod network add-on is not installed yet. Those errors should go away once a network add-on is installed after kubeadm init (see the note after the log excerpt below).
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.759764 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.761350 1184 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.762632 1184 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:46 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:46 localhost kubelet[1184]: W0122 11:17:46.070619 1184 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081384 1184 server.go:182] Version: v1.9.1
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081417 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.082271 1184 plugins.go:101] No cloud provider specified.
Jan 22 11:17:46 localhost kubelet[1184]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Unit entered failed state.
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 11:17:56 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 22 11:17:56 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.410883 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411198 1229 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411353 1229 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:56 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:56 localhost kubelet[1229]: W0122 11:17:56.424264 1229 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429102 1229 server.go:182] Version: v1.9.1
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429156 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429247 1229 plugins.go:101] No cloud provider specified.
Jan 22 11:17:56 localhost kubelet[1229]: E0122 11:17:56.461608 1229 certificate_manager.go:314] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.11.0.101:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.11.0.101:6443: getsockopt: connection refused
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.491374 1229 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492069 1229 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492102 1229 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
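As a side note on the CNI warnings above: they normally clear once a pod network add-on is applied after kubeadm init finishes, for example with flannel (shown as one common choice; the manifest URL has moved over time, so check the flannel project for the current path):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml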
docker ps, controller & scheduler logs
root#master-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6db549891439 677911f7ae8f "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
4f7ddefbd86e 4978f9a64966 "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
18604db89db6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
252b86ea4b5e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
4021061bf8a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master-0_kube-system_7a3ae9279d0ca7b4ada8333fbe7442ed_0
4f94163d313b gcr.io/google_containers/etcd-amd64:3.1.10 "etcd --name=etcd0..." About an hour ago Up About an hour 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
root#master-0:~# docker logs -f 4f7ddefbd86e
I0122 11:25:06.253706 1 controllermanager.go:108] Version: v1.9.1
I0122 11:25:06.258712 1 leaderelection.go:174] attempting to acquire leader lease...
E0122 11:25:06.259448 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:09.711377 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:13.969270 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:17.564964 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:20.616174 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
root#master-0:~# docker logs -f 6db549891439
W0122 11:25:06.285765 1 server.go:159] WARNING: all flags other than --config are deprecated. Please begin using a config file ASAP.
I0122 11:25:06.292865 1 server.go:551] Version: v1.9.1
I0122 11:25:06.295776 1 server.go:570] starting healthz server on 127.0.0.1:10251
E0122 11:25:06.295947 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.11.0.101:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296027 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.11.0.101:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296092 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.11.0.101:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296160 1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.11.0.101:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296218 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.11.0.101:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.297374 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.11.0.101:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
This was caused by a bug in the controller manager when starting with the vSphere cloud provider; see https://github.com/kubernetes/kubernetes/issues/57279. It was fixed in 1.9.2.
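Since the fix shipped in 1.9.2, upgrading the kubeadm packages and re-running init is the practical remedy. A rough sketch for Ubuntu 16.04, assuming the (now legacy) apt.kubernetes.io repository is configured; the exact package revision suffix may differ:
sudo apt-get update
sudo apt-get install -y kubeadm=1.9.2-00 kubelet=1.9.2-00 kubectl=1.9.2-00
sudo kubeadm reset
sudo kubeadm init --config /tmp/kube.yaml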