I created a master node on GCE using these commands:
gcloud compute instances create master --machine-type g1-small --zone europe-west1-d
gcloud compute addresses create myexternalip --region europe-west1
gcloud compute target-pools create kubernetes --region europe-west1
gcloud compute target-pools add-instances kubernetes --instances master --instances-zone europe-west1-d
gcloud compute forwarding-rules create kubernetes-forward --address myexternalip --region europe-west1 --ports 1-65535 --target-pool kubernetes
gcloud compute forwarding-rules describe kubernetes-forward
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
and opened all firewalls.
After that, I created an AWS EC2 instance, opened its firewalls, and ran:
kubeadm join --token 55d287.b540e254a280f853 ip:6443 --discovery-token-unsafe-skip-ca-verification
to join the instance to the cluster.
But the node is not displayed on the master.
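A standard way to check this from the master is to list the registered nodes (<node-name> below is a placeholder for whatever hostname the instance registered under):
kubectl get nodes -o wide          # list registered nodes and their addresses
kubectl describe node <node-name>  # inspect conditions if the node appears but is NotReady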
Docker version: 17.12
Kubernetes version: 1.9.3
UPD:
Output from the node on AWS EC2:
systemctl status kubelet.service:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2018-02-24 20:23:53 UTC; 23s ago
Docs: http://kubernetes.io/docs/
Main PID: 30678 (kubelet)
Tasks: 5
Memory: 13.4M
CPU: 125ms
CGroup: /system.slice/kubelet.service
└─30678 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests -
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420375 30678 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420764 30678 controller.go:114] kubelet config controller: starting controller
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420944 30678 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: W0224 20:23:53.425410 30678 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.444969 30678 server.go:182] Version: v1.9.3
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.445274 30678 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.445565 30678 plugins.go:101] No cloud provider specified.
journalctl -u kubelet:
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.819249 30243 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.821054 30243 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.821243 30243 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.186834 30304 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.187255 30304 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.187451 30304 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.422948 30311 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.423349 30311 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.423525 30311 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:43 ip-172-31-0-250 kubelet[30319]: I0224 20:15:43.671742 30319 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:43 ip-172-31-0-250 kubelet[30319]: I0224 20:15:43.672195 30319 controller.go:114] kubelet config controller: starting controller
UPD:
The error is on the AWS EC2 instance side, but I can't find what is wrong.
PROBLEM SOLVED
You should initialize kubeadm with the --apiserver-advertise-address flag.
After you create the load balancer, run this command to show its external IP address:
gcloud compute forwarding-rules describe kubernetes-forward
Then initialize the cluster with this flag:
--apiserver-advertise-address=external_load_balancer_ip
So your kubeadm command looks like this:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=external_load_balancer_ip
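Putting it together, a sketch of the whole sequence (this assumes the kubernetes-forward rule from above; --format='value(IPAddress)' is the standard gcloud way to extract a single field):
# Grab the load balancer's external IP into a shell variable
LB_IP=$(gcloud compute forwarding-rules describe kubernetes-forward --region europe-west1 --format='value(IPAddress)')
# Initialize the control plane, advertising the load balancer address
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address="$LB_IP"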
Related
I am trying to deploy Spring Boot microservices in a Kubernetes cluster with 1 master and 2 worker nodes. When I get the node state with sudo kubectl get nodes, one of my worker nodes shows NotReady in the status.
To troubleshoot, I ran the following command:
sudo journalctl -u kubelet
The response includes kubelet.service: Unit entered failed state, and the kubelet service stopped. The following is the full output of sudo journalctl -u kubelet.
-- Logs begin at Fri 2020-01-03 04:56:18 EST, end at Fri 2020-01-03 05:32:47 EST. --
Jan 03 04:56:25 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 03 04:56:31 MILDEVKUB050 kubelet[970]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Jan 03 04:56:31 MILDEVKUB050 kubelet[970]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.053962 970 server.go:416] Version: v1.17.0
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.084061 970 plugins.go:100] No cloud provider specified.
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.235928 970 server.go:821] Client rotation is on, will bootstrap in background
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.280173 970 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curre
Jan 03 04:56:38 MILDEVKUB050 kubelet[970]: I0103 04:56:38.107966 970 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 03 04:56:38 MILDEVKUB050 kubelet[970]: F0103 04:56:38.109401 970 server.go:273] failed to run Kubelet: running with swap on is not supported, please disable swa
Jan 03 04:56:38 MILDEVKUB050 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jan 03 04:56:38 MILDEVKUB050 systemd[1]: kubelet.service: Unit entered failed state.
Jan 03 04:56:38 MILDEVKUB050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 03 04:56:48 MILDEVKUB050 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 03 04:56:48 MILDEVKUB050 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 03 04:56:48 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.901632 1433 server.go:416] Version: v1.17.0
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.907654 1433 plugins.go:100] No cloud provider specified.
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.907806 1433 server.go:821] Client rotation is on, will bootstrap in background
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.947107 1433 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
Jan 03 04:56:49 MILDEVKUB050 kubelet[1433]: I0103 04:56:49.263777 1433 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to
Jan 03 04:56:49 MILDEVKUB050 kubelet[1433]: F0103 04:56:49.264219 1433 server.go:273] failed to run Kubelet: running with swap on is not supported, please disable sw
Jan 03 04:56:49 MILDEVKUB050 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jan 03 04:56:49 MILDEVKUB050 systemd[1]: kubelet.service: Unit entered failed state.
Jan 03 04:56:49 MILDEVKUB050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.712729 1500 server.go:416] Version: v1.17.0
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.714927 1500 plugins.go:100] No cloud provider specified.
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.715248 1500 server.go:821] Client rotation is on, will bootstrap in background
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.763508 1500 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.956706 1500 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: F0103 04:56:59.957078 1500 server.go:273] failed to run Kubelet: running with swap on is not supported, please disable sw
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Unit entered failed state.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 03 04:57:10 MILDEVKUB050 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 03 04:57:10 MILDEVKUB050 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 03 04:57:10 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
The log file repeatedly shows: kubelet.service: Unit entered failed state.
I tried restarting the kubelet, but the node state did not change; it is still NotReady.
Updates
When I run the command systemctl list-units --type=swap --state=active, I get the following response:
docker@MILDEVKUB040:~$ systemctl list-units --type=swap --state=active
UNIT LOAD ACTIVE SUB DESCRIPTION
dev-mapper-MILDEVDCR01\x2d\x2dvg\x2dswap_1.swap loaded active active /dev/mapper/MILDEVDCR01--vg-swap_1
Important
Whenever the node goes NotReady like this, I have to disable swap and reload the daemon and the kubelet; the node then becomes Ready, but later I have to repeat the same steps.
How can I find a permanent solution for this?
failed to run Kubelet: running with swap on is not supported, please disable swap
You need to disable swap on the system for kubelet to work. You can disable swap with sudo swapoff -a
On systemd-based systems, swap partitions can also be enabled via swap units, which are re-activated whenever systemd reloads, even if you have turned swap off with swapoff -a:
https://www.freedesktop.org/software/systemd/man/systemd.swap.html
Check if you have any swap units using systemctl list-units --type=swap --state=active
You can permanently disable any active swap unit with systemctl mask <unit name>.
Note: Do not use systemctl disable <unit name> to disable the swap unit, as the unit will be activated again when systemd reloads. Use systemctl mask <unit name> only.
To make sure swap doesn't get re-enabled when your system reboots due to power cycle or any other reason, remove or comment out the swap entries in /etc/fstab
Summarizing:
Run sudo swapoff -a
Check for swap units with systemctl list-units --type=swap --state=active; if there are any active swap units, mask them using systemctl mask <unit name>
Remove swap entries in /etc/fstab
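A combined sketch of those steps (the unit name below is the one from the systemctl output above; the sed line is one common way to comment out swap entries in /etc/fstab, so review the file afterwards):
sudo swapoff -a
# Mask the active swap unit reported by systemctl list-units
sudo systemctl mask 'dev-mapper-MILDEVDCR01\x2d\x2dvg\x2dswap_1.swap'
# Comment out any uncommented swap lines in /etc/fstab so swap stays off across reboots
sudo sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab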
The root cause is the swap space. To disable it completely, follow these steps:
run swapoff -a: this immediately disables swap, but it will be re-activated on restart
remove any swap entry from /etc/fstab
reboot the system.
If the swap is gone, good. If, for some reason, it is still there, you will have to remove the swap partition. Repeat steps 1 and 2 and, after that, use fdisk or parted to remove the (now unused) swap partition. Use great care here: removing the wrong partition will have disastrous effects!
reboot
This should resolve your issue.
Note: removing /etc/fstab itself will break the VM; when I deleted the file, every command (install, ping, and others) started failing. Remove or comment out only the swap entries, not the whole file.
I am trying to join a worker node to a k8s cluster:
sudo kubeadm join 10.2.67.201:6443 --token x --discovery-token-ca-cert-hash sha2566 x
But I get an error at this stage:
'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp
Error:
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I see that the kubelet service is down:
journalctl -xeu kubelet
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: F1122 15:49:00.224350 286703 server.go:251] unable to load client CA file /etc/kubernetes/ssl/ca.crt: open /etc/kubernetes/ssl/ca.cr
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: Unit kubelet.service entered failed state.
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: kubelet.service failed.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: F1122 15:49:10.476478 286717 server.go:251] unable to load client CA file /etc/kubernetes/ssl/ca.crt: open /etc/kubernetes/ssl/ca.cr
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Unit kubelet.service entered failed state.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service failed.
I fixed it.
Just copy /etc/kubernetes/pki/ca.crt into /etc/kubernetes/ssl/ca.crt
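A minimal sketch of that fix, assuming the certificate already exists under /etc/kubernetes/pki on the node:
sudo mkdir -p /etc/kubernetes/ssl
sudo cp /etc/kubernetes/pki/ca.crt /etc/kubernetes/ssl/ca.crt
# systemd keeps restarting the kubelet anyway, but a manual restart avoids the hold-off delay
sudo systemctl restart kubelet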
I'm getting the error below while installing a Kubernetes cluster on Ubuntu 18.04. The Kubernetes master is ready, and I'm using flannel as the pod network. I'm adding my first node to the cluster using the join command.
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletNotReady Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Update:
I noticed the following on the worker node:
root@worker02:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2019-12-11 06:47:41 UTC; 27s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 14247 (kubelet)
Tasks: 14 (limit: 2295)
CGroup: /system.slice/kubelet.service
└─14247 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driv
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.085292 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kuber
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.086115 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-nbss2" (UniqueName
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.087975 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubern
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.088104 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kube
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.088153 14247 reconciler.go:156] Reconciler: start to sync state
Dec 11 06:47:45 worker02 kubelet[14247]: E1211 06:47:45.130889 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:48 worker02 kubelet[14247]: E1211 06:47:48.134042 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:50 worker02 kubelet[14247]: E1211 06:47:50.538096 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:53 worker02 kubelet[14247]: E1211 06:47:53.131425 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:56 worker02 kubelet[14247]: E1211 06:47:56.840529 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Please let me know how to fix this.
I had the same issue, except in my case it happened on a running k8s cluster: the CSINodeInfo issue suddenly appeared on some of my nodes, and those nodes were no longer in the cluster. After searching Google for several days, I finally got the answer from
Node cannot join #86094
Just editing /var/lib/kubelet/config.yaml to add:
featureGates:
  CSIMigration: false
... at the end of the file seemed to allow the cluster to start as expected.
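One way to apply that on the affected node (a sketch; it blindly appends to the config, so check first that the file does not already contain a featureGates block):
# Append the feature gate to the kubelet config and restart the kubelet
cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
featureGates:
  CSIMigration: false
EOF
sudo systemctl restart kubelet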
Link to install Kubernetes
This link covers only one cluster with one master node. If you want to add worker nodes, you need to specify each machine's IP and name in the /etc/hosts file of your master node, then instantiate your Kubernetes master. Once it has started, join your worker nodes to the master. Make sure you install kubectl and docker on your worker nodes. If you want the master only to manage the Kubernetes cluster, skip step 26 of the linked guide.
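For example, the master's /etc/hosts entries might look like this (the hostnames and IPs here are placeholders; substitute your own machines):
# Hypothetical worker addresses; replace with your own
echo "192.168.1.11 worker01" | sudo tee -a /etc/hosts
echo "192.168.1.12 worker02" | sudo tee -a /etc/hosts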
I'm having issues getting a node to join my k8s cluster. It was in the cluster before, but now refuses to join. Nothing on the servers themselves has changed, besides leaving and re-joining the cluster.
Kubernetes version: 1.14.9beta (Compiled from source for kubespawner compatibility)
OS version(s): Manjaro Openbox (All systems are on the same patch levels)
Below is the journalctl log from a join attempt:
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-con
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --allow-privileged has been deprecated, will be removed in a future version
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubel
Nov 18 14:56:07 dragon-den kubelet[17701]: I1118 14:56:07.856840 17701 server.go:418] Version: v1.14.9-beta.0.44+500f5aba80d712
Nov 18 14:56:07 dragon-den kubelet[17701]: I1118 14:56:07.857493 17701 plugins.go:103] No cloud provider specified.
Nov 18 14:56:07 dragon-den kubelet[17701]: W1118 14:56:07.857566 17701 server.go:557] standalone mode, no API client
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.026479 17701 server.go:475] No api server defined - no events will be sent to API server.
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.026530 17701 server.go:629] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027259 17701 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027322 17701 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027617 17701 container_manager_linux.go:286] Creating device plugin manager: true
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027819 17701 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.036358 17701 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.036403 17701 client.go:104] Start docker client with request timeout=2m0s
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.063851 17701 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.063918 17701 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.064142 17701 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.068056 17701 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.070412 17701 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.086972 17701 docker_service.go:258] Docker Info: &{ID:UZIS:BXME:6KSB:34PI:HKEZ:HRT6:I4XW:44VI:AR5M:Q4P7:EGFA:KRDD Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersS
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.087132 17701 docker_service.go:271] Setting cgroupDriver to systemd
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.099887 17701 remote_runtime.go:62] parsed scheme: ""
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.099950 17701 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100029 17701 remote_image.go:50] parsed scheme: ""
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100049 17701 remote_image.go:50] scheme "" not registered, fallback to default scheme
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100315 17701 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100356 17701 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100381 17701 clientconn.go:796] ClientConn switching balancer to "pick_first"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100416 17701 clientconn.go:796] ClientConn switching balancer to "pick_first"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100519 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000cbaa0, CONNECTING
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100575 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000349b20, CONNECTING
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100959 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000349b20, READY
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.101021 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000cbaa0, READY
Nov 18 14:56:28 dragon-den kubelet[17701]: E1118 14:56:28.454199 17701 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Nov 18 14:56:28 dragon-den kubelet[17701]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.488935 17701 kuberuntime_manager.go:210] Container runtime docker initialized, version: 19.03.4-ce, apiVersion: 1.40.0
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.489074 17701 volume_host.go:77] kubeClient is nil. Skip initialization of CSIDriverLister
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.489559 17701 csi_plugin.go:215] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.509315 17701 server.go:1055] Started kubelet
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.509368 17701 kubelet.go:1387] No api server defined - no node status update will be sent.
Nov 18 14:56:28 dragon-den kubelet[17701]: E1118 14:56:28.509369 17701 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory ca
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.509441 17701 server.go:141] Starting to listen on 0.0.0.0:10250
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510125 17701 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510164 17701 status_manager.go:148] Kubernetes client is nil, not starting status manager.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510176 17701 kubelet.go:1808] Starting kubelet main sync loop.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510201 17701 volume_manager.go:248] Starting Kubelet Volume Manager
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510197 17701 kubelet.go:1825] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510272 17701 desired_state_of_world_populator.go:130] Desired state populator starts to run
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510546 17701 server.go:343] Adding debug handlers to kubelet server.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.610328 17701 kubelet.go:1825] skipping pod synchronization - container runtime status check may not have completed yet.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.667490 17701 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708302 17701 cpu_manager.go:155] [cpumanager] starting with none policy
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708327 17701 cpu_manager.go:156] [cpumanager] reconciling every 10s
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708362 17701 policy_none.go:42] [cpumanager] none policy: Start
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.710493 17701 reconciler.go:154] Reconciler: start to sync state
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.732840 17701 container.go:422] Failed to get RecentStats("/libcontainer_17701_systemd_test_default.slice") while determining the next housekeeping: unable to find data in memory
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.783045 17701 manager.go:540] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.783441 17701 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
I think my issue is related to server.go:475] No api server defined - no events will be sent to API server.
or cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d, but I'm not 100% sure.
Node config:
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
# Add your own!
KUBELET_ARGS=""
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
Server config:
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
# Add your own!
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
--kubeconfig=/etc/kubernetes/kubelet.conf \
--config=/var/lib/kubelet/config.yaml \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.1
--cgroup-driver=systemd"
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
#KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
I'm also using Calico for the networking, if that makes any difference.
This is a sign that some things were left behind from when this node was part of the cluster. It's not mandatory in most cases, but I highly recommend running some commands besides kubeadm reset to clean up your node after removing it from a cluster:
sudo kubeadm reset -f
sudo systemctl stop kubelet
sudo systemctl stop docker
sudo rm -rf /var/lib/cni/
sudo rm -rf /var/lib/kubelet/*
sudo rm -rf /var/lib/etcd/*
sudo rm -rf /etc/cni/
sudo ifconfig cni0 down
sudo ifconfig flannel.1 down
sudo ifconfig docker0 down
sudo ip link delete cni0
sudo ip link delete flannel.1
sudo systemctl start docker
Some of these commands may fail depending on your implementation.
I found this script created for this purpose. It's a bit old, but you can take some things from it.
Here is a similar case to yours: Reset Kubernetes Cluster.
I am new to the container world and trying to set up a Kubernetes cluster locally on two Linux VMs. During cluster initialization it got stuck at:
[apiclient] Created API client, waiting for the control plane to become ready
I have followed the pre-flight check steps:
[root#lm--kube-glusterfs--central ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [lm--kube-glusterfs--central kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.99.7.215]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
OS : Red Hat Enterprise Linux Server release 7.4 (Maipo)
Kubernetes versions:
kubeadm-1.6.0-0.x86_64.rpm
kubectl-1.6.0-0.x86_64.rpm
kubelet-1.6.0-0.x86_64.rpm
kubernetes-cni-0.6.0-0.x86_64.rpm
cri-tools-1.12.0-0.x86_64.rpm
How do I debug this issue, and is there a version mismatch between the kube packages? The same setup worked before, when I used the google.cloud repo to install with yum -y install kubelet kubeadm kubectl.
I could not use the repo this time due to firewall issues, hence installing from the RPMs.
After executing the following command: journalctl -xeu kubelet
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: W0702 09:45:09.841246 28749 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: I0702 09:45:09.841304 28749 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: W0702 09:45:09.845626 28749 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: I0702 09:45:09.857969 28749 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: Unit kubelet.service entered failed state.
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: kubelet.service failed.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Started Kubernetes Kubelet Server.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Starting Kubernetes Kubelet Server...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.251465 28772 feature_gate.go:144] feature gates: map[]
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.251889 28772 server.go:469] No API client: no api servers specified
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.252009 28772 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.252049 28772 docker.go:384] Start docker client with request timeout=2m0s
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.259436 28772 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.275674 28772 manager.go:143] cAdvisor running in container: "/system.slice"
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.317509 28772 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial r
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.328881 28772 fs.go:117] Filesystem partitions: map[/dev/vda2:{mountpoint:/ major:253 mino
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.335711 28772 manager.go:198] Machine: {NumCores:8 CpuFrequency:2095078 MemoryCapacity:337
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: [7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unifie
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.338001 28772 manager.go:204] Version: {KernelVersion:3.10.0-693.11.6.el7.x86_64 Container
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.338967 28772 server.go:350] No api server defined - no events will be sent to API server.
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.338980 28772 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specifie
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.342041 28772 container_manager_linux.go:245] container manager verified user specified cg
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.342071 28772 container_manager_linux.go:250] Creating Container Manager object based on N
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.346505 28772 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.346571 28772 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.351473 28772 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.363583 28772 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Unit kubelet.service entered failed state.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service failed.
Related to issue
There are a few fixes shown there; all you need is to change the cgroup driver to systemd.
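One common way to do that is to point Docker at the systemd cgroup driver so it matches the kubelet. A sketch (this overwrites /etc/docker/daemon.json, so merge by hand if you already have settings there):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet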