Kubernetes kube-apiserver service not starting after system reboot

I have set up a cluster with kubeadm and it was working fine, with port 6443 up. But after I reboot my system, the cluster does not come back up.
What should I do?
Please find the logs below:
node@node1:~$ sudo kubeadm init
[init] using Kubernetes version: v1.11.1
......
node@node1:~$
node@node1:~$ mkdir -p $HOME/.kube
node@node1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
node@node1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
node@node1:~$
node@node1:~$
node@node1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady master 4m v1.11.1
node@node1:~$ ps -ef | grep 6443
root 5542 5503 8 13:17 ? 00:00:17 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.16.2.171 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
node 6792 4426 0 13:20 pts/1 00:00:00 grep --color=auto 6443
node@node1:~$
node@node1:~$
node@node1:~$
node@node1:~$ sudo reboot
Connection to node1 closed by remote host.
Connection to node1 closed.
abc@xyz:~$ ssh node@node1
node@node1's password:
node@node1:~$ kubectl get nodes
No resources found.
The connection to the server 172.16.2.171:6443 was refused - did you specify the right host or port?
node@node1:~$
node@node1:~$ ps -ef | grep 6443
node 7083 1920 0 13:36 pts/0 00:00:00 grep --color=auto 6443

Your kubelet service is not running. Try to view its logs:
$ journalctl -u kubelet
To start the service:
$ sudo systemctl start kubelet
If you want kubelet to run at boot, you'll need to enable it. First of all, check the kubelet service status:
$ systemctl status kubelet
There will be a line:
...
Loaded: loaded (/etc/systemd/system/kubelet.service; (enabled|disabled); ...)
...
"disabled" entry means you should enable it:
$ sudo systemctl enable kubelet
But most likely it is already enabled, because this was done by the "systemd vendor preset", so you will have to debug why kubelet fails. You can post the log output here and the Stack Overflow community will help you.
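A quick way to check and fix the boot behaviour in one go (a sketch, assuming a systemd-based kubeadm install):
$ systemctl is-enabled kubelet           # prints "enabled" or "disabled"
$ sudo systemctl enable --now kubelet    # enable at boot and start it immediately
$ journalctl -u kubelet -n 50 --no-pager # show the last 50 kubelet log lines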

I assume that you did not install Kubernetes from the packages shipped with your
Linux distribution - as far as I know, installing from packages on Ubuntu sets up
the service dependencies needed to avoid the situation you are describing.
The problem you are facing is that nothing is configured to start kubelet at boot, neither systemd nor any other init script.
systemd is a system and service manager; it is what starts the Kubernetes services on system boot.
You may try to repair your installation by copying/creating the required systemd unit,
e.g. kubelet.service, in your /etc/systemd/system directory:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStart=/usr/bin/kubelet \
  --api-servers=http://127.0.0.1:8080 \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
and enable the service with systemctl:
sudo systemctl enable kubelet
The journalctl logs may provide information about problems with the Kubernetes
services if they persist:
sudo journalctl -xeu kubelet
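One step worth adding: after creating or changing a unit file, systemd has to reload its configuration before the unit can be enabled and started, for example:
sudo systemctl daemon-reload
sudo systemctl enable --now kubelet
systemctl status kubelet --no-pager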


How to start kubelet service?

I ran the command
systemctl stop kubelet
and then tried to start it again:
systemctl start kubelet
but it won't start.
Here is the output of systemctl status kubelet:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2019-06-05 15:35:34 UTC; 7s ago
Docs: https://kubernetes.io/docs/home/
Process: 31697 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 31697 (code=exited, status=255)
Because of this I am not able to run any kubectl command;
for example, kubectl get pods gives:
The connection to the server 172.31.6.149:6443 was refused - did you specify the right host or port?
What worked:
I needed to disable swap using swapoff -a,
then start kubelet again:
systemctl start kubelet
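Note that swapoff -a only lasts until the next reboot; to keep swap disabled permanently you also need to comment out the swap entry in /etc/fstab. A rough sketch (assuming GNU sed and a whitespace-separated fstab swap line; editing the file by hand works just as well):
sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comments out the swap line, keeps a .bak backup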
So I needed to reset the kubelet service.
Here are the steps:
Check the status of your docker service.
If it is stopped, start it with sudo systemctl start docker.
If it is not installed, install it:
# yum install -y kubelet kubeadm kubectl docker
Turn swap off with # swapoff -a
Now reset kubeadm with # kubeadm reset
Now try # kubeadm init
After that, check # systemctl status kubelet
It should be working.
Check the nodes:
kubectl get nodes
If the master node is not Ready, refer to the following.
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you are not able to create pods, check DNS:
kubectl get pods --namespace=kube-system
If the DNS pods are in Pending state, you need to install a pod network add-on.
I used Calico:
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
Now your master node is ready and you can deploy pods.
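To confirm the add-on took effect, re-check the kube-system pods and the node status; once the Calico and DNS pods are Running, the node should flip from NotReady to Ready:
kubectl get pods -n kube-system -o wide
kubectl get nodes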

Kubectl connectivity issue

I installed etcd, kube-apiserver and kubelet first, as systemd services. The services are running fine and listening on all the required ports.
When I run kubectl cluster-info, I get the output below:
Kubernetes master is running at http://localhost:8080
When I run kubectl get componentstatuses, I get the output below:
etcd-0 Healthy {"health": "true"}
But when I run kubectl get nodes, I get the error below:
Error from server (ServerTimeout): the server cannot complete the requested operation at this time, try again later (get nodes)
Can anybody help me out with this?
For the message:
:~# k get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
--------
Modify the following files on all master nodes:
$ sudo vim /etc/kubernetes/manifests/kube-scheduler.yaml
Comment or delete the line:
- --port=0
in (spec->containers->command->kube-scheduler)
$ sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml
Comment or delete the line:
- --port=0
in (spec->containers->command->kube-controller-manager)
Then restart kubelet service:
$ sudo systemctl restart kubelet.service
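For orientation, the relevant part of the scheduler's static pod manifest looks roughly like the excerpt below (exact flags vary between versions); the only change needed is commenting out or deleting the --port=0 entry, and the same applies to kube-controller-manager.yaml:
spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    # - --port=0    (comment out or delete this line)
Since these are static pods, the kubelet re-creates them once the manifests change; the kubelet restart above forces that to happen immediately.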
You're missing the kubeconfig file. kubectl looks for its config file at $HOME/.kube/config.
As part of the install, you can copy the config file like this on the master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
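Alternatively, if you are operating as root, you can point kubectl at the admin kubeconfig directly instead of copying it (kubeadm's init output suggests the same):
export KUBECONFIG=/etc/kubernetes/admin.conf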
What is the status of the controller manager and scheduler? Do you see them listed as Healthy when you run the command below?
kubectl get cs

Unable to start MiniKube: Stuck on Starting Cluster Components

I am new to minikube. I followed the steps below to install minikube on Oracle Linux 7.5 (kernel 3.10.0-327.28.3.el7.x86_64):
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && sudo install minikube-linux-amd64 /usr/local/bin/minikube
After installing, I ran minikube:
sudo minikube start --vm-driver=none
Minikube failed at:
sudo minikube start --vm-driver=none
Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Waiting for SSH to be available...
Detecting the provisioner...
Setting Docker configuration on the remote daemon...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
E0105 13:00:41.436961 19330 start.go:343] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: net/http: TLS handshake timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: net/http: TLS handshake timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: net/http: TLS handshake timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: net/http: TLS handshake timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: net/http: TLS handshake timeout
I also checked the logs and found that it is stuck in a retry loop, but I cannot figure out why:
I0105 12:24:24.522907 19330 utils.go:224] > Your Kubernetes master has initialized successfully!
I0105 12:24:24.522916 19330 utils.go:224] > To start using your cluster, you need to run the following as a regular user:
I0105 12:24:24.522919 19330 utils.go:224] > mkdir -p $HOME/.kube
I0105 12:24:24.522925 19330 utils.go:224] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0105 12:24:24.522929 19330 utils.go:224] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0105 12:24:24.522934 19330 utils.go:224] > You should now deploy a pod network to the cluster.
I0105 12:24:24.522944 19330 utils.go:224] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0105 12:24:24.522950 19330 utils.go:224] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0105 12:24:24.522957 19330 utils.go:224] > You can now join any number of machines by running the following on each node
I0105 12:24:24.522959 19330 utils.go:224] > as root:
I0105 12:24:24.522972 19330 utils.go:224] > kubeadm join localhost:8443 --token 5apexw.uv7nfpirz4on2e33 --discovery-token-ca-cert-hash sha256:6dcf73220b8bc229269bb8c6a350592fe6b0cd068ef8f336163cc5b3a384990e
I0105 12:24:45.792762 19330 utils.go:117] sleeping 500ms
I0105 12:24:46.292883 19330 utils.go:106] retry loop 1
I0105 12:25:07.561761 19330 utils.go:117] sleeping 500ms
I0105 12:25:08.062042 19330 utils.go:106] retry loop 2
I0105 12:25:29.330434 19330 utils.go:117] sleeping 500ms
I0105 12:25:29.830625 19330 utils.go:106] retry loop 3
I0105 12:25:51.096835 19330 utils.go:117] sleeping 500ms
I0105 12:25:51.597099 19330 utils.go:106] retry loop 4
I suppose you did this on AWS.
Remove everything and recreate from scratch. I just reproduced it and everything works fine.
Remove all:
minikube delete
rm -rf ~/.minikube
My steps from the beginning (run as root):
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && sudo install minikube-linux-amd64 /usr/local/bin/minikube
sudo yum install docker-engine
systemctl enable docker.service
systemctl start docker.service
minikube start --vm-driver=none
Result:
========================================
Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying kubelet health ...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks
When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions. An example of this is below:
sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
sudo chown -R $USER $HOME/.kube
sudo chgrp -R $USER $HOME/.kube
sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
sudo chown -R $USER $HOME/.minikube
sudo chgrp -R $USER $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.
Everything looks great. Please enjoy minikube!
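To confirm the cluster really came up after this, a quick check along these lines should do (exact output varies by minikube and kubectl version):
minikube status
kubectl get nodes
kubectl get pods --all-namespaces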

minikube start error exit status 1

Here is my error when I run "minikube start" on Aliyun.
What I did:
minikube delete
kubectl config use-context minikube
minikube start --vm-driver=none
On Aliyun (a third-party application server) I could not install VirtualBox or KVM,
so I tried to start it with --vm-driver=none.
[root@iZj6c68brirvucbzz5yyunZ home]# minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
[root@iZj6c68brirvucbzz5yyunZ home]# kubectl config use-context minikube
Switched to context "minikube".
[root@iZj6c68brirvucbzz5yyunZ home]# minikube start --vm-driver=none
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0618 16:06:56.885163 500 start.go:294] Error starting cluster: kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI running command: : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
output: [init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube" lookup minikube on 100.100.2.138:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [minikube] and IPs [172.31.4.34]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong CA cert
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
.: exit status 1
Versions of components:
[root@iZj6c68brirvucbzz5yyunZ home]# minikube version
minikube version: v0.28.0
[root@iZj6c68brirvucbzz5yyunZ home]# uname -a
Linux iZj6c68brirvucbzz5yyunZ 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@iZj6c68brirvucbzz5yyunZ home]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Why does minikube exit with status 1?
Thanks in advance.
First of all, try to clean up all traces of the previous unsuccessful minikube start. It should help with the certificate mismatch issue.
rm -rf ~/.minikube ~/.kube /etc/kubernetes
Then try to start minikube again.
minikube start --vm-driver=none
If you are still running into errors, try to follow my "happy path":
(This was tested on a fresh GCP instance with Ubuntu 16 on board.)
# become root
sudo su
# turn off swap
swapoff -a
# edit /etc/fstab and comment swap partition.
# add repository key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# add repository
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# update repository cache
apt-get update
# install some software
apt-get -y install ebtables ethtool docker.io apt-transport-https kubelet kubeadm kubectl
# tune sysctl
cat <<EOF >>/etc/ufw/sysctl.conf
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
EOF
sudo sysctl --system
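# (optional check, not in the original answer) verify the bridge sysctls took effect;
# both should report "= 1" once the bridge/br_netfilter modules are loaded
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables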
# download minikube
wget https://github.com/kubernetes/minikube/releases/download/v0.28.0/minikube-linux-amd64
# install minikube
chmod +x minikube-linux-amd64
mv minikube-linux-amd64 /usr/bin/minikube
# start minikube
minikube start --vm-driver=none
---This is what you should see----------
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Finished Downloading kubelet v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks
When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions. An example of this is below:
sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
sudo chown -R $USER $HOME/.kube
sudo chgrp -R $USER $HOME/.kube
sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
sudo chown -R $USER $HOME/.minikube
sudo chgrp -R $USER $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.
-------------------
#check the results
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 18s v1.10.0
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 9m
kube-system kube-addon-manager-minikube 1/1 Running 0 9m
kube-system kube-apiserver-minikube 1/1 Running 0 9m
kube-system kube-controller-manager-minikube 1/1 Running 0 10m
kube-system kube-dns-86f4d74b45-p99gv 3/3 Running 0 10m
kube-system kube-proxy-hlfc8 1/1 Running 0 10m
kube-system kube-scheduler-minikube 1/1 Running 0 9m
kube-system kubernetes-dashboard-5498ccf677-scdf9 1/1 Running 0 10m
kube-system storage-provisioner 1/1 Running 0 10m

kubelet reading from wrong config file?

When I run kubelet version I get an error message ending in:
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
But when I check the config file located at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, I see the value IS systemd. I have changed the value and done a systemctl daemon-reload and systemctl restart kubelet in between each change and the error message is always the same.
I am guessing it must be reading from the wrong config file, but how can I find out where it is reading from?
try this:
kubelet --cgroup-driver=systemd version
The "docker" package (1.13.1) already has "systemd" as the default cgroup-driver, see this.
The file driver is systemd changed by default cgroupfs, and docker file driver we installed is systemd caused by inconsistency, which causes the image to fail to start.
docker info
...
Cgroup Driver: systemd
There are two ways to fix this now: one is to modify Docker, the other is to modify kubelet.
Modify Docker:
Modify or create /etc/docker/daemon.json and add the following:
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Restart docker:
systemctl restart docker
systemctl status docker
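To confirm the change took effect, you can ask Docker for the driver directly (the --format key should work on any reasonably recent Docker version):
docker info --format '{{.CgroupDriver}}'   # should now print: systemd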
Modify kubelet:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Add --cgroup-driver=systemd to $KUBELET_EXTRA_ARGS.
Or (note that KUBELET_CGROUP_ARGS only takes effect if the kubelet ExecStart line actually references it; with the drop-in above, prefer KUBELET_EXTRA_ARGS as in the next variant):
$ DOCKER_CGROUPS=$(docker info 2>/dev/null | awk '/Cgroup Driver/ {print $3}')
$ echo $DOCKER_CGROUPS   # should print the driver name, e.g. systemd
$ cat >/etc/sysconfig/kubelet<<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=$DOCKER_CGROUPS"
EOF
#restart
$ systemctl daemon-reload
$ systemctl enable kubelet && systemctl restart kubelet
Or:
DOCKER_CGROUPS=$(docker info 2>/dev/null | awk '/Cgroup Driver/ {print $3}')
echo $DOCKER_CGROUPS   # should print the driver name, e.g. systemd
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS"
EOF
# restart
$ systemctl daemon-reload
$ systemctl enable kubelet && systemctl restart kubelet
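Side note (not from the answers above, just a pointer): on newer kubelet versions the cgroup driver is usually set in the kubelet config file referenced by --config, typically /var/lib/kubelet/config.yaml, rather than via a command-line flag. A minimal sketch of that alternative:
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# then reload and restart
sudo systemctl daemon-reload
sudo systemctl restart kubelet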