I have installed a Kubernetes cluster using kubeadm. It works fine, but I accidentally restarted the host machine where the master was running. After the restart the kubelet was not running, so I had to do
kubeadm reset
and
kubeadm init
What should I do to bring my cluster up automatically after a host machine restart?
There is no need to do kubeadm reset and kubeadm init in such cases.
To start kubelet service during the current session use:
systemctl start kubelet
To start service automatically during the boot, you must enable it using:
systemctl enable kubelet
Keep in mind that if you are running the above commands as a non-root user, you will have to use sudo.
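As a quick sketch, assuming a kubeadm-installed node where the kubelet is managed by systemd, both steps can be combined and the result verified:
sudo systemctl enable --now kubelet   # enable at boot and start it for the current session
systemctl status kubelet              # confirm the service is active (running)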
Related
Today my Kubernetes (v1.21) cluster certificates expired (after 1 year), so I used this command to renew them:
kubeadm certs renew all
The output says that kube-apiserver and etcd should be restarted:
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
I tried to restart them this way:
[root@k8smasterone ~]# systemctl restart kube-apiserver
Failed to restart kube-apiserver.service: Unit not found.
What should I do to restart all of the components properly? I also tried to find the corresponding Kubernetes pods, but could not find any pod for the API server.
Restart the kubelet on the master node; your kube-apiserver runs as a static pod. Example: systemctl restart kubelet. When you do this on the master node, all of the core components (kube-*) will be restarted.
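A minimal sketch of that sequence, assuming a kubeadm-managed control plane node:
sudo systemctl restart kubelet    # restart the kubelet on the control plane node, per the answer above
kubectl get pods -n kube-system   # the kube-apiserver, kube-controller-manager, kube-scheduler and etcd pods should come back up
kubeadm certs check-expiration    # confirm the components now show the renewed certificate dates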
I need to simulate a Node "NotReady" status for a node and tried to stop the kubelet to achieve that, but I am getting the error below. It looks like this is not the right command for my K8s environment. I need this to verify that taints and tolerations work as expected.
systemctl stop kubelet.service
Failed to stop kubelet.service: Unit kubelet.service not loaded.
RKE is a K8s distribution that, per its documentation, runs entirely within Docker containers. That means that none of the K8s services are native Linux services. Try docker ps, and you'll find a container named kubelet running (among others).
So to stop kubelet service on RKE clusters, you'll need to stop the container:
docker stop kubelet
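A hedged sketch of simulating NotReady on an RKE node this way (the ~40 second delay assumes the default node-monitor-grace-period):
docker stop kubelet    # the node should be reported NotReady after roughly 40 seconds
kubectl get nodes      # watch the status change, then check that your taints and tolerations behave as expected
docker start kubelet   # bring the node back once the test is done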
To use systemctl stop kubelet, the kubelet unit first needs to be registered with systemd by running systemctl enable kubelet; after that, the stop command can be used to stop the kubelet.
Alternatively, run ps -ef to identify the kubelet process and stop it.
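For the process-based approach, a minimal sketch (the PID placeholder is whatever ps reports on your node):
ps -ef | grep kubelet   # identify the kubelet process ID
sudo kill <PID>         # stop that process; note that RKE's container restart policy may bring it back automatically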
I want to set higher logging verbosity in my K8s setup. While I managed to increase verbosity for the API server and kubectl with the --v=4 argument, I am having difficulty finding a way to pass this flag to the kubelet.
I am using kubeadm init to launch a small-scale cluster, where the master is also untainted so it can serve as a worker node. Can you help with enabling kubelet logging verbosity?
1) SSH to the master node
2) Append --v=4 to the /var/lib/kubelet/kubeadm-flags.env file, i.e.
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --v=4
3) Restart the kubelet service:
sudo systemctl restart kubelet
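An alternative sketch, assuming a kubeadm-provisioned node: kubeadm's systemd drop-in also reads KUBELET_EXTRA_ARGS from /etc/default/kubelet (or /etc/sysconfig/kubelet on RPM-based distros), so the flag can be set there instead and the effect checked in the journal:
echo 'KUBELET_EXTRA_ARGS=--v=4' | sudo tee /etc/default/kubelet   # note: this overwrites any existing contents of that file
sudo systemctl restart kubelet
journalctl -u kubelet --since "1 min ago"   # verbose (v=4) log lines should now appear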
I have a multi-master Kubernetes cluster set up using Kubespray.
I ran an application on it using Helm, which increased the load on the masters drastically and made them almost inaccessible. After that I shut down the masters one by one and increased their RAM and CPU. But after rebooting, both the apiserver and scheduler pods fail to start; they are in the "CreateContainerError" state.
The API server is logging a lot of errors with the message x509: certificate has expired or is not yet valid.
There are other threads about this error, and most of them suggest fixing the apiserver or cluster certificates. But this is a newly set up cluster and the certificates are valid until 2020.
Here are some details of my cluster.
CentOS Linux release: 7.6.1810 (Core)
Docker version: 18.06.1-ce, build e68fc7a
Kubernetes Version
Client Version: v1.13.2
Server Version: v1.13.2
It is very possible that during the shutdown/reboot the Docker containers for the apiserver and scheduler exited with a non-zero exit status such as 255. I suggest you first remove all containers with a non-zero exit status using the docker rm command. Do this on all masters, and on the worker nodes as well.
By default, Kubernetes starts new pods for all services (apiserver, scheduler, controller-manager, DNS, pod network, etc.) after a reboot. You can see the newly started containers for these services using Docker commands, for example:
docker ps -a | grep "kube-apiserver" OR
docker ps -a | grep "kube-scheduler"
After removing the exited containers, I believe the new pods for the apiserver and scheduler should run properly in the cluster and be in the "Running" status.
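As a hedged sketch of that cleanup (run on each master; note it removes every exited container, so inspect the list first):
docker ps -a --filter "status=exited" | grep -E "kube-apiserver|kube-scheduler"   # see what exited and with which status
docker ps -a --filter "status=exited" -q | xargs -r docker rm                     # remove all exited containers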
When I do a minikube start I get:
Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
However, if I do this again I get the same output.
Is this creating a new cluster, reprovisioning the existing cluster or just doing nothing?
minikube start resumes the old cluster that is paused/stopped.
But if you delete the minikube machine with minikube delete, or delete it manually from VirtualBox, then minikube start will create a new cluster.
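A small sketch of how the lifecycle commands relate (all standard minikube subcommands):
minikube status   # shows whether the existing VM and cluster are Running or Stopped
minikube stop     # stops the VM; the cluster state is preserved
minikube start    # resumes that same cluster
minikube delete   # discards the cluster; only after this does minikube start provision a new one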
Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a cluster containing only one node.
So basically minikube is the only node in the Kubernetes cluster, and repeated attempts to start it still result in that single node.
$ kubectl get nodes
This returns the minikube node.
But I came across this link, which provides info on creating a multi-node cluster.
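For reference, newer minikube releases (roughly 1.10 and later) can create a multi-node cluster directly; a hedged sketch, assuming such a version:
minikube start --nodes 2   # provisions a two-node cluster
kubectl get nodes          # should now list both nodes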