How to restart coredns pod without downtime in Kubernetes

I have a coredns pod running in the kube-system namespace. I need to restart the coredns pod without downtime. I am aware that I can delete the coredns pods with the command below and new coredns pods will spin up automatically.
kubectl delete pods -n kube-system -l k8s-app=kube-dns
But that causes downtime, so I am looking for a way to restart the coredns pods without downtime.
Use case: I have set the cache TTL to 1 hour (cache 3600). I have an external DNS server, and I forward requests to it using the forward plugin. Whenever I need to pick up recent changes to the external DNS entries before the TTL expires, I think the coredns pods have to be restarted. Is there any other way to achieve this? If a restart is the only way, how can I do it without downtime? Any help would be appreciated. Thanks in advance!
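For context, the relevant part of a Corefile for this kind of setup might look roughly like the following (a sketch; 10.1.2.3 is a placeholder for the external DNS server, and the other plugins mirror a typical default Corefile):
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . 10.1.2.3   # placeholder: the external DNS server
    cache 3600           # answers are cached for up to 1 hour
    loop
    reload
    loadbalance
}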

Normally, the command kubectl get deployment coredns --namespace kube-system --output jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}' will return 1, which means that for a deployment of 2 pods (the typical coredns setup) the pods are replaced one at a time, leaving the other pod serving requests. In that case you can run kubectl rollout restart deployment coredns --namespace kube-system to restart without downtime; there is no need to explicitly delete pods or raise the coredns replica count.
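Put together, the check and the rolling restart described above look like this (a sketch; the deployment name coredns and the kube-system namespace match a standard kubeadm setup):
# Confirm that only one pod is replaced at a time during the rollout
kubectl get deployment coredns --namespace kube-system --output jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}'
# Trigger the rolling restart and wait for it to complete
kubectl rollout restart deployment coredns --namespace kube-system
kubectl rollout status deployment coredns --namespace kube-system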

Related

Minikube: Remove coredns

I wonder if it's possible to remove coredns from a Kubernetes cluster managed by minikube.
Context:
I self-manage coredns with modified Kubernetes manifests.
In order to do so, I need to remove the preinstalled coredns resources.
I do so by calling kubectl -n kube-system delete all -l k8s-app=kube-dns.
However, if I restart minikube (minikube stop && minikube start), the previously deleted coredns resources get recreated.
I look forward to your responses.
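Before deleting, one way to see exactly which preinstalled resources that label selector matches is a sketch like the following (the list of resource kinds is an assumption and may vary by minikube version):
kubectl -n kube-system get deployment,service,configmap,serviceaccount -l k8s-app=kube-dns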

How to change kube-proxy config?

I've tried to change the kube-proxy ConfigMap and the kube-proxy command to set metricsBindAddress, but Kubernetes resets these changes (without any warning) after a couple of seconds.
kubectl edit cm kube-proxy-config -n kube-system => add metricsBindAddress => wait a couple of seconds and reopen the config: metricsBindAddress is empty again
kubectl edit ds kube-proxy -n kube-system => add --metrics-bind-address to the command => wait a couple of seconds => the command is reset to the default
How can I change the kube-proxy config and keep these changes?
Kubernetes version 1.17
UPDATE: after several seconds, metricsBindAddress was changed back to an empty string.
UPDATE 2: note that metricsBindAddress is reset after ~40-50 seconds.
FINAL UPDATE:
Answer from the cloud provider (Yandex): the kube-proxy pod is on the host's network, so to prevent security problems it listens exclusively on the loopback address, and therefore the parameter is reset.
P.S. https://github.com/helm/charts/tree/master/stable/prometheus-operator#kubeproxy - I want to make kube-proxy accessible to Prometheus.
First edit:
kubectl edit cm/kube-proxy -n kube-system
.....
metricsBindAddress: 0.0.0.0:10249
.....
Then,
kubectl rollout restart ds kube-proxy -n kube-system
You have to restart the pods; otherwise, they do not pick up the new configuration.
You can check the rollout status with:
kubectl rollout status ds kube-proxy -n kube-system
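For reference, the field being edited lives under the config.conf key of the kube-proxy ConfigMap; a minimal sketch of the relevant part (everything except metricsBindAddress is left at its defaults, which may differ per cluster):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# The default is 127.0.0.1:10249; 0.0.0.0 exposes the metrics endpoint on all interfaces
metricsBindAddress: 0.0.0.0:10249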
I am posting this Community Wiki answer because the root cause of the issue has been determined.
Usually, changing metricsBindAddress can be achieved by editing the ConfigMap and then either deleting the kube-proxy pods or using rollout restart on the DaemonSet.
The root cause of this issue was that the change was blocked by the OP's environment, Yandex Cloud.
The OP received the following feedback from Yandex Support:
The kube-proxy pod is on the host's network, so to prevent security problems it listens exclusively on the loopback address, and therefore the parameter will be reset.

Flush CoreDNS Cache on Kubernetes Cluster

How to flush CoreDNS Cache on kubernetes cluster?
I know it can be done by deleting the CoreDNS pods, but is there a proper way to do the cache flush?
#coollinuxoid's answer is not suitable for a production environment: it will cause temporary downtime because the commands terminate all pods at the same time. Instead, you should use the Kubernetes deployment's rolling-update mechanism by setting an environment variable, which avoids the downtime:
kubectl -n kube-system set env deployment.apps/coredns FOO="BAR"
The best way, as you said, would be to restart the coredns pods. This can be done easily by scaling the coredns deployment to 0 and then scaling it back to the desired number, as in the sample commands below:
kubectl scale deployment.apps/coredns -n kube-system --replicas=0
kubectl scale deployment.apps/coredns -n kube-system --replicas=2
Without downtime:
kubectl rollout restart deployment coredns -n kube-system
Thanks #Nick for the comment.

How to restart a failed pod in kubernetes deployment

I have 3 nodes in a Kubernetes cluster. I created a DaemonSet and deployed it on all 3 nodes. The DaemonSet created 3 pods, and they were running successfully. But for some reason, one of the pods failed.
How can I restart this pod without affecting the other pods in the DaemonSet, and without creating another DaemonSet deployment?
Thanks
kubectl delete pod <podname> will delete just that one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
There are other ways to achieve what you want:
Just use the rollout command:
kubectl rollout restart deployment mydeploy
You can set an environment variable, which will force your deployment's pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
You can scale your deployment to zero and then back to some positive value:
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe, so that the kubelet restarts the container whenever it fails the probe.
This is a great feature K8s offers out of the box: auto-healing.
Also look into the pod lifecycle docs.
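A minimal sketch of such a probe in a container spec (the /healthz path and port 8080 are assumptions; use whatever endpoint your container actually exposes):
livenessProbe:
  httpGet:
    path: /healthz        # assumed health endpoint
    port: 8080            # assumed container port
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3
With this in place, the kubelet restarts the container automatically after three consecutive failed probes, so no manual pod deletion is needed.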
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
The above command is quite useful when you want to restart one or more failed pods, and you don't need to care about the names of the failed pods.

Error while creating pods in Kubernetes

I have installed Kubernetes on an Ubuntu server using the instructions here. I am trying to create pods using kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080, as listed in the example. However, when I do kubectl get pod, the status of the pod is Pending. I then ran kubectl describe pod for debugging and I see the message:
FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts.
I then tried to delete this pod with kubectl delete pod hello-minikube-3383150820-1r4f7, but when I run kubectl get pod again I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.
The PodFitsHostPorts predicate is failing because you have something else on your nodes using port 8000. You might be able to find what it is by running kubectl describe svc.
kubectl run creates a deployment object (you can see it with kubectl describe deployments) which makes sure that you always keep the intended number of replicas of the pod running (in this case 1). When you delete the pod, the deployment controller automatically creates another for you. If you want to delete the deployment and the pods it keeps creating, you can run kubectl delete deployments hello-minikube.
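If the goal is just to get the example running, a sketch of cleaning up and retrying on a different host port (the choice of 8001 is an assumption; any port that is free on the node will do):
kubectl delete deployments hello-minikube
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8001 --port=8080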