Minikube: Remove coredns - kubernetes

I wonder if it's possible to remove coredns from a Kubernetes cluster managed by minikube.
Context:
I self-manage coredns with modified Kubernetes manifests.
In order to do so, I need to remove the preinstalled coredns resources.
I do so by calling kubectl -n kube-system delete all -l k8s-app=kube-dns.
However, if I restart minikube (minikube stop && minikube start), the previously deleted coredns resources get recreated.
I look forward to your responses.
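For reference, a minimal sketch of the cleanup described above, with a verification step (the label selector is the one from the question):
kubectl -n kube-system delete all -l k8s-app=kube-dns
kubectl -n kube-system get all -l k8s-app=kube-dns   # should report "No resources found" once deleted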

Related

How to restart coredns pod without downtime in kubernetes

I have a coredns pod running in the kube-system namespace. I need to restart the coredns pod without downtime. I am aware that we can delete the coredns pods using the command below, and new coredns pods will spin up automatically.
kubectl delete pods -n kube-system -l k8s-app=kube-dns
But this causes downtime. So I am looking for a way to restart the coredns pods without downtime.
Use case: I have set the TTL to 1 hour, i.e. cache 3600. I have an external DNS server, and I forward requests to it using the forward plugin. Whenever I need to pick up recent changes in the external DNS entries before the TTL expires, I think the coredns pods need to be restarted. Is there any other way to achieve this? If a restart is the only way, how can I do it without downtime? It would be really helpful if someone could help me with this. Thanks in advance!
Normally, the command kubectl get deployment coredns --namespace kube-system --output jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}' will return 1, which means that for a deployment of 2 pods (a typical coredns setup), pods are replaced one at a time, leaving the other one serving requests. In this case, you can run kubectl rollout restart deployment coredns --namespace kube-system to restart without downtime; no explicit delete or increase of the coredns pod count is required.
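For illustration, a minimal sketch of that flow, assuming the default coredns Deployment name and namespace:
# Check the rolling-update setting (typically maxUnavailable: 1)
kubectl get deployment coredns --namespace kube-system --output jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}'
# Trigger a rolling restart and wait for it to complete
kubectl rollout restart deployment coredns --namespace kube-system
kubectl rollout status deployment coredns --namespace kube-system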

Flush CoreDNS Cache on Kubernetes Cluster

How to flush CoreDNS Cache on kubernetes cluster?
I know it can be done by deleting the CoreDNS pods, but is there a proper way to do the cache flush?
#coollinuxoid's answer is not suitable for a production environment; it causes temporary downtime because the commands terminate all pods at the same time. Instead, you should use the Kubernetes Deployment's rolling-update mechanism by setting an environment variable to avoid the downtime, with the command:
kubectl -n kube-system set env deployment.apps/coredns FOO="BAR"
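As a sketch (the variable name FOO is just a placeholder), you can watch the rollout and later remove the dummy variable again with the trailing-dash form of set env:
kubectl -n kube-system set env deployment.apps/coredns FOO="BAR"
kubectl -n kube-system rollout status deployment.apps/coredns
# Remove the placeholder variable again (this triggers one more rolling restart)
kubectl -n kube-system set env deployment.apps/coredns FOO-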
The best way, as you said, would be to restart the coredns pods. This can be done easily by scaling the coredns deployment to 0 and then scaling it back to the desired number.
Like in the sample commands below:
kubectl scale deployment.apps/coredns -n kube-system --replicas=0
kubectl scale deployment.apps/coredns -n kube-system --replicas=2
Without downtime:
kubectl rollout restart deployment coredns -n kube-system
Thanks #Nick for the comment.

How to kill pods on Kubernetes local setup

I am starting to explore running Docker containers with Kubernetes. I did the following:
docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
web-3476088249-w66jr   1/1       Running   0          16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via a ReplicaSet, a Deployment, or anything else that creates replicas, then find that resource and delete it first.
kubectl get all
This will list all the resources that have been created in your k8s cluster. To get the resources created in a specific namespace, run kubectl get all --namespace=<your_namespace>.
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a "Controlled By" field, or a similar owner field, which identifies the resource that created the pod.
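As an alternative sketch (pod name taken from the question), the owner can also be read directly from the pod's ownerReferences:
kubectl get pod web-3476088249-w66jr -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'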
When you do kubectl run ..., that's a deployment you create, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. You just do kubectl create ns test, then you do all your tests in this namespace (by adding -n test). Once you have finished, you just do kubectl delete ns test, and you are done.
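A minimal sketch of that workflow (the namespace name test is just an example):
kubectl create ns test
kubectl run web --image=nginx -n test
kubectl get all -n test
# When finished, delete the namespace and everything in it
kubectl delete ns test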
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove the generated Pod(s). But if your Pod is wrapped in a Deployment object, running the command above will only trigger their re-creation.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that this does not remove a Service object related to the deleted Deployment; if you exposed the Deployment via a Service, delete the Service separately.
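For illustration, a short sketch for the web deployment from the question (the Service line only applies if one was exposed, e.g. via kubectl expose):
kubectl delete deployment web
kubectl delete service web    # only if a Service named "web" exists
kubectl get pods              # the web-... pod should no longer be recreated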

Error while creating pods in Kubernetes

I have installed Kubernetes in Ubuntu server using instructions here. I am trying to create pods using kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080 as listed in the example. However, when I do kubectl get pod I get the status of the container as pending. I further did kubectl describe pod for debugging and I see the message:
FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts.
I then tried to delete this pod with kubectl delete pod hello-minikube-3383150820-1r4f7, but when I run kubectl get pod again I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.
The PodFitsHostPorts predicate is failing because you have something else on your nodes using port 8000. You might be able to find what it is by running kubectl describe svc.
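As a sketch of one way to look for the conflicting hostPort (these are standard pod spec fields):
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}'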
kubectl run creates a deployment object (you can see it with kubectl describe deployments) which makes sure that you always keep the intended number of replicas of the pod running (in this case 1). When you delete the pod, the deployment controller automatically creates another for you. If you want to delete the deployment and the pods it keeps creating, you can run kubectl delete deployments hello-minikube.
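A short sketch of that cleanup, using the deployment name from the question:
kubectl get deployments
kubectl delete deployments hello-minikube
kubectl get pods    # no new hello-minikube-... pods should appear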

How to restart kube-proxy in Kubernetes 1.2 (GKE)

As of Kubernetes 1.2, kube-proxy is now a pod running in the kube-system namespace.
The old init script /etc/init.d/kube-proxy has been removed.
Aside from simply resetting the GCE instance, is there a good way to restart kube-proxy?
I just added an annotation to change the proxy mode, and I need to restart kube-proxy for my change to take effect.
The kube-proxy is run as an addon pod, meaning the Kubelet will automatically restart it if it goes away. This means you can restart the kube-proxy pod by simply deleting it:
$ kubectl delete pod --namespace=kube-system kube-proxy-${NODE_NAME}
Where ${NODE_NAME} is the node you want to restart the proxy on (this assumes a default configuration; otherwise kubectl get pods --namespace=kube-system should include the list of kube-proxy pods).
If the restarted kube-proxy is missing your annotation change, you may need to update the manifest file, usually found in /etc/kubernetes/manifests on the node.
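For illustration, a minimal sketch of that restart, with <your-node-name> as a placeholder:
NODE_NAME=<your-node-name>
kubectl get pods --namespace=kube-system | grep kube-proxy
kubectl delete pod --namespace=kube-system kube-proxy-${NODE_NAME}
# The kubelet should recreate the pod shortly; verify with:
kubectl get pods --namespace=kube-system | grep kube-proxy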