Helm admission webhook is constantly creating a Pod in "ContainerCreating" status

I am using Kubernetes version 1.19.
I tried to install a second nginx-ingress controller on my server (I already have one for Linux, so I tried to install one for Windows as well):
helm install nginx-ingress-win ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=windows \
  --set tcp.9000="default/frontarena-ads-win-test:9000"
This failed with "Error: failed pre-install: timed out waiting for the condition".
So I ran helm uninstall to remove that chart:
helm uninstall nginx-ingress-win
release "nginx-ingress-win" uninstalled
But the validation webhook Pod keeps getting created:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-win-ingress-nginx-admission-create-f2qcx 0/1 ContainerCreating 0 41m
I delete the pod with kubectl delete pod, but it gets created again and again.
I also tried
kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-win-ingress-nginx-admission
but I get a "not found" message for all combinations. How can I resolve this and get rid of it?
Thank you!!!

If this Pod is managed by a Deployment, StatefulSet, DaemonSet, etc., it will be automatically recreated every time you delete it, so trying to remove a Pod directly doesn't make much sense in most situations.
If you want to check what controls this Pod, run:
kubectl describe pod nginx-ingress-win-ingress-nginx-admission-create-f2qcx | grep Controlled
You would probably see some ReplicaSet, which is also managed by a Deployment or another object. Suppose I want to check what I should delete to get rid of my nginx-deployment-574b87c764-kjpf6 Pod. I can do this as follows:
$ kubectl describe pod nginx-deployment-574b87c764-kjpf6 | grep -i controlled
Controlled By: ReplicaSet/nginx-deployment-574b87c764
Then I need to run kubectl describe again on the ReplicaSet we found:
$ kubectl describe rs nginx-deployment-574b87c764 | grep -i controlled
Controlled By: Deployment/nginx-deployment
Finally we can see that it is managed by a Deployment named nginx-deployment and this is the resource we need to delete to get rid of our nginx-deployment-574b87c764-kjpf6 Pod.
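If you prefer a one-liner, the same information can be read from the Pod's ownerReferences; a minimal sketch, using the pod name from the question:
kubectl get pod nginx-ingress-win-ingress-nginx-admission-create-f2qcx -o jsonpath='{.metadata.ownerReferences[*].kind}{"/"}{.metadata.ownerReferences[*].name}{"\n"}'
For an admission-create pod like this one, the owner is most likely a Job created by the chart's hooks, so deleting that leftover Job should stop the recreation.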

Related

How to reset K3s cluster pods

I have a k3s cluster with following pods:
kube-system pod/calico-node-xxxx
kube-system pod/calico-kube-controllers-xxxxxx
kube-system pod/metrics-server-xxxxx
kube-system pod/local-path-provisioner-xxxxx
kube-system pod/coredns-xxxxx
How can I reset the pods (stop and start them again), either with a command (kubectl maybe) or a script?
To reset a pod, you can just delete it. If it's managed by a Deployment (the pods in your question should be), it will be recreated automatically.
kubectl delete pod <pod-name> <pod2-name> ... -n <namespace>
If the pods you want to reset have a common label, you can filter them with the --selector flag:
kubectl delete pods --selector=<label-name>=<label-value> -n <namespace>
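For example, for the kube-system pods listed in the question, something along these lines should work; the label keys and values below are the usual upstream defaults and may differ on your cluster:
kubectl delete pods -n kube-system -l k8s-app=kube-dns
kubectl delete pods -n kube-system -l k8s-app=metrics-server
kubectl delete pods -n kube-system -l k8s-app=calico-node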
However, if you changed the deployments somehow, you will need to apply the unmodified manifest.
kubectl apply -f <yaml-file>
Warning: this will reset your whole cluster and delete all running data.
This is not exactly what was asked, but it is the quickest fix and takes only a minute.
Just uninstall k3s by running the command below:
sudo /usr/local/bin/k3s-uninstall.sh
Then install a fresh cluster with the command below:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable=traefik" sh -
Then export the KUBECONFIG variable with the command below:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
It may also complain about access to the k3s config file, so run:
sudo chmod 444 /etc/rancher/k3s/k3s.yaml

Kubernetes pod crashLoopBackOff, need to remove a pod

I have installed Prometheus using its Helm chart, so I got 4 deployments listed:
prometheus-alertmanager
prometheus-server
prometheus-pushgateway
prometheus-kube-state-metrics
All pods of these deployments are running accordingly.
By mistake I restarted one deployment using this command:
kubectl rollout restart deployment prometheus-alertmanager
Now a new pod is being created and keeps crashing; if I delete the deployment, the previous pod will also be deleted. So what can I do about that CrashLoopBackOff pod?
Screenshot of kubectl output
You can simply delete that pod with the kubectl delete pod <pod_name> command or attempt to delete all pod in crashLoopBackOff status with:
kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
Make sure that the corresponding deployment is set to 1 replica (or any other chosen number). If you delete a pod(s) of that deployment, it will create a new one while keeping the desired replica count.
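For example, assuming the Deployment name from the question (adjust the name and namespace to your setup):
kubectl get deployment prometheus-alertmanager
kubectl scale deployment prometheus-alertmanager --replicas=1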
These two pods (one running and the other in CrashLoopBackOff) belong to different Deployments, as they're suffixed by different hashes, i.e. pod1-abc-123 and pod2-abc-456 belong to the same Deployment template, whereas pod1-abc-123 and pod2-def-566 belong to different Deployments.
A Deployment creates a ReplicaSet, so make sure you also delete the corresponding old ReplicaSet: run kubectl get rs | grep 99dd and delete that one, similar to the prometheus-server one.
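In other words, something along these lines, where the exact ReplicaSet name has to be taken from the kubectl get rs output:
kubectl get rs | grep 99dd
kubectl delete rs <old-replicaset-name>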

using kubectl delete command to remove core-dns pod blocked / No activity

I found my coredns pod throwing this error: Readiness probe failed: Get http://172.30.224.7:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers). I deleted the pod using this command:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
but the command keeps waiting and there is no response. How can I see the progress of the deletion? This is the output:
[root@ops001 ~]# kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
pod "coredns-89764d78c-mbcbz" deleted
and the terminal hangs or stays blocked. When I check via the Kubernetes dashboard browser UI, the pod still exists. How can I force delete it, or fix it the right way?
You are deleting a pod which is monitored by the Deployment controller. That's why when you delete one of the pods, the controller creates another to make sure the number of pods equals the replica count. If you really want to delete coredns [not recommended], delete the Deployment instead of the pods.
$ kubectl delete deployment coredns -n kube-system
Answering another part of your question:
but the command keeps waiting and there is no response. How can I see the
progress of the deletion? This is the output:
[root@ops001 ~]# kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
pod "coredns-89764d78c-mbcbz" deleted
and the terminal stays blocked...
When you're deleting a Pod and you want to see what's going on under the hood, you can additionally provide the -v flag and specify the desired verbosity level, e.g.:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system -v 8
If there is some issue with the deletion of a specific Pod, this should tell you the details.
I totally agree with @P Ekambaram's comment:
if coredns is not started. you need to check logs and find out why it
is not getting started – P Ekambaram
You can always delete the whole coredns Deployment and re-deploy it again, but generally you shouldn't do that. Looking at the Pod logs:
kubectl logs coredns-89764d78c-mbcbz -n kube-system
should also tell you some details explaining why it doesn't work properly. I would say that deleting the whole coredns Deployment is a last resort command.
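Regarding the "how to force delete it?" part of the question: if the pod really is stuck in deletion, you can usually remove it from the API with a forced, zero-grace-period delete. Use this with care, since it does not wait for the kubelet to confirm the containers are actually gone:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system --grace-period=0 --force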

How to kill pods on Kubernetes local setup

I am starting to explore running Docker containers with Kubernetes. I did the following:
Docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-3476088249-w66jr 1/1 Running 0 16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via some ReplicaSet, Deployment, or anything else that creates replicas, then find that resource and delete it first.
kubectl get all
This will list the most common resources that have been created in your k8s cluster. To get information about resources created in a specific namespace, run kubectl get all --namespace=<your_namespace>.
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a "Controlled By" field (or a similar owner field) that you can use to identify which resource created it.
When you do kubectl run ..., that's a deployment you create, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. Just run kubectl create ns test, then do all your tests in this namespace (by adding -n test to your commands). Once you have finished, just run kubectl delete ns test, and you are done.
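A minimal sketch of that workflow, using a throwaway nginx pod as the example workload:
kubectl create ns test
kubectl run web --image=nginx -n test
kubectl get all -n test
kubectl delete ns test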
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove the generated Pod(s). But if you wrapped your Pod in a Deployment object, running the command above will only trigger their re-creation.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that this will not remove a Service object related to the deleted Deployment; if one was created for it, delete the Service separately.
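If the workload was created with kubectl run NAME (as in the question), another convenient way to clean up both objects is by label, since kubectl run applies a run=NAME label by default; a sketch, assuming that default label was not overridden:
kubectl delete deployment,service -l run=web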

How to list Kubernetes recently deleted pods?

Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)?
I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller, and service as the old one.
Commands like
kubectl get pods
kubectl get pod <pod-name>
work only with current pods (live or stopped).
How could I get more details about old pods? I would like to see:
when they were created
which environment variables they had when created
why and when they were stopped
As of today, kubectl get pods -a is deprecated, and as a result you cannot get deleted pods.
What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the ttl for kubernetes events - by running:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
You can then investigate further issues within your logging pipeline if you have one in place.
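For a bit more context than just the names, you can also filter and sort the events themselves; a sketch, still subject to the same event TTL:
kubectl get events -A --field-selector involvedObject.kind=Pod --sort-by=.lastTimestamp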
As far as I know, you cannot get the Pod details once the Pod is deleted. May I ask what the use case is?
Example:
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/false
you will have a Pod with status terminated:error
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/true
you will have a Pod with status terminated:Completed
if a container in a Pod restarts: the Pod will be alive and you can get the logs of previous container (only the previous container) using
kubectl logs --container <container name> --previous=true <pod name>
If you are doing an upgrade of your app and you are creating Pods using Deployments: when you update the Deployment (say, with a new image), the old Pod will be terminated and a new Pod will be created. You can get the Pod details from the Deployment's YAML; if you want details of the previous Pod, you have to look at the "spec" section of the previous Deployment revision's YAML (see the sketch below).
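If the Deployment keeps revision history, you can read an old Pod template back without digging up the old YAML yourself; a sketch, where the revision number is just an example:
kubectl rollout history deployment/<deployment-name>
kubectl rollout history deployment/<deployment-name> --revision=2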
You can try kubectl logs --previous to list the logs of a previously stopped pod
http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
You may also want to check out these debugging tips
http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/
There is a way to find out why pods were deleted and who deleted them.
The only way to find this out is to set the TTL for Kubernetes events to be greater than the default 1h and search through the events:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
If your container has previously crashed, you can access the previous container’s crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
There is this flag:
-a, --show-all=false: When printing, show all resources (default hide terminated pods.)
But this may not help in all cases of old pods.
kubectl get pods -a
which will give you the list of running pods and the terminated pods, in case you are searching for this.
If you want to see the previously deleted pods and you are trying to fetch them, start from the command line:
kubectl get pods
which gives you the details of all current pods (every service has one or more pods, and each pod has a unique IP address).
Here you can check the lifecycle of pods and the phases a pod goes through:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
and you can see the previous pod's logs by typing:
kubectl logs --previous <pod-name>
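If you only want to check which of those lifecycle phases a still-existing pod is in, a quick way (the pod name is a placeholder) is:
kubectl get pod <pod-name> -o jsonpath='{.status.phase}{"\n"}'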