I want to remove zk and kafka from my k8s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka1-mvzch 1/1 Running 1 25s
kafka2-m292k 0/1 CrashLoopBackOff 8 20m
zookeeper1-qhmnf 1/1 Running 0 20m
zookeeper2-t7r8w 1/1 Running 0 20m
$ kubectl delete pod kafka1-mvzch kafka2-m292k zookeeper1-qhmnf zookeeper2-t7r8w
pod "kafka1-mvzch" deleted
pod "kafka2-m292k" deleted
pod "zookeeper1-qhmnf" deleted
pod "zookeeper2-t7r8w" deleted
But when I run kubectl get pods, it still shows the pods.
And I have no services or deployments:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 7h1m
$ kubectl get deployment
No resources found in default namespace.
You are deleting the pods, and they are indeed deleted.
But some other construct re-creates pods to replace the (now deleted) ones.
In fact, the random-looking suffix in the pod names suggests that another controller is managing the pods.
Looking at the linked tutorial, you will notice that a ReplicationController is created. It ensures that the desired number of pods is always running.
If you want the pods gone, delete the ReplicationController; its pods will be deleted along with it.
You can use kubectl get pod <POD-NAME> -o jsonpath='{.metadata.ownerReferences}' to identify the owner object of a pod. The owner might be a Deployment, StatefulSet, ReplicationController, etc.
Looking at the medium.com guide you mentioned, I see that it creates ReplicationControllers.
You can clean up your namespace by running kubectl delete replicationcontroller --all.
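For example, to confirm what owns one of the kafka pods before deleting anything (the pod name here is taken from the listing above; adjust it for your cluster):

```shell
# Print the kind of the object that owns the pod; a controller-managed
# pod will show something like "ReplicationController" here
kubectl get pod kafka1-mvzch -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'

# If it prints ReplicationController, list them and delete them all;
# their pods are garbage-collected along with them
kubectl get replicationcontroller
kubectl delete replicationcontroller --all
```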
We have a data-visualization server hosted in Kubernetes pods. The dashboards in that data viz are displayed in the browsers of different monitors/terminals for near-real-time operational reporting. Sometimes the pods fail, and when they come alive again, the browser redirects to the Single Sign-On page instead of going to the dashboard the URL is originally configured to.
The servers are hosted in, I would presume, a ReplicaSet. There are two pods as far as I can tell.
I was granted privileges to use kubectl to solve this problem, but I am still quite new to the whole Kubernetes thing. Using kubectl, how do I simulate pod failure/restart for testing purposes? Since the pods are duplicated, shutting down one of them will only redirect the traffic to the other pod. How do I make both pods fail/restart at the same time? (I guess doing kubectl delete pod on both pods will do, but I want to make sure k8s will respawn the pods automatically, and not delete them forever.)
If I understand the use case correctly, you might want to use the kubectl scale command. It gives you the flexibility to set the replica count anywhere from zero to N with a single command; see the examples below. Also, if you are using a Deployment, you can simply run kubectl delete pod; the Deployment controller will spawn a new pod to satisfy the replica count.
kubectl scale deployment/<DEPLOYMENT-NAME> --replicas=<DESIRED-NUMBER-OF-REPLICA>
short example:
kubectl scale deployment/deployment-web --replicas=0
deployment.apps/deployment-web scaled
Long Example:
// create a deployment called deployment-web with two replicas
kubectl create deployment deployment-web --image=nginx --replicas 2
deployment.apps/deployment-web created
// verify that both replicas are up
kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
deployment-web 2/2 2 2 13s
// expose the deployment with a service [OPTIONAL-STEP, ONLY FOR EXPLANATION]
kubectl expose deployment deployment-web --port 80
service/deployment-web exposed
//verify that the service is created
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deployment-web ClusterIP 10.233.24.174 <none> 80/TCP 5s
// dump the list of end-points for that service, there would be one for each replica. Notice the two IPs in the 2nd column.
kubectl get ep
NAME ENDPOINTS AGE
deployment-web 10.233.111.6:80,10.233.115.9:80 12s
//scale down to 1 replica for the deployment
kubectl scale --current-replicas=2 --replicas=1 deployment/deployment-web
deployment.apps/deployment-web scaled
// Notice the endpoint is reduced from 2 to 1.
kubectl get ep
NAME ENDPOINTS AGE
deployment-web 10.233.115.9:80 43s
// also note that there is only one pod remaining
kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-web-64c769b44-qh2qf 1/1 Running 0 105s
// scale down to zero replica
kubectl scale --current-replicas=1 --replicas=0 deployment/deployment-web
deployment.apps/deployment-web scaled
// The endpoint list is empty
kubectl get ep
NAME ENDPOINTS AGE
deployment-web <none> 9m4s
//Also, both pods are gone
kubectl get pod
No resources found in default namespace.
// When you are done with testing, restore the replicas
kubectl scale --current-replicas=0 --replicas=2 deployment/deployment-web
deployment.apps/deployment-web scaled
//endpoints and pods are restored back
kubectl get ep
NAME ENDPOINTS AGE
deployment-web 10.233.111.8:80,10.233.115.11:80 10m
foo-svc 10.233.115.6:80 50m
kubernetes 192.168.22.9:6443 6d23h
kubectl get pod -l app=deployment-web
NAME READY STATUS RESTARTS AGE
deployment-web-64c769b44-b72k5 1/1 Running 0 8s
deployment-web-64c769b44-mt2dd 1/1 Running 0 8s
I have 2 pods running on default namespace as shown below
NAMESPACE NAME READY STATUS RESTARTS AGE
default alpaca-prod 1/1 Running 0 36m
default alpaca-test 1/1 Running 0 4m26s
kube-system coredns-78fcd69978-xd7jw 1/1 Running 0 23h
But when I try to get deployments I do not see any
kubectl get deployments
No resources found in default namespace.
Can someone explain this behavior?
I am running k8s on Minikube.
I think these are pods that were spawned without a Deployment, StatefulSet, or DaemonSet.
You can run a pod like this with a single command, e.g.:
kubectl run nginx-test --image=nginx -n default
Pods created via a DaemonSet usually end with -xxxxx
Pods created via a Deployment usually end with -xxxxxxxxxx-xxxxx
Pods created via a StatefulSet usually end with -0, -1, etc.
Pods created without a parent controller have exactly the name you specified, e.g. nginx-test, nginx, etc.
So my guess is that these are standalone Pod resources (the last option).
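One way to verify this guess (using the pod names from the question) is to check whether the pods have any ownerReferences; a standalone pod prints nothing:

```shell
# A controller-managed pod lists its owner's kind here; a bare pod prints an empty line
kubectl get pod alpaca-prod -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'
kubectl get pod alpaca-test -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'
```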
I'm trying to delete some old deployments/replicasets in my cluster, but when I run kubectl delete deployment,
it says the deployment is deleted and the pod from that deployment is Terminating; then, a few seconds later, the deployment is magically re-created and the pod comes back.
This is the same result for another replicaset I have.
What could be re-creating these deployments / replicasets and how can I stop it so I can permanently delete these deployments/rs?
Edit: Here's some output. This is on a kubernetes cluster in GKE btw:
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
quickstart-kb 1/1 1 1 41m
ubuntu 1/1 1 1 66d
kubectl get pods
NAME READY STATUS RESTARTS AGE
ubuntu-677fc9fd77-fgd7k 1/1 Running 0 19d
quickstart-kb-f9b65577f-4fxph 1/1 Running 0 40m
kubectl delete deployment quickstart-kb
deployment.extensions "quickstart-kb" deleted
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
quickstart-kb 0/1 1 0 7s
ubuntu 1/1 1 1 66d
kubectl get pods
NAME READY STATUS RESTARTS AGE
quickstart-kb-6cb6cf897d-qcjff 0/1 Running 0 11s
ubuntu-677fc9fd77-fgd7k 1/1 Running 0 19d
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
quickstart-kb 1/1 1 1 4m6s
ubuntu 1/1 1 1 66d
kubectl get pods
NAME READY STATUS RESTARTS AGE
quickstart-kb-6cb6cf897d-qcjff 1/1 Running 0 4m13s
ubuntu-677fc9fd77-fgd7k 1/1 Running 0 19d
I think your Deployment object was created by the controller of a custom resource (CRD).
When you created the custom resource, its controller created the Deployment object. So even if you delete the Deployment, the controller re-creates it.
Delete the custom resource itself to delete the Deployment and any other objects that were created along with it.
From the name, it looks like a Kibana custom resource:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
Use the following command to delete the Kibana object:
$ kubectl delete Kibana quickstart-kb
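If you are unsure which custom resources exist in your cluster, you can look for them first (the `kibana` resource kind here is an assumption based on the ECK operator's API group shown above):

```shell
# List installed CRDs to see which operators are active in the cluster
kubectl get crd

# List Kibana objects (assumes the ECK operator's "kibana" resource kind)
kubectl get kibana

# Then delete the object that owns the deployment
kubectl delete kibana quickstart-kb
```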
I would like to perform rolling back a deployment in my environment.
Command:
kubectl rollout undo deployment/foo
Steps performed:
create pods with the old configuration
delete the current pods
Is there a way to skip the last step? For example, a developer would like to check why the init command failed and debug it.
I didn't find information about that in the documentation.
Yes, it is possible. Before doing the rollout, first remove the labels (corresponding to the ReplicaSet controlling that pod) from the unhealthy pod. This way the pod no longer belongs to the deployment, and even if you do the rollout, it will still be there. Example:
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
sleeper 1/1 1 1 47h
$ kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
sleeper-d75b55fc9-87k5k 1/1 Running 0 5m46s pod-template-hash=d75b55fc9,run=sleeper
$ kubectl label pod sleeper-d75b55fc9-87k5k pod-template-hash- run-
pod/sleeper-d75b55fc9-87k5k labeled
$ kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
sleeper-d75b55fc9-87k5k 1/1 Running 0 6m34s <none>
sleeper-d75b55fc9-swkj9 1/1 Running 0 3s pod-template-hash=d75b55fc9,run=sleeper
So what happens here: we have a pod sleeper-d75b55fc9-87k5k that belongs to the sleeper deployment. We remove all labels from it; the deployment detects that the pod "has gone", so it creates a new one, sleeper-d75b55fc9-swkj9, but the old one is still there and ready for debugging. Only the pod sleeper-d75b55fc9-swkj9 will be affected by the rollout.
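One caveat: once you are done debugging, the de-labeled pod is no longer managed by anything, so you have to remove it yourself (the pod name is taken from the example above):

```shell
# The orphaned pod has no controller, so it must be deleted manually
kubectl delete pod sleeper-d75b55fc9-87k5k
```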
How do you hard force a pod to terminate?
We have tried every documented form of kubectl delete pod and they remain. kubectl reports that they've been deleted, but kubectl get pods tells a different story. All pods that they could've been using have been deleted as well as any pods that could be using them.
Is there any form of kubectl SERIOUSLY_DELETE this pod?
I've tried: kubectl delete pods --all --grace-period=0 --force -n monitoring
With no favorable result. I've also tried to delete them individually.
NAME READY STATUS RESTARTS AGE
es-master-962148878-7vxpp 1/1 Terminating 1 12d
es-master-962148878-h1r4k 1/1 Terminating 1 12d
es-master-962148878-rkg9g 1/1 Terminating 1 12d
Taken from kubectl delete --help:
kubectl delete pod foo --grace-period=0 --force
Note that if your pods are controlled by e.g. a Deployment, a new one will be created every time you delete one, so make sure that's not the symptom you're observing!
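If the pods stay in Terminating even after a force delete, a common cause is a finalizer that never completes. As a last resort you can clear the finalizers (this bypasses whatever cleanup the finalizer was supposed to do, so use it with care; the pod name is taken from the listing above):

```shell
# Remove finalizers from a stuck pod (last resort; skips the finalizer's cleanup)
kubectl patch pod es-master-962148878-7vxpp -n monitoring \
  -p '{"metadata":{"finalizers":null}}'
```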