I am new to Kubernetes, so pardon me if you find my question noobish.
I have created a DaemonSet from a file called testDaemon.yml, which also defines a Service called testService. Both were created, but now I have made some minor changes and I need to re-create them.
I tried deleting the services and daemonset:
kubectl delete ds testDaemon
kubectl delete svc testService
but both of them get recreated every time I delete them, and I get an error that services "testService" already exists when I run kubectl create -f testDaemon.yml again.
What should I do to remove the DaemonSet completely or update the same with the new template?
You can delete it using
kubectl delete -f testDaemon.yml
It will delete both the DaemonSet and the Service.
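For completeness, a minimal sketch of the whole re-create cycle, assuming testDaemon.yml defines both objects:
kubectl delete -f testDaemon.yml   # removes every object defined in the file
kubectl create -f testDaemon.yml   # recreates them from the edited manifest
For future minor changes, kubectl apply -f testDaemon.yml updates the existing objects in place, without the delete/create cycle.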
I added a pod through the Kubernetes Dashboard. I used Create new resource and created the pod from input.
I then tried to delete it with:
kubectl delete -n default pod pod-name-0
It deletes it, but it gets redeployed. As I understand, I should delete its deployment first. So to list deployments, I used
kubectl get deployments
But it's not there. How do I permanently delete a pod?
The pods are maintained by a ReplicationController, and they are automatically replaced if they fail, are deleted, or are terminated. You should check:
kubectl describe pods POD_NAME
kubectl describe replicationcontrollers/REPLICATION_CONTROLLER_NAME
Alternatively you can check the ReplicaSet with kubectl get rs
Afterwards you can run kubectl edit rs REPLICASET_NAME and change the replica count up or down as you desire.
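For example, a short sketch with a hypothetical ReplicaSet name (find the real one with kubectl get rs):
kubectl scale rs my-replicaset --replicas=0   # scale its pods down to zero
kubectl delete rs my-replicaset               # then remove the ReplicaSet itself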
Nice explanation regarding ReplicaSet vs ReplicationController
I was setting up an nginx cluster on Google Cloud, and I entered a wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and eventually after running kubectl get pods, it showed ImagePullBackOff as the status for the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried doing kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and I still couldn't run the right kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise Kubernetes will recreate the pod every time it goes down or is deleted.
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or
Edit the manifest file, correct the image name, and run kubectl apply -f on the YAML file.
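If you'd rather not open an editor, a one-line sketch, assuming the container inside the deployment is also named nginx (check the actual name with kubectl describe deploy nginx):
kubectl set image deployment/nginx nginx=nginx:1.17.10   # point the container at the right image
kubectl rollout status deployment/nginx                  # watch the replacement pods come up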
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. Since no image exists with that name, it gets an error. And when you delete your pods, they are created again with the same image name, because your deployment still exists. For this reason you need to delete the deployment rather than the pods; otherwise, the deployment will automatically recreate any deleted pod.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
For you the command will be kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it will automatically be the default one.
You can delete the deployment with this command:
kubectl delete deploy nginx
I had a requirement to change the pod/deployment name. Now when we deploy, we have 2 deployments and 3 pods each, with the old and the new name.
So far I have been deleting the old deployments manually.
Do I need to manually delete the old deployment and pods, or is there a better method?
To delete deployment use
$ kubectl delete deploy/old_deployment_name
This will delete the deployment, including its pods and ReplicaSets, if you had them.
And don't make this mistake a second time :) @Kamol is right: the best way of managing resources is to change the config file (e.g. your deployment) and re-apply it with
kubectl apply -f deployment.yaml
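To clear the current situation, a minimal sketch, assuming the old object is still named old_deployment_name as above. Renaming in the manifest creates a new Deployment rather than renaming the old one, so the leftover has to be deleted once by hand:
kubectl delete deploy/old_deployment_name   # remove the leftover once; apply won't do it for you
kubectl apply -f deployment.yaml            # from now on, changes under the same name update in place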
I think we can also remove everything, if it was installed with the apply command
kubectl apply -f deployment.yaml
you can delete it with
kubectl delete -f deployment.yaml
I am starting to explore running Docker containers with Kubernetes. I did the following:
docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-3476088249-w66jr 1/1 Running 0 16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via some ReplicaSet or Deployment, or anything else that creates replicas, then find that and delete it first.
kubectl get all
This will list all the resources that have been created in your k8s cluster. To get information about resources created in your namespace, run kubectl get all --namespace=<your_namespace>
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a field "Controlled By", or some owner field, from which you can identify which resource created it.
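For instance, a quick sketch to pull out just that field from the describe output:
kubectl describe pod web-3476088249-w66jr | grep "Controlled By"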
When you do kubectl run ..., that's a deployment you create, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend you create a namespace for testing when doing this kind of thing. You just do kubectl create ns test, then you do all your tests in this namespace (by adding -n test to your commands). Once you have finished, you just do kubectl delete ns test, and you are done.
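A sketch of that workflow:
kubectl create ns test                  # scratch namespace for experiments
kubectl run web --image=nginx -n test   # everything you create lands in "test"
kubectl get all -n test                 # inspect what got created
kubectl delete ns test                  # one command cleans it all up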
If you defined your object as Pod then
kubectl delete pod <--all | pod name>
will remove all of the generated Pods. But if you wrapped your Pod in a Deployment object, then running the command above will only trigger a re-creation of them.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that this will not remove a Service object related to the deleted Deployment; a Service is not owned by the Deployment and has to be deleted separately with kubectl delete service <name>.
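A short sketch of the difference, run in the default namespace:
kubectl delete pod --all          # pods owned by a controller come right back
kubectl delete deployment --all   # removing the owner keeps the pods gone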
I'm unable to delete the kubernetes pod, it keeps recreating it.
There's no service or deployment associated with the pod. There's a label on the pod though; is that the root cause?
If I edit the label out with kubectl edit pod podname, it removes the label from the pod but creates a new pod with the same label at the same time. ¿?
Pods can be created by ReplicationControllers or ReplicaSets. The latter might be created by a Deployment. The described behavior strongly indicates that the Pod is managed by one of these two.
You can check for these with these commands:
kubectl get rs
kubectl get rc
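You can also read the owner straight off the Pod itself; a sketch assuming your pod is named podname (swap in the real name):
kubectl get pod podname -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
Whatever this prints (e.g. ReplicaSet/some-rs) is the object you need to delete, or scale down, to stop the re-creation.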