I have an example deployment running on a Kubernetes cluster that also exposes a service and has a persistent volume bound by a persistent volume claim.
I would expect that running:
kubectl delete deployment 'deployment_name'
would delete everything, but after running the above the service and storage still exist, and I still have to manually delete the service and the persistent volume for the persistent volume claim to be released.
Isn't there a single command to remove everything cleanly?
Thank you.
If you are creating the deployment, service and PV in 3 separate YAML files, you will have to remove them one by one.
However, if you have all 3 of them in the same YAML file, you can delete all of them at once by running:
kubectl delete -f file.yaml
If you have defined the deployment, PV, PVC and service in a single file, say file.yaml, then you can delete all of them with a single command:
kubectl delete -f file.yaml
This will delete all the objects defined in that YAML file.
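For illustration, a combined file.yaml of that kind might look roughly like this; the names, image, and storage size are placeholders, and the PVC here relies on dynamic provisioning for its PV:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.17.10
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-app-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
With everything in one file, kubectl apply -f file.yaml creates all of the objects and kubectl delete -f file.yaml removes them together.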
A pod can be created by a Deployment, ReplicaSet, or DaemonSet. If I am updating a pod's container specs, is it OK for me to simply modify the YAML file that created the pod? Would it be erroneous once I have done that?
Brief Question:
Is kubectl apply -f xxx.yml the silver bullet for all pod updates?
...if I am updating a pod's container specs, is it OK for me to simply modify the yaml file that created the pod?
Since the pod spec is part of the controller spec (e.g. Deployment, DaemonSet), to update the container spec you naturally start with the controller spec. Also, a running pod is largely immutable; there isn't much you can change directly unless you do a replace, which is what the controller is already doing.
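As a rough sketch (the names and image are only illustrative), the container spec sits inside the controller's pod template, so that is the part you edit and re-apply:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:            # pod template managed by the controller
    metadata:
      labels:
        app: my-app
    spec:              # this is spec.template.spec, i.e. the pod spec
      containers:
      - name: my-app
        image: nginx:1.17.10   # change the container spec here, then re-apply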
You should not make changes to the pods directly, but update the spec.template.spec section of the deployment used to create the pods.
The reason for this is that the deployment is the controller that manages the replicasets and therefore the pods that are created for your application. That means that if you apply changes to the pod manifest directly and something like a pod rescheduling/restart happens, the changes made to the pod will be lost, because the replicaset will recreate the pod according to its own specification and not the specification of the last running pod.
You are safe to use kubectl apply to apply changes to existing resources, but if you are unsure, you can always extract the current state of the deployment from Kubernetes and pipe that output into a YAML file to create a backup:
kubectl get deploy/<name> --namespace <namespace> -o yaml > deploy.yaml
Another option is to use the internal rollback mechanism of Kubernetes to restore a previous revision of your deployment. See https://learnk8s.io/kubernetes-rollbacks for more info on that.
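As a concrete sketch of that workflow (the deployment and container names are placeholders):
kubectl set image deployment/my-app my-app=nginx:1.19.0
# or edit spec.template.spec in the manifest and re-apply it
kubectl apply -f deployment.yaml
# watch the rollout, and roll back to the previous revision if needed
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app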
I was setting up an nginx cluster on Google Cloud, and I entered a wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and eventually, after running kubectl get pods, it showed ImagePullBackOff as the status for the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and it still didn't let me run the right 'kubectl create deploy' command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise Kubernetes will recreate the pod every time it is deleted.
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or
Edit the manifest file, correct the image name, and do a kubectl apply -f <yaml_file>.
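If you prefer to fix the image from the command line, something like the following should work; the * wildcard updates every container in the deployment, so you don't need to know the generated container name:
kubectl set image deployment/nginx '*=nginx:1.17.10'
kubectl rollout status deployment/nginx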
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. Since no image exists with that name, it gets an error. And when you delete your pods, they are created again with the same image name because your deployment still exists. For this reason you need to delete the deployment rather than the pods; otherwise, the deployment will automatically recreate the deleted pods.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
For you the command will be kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it will automatically be the default namespace.
You can delete the deployment with this command:
kubectl delete deploy nginx
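Putting it together for the commands in the question, the recovery would be roughly:
kubectl delete deploy nginx
kubectl create deploy nginx --image=nginx:1.17.10
kubectl get pods   # the new pod should reach Running once the image is pulled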
Occasionally I need to perform a rolling replace of all Pods in my StatefulSet such that all PVs are also recreated from scratch. The reason is to get rid of all underlying hard drives that use old versions of the encryption key. This operation should not be confused with regular rolling upgrades, for which I still want volumes to survive Pod terminations. The best routine I have figured out so far is the following:
1. Delete the PV.
2. Delete the PVC.
3. Delete the Pod.
4. Wait until all deletions complete.
5. Manually recreate the PVC deleted in step 2.
6. Wait for the new Pod to finish streaming data from other Pods in the StatefulSet.
7. Repeat from step 1 for the next Pod.
I'm not happy about step 5. I wish the StatefulSet recreated the PVC for me, but unfortunately it does not. I have to do it myself; otherwise Pod creation fails with the following error:
Warning FailedScheduling 3s (x15 over 15m) default-scheduler persistentvolumeclaim "foo-bar-0" not found
Is there a better way to do that?
I just recently had to do this. The following worked for me:
# Delete the PVC
$ kubectl delete pvc <pvc_name>
# Delete the underlying statefulset WITHOUT deleting the pods
$ kubectl delete statefulset <statefulset_name> --cascade=false
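# (on kubectl v1.20 and newer, --cascade=false is deprecated in favour of --cascade=orphan)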
# Delete the pod with the PVC you don't want
$ kubectl delete pod <pod_name>
# Apply the statefulset manifest to re-create the StatefulSet,
# which will also recreate the deleted pod with a new PVC
$ kubectl apply -f <statefulset_yaml>
This is described in https://github.com/kubernetes/kubernetes/issues/89910. The workaround proposed there, of deleting the new Pod which is stuck pending, works and the second time it gets replaced a new PVC is created. It was marked as a duplicate of https://github.com/kubernetes/kubernetes/issues/74374, and reported as potentially fixed in 1.20.
It seems like you're using the "persistent" volume in the wrong way. It's designed to keep the data between roll-outs, not to delete it. There are other ways to renew the keys. One is to use a k8s Secret or ConfigMap to mount the key into the Pod; then you just need to recreate the Secret during a rolling update.
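A minimal sketch of that idea, assuming the key is read from a file mounted from a Secret; all names, keys, and the image are illustrative:
apiVersion: v1
kind: Secret
metadata:
  name: encryption-key
type: Opaque
stringData:
  encryption.key: replace-with-the-new-key-material
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-key
spec:
  containers:
  - name: app
    image: nginx:1.17.10
    volumeMounts:
    - name: key
      mountPath: /etc/keys
      readOnly: true
  volumes:
  - name: key
    secret:
      secretName: encryption-key
Rotating the key then becomes a matter of updating the Secret and doing a normal rolling restart, with the PVs left untouched.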
I am trying to delete persistent volumes on a Kubernetes cluster. I ran the following command:
kubectl delete pv pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2 pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2
However it showed:
persistentvolume "pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2" deleted
But the command did not exit, so I pressed CONTROL+C to force it to quit. After a few minutes, I ran:
kubectl get pv
And the status is Terminating, but the volumes don't appear to be deleting.
How can I delete these persistent volumes?
It is not recommended to delete a PV manually; that should be handled by the cloud provisioner. If you need to remove a PV, just delete the pod bound to the claim and then the PVC. After that the cloud provisioner should remove the PV as well.
kubectl delete pvc --all
It can sometimes take a while, so be patient.
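For a single pod and claim that sequence would look roughly like this (names are placeholders):
kubectl delete pod <pod_name>
kubectl delete pvc <pvc_name>
# with a Delete reclaim policy, the provisioner should then remove the bound PV
kubectl get pv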
Delete all the pods that are using the PVC you want to delete, then delete the PVC (PersistentVolumeClaim) and PV (PersistentVolume), in that sequence.
Something like below (in sequence):
kubectl delete pod --all    # or: kubectl delete pod <pod-name>
kubectl delete pvc --all    # or: kubectl delete pvc <pvc-name>
kubectl delete pv --all     # or: kubectl delete pv <pv-name>
The kubectl commands mentioned by the other answers in this thread should work the same way:
kubectl delete sts sts-name
kubectl delete pvc pvc-name
kubectl delete pv pv-name
Some more useful info
If you see something stuck in the Terminating state, it's because of guardrails set in place by k8s. These are referred to as 'finalizers'.
If your PV is stuck in the Terminating state after deletion, it is likely because you deleted the PV before deleting the PVC.
If your PVC is stuck in the Terminating state after deletion, it is likely because your pods are still running. (Simply delete the pods/StatefulSet in such cases.)
If you wish to delete a resource stuck in the Terminating state, use the commands below to bypass the PVC/PV protection finalizers.
kubectl patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'
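If you want to see which finalizer is blocking the deletion before removing it, you can inspect the object first (names are placeholders):
kubectl get pvc pvc_name -o jsonpath='{.metadata.finalizers}'
kubectl get pv pv_name -o jsonpath='{.metadata.finalizers}'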
Here is the documentation on PVC retention policy.
Here is the documentation on PV reclaim policy.
PVs are cluster resources provisioned by an administrator, whereas PVCs are a user's request for storage and resources. I guess you still have the corresponding PVC deployed.
Delete the deployment. E.g.:
kubectl delete deployment mongo-db
List the Persistent Volume Claim. E.g.:
kubectl get pvc
Delete the corresponding PVC. E.g.:
kubectl delete pvc mongo-db
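Once the PVC is gone, the PV it was bound to is released or deleted, depending on its reclaim policy, which you can confirm with:
kubectl get pv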
I am new to Kubernetes, so pardon me if you find my question noobish.
I have created a DaemonSet from a file called testDaemon.yml, which also defines a service called testService. Both were created, but now I have made some minor changes and I need to re-create them.
I tried deleting the services and daemonset:
kubectl delete ds testDaemon
kubectl delete svc testService
but both of them get recreated every time I delete them, and I am getting an error that services "testService" already exists when I run kubectl create -f testDaemon.yml again.
What should I do to remove the DaemonSet completely or update the same with the new template?
You can delete it using
kubectl delete -f testDaemon.yml
It will delete both the DaemonSet and the service.
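So the full update cycle would be roughly (assuming testDaemon.yml defines both objects):
kubectl delete -f testDaemon.yml    # removes the DaemonSet and the service
kubectl create -f testDaemon.yml    # recreates both from the updated template
# or use kubectl apply -f testDaemon.yml to update them in place without deleting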