Move or change a volume namespace - Kubernetes

We are re-organising our namespaces in Kubernetes. We want to move our Persistent Volume Claims, created by a StorageClass, from one namespace to another.
(Our backup tool doesn't help here.)

Option 1: use a backup tool
The easiest and safest option to migrate a PVC/PV to a new namespace is to use a backup tool (like Velero).
Option 2: no backup tool (by hand)
This is undocumented.
In this example we use the VMware storage provider, but it should work with any StorageClass.
Prepare
Make a Backup. Backup! Backup! Backup!!!
Well, if you do have a backup tool for Kubernetes (like Velero) you can restore directly into the target namespace; otherwise use kubectl cp as explained in How to copy files from Kubernetes pods to a local system.
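A minimal sketch of that kubectl cp file-level backup. The pod name my-app-0 and mount path /data are assumptions (substitute the pod that actually mounts the PVC); the commands are echoed so you can review them first, and you drop the echo to run them:

```shell
# Placeholders -- adjust to your cluster.
NAMESPACE1=my-old-namespace
NAMESPACE2=my-new-namespace
POD=my-app-0          # assumed: a pod that mounts the PVC
MOUNT_PATH=/data      # assumed: where the PVC is mounted in that pod

# kubectl cp addresses files as <namespace>/<pod>:<path>
SRC="$NAMESPACE1/$POD:$MOUNT_PATH"
echo "kubectl cp $SRC ./pvc-backup"                          # backup
echo "kubectl cp ./pvc-backup $NAMESPACE2/$POD:$MOUNT_PATH"  # restore later
```

Note that kubectl cp does not preserve ownership or special files the way a filesystem-level copy would, so for databases or permission-sensitive data prefer a proper backup tool.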
Let's set some environment variables and back up the existing PV and PVC resources:
NAMESPACE1=XXX
NAMESPACE2=XXX
PVC=mypvc
kubectl get pvc -n $NAMESPACE1 $PVC -o yaml | tee /tmp/pvc.yaml
PV=pvc-XXXXXXXXXXXXX-XXXXXXXXXXXX
kubectl get pv $PV -o yaml | tee /tmp/pv.yaml
Change ReclaimPolicy for PV
If your persistent volume (or storage provider) has persistentVolumeReclaimPolicy=Delete, make sure to change it to "Retain" to avoid data loss when removing the PVC below.
Run this:
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Then check:
kubectl describe pv "$PV" | grep -e Reclaim
Remove the PVC
Manually delete the PersistentVolumeClaim (you have a copy, right?).
kubectl delete pvc -n "$NAMESPACE1" "$PVC"
Modify the Persistent Volume (PV)
A PV is claimed by a namespace when it is first used by a PVC. Furthermore, the PV stays "attached" to that PVC by its uid:, not by its name.
Change the namespace recorded in the PV's claimRef. Temporarily use the PVC name to "lock" the PV for that PVC (rather than the PVC uid):
kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"namespace":"'$NAMESPACE2'","name":"'$PVC'","uid":null}}}'
Check what we have now:
kubectl get pv "$PV" -o yaml | grep -e Reclaim -e namespace -e uid: -e name: -e claimRef | grep -v " f:"
Create the new PVC
Create a PVC in the new namespace. Make sure to explicitly reference the PV to use (don't let the StorageClass provision a new volume). Typically, you can copy the original PVC YAML, but drop namespace:, selfLink:, and uid: from the metadata: section.
This command should work (it re-uses the previous PVC), but you can use your own kubectl apply command.
grep -v -e "uid:" -e "resourceVersion:" -e "namespace:" -e "selfLink:" /tmp/pvc.yaml | kubectl -n "$NAMESPACE2" apply -f -
Assign PVC to PV
At this point, the PV is bound to the former PVC's name (this may not work, and it is not the standard configuration). Running kubectl describe -n "$NAMESPACE2" pvc "$PVC" will complain with Status: Lost and/or Warning ClaimMisbound. So let's fix the problem:
Retrieve the new PVC's uid:
PVCUID=$( kubectl get -n "$NAMESPACE2" pvc "$PVC" -o custom-columns=UID:.metadata.uid --no-headers )
Then update the PV accordingly:
kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"uid":"'$PVCUID'","name":null}}}'
After a few seconds the PV should be Status: Bound.
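Instead of re-running describe by hand, kubectl 1.23+ can block until the phase flips; a sketch, printed for review rather than executed (remove the echo to run it):

```shell
PV=pvc-XXXXXXXXXXXXX-XXXXXXXXXXXX   # same variable as above

# Wait up to 60 seconds for the PV to report phase Bound
# (the jsonpath form of --for requires kubectl >= 1.23).
CMD="kubectl wait --for=jsonpath='{.status.phase}'=Bound pv/$PV --timeout=60s"
echo "$CMD"
```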
Restore PV ReclaimPolicy=Delete
(This step is optional; it is only needed to ensure the PV is deleted when the user deletes the PVC.)
Once the PV is in the Bound state again, you can restore the reclaim policy if you want to preserve the original behaviour (i.e. removing the PV when the PVC is removed):
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
Check:
kubectl get pv $PV -o yaml | grep -e Reclaim -e namespace
Voilà
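For reference, the whole by-hand procedure above can be condensed into one sketch. All values are placeholders, and the run helper only prints each command so the sequence can be reviewed; swap it for one that executes "$@" once you're confident:

```shell
#!/bin/sh
# Placeholders -- substitute your real namespaces, PVC name, and PV name.
NAMESPACE1=old-ns NAMESPACE2=new-ns PVC=mypvc PV=pvc-XXXX

run() { echo "+ $*"; }   # swap for: run() { "$@"; }  to actually execute

# 1. keep the data when the claim goes away
run kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# 2. drop the old claim
run kubectl delete pvc -n "$NAMESPACE1" "$PVC"
# 3. point the PV at the future claim by namespace and name
run kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"namespace":"'"$NAMESPACE2"'","name":"'"$PVC"'","uid":null}}}'
# 4. re-create the claim in the new namespace (cleaned copy of /tmp/pvc.yaml)
run kubectl -n "$NAMESPACE2" apply -f /tmp/pvc.yaml
# 5. re-bind by uid (substitute the uid retrieved from the new claim)
run kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"uid":"<new-pvc-uid>","name":null}}}'
```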

I migrated a PV that had storageClassName: nfs-client to another namespace in a different way.
The steps performed:
Change the PV reclaim policy to Retain (the default is Delete); this means that once the PVC is removed, the PV resource will not be removed:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Get the directory backing the PV and copy it to another location, preserving its owner, permissions and everything else (the -avR flags of the cp command are really important to achieve this):
dirpath=$(kubectl get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR ${dirpath} /tmp/pvc_backup
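Before deleting anything, it may be worth checking that the copy actually matches the source; a small sketch using diff, assuming $dirpath is still set from the previous step:

```shell
# Recursively compare source and backup; diff exits non-zero on any
# difference, so "backup verified" only prints when they match.
if [ -d "${dirpath}" ] && [ -d /tmp/pvc_backup ]; then
  diff -r "${dirpath}" /tmp/pvc_backup && echo "backup verified"
fi
```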
Once it has been copied, you can proceed to remove the previous pvc and pv:
kubectl delete pvc <pvc-name>
kubectl delete pv <pv-name>
Create the new PVC resource with your own YAML file path/pvc.yaml (for this example, this was my PVC YAML file):
kubectl -n <target-namespace> create -f path/pvc.yaml
Once it is created in the right namespace, copy the content of the backup directory into the new PVC (remember that all you need to do is create the NFS PVC; the PV is created automatically):
nfs_pvc_dir=$(kubectl -n <target-namespace> get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR /tmp/pvc_backup/* ${nfs_pvc_dir}/
Finally, bind your new PVC to a deployment or pod resource.
This was performed with microk8s on a public VPS with NFS storage:
https://microk8s.io/docs
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
Greetings :)

Related

How can I delete a Kubernetes PVC?

I tried to delete a PVC but I can't:
kubectl get --all-namespaces pvc
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-logging es-local-pvc1 Terminating es-local-pv1 450Gi RWO 21d
kubectl delete pvc es-local-pvc1
If you see any problem, most likely the PVC is protected from deletion:
finalizers:
- kubernetes.io/pvc-protection
You need to patch the PVC and set finalizers under metadata to null, using the patch below:
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
Then you should be able to delete the PVC.
First of all you should try
kubectl delete pvc es-local-pvc1 -n test-logging
If it doesn't help, then I absolutely agree with the solution provided by @PEkambaram.
Sometimes you can resolve this issue only by patching pv and pvc finalizers.
You can list finalizers by
kubectl describe pvc PVC_NAME | grep Finalizers
and change by
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
By the way, the same can happen with a PV as well, and you can do the same:
kubectl patch pv PV-NAME -p '{"metadata":{"finalizers":null}}'
The GitHub post PV is stuck at terminating after PVC is deleted can also help in situations when you need to patch a pod:
kubectl patch pvc db-pv-claim -p '{"metadata":{"finalizers":null}}'
kubectl patch pod db-74755f6698-8td72 -p '{"metadata":{"finalizers":null}}'

How to delete persistent volumes in Kubernetes

I am trying to delete persistent volumes on a Kubernetes cluster. I ran the following command:
kubectl delete pv pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2 pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2
However it showed:
persistentvolume "pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2" deleted
But the command did not exit, so I pressed CTRL+C to force-quit it. After a few minutes, I ran:
kubectl get pv
And the status is Terminating, but the volumes don't appear to be deleting.
How can I delete these persistent volumes?
It is not recommended to delete a PV manually; it should be handled by the cloud provisioner. If you need to remove a PV, just delete the pod bound to the claim, and then the PVC. After that, the cloud provisioner should remove the PV as well.
kubectl delete pvc --all
It can sometimes take a while, so be patient.
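Rather than watching by eye, a small polling loop can wait until the claims are actually gone; a sketch (the --no-headers output is empty once no PVCs remain, and if kubectl is missing or unreachable the loop exits immediately):

```shell
# Poll every 5 seconds until no PVCs remain in the current namespace.
while kubectl get pvc --no-headers 2>/dev/null | grep -q . ; do
  sleep 5
done
DONE_MSG="no PVCs remaining"
echo "$DONE_MSG"
```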
Delete all the pods using the PVC you want to delete, then delete the PVC (PersistentVolumeClaim) and the PV (PersistentVolume), in that order.
Something like below (in sequence):
kubectl delete pod --all / pod-name
kubectl delete pvc --all / pvc-name
kubectl delete pv --all / pv-name
I have created the diagram below to help explain this better.
The kubectl commands are mentioned in other answers in this thread; the same should work.
kubectl delete sts sts-name
kubectl delete pvc pvc-name
kubectl delete pv pv-name
Some more useful info
If you see something stuck in the Terminating state, it's because of guardrails set in place by k8s, referred to as 'finalizers'.
If your PV is stuck in the Terminating state after deletion, it's likely because you deleted the PV before deleting the PVC.
If your PVC is stuck in the Terminating state after deletion, it's likely because your pods are still running (simply delete the pods/statefulset in such cases).
If you wish to delete a resource stuck in the Terminating state, use the commands below to bypass the PVC/PV protection finalizers:
kubectl patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'
Here is the documentation on PVC retention policy.
Here is the documentation on PV reclaim policy.
PVs are cluster resources provisioned by an administrator, whereas PVCs are a user's request for storage and resources. I guess you still have the corresponding PVC deployed.
Delete the deployment. E.g.:
kubectl delete deployment mongo-db
List the Persistent Volume Claim. E.g.:
kubectl get pvc
Delete the corresponding PVC. E.g.:
kubectl delete pvc mongo-db

Unable to deploy MariaDB on Kubernetes using openstack-helm charts

I am trying to deploy OpenStack on Kubernetes using Helm charts. I see the error below when trying to deploy MariaDB: mariadb-server-0 looks for a PVC which is in the Lost state. I tried creating the PersistentVolume and assigning it, but the pod still looks for the lost PVC, as shown in the error below.
2018-10-05T17:05:04.087573+00:00 node2: kubelet[9897]: E1005 17:05:04.087449 9897 desired_state_of_world_populator.go:273] Error processing volume "mysql-data" for pod "mariadb-server-0_openstack(c259471b-c8c0-11e8-9636-441ea14dfc98)": error processing PVC "openstack"/"mysql-data-mariadb-server-0": PVC openstack/mysql-data-mariadb-server-0 has non-bound phase ("Lost") or empty pvc.Spec.VolumeName ("pvc-74e81ef0-bb97-11e8-9636-441ea14dfc98")
Is there a way to delete the old PVC entry from the cluster, so MariaDB doesn't look for it while deploying?
Thanks,
Ab
To delete a PVC, you can just use the typical kubectl commands.
See all the PVCs:
kubectl -n <namespace> get pvc
To delete a PVC:
kubectl -n <namespace> delete pvc <pvc-name-from-the-previous-command>
Similarly, I would check the PVs to see if any are dangling. Note that PVs are cluster-scoped, so no namespace flag is needed.
See all the PVs:
kubectl get pv
To delete a PV:
kubectl delete pv <pv-name-from-the-previous-command>

How to delete only unmounted PVCs and PVs?

We don't want to delete PVs and PVCs, as pods reuse them most of the time.
However, in the long term, we end up with many PVs' and PVCs' that are not used.
How to safely clean?
Not very elegant, but a bash way to delete Released PVs:
kubectl get pv | grep Released | awk '{print $1}' | while read vol; do kubectl delete pv "${vol}"; done
Looking through the current answers, it looks like most of them don't directly answer the question (I could be mistaken). A PVC that is Bound is not the same as one that is Mounted. The current answers should suffice to clean up unbound PVCs, but finding and cleaning up all unmounted PVCs seems unanswered.
Unfortunately, it looks like -o=go-template=... doesn't have a variable for the Mounted By: field shown in kubectl describe pvc.
Here's what I've come up with after some hacking around:
To list all PVC's in a cluster (mounted and not mounted) you can do this: kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$"
The -A will return every PVC in the cluster in every namespace. We then filter down to show just the Name, Namespace and Mounted By fields.
The best I could come up with to then get the names and namespaces of all unmounted PVC's is this:
kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$"
Actually deleting the PVCs is somewhat difficult because we need to know the name of each PVC as well as its namespace. We use cut, paste and xargs to do this:
kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$" | cut -f2 -d: | paste -d " " - - | xargs -n2 bash -c 'kubectl -n ${1} delete pvc ${0}'
cut removes Name: and Namespace: since they just get in the way
paste puts the name of the PVC and its namespace on the same line
xargs -n2 bash makes the PVC name ${0} and the namespace ${1}
I admit that I have a feeling that this isn't the best way to do this but it was the only obvious way I could come up with (on the CLI) to do this.
After running this your volumes will go from Bound to Unbound and the other answers in this thread have good ideas on how to clean those up.
Also, keep in mind that some of the volume controllers don't actually delete your data when the volumes are deleted in Kubernetes. You might still need to clean that up in whichever system you are using.
For example, in the NFS controller the data gets renamed with an archived- prefix and on the NFS side you can run rm -rf /persistentvolumes/archived-*. For AWS EBS you might still need to delete the EBS volumes if they are detached from any instance.
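For the AWS EBS case, the AWS CLI can list volumes in the "available" (detached) state as deletion candidates; a sketch assuming a configured AWS CLI. The command is echoed here so it can be inspected first, and you should review the resulting list before deleting anything:

```shell
# List detached ("available") EBS volumes -- likely orphans left behind
# after the Kubernetes PVs were deleted.
CMD='aws ec2 describe-volumes --filters Name=status,Values=available --query "Volumes[].VolumeId" --output text'
echo "$CMD"
```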
I hope this helps!
If you'd like to remove all the Unbound PVs and PVCs, you can do this:
First delete the PVCs:
$ kubectl -n <namespace> get pvc | tail -n +2 | grep -v Bound | \
awk '{print $1}' | xargs -I{} kubectl -n <namespace> delete pvc {}
Then just delete the PVs (note that PVs are cluster-scoped, so no namespace is needed):
$ kubectl get pv | tail -n +2 | grep -v Bound | \
awk '{print $1}' | xargs -I{} kubectl delete pv {}
All the previous answers are valid and interesting. Here is another simple way to delete persistent volumes.
You should first delete the associated PersistentVolumeClaim, but in some cases the PersistentVolume cannot be deleted automatically (e.g. with a "Retain" reclaim policy).
Here is a safe syntax for deleting PersistentVolumes with the Released status (unused and unmounted); it only prints the delete command, so you can review it before running it yourself:
kubectl get --no-headers persistentvolumes | awk '$5=="Released" { print $1 }' | xargs echo "kubectl delete persistentvolumes"
As long as you keep the PVC, your PV will stay in the Bound state. So you can just delete unused PVCs with:
kubectl -n <namespace> get pvc -o name | grep myname | xargs kubectl -n <namespace> delete
Yeah, first you need to delete the unused PVCs.
With kubectl get pvc --all-namespaces you can list all of them in all namespaces, along with the corresponding PVs.
In order to delete unused PVs you may need to change their ReclaimPolicy, because if it's set to Retain the PVs won't be deleted but will hang in the "Released" status. To do that you need to patch the PV (for some reason it's not possible to edit it manually with kubectl edit):
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

How to delete all resources from Kubernetes one time?

Include:
Daemon Sets
Deployments
Jobs
Pods
Replica Sets
Replication Controllers
Stateful Sets
Services
...
If there is a replication controller, some deployments will be regenerated when deleted. Is there a way to bring Kubernetes back to its initial state?
Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:
kubectl delete all --all
all refers to all resource types such as pods, deployments, services, etc. --all is used to delete every object of that resource type instead of specifying it using its name or label.
To delete everything from a certain namespace you use the -n flag:
kubectl delete all --all -n {namespace}
Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:
kubectl delete namespace {namespace}
kubectl create namespace {namespace}
Note (thanks @Marcus): all in Kubernetes does not refer to every Kubernetes object; admin-level resources (limits, quota, policy, authorization rules) are excluded. If you really want to make sure everything is deleted, it's better to delete the namespace and re-create it. Another way to do that is to use kubectl api-resources to get all resource types, as seen here:
kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
A Kubernetes Namespace would be the perfect option for you. You can easily create a namespace resource:
kubectl create -f custom-namespace.yaml
where custom-namespace.yaml contains:
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Now you can deploy all of the other resources (Deployment, ReplicaSet, Services etc.) in that custom namespace.
If you want to delete all of these resources, you just need to delete the custom namespace. By deleting the custom namespace, all of the other resources are deleted with it. Without it, a ReplicaSet might create new pods when existing pods are deleted.
To work with a namespace, you need to add the --namespace flag to k8s commands.
For example:
kubectl create -f deployment.yaml --namespace=custom-namespace
you can list all the pods in custom-namespace.
kubectl get pods --namespace=custom-namespace
You can also delete Kubernetes resources with the help of labels attached to them. For example, suppose the label below is attached to all resources:
metadata:
  name: label-demo
  labels:
    env: dev
    app: nginx
now just execute the below commands
deleting resources using app label
$ kubectl delete pods,rs,deploy,svc,cm,ing -l app=nginx
deleting resources using the environment label
$ kubectl delete pods,rs,deploy,svc,cm,ing -l env=dev
You can also try kubectl delete all --all --all-namespaces
all refers to all resource types
--all refers to all resources, including uninitialized ones
--all-namespaces means across all namespaces
First back up your namespace resources, then delete all resources found with the get all command:
kubectl get all --namespace={your-namespace} -o yaml > {your-namespace}.yaml
kubectl delete -f {your-namespace}.yaml
Nevertheless, some resources may still exist in your cluster.
Check with
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace {your-namespace}
If you really want to COMPLETELY delete your namespace, go ahead with:
kubectl delete namespace {your-namespace}
(tested with Client v1.23.1 and Server v1.22.3)
In case you want to delete all K8s resources in the cluster, the easiest way is to delete the entire namespace:
kubectl delete ns <name-space>
kubectl delete deploy,service,job,statefulset,pdb,networkpolicy,prometheusrule,cm,secret,ds -n namespace -l label
kubectl delete all --all
deletes all the resources in the current namespace.
After deleting all resources, k8s will relaunch the default services for the cluster.