Kubernetes: How to move a pod to another node

I have a cluster with 2 nodes using local storage. I want to move a pod and its volume from node 1 to node 2 because the disk on node 1 is getting full. Thanks.

Either use a VolumeSnapshot to snapshot the volume and restore it into a new PersistentVolumeClaim on node 2:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support
Or use Velero, an open-source tool to safely back up and restore Kubernetes resources and persistent volumes:
https://velero.io/
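For illustration, a minimal sketch of the snapshot-and-restore route, assuming your CSI driver supports snapshots; the names data-pvc, csi-snapclass and local-storage are placeholders, not taken from your cluster:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    persistentVolumeClaimName: data-pvc    # the PVC currently on node 1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc-restored
spec:
  storageClassName: local-storage          # placeholder storage class
  dataSource:
    name: data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Once the restored PVC exists, point the pod at it and use a nodeSelector or node affinity so it gets scheduled on node 2, where the new local volume lives.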

Related

Can Kubernetes volumes be used for Deployments? If so, what happens if each pod is on a different host?

Can we use Kubernetes volumes for Deployments? If yes, does that mean multiple pods sharing the same volume?
If that is possible, what happens when the pods of the Deployment end up on different host machines?
Especially when using Amazon EBS, where an EBS volume cannot be attached to multiple hosts.
Yes, you can use a persistent volume for Deployments.
Such a volume will be mounted at your desired location in all the pods.
If you use EBS block storage, all your pods will need to be scheduled on the same node where the volume is attached. This may not work if you have many replicas.
If you want your pods to be spread across different nodes, you will have to use network file storage, such as EFS, GlusterFS or Portworx, with the ReadWriteMany access mode.
EBS will give you the best performance, with the aforementioned single-node limitation.
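As a rough sketch of the ReadWriteMany setup (the StorageClass name efs-sc and the other names are illustrative, not from the question): the same claim is mounted by every replica, so the pods can land on different nodes.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany              # requires a driver that supports it (e.g. EFS)
  storageClassName: efs-sc       # placeholder RWX-capable class
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        volumeMounts:
        - name: shared
          mountPath: /usr/share/nginx/html
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-data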

Access Kubernetes Persistent Volume data

Is there any way to access Google Cloud Kubernetes persistent volume data without using a pod? I cannot start the pod due to data corruption in the persistent volume. Is there any command-line tool or any other way?
If you have concerns about running the pod with your specific application, you can instead run an Ubuntu pod, attach it to the PVC, and access the data from there.
Another option is to clone the PV and PVC and perform your testing on the newly created ones, while the old PV and PVC serve as a backup.
For cloning the PV and PVC you can also use this tool: https://velero.io/
You can also attach the PVC to the pod in read-only mode and try accessing the data.
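A minimal sketch of such an inspection pod, assuming the claim is named my-pvc (a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - name: inspector
    image: ubuntu:22.04
    command: ["sleep", "infinity"]   # keep the container alive for interactive access
    volumeMounts:
    - name: data
      mountPath: /mnt/data
      readOnly: true                 # mount read-only to avoid touching the corrupted data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc              # placeholder claim name
      readOnly: true

Then browse the data with:
kubectl exec -it pvc-inspector -- ls -la /mnt/data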
PersistentVolume resources are cluster resources that exist independently of Pods. This means that the disk and data represented by a PersistentVolume continue to exist as the cluster changes and as Pods are deleted and recreated.
It is possible to save data from a PersistentVolume that is in Status: Terminating with the reclaim policy left at the default (Delete). The PersistentVolume will not actually be removed as long as a pod, deployment, or, to be more specific, a PersistentVolumeClaim is still using it.
The steps we took to remedy our broken state are as follows:
The first thing you want to do is to create a snapshot of your PersistentVolumes.
In the GCP console, go to Compute Engine -> Disks, find the disk backing your volume, and create a snapshot of it. To find which disk backs the PersistentVolume, use:
kubectl get pv | grep pvc-name
Use the snapshot to create a disk:
gcloud compute disks create name-of-disk --size=10 --source-snapshot=name-of-snapshot --type=pd-standard --zone=your-zone
At this point, stop the services using the volume and delete the volume and volume claim.
Re-create the volume manually with the data from the disk, and update your volume claim to target that specific volume.
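As a hedged sketch of that last step: the PersistentVolume is created directly on top of the disk built from the snapshot, and the claim is bound to it by name (restored-pv, restored-pvc and the 10Gi size are placeholders matching the example disk above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the disk even if the claim is deleted
  gcePersistentDisk:
    pdName: name-of-disk                  # the disk created from the snapshot
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: restored-pv                 # bind the claim to the pre-created PV
  storageClassName: ""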
For more information, refer to the links below.
Accessing file shares from Google Kubernetes Engine clusters.
Configure a Pod to Use a PersistentVolume for Storage

Syncing daemonset

Let's say I have a DaemonSet running in my Kubernetes cluster, and each pod created by the DaemonSet creates and writes to a directory on the node where it is running. Is there a way to automatically sync those directories with one on the masters, given that I have a multi-master cluster?
You can use a persistent volume with the ReadWriteMany access mode so that all the DaemonSet pods share the same set of data.
A simple example is here: http://snippi.com/s/qpge73r
Edit: as @matt commented, only a few volume drivers support ReadWriteMany; see https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
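A minimal sketch of that idea, assuming a ReadWriteMany-capable StorageClass named nfs-client (a placeholder); every DaemonSet pod mounts the same claim and writes into it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client     # placeholder RWX-capable class
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-writer
spec:
  selector:
    matchLabels:
      app: node-writer
  template:
    metadata:
      labels:
        app: node-writer
    spec:
      containers:
      - name: writer
        image: busybox:1.36
        # each pod appends to a per-node file on the shared volume
        command: ["sh", "-c", "while true; do date >> /shared/$HOSTNAME; sleep 60; done"]
        volumeMounts:
        - name: shared
          mountPath: /shared
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: shared-logs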

StatefulSet behavior when a node dies/gets restarted and has a PersistentVolume

Suppose I have a resource foo which is a statefulset with 3 replicas. Each makes a persistent volume claim.
One of the foo pods (foo-1) dies, and a new one starts in its place. Will foo-1 be bound to the same persistent volume that the previous foo-1 had before it died? Will the number of persistent volume claims stay the same or grow?
This edge case doesn't seem to be in the documentation on StatefulSets.
Yes, it will be bound to the same volume. The PVC creates a disk on GCP and attaches it as a secondary disk to the node on which the pod is running.
Upon deletion of an individual pod, Kubernetes re-creates the pod on the same node it was running on. If that is not possible (say the node no longer exists), the pod will be created on another node, and the secondary disk will be moved to that node. The number of PersistentVolumeClaims stays the same: the replacement foo-1 reuses the claim created from the StatefulSet's volumeClaimTemplate (e.g. data-foo-1), so it binds to the same PersistentVolume.
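For reference, a minimal StatefulSet sketch (names are illustrative): the volumeClaimTemplates block is what produces the per-replica claims data-foo-0, data-foo-1 and data-foo-2, and those claims outlive individual pods.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
spec:
  serviceName: foo
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        image: nginx:1.25        # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/foo
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

If foo-1 is deleted, the new foo-1 claims data-foo-1 again, so no extra PVCs are created.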

Backup and restore logical volumes for GlusterFS and Kubernetes

I am working with Kubernetes and I dynamically provision volumes on top of a Gluster cluster with Heketi through Kubernetes PVCs.
Since my data could become corrupted or lost, I need to know the best way to back up and restore logical volumes (LVs) for a running Gluster cluster on top of Kubernetes.
The best approach to backing up GlusterFS is to use geo-replication.
https://docs.gluster.org/en/v3/Administrator%20Guide/Geo%20Replication/
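A rough outline of setting up a geo-replication session with the Gluster CLI, assuming a primary volume gv0 and a backup volume gv0-backup on a host backup-host (placeholder names; the exact syntax varies by Gluster version, see the guide above):

# generate the shared pem keys used by the session (run once on a primary node)
gluster system:: execute gsec_create
# create, start and monitor the replication session
gluster volume geo-replication gv0 backup-host::gv0-backup create push-pem
gluster volume geo-replication gv0 backup-host::gv0-backup start
gluster volume geo-replication gv0 backup-host::gv0-backup status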