What can be done to back up Kubernetes PVCs regularly on GCP and AWS?
GCP has VolumeSnapshot but I'm not sure how to schedule it, like every hour or every day.
I also tried Gemini by Fairwinds, but I get the following error on GCP. I installed the charts as described in the README and I can't find anyone else encountering the same error.
error: unable to recognize "backup-test.yml": no matches for kind "SnapshotGroup" in version "gemini.fairwinds.com/v1beta1"
You can use Velero, which gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.
Unfortunately, Velero only lets you back up and restore PVs, not PVCs.
Velero's restic integration backs up volume data by accessing the filesystem of the node on which the pod is running. For this reason, the restic integration can only back up volumes that are mounted by a pod, not directly from the PVC.
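If you do go with Velero, scheduled backups are expressed with its Schedule resource, which takes a cron expression. A minimal sketch, assuming Velero is installed in the velero namespace and my-app is the namespace you want backed up:

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-volume-backup        # illustrative name
  namespace: velero                # assumes Velero's default install namespace
spec:
  schedule: "0 3 * * *"            # daily at 03:00; use "0 * * * *" for hourly
  template:
    includedNamespaces:
      - my-app                     # hypothetical namespace to back up
    snapshotVolumes: true          # snapshot PVs via the configured volume snapshotter
    ttl: 720h0m0s                  # keep each backup for 30 days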
Might wanna look into stash.run
Agree with #hdhruna - Velero is really the most popular tool for doing that task.
However, you can also try miracle2k/k8s-snapshots
Automatic Volume Snapshots on Kubernetes
How is it useful? Simply add an annotation to your PersistentVolume or PersistentVolumeClaim resources, and let this tool create and expire snapshots according to your specifications.
Supported Environments:
Google Compute Engine disks,
AWS EBS disks.
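As a rough illustration of what that looks like, here is a PVC carrying the snapshot annotation. The PVC name, size, and delta values are placeholders, and if I recall the project README correctly the annotation key is backup.kubernetes.io/deltas; double-check the README for how the deltas are interpreted:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data                              # placeholder PVC name
  annotations:
    # ISO 8601 durations: roughly "snapshot hourly, thin out after 2 days, keep 30 days"
    backup.kubernetes.io/deltas: PT1H P2D P30D
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                          # placeholder size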
I evaluated multiple solutions, including k8s CSI VolumeSnapshots, https://stash.run/, https://github.com/miracle2k/k8s-snapshots and GCP disk snapshots.
The best one, in my opinion, is the Kubernetes-native implementation of snapshots via a CSI driver, provided your cluster version is >= 1.17. This allows snapshotting volumes while they are in use, and it doesn't require a ReadOnlyMany or ReadWriteMany volume the way Stash does.
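For reference, a one-off CSI snapshot of a PVC looks roughly like this; the snapshot class name is an assumption (use whatever VolumeSnapshotClass your CSI driver installs), and on clusters around 1.17 the API group is still v1beta1:

apiVersion: snapshot.storage.k8s.io/v1beta1   # snapshot.storage.k8s.io/v1 on newer clusters
kind: VolumeSnapshot
metadata:
  name: my-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-gce-pd-snapshot-class   # assumed class name from your CSI driver
  source:
    persistentVolumeClaimName: my-pvc                   # the PVC to snapshot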
I also chose Gemini by Fairwinds to automate snapshot creation, deletion, and restoration, and it works like a charm.
I believe your problem is caused by the Gemini CRD missing from your cluster. Verify that the CRD is installed correctly, and also that the installed version is indeed the version you are trying to use.
My installation went flawlessly using their install guide with Helm.
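Once the CRD is in place, a SnapshotGroup along the lines of the example in the Gemini README should apply cleanly. The claim name and the intervals below are just placeholders:

apiVersion: gemini.fairwinds.com/v1beta1
kind: SnapshotGroup
metadata:
  name: backup-test
spec:
  persistentVolumeClaim:
    claimName: my-pvc        # the PVC to snapshot
  schedule:
    - every: hour            # how often to snapshot
      keep: 24               # how many of these to retain
    - every: day
      keep: 7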
I'm looking to create a local backup of a PV/PVC in K8s and then restore it (not using any CSI driver).
I have tried VolumeSnapshot in k8s, but it creates an in-cluster backup, and what I need is a local copy that I can archive and move around. I also found some third-party tools like Stash/Velero/Kasten, but I'm not sure whether any of them fits my goal.
Can someone point me to the correct document to look at, or if that's all possible? Thanks!
It looks like the third-party tools you mentioned are the best fit, especially Velero, because as per this post:
Velero is a backup tool not only focused on volume backups; it also allows you to back up your whole cluster (pods, services, volumes, …) with a sorting system by labels or Kubernetes objects.
Stash is a tool only focused on volume backups.
To get more information on using Velero and its newest features you can visit the official documentation site and this website.
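If the end goal is a copy that lives outside the cluster, one approach worth noting is Velero's restic integration writing to an S3-compatible store you control (for example a local MinIO you can archive from). This is only a sketch, assuming a Velero version that supports defaultVolumesToRestic and an already-configured backup storage location:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-archive           # illustrative name
  namespace: velero              # assumes Velero runs in the "velero" namespace
spec:
  includedNamespaces:
    - my-app                     # hypothetical namespace whose PVCs you want copied
  defaultVolumesToRestic: true   # copy volume data with restic rather than in-cluster snapshots
  ttl: 720h0m0s                  # keep the backup for 30 days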
I want to set up Elasticsearch on a Kubernetes cluster using Helm. I can set up Elasticsearch on the cluster without persistence. I am using the Helm chart below.
helm install --name elasticsearch incubator/elasticsearch \
--set master.persistence.enabled=false \
--set data.persistence.enabled=false \
--set image.tag=6.4.2 \
--namespace logging
However, I am not able to use it with persistence. Moreover, I am confused because I am using neither cloud-based storage (AWS, GCE) nor NFS; I am using local VM storage.
I added a disk to my VM environment and formatted it as ext4. Now I am trying to use it as a persistent disk for my Elasticsearch deployment.
I have tried lots of approaches, but nothing has worked so far. I'm happy to provide any data you need; I just need a solution that works.
I don't believe this chart will support local storage.
Looking at the volumeClaimTemplate, such as in master-statefulset.yaml, shows that it's missing key parameters for a local volume setup (such as path, nodeAffinity, volumeBindingMode) described here. If you are using a cloud deployment, just use a cloud volume claim. If you have deployed a cluster on-prem or just onto your own computer, then you should fork the chart and adjust the volume claims to meet the requirements for local storage.
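For completeness, the kind of objects you would have to add for local storage look roughly like this. The node name, mount path, and size are placeholders, and you need one such PV per Elasticsearch data directory/node; the chart's volume claim template would then have to reference the local-storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner    # static local volumes, no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-0
spec:
  capacity:
    storage: 30Gi                            # placeholder size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/es-data                 # the ext4 mount point on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node-1                  # the node that owns the disk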
Either way, in your future posts you should include relevant logs. With Kubernetes errors it's helpful to see output from all parts of the stack, such as: Kubernetes control plane logs, object events (like the output from describing the volume claim), Helm logs, Elasticsearch pod logs failing to discover a volume, and so on.
TL;DR
My pods' mounted Azure file shares are (inconsistently) being deleted by either Kubernetes or Helm when I delete a deployment.
Explanation
I've recently transitioned to using Helm for deploying Kubernetes objects on my Azure Kubernetes Cluster via the DevOps release pipeline.
I've started to see some unexpected behaviour in relation to the Azure File Shares that I mount to my Pods (as Persistent Volumes with associated Persistent Volume Claims and a Storage Class) as part of the deployment.
Whilst I've been finalising my deployment, I've been pushing it out via the Azure DevOps release pipeline using the built-in Helm tasks, which have been working fine. When I've wanted to fix or improve the process, I've then either manually deleted the objects on the Kubernetes Dashboard (UI) or used PowerShell (command line) to delete the deployment.
For example:
helm delete myapp-prod-73
helm del --purge myapp-prod-73
Not every time, but more and more frequently, I'm seeing the underlying Azure File Shares also being deleted as I work through this process. There's very little around the web on this, but I've also seen an article outlining similar issues over at: https://winterdom.com/2018/07/26/kubernetes-azureFile-dynamic-volumes-deleting.
Has anyone in the community come across this issue?
Credit goes to https://twitter.com/tomasrestrepo (the author of the article I mentioned above) for pointing me in the right direction.
The behaviour here was a consequence of having the reclaim policy on the Storage Class and Persistent Volume set to "Delete". When I switched over to Helm, I began following its commands to delete/purge the releases as I was testing. What I didn't realise was that deleting the release would also mean that Helm/K8s would reach out and delete the underlying volume (in this case, an Azure File Share). This is documented over at: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete
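If you want the share to survive a helm delete, create the storage class (and therefore the dynamically provisioned PVs) with a Retain reclaim policy instead; a minimal sketch, assuming the built-in azure-file provisioner:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-retain
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain            # keep the underlying file share when the PV is released
parameters:
  skuName: Standard_LRS          # illustrative SKU

For statically created PVs, the equivalent is setting persistentVolumeReclaimPolicy: Retain on the PV itself.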
I'll leave this Q&A here for anyone else who misses this subtlety in the way Storage Classes, Persistent Volumes (PVs), and the underlying storage operate under K8s/Helm.
Note: I think this issue was made slightly more obscure by the fact that I was manually creating the Azure File Share (through the Azure Portal) and trying to mount it as a static volume (as per https://learn.microsoft.com/en-us/azure/aks/azure-files-volume) within my Helm chart, yet the underlying volume wasn't always deleted immediately when the release was deleted (sometimes an hour later?).
Is there any configuration snapshot mechanism on kubernetes?
The goal is to take a snapshot of all deployments/services/config-maps etc and apply them to a kubernetes cluster.
The steps that should be taken:
Take a configuration snapshot
Delete the cluster
Create a new cluster
Apply the configuration snapshot to the new cluster
New cluster works like the old one
These are the 3 that spring to mind, with kubed being, at least according to their readme, the closest to your stated goals:
Ark
kubed
kube-backup
I run Ark in my cluster, but (to my discredit) I have not yet attempted to do a D.R. drill using it; I only checked that it is, in fact, making config backups.
The state of Kubernetes is stored in etcd, so backing up the etcd data and restoring it will restore the cluster. However, this does not back up any information stored in persistent volumes; that needs to be handled separately.
The backup operator provided by CoreOS is a good option:
https://coreos.com/operators/etcd/docs/latest/user/walkthrough/backup-operator.html
Taking backups with etcdctl:
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
https://github.com/coreos/etcd/blob/master/etcdctl/README.md
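As a rough sketch of automating the etcdctl approach with a CronJob (the image tag, certificate paths, and host paths are assumptions based on a kubeadm-style control plane; you would also need to pin the job to a control-plane node with a nodeSelector and tolerations, omitted here):

apiVersion: batch/v1                          # batch/v1beta1 on older clusters
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 2 * * *"                       # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true                   # reach etcd on the node's loopback
          restartPolicy: OnFailure
          containers:
            - name: etcd-backup
              image: registry.k8s.io/etcd:3.5.9-0   # assumed image that ships etcdctl
              command:
                - /bin/sh
                - -c
                - >
                  ETCDCTL_API=3 etcdctl
                  --endpoints=https://127.0.0.1:2379
                  --cacert=/etc/kubernetes/pki/etcd/ca.crt
                  --cert=/etc/kubernetes/pki/etcd/server.crt
                  --key=/etc/kubernetes/pki/etcd/server.key
                  snapshot save /backup/etcd-snapshot-$(date +%Y%m%d).db
              volumeMounts:
                - name: etcd-certs
                  mountPath: /etc/kubernetes/pki/etcd
                  readOnly: true
                - name: backup
                  mountPath: /backup
          volumes:
            - name: etcd-certs
              hostPath:
                path: /etc/kubernetes/pki/etcd  # kubeadm's default cert location (assumption)
            - name: backup
              hostPath:
                path: /var/backups/etcd         # where snapshots land on the node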
Heptio Ark can back up config and volumes as well:
https://github.com/heptio/ark
If you want a UI-based option, these would be good:
https://github.com/kaptaind/kaptaind
https://github.com/mhausenblas/reshifter
I am successfully using Kubernetes 1.4 persistent volume support with iSCSI/NFS PVs and PVCs in my containers. However, the storage has to be provisioned up front, with the capacity specified both at PV creation and when claiming the storage.
My requirement is to just provide storage to the cluster (without having to specify its capacity) and let users/developers claim storage based on their requirements. In other words, I need dynamic provisioning using a StorageClass: just declare the storage with its details and let developers claim it based on their needs.
However, I'm confused about how to set up dynamic volume provisioning for iSCSI and NFS using a storage class, and I can't find exact steps to follow. As per the documentation, I need to use an external volume plugin for both of these types, and one is already available as part of the incubator project https://github.com/kubernetes-incubator/external-storage/. But I don't understand how to load/run that external provisioner (I need to run it as a container itself, I guess?) and then write a storage class with the details of the iSCSI/NFS storage.
Can somebody who has already done this guide me or provide pointers?
Thanks in advance,
picku
The project you pointed to is specific to iSCSI targets running targetd. You basically download the YAML files from https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd/kubernetes, modify them with your storage provider's parameters, and deploy the pods using kubectl create. In your PersistentVolumeClaims you then specify a storage class, and the storage class in turn specifies the iSCSI provisioner. There are more steps, but that's the gist of it.
See this link for more detailed instructions https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd
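To give a feel for the moving parts, you end up with a StorageClass that points at the provisioner plus PVCs that reference that class. The provisioner name and parameters below mirror the repo's example as far as I remember it and are only illustrative, so check the current README for the exact keys:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd-vg-targetd
provisioner: iscsi-targetd                      # must match the name the provisioner pod registers
parameters:
  targetPortal: 192.168.99.100:3260             # illustrative targetd portal address
  iqn: iqn.2019-01.org.example.storage:targetd  # illustrative target IQN
  volumeGroup: vg-targetd                       # LVM volume group managed by targetd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-iscsi-claim
spec:
  storageClassName: iscsi-targetd-vg-targetd    # developers only need to reference the class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi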
The OpenEBS community has folks running it this way, AFAIK. For example, there is a blog post explaining one approach supporting WordPress: https://blog.openebs.io/setting-up-persistent-volumes-in-rwx-mode-using-openebs-142632244cb2