Kasten K10 does not support backup for Ceph RGW storage provisioner - kubernetes

As the title says (and also here https://docs.kasten.io/latest/restrictions.html).
My company uses the latest K10 (v5.0.2) as the backup tool for our OpenShift cluster.
We are required to use an S3-compatible storage provisioner.
We moved from MinIO to Ceph because of some issues with MinIO (excessive memory usage, MinIO pod handling, ...), only to find out that Ceph RGW is not supported by K10, and this seems to be why our backups fail: the Kasten console shows that only the ObjectBucketClaim manifest is backed up, not the data contained in the bucket.
Also, when restoring, the ObjectBucketClaims remain in "pending" status.
I am stuck and I don't know what to suggest to my storage department: I told them to give up on MinIO and switch to Ceph, but Ceph's RGW is not supported by K10.
Any suggestions on how I can handle this situation?
Thanks in advance.

Related

How to back up a PVC regularly

What can be done to back up Kubernetes PVCs regularly on GCP and AWS?
GCP has VolumeSnapshot, but I'm not sure how to schedule it, e.g. every hour or every day.
I also tried Gemini/fairwinds, but I get the following error on GCP. I installed the charts as described in the README.md and I can't find anyone else encountering the same error.
error: unable to recognize "backup-test.yml": no matches for kind "SnapshotGroup" in version "gemini.fairwinds.com/v1beta1"
You can implement Velero, which gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.
Unfortunately, Velero only allows you to back up & restore PVs, not PVCs.
Velero's restic integration backs up volume data by accessing the filesystem of the node the pod is running on. For this reason, the restic integration can only back up volumes that are mounted by a pod, not directly from the PVC.
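As a concrete illustration of the scheduling question above, here is a minimal sketch of opting a pod's volume into Velero's restic backup and running it hourly via a Schedule resource; the pod, namespace and claim names are placeholders, so adjust them to your setup.
apiVersion: v1
kind: Pod
metadata:
  name: app
  annotations:
    backup.velero.io/backup-volumes: data   # name of the volume in spec.volumes to back up with restic
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data                 # placeholder PVC name
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: hourly-backup
  namespace: velero
spec:
  schedule: "0 * * * *"                     # cron syntax: run every hour
  template:
    includedNamespaces:
      - default                             # placeholder namespace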
Might wanna look into stash.run
Agree with #hdhruna - Velero is really the most popular tool for doing that task.
However, you can also try miracle2k/k8s-snapshots
Automatic Volume Snapshots on Kubernetes
How is it useful? Simply add an annotation to your PersistentVolume or PersistentVolumeClaim resources, and let this tool create and expire snapshots according to your specifications (see the sketch after the list below).
Supported Environments:
Google Compute Engine disks,
AWS EBS disks.
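Based on the project's README (check there for the exact annotation key and delta syntax), annotating a claim might look roughly like this; the PVC name, size and intervals are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  annotations:
    backup.kubernetes.io/deltas: PT1H P1D P30D   # ISO 8601 durations controlling snapshot frequency and retention
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi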
I evaluated multiple solutions including k8s CSI VolumeSnapshots, https://stash.run/, https://github.com/miracle2k/k8s-snapshots and GCP disk snapshots.
The best one, in my opinion, is using the k8s-native implementation of snapshots via a CSI driver, provided your cluster version is >= 1.17. This allows snapshotting volumes while they are in use and doesn't require a read-many or write-many volume the way Stash does.
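For example, a minimal sketch of a CSI snapshot, assuming a CSI driver and the external snapshot controller are installed (on 1.17-1.19 the API group is snapshot.storage.k8s.io/v1beta1, v1 from 1.20 onward); the driver and PVC names are placeholders:
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: example.csi.vendor.com              # placeholder: use your CSI driver's name
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: app-data     # the PVC can stay mounted while the snapshot is taken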
I also chose gemini by Fairwinds to automate backup creation, deletion and restoration, and it works like a charm.
I believe your problem is caused by that gemini CRD missing from your cluster. Verify that the CRD is installed correctly and that the installed version is indeed the version you are trying to use.
My installation went flawlessly using their install guide with Helm.
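For reference, a rough sketch of a gemini SnapshotGroup based on my reading of the project's README; verify the field names against the chart version you installed, since the "no matches for kind SnapshotGroup" error in the question means the CRD was not present at all:
apiVersion: gemini.fairwinds.com/v1beta1
kind: SnapshotGroup
metadata:
  name: app-data-snapshots
spec:
  persistentVolumeClaim:
    claimName: app-data        # placeholder: the PVC to snapshot
  schedule:
    - every: 1 hour
      keep: 24                 # retain the last 24 hourly snapshots
    - every: 1 day
      keep: 7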

Unable to make Elasticsearch use a persistent volume in a Kubernetes cluster

I want to set up Elasticsearch on a Kubernetes cluster using Helm. I can set up Elasticsearch without persistence using the Helm chart below.
helm install --name elasticsearch incubator/elasticsearch \
--set master.persistence.enabled=false \
--set data.persistence.enabled=false \
--set image.tag=6.4.2 \
--namespace logging
However, I am not able to use it with persistence. Moreover, I am confused because I am using neither cloud-based storage (AWS, GCE) nor NFS; I am using local VM storage.
I added a disk in my VM environment and formatted it as ext4. Now I am trying to use it as a persistent disk for my Elasticsearch deployment.
I have tried lots of approaches without much success. I am happy to provide any additional data you need; I just need a working solution.
I don't believe this chart will support local storage.
Looking at the volumeClaimTemplate, for example in master-statefulset.yaml, shows that it's missing key parameters for a local volume setup (such as path, nodeAffinity, volumeBindingMode) described here. If you are using a cloud deployment, just use a cloud volume claim. If you have deployed a cluster on-prem or just onto your own computer, then you should fork the chart and adjust the volume claims to meet the requirements for local storage.
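As a sketch of what the forked chart's claims would need to bind against, assuming a manually provisioned local disk (the hostname, path and size below are placeholders):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes have no dynamic provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-0
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/es-data                # the ext4 disk mounted on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1             # pin the PV to the node that owns the disk
The chart's volumeClaimTemplates would then reference storageClassName: local-storage so the StatefulSet pods bind to these pre-created PVs.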
Either way, on future posts you should include relevant logs. With Kubernetes errors it helps to see all parts of the stack: control plane logs, object events (such as the output of describing the volume claim), Helm logs, Elasticsearch pod logs failing to discover a volume, etc.

Does OpenEBS Jiva work if the underlying storage pool filesystem is ZFS?

When using a Jiva datadirhost on a ZFS mountpoint (with xattr enabled) I got this:
time="2018-12-02T20:39:48Z" level=fatal msg="Error running start replica command: failed to find extents, error: invalid argument".
If we create a storage pool on an ext4-formatted zvol, it works. Is this expected behaviour? I am using Kubernetes 1.9.7 on Ubuntu 16.04 with ZFS.
OpenEBS currently supports two storage engines:
cStor (recommended)
Jiva
Jiva volumes are created on a local or mounted filesystem and cannot consume a block device directly. Because of this, Jiva only works with filesystems that provide extent mapping. ZFS does not support filefrag at the moment, which is why it gives the above error.
With the cStor storage engine, on the other hand, volumes are created on a pool built from block devices. The storage pool created by cStor is a native zpool (ZFS). You can get more details under Concepts -> CAS Engines on the OpenEBS docs site.
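For comparison, a very rough sketch of what a cStor setup looks like: a StoragePoolClaim over block devices plus a StorageClass that points at it. Field names have changed between OpenEBS releases, so treat this as illustrative and check the docs for your version; the pool and disk names are placeholders.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  disks:
    diskList:
      - disk-0123456789abcdef          # placeholder: a disk resource discovered by OpenEBS
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: cstor-disk-pool
provisioner: openebs.io/provisioner-iscsi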

Kubernetes: Configuration snapshotting

Is there any configuration snapshot mechanism in Kubernetes?
The goal is to take a snapshot of all deployments/services/config-maps etc. and apply them to a Kubernetes cluster.
The steps that should be taken:
Take a configuration snapshot
Delete the cluster
Create a new cluster
Apply the configuration snapshot to the new cluster
New cluster works like the old one
These are the 3 that spring to mind, with kubed being, at least according to its readme, the closest to your stated goals:
Ark
kubed
kube-backup
I run Ark in my cluster, but (to my discredit) I have not yet attempted a D.R. drill using it; I have only checked that it is, in fact, making config backups.
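For context, a hedged sketch of what a scheduled config-only backup looked like with Ark; the API group and namespace below are from the old Ark 0.x CRDs and may not match the version you install, so check your release's examples.
apiVersion: ark.heptio.com/v1
kind: Schedule
metadata:
  name: daily-config-backup
  namespace: heptio-ark
spec:
  schedule: "0 2 * * *"          # cron: every day at 02:00
  template:
    includedNamespaces:
      - "*"                      # back up objects from all namespaces
    snapshotVolumes: false       # config-only; enable to also snapshot persistent volumes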
The state of Kubernetes is stored in etcd, so backing up the etcd data and restoring it would restore the cluster. This would not, however, back up any information stored in persistent volumes; that needs to be handled separately.
The backup operator provided by CoreOS is a good option:
https://coreos.com/operators/etcd/docs/latest/user/walkthrough/backup-operator.html
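Following the walkthrough linked above, the operator is driven by an EtcdBackup resource along these lines; the endpoint, bucket path and secret name are placeholders.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdBackup
metadata:
  name: example-etcd-cluster-backup
spec:
  etcdEndpoints:
    - https://example-etcd-cluster-client:2379   # placeholder etcd client endpoint
  storageType: S3
  s3:
    path: my-bucket/etcd.backup                  # S3 bucket/key to write the snapshot to
    awsSecret: aws                               # secret holding AWS credentials and config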
Taking backups with etcdctl:
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
https://github.com/coreos/etcd/blob/master/etcdctl/README.md
Heptio Ark can back up config and also volumes:
https://github.com/heptio/ark
If you want a UI-based option, these would be good:
https://github.com/kaptaind/kaptaind
https://github.com/mhausenblas/reshifter

Kubernetes: dynamic persistent volume provisioning using iSCSI and NFS

I am successfully using Kubernetes 1.4 persistent volume support with iSCSI/NFS PVs and PVCs in my containers. However, the storage must be provisioned up front by specifying its capacity both when creating the PV and when claiming it.
My requirement is to just provide storage to the cluster (without specifying its capacity) and let users/developers claim storage based on their requirements. So I need dynamic provisioning using a StorageClass: just declare the storage with its details and let developers claim it based on their needs.
However, I am confused about how to do dynamic volume provisioning for iSCSI and NFS using a StorageClass, and I cannot find the exact steps to follow. According to the documentation I need to use an external volume plugin for both of these types, and one has already been made available as part of the incubator project https://github.com/kubernetes-incubator/external-storage/. But I don't understand how to load/run that external provisioner (I guess I need to run it as a container itself?) and then write a StorageClass with the details of the iSCSI/NFS storage.
Can somebody who has already done/used this provide guidance or pointers?
Thanks in advance,
picku
The project you pointed to is specific to iSCSI targets running targetd. You basically download the YAML files from https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd/kubernetes, modify them with your storage provider's parameters, and deploy the pods using kubectl create. In your persistent volume claims you need to specify a storage class, and the storage class then specifies the iSCSI provisioner. There are more steps, but that's the gist of it.
See this link for more detailed instructions https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd
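To make that concrete, here is a sketch of the kind of StorageClass that points at the targetd provisioner, with parameter names as I recall them from that repo's example (the portal address, IQNs and volume group are placeholders; double-check against the example file in the repo):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd-vg
provisioner: iscsi-targetd                          # must match the name the deployed provisioner registers
parameters:
  targetPortal: 192.168.1.10:3260                   # host:port of the targetd server
  iqn: iqn.2003-01.org.linux-iscsi.storage:targetd  # target IQN prefix
  iscsiInterface: default
  volumeGroup: vg-targetd                           # LVM volume group targetd carves LUNs from
  initiators: iqn.2017-04.com.example:node1         # initiator IQNs of the worker nodes
  chapAuthDiscovery: "false"
  chapAuthSession: "false"
PVCs that reference storageClassName: iscsi-targetd-vg are then provisioned dynamically by that provisioner pod.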
The OpenEBS community has folks running it this way, AFAIK. There is a blog post here, for example, explaining one approach supporting WordPress: https://blog.openebs.io/setting-up-persistent-volumes-in-rwx-mode-using-openebs-142632244cb2