When using a Jiva datadirhost on a ZFS mountpoint (with xattr enabled), I get this error:
time="2018-12-02T20:39:48Z" level=fatal msg="Error running start replica command: failed to find extents, error: invalid argument".
If we create an ext4 zvol-based storage pool, it works. Is this expected behaviour? I am using Kubernetes 1.9.7 on Ubuntu 16.04 with ZFS.
OpenEBS currently supports two storage engines:
cStor (Recommended)
Jiva
Jiva volumes are created from a local or mounted filesystem and can't consume a block device directly. This means Jiva works only with filesystems that provide extent mapping. ZFS does not support filefrag as of now, which is why it gives the above error.
On the other hand, with the cStor storage engine, volumes are created on a pool built on block devices. The storage pool created by cStor is a native zpool (ZFS). You can get more details under Concepts -> CAS Engines on the OpenEBS docs site.
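For illustration, a cStor pool claim of the kind meant above looks roughly like the sketch below (the apiVersion, field names, and block device names vary by OpenEBS version and are assumptions here; check the docs for your release):

apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool              # hypothetical pool name
spec:
  name: cstor-disk-pool
  type: disk                         # pool is built on raw block devices, not a filesystem
  poolSpec:
    poolType: striped                # cStor creates a native zpool across these devices
  blockDevices:
    blockDeviceList:
      - blockdevice-example-0        # placeholder; use the block devices NDM detected on your nodes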
Related
As the title says (and also here https://docs.kasten.io/latest/restrictions.html).
My company is using K10 latest (v5.0.2) as a backup tool for our Openshift cluster.
We are required to use an S3-compatible storage provisioner.
We moved from MinIO to Ceph because of some issues with MinIO (excessive memory usage, MinIO pod handling, ...), yet we found out that Ceph RGW is not supported by K10, and it seems this makes our backups fail: from the Kasten console it appears that only the ObjectBucketClaim manifest is backed up, not the data contained within the bucket.
Also, when restoring, the ObjectBucketClaims remain in "pending" status.
I am stuck and I don't know what to suggest to my storage department: I told them to give up MinIO and start using Ceph, but its RGW is not supported by K10.
Any suggestions on how I can handle this situation?
Thanks in advance.
I have a StatefulSet application in IBM Cloud Kubernetes Service with a PVC attached. I use the ibmc-vpc-block-retain-5iops-tier storage class and the vpc-block-csi-driver 4.0 driver. I accessed the PV and changed the capacity from 10Gb to 20Gb, and did the same for the PVC; in the describe I see:
Normal FileSystemResizeSuccessful 17m kubelet MountVolume.NodeExpandVolume succeeded for volume "pvc-976fdde8-06c6-4854-a3c5-3d65adbe640f"
and using the commands:
kubectl get pv
kubectl get pvc
I correctly see the new size. The problem is that when I try to change the StatefulSet volumeClaimTemplates from 10Gb to 20Gb it doesn't accept the change.
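For reference, the PVC change described above amounts to bumping spec.resources.requests.storage on the claim; a minimal sketch, with the claim name assumed (StatefulSet claims are usually named <template>-<pod>):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-myapp-0                                   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ibmc-vpc-block-retain-5iops-tier
  resources:
    requests:
      storage: 20Gi                                    # was 10Gi before the resize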
If I access the pod, I now see two devices of 10Gb:
252 48 10485760 vdd
252 64 10485760 vde
and if I run df -h, my filesystem on /dev/vdd is still 10Gb.
Now I think there could be two possible causes (but I am not sure):
PVC resize is only supported by the 4.2 driver and not by 4.0.
I need to run resize2fs to expand my filesystem, but I need admin privileges and this is not possible on IBM Cloud Kubernetes Service.
Any idea how I can solve the issue?
Environment: an external NFS share for persistent storage, accessible to all, R/W; CentOS 7 VMs (NFS server and K8s cluster); NFS utils installed on all workers.
Mounting on a VM, e.g. a K8s worker node, works correctly; the share is R/W.
Deployed in the K8s cluster: PV, PVC, Deployment (volumes referencing the PVC, volumeMounts).
The structure of the YAML files corresponds to the various instructions and postings, including the postings here on the site.
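For context, a minimal sketch of the kind of PV/PVC pair described (server address, export path, and sizes are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany                  # the share is expected to be writable
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.6.10              # placeholder NFS server address
    path: /nfsshare
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""               # bind to the pre-created PV, not a dynamic provisioner
  resources:
    requests:
      storage: 10Gi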
The pod starts, the share is mounted. Unfortunately, it is read-only. All the suggestions from the postings I have found about this did not work so far.
Any idea what else I could look out for, what else I could try?
Thanks. Thomas
After digging deeper, I found the cause of the problem. Apparently, the syntax of the NFS export is very sensitive; one extra space can be problematic.
On the NFS server, two export entries were stored in the kernel tables: the first R/O and the second R/W. I don't know whether this is a CentOS bug triggered by the syntax in /etc/exports.
On another CentOS machine I was able to mount the share without any problems (R/W). In the container (Debian-based image), however, it mounted only R/O. I have not investigated whether this is due to Kubernetes or to Debian behaving differently.
After correcting the /etc/exports file and restarting the NFS server, there was only one correct entry in the kernel table. After that, mounting R/W worked on a CentOS machine as well as in the Debian-based container inside K8s.
Here are the files / table:
previous /etc/exports:
/nfsshare 172.16.6.* (rw,sync,no_root_squash,no_all_squash,no_acl)
==> kernel:
/nfsshare 172.16.6.*(ro, ...
/nfsshare *(rw, ...
corrected /etc/exports (w/o the blank):
/nfsshare *(rw,sync,no_root_squash,no_all_squash,no_acl)
In principle, the idea of using an init container is a good one. Thank you for reminding me of this.
I have tried it.
Unfortunately, it doesn't change the basic problem. The file system is mounted "read-only" by Kubernetes. The init container returns the following error message (from the log):
chmod: /var/opt/dilax/enumeris: Read-only file system
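For illustration, the init container in question looks roughly like the sketch below (the image, resource names, and claim name are assumptions; only the path comes from the log above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      initContainers:
        - name: fix-perms
          image: busybox                               # assumed helper image
          command: ["chmod", "-R", "g+rwX", "/var/opt/dilax/enumeris"]
          volumeMounts:
            - name: data
              mountPath: /var/opt/dilax/enumeris
      containers:
        - name: app
          image: registry.example.com/app:latest       # placeholder application image
          volumeMounts:
            - name: data
              mountPath: /var/opt/dilax/enumeris
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-pvc                         # placeholder claim name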
What can be done to back up Kubernetes PVCs regularly on GCP and AWS?
GCP has VolumeSnapshot, but I'm not sure how to schedule it, e.g. every hour or every day.
I also tried Gemini by Fairwinds, but I get the following error on GCP. I installed the charts as mentioned in the README.md and I can't find anyone else encountering the same error.
error: unable to recognize "backup-test.yml": no matches for kind "SnapshotGroup" in version "gemini.fairwinds.com/v1beta1"
You can use Velero, which gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.
Unfortunately, Velero only allows you to back up & restore PVs, not PVCs.
Velero's restic integration backs up data from volumes by accessing the filesystem of the node on which the pod is running. For this reason, the restic integration can only back up volumes that are mounted by a pod, not directly from the PVC.
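If Velero otherwise fits, scheduling is done with its Schedule resource; a minimal sketch (namespace and retention values are placeholders):

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: hourly-backup
  namespace: velero
spec:
  schedule: "0 * * * *"              # cron expression: every hour
  template:
    includedNamespaces:
      - my-app                       # placeholder namespace to back up
    ttl: 720h0m0s                    # keep each backup for 30 days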
Might wanna look into stash.run
Agree with #hdhruna - Velero is really the most popular tool for doing that task.
However, you can also try miracle2k/k8s-snapshots ("Automatic Volume Snapshots on Kubernetes"):
How is it useful? Simply add an annotation to your PersistentVolume or PersistentVolumeClaim resources, and let this tool create and expire snapshots according to your specifications.
Supported Environments:
Google Compute Engine disks,
AWS EBS disks.
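For example, with k8s-snapshots the setup is roughly an annotation like the one below (the annotation key and the ISO 8601 deltas format are recalled from the project README; double-check them there):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  annotations:
    backup.kubernetes.io/deltas: PT1H P2D P30D   # assumed key; deltas set snapshot interval and retention tiers
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi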
I evaluated multiple solutions, including k8s CSI VolumeSnapshots, https://stash.run/, https://github.com/miracle2k/k8s-snapshots, and GCP disk snapshots.
The best one, in my opinion, is the Kubernetes-native implementation of snapshots via the CSI driver, provided you have a cluster version >= 1.17. It allows snapshotting volumes while they are in use and doesn't require a read-many or write-many volume the way Stash does.
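A one-off CSI snapshot is just a VolumeSnapshot object that references the PVC; a minimal sketch (class and claim names are placeholders, and on clusters older than 1.20 the apiVersion may still be v1beta1):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-snapshot-class     # placeholder VolumeSnapshotClass
  source:
    persistentVolumeClaimName: data               # the PVC to snapshot

The scheduling and expiry on top of this are what tools like Gemini add.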
I also chose Gemini by Fairwinds to automate backup creation, deletion, and restoration, and it works like a charm.
I believe your problem is caused by the Gemini CRD missing from your cluster. Verify that the CRD is installed correctly and also that the installed version is indeed the version you are trying to use.
My installation went flawlessly using their install guide with Helm.
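Once the CRD is in place, a SnapshotGroup looks roughly like the sketch below (the field names follow my recollection of the Gemini README for the v1beta1 API; verify them against the chart version you installed):

apiVersion: gemini.fairwinds.com/v1beta1
kind: SnapshotGroup
metadata:
  name: data-backups
spec:
  persistentVolumeClaim:
    claimName: data                  # placeholder PVC to snapshot
  schedule:
    - every: hour                    # assumed interval syntax
      keep: 24                       # how many snapshots to retain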
I am successfully using Kubernetes 1.4 persistent volume support with iSCSI/NFS PVs and PVCs in my containers. However, it requires provisioning the storage up front by specifying the capacity both at PV creation and when claiming the storage.
My requirement is to just provide storage to the cluster (without stating its capacity) and let users/developers claim storage based on their requirements, i.e. use dynamic provisioning with a StorageClass: just declare the storage with its details and let developers claim it based on their needs.
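In other words, the developer-side claim should be nothing more than a PVC naming a StorageClass and a size, something like this (names are placeholders):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dev-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: iscsi-targetd-vg-targetd   # placeholder; whatever StorageClass the admin publishes
  resources:
    requests:
      storage: 5Gi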
However, I am confused about how to use dynamic volume provisioning for iSCSI and NFS with a StorageClass, and I can't find the exact steps to follow. As per the documentation, I need to use an external volume plugin for both of these types, and one has already been made available as part of the incubator project https://github.com/kubernetes-incubator/external-storage/. But I don't see how to load/run that external provisioner (I need to run it as a container itself, I guess?) and then write a storage class with the details of the iSCSI/NFS storage.
Can somebody who has already done this guide me or provide pointers?
Thanks in advance,
picku
The project you pointed to is specific to iSCSI targets running targetd. You basically download the YAML files from https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd/kubernetes, modify them with your storage provider's parameters, and deploy the pods using kubectl create. In your PVCs you need to specify a StorageClass; the StorageClass then specifies the iSCSI provisioner. There are more steps, but that's the gist of it.
See this link for more detailed instructions https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd
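A StorageClass for that provisioner looks roughly like this (the provisioner name and parameter keys follow the examples in the linked repo as I remember them, so treat them as assumptions and check the repo):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iscsi-targetd-vg-targetd
provisioner: iscsi-targetd                       # must match the name the deployed provisioner watches for
parameters:
  targetPortal: 192.168.99.100:3260              # placeholder iSCSI portal
  iqn: iqn.2003-01.org.example.iscsi:targetd     # placeholder target IQN
  iscsiInterface: default
  volumeGroup: vg-targetd                        # LVM volume group managed by targetd
  initiators: iqn.2017-04.com.example:node1      # placeholder initiator list
  chapAuthDiscovery: "false"
  chapAuthSession: "false"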
The OpenEBS community has folks running this way, AFAIK. There is a blog post, for example, explaining one approach supporting WordPress: https://blog.openebs.io/setting-up-persistent-volumes-in-rwx-mode-using-openebs-142632244cb2