Kubernetes Ceph StorageClass with dynamic provisioning

I'm trying to set up my Kubernetes cluster with a Ceph cluster using a StorageClass, so that with each PVC a new PV is created automatically inside the Ceph cluster.
But it doesn't work. I've tried a lot, read a lot of documentation and tutorials, and can't figure out what went wrong.
I've created 2 secrets, for the Ceph admin user and another user kube, which I created with the commands below to grant access to a Ceph OSD pool.
Creating the pool:
sudo ceph osd pool create kube 128
Creating the user:
sudo ceph auth get-or-create client.kube mon 'allow r' \
osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' \
-o /etc/ceph/ceph.client.kube.keyring
After that I exported both keys and converted them to Base64 with:
sudo ceph auth get-key client.admin | base64
sudo ceph auth get-key client.kube | base64
I used those values inside my secret.yaml to create the Kubernetes secrets.
apiVersion: v1
kind: Secret
type: "kubernetes.io/rbd"
metadata:
  name: ceph-secret
data:
  key: QVFCb3NxMVpiMVBITkJBQU5ucEEwOEZvM1JlWHBCNytvRmxIZmc9PQo=
And another one named ceph-user-secret.
Then I created a storage class to use the ceph cluster
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: publicIpofCephMon1:6789,publicIpofCephMon2:6789
  adminId: admin
  adminSecretName: ceph-secret
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
To test my setup I created a PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eng
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
But it gets stuck in the Pending state:
# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-eng   Pending                                      standard       25m
Also, no images are created inside the Ceph kube pool.
Do you have any recommendations on how to debug this problem?
I tried installing the ceph-common Ubuntu package on all Kubernetes nodes. I also replaced the kube-controller-manager Docker image with an image provided by AT&T that includes the ceph-common package:
https://github.com/att-comdev/dockerfiles/tree/master/kube-controller-manager
The network is fine; I can access my Ceph cluster from inside a pod and from every Kubernetes host.
I would be glad if anyone has any ideas!

You must use the access mode ReadWriteOnce.
As you can see at https://kubernetes.io/docs/concepts/storage/persistent-volumes/ (Persistent Volumes documentation), RBD devices do not support ReadWriteMany mode. Choose a different volume plugin (CephFS, for example) if you need several pods to read and write data on the same PV.
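For example, the claim from the question would need to request ReadWriteOnce instead (a minimal sketch, keeping the rest of the spec unchanged):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eng
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi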

As an expansion on the accepted answer...
RBD is a remote block device, i.e. an external hard drive, like iSCSI. The filesystem is interpreted by the client container, so it can only be written by a single client or corruption will occur.
CephFS is a network-aware filesystem similar to NFS or SMB/CIFS, which allows multiple clients to write to different files. The filesystem is interpreted by the Ceph servers, so it can accept writes from multiple clients.
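As an illustration (a sketch only, assuming a CephFS-backed StorageClass already exists in the cluster, e.g. provided by the ceph-csi CephFS driver; the class name csi-cephfs below is a placeholder), a claim backed by CephFS can legitimately request ReadWriteMany:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: csi-cephfs   # placeholder name of a CephFS-backed class
  resources:
    requests:
      storage: 1Gi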

Related

Initializing a dynamically provisioned shared volume with ReadOnlyMany access mode

My GKE deployment consists of N pods (possibly on different nodes) and a shared volume, which is dynamically provisioned by pd.csi.storage.gke.io and is a Persistent Disk in GCP. I need to initialize this disk with data before the pods go live.
My problem is I need to set accessModes to ReadOnlyMany and be able to mount it on all pods across different nodes in read-only mode, which I assume would effectively make it impossible to mount it in write mode in an initContainer.
Is there a solution to this issue? The answer to this question suggests a good solution for the case where each pod has its own disk mounted, but I need to have one disk shared among all pods since my data is quite large.
With GKE 1.21 and later, you can enable the managed Filestore CSI driver in your clusters. You can enable the driver for new clusters
gcloud container clusters create CLUSTER_NAME \
--addons=GcpFilestoreCsiDriver ...
or update existing clusters:
gcloud container clusters update CLUSTER_NAME \
--update-addons=GcpFilestoreCsiDriver=ENABLED
Once you've done that, create a storage class (or have your platform admin do it):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  tier: standard
  network: default
After that, you can use PVCs and dynamic provisioning:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: filestore-example
  resources:
    requests:
      storage: 1Ti
...I need to have one disk shared among all pods
You can try Filestore. First, you create a Filestore instance and save your data on a Filestore volume. Then you install the Filestore CSI driver on your cluster. Finally, you share the data with the pods that need to read it by using a PersistentVolume that refers to the Filestore instance and volume above.
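A rough sketch of such a pre-provisioned PersistentVolume pointing at an existing Filestore instance (the location, instance name, share name, and IP below are placeholders for your own instance):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-data-pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: filestore.csi.storage.gke.io
    # format: modeInstance/<location>/<instance-name>/<share-name>, all placeholders
    volumeHandle: "modeInstance/us-central1-c/my-filestore-instance/my_share"
    volumeAttributes:
      ip: 10.0.0.2        # IP address of the Filestore instance
      volume: my_share    # share name on the instance
A PVC can then bind to this PV (for example by setting volumeName: filestore-data-pv and an empty storageClassName) and be mounted by the pods, read-only where needed.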

How to connect VMware storage to Kubernetes built using Rancher 2.8

The cluster nodes are on-prem VMware servers; we used Rancher just to build the k8s cluster.
The build was successful, but when trying to host apps that use PVCs we have problems: dynamic volume provisioning isn't happening and the PVCs are stuck in the 'Pending' state.
A VMware storage class is being used, and our vSphere admins confirmed that the VMs have visibility to the datastores, so ideally it should work.
While configuring the cluster we used the cloud provider credentials according to the Rancher docs.
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    disk:
      scsicontrollertype: pvscsi
    global:
      datacenters: nxs
      insecure-flag: true
      port: '443'
      soap-roundtrip-count: 0
      user: k8s_volume_svc#vsphere.local
Storage class yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nxs01-k8s-0004
parameters:
  datastore: ds1_K8S_0004
  diskformat: zeroedthick
reclaimPolicy: Delete
PVC yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arango
  namespace: arango
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: nxs01-k8s-0004
Now I want to understand why my PVCs are stuck in the Pending state. Are there any other steps I missed?
I saw the Rancher documentation saying a Storage Policy has to be given as an input:
https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/#creating-a-storageclass
A VMware document refers to it as an optional parameter, and also has a statement at the top saying it doesn't apply to tools that use CSI (Container Storage Interface):
https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html
I found that Rancher is using a CSI driver called rshared.
So is this storage policy mandatory? Is it what is stopping me from provisioning a VMDK file?
I gave the documentation for creating the storage policy to the vSphere admins; they said this is for vSAN, while our datastores are on VMAX. I couldn't understand the difference or find a different doc for VMAX.
It would be a great help if this gets fixed! :)
The whole thing came down to the path defined for the storage end: in the cloud config YAML the path was wrong. The vSphere admins gave us the path where the VMs reside; instead they should have given the path where the storage resides.
Once this was corrected, the PVC came to the Bound state.
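For illustration only, the datastore path typically lives in the workspace section of the RKE vSphere cloud provider config; the server and folder values below are placeholders, while datacenter and datastore are taken from the question's config:
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    workspace:
      server: vcenter.example.local      # placeholder vCenter address
      datacenter: nxs
      folder: k8s-vms                    # placeholder VM folder
      default-datastore: ds1_K8S_0004    # the datastore where volumes should be created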

Kubernetes pod scheduling with replication and local persistent volumes?

Hi, I have this problem in Kubernetes: I install a deployment with Helm that consists of 2 pods and I have to put them on two nodes, but I want the persistent volumes used by the pods to be on the same nodes the pods are deployed on. Is this feasible with Helm? Thanks.
I think you can use Local Persistent Volumes.
See more: local-pv, local-pv-comparison.
Usage of Local Persistent Volumes:
The local volumes must still first be set up and mounted on the local
node by an administrator. The administrator needs to mount the local
volume into a configurable “discovery directory” that the local volume
manager recognizes. Directories on a shared file system are supported,
but they must be bind-mounted into the discovery directory.
This local volume manager monitors the discovery directory, looking
for any new mount points. The manager creates a PersistentVolume
object with the appropriate storageClassName, path, nodeAffinity, and
capacity for any new mount point that it detects. These
PersistentVolume objects can eventually be claimed by
PersistentVolumeClaims, and then mounted in Pods.
After a Pod is done using the volume and deletes the
PersistentVolumeClaim for it, the local volume manager cleans up the
local mount by deleting all files from it, then deleting the
PersistentVolume object. This triggers the discovery cycle: a new
PersistentVolume is created for the volume and can be reused by a new
PersistentVolumeClaim.
Local volume can be requested in exactly the same way as any other
PersistentVolume type: through a PersistentVolumeClaim. Just specify
the appropriate StorageClassName for local volumes in the
PersistentVolumeClaim object, and the system takes care of the rest!
In your case I would manually create a StorageClass and a PersistentVolume and use them during chart installation. Note that a local PV also requires a nodeAffinity section pinning it to the node that holds the disk (the node name below is a placeholder):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: example-local-storage
  local:
    path: /mnt/disks/v1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - your-node-name   # placeholder: the node where /mnt/disks/v1 exists
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-local-storage
provisioner: kubernetes.io/no-provisioner
and override the default PVC storageClassName configuration during helm install like this:
$ helm install --name chart-name --set persistence.storageClass=example-local-storage
Take a look: using-local-pv, pod-local-pv, kubernetes-1.19-lv.

AZDATA BDC CREATE stuck. Control containers pending. Scheduling error on NFS PVC

I am very new to Linux, Docker, and Kubernetes. I need to set up an on-premises POC to showcase BDC.
What I have installed.
1. Ubuntu 19.10
2. Kubernetes Cluster
3. Docker
4. NFS
5. Settings and prerequisites but obviously missing stuff.
This has been done with stitched-together tutorials. I am stuck on "AZDATA BDC CREATE". Error below:
Scheduling error on pod PVC.
Some more information:
NFS information
Storage class info
More info 2019-12-20:
PV & PVCs bound on the NFS side
Dynamic volume provisioning together with a StorageClass allows the cluster to provision PersistentVolumes on demand. In order for that to work, the given storage provider must support provisioning - this allows the cluster to request a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
First make sure you have defined the StorageClass properly. You have defined the nfs-dynamic class, but it is not set as the default storage class; that's why your claims cannot bind volumes through it. You have two options:
1. Execute the command below:
$ kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
2. Specify in the PVC configuration file the storage class you have used.
Here is an example configuration of such a file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
Simply add the line storageClassName: nfs-dynamic.
Then make sure you have followed the steps from this guide: nfs-kubernetes.

How to use OpenStack Cinder to create storage class and dynamically provision persistent volume in Kubernetes Cluster

Recently, when practicing Kubernetes, I found there is no doc or example specifically explaining how to use Cinder correctly in Kubernetes.
So how do you set up Cinder to be used in Kubernetes?
I did some experimenting and worked out how to set up Cinder with Kubernetes. It seems suitable to document and share here.
Preparation
a Kubernetes cluster
an OpenStack environment with the Cinder service available
Background
From my investigation, the kube-controller-manager component is responsible for loading volume plugins and related functionality in Kubernetes, so we can make Cinder available by adjusting the kube-controller-manager configuration.
Steps
Prepare a cloud.conf file to contain your OpenStack creds
Save your OpenStack creds as a file, for example /etc/kubernetes/cloud.conf, on the Kubernetes control plane node where kube-controller-manager runs. The following is an example cloud.conf:
[Global]
auth-url=$your_openstack_auth_url
username=$your_openstack_user
password=$your_user_pw
region=$your_openstack_region
tenant-name=$your_project_name
domain-name=$your_domain_name
ca-file=$your_openstack_ca
Most of these can be found in your stackrc file. The ca-file item is optional, depending on whether your OpenStack auth URL is HTTP or HTTPS.
Adjust kube-controller-manager start configuration
This link has the full list of options for kube-controller-manager: https://kubernetes.io/docs/admin/kube-controller-manager/
We need to add two extra parameters to your current configuration:
--cloud-provider=openstack
--cloud-config=/etc/kubernetes/cloud.conf
There are mainly two ways to start kube-controller-manager: 1) using systemd, or 2) using a static pod.
One tip: if you are using a static pod for kube-controller-manager, make sure you have mounted all files such as cloud.conf or the OpenStack CA file into your container.
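For the systemd case, a minimal sketch of what the extra flags could look like in the unit file (the binary path and the other flags are placeholders for whatever your existing unit already uses):
# /etc/systemd/system/kube-controller-manager.service (excerpt, hypothetical paths)
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --cloud-provider=openstack \
  --cloud-config=/etc/kubernetes/cloud.conf
Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart kube-controller-manager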
Verification
We will create a StorageClass and use it to provision a persistent volume dynamically.
Create a StorageClass named standard:
demo-sc.yml:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
Use kubectl create -f demo-sc.yml to create it, and kubectl get sc to verify it was created correctly:
NAME                 TYPE
standard (default)   kubernetes.io/cinder
Create a PersistentVolumeClaim that uses the StorageClass to provision a PersistentVolume in Cinder:
demo-pvc.yml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Create the PVC with kubectl create -f demo-pvc.yml, then check with kubectl get pvc:
NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
cinder-claim   Bound     pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379   1Gi        RWO           standard       23h
And in the OpenStack environment, check with cinder list | grep pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379:
root@ds0114:~# cinder list | grep pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379
| ddd8066d-2e16-4cb2-a89e-cd9d5b99ef1b | available | kubernetes-dynamic-pvc-5dd3d62e-9204-11e7-bc43-fa163e0e0379 | 1 | CEPH_SSD | false | |
So now the StorageClass is working well with Cinder in Kubernetes.
Thanks a lot for your great share!
The solution works for me (K8s 1.14.3, OpenStack Queens); I just added parameter/volumeMounts/volumes snippets like the ones below:
Parameters:
- --cloud-provider=openstack
- --cloud-config=/etc/kubernetes/cloud-config

volumeMounts:
- mountPath: /etc/kubernetes/cloud-config
  name: cloud
  readOnly: true

volumes:
- hostPath:
    path: /etc/kubernetes/cloud.conf
    type: FileOrCreate
  name: cloud