iSCSI as persistent storage for a StatefulSet in Kubernetes

I have a use case where a telecom application will run in several pods (each pod will host a configured billing service for a specific client), and this requires the service to store state, so the obvious choice is a StatefulSet.
Now the problem is that I need to use iSCSI as the backend storage for these pods. Can you please point me to a reference for such a use case? I am looking for YAML for the PV, the PVC, and the StatefulSet, and how to link them. The PV and PVC should use iSCSI as the storage option.

Yes, you are right that a StatefulSet is the option here; however, you could also use a Deployment.
You have not mentioned which cloud provider you will be using, but one note: iSCSI storage does not work well with the Container-Optimized OS that GKE uses for its Kubernetes nodes, so if you are on GCP/GKE, change the node OS; I would suggest using the Ubuntu node image first.
You can start the iSCSI service on Ubuntu first.
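For reference, if you want to consume an iSCSI target directly (without OpenEBS), a minimal PV/PVC sketch using the in-tree iscsi volume source could look like the following; the target portal, IQN, and LUN are placeholders for your own target:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.10:3260                 # placeholder: iSCSI target portal
    iqn: iqn.2001-04.com.example:storage.disk1   # placeholder: target IQN
    lun: 0
    fsType: ext4
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsi-pvc
spec:
  storageClassName: ""   # bind to the pre-provisioned PV above rather than a dynamic class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
With plain pre-provisioned PVs like this, a StatefulSet using volumeClaimTemplates needs one such PV per replica.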
You can also use MinIO or OpenEBS as the storage option.
Sharing the details for OpenEBS.
Create GCP disks and attach them to the nodes, or dynamically provision the storage using YAML as needed.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
            memory: 2Gi
      - name: PoolResourceLimits
        value: |-
            memory: 4Gi
spec:
  name: disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-<Number>
    - blockdevice-<Number>
    - blockdevice-<Number>
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-rep1
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "disk-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi
Application workload
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx   # required; points at the governing headless Service
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: openebs-sc-rep1
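The StatefulSet above references a governing Service through serviceName; a minimal headless Service sketch for it (the name nginx is an assumption matching the serviceName above):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None   # headless Service giving the StatefulSet pods stable network identities
  selector:
    app: nginx
  ports:
  - name: web
    port: 80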
The same approach works on AWS with EBS disks, since OpenEBS exposes the volumes over iSCSI.
For testing, you can also check out:
https://cloud.ibm.com/catalog/content/minio
A few links:
https://support.zadarastorage.com/hc/en-us/articles/360039289891-Kubernetes-CSI-Getting-started-with-Zadara-CSI-on-GKE-Google-Kubernetes-Engine-

Related

Having trouble deploying a database to a Kubernetes cluster

I am able to deploy the database service itself, but when I try to deploy with a persistent volume claim as well, the deployment silently fails. Below is the deployment.yaml file I am using. The service deploys fine if I remove the first 14 lines that define the persistent volume claim.
apiVersion: apps/v1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: timescale-pvc-1
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timescale
spec:
  selector:
    matchLabels:
      app: timescale
  replicas: 1
  template:
    metadata:
      labels:
        app: timescale
    spec:
      containers:
      - name: timescale
        image: timescale/timescaledb:2.3.0-pg11
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          value: "password"
        - name: POSTGRES_DB
          value: "metrics"
      volumes:
      - name: timescaledb-pv
        persistentVolumeClaim:
          claimName: timescale-pvc-1
Consider using a StatefulSet for running stateful apps like databases; a Deployment is preferred for stateless services.
You are using the following storage class in the PVC:
storageClassName: standard
Ensure that this storage class supports dynamic provisioning.
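For reference, a dynamically provisioning class named standard typically looks roughly like this on GKE (a sketch; other providers use their own provisioner, and a default class is usually already present):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd   # GKE example; other clouds use a different provisioner
parameters:
  type: pd-standard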
Are you creating a PV along with the PVC and Deployment? A Deployment, StatefulSet, or Pod can only use a PVC if there is a PV available for it (or the storage class provisions one dynamically).
If you are creating the PV as well, then there is possibly a different issue. Please share the events and logs of your Deployment and PVC.

How to sync a folder in a Kubernetes StatefulSet

We are trying to create a solution where changes made to a folder inside any of the pods of the StatefulSet are replicated. Any file change inside that folder on any pod should also be reflected in the other pods. Is there a sidecar solution for this requirement? We know that a StatefulSet creates a separate PV for each pod, so there is no common mount across the pods of the StatefulSet.
You can try using NFS or a file service like EFS; with that you will be able to use ReadWriteMany access.
For reference, Azure Files:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-azurefile
  labels:
    k8s-app: nginx
    version: v1
spec:
  serviceName: statefulset-azurefile
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx
      version: v1
  template:
    metadata:
      labels:
        k8s-app: nginx
        version: v1
    spec:
      containers:
      - name: statefulset-azurefile
        image: nginx
        volumeMounts:
        - name: persistent-storage
          mountPath: /mnt/azurefile
  volumeClaimTemplates:
  - metadata:
      name: persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: azurefile
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 5Gi
Demo : https://github.com/andyzhangx/demo/tree/master/linux/statefulset
If volumeClaimTemplates do not work as expected, use a regular persistentVolumeClaim instead, as sketched below.
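A standalone ReadWriteMany claim could look roughly like this (the claim name and size here are assumptions, and the azurefile class matches the annotation above); the pod template then references it through a regular persistentVolumeClaim volume instead of a volumeClaimTemplate:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-shared-pvc   # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi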
Article to read about the ReadWriteMany access mode: https://learn.microsoft.com/en-us/azure/aks/azure-files-volume
If you are on another cloud provider, GCP, AWS, and Oracle (OCI) offer similar file services:
GCP - Filestore
AWS - EFS
OCI - File Storage
OCI article if you want to explore: https://enabling-cloud.github.io/oci-learning/manual/StaticPersistentVolumeOnOCI.html

MongoDB Kubernetes local storage on two nodes

I am using kubeadm locally on two physical machines. I don't have any cloud resources, and I want to build MongoDB auto-scaling (locally for a start, maybe later in the cloud). So I have to use the local storage of my two physical machines. I suppose I have to create a local storage class and volumes. I am very new to Kubernetes, so don't judge me too hard. As I read here https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/ local persistent volumes are only for one node? Is there any way to take advantage of both physical machines' storage and build simple MongoDB scaling, using the Kubernetes MongoDB operator and Ops Manager? I made a few tests, but I could not achieve my goal: pod has unbound immediate PersistentVolumeClaims (Ops Manager).
What I was thinking in the first place was to "break" my two hard drives into many pieces and use sharding for MongoDB scaling.
Thanks in advance.
Well, you can use an NFS server with the same volume mounted on both nodes, sharing the same mount point.
Please be aware this approach is not recommended for production.
There are tons of how-tos on configuring an NFS server, for example:
https://www.tecmint.com/install-nfs-server-on-ubuntu/
https://www.tecmint.com/how-to-setup-nfs-server-in-linux/
With NFS working, you can use hostPath to mount the NFS share in your pods.
Create the PV and the PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
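As an alternative to hostPath (which assumes the NFS export is already mounted on every node), the PV can point at the NFS server directly via the nfs volume source; a sketch, with the server address and export path as placeholders:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5   # placeholder: NFS server address
    path: /nfs/data    # placeholder: exported directory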
And use the volume in your deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pv
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-pv
  template:
    metadata:
      labels:
        app: test-pv
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: /data
          name: pv-storage
      volumes:
      - name: pv-storage
        persistentVolumeClaim:
          claimName: pv-claim

Minio data does not persist through reboot

I deployed MinIO on Kubernetes on an Ubuntu desktop. It works fine, except that whenever I reboot the machine, everything that was stored in MinIO mysteriously disappears (if I create several buckets with files in them, I come back to a completely blank slate after the reboot; the buckets, and all their files, are completely gone).
When I set up Minio, I created a persistent volume in Kubernetes which mounts to a folder (/mnt/minio/minio - I have a 4 TB HDD mounted at /mnt/minio with a folder named minio inside that). I noticed that this folder seems to be empty even when I store stuff in Minio, so perhaps Minio is ignoring the persistent volume and using the container storage? However, I don't know why this would be happening; I have both a PV and a PV claim, and kubectl shows that they are bound to each other.
Below are the YAML files I applied to deploy my MinIO installation:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/minio/minio"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 99Gi
---
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage # must match the volume name, above
          mountPath: "/mnt/minio/minio"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
MinIO writes its data to the directory passed to the server command (/storage here), but your PVC is mounted at /mnt/minio/minio inside the container, so the data ends up on ephemeral container storage. Point the server argument at a directory under the mounted path instead:
        args:
        - server
        - /mnt/minio/minio/storage
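Equivalently (just a sketch of the alternative), you could keep the server argument as /storage and instead change the volume mount so the claim is mounted where MinIO already writes:
        # Alternative: mount the PVC at the directory MinIO already uses
        volumeMounts:
        - name: storage        # must match the volume name above
          mountPath: /storage  # matches the path passed to `minio server`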
But consider deploying with a StatefulSet, so that when your pod restarts it retains everything from the previous pod.
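A rough sketch of what that could look like, reusing the values from the Deployment above (assumptions: the existing manual PV is reused and a headless Service named minio exists for serviceName; adapt both to your setup):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio   # assumption: a headless Service with this name
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio:latest
        args: ["server", "/storage"]
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: /storage   # mount the claim where MinIO writes its data
  volumeClaimTemplates:
  - metadata:
      name: storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: manual   # generated claim (storage-minio-0) binds to a matching manual PV
      resources:
        requests:
          storage: 99Gi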

Bind several Persistent Volume Claims to one mount path

I am working on an application on Kubernetes in GCP, and I need really huge SSD storage for it.
So I created a StorageClass resource, a PersistentVolumeClaim that requests 500Gi of space, and then a Deployment resource.
StorageClass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
PVC.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-volume
spec:
  storageClassName: faster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongo-volume
      volumes:
      - name: mongo-volume
        persistentVolumeClaim:
          claimName: mongo-volume
When I applied the PVC, it was stuck in the Pending state for hours. I found out experimentally that it binds correctly with a maximum of 200Gi of requested storage space.
However, I can create several 200Gi PVCs. Is there a way to bind them to one path so they work as one big PVC in Deployment.yaml? Or maybe the 200Gi limit can be expanded?
I have just tested this in my own environment and it works perfectly, so the problem is in quotas.
To check this, go to:
IAM & admin -> Quotas -> Compute Engine API, Local SSD (GB), "your region"
and check the amount you have used.
I recreated the situation where I ran out of quota, and the PVC got stuck in Pending status the same as yours.
It happens because you create a 500 GB PVC for each pod.