How to create an st1 StorageClass within a cluster deployed by kops? - kubernetes

I deployed a cluster with kops, and then I listed the storage class:
kubectl get storageclass --all-namespaces
NAME            PROVISIONER             AGE
default         kubernetes.io/aws-ebs   2h
gp2 (default)   kubernetes.io/aws-ebs   2h
I want to make a PVC of type st1, how shall I do that?

You can create storage classes like any other Kubernetes resource. For the st1 storage class, the following should work:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: st1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: st1
You can find more information on storage classes in the documentation, in particular on using the kubernetes.io/aws-ebs provisioner.
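The kubernetes.io/aws-ebs provisioner also accepts optional parameters such as fsType and encrypted. As a sketch (the class name st1-encrypted is only an illustrative choice, not something kops creates for you):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: st1-encrypted   # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: st1
  fsType: ext4       # filesystem to create on the volume
  encrypted: "true"  # provision an encrypted EBS volume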
If you then want to dynamically provision a volume using that class, use the storageClassName: st1 property when creating a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: st1
  resources:
    requests:
      storage: 500Gi
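To try it out, you could save both manifests and apply them, then watch the claim bind (the file names below are only placeholders):
kubectl apply -f st1-storageclass.yaml
kubectl apply -f your-pvc.yaml
kubectl get pvc your-pvc   # STATUS should become Bound once the EBS volume is provisioned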

Related

RabbitMQ cluster-operator does not work in Kubernetes

I have a Kubernetes cluster 1.17.17 with 3 nodes. I deployed it with Rancher.
Following this instruction, I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
It's OK! But...
I have created this very simple configuration for the instance according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error:
error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I end up getting an error on the volume that is generated automatically:
FailedBinding  no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (this is NOT the recommended approach).
I really recommend configuring dynamic provisioning -
this will allow PersistentVolumes to be generated automatically.
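As a rough sketch of what dynamic NFS provisioning can look like, assuming you have deployed an external provisioner such as nfs-subdir-external-provisioner (the provisioner name and parameter below depend on your deployment and are assumptions here):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # assumed default name registered by the provisioner
parameters:
  archiveOnDelete: "false"  # do not keep an archived copy when a PVC is deleted
PVCs that reference storageClassName: nfs-client then get their PersistentVolumes created automatically.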
Manually creating a PersistentVolume
As I mentioned, this isn't the recommended approach, but it may be useful when you want to check something quickly without configuring additional components.
First you need to create PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
hostPath:
path: /mnt/rabbitmq # data will be stored in the "/mnt/rabbitmq" directory on the worker node
type: Directory
And then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine where the Pod will be running.
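A quick way to find out (not part of the original answer) is to let the Pod schedule first and then check which node it landed on:
kubectl get pod rabbitmq-server-0 -n test-rabbitmq -o wide   # the NODE column shows where to create /mnt/rabbitmq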
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
persistentvolume/pv   10Gi       RWO            Recycle          Bound    test-rabbitmq/persistence-rabbitmq-server-0                            11m

NAMESPACE       NAME                                                  STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-rabbitmq   persistentvolumeclaim/persistence-rabbitmq-server-0   Bound    pv       10Gi       RWO                           11m
# kubectl get pod -n test-rabbitmq
NAME                READY   STATUS    RESTARTS   AGE
rabbitmq-server-0   1/1     Running   0          11m

What Persistent Volume storage solutions can I use when I run my Kubernetes cluster on GCP?

I have deployed my Kubernetes cluster on GCP Compute Engine instances, with 3 master nodes and 3 worker nodes (it's not a GKE cluster). Can anybody suggest what storage options I can use for my cluster? If I create a virtual disk on GCP, can I use that disk as persistent storage?
You can use GCE Persistent Disk Storage Class.
Here is how you create the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Then do the following to create the PVC (the PV is provisioned dynamically by the storage class) and attach it to your pod:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: webserver-pd
spec:
  containers:
    - image: httpd
      name: webserver
      volumeMounts:
        - mountPath: /data
          name: dynamic-volume
  volumes:
    - name: dynamic-volume
      persistentVolumeClaim:
        claimName: gce-claim
Example taken from this blog post
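If you want to verify the result, you can check that the claim bound and that a disk was actually created on GCP (a quick sanity check, not from the blog post):
kubectl get pvc gce-claim
gcloud compute disks list   # the dynamically provisioned disk appears with a generated pvc-... name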
There are two ways of provisioning Persistent Volumes: Static Provisioning and Dynamic Provisioning.
I will briefly describe each of them.
Static Provisioning
Using this approach, you need to create the disk, PersistentVolume and PersistentVolumeClaim manually.
I've created a simple example to illustrate how it works.
First I created the disk; on GCP we can use the gcloud command:
$ gcloud compute disks create --size 10GB --zone europe-west3-c test-disk
NAME        ZONE             SIZE_GB   TYPE          STATUS
test-disk   europe-west3-c   10        pd-standard   READY
Next I created the PV and PVC using these manifest files:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  gcePersistentDisk:
    pdName: test-disk # This GCE PD must already exist.
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After applying these manifest files, we can check the status of the PV and PVC:
root@km:~# kubectl get pv,pvc
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pv-test   10Gi       RWO            Retain           Bound    default/claim-test                           12m

NAME                               STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-test   Bound    pv-test   10Gi       RWO                           12m
Finally I used the above claim as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx"
          name: vol-test
  volumes:
    - name: vol-test
      persistentVolumeClaim:
        claimName: claim-test
We can inspect the created Pod to check that it works as expected:
root@km:~# kubectl exec -it web -- bash
root@web:/# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
...
/dev/sdb        9.8G   37M   9.8G    1%  /usr/share/nginx
...
Dynamic Provisioning
In this case the volume is provisioned automatically when the application requires it.
First you need to create a StorageClass object that defines a provisioner such as kubernetes.io/gce-pd.
We don't need to create a PersistentVolume anymore; it's created automatically for us via the StorageClass.
I've also created a simple example to illustrate how it works.
First I created a StorageClass as the default storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
And then the PVC (the same as in the previous example); but in this case the PV was created automatically:
root@km:~# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976   10Gi       RWO            Delete           Bound    default/claim-test   standard                3m10s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-test   Bound    pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976   10Gi       RWO            standard       3m12s
In more advanced cases it may be useful to create multiple StorageClasses with different persistent disk types.
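For example, a second (non-default) class backed by SSD persistent disks could look like this; the name ssd is only an illustrative choice:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd   # SSD-backed persistent disks
  fstype: ext4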

How to use a storage class for a StatefulSet? Do I have to create a PVC?

How do I use a storage class for a StatefulSet? I've created the StorageClass. I also created a PVC, but I'm a bit confused whether the PVC needs to be created, since the PVC already requests storage and volumeClaimTemplates also requests storage. Either way it's not working, with or without the PVC.
I get the following error:
create Pod dbhost001-0 in StatefulSet dbhost001 failed error: failed to create PVC mysql-dev-dbhost001-0: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
create Claim mysql-dev-dbhost001-0 for Pod dbhost001-0 in StatefulSet dbhost001 failed error: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
storageClass.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Statefultset.yml:
apiVersion: apps/v1
kind: StatefulSet
....
....
volumeClaimTemplates:
- metadata:
name: mysql-dev
spec:
accessModes:
- ReadWriteOnce
storageClassName: ebs-sc
resources:
requests:
stroage: 2Gi
I'm not sure if the PVC is needed. I was using this for a normal ReplicaSet deployment, but I'm not sure if a StatefulSet needs it.
PersistentVolumeClaim.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-dev
  namespace: test-db-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 2Gi
Figured it out.
First, there was a typo in Statefultset.yml: it should be storage instead of stroage.
Second, there is no need for a separate PersistentVolumeClaim, since volumeClaimTemplates does the same thing: it claims storage from the storage class for each replica.
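For reference, the corrected volumeClaimTemplates section would look like this (only the typo changes):
  volumeClaimTemplates:
    - metadata:
        name: mysql-dev
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: ebs-sc
        resources:
          requests:
            storage: 2Gi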

Can I use EBS in EKS on fargate?

I want to use Prometheus in EKS on AWS Fargate.
I followed this:
https://aws.amazon.com/jp/blogs/containers/monitoring-amazon-eks-on-aws-fargate-using-prometheus-and-grafana/
But I can't create persistent volume claims.
This is prometheus-storageclass.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
  namespace: prometheus
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
Can I use aws-ebs in the provisioner field?
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-server
  namespace: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: prometheus
I applied the StorageClass and the PVC.
The PVC stays Pending and I get the following message:
Failed to provision volume with StorageClass "prometheus": error finding candidate zone for pvc: no instances returned
I forgot to create the node group.
eksctl create nodegroup --cluster=myClusterName
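Once the node group exists, you can confirm that EC2 nodes joined the cluster and that the claim binds (just a sanity check, not part of the original answer):
kubectl get nodes
kubectl get pvc -n prometheus   # prometheus-server should move from Pending to Bound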

Snapshotting on google cloud/Kubernetes when using storageClass persistent volumes

StorageClasses are the new method of specifying dynamic persistent volume claim (PVC) dependencies within Kubernetes. This avoids the need to explicitly provision one directly with the cloud provider (in my case Google Container Engine (GKE)).
Definition for the StorageClasses (GKE already has a default for standard class)
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: europe-west1-b
Definition for the actual PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-pvc
  namespace: staging
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100Gi
  storageClassName: "standard"
Here is the result of kubectl get storageclass:
NAME                 TYPE
fast                 kubernetes.io/gce-pd
standard (default)   kubernetes.io/gce-pd
Here is the result of kubectl get pvc:
NAME             STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
nfs-pvc          Bound    nfs                                        1Mi        RWX                          119d
nfs-server-pvc   Bound    pvc-905a810b-3f13-11e7-82f9-42010a840072   100Gi      RWO           standard       81d
I would like to continue taking snapshots of the volumes, but the dynamic nature of the volume names created (in this case pvc-905a810b-3f13-11e7-82f9-42010a840072) means I cannot continue with the following command that I had been using via cron (note the "nfs" name is now incorrect):
gcloud compute --project "XXX-XXX" disks snapshot "nfs" --zone "europe-west1-b" --snapshot-names "nfs-${DATE}"
I guess this boils down to whether Kubernetes allows explicit naming through a StorageClass-based PVC. The docs don't seem to allow this. Any ideas?
One approach is to manually create the PV and give it a stable name that you can use in your scripts. You can use gcloud commands to create the underlying PD disks. When you create the PV, give it a label:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: my-pv-0
labels:
pdName: my-pv-0
spec:
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteOnce"
storageClassName: fast
gcePersistentDisk:
fsType: "ext4"
pdName: "my-pd-0"
Then attach it to the PVC using a selector:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast
  selector:
    matchLabels:
      pdName: my-pv-0
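With a stable underlying disk name like my-pd-0 (the names above are only examples), the original cron snapshot command keeps working unchanged, e.g.:
gcloud compute --project "XXX-XXX" disks snapshot "my-pd-0" --zone "europe-west1-b" --snapshot-names "my-pd-0-${DATE}"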