Can I use EBS in EKS on Fargate? - kubernetes

I want to use Prometheus in EKS on AWS Fargate. I followed this guide:
https://aws.amazon.com/jp/blogs/containers/monitoring-amazon-eks-on-aws-fargate-using-prometheus-and-grafana/
but I can't create PersistentVolumeClaims.
This is prometheus-storageclass.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
  namespace: prometheus
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
Can I use aws-ebs in the provisioner field?
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-server
  namespace: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: prometheus
I applied the StorageClass and the PVC. The PVC stays Pending, and I get the following message:
Failed to provision volume with StorageClass "prometheus": error finding candidate zone for pvc: no instances returned

I forgot to create the node group. The kubernetes.io/aws-ebs provisioner needs EC2 instances to attach EBS volumes to, so on a Fargate-only cluster there are no candidate instances and provisioning fails. Creating a node group fixed it:
eksctl create nodegroup --cluster=myClusterName
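Once the node group is up, a quick sanity check (standard kubectl commands; the PVC name and namespace are the ones from the manifests above):
kubectl get nodes                                 # EC2 nodes should appear and report Ready
kubectl get pvc -n prometheus prometheus-server   # should move from Pending to Bound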

Related

RabbitMQ cluster-operator does not work in Kubernetes

I have a Kubernetes 1.17.17 cluster with 3 nodes, deployed with Rancher.
Following this guide, I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
That works fine, but..
I created this very simple configuration for the instance, following the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error: error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! So I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and a PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I still end up with an error on the PVC that the operator generates automatically:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure dynamic volume provisioning (e.g. dynamic NFS provisioning, as described in this article), or you can manually create a PersistentVolume (this is NOT the recommended approach).
I really recommend configuring dynamic provisioning;
it will create PersistentVolumes for you automatically.
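Whichever route you take, note the second half of your error: "no storage class is set". The operator-generated PVC (persistence-rabbitmq-server-0) does not name a storageClassName, so it is only resolved against the cluster's default StorageClass, and your cluster has none. Once you have a class backed by a real dynamic provisioner, you can mark it as the default with the standard annotation (shown here on a hypothetical class named nfs-client):
kubectl patch storageclass nfs-client \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'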
Manually creating a PersistentVolume
As I mentioned, this isn't the recommended approach, but it can be useful when you want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq  # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
Then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case there are 3 worker nodes, so it may be difficult to predict where the Pod will land.
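If the Pod gets scheduled to a node where the directory doesn't exist yet, it will be stuck in ContainerCreating; you can check where it landed and create the directory there, and the kubelet will retry the mount (standard kubectl usage):
kubectl get pod rabbitmq-server-0 -n test-rabbitmq -o wide   # the NODE column shows the scheduled node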
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                         STORAGECLASS   REASON   AGE
persistentvolume/pv   10Gi       RWO            Recycle          Bound    test-rabbitmq/persistence-rabbitmq-server-0                           11m

NAMESPACE       NAME                                                  STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-rabbitmq   persistentvolumeclaim/persistence-rabbitmq-server-0   Bound    pv       10Gi       RWO                           11m

# kubectl get pod -n test-rabbitmq
NAME                READY   STATUS    RESTARTS   AGE
rabbitmq-server-0   1/1     Running   0          11m

How to use a storage class for StatefulSet? Do I have to create a PVC?

How do I use a storage class for a StatefulSet? I've created the StorageClass. I also created a PVC, but I'm confused about whether the PVC is needed at all, since the PVC requests storage and volumeClaimTemplates also requests storage. Either way it's not working, with or without the PVC.
I get the following error:
create Pod dbhost001-0 in StatefulSet dbhost001 failed error: failed to create PVC mysql-dev-dbhost001-0: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
create Claim mysql-dev-dbhost001-0 for Pod dbhost001-0 in StatefulSet dbhost001 failed error: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
storageClass.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Statefulset.yml:
apiVersion: apps/v1
kind: StatefulSet
....
....
  volumeClaimTemplates:
    - metadata:
        name: mysql-dev
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: ebs-sc
        resources:
          requests:
            stroage: 2Gi
I'm not sure if the PVC is needed. I was using one for a normal ReplicaSet Deployment, but I'm not sure whether a StatefulSet needs it.
PersistentVolumeClaim.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-dev
  namespace: test-db-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 2Gi
Figured it out.
First, there was a typo in Statefulset.yml: it should be storage instead of stroage.
Second, there is no need for a separate PersistentVolumeClaim, since volumeClaimTemplates does the same thing: it creates a PVC for each replica from the storage class.
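For reference, here is the corrected volumeClaimTemplates block from the question, with only the typo fixed:
volumeClaimTemplates:
  - metadata:
      name: mysql-dev
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: ebs-sc
      resources:
        requests:
          storage: 2Gi  # was misspelled "stroage"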

kubernetes volume mount issue for google PD (kubeadm installed)

I have Kubernetes installed via kubeadm on Google Cloud, and I'm trying to mount a Google persistent disk.
storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
volumeBindingMode: WaitForFirstConsumer
pvc-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: standard
but I get the following error:
error while running "VolumeBinding" prebind plugin for pod "test-pd": Failed to bind volumes: provisioning failed for PVC
Upon testing on a GKE cluster, I was able to successfully deploy your StorageClass, your PersistentVolumeClaim, and a Deployment of one Pod using them.
To rule out the possibility that this is specific to vanilla Kubernetes, it would help if you also provided the YAML for your test-pd Pod/Deployment.
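For comparison, here is a minimal Pod along the lines of what I tested. The Pod name test-pd comes from your error message and the claim name from your YAML; the image and mount path are arbitrary choices for the test:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - name: test
      image: nginx          # arbitrary test image
      volumeMounts:
        - name: pd-volume
          mountPath: /data  # arbitrary mount path
  volumes:
    - name: pd-volume
      persistentVolumeClaim:
        claimName: myclaim  # the PVC from the question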

how to create st1 storageclass within a cluster deployed by kops?

I deployed a cluster with kops, and then listed the storage classes:
kubectl get storageclass --all-namespaces
NAME            PROVISIONER             AGE
default         kubernetes.io/aws-ebs   2h
gp2 (default)   kubernetes.io/aws-ebs   2h
I want to create a PVC of type st1. How do I do that?
You can create storage classes like any other Kubernetes resource. For the st1 storage class, the following should work:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: st1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: st1
You can find more information on storage classes in the documentation, in particular on using the kubernetes.io/aws-ebs provisioner.
If you then want to dynamically provision a volume using that class, set storageClassName: st1 when creating a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: st1
  resources:
    requests:
      storage: 500Gi
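One caveat, per the AWS EBS documentation rather than the original answer: st1 volumes have a minimum size (125 GiB at the time of writing), so very small requests will fail to provision; the 500Gi request above is fine. To apply and verify (the file names here are hypothetical):
kubectl apply -f st1-storageclass.yaml -f st1-pvc.yaml
kubectl get pvc your-pvc   # STATUS should become Bound once the volume is provisioned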

Kubernetes - PersistentVolumeClaim failed

I have a GKE-based Kubernetes setup and a Pod that requires a storage volume. I attempted to use the config below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: standard
The PVC is not provisioned, and I get the error below:
Failed to provision volume with StorageClass "standard": googleapi: Error 503: The zone 'projects/p01/zones/europe-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
Looking at the GKE quotas page, I don't see any issues, and deleting other PVCs does not solve it either. Can anyone help? Thanks.
There is no configuration problem on your side: there are actually not enough resources in the europe-west2-b zone to create a 2 TB persistent disk. Either try a smaller volume or use a different zone.
There is an example for GCE in the docs. Create a new StorageClass specifying, say, the europe-west1-b zone (which happens to be cheaper than europe-west2-b), like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-europe-west1-b
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west1-b
And modify your PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: gce-pd-europe-west1-b
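Note that storageClassName is immutable on an existing PersistentVolumeClaim, so you can't simply edit the Pending claim in place; delete it and recreate it with the new class (the file name below is hypothetical):
kubectl delete pvc my-scratch-space
kubectl apply -f my-scratch-space-pvc.yaml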