External provisioner "alicloud/disk fails to create volume - kubernetes

I use a managed k8s solution on AliCloud.
I created a StorageClass like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-pv-class
parameters:
  type: cloud_ssd
  regionid: cn-beijing
  zoneid: cn-beijing-g
provisioner: alicloud/disk
reclaimPolicy: Retain
volumeBindingMode: Immediate
When I try to create a PVC:
apiVersion: v1
kind: List
items:
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: node-pv
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: alicloud-pv-class
    resources:
      requests:
        storage: 8Gi
I get:
Name:          node-pv
Namespace:     default
StorageClass:  alicloud-pv-class
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: alicloud/disk
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ---                ----                         -------
  Normal  ExternalProvisioning  11s (x6 over 75s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "alicloud/disk" or manually created by system administrator
Even when I create the PVC manually, bind it to a PV, and install the Helm chart for ZooKeeper, I get this from the pod:
mkdir: cannot create directory '/bitnami/zookeeper/data': Permission denied
Any ideas?
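A note on that second error: Permission denied on /bitnami/zookeeper/data is usually not a provisioning problem. Bitnami images run as a non-root user (UID 1001), so the mounted volume must be writable by that user. A minimal sketch of Helm values, assuming the chart version exposes a pod securityContext (the exact keys depend on the chart version, so check its values file):

# zookeeper-values.yaml (sketch; key names depend on the chart version)
securityContext:
  enabled: true
  runAsUser: 1001   # Bitnami's non-root user
  fsGroup: 1001     # makes the volume group-writable for that user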

I was not able to solve the issue, but the problems I had were related to Aliyun Managed Serverless K8s. Even Aliyun Support admitted that such a configuration is difficult, and they did not provide any solution. We decided to go with Managed K8s (non-serverless) instead. We use Terraform scripts, and everything works out of the box, including Ingress, LogTrail, and PVCs, which were a real pain with serverless managed k8s.
The point is: don't waste your time with Managed Serverless K8s if you need logs and PVCs. It doesn't work - at least it hasn't worked for us, and there was not much help from Aliyun support for that matter.

If you are using Alicloud ASK, this file https://github.com/AliyunContainerService/serverless-k8s-examples/blob/master/volumes/alicloud-disk-controller.yaml works for me.

Related

RabbitMQ cluster-operator does not work in Kubernetes

The RabbitMQ cluster operator does not work in my Kubernetes cluster.
I have a Kubernetes 1.17.17 cluster of 3 nodes, deployed with Rancher.
According to this instruction I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
It's OK! But...
I have created this very simple configuration for the instance according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error: error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and a PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I end up getting an error on the volume, which is generated automatically:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (this is NOT the recommended approach).
I really recommend you configure dynamic provisioning -
this will allow you to generate PersistentVolumes automatically.
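For illustration, a StorageClass for such a dynamic provisioner could look like the sketch below. The provisioner string and the archiveOnDelete parameter assume the nfs-subdir-external-provisioner is installed; the exact provisioner name depends on how it was deployed, so treat both as placeholders.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: cluster.local/nfs-subdir-external-provisioner  # assumed name; check your deployment
parameters:
  archiveOnDelete: "false"  # delete data when the PVC is deleted

A PVC that sets storageClassName: nfs-dynamic would then get its PersistentVolume created automatically.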
Manually creating a PersistentVolume
As I mentioned, it isn't the recommended approach, but it may be useful when we want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
And then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine where the Pod will be running; see the sketch below for pinning the volume to one node.
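One way around that is to pin the PersistentVolume to a specific node with nodeAffinity and use a local volume instead of hostPath. A sketch, assuming a node named node1 (local PVs are typically paired with a WaitForFirstConsumer StorageClass):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /mnt/rabbitmq        # directory must already exist on that node
  nodeAffinity:                # restricts which node can access this volume
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1              # placeholder; use a real node name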
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
persistentvolume/pv   10Gi       RWO            Recycle          Bound    test-rabbitmq/persistence-rabbitmq-server-0                            11m

NAMESPACE       NAME                                                  STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-rabbitmq   persistentvolumeclaim/persistence-rabbitmq-server-0   Bound    pv       10Gi       RWO                           11m

# kubectl get pod -n test-rabbitmq
NAME                READY   STATUS    RESTARTS   AGE
rabbitmq-server-0   1/1     Running   0          11m

Persistent volume claim not claiming the volume

I have this persistent volume claim
$ kubectl get pvc -ngitlab-managed-apps
NAME                           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-prometheus-server   Pending                                                     0s
$ kubectl describe pvc prometheus-prometheus-server -ngitlab-managed-apps
Name:          prometheus-prometheus-server
Namespace:     gitlab-managed-apps
StorageClass:
Status:        Pending
Volume:
Labels:        app=prometheus
               chart=prometheus-9.5.2
               component=server
               heritage=Tiller
               release=prometheus
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    prometheus-prometheus-server-78bdf8f5b7-pkvcr
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ---                ----                         -------
  Normal  FailedBinding  14s (x5 over 60s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
And I have created this persistent volume
$ kubectl get pv
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                               STORAGECLASS   REASON   AGE
prometheus-prometheus-server   8Gi        RWO            Retain           Released   gitlab-managed-apps/prometheus-prometheus-server    manual                  17m
$ kubectl describe pv prometheus-prometheus-server
Name:            prometheus-prometheus-server
Labels:          type=local
Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"prometheus-prometheus-server"}...
                 pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Released
Claim:           gitlab-managed-apps/prometheus-prometheus-server
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        8Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /var/prometheus-server
    HostPathType:
Events:            <none>
Why is the claim not claiming the volume? Besides the name, is there anything else that needs to match? Are there any logs I should look into? For now I only see "no persistent volumes available for this claim and no storage class is set"
To understand the error message, you need to understand how static and dynamic provisioning work.
When none of the static PVs the administrator created matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume especially for the PVC. This provisioning is based on StorageClasses.
After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
Your error says,"Your PVC hasn't found a matching PV and you also haven't mentioned any storageClass name".
Your PV has StorageClass: manual, but your PVC doesn't have any storageClass (StorageClass: ""). Adding StorageClass: manual to your PVC should solve your problem.
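A sketch of the corrected claim, with the size and names taken from your output:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-prometheus-server
  namespace: gitlab-managed-apps
spec:
  storageClassName: manual   # must match the PV's StorageClass
  accessModes:
    - ReadWriteOnce          # the PV offers RWO
  resources:
    requests:
      storage: 8Gi

Note also that your PV is in the Released state, which means it still holds a claimRef to the old claim; a Released PV will not bind to a new PVC until that claimRef is removed (for example by editing the PV and deleting the spec.claimRef block).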
You need to choose one of the following provisioning options.
Static provisioning:
create a PV
create a PVC with a label selector, so that it can find a PV carrying the mentioned labels.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  selector: # <----- here
    matchLabels:
      release: "stable"
  ... ...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
  labels: # <---- here
    release: "stable"
Dynamic provisioning:
create a StorageClass with a provisioner mentioned
create a PVC with that storageClass name mentioned
Or
create both a PV and a PVC with the same storageClass name mentioned.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: some-provisioner
... ... ...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: slow # <---- here
... ... ...
This is a short description; please visit the official docs for more details.

dynamic storage class on gcp kubernetes

I am trying to set up a k8s cluster on CentOS 7, spun up on GCP cloud. I created 3 masters and 2 worker nodes; installation and checks are fine. Now I am trying to create a dynamic storage class to use GCP disks, but somehow the claim goes into a Pending state and no error messages are found. Can anyone point me to the correct docs or steps to make sure it works?
[root@master1 ~]# cat slowdisk.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: us-east1-b
[root@master1 ~]# cat pclaim-slow.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
[root@master1 ~]# kubectl get pvc
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim2   Pending                                                     5m58s
[root@master1 ~]# kubectl describe pvc
Name:          claim2
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"claim2","namespace":"default"},"spec":{"accessModes...
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    <none>
Events:        <none>
You haven't mentioned the storage class name. The value storageClassName: standard needs to be added to the PVC manifest file pclaim-slow.yaml.
You are missing the storageClassName in the PVC YAML. Below is the updated pvc.yaml. Check the documentation as well.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 2G
It's kind of weird that you are not getting any error events from your PersistentVolumeClaim.
You can read Provisioning regional Persistent Disks, which says:
It's in Beta
This is a Beta release of local PersistentVolumes. This feature is not covered by any SLA or deprecation policy and might be subject to backward-incompatible changes.
Also, the pd-standard type, which you have configured, is supposed to be used for drives of at least 200Gi.
Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
You can read more regarding Dynamic Persistent Volumes.
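If you need a small disk, pd-ssd is the usual alternative; a sketch based on your slowdisk.yaml, where only the class name and the type parameter change:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd          # SSD-backed persistent disk
  replication-type: none
  zone: us-east1-b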
If you encounter the error Failed to get GCE GCECloudProvider, that means your cluster was not configured to use GCE as the cloud provider. It can be fixed by adding cloud-provider: gce to the controller manager, or just by applying this YAML:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cloud-provider: gce
    cloud-config: /etc/kubernetes/cloud
  extraVolumes:
  - name: cloud-config
    hostPath: /etc/kubernetes/cloud-config
    mountPath: /etc/kubernetes/cloud
    pathType: FileOrCreate
kubeadm upgrade apply --config gce.yaml
Let me know if that helps, if not I'll try to be more helpful.

WaitForFirstConsumer PersistentVolumeClaim waiting for first consumer to be created before binding

I set up a new k8s cluster on a single node, which is tainted. But the PersistentVolume cannot be created successfully when I am trying to create a simple PostgreSQL instance.
Some detailed information is below.
The StorageClass is copied from the official page: https://kubernetes.io/docs/concepts/storage/storage-classes/#local
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The StatefulSet is:
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  ...
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      storageClassName: local-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
About the running StorageClass:
$ kubectl describe storageclasses.storage.k8s.io
Name:                  local-storage
IsDefaultClass:        No
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner:           kubernetes.io/no-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
About the running PersistentVolumeClaim:
$ kubectl describe pvc
Name:          postgres-data-postgres-0
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app=postgres
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type    Reason                Age                            From                         Message
  ----    ------                ---                            ----                         -------
  Normal  WaitForFirstConsumer  <invalid> (x2 over <invalid>)  persistentvolume-controller  waiting for first consumer to be created before binding
K8s versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
The app is waiting for the Pod, while the Pod is waiting for a PersistentVolume through its PersistentVolumeClaim.
However, with the kubernetes.io/no-provisioner provisioner, the PersistentVolume has to be prepared by the user before use.
My previous YAMLs lacked a PersistentVolume like this:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-data
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  local:
    path: /data/postgres
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: app
          operator: In
          values:
          - postgres
The local path /data/postgres has to be prepared on the node before use;
Kubernetes will not create it automatically.
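Also keep in mind that with volumeBindingMode: WaitForFirstConsumer the PVC stays Pending until a Pod that uses it is scheduled; binding is triggered by the first consumer. A minimal sketch of such a consumer (the name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer          # placeholder name
spec:
  containers:
  - name: app
    image: busybox            # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: postgres-data-postgres-0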
I just ran into this myself and was completely thrown for a loop until I realized that the StorageClass's VolumeBindingMode was set to WaitForFirstConsumer instead of my intended value of Immediate. This value is immutable, so you will have to:
Get the storage class yaml:
kubectl get storageclasses.storage.k8s.io gp2 -o yaml > gp2.yaml
or you can also just copy the example from the docs here (make sure the metadata names match). Here is what I have configured:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
And delete the old StorageClass before recreating it with the new volumeBindingMode set to Immediate.
Note: The EKS cluster may need permissions to create cloud resources like EBS or EFS. Assuming EBS, you should be good with arn:aws:iam::aws:policy/AmazonEKSClusterPolicy.
After doing this you should have no problem creating and using dynamically provisioned PVs.
In my case, I had a claimRef without a specified namespace. The correct syntax is:
claimRef:
  namespace: default
  name: my-claim
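For context, claimRef sits under the PV's spec and pre-binds the volume to one specific claim; a sketch (capacity, path, and names are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv               # placeholder name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data         # placeholder path
  claimRef:                 # reserves this PV for exactly this PVC
    namespace: default
    name: my-claim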
The StatefulSet also prevented initialization; I had to replace it with a Deployment.
This was a huge headache.
For me the problem was mismatched accessModes fields in the PV and PVC: the PVC was requesting RWX/ReadWriteMany while the PV was offering RWO/ReadWriteOnce.
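The fix is to make the two sides agree; the PV must offer the mode the PVC requests. A minimal sketch of the two matching fragments:

# PersistentVolume side
spec:
  accessModes:
    - ReadWriteOnce
# PersistentVolumeClaim side: request only what the PV offers
spec:
  accessModes:
    - ReadWriteOnce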
The accepted answer didn't work for me. I think it's because the app key won't be set on the node before the StatefulSet's Pods are deployed, preventing the PersistentVolume from matching the nodeSelector (and preventing the Pods from starting with the error didn't find available persistent volumes to bind). To fix this deadlock, I defined one PersistentVolume for each node (this may not be ideal, but it worked):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-node1
  labels:
    type: local
spec:
  […]
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
I'm stuck in this vicious loop myself.
I'm trying to create a Kubegres cluster (which relies on dynamic provisioning, as per my understanding).
I'm using RKE on a local-servers-like setup,
and I have the same scheduling issue as the one initially mentioned.
Note that the access modes of the PVCs (created by Kubegres) are set to nothing, as per the output below.
[rke@rke-1 manifests]$ kubectl get pv,pvc
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
persistentvolume/local-vol   20Gi       RWO            Delete           Available           local-storage            40s

NAME                                               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
persistentvolumeclaim/local-vol-mypostgres-1-0     Pending                                      local-storage   6m42s
persistentvolumeclaim/postgres-db-mypostgres-1-0   Pending                                      local-storage   6m42s
As an update, the issue in my case was that the PVC was not finding a proper PV, which was supposed to be dynamically provisioned. But for local storage classes this feature is not yet supported, therefore I had to use a third-party solution, which solved my issue:
https://github.com/rancher/local-path-provisioner
This issue mainly happens with WaitForFirstConsumer when you define the nodeName in the Deployment/Pod specifications. Please make sure you don't define nodeName and hard-bind the pod through it, because setting nodeName bypasses the scheduler, and with WaitForFirstConsumer it is the scheduler that triggers volume binding. The issue should be resolved once you remove nodeName; see the sketch below.
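If you do need to target a particular node, a nodeSelector keeps the scheduler in the loop; a sketch (names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app                             # placeholder name
spec:
  nodeSelector:
    kubernetes.io/hostname: node1       # placeholder node name
  containers:
  - name: app
    image: busybox                      # placeholder image
    command: ["sleep", "3600"]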
I believe this can be a valid message; it means that no containers have started yet that have volumes bound to the persistent volume claim.
I experienced this issue on Rancher Desktop. It turned out the problem was caused by Rancher not running properly after a macOS upgrade. The containers were not starting and would stay in a Pending state.
After resetting Rancher Desktop (using the UI), the containers were able to start well and the message disappeared.
waiting for first consumer on a PersistentVolumeClaim means the Pod which requires this PVC has not been scheduled. kubectl describe pod may give some more clues. In my case the node was not able to schedule this Pod since the pod limit on the node was 110 and the deployment was exceeding it. Hope this helps to identify the issue faster. Increasing the pod limit and restarting the kubelet on the node solved it; a sketch of the relevant kubelet setting follows.
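For reference, on nodes where the kubelet is driven by a config file, the limit is the maxPods field; a sketch (the path is the typical kubeadm location and the value is just an example):

# /var/lib/kubelet/config.yaml (typical kubeadm location; may differ)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250   # default is 110; restart the kubelet after changing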

storageclass.storage.k8s.io "standard" not found for pvc on bare metal Kubernetes cluster

I've created a PersistentVolumeClaim on my custom Kubernetes cluster, however it seems to be stuck in Pending...
Do I need to install or configure something additional? Or is this functionality only available on GCP/AWS?
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
metadata:
  name: testingchris
describe pvc:
Name:          testingchris
Namespace:     diyclientapps
StorageClass:  standard
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"testingchris","namespace":"diyclientapps"},"spec":{"accessModes"...
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ---               ----                         -------
  Warning  ProvisioningFailed  8s (x3 over 36s)  persistentvolume-controller  storageclass.storage.k8s.io "standard" not found
A PVC is just a claim, a declaration of one's requirements for persistent storage.
For a PVC to bind, a PV that matches the PVC's requirements must show up, and that can happen in two ways: manual provisioning (adding a PV from e.g. kubectl) or Dynamic Volume Provisioning.
What you are experiencing is that your current setup did not auto-provision a PV for your PVC.
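On a bare-metal cluster nothing provides the standard class unless you deploy a provisioner yourself, so either install a dynamic provisioner or create a matching PV by hand. A hostPath sketch, assuming the 3Gi/RWO claim above (the PV name and path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: testingchris-pv            # placeholder name
spec:
  storageClassName: standard       # must match the PVC's storageClassName
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/testingchris        # placeholder directory on the node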