Kubernetes OpenSearch Deployment | "no persistent volumes available for this claim and no storage class is set" error

We deployed OpenSearch with Kubernetes according to the documentation instructions (https://opensearch.org/docs/latest/opensearch/install/helm/) on a 3-node cluster. After deployment the pods are in Pending state, and when checking them we see the following message:
"
persistentvolume-controller no persistent volumes available for this claim and no storage class is set
"
Can you please advise what could be wrong in our OpenSearch/Kubernetes deployment, or what could be missing from a configuration perspective?
Sharing some info:
Cluster nodes:
[root#I***-M1 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
ir***-m1   Ready    control-plane,master   4h34m   v1.23.4
ir***-w1   Ready    <none>                 3h41m   v1.23.4
ir***-w2   Ready    <none>                 3h19m   v1.23.4
Pods State:
[root#I****1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
opensearch-cluster-master-0 0/1 Pending 0 80m
opensearch-cluster-master-1 0/1 Pending 0 80m
opensearch-cluster-master-2 0/1 Pending 0 80m
[root#I****M1 ~]# kubectl describe pvc
Name: opensearch-cluster-master-opensearch-cluster-master-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app.kubernetes.io/instance=my-deployment
app.kubernetes.io/name=opensearch
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: opensearch-cluster-master-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2m24s (x18125 over 3d3h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
.....
[root#IR****M1 ~]# kubectl get pv
NAME                                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
opensearch-cluster-master-opensearch-cluster-master-0    30Gi       RWO            Retain           Available           manual                  6h24m
opensearch-cluster-master-opensearch-cluster-master-1    30Gi       RWO            Retain           Available           manual                  6h22m
opensearch-cluster-master-opensearch-cluster-master-2    30Gi       RWO            Retain           Available           manual                  6h23m
task-pv-volume                                           60Gi       RWO            Retain           Available           manual                  7h48m
[root#I****M1 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
opensearch-cluster-master-opensearch-cluster-master-0 Pending 3d3h
opensearch-cluster-master-opensearch-cluster-master-1 Pending 3d3h
opensearch-cluster-master-opensearch-cluster-master-2 Pending 3d3h

...no storage class is set...
Try upgrading your deployment with a storage class set. Assuming you run on AWS EKS: helm upgrade my-deployment opensearch/opensearch --set persistence.storageClass=gp2
If you are running on GKE, change gp2 to standard. On AKS, change it to default.
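On a bare-metal cluster like the one shown above, where the PVs were pre-created with the storage class manual, the claim has to request that same class to bind. A minimal sketch of the equivalent values-file override, assuming the chart's standard persistence block (persistence.storageClass is confirmed by the command above; enabled and size are assumptions):

# myvalues.yaml -- hedged sketch of the opensearch chart's persistence settings
persistence:
  enabled: true
  storageClass: "manual"   # must match the storageClassName of the pre-created PVs
  size: 30Gi               # must not exceed the capacity of the available PVs

helm upgrade my-deployment opensearch/opensearch -f myvalues.yaml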

Related

Why are Velero and Minio backup/restore not working for a PGO cluster?

I have set up Minio and Velero backups for my k8s cluster. Everything works fine in that I can take backups and see them in Minio. I have a PGO operator cluster hippo running with a load-balancer service. When I restore a backup via Velero, everything seems okay: it creates the namespaces and all the deployments and pods in Running state.
However, I am not able to connect to my database via pgAdmin. When I delete a pod, the recreated pod stays Pending with an error about an unbound PVC.
This is the output.
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
master#masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get PV
error: the server doesn't have a resource type "PV"
master#masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                              STORAGECLASS       REASON   AGE
pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-4bhf-pgdata   openebs-hostpath            16m
pvc-2dd12937-a70e-40b4-b1ad-be1c9f7b39ec   5G         RWO            Delete           Bound    default/local-hostpath-pvc                         openebs-hostpath            6d9h
pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-xvhq-pgdata   openebs-hostpath            16m
pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-p4ct-pgdata      openebs-hostpath            7m32s
pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-s6fs-pgdata      openebs-hostpath            7m33s
pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-c4rt-pgdata   openebs-hostpath            16m
pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-29gh-pgdata      openebs-hostpath            7m32s
pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            Delete           Bound    postgres-operator/hippo-repo2                      openebs-hostpath            7m30s
master#masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
hippo-instance2-29gh-pgdata Bound pvc-d4629dba-b172-47ea-ab01-12a9039be571 5Gi RWO openebs-hostpath 7m51s
hippo-instance2-p4ct-pgdata Bound pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038 5Gi RWO openebs-hostpath 7m51s
hippo-instance2-s6fs-pgdata Bound pvc-968d9794-e4ba-479c-9138-8fbd85422920 5Gi RWO openebs-hostpath 7m51s
hippo-repo2 Bound pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38 5Gi RWO openebs-hostpath 7m51s
hippo-s3-instance2-4bhf-pgdata Bound pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101 5Gi RWO openebs-hostpath 16m
hippo-s3-instance2-c4rt-pgdata Bound pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad 5Gi RWO openebs-hostpath 16m
hippo-s3-instance2-xvhq-pgdata Bound pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b 5Gi RWO openebs-hostpath 16m
hippo-s3-repo1 Pending pgo 16m
master#masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME READY STATUS RESTARTS AGE
hippo-backup-txk9-rrk4m 0/1 Completed 0 7m43s
hippo-instance2-29gh-0 4/4 Running 0 8m5s
hippo-instance2-p4ct-0 4/4 Running 0 8m5s
hippo-instance2-s6fs-0 4/4 Running 0 8m5s
hippo-repo-host-0 2/2 Running 0 8m5s
hippo-s3-instance2-c4rt-0 3/4 Running 0 16m
hippo-s3-repo-host-0 0/2 Pending 0 16m
pgo-7c867985c-kph6l 1/1 Running 0 16m
pgo-upgrade-69b5dfdc45-6qrs8 1/1 Running 0 16m
master#masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl delete pods hippo-s3-repo-host-0 -n postgres-operator
pod "hippo-s3-repo-host-0" deleted
master#masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME READY STATUS RESTARTS AGE
hippo-backup-txk9-rrk4m 0/1 Completed 0 7m57s
hippo-instance2-29gh-0 4/4 Running 0 8m19s
hippo-instance2-p4ct-0 4/4 Running 0 8m19s
hippo-instance2-s6fs-0 4/4 Running 0 8m19s
hippo-repo-host-0 2/2 Running 0 8m19s
hippo-s3-instance2-c4rt-0 3/4 Running 0 17m
hippo-s3-repo-host-0 0/2 Pending 0 2s
pgo-7c867985c-kph6l 1/1 Running 0 17m
pgo-upgrade-69b5dfdc45-6qrs8 1/1 Running 0 17m
master#masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
hippo-instance2-29gh-pgdata Bound pvc-d4629dba-b172-47ea-ab01-12a9039be571 5Gi RWO openebs-hostpath 8m45s
hippo-instance2-p4ct-pgdata Bound pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038 5Gi RWO openebs-hostpath 8m45s
hippo-instance2-s6fs-pgdata Bound pvc-968d9794-e4ba-479c-9138-8fbd85422920 5Gi RWO openebs-hostpath 8m45s
hippo-repo2 Bound pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38 5Gi RWO openebs-hostpath 8m45s
hippo-s3-instance2-4bhf-pgdata Bound pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101 5Gi RWO openebs-hostpath 17m
hippo-s3-instance2-c4rt-pgdata Bound pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad 5Gi RWO openebs-hostpath 17m
hippo-s3-instance2-xvhq-pgdata Bound pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b 5Gi RWO openebs-hostpath 17m
hippo-s3-repo1 Pending pgo 17m
What do I want?
I want Velero to restore the full backup so that I can access my databases just as I could before the restore. It seems like Velero is not able to perform full backups.
Any suggestions will be appreciated.
Velero is a backup and restore solution for Kubernetes clusters and their associated persistent volumes. While Velero does not currently support full backup and restore of databases (refer to these limitations), it does support snapshotting and restoring persistent volumes. This means that, while you may not be able to directly restore a full database, you can restore the persistent volumes associated with the database and then use the appropriate tools to restore the data from the snapshots. Additionally, Velero's plugin architecture allows you to extend its capabilities with custom plugins that add custom backup and restore functionality.
Refer to this blog from DigitalOcean by Hanif Jetha and Jamon Camisso for more information on backup and restore.
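For illustration, a typical snapshot-based backup and restore flow with the Velero CLI looks like the following (a sketch; the backup name hippo-backup is hypothetical, and the namespace is taken from the output above):

velero backup create hippo-backup --include-namespaces postgres-operator --snapshot-volumes
velero restore create --from-backup hippo-backup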
Your setup is missing the PV or PVC, based on the error you have shared.
Velero can generally back up PVCs and PVs as snapshots when using the AWS or GCP plugin, and when you restore, it creates the PVC and PV for you as well.
I have migrated an Elasticsearch database with Velero along with its PVCs, and it worked well in my case. However, are you using the same cloud provider and storage class in both clusters? Why is the PVC for hippo-s3-repo1 pending? Did you find the reason for that?
Here is my article; however, I was using the plugin and a bucket as storage: https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8

Default Grafana K8s app PV issue: FailedBinding persistentvolume-controller no persistent volumes available for this claim and no storage class is set

I am simply trying to deploy this Grafana app as-is; no changes to the YAML have been made: https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/
The VMs are Ubuntu 20.04 LTS. The Kubernetes cluster is made up of the control-plane/master node and 3 worker nodes:
root#k8s-master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 35d v1.24.2
k8s-worker1 Ready worker 4h24m v1.24.2
k8s-worker2 Ready worker 4h24m v1.24.2
k8s-worker3 Ready worker 4h24m v1.24.2
Other K8s Pods such as NGINX run without issue.
However, the Grafana pod cannot start and is stuck in a Pending state:
root#k8s-master:~# kubectl create -f grafana.yaml
persistentvolumeclaim/grafana-pvc created
deployment.apps/grafana created
service/grafana created
# time passed here...
root#k8s-master:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
grafana-9bd5bbd6b-k7ljz 0/1 Pending 0 3h39m
Troubleshooting this, I found there is an issue with the storage PersistentVolumeClaim (the pvc):
root#k8s-master:~# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
grafana-pvc Pending 2m22s
root#k8s-master:~#
root#k8s-master:~# kubectl describe pvc grafana-pvc
Name: grafana-pvc
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: grafana-9bd5bbd6b-k7ljz
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 6s (x11 over 2m30s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
UPDATE:
I created a StorageClass and set it as default:
root#k8s-master:~# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
generic (default) no-provisioner Delete Immediate false 19m
I also created a PersistentVolume:
root#k8s-master:~# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Released default/task-pv-claim manual 12m
However, now when I try to deploy the Grafana PVC it is still stuck - why?
root#k8s-master:~# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
grafana-pvc Pending generic 4m16s
root#k8s-master:~# kubectl describe pvc grafana-pvc
Name: grafana-pvc
Namespace: default
StorageClass: generic
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: no-provisioner
volume.kubernetes.io/storage-provisioner: no-provisioner
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: grafana-9bd5bbd6b-mmqs6
grafana-9bd5bbd6b-pvhtm
grafana-9bd5bbd6b-rtwgj
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 12s (x19 over 4m27s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "no-provisioner" or manually created by system administrator
I tried creating the Grafana configuration file from the documentation and was able to create it successfully. The pod has a Running state, and the PVC (PersistentVolumeClaim) shows the storage class as standard.
The below is the output of PVC:
$ kubectl describe pvc grafana-pvc
Name: grafana-pvc
Namespace: default
StorageClass: standard
Status: Bound
Volume: pvc-ee20cc5d-6ca5-4075-b5f3-d1a6323a5241
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: grafana-75789d79d4-wbgtv
Events: <none>
But in your use case the StorageClass field is showing as empty, so try deleting the existing resources and recreating them from the Grafana configuration file. If that does not work and you are still facing the same error message, "no persistent volumes available for this claim and no storage class is set", then you will have to create a PV (PersistentVolume).
That error means your PVC hasn't found a matching PV and you haven't specified a storageClassName either. After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
To resolve your issue, create a StorageClass with no provisioner, then create a PV (PersistentVolume) that names that storageClassName, and then create the PVC and Pod/Deployment, as sketched below.
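A minimal sketch of that setup, assuming a local hostPath volume (the names, path, and sizes here are hypothetical and must match your environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # static provisioning only
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /mnt/data/grafana   # hypothetical path; must exist on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi

Note that the provisioner for a static class is kubernetes.io/no-provisioner; a bare no-provisioner string (as in the StorageClass created in the question) makes Kubernetes wait for an external provisioner of that name, which matches the ExternalProvisioning event shown above.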
Refer to stackpost1 and stackpost2 for more information.

When does Ansible AWX install PostgreSQL?

I tried installing Ansible AWX. However, AWX also installs PostgreSQL on the system (I am using Kubernetes for AWX, btw). I understand that PostgreSQL is one of the requirements for AWX.
Now, for another project, I have to install PostgreSQL on Kubernetes itself. I looked up a method online and it is working. However, is there some way I can do it automatically, just like the installation of AWX?
Thanks,
Suhas
This can be achieved by using the awx-operator. Below is a demo installation via Helm. By default, AWX and the PostgreSQL DB are located on the same worker node, but this requires a default StorageClass (SC).
Helm Deployment
Configuring Helm sources for awx-operator
┌──[root#vms81.liruilongs.github.io]-[~/AWK]
└─$helm repo add awx-operator https://ansible.github.io/awx-operator/
"awx-operator" has been added to your repository.
┌──[root#vms81.liruilongs.github.io]──[~/AWK]
└─$helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "liruilong_repo" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "azure" chart repository
...Unable to get an update from the "ali" chart repository (https://apphub.aliyuncs.com):
        Failed to fetch https://apphub.aliyuncs.com/index.yaml: 504 Gateway Time-out
...Successfully got an update from the "awx-operator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Search the repositories for the awx-operator chart
┌──[root#vms81.liruilongs.github.io]-[~/AWK]
└─$helm search repo awx-operator
NAME CHART VERSION APP VERSION DESCRIPTION
awx-operator/awx-operator 0.30.0 0.30.0 A Helm chart for the AWX Operator
For a custom-parameter installation: helm install my-awx-operator awx-operator/awx-operator -n awx --create-namespace -f myvalues.yaml.
If you use a custom installation, you need to enable the corresponding switches in myvalues.yaml; you can configure HTTPS, a standalone PG database, an LB, LDAP authentication, etc. The file template can be obtained by pulling the chart package and using the values.yaml inside it as the template (see the sketch below).
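A sketch of fetching that template with standard Helm commands (the destination file name is arbitrary):

helm pull awx-operator/awx-operator --untar
cp awx-operator/values.yaml myvalues.yaml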
We use the default configuration here to install, no need to specify a configuration file.
┌──[root#vms81.liruilongs.github.io]-[~/AWK]
└─$helm install -n awx --create-namespace my-awx-operator awx-operator/awx-operator
NAME: my-awx-operator
LAST DEPLOYED: Mon Oct 10 16:29:24 2022
NAMESPACE: awx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWX Operator installed with Helm Chart version 0.30.0
┌──[root#vms81.liruilongs.github.io]──[~/AWK]
└─$
Then look at the pod status:
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-postgres-13-0 0/1 Pending 0 105s
awx-operator-controller-manager-79ff9599d8-2v5fn 2/2 Running 0 128m
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-demo-postgres-13 ClusterIP None <none> 5432/TCP 5m48s
awx-operator-controller-manager-metrics-service ClusterIP 10.107.17.167 <none> 8443/TCP 132m
The corresponding PG pod, awx-demo-postgres-13-0, is Pending now; look at the events:
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pods awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x8 over 7m31s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Pending 10m
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 82s (x42 over 11m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set.
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
No resources found
OK, the reason for Pending is that there is no default SC.
For stateful applications, we need a default SC (dynamic volume provisioning) before the StatefulSet is generated; it will dynamically handle the creation of PVs and PVCs and provide the data storage for PG, so we need to create an SC here.
For convenience, we use local storage as the backend. In general, a PV is network storage that does not belong to any particular node, so NFS is the more common route; the SC specifies its allocator through the provisioner field. After the StorageClass is created, PVCs that do not name a class consume storage allocated by the default SC. One way to create such an SC is shown below.
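The output that follows shows the rancher.io/local-path provisioner; for reference, its documented install is a single manifest (this sketch assumes the upstream local-path-provisioner project, which creates the local-path SC used in the rest of this walkthrough):

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml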
To confirm successful creation
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m6s
Set to default SC:
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/local-path patched
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-postgres-13-0 0/1 Pending 0 46m
awx-operator-controller-manager-79ff9599d8-2v5fn 2/2 Running 0 173m
Export the PVC to a YAML file, then delete and recreate it:
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc postgres-13-awx-demo-postgres-13-0 -o yaml > postgres-13-awx-demo-postgres-13-0.yaml
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl delete -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim "postgres-13-awx-demo-postgres-13-0" deleted
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f postgres-13-awx-demo-postgres-13-0.yaml
persistentvolumeclaim/postgres-13-awx-demo-postgres-13-0 created
Check the status of the PVC. You need to wait a while here; Bound means it has been bound.
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Pending local-path 3s
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl describe pvc postgres-13-awx-demo-postgres-13-0 | grep -i -A 10 event
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForPodScheduled 42s persistentvolume-controller waiting for pod awx-demo-postgres-13-0 to be scheduled
Normal ExternalProvisioning 41s persistentvolume-controller waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
Normal Provisioning 41s rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8 External provisioner is provisioning volume for claim "awx/postgres-13-awx-demo-postgres-13-0"
Normal ProvisioningSucceeded 39s rancher.io/local-path_local-path-provisioner-7c795b5576-gmrx4_d69ca393-bcbe-4abb-8b22-cd8db3b26bf8 Successfully provisioned volume pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-13-awx-demo-postgres-13-0 Bound pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3 8Gi RWO local-path 53s
┌──[root#vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$
┌──[root#vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-44b7687c-de18-45d2-bef6-8fb2d1c415d3 8Gi RWO Delete Bound awx/postgres-13-awx-demo-postgres-13-0 local-path 54s
Look at the pod status; the PG-DB related pod is now created successfully. You need to wait a while here, then you will see the pods are healthy:
┌──[root#vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-demo-65d9bf775b-hc58x 4/4 Running 0 79m
awx-demo-postgres-13-0 1/1 Running 0 143m
awx-operator-controller-manager-79ff9599d8-m7t8k 2/2 Running 0 81m
View the SVC and test access:
┌──[root#vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-demo-postgres-13 ClusterIP None <none> 5432/TCP 143m
awx-demo-service NodePort 10.104.176.210 <none> 80:30066/TCP 79m
awx-operator-controller-manager-metrics-service ClusterIP 10.108.71.67 <none> 8443/TCP 82m
┌──[root#vms81.liruilongs.github.io]-[~/ansible]
└─$curl 192.168.26.82:30066
<!doctype html><html lang="en"><head><script nonce="cw6jhvbF7S5bfKJPsimyabathhaX35F5hIyR7emZNT0=" type="text/javascript">window.....
┌──[root#vms81.liruilongs.github.io]-[~/ansible]
└─$
Get the password:
┌──[root#vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get secrets
NAME TYPE DATA AGE
awx-demo-admin-password Opaque 1 146m
awx-demo-app-credentials Opaque 3 82m
awx-demo-broadcast-websocket Opaque 1 146m
awx-demo-postgres-configuration Opaque 6 146m
awx-demo-receptor-ca kubernetes.io/tls 2 82m
awx-demo-receptor-work-signing Opaque 2 82m
awx-demo-secret-key Opaque 1 146m
awx-demo-token-sc92t kubernetes.io/service-account-token 3 82m
awx-operator-controller-manager-token-tpv2m kubernetes.io/service-account-token 3 84m
default-token-864fk kubernetes.io/service-account-token 3 4h32m
redhat-operators-pull-secret Opaque 1 146m
sh.helm.release.v1.my-awx-operator.v1 helm.sh/release.v1 1 84m
┌──[root#vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$echo $(kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode)
tP59YoIWSS6NgCUJYQUG4cXXJIaIc7ci
┌──[root#vms81.liruilongs.github.io]-[~/awx-operator/crds]
└─$
Access test
The service is published as NodePort by default, so we can access it from any node IP in the subnet via node IP plus port: http://192.168.26.82:30066/#/login

Disabling persistence does not work in Redis Enterprise cluster on Kubernetes

ITNOA
I am trying to create a Redis Enterprise cluster with the Redis operator.
The declaration of my cluster looks like the below:
apiVersion: "app.redislabs.com/v1"
kind: "RedisEnterpriseCluster"
metadata:
  name: "harbor-cluster"
spec:
  nodes: 3
  persistentSpec:
    enabled: false
  redisEnterpriseNodeResources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 1000m
      memory: 1Gi
But my problem is that even though I set persistentSpec.enabled to false, kubectl describe pvc redis-enterprise-storage-harbor-cluster-0 shows Redis still trying to claim a PV, and the bootstrapping of my pods fails.
Name: redis-enterprise-storage-harbor-cluster-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=redis-enterprise
redis.io/cluster=harbor-cluster
redis.io/role=node
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: harbor-cluster-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 108s (x1321 over 5h31m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
If I run kubectl get pods, you can see harbor-cluster-0 is not ready (because the bootstrapping of the Redis pod failed):
NAME READY STATUS RESTARTS AGE
harbor-cluster-0 1/2 Running 0 72s
harbor-cluster-services-rigger-557b6f75c8-hgfzj 1/1 Running 0 73s
redis-enterprise-operator-7f8d8548c5-qvd48 2/2 Running 0 6h16m
My question is: how do I resolve this?
Posting a comment as the community wiki answer for better visibility:
Is it possible that you previously created a Redis Enterprise Cluster with the same name? I am thinking the PVC could be from a previous run. Can you check whether the PVC is older than the REC by comparing their creation timestamps?
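A quick way to compare the timestamps (a sketch using standard kubectl output options; rec is the short name the Redis operator registers for RedisEnterpriseCluster):

kubectl get pvc redis-enterprise-storage-harbor-cluster-0 -o jsonpath='{.metadata.creationTimestamp}'
kubectl get rec harbor-cluster -o jsonpath='{.metadata.creationTimestamp}'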

deploying Portainer on Kubernetes Cluster failed

After deploying Portainer on a Kubernetes cluster (1 master, 2 workers), following https://documentation.portainer.io/v2.0/deploy/ceinstallk8s/, with
helm install --create-namespace -n portainer portainer portainer/portainer --set persistence.storageClass=slow
I got the status:
kubectl get all -n portainer
NAME READY STATUS RESTARTS AGE
pod/portainer-6cb48f955f-qmtdq 0/1 Pending 0 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/portainer NodePort 10.97.158.200 <none> 9000:30777/TCP,30776:30776/TCP 2d3h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/portainer 0/1 1 0 2d
NAME DESIRED CURRENT READY AGE
replicaset.apps/portainer-6cb48f955f 1 1 0 2d
So,
The pod is not READY, with STATUS Pending.
The service is up but has no EXTERNAL-IP.
The deployment is not READY or AVAILABLE.
The ReplicaSet is not READY.
And I can't access the instance on port 30777.
i.e. http://20.199.64.113:30777/
More 'kubectl describe' info:
root#kubemaster:/home/kubemaster# kubectl describe pod portainer -n portainer
Name: portainer-7b94d88f67-plz9d
Namespace: portainer
Priority: 0
Node: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 129m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
root#kubemaster:/home/kubemaster# kubectl describe pvc portainer -n portainer
Name: portainer
Namespace: portainer
StorageClass: slow
Status: Pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 2m22s (x259 over 9h) persistentvolume-controller Failed to provision volume with StorageClass "slow": AzureDisk - failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
root#kubemaster:/home/kubemaster# kubectl describe pv portainer -n portainer
Error from server (NotFound): persistentvolumes "portainer" not found
I researched the errors/warnings below:
Warning FailedScheduling 129m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Warning ProvisioningFailed 2m22s (x259 over 9h) persistentvolume-controller Failed to provision volume with StorageClass "slow": AzureDisk - failed to get Azure Cloud Provider. GetCloudProvider returned <nil> instead
But I still wasn't able to bring up the Portainer instance.
Is there anything I missed, or any way to debug this?
Thanks ahead.
If you are using a PersistentVolumeClaim, you need a volume provisioner for dynamic volume provisioning. The bigger cloud providers typically have this.
If you don't have a volume provisioner in your cluster, you have to create a PersistentVolume resource, and possibly also a StorageClass, and declare how to use your storage system.
Take a look: portainer-on-kubernetes.
So in your case, as you have mentioned, you can install an external volume provisioner such as the NFS subdir external provisioner; a sketch of its install follows.
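For reference, the provisioner's documented Helm install looks like the following (the NFS server address and export path are placeholders you must fill in; setting storageClass.name to slow is an assumption made so the class matches the --set persistence.storageClass=slow used in the Portainer install above):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path \
    --set storageClass.name=slow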