field label not supported: NAME - postgresql
I use https://github.com/zalando/postgres-operator to deploy PostgreSQL instances and have the following running instances:
kubectl get postgresqls.acid.zalan.do
NAME                   TEAM        VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE     STATUS
acid-minimal-cluster   acid        12        2      1Gi                                     2d18h   SyncFailed
acid-userdb            acid        12        2      5Gi      100m          100Mi            2d18h   SyncFailed
databaker-userdb       databaker   12        2      2Gi      100m          100Mi            2d18h   SyncFailed
databaker-users-db     databaker   12        2      2Gi      100m          100Mi            2d17h   SyncFailed
I tried to get a specific instance as follows:
kubectl get postgresql --field-selector NAME=databaker-userdb
Error from server (BadRequest): Unable to find "acid.zalan.do/v1, Resource=postgresqls" that match label selector "", field selector "NAME=databaker-userdb": field label not supported: NAME
As you can see, I get an error message.
What am I doing wrong?
Update
The YAML file:
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  annotations:
    meta.helm.sh/release-name: user-db
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2020-06-16T15:58:28Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    team: databaker
  managedFields:
  - apiVersion: acid.zalan.do/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
          f:team: {}
      f:spec:
        .: {}
        f:databases:
          .: {}
          f:users: {}
        f:numberOfInstances: {}
        f:postgresql:
          .: {}
          f:version: {}
        f:resources:
          .: {}
          f:limits:
            .: {}
            f:cpu: {}
            f:memory: {}
          f:requests:
            .: {}
            f:cpu: {}
            f:memory: {}
        f:teamId: {}
        f:users:
          .: {}
          f:databaker: {}
        f:volume:
          .: {}
          f:size: {}
    manager: Go-http-client
    operation: Update
    time: "2020-06-16T15:58:28Z"
  - apiVersion: acid.zalan.do/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:PostgresClusterStatus: {}
    manager: postgres-operator
    operation: Update
    time: "2020-06-16T15:58:53Z"
  name: databaker-users-db
  namespace: default
  resourceVersion: "68486"
  selfLink: /apis/acid.zalan.do/v1/namespaces/default/postgresqls/databaker-users-db
  uid: 8bc3b591-4346-4cca-a1ae-682a1ad16615
spec:
  databases:
    users: databaker
  numberOfInstances: 2
  postgresql:
    version: "12"
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 100Mi
  teamId: databaker
  users:
    databaker:
    - superuser
    - createdb
  volume:
    size: 2Gi
status:
  PostgresClusterStatus: Running
From the documentation:
Field selectors let you select Kubernetes resources based on the value of one or more resource fields. Here are some examples of field selector queries:
metadata.name=my-service
metadata.namespace!=default
status.phase=Pending
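For illustration, such selectors can also be combined in a single query. These examples are adapted from the Kubernetes field-selector documentation and run against built-in resources; the flag syntax is the same for any resource type:
$ kubectl get pods --field-selector status.phase=Running
$ kubectl get services --all-namespaces --field-selector metadata.namespace!=default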
NAME is not a resource field. You can use the following instead:
$ kubectl get postgresql --field-selector metadata.name=databaker-userdb
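Since metadata.name uniquely identifies the object within a namespace, simply fetching it by name gives the same result:
$ kubectl get postgresql databaker-userdb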
According to the documentation:
Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error.
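For this custom resource that means only metadata.name and metadata.namespace are usable as field selectors; for example, to limit the listing to one namespace:
$ kubectl get postgresql --field-selector metadata.namespace=default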
To select by status, you can run:
$ kubectl get postgresql | grep "SyncFailed"
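If you want to filter on the status field without grep, kubectl's JSONPath output supports filter expressions; a sketch using the field names from the resource above:
$ kubectl get postgresql -o jsonpath='{range .items[?(@.status.PostgresClusterStatus=="SyncFailed")]}{.metadata.name}{"\n"}{end}'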
Related
Kubernetes replica does not receive traffic
I am trying to understand how kubernetes replicas work, and I am getting an unexpected (?) behavior. If I understand correctly, when a service selects a deployment it will distribute the requests accross all pods. My deployment has 3 replicas and the pods are being selected properly by the service but requests only go to one of then (the other two remain unused) I followed this tutorial and after scaling the deployment I made multiple get requests to the service and I was expecting that requests will be distributed accross replicas but only one replica received and handled all the requests. I am not sure if that's how it works or maybe I need to do something else? I would like to add that my first test was a very simple endpoint that will resolve the request immediatly. I tested adding load to the endpoint (a delay before resolving) and it did started sending requests to the others replicas. I would love to understand how it works I haven't being able to find any docs about it This is the deployment yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" creationTimestamp: "2022-11-23T15:26:36Z" generation: 2 labels: app: express-echo managedFields: - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:replicas: {} manager: kubectl operation: Update subresource: scale - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: .: {} f:app: {} f:spec: f:progressDeadlineSeconds: {} f:revisionHistoryLimit: {} f:selector: {} f:strategy: f:rollingUpdate: .: {} f:maxSurge: {} f:maxUnavailable: {} f:type: {} f:template: f:metadata: f:labels: .: {} f:app: {} f:spec: f:containers: k:{"name":"express-echo"}: .: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:terminationGracePeriodSeconds: {} manager: kubectl-create operation: Update time: "2022-11-23T15:26:36Z" - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:deployment.kubernetes.io/revision: {} f:status: f:availableReplicas: {} f:conditions: .: {} k:{"type":"Available"}: .: {} f:lastTransitionTime: {} f:lastUpdateTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Progressing"}: .: {} f:lastTransitionTime: {} f:lastUpdateTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:observedGeneration: {} f:readyReplicas: {} f:replicas: {} f:updatedReplicas: {} manager: kube-controller-manager operation: Update subresource: status time: "2022-11-23T15:28:18Z" name: express-echo namespace: default resourceVersion: "5192" uid: 32288873-1e30-44a1-9226-0214c1becd35 spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: app: express-echo strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: app: express-echo spec: containers: - image: gcr.io/gcp-project/express-echo:1.0.0 imagePullPolicy: IfNotPresent name: express-echo resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: availableReplicas: 3 conditions: - lastTransitionTime: "2022-11-23T15:26:36Z" lastUpdateTime: "2022-11-23T15:27:01Z" message: ReplicaSet "express-echo-547f8bcfb5" has successfully progressed. 
reason: NewReplicaSetAvailable status: "True" type: Progressing - lastTransitionTime: "2022-11-23T15:28:18Z" lastUpdateTime: "2022-11-23T15:28:18Z" message: Deployment has minimum availability. reason: MinimumReplicasAvailable status: "True" type: Available observedGeneration: 2 readyReplicas: 3 replicas: 3 And this is the service apiVersion: v1 kind: Service metadata: annotations: cloud.google.com/neg: '{"ingress":true}' creationTimestamp: "2022-11-23T15:26:48Z" finalizers: - service.kubernetes.io/load-balancer-cleanup labels: app: express-echo managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: .: {} f:app: {} f:spec: f:allocateLoadBalancerNodePorts: {} f:externalTrafficPolicy: {} f:internalTrafficPolicy: {} f:ports: .: {} k:{"port":80,"protocol":"TCP"}: .: {} f:port: {} f:protocol: {} f:targetPort: {} f:selector: {} f:sessionAffinity: {} f:type: {} manager: kubectl-expose operation: Update time: "2022-11-23T15:26:48Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"service.kubernetes.io/load-balancer-cleanup": {} f:status: f:loadBalancer: f:ingress: {} manager: kube-controller-manager operation: Update subresource: status time: "2022-11-23T15:27:24Z" name: express-echo namespace: default resourceVersion: "4765" uid: 99346a8a-1e89-476e-a21f-0d9c98d86b7d spec: allocateLoadBalancerNodePorts: true clusterIP: 10.0.8.195 clusterIPs: - 10.0.8.195 externalTrafficPolicy: Cluster internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - nodePort: 31123 port: 80 protocol: TCP targetPort: 3001 selector: app: express-echo sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 1.1.1.1
Kubernetes Cassandra Datacenter deletes PVC while deleting Datacenter
I have cassandra operator installed and I setup cassandra datacenter/cluster with 3 nodes. I have created sample keyspace, table and inserted the data. I see it has created 3 PVC's in my storage section. When I deleting the dataceneter its delete associated PVC's as well ,So when I setup same configuration Datacenter/cluster , its completely new , No earlier keyspace or tables. How can I make them persistence for future use? I am using sample yaml from below https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x I don't find any persistentVolumeClaim configuration in it , Its having storageConfig: cassandraDataVolumeClaimSpec: Is anyone came across such scenario? Edit: Storage class details: allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: description: Provides RWO and RWX Filesystem volumes with Retain Policy storageclass.kubernetes.io/is-default-class: "false" name: ocs-storagecluster-cephfs-retain parameters: clusterID: openshift-storage csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage fsName: ocs-storagecluster-cephfilesystem provisioner: openshift-storage.cephfs.csi.ceph.com reclaimPolicy: Retain volumeBindingMode: Immediate Here is Cassandra cluster YAML: apiVersion: cassandra.datastax.com/v1beta1 kind: CassandraDatacenter metadata: name: dc generation: 2 spec: size: 3 config: cassandra-yaml: authenticator: AllowAllAuthenticator authorizer: AllowAllAuthorizer role_manager: CassandraRoleManager jvm-options: additional-jvm-opts: - '-Ddse.system_distributed_replication_dc_names=dc1' - '-Ddse.system_distributed_replication_per_dc=1' initial_heap_size: 800M max_heap_size: 800M resources: {} clusterName: cassandra systemLoggerResources: {} configBuilderResources: {} serverVersion: 3.11.7 serverType: cassandra storageConfig: cassandraDataVolumeClaimSpec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: ocs-storagecluster-cephfs-retain managementApiAuth: insecure: {} EDIT: PV Details: oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: "2022-02-23T20:52:54Z" finalizers: - kubernetes.io/pv-protection managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:pv.kubernetes.io/provisioned-by: {} f:spec: f:accessModes: {} f:capacity: .: {} f:storage: {} f:claimRef: .: {} f:apiVersion: {} f:kind: {} f:name: {} f:namespace: {} f:resourceVersion: {} f:uid: {} f:csi: .: {} f:controllerExpandSecretRef: .: {} f:name: {} f:namespace: {} f:driver: {} f:nodeStageSecretRef: .: {} f:name: {} f:namespace: {} f:volumeAttributes: .: {} f:clusterID: {} f:fsName: {} f:storage.kubernetes.io/csiProvisionerIdentity: {} f:subvolumeName: {} f:volumeHandle: {} f:persistentVolumeReclaimPolicy: {} f:storageClassName: {} f:volumeMode: {} manager: csi-provisioner operation: Update time: "2022-02-23T20:52:54Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:phase: {} manager: kube-controller-manager operation: Update time: 
"2022-02-23T20:52:54Z" name: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 resourceVersion: "51684941" selfLink: /api/v1/persistentvolumes/pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 uid: 8ded2de5-6d4e-45a1-9b89-a385d74d6d4a spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: server-data-cstone-cassandra-cstone-dc-default-sts-1 namespace: dv01-cornerstone resourceVersion: "51684914" uid: 15def0ca-6cbc-4569-a560-7b9e89a7b7a7 csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1645064620191-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-92d5e07d-94ea-11ec-92e8-0a580a20028c volumeHandle: 0001-0011-openshift-storage-0000000000000001-92d5e07d-94ea-11ec-92e8-0a580a20028c persistentVolumeReclaimPolicy: Retain storageClassName: ocs-storagecluster-cephfs-retain volumeMode: Filesystem status: phase: Bound
According to the spec:
The storage configuration. This sets up a 100GB volume at /var/lib/cassandra on each server pod. The user is left to create the server-storage storage class by following these directions...
https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd
Before you deploy the Cassandra spec, first make sure your cluster already has the CSI driver installed and working properly, then create the StorageClass the spec requires:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: server-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-ssd
Re-deploying Cassandra should now retain the data disk upon deletion.
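A side note on reuse: with reclaimPolicy: Retain, the PersistentVolume survives deletion of its PVC but ends up in the Released state, so a newly created PVC will not bind to it automatically. One common manual way to make it reusable is to clear the old claimRef on the PV, for example (the PV name here is taken from the question's output; adjust it to your own):
kubectl patch pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
After that the PV becomes Available again and can be bound by a matching claim.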
Why did the PostgreSQL Kubernetes StatefulSet not claim the PVC?
Today I want to change the PostgreSQL statefulset PVC name, to my surprise, I did not found any clain about the PVC in the kubernetes deployment define, this is the kubernetes deployment define of PostgreSQL: apiVersion: apps/v1 kind: StatefulSet metadata: name: reddwarf-postgresql-postgresql namespace: reddwarf-storage uid: 787a18c8-f6fb-4deb-bb07-3c3d123cf6f9 resourceVersion: '21931453' generation: 30 creationTimestamp: '2021-08-05T05:29:03Z' labels: app.kubernetes.io/component: primary app.kubernetes.io/instance: reddwarf-postgresql app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: postgresql helm.sh/chart: postgresql-10.9.1 annotations: kubectl.kubernetes.io/last-applied-configuration: > {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"meta.helm.sh/release-name":"reddwarf-postgresql","meta.helm.sh/release-namespace":"reddwarf-storage"},"creationTimestamp":"2021-08-05T05:29:03Z","generation":12,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1"},"managedFields":[{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{}}},"f:spec":{"f:podManagementPolicy":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:serviceName":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{},"f:role":{}},"f:name":{}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"BITNAMI_DEBUG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"PGDATA\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_CLIENT_MIN_MESSAGES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_LDAP\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_TLS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_CONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_DISCONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_HOSTNAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PGAUDIT_LOG_CATALOG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PORT_NUMBER\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_SHARED_PRELOAD_LIBRARIES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_VOLUME_DIR\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRES_PASSWORD\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:secretKeyRef":{".":{},"f:key":{},"f:name":{}}}},"k:{\"name\":\"POSTGRES_USER\"}":{".":{},"f:name":{},"f:value":{}}},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5432,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol
":{}}},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/bitnami/postgresql\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/dev/shm\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:fsGroup":{}},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"dshm\"}":{".":{},"f:emptyDir":{".":{},"f:medium":{}},"f:name":{}}}}},"f:updateStrategy":{"f:type":{}},"f:volumeClaimTemplates":{}}},"manager":"Go-http-client","operation":"Update","time":"2021-08-05T05:29:03Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:image":{}}}}}}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2021-08-10T16:50:45Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:args":{}}}}}}},"manager":"kubectl","operation":"Update","time":"2021-08-11T01:46:21Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:collisionCount":{},"f:currentReplicas":{},"f:currentRevision":{},"f:observedGeneration":{},"f:replicas":{},"f:updateRevision":{},"f:updatedReplicas":{}}},"manager":"kube-controller-manager","operation":"Update","time":"2021-08-11T02:24:07Z"}],"name":"reddwarf-postgresql-postgresql","namespace":"reddwarf-storage","selfLink":"/apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql","uid":"787a18c8-f6fb-4deb-bb07-3c3d123cf6f9"},"spec":{"podManagementPolicy":"OrderedReady","replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql","role":"primary"}},"serviceName":"reddwarf-postgresql-headless","template":{"metadata":{"creationTimestamp":null,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1","role":"primary"},"name":"reddwarf-postgresql"},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql"}},"namespaces":["reddwarf-storage"],"topologyKey":"kubernetes.io/hostname"},"weight":1}]}},"automountServiceAccountToken":false,"containers":[{"env":[{"name":"BITNAMI_DEBUG","value":"false"},{"name":"POSTGRESQL_PORT_NUMBER","value":"5432"},{"name":"POSTGRESQL_VOLUME_DIR","value":"/bitnami/postgresql"},{"name":"PGDATA","value":"/bitnami/postgresql/data"},{"name":"POSTGRES_USER","value":"postgres"},{"name":"POSTGRES_PASSWORD","valueFrom":{"secretKeyRef":{"key":"postgresql-password","name":"reddwarf-postgresql"}}},{"name":"POSTGRESQL_ENABLE_LDAP","value":"no"},{"name":"POSTGRESQL_ENABLE_TLS","value":"no"},{"name":"POSTGRESQL_LOG_HOSTNAME","value":"fals
e"},{"name":"POSTGRESQL_LOG_CONNECTIONS","value":"false"},{"name":"POSTGRESQL_LOG_DISCONNECTIONS","value":"false"},{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG","value":"off"},{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES","value":"error"},{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES","value":"pgaudit"}],"image":"docker.io/bitnami/postgresql:13.3.0-debian-10-r75","imagePullPolicy":"IfNotPresent","livenessProbe":{"exec":{"command":["/bin/sh","-c","exec pg_isready -U \"postgres\" -h 127.0.0.1 -p 5432"]},"failureThreshold":6,"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"name":"reddwarf-postgresql","ports":[{"containerPort":5432,"name":"tcp-postgresql","protocol":"TCP"}],"readinessProbe":{"exec":{"command":["/bin/sh","-c","-e","exec pg_isready -U \"postgres\" -h 127.0.0.1 -p 5432\n[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]\n"]},"failureThreshold":6,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"resources":{"requests":{"cpu":"250m","memory":"256Mi"}},"securityContext":{"runAsUser":1001},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/dev/shm","name":"dshm"},{"mountPath":"/bitnami/postgresql","name":"data"}]}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{"fsGroup":1001},"terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{"medium":"Memory"},"name":"dshm"}]}},"updateStrategy":{"type":"RollingUpdate"},"volumeClaimTemplates":[{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"creationTimestamp":null,"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"8Gi"}},"volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}} meta.helm.sh/release-name: reddwarf-postgresql meta.helm.sh/release-namespace: reddwarf-storage managedFields: - manager: Go-http-client operation: Update apiVersion: apps/v1 time: '2021-08-05T05:29:03Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': .: {} 'f:meta.helm.sh/release-name': {} 'f:meta.helm.sh/release-namespace': {} 'f:labels': .: {} 'f:app.kubernetes.io/component': {} 'f:app.kubernetes.io/instance': {} 'f:app.kubernetes.io/managed-by': {} 'f:app.kubernetes.io/name': {} 'f:helm.sh/chart': {} 'f:spec': 'f:podManagementPolicy': {} 'f:replicas': {} 'f:revisionHistoryLimit': {} 'f:selector': {} 'f:serviceName': {} 'f:template': 'f:metadata': 'f:labels': .: {} 'f:app.kubernetes.io/component': {} 'f:app.kubernetes.io/instance': {} 'f:app.kubernetes.io/managed-by': {} 'f:app.kubernetes.io/name': {} 'f:helm.sh/chart': {} 'f:role': {} 'f:name': {} 'f:spec': 'f:affinity': .: {} 'f:podAntiAffinity': .: {} 'f:preferredDuringSchedulingIgnoredDuringExecution': {} 'f:automountServiceAccountToken': {} 'f:containers': 'k:{"name":"reddwarf-postgresql"}': .: {} 'f:env': .: {} 'k:{"name":"BITNAMI_DEBUG"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"PGDATA"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_ENABLE_LDAP"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_ENABLE_TLS"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_LOG_CONNECTIONS"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_LOG_DISCONNECTIONS"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_LOG_HOSTNAME"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG"}': .: 
{} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_PORT_NUMBER"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRESQL_VOLUME_DIR"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"POSTGRES_PASSWORD"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:secretKeyRef': .: {} 'f:key': {} 'f:name': {} 'k:{"name":"POSTGRES_USER"}': .: {} 'f:name': {} 'f:value': {} 'f:imagePullPolicy': {} 'f:livenessProbe': .: {} 'f:exec': .: {} 'f:command': {} 'f:failureThreshold': {} 'f:initialDelaySeconds': {} 'f:periodSeconds': {} 'f:successThreshold': {} 'f:timeoutSeconds': {} 'f:name': {} 'f:ports': .: {} 'k:{"containerPort":5432,"protocol":"TCP"}': .: {} 'f:containerPort': {} 'f:name': {} 'f:protocol': {} 'f:readinessProbe': .: {} 'f:exec': .: {} 'f:command': {} 'f:failureThreshold': {} 'f:initialDelaySeconds': {} 'f:periodSeconds': {} 'f:successThreshold': {} 'f:timeoutSeconds': {} 'f:resources': .: {} 'f:requests': .: {} 'f:cpu': {} 'f:memory': {} 'f:securityContext': .: {} 'f:runAsUser': {} 'f:terminationMessagePath': {} 'f:terminationMessagePolicy': {} 'f:volumeMounts': .: {} 'k:{"mountPath":"/bitnami/postgresql"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/dev/shm"}': .: {} 'f:mountPath': {} 'f:name': {} 'f:dnsPolicy': {} 'f:restartPolicy': {} 'f:schedulerName': {} 'f:securityContext': .: {} 'f:fsGroup': {} 'f:terminationGracePeriodSeconds': {} 'f:volumes': .: {} 'k:{"name":"dshm"}': .: {} 'f:emptyDir': .: {} 'f:medium': {} 'f:name': {} 'f:updateStrategy': 'f:type': {} 'f:volumeClaimTemplates': {} - manager: kubectl-client-side-apply operation: Update apiVersion: apps/v1 time: '2021-08-10T16:50:45Z' fieldsType: FieldsV1 fieldsV1: 'f:spec': 'f:template': 'f:spec': 'f:containers': 'k:{"name":"reddwarf-postgresql"}': 'f:image': {} - manager: kubectl operation: Update apiVersion: apps/v1 time: '2021-08-11T02:29:20Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': 'f:kubectl.kubernetes.io/last-applied-configuration': {} - manager: kube-controller-manager operation: Update apiVersion: apps/v1 time: '2021-11-27T03:07:58Z' fieldsType: FieldsV1 fieldsV1: 'f:status': 'f:collisionCount': {} 'f:currentRevision': {} 'f:observedGeneration': {} 'f:replicas': {} 'f:updateRevision': {} selfLink: >- /apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql status: observedGeneration: 30 replicas: 0 currentRevision: reddwarf-postgresql-postgresql-5695cb9676 updateRevision: reddwarf-postgresql-postgresql-5695cb9676 collisionCount: 0 spec: replicas: 0 selector: matchLabels: app.kubernetes.io/instance: reddwarf-postgresql app.kubernetes.io/name: postgresql role: primary template: metadata: name: reddwarf-postgresql creationTimestamp: null labels: app.kubernetes.io/component: primary app.kubernetes.io/instance: reddwarf-postgresql app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: postgresql helm.sh/chart: postgresql-10.9.1 role: primary spec: volumes: - name: dshm emptyDir: medium: Memory containers: - name: reddwarf-postgresql image: 'docker.io/bitnami/postgresql:13.3.0-debian-10-r75' ports: - name: tcp-postgresql containerPort: 5432 protocol: TCP env: - name: BITNAMI_DEBUG value: 'false' - name: POSTGRESQL_PORT_NUMBER value: '5432' - name: POSTGRESQL_VOLUME_DIR value: /bitnami/postgresql - name: PGDATA value: /bitnami/postgresql/data - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: reddwarf-postgresql key: postgresql-password - 
name: POSTGRESQL_ENABLE_LDAP value: 'no' - name: POSTGRESQL_ENABLE_TLS value: 'no' - name: POSTGRESQL_LOG_HOSTNAME value: 'false' - name: POSTGRESQL_LOG_CONNECTIONS value: 'false' - name: POSTGRESQL_LOG_DISCONNECTIONS value: 'false' - name: POSTGRESQL_PGAUDIT_LOG_CATALOG value: 'off' - name: POSTGRESQL_CLIENT_MIN_MESSAGES value: error - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES value: pgaudit resources: requests: cpu: 250m memory: 256Mi volumeMounts: - name: dshm mountPath: /dev/shm - name: data mountPath: /bitnami/postgresql livenessProbe: exec: command: - /bin/sh - '-c' - exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432 initialDelaySeconds: 30 timeoutSeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 6 readinessProbe: exec: command: - /bin/sh - '-c' - '-e' - > exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432 [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] initialDelaySeconds: 5 timeoutSeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 6 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent securityContext: runAsUser: 1001 restartPolicy: Always terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirst automountServiceAccountToken: false securityContext: fsGroup: 1001 affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/component: primary app.kubernetes.io/instance: reddwarf-postgresql app.kubernetes.io/name: postgresql namespaces: - reddwarf-storage topologyKey: kubernetes.io/hostname schedulerName: default-scheduler volumeClaimTemplates: - kind: PersistentVolumeClaim apiVersion: v1 metadata: name: data creationTimestamp: null spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi volumeMode: Filesystem status: phase: Pending serviceName: reddwarf-postgresql-headless podManagementPolicy: OrderedReady updateStrategy: type: RollingUpdate revisionHistoryLimit: 10 this statefulset bind the PVC named data-reddwarf-postgresql-postgresql-0 right now, but I did not found the PVC define in this statefulset yaml. where is the PVC bind define? what should I do to change the PVC to bind to a new one? I install this PostgreSQL into kubernetes from helm chart.
A PVC that gets created as part of a StatefulSet has a name which is an amalgamation of three components joined by "-":
- the name defined in the volumeClaimTemplates section, which is data
- the name of the StatefulSet in the metadata section, which is reddwarf-postgresql-postgresql
- its replica ordinal; for the first replica it is 0
So the name of the PVC that gets created when you create this StatefulSet is data-reddwarf-postgresql-postgresql-0, which is the PVC name you are seeing in your setup.
Please note that when you delete the StatefulSet, the PVC is not deleted automatically; you need to delete it separately. When you recreate or scale up the StatefulSet and a PVC matching the above naming convention and spec does not exist, it will create one.
From the Kubernetes documentation
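To verify, you can look up the PVC under that composed name directly (namespace taken from the question):
kubectl get pvc data-reddwarf-postgresql-postgresql-0 -n reddwarf-storage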
Kubernetes pods are failing with a volume attach issue
We are seeing some of the pods getting failed while mounting/attaching volume to pods. this is happening intermittently and after bouncing kubelet service pods are able to reattach the volumes and succeeding. seeing the below error when pod get struck. I created single PV and corresponding PVC to use volume mounts for pods: Error: Warning FailedMount 16m (x4 over 43m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[default-token-pwvpc podmetadata docker-sock workdir]: timed out waiting for the condition Warning FailedMount 7m32s (x5 over 41m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[docker-sock workdir default-token-pwvpc podmetadata]: timed out waiting for the condition Warning FailedMount 3m2s (x10 over 45m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[podmetadata docker-sock workdir default-token-pwvpc]: timed out waiting for the condition Warning FailedMount 45s (x2 over 21m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[workdir default-token-pwvpc podmetadata docker-sock]: timed out waiting for the condition Version: Client Version: v1.17.2 Server Version: v1.17.2 Host OS: Centos 7.7 CNI: Weave apiVersion: v1 items: - apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: "2020-07-17T21:55:47Z" generation: 1 labels: logicmonitor.com/collectorset: kubernetes-03 managedFields: - apiVersion: apps/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: .: {} f:logicmonitor.com/collectorset: {} f:spec: f:podManagementPolicy: {} f:replicas: {} f:revisionHistoryLimit: {} f:selector: f:matchLabels: .: {} f:logicmonitor.com/collectorset: {} f:template: f:metadata: f:labels: .: {} f:logicmonitor.com/collectorset: {} f:namespace: {} f:spec: f:affinity: .: {} f:podAntiAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: {} f:containers: k:{"name":"collector"}: .: {} f:env: .: {} k:{"name":"COLLECTOR_IDS"}: .: {} f:name: {} f:value: {} k:{"name":"access_id"}: .: {} f:name: {} f:valueFrom: .: {} f:secretKeyRef: .: {} f:key: {} f:name: {} f:optional: {} k:{"name":"access_key"}: .: {} f:name: {} f:valueFrom: .: {} f:secretKeyRef: .: {} f:key: {} f:name: {} f:optional: {} k:{"name":"account"}: .: {} f:name: {} f:valueFrom: .: {} f:secretKeyRef: .: {} f:key: {} f:name: {} f:optional: {} k:{"name":"collector_size"}: .: {} f:name: {} f:value: {} k:{"name":"collector_version"}: .: {} f:name: {} f:value: {} k:{"name":"kubernetes"}: .: {} f:name: {} f:value: {} k:{"name":"use_ea"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: .: {} f:limits: .: {} f:memory: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:updateStrategy: f:type: {} f:status: f:replicas: {} manager: collectorset-controller operation: Update time: "2020-08-22T03:42:35Z" name: kubernetes-03 namespace: default resourceVersion: "10831902" selfLink: /apis/apps/v1/namespaces/default/statefulsets/kubernetes-03 uid: 1296654b-77bc-4af8-9537-04f0a00bdd0c spec: podManagementPolicy: Parallel replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: logicmonitor.com/collectorset: kubernetes-03 serviceName: "" template: metadata: creationTimestamp: null labels: 
logicmonitor.com/collectorset: kubernetes-03 namespace: default spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: logicmonitor.com/collectorset: kubernetes-03 topologyKey: kubernetes.io/hostname containers: - env: - name: account valueFrom: secretKeyRef: key: account name: collectorset-controller optional: false - name: access_id valueFrom: secretKeyRef: key: accessID name: collectorset-controller optional: false - name: access_key valueFrom: secretKeyRef: key: accessKey name: collectorset-controller optional: false - name: kubernetes value: "true" - name: collector_size value: small - name: collector_version value: "0" - name: use_ea value: "false" - name: COLLECTOR_IDS value: "205" image: logicmonitor/collector:latest imagePullPolicy: Always name: collector resources: limits: memory: 2Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: collector serviceAccountName: collector terminationGracePeriodSeconds: 30 updateStrategy: type: RollingUpdate status: collisionCount: 0 currentReplicas: 1 currentRevision: kubernetes-03-655b46ff69 observedGeneration: 1 readyReplicas: 1 replicas: 1 updateRevision: kubernetes-03-655b46ff69 updatedReplicas: 1 kind: List metadata: resourceVersion: "" selfLink: "" kubectl get pv -o yaml apiVersion: v1 items: - apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/bound-by-controller: "yes" creationTimestamp: "2020-07-10T00:21:49Z" finalizers: - kubernetes.io/pv-protection labels: type: local managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: .: {} f:type: {} f:spec: f:accessModes: {} f:capacity: .: {} f:storage: {} f:nfs: .: {} f:path: {} f:server: {} f:persistentVolumeReclaimPolicy: {} f:storageClassName: {} f:volumeMode: {} manager: kubectl operation: Update time: "2020-07-10T00:21:49Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:pv.kubernetes.io/bound-by-controller: {} f:spec: f:claimRef: .: {} f:apiVersion: {} f:kind: {} f:name: {} f:namespace: {} f:resourceVersion: {} f:uid: {} f:status: f:phase: {} manager: kube-controller-manager operation: Update time: "2020-07-10T00:21:52Z" name: pv-modeldata resourceVersion: "15764" selfLink: /api/v1/persistentvolumes/pv-data uid: 68521e84-0aa9-4643-a047-441a61451599 spec: accessModes: - ReadWriteMany capacity: storage: 5999Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: pvc-modeldata namespace: default resourceVersion: "15762" uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7 nfs: path: /k8fs03 server: kubemaster01.rms.com persistentVolumeReclaimPolicy: Retain storageClassName: manual volumeMode: Filesystem status: phase: Bound kind: List metadata: resourceVersion: "" selfLink: "" # kubectl get pvc -o yaml apiVersion: v1 items: - apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: pv.kubernetes.io/bind-completed: "yes" pv.kubernetes.io/bound-by-controller: "yes" creationTimestamp: "2020-07-10T00:21:52Z" finalizers: - kubernetes.io/pvc-protection managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:pv.kubernetes.io/bind-completed: {} f:pv.kubernetes.io/bound-by-controller: {} f:spec: f:volumeName: {} f:status: f:accessModes: {} f:capacity: .: {} f:storage: {} f:phase: {} manager: kube-controller-manager operation: Update time: "2020-07-10T00:21:52Z" - 
apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:accessModes: {} f:resources: f:requests: .: {} f:storage: {} f:storageClassName: {} f:volumeMode: {} manager: kubectl operation: Update time: "2020-07-10T00:21:52Z" name: pvc-modeldata namespace: default resourceVersion: "15766" selfLink: /api/v1/namespaces/default/persistentvolumeclaims/pvc-data uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7 spec: accessModes: - ReadWriteMany resources: requests: storage: 5999Gi storageClassName: manual volumeMode: Filesystem volumeName: pv-data status: accessModes: - ReadWriteMany capacity: storage: 5999Gi phase: Bound kind: List metadata: resourceVersion: "" selfLink: "" # kubectl get sc -o yaml apiVersion: v1 items: [] kind: List metadata: resourceVersion: "" selfLink: "" Thanks, Chittu
Why does the selector status.PostgresClusterStatus=Running not work?
I have displayed the following resource as YAML:
kubectl get postgresql databaker-users-db -o yaml
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  annotations:
    meta.helm.sh/release-name: user-db
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2020-06-16T15:58:28Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    team: databaker
  managedFields:
  - apiVersion: acid.zalan.do/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
          f:team: {}
      f:spec:
        .: {}
        f:databases:
          .: {}
          f:users: {}
        f:numberOfInstances: {}
        f:postgresql:
          .: {}
          f:version: {}
        f:resources:
          .: {}
          f:limits:
            .: {}
            f:cpu: {}
            f:memory: {}
          f:requests:
            .: {}
            f:cpu: {}
            f:memory: {}
        f:teamId: {}
        f:users:
          .: {}
          f:databaker: {}
        f:volume:
          .: {}
          f:size: {}
    manager: Go-http-client
    operation: Update
    time: "2020-06-16T15:58:28Z"
  - apiVersion: acid.zalan.do/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:PostgresClusterStatus: {}
    manager: postgres-operator
    operation: Update
    time: "2020-06-16T15:58:53Z"
  name: databaker-users-db
  namespace: default
  resourceVersion: "68486"
  selfLink: /apis/acid.zalan.do/v1/namespaces/default/postgresqls/databaker-users-db
  uid: 8bc3b591-4346-4cca-a1ae-682a1ad16615
spec:
  databases:
    users: databaker
  numberOfInstances: 2
  postgresql:
    version: "12"
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 100Mi
  teamId: databaker
  users:
    databaker:
    - superuser
    - createdb
  volume:
    size: 2Gi
status:
  PostgresClusterStatus: Running
When I try to select with:
kubectl get postgresql --field-selector status.PostgresClusterStatus=Running
it shows the error message:
Error from server (BadRequest): Unable to find "acid.zalan.do/v1, Resource=postgresqls" that match label selector "", field selector "status.PostgresClusterStatus=Running": field label not supported: status.PostgresClusterStatus
What am I doing wrong?