Kubernetes Cassandra Datacenter deletes PVCs while deleting the Datacenter

I have the Cassandra operator installed and I set up a Cassandra datacenter/cluster with 3 nodes.
I created a sample keyspace and table and inserted data, and I can see 3 PVCs created in my storage section. When I delete the datacenter, it deletes the associated PVCs as well, so when I set up a datacenter/cluster with the same configuration it comes up completely new, with none of the earlier keyspaces or tables.
How can I make the data persist for future use? I am using the sample YAML from below:
https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x
I don't find any persistentVolumeClaim configuration in it; it only has storageConfig: with cassandraDataVolumeClaimSpec:.
Has anyone come across such a scenario?
Edit: Storage class details:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    description: Provides RWO and RWX Filesystem volumes with Retain Policy
    storageclass.kubernetes.io/is-default-class: "false"
  name: ocs-storagecluster-cephfs-retain
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
Here is the Cassandra cluster YAML:
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc
  generation: 2
spec:
  size: 3
  config:
    cassandra-yaml:
      authenticator: AllowAllAuthenticator
      authorizer: AllowAllAuthorizer
      role_manager: CassandraRoleManager
    jvm-options:
      additional-jvm-opts:
        - '-Ddse.system_distributed_replication_dc_names=dc1'
        - '-Ddse.system_distributed_replication_per_dc=1'
      initial_heap_size: 800M
      max_heap_size: 800M
  resources: {}
  clusterName: cassandra
  systemLoggerResources: {}
  configBuilderResources: {}
  serverVersion: 3.11.7
  serverType: cassandra
  storageConfig:
    cassandraDataVolumeClaimSpec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-cephfs-retain
  managementApiAuth:
    insecure: {}
EDIT: PV Details:
oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
  creationTimestamp: "2022-02-23T20:52:54Z"
  finalizers:
  - kubernetes.io/pv-protection
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:pv.kubernetes.io/provisioned-by: {}
      f:spec:
        f:accessModes: {}
        f:capacity:
          .: {}
          f:storage: {}
        f:claimRef:
          .: {}
          f:apiVersion: {}
          f:kind: {}
          f:name: {}
          f:namespace: {}
          f:resourceVersion: {}
          f:uid: {}
        f:csi:
          .: {}
          f:controllerExpandSecretRef:
            .: {}
            f:name: {}
            f:namespace: {}
          f:driver: {}
          f:nodeStageSecretRef:
            .: {}
            f:name: {}
            f:namespace: {}
          f:volumeAttributes:
            .: {}
            f:clusterID: {}
            f:fsName: {}
            f:storage.kubernetes.io/csiProvisionerIdentity: {}
            f:subvolumeName: {}
            f:volumeHandle: {}
        f:persistentVolumeReclaimPolicy: {}
        f:storageClassName: {}
        f:volumeMode: {}
    manager: csi-provisioner
    operation: Update
    time: "2022-02-23T20:52:54Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2022-02-23T20:52:54Z"
  name: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  resourceVersion: "51684941"
  selfLink: /api/v1/persistentvolumes/pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  uid: 8ded2de5-6d4e-45a1-9b89-a385d74d6d4a
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: server-data-cstone-cassandra-cstone-dc-default-sts-1
    namespace: dv01-cornerstone
    resourceVersion: "51684914"
    uid: 15def0ca-6cbc-4569-a560-7b9e89a7b7a7
  csi:
    controllerExpandSecretRef:
      name: rook-csi-cephfs-provisioner
      namespace: openshift-storage
    driver: openshift-storage.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-node
      namespace: openshift-storage
    volumeAttributes:
      clusterID: openshift-storage
      fsName: ocs-storagecluster-cephfilesystem
      storage.kubernetes.io/csiProvisionerIdentity: 1645064620191-8081-openshift-storage.cephfs.csi.ceph.com
      subvolumeName: csi-vol-92d5e07d-94ea-11ec-92e8-0a580a20028c
      volumeHandle: 0001-0011-openshift-storage-0000000000000001-92d5e07d-94ea-11ec-92e8-0a580a20028c
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ocs-storagecluster-cephfs-retain
  volumeMode: Filesystem
status:
  phase: Bound

According to the spec:
The storage configuration. This sets up a 100GB volume at /var/lib/cassandra
on each server pod. The user is left to create the server-storage storage
class by following these directions...
https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd
Before you deploy the Cassandra spec, first ensure your cluster already has the CSI driver installed and working properly, then proceed to create the StorageClass the spec requires:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: server-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-ssd
Re-deploying your Cassandra datacenter now should retain the data disks upon deletion.
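For reference, a minimal sketch of how the datacenter spec would reference that Retain-backed class, mirroring the storageConfig already shown in the question (the server-storage name and the 100Gi size come from the spec comment quoted above):
storageConfig:
  cassandraDataVolumeClaimSpec:
    storageClassName: server-storage
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 100Gi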

Related

Getting "response 404 (backend NotFound), service rules for the path non-existent" Using Ingress Google Cloud

I want my backend service, which is deployed on Kubernetes, to be accessible through an Ingress with the path /sso-dev/. For that I have deployed the service; the Deployment, Service, and Ingress manifests are below. But when accessing the Ingress load balancer API with the path /sso-dev/ it throws a "response 404 (backend NotFound), service rules for the path non-existent" error.
I just need help accessing the backend service, which works fine via the Kubernetes load balancer IP.
Here is my ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-30969--6d0e236a1c7d6409":"HEALTHY","k8s1-6d0e236a-default-sso-dev-service-80-849fdb46":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s2-fr-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/target-proxy: k8s2-tp-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/url-map: k8s2-um-uwdva40x-default-my-ingress-h98d0sfl
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/backend-protocol":"HTTP","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"sso-dev-service","port":{"number":80}}},"path":"/sso-dev/*","pathType":"ImplementationSpecific"}]}}]}}
nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2022-06-22T12:30:49Z"
finalizers:
- networking.gke.io/ingress-finalizer-V2
generation: 1
managedFields:
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:nginx.ingress.kubernetes.io/backend-protocol: {}
f:nginx.ingress.kubernetes.io/rewrite-target: {}
f:spec:
f:rules: {}
manager: kubectl-client-side-apply
operation: Update
time: "2022-06-22T12:30:49Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:ingress.kubernetes.io/backends: {}
f:ingress.kubernetes.io/forwarding-rule: {}
f:ingress.kubernetes.io/target-proxy: {}
f:ingress.kubernetes.io/url-map: {}
f:finalizers:
.: {}
v:"networking.gke.io/ingress-finalizer-V2": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:32:13Z"
name: my-ingress
namespace: default
resourceVersion: "13073497"
uid: 253e067f-0711-4d24-a706-497692dae4d9
spec:
rules:
- http:
paths:
- backend:
service:
name: sso-dev-service
port:
number: 80
path: /sso-dev/*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- ip: 34.111.49.35
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-06-22T08:52:11Z"
generation: 1
labels:
app: sso-dev
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"cent-sha256-1"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:52:11Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T11:51:22Z"
name: sso-dev
namespace: default
resourceVersion: "13051665"
uid: c8732885-b7d8-450c-86c4-19769638eb2a
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: sso-dev
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: sso-dev
spec:
containers:
- image: us-east4-docker.pkg.dev/centegycloud-351515/sso/cent#sha256:64b50553219db358945bf3cd6eb865dd47d0d45664464a9c334602c438bbaed9
imagePullPolicy: IfNotPresent
name: cent-sha256-1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-22T08:52:11Z"
lastUpdateTime: "2022-06-22T08:52:25Z"
message: ReplicaSet "sso-dev-8566f4bc55" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-06-22T11:51:22Z"
lastUpdateTime: "2022-06-22T11:51:22Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-6d0e236a-default-sso-dev-service-80-849fdb46"},"zones":["us-central1-c"]}'
creationTimestamp: "2022-06-22T08:53:32Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: sso-dev
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:allocateLoadBalancerNodePorts: {}
f:externalTrafficPolicy: {}
f:internalTrafficPolicy: {}
f:ports:
.: {}
k:{"port":80,"protocol":"TCP"}:
.: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
f:sessionAffinity: {}
f:type: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:53:32Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"service.kubernetes.io/load-balancer-cleanup": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T08:53:58Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cloud.google.com/neg-status: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:30:49Z"
name: sso-dev-service
namespace: default
resourceVersion: "13071362"
uid: 03b0cbe6-1ed8-4441-b2c5-93ae5803a582
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.32.6.103
clusterIPs:
- 10.32.6.103
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30584
port: 80
protocol: TCP
targetPort: 8080
selector:
app: sso-dev
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 104.197.93.226
You need to change the pathType to Prefix in your Ingress, as follows:
pathType: Prefix
I noted that you are using pathType: ImplementationSpecific. With this value, the matching depends on the IngressClass, so for your case pathType: Prefix should be more helpful. Additionally, you can find more information about the Ingress path types supported in Kubernetes in this link.
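A minimal sketch of the rules section from the question with that change applied (same service and port as in the question; the trailing /* is dropped because Prefix matching works on path segments rather than wildcards):
spec:
  rules:
  - http:
      paths:
      - path: /sso-dev
        pathType: Prefix
        backend:
          service:
            name: sso-dev-service
            port:
              number: 80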

Reason behind "Successfully reconciled" event on EKS / K8S Cluster

Received 130 events in the last 3 days
I see that event on a new cluster; the cluster functions properly and successfully passes all health/liveness/functional requests.
Is it a normal event that runs every X minutes?
I suspect it's related to the AWS Load Balancer Controller, but I'm not sure how to proceed to explore the issue.
Here's the event object (I changed the unique IDs a bit):
kind: Event
apiVersion: v1
metadata:
name: k8s-default-proxyngi-21j23klsu.16322252fc4d27866
namespace: default
selfLink: >-
/api/v1/namespaces/default/events/k8s-default-proxyngi-21j23klsu.16322252fc4d27866
uid: e6e56ba2-82b6-76aafb51c753
resourceVersion: '1578355'
creationTimestamp: '2021-02-21T12:52:52Z'
managedFields:
- manager: controller
operation: Update
apiVersion: v1
time: '2021-02-21T12:52:52Z'
fieldsType: FieldsV1
fieldsV1:
'f:count': {}
'f:firstTimestamp': {}
'f:involvedObject':
'f:apiVersion': {}
'f:kind': {}
'f:name': {}
'f:namespace': {}
'f:resourceVersion': {}
'f:uid': {}
'f:lastTimestamp': {}
'f:message': {}
'f:reason': {}
'f:source':
'f:component': {}
'f:type': {}
involvedObject:
kind: TargetGroupBinding
namespace: default
name: k8s-default-proxyngi-1c76e22ad3
uid: e6e56ba2-82b6-76aafb51c753-f4a4d9812632
apiVersion: elbv2.k8s.aws/v1beta1
resourceVersion: '238786'
reason: SuccessfullyReconciled
message: Successfully reconciled
source:
component: targetGroupBinding
firstTimestamp: '2021-02-16T15:50:37Z'
lastTimestamp: '2021-02-21T12:52:52Z'
count: 131
type: Normal
eventTime: null
reportingComponent: ''
reportingInstance: ''

Installing a database in Kubernetes

I'm trying to install CockroachDB with Rancher and I'm running into a problem; the PVC shows:
FailedBinding (5) 14 sec ago no persistent volumes available for this claim and no storage class is set
How can this be solved?
Here are the configurations on my local machine:
PersistentVolumeClaim: datadir-cockroachdb-0
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2021-01-07T23:50:42Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app.kubernetes.io/component: cockroachdb
app.kubernetes.io/instance: cockroachdb
app.kubernetes.io/name: cockroachdb
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app.kubernetes.io/component: {}
f:app.kubernetes.io/instance: {}
f:app.kubernetes.io/name: {}
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:volumeMode: {}
f:status:
f:phase: {}
manager: k3s
operation: Update
time: "2021-01-07T23:50:41Z"
name: datadir-cockroachdb-0
namespace: default
resourceVersion: "188922"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/datadir-cockroachdb-0
uid: ef83d3c7-0309-44a8-b379-0134835d97a9
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
volumeMode: Filesystem
status:
phase: Pending
CockroachDB
clusterDomain: cluster.local
conf:
attrs: []
cache: 25%
cluster-name: ''
disable-cluster-name-verification: false
http-port: 8080
join: []
locality: ''
logtostderr: INFO
max-disk-temp-storage: 0
max-offset: 500ms
max-sql-memory: 25%
port: 26257
single-node: false
sql-audit-dir: ''
image:
credentials: {}
pullPolicy: IfNotPresent
repository: cockroachdb/cockroach
tag: v20.1.3
ingress:
annotations: {}
enabled: false
hosts: []
labels: {}
paths:
- /
tls: []
init:
affinity: {}
annotations: {}
labels:
app.kubernetes.io/component: init
nodeSelector: {}
resources: {}
tolerations: []
labels: {}
networkPolicy:
enabled: false
ingress:
grpc: []
http: []
service:
discovery:
annotations: {}
labels:
app.kubernetes.io/component: cockroachdb
ports:
grpc:
external:
name: grpc
port: 26257
internal:
name: grpc-internal
port: 26257
http:
name: http
port: 8080
public:
annotations: {}
labels:
app.kubernetes.io/component: cockroachdb
type: ClusterIP
statefulset:
annotations: {}
args: []
budget:
maxUnavailable: 1
env: []
labels:
app.kubernetes.io/component: cockroachdb
nodeAffinity: {}
nodeSelector: {}
podAffinity: {}
podAntiAffinity:
type: soft
weight: 100
podManagementPolicy: Parallel
priorityClassName: ''
replicas: 3
resources: {}
secretMounts: []
tolerations: []
updateStrategy:
type: RollingUpdate
storage:
hostPath: ''
persistentVolume: volume1
annotations: {}
enabled: true
labels: {}
size: 1Gi
storageClass: local-storage ''
tls:
certs:
clientRootSecret: cockroachdb-root
nodeSecret: cockroachdb-node
provided: false
tlsSecret: false
enabled: false
init:
image:
credentials: {}
pullPolicy: IfNotPresent
repository: cockroachdb/cockroach-k8s-request-cert
tag: '0.4'
serviceAccount:
create: true
name: ''
Storage: 1Gi
PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"volume1"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"10Gi"},"hostPath":{"path":"/data/volume1"}}}'
creationTimestamp: "2021-01-07T23:11:43Z"
finalizers:
- kubernetes.io/pv-protection
labels:
type: local
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:phase: {}
manager: k3s
operation: Update
time: "2021-01-07T23:11:43Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations: {}
f:labels:
.: {}
f:type: {}
f:spec:
f:accessModes: {}
f:capacity: {}
f:hostPath:
.: {}
f:path: {}
f:type: {}
f:persistentVolumeReclaimPolicy: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2021-01-07T23:11:43Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:capacity:
f:storage: {}
manager: Go-http-client
operation: Update
time: "2021-01-07T23:12:11Z"
name: volume1
resourceVersion: "173783"
selfLink: /api/v1/persistentvolumes/volume1
uid: 6e76984c-22cd-4219-9ff6-ba7f67c1ca72
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 4Gi
hostPath:
path: /data/volume1
type: ""
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
status:
phase: Available
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: "2021-01-07T23:29:17Z"
managedFields:
- apiVersion: storage.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:provisioner: {}
f:reclaimPolicy: {}
f:volumeBindingMode: {}
manager: rancher
operation: Update
time: "2021-01-07T23:29:17Z"
name: local-storage
resourceVersion: "180190"
selfLink: /apis/storage.k8s.io/v1/storageclasses/local-storage
uid: 0a5f8b75-7fb5-4965-91ee-91b0a087339a
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
From the provided details it looks like the storage class is missing or not set for the claim on the Rancher side.
Without a storage class the respective PVC won't get bound to a volume, so it gives that error. Storage classes vary between cloud providers and also depend on the type of disk required (SSD, HDD).
You can get more of an idea here: https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/
First check that your PV is available, and after that check the storage class and the PVC.
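For reference, a minimal sketch of how static binding through the local-storage class shown above would line up (the names and the hostPath are taken from the question; putting the class name on both objects and sizing the PV to cover the claim's 100Gi request are assumptions about the intended setup, and with the Helm chart the claim would normally come from the chart's storage.persistentVolume.storageClass value shown in the question rather than a hand-written PVC):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  storageClassName: local-storage   # must match the class requested by the claim
  capacity:
    storage: 100Gi                  # must be at least the claim's request
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/volume1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-cockroachdb-0
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi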
It looks like the issue was with Rancher this time (thank you @Harsh Manvar for answering). If you have more questions about CockroachDB you can also join the CockroachDB community Slack channel, where you will find loads of experts who can answer your questions in a timely manner. (And be sure to join the #community channel also to have some FUN!) :) https://go.crdb.dev/p/slack

Kubernetes pods are failing with a volume attach issue

We are seeing some pods fail while mounting/attaching volumes. This happens intermittently, and after bouncing the kubelet service the pods are able to reattach the volumes and succeed. We see the below error when a pod gets stuck.
I created a single PV and a corresponding PVC to use volume mounts for the pods:
Error:
Warning FailedMount 16m (x4 over 43m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[default-token-pwvpc podmetadata docker-sock workdir]: timed out waiting for the condition
Warning FailedMount 7m32s (x5 over 41m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[docker-sock workdir default-token-pwvpc podmetadata]: timed out waiting for the condition
Warning FailedMount 3m2s (x10 over 45m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[podmetadata docker-sock workdir default-token-pwvpc]: timed out waiting for the condition
Warning FailedMount 45s (x2 over 21m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[workdir default-token-pwvpc podmetadata docker-sock]: timed out waiting for the condition
Version:
Client Version: v1.17.2
Server Version: v1.17.2
Host OS:
CentOS 7.7
CNI:
Weave
apiVersion: v1
items:
- apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: "2020-07-17T21:55:47Z"
generation: 1
labels:
logicmonitor.com/collectorset: kubernetes-03
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:logicmonitor.com/collectorset: {}
f:spec:
f:podManagementPolicy: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector:
f:matchLabels:
.: {}
f:logicmonitor.com/collectorset: {}
f:template:
f:metadata:
f:labels:
.: {}
f:logicmonitor.com/collectorset: {}
f:namespace: {}
f:spec:
f:affinity:
.: {}
f:podAntiAffinity:
.: {}
f:requiredDuringSchedulingIgnoredDuringExecution: {}
f:containers:
k:{"name":"collector"}:
.: {}
f:env:
.: {}
k:{"name":"COLLECTOR_IDS"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"access_id"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"access_key"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"account"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"collector_size"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"collector_version"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"kubernetes"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"use_ea"}:
.: {}
f:name: {}
f:value: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources:
.: {}
f:limits:
.: {}
f:memory: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:serviceAccount: {}
f:serviceAccountName: {}
f:terminationGracePeriodSeconds: {}
f:updateStrategy:
f:type: {}
f:status:
f:replicas: {}
manager: collectorset-controller
operation: Update
time: "2020-08-22T03:42:35Z"
name: kubernetes-03
namespace: default
resourceVersion: "10831902"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/kubernetes-03
uid: 1296654b-77bc-4af8-9537-04f0a00bdd0c
spec:
podManagementPolicy: Parallel
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
logicmonitor.com/collectorset: kubernetes-03
serviceName: ""
template:
metadata:
creationTimestamp: null
labels:
logicmonitor.com/collectorset: kubernetes-03
namespace: default
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
logicmonitor.com/collectorset: kubernetes-03
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: account
valueFrom:
secretKeyRef:
key: account
name: collectorset-controller
optional: false
- name: access_id
valueFrom:
secretKeyRef:
key: accessID
name: collectorset-controller
optional: false
- name: access_key
valueFrom:
secretKeyRef:
key: accessKey
name: collectorset-controller
optional: false
- name: kubernetes
value: "true"
- name: collector_size
value: small
- name: collector_version
value: "0"
- name: use_ea
value: "false"
- name: COLLECTOR_IDS
value: "205"
image: logicmonitor/collector:latest
imagePullPolicy: Always
name: collector
resources:
limits:
memory: 2Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: collector
serviceAccountName: collector
terminationGracePeriodSeconds: 30
updateStrategy:
type: RollingUpdate
status:
collisionCount: 0
currentReplicas: 1
currentRevision: kubernetes-03-655b46ff69
observedGeneration: 1
readyReplicas: 1
replicas: 1
updateRevision: kubernetes-03-655b46ff69
updatedReplicas: 1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
kubectl get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-07-10T00:21:49Z"
finalizers:
- kubernetes.io/pv-protection
labels:
type: local
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:type: {}
f:spec:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:nfs:
.: {}
f:path: {}
f:server: {}
f:persistentVolumeReclaimPolicy: {}
f:storageClassName: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2020-07-10T00:21:49Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/bound-by-controller: {}
f:spec:
f:claimRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:resourceVersion: {}
f:uid: {}
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-10T00:21:52Z"
name: pv-modeldata
resourceVersion: "15764"
selfLink: /api/v1/persistentvolumes/pv-data
uid: 68521e84-0aa9-4643-a047-441a61451599
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 5999Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: pvc-modeldata
namespace: default
resourceVersion: "15762"
uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7
nfs:
path: /k8fs03
server: kubemaster01.rms.com
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
volumeMode: Filesystem
status:
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
# kubectl get pvc -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-07-10T00:21:52Z"
finalizers:
- kubernetes.io/pvc-protection
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/bind-completed: {}
f:pv.kubernetes.io/bound-by-controller: {}
f:spec:
f:volumeName: {}
f:status:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-10T00:21:52Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:storageClassName: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2020-07-10T00:21:52Z"
name: pvc-modeldata
namespace: default
resourceVersion: "15766"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/pvc-data
uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5999Gi
storageClassName: manual
volumeMode: Filesystem
volumeName: pv-data
status:
accessModes:
- ReadWriteMany
capacity:
storage: 5999Gi
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
# kubectl get sc -o yaml
apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Thanks,
Chittu

field label not supported: NAME

I use https://github.com/zalando/postgres-operator to deploy PostgreSQL instances and have the following running instances:
kubectl get postgresqls.acid.zalan.do
NAME                  TEAM       VERSION  PODS  VOLUME   CPU-REQUEST  MEMORY-REQUEST  AGE    STATUS
acid-minimal-cluster  acid       12       2     1Gi                                   2d18h  SyncFailed
acid-userdb           acid       12       2     5Gi      100m         100Mi           2d18h  SyncFailed
databaker-userdb      databaker  12       2     2Gi      100m         100Mi           2d18h  SyncFailed
databaker-users-db    databaker  12       2     2Gi      100m         100Mi           2d17h  SyncFailed
I try to get the instance as follows:
kubectl get postgresql --field-selector NAME=databaker-userdb
Error from server (BadRequest): Unable to find "acid.zalan.do/v1, Resource=postgresqls" that match label selector "", field selector "NAME=databaker-userdb": field label not supported: NAME
As you can see, I get an error message.
What am I doing wrong?
Update
The yaml file
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
annotations:
meta.helm.sh/release-name: user-db
meta.helm.sh/release-namespace: default
creationTimestamp: "2020-06-16T15:58:28Z"
generation: 1
labels:
app.kubernetes.io/managed-by: Helm
team: databaker
managedFields:
- apiVersion: acid.zalan.do/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:meta.helm.sh/release-name: {}
f:meta.helm.sh/release-namespace: {}
f:labels:
.: {}
f:app.kubernetes.io/managed-by: {}
f:team: {}
f:spec:
.: {}
f:databases:
.: {}
f:users: {}
f:numberOfInstances: {}
f:postgresql:
.: {}
f:version: {}
f:resources:
.: {}
f:limits:
.: {}
f:cpu: {}
f:memory: {}
f:requests:
.: {}
f:cpu: {}
f:memory: {}
f:teamId: {}
f:users:
.: {}
f:databaker: {}
f:volume:
.: {}
f:size: {}
manager: Go-http-client
operation: Update
time: "2020-06-16T15:58:28Z"
- apiVersion: acid.zalan.do/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:PostgresClusterStatus: {}
manager: postgres-operator
operation: Update
time: "2020-06-16T15:58:53Z"
name: databaker-users-db
namespace: default
resourceVersion: "68486"
selfLink: /apis/acid.zalan.do/v1/namespaces/default/postgresqls/databaker-users-db
uid: 8bc3b591-4346-4cca-a1ae-682a1ad16615
spec:
databases:
users: databaker
numberOfInstances: 2
postgresql:
version: "12"
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
teamId: databaker
users:
databaker:
- superuser
- createdb
volume:
size: 2Gi
status:
PostgresClusterStatus: Running
From the documentation
Field selectors let you select Kubernetes resources based on the value of one or more resource fields. Here are some examples of field selector queries:
metadata.name=my-service
metadata.namespace!=default
status.phase=Pending
NAME is not a resource field. You can use the following instead:
$ kubectl get postgresql --field-selector metadata.name=databaker-userdb
Also, according to the documentation:
Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error.
To filter by status instead, you can run:
$ kubectl get postgresql | grep "SyncFailed"
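Since all resource types support metadata.name and metadata.namespace, those selectors can also be combined with a comma; a small sketch (the default namespace matches the manifest in the question):
$ kubectl get postgresql --field-selector metadata.name=databaker-userdb,metadata.namespace=default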