Installing a database in Kubernetes

I'm trying to install CockroachDB with Rancher and I'm running into a problem; the PVC events show:
FailedBinding (5) 14 sec ago no persistent volumes available for this claim and no storage class is set
How can this be solved?
Here is the configuration on my local machine:
PersistentVolumeClaim: datadir-cockroachdb-0
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2021-01-07T23:50:42Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app.kubernetes.io/component: cockroachdb
app.kubernetes.io/instance: cockroachdb
app.kubernetes.io/name: cockroachdb
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app.kubernetes.io/component: {}
f:app.kubernetes.io/instance: {}
f:app.kubernetes.io/name: {}
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:volumeMode: {}
f:status:
f:phase: {}
manager: k3s
operation: Update
time: "2021-01-07T23:50:41Z"
name: datadir-cockroachdb-0
namespace: default
resourceVersion: "188922"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/datadir-cockroachdb-0
uid: ef83d3c7-0309-44a8-b379-0134835d97a9
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
volumeMode: Filesystem
status:
phase: Pending
CockroachDB (Helm chart values)
clusterDomain: cluster.local
conf:
attrs: []
cache: 25%
cluster-name: ''
disable-cluster-name-verification: false
http-port: 8080
join: []
locality: ''
logtostderr: INFO
max-disk-temp-storage: 0
max-offset: 500ms
max-sql-memory: 25%
port: 26257
single-node: false
sql-audit-dir: ''
image:
credentials: {}
pullPolicy: IfNotPresent
repository: cockroachdb/cockroach
tag: v20.1.3
ingress:
annotations: {}
enabled: false
hosts: []
labels: {}
paths:
- /
tls: []
init:
affinity: {}
annotations: {}
labels:
app.kubernetes.io/component: init
nodeSelector: {}
resources: {}
tolerations: []
labels: {}
networkPolicy:
enabled: false
ingress:
grpc: []
http: []
service:
discovery:
annotations: {}
labels:
app.kubernetes.io/component: cockroachdb
ports:
grpc:
external:
name: grpc
port: 26257
internal:
name: grpc-internal
port: 26257
http:
name: http
port: 8080
public:
annotations: {}
labels:
app.kubernetes.io/component: cockroachdb
type: ClusterIP
statefulset:
annotations: {}
args: []
budget:
maxUnavailable: 1
env: []
labels:
app.kubernetes.io/component: cockroachdb
nodeAffinity: {}
nodeSelector: {}
podAffinity: {}
podAntiAffinity:
type: soft
weight: 100
podManagementPolicy: Parallel
priorityClassName: ''
replicas: 3
resources: {}
secretMounts: []
tolerations: []
updateStrategy:
type: RollingUpdate
storage:
hostPath: ''
persistentVolume: volume1
annotations: {}
enabled: true
labels: {}
size: 1Gi
storageClass: local-storage ''
tls:
certs:
clientRootSecret: cockroachdb-root
nodeSecret: cockroachdb-node
provided: false
tlsSecret: false
enabled: false
init:
image:
credentials: {}
pullPolicy: IfNotPresent
repository: cockroachdb/cockroach-k8s-request-cert
tag: '0.4'
serviceAccount:
create: true
name: ''
Storage: 1Gi
PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"volume1"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"10Gi"},"hostPath":{"path":"/data/volume1"}}}'
creationTimestamp: "2021-01-07T23:11:43Z"
finalizers:
- kubernetes.io/pv-protection
labels:
type: local
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:phase: {}
manager: k3s
operation: Update
time: "2021-01-07T23:11:43Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations: {}
f:labels:
.: {}
f:type: {}
f:spec:
f:accessModes: {}
f:capacity: {}
f:hostPath:
.: {}
f:path: {}
f:type: {}
f:persistentVolumeReclaimPolicy: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2021-01-07T23:11:43Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:capacity:
f:storage: {}
manager: Go-http-client
operation: Update
time: "2021-01-07T23:12:11Z"
name: volume1
resourceVersion: "173783"
selfLink: /api/v1/persistentvolumes/volume1
uid: 6e76984c-22cd-4219-9ff6-ba7f67c1ca72
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 4Gi
hostPath:
path: /data/volume1
type: ""
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
status:
phase: Available
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: "2021-01-07T23:29:17Z"
managedFields:
- apiVersion: storage.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:provisioner: {}
f:reclaimPolicy: {}
f:volumeBindingMode: {}
manager: rancher
operation: Update
time: "2021-01-07T23:29:17Z"
name: local-storage
resourceVersion: "180190"
selfLink: /apis/storage.k8s.io/v1/storageclasses/local-storage
uid: 0a5f8b75-7fb5-4965-91ee-91b0a087339a
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

From the provided details it looks like your storage class is missing in Rancher.
Without a storage class the respective PVC won't get bound, so it's giving an error. Storage classes may change with cloud providers and also depend on the type of disk required (SSD, HDD).
You can get more details here: https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/
First check that your PV is available, and after that check the storage class and the PVC.
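For example, a minimal sketch of what that could look like here, assuming you want the chart's PVC to bind to the pre-created hostPath volume through the existing local-storage class (names and paths are taken from the manifests above; the PV capacity must be at least what the PVC requests, and the chart's storage.persistentVolume.storageClass must name the same class):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  storageClassName: local-storage   # must match the class the PVC requests
  capacity:
    storage: 100Gi                  # at least the size requested by the chart's PVC
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/volume1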

It looks like the issue was with Rancher this time (thank you @Harsh Manvar for answering). If you have more questions about CockroachDB you can also join the CockroachDB community Slack, where you will find loads of experts who can answer your questions in a timely manner. (And be sure to join the #community channel to have some fun!) :) https://go.crdb.dev/p/slack

Related

Kubernetes replica does not receive traffic

I am trying to understand how Kubernetes replicas work, and I am getting an unexpected (?) behavior. If I understand correctly, when a Service selects a Deployment it will distribute the requests across all pods. My Deployment has 3 replicas and the pods are being selected properly by the Service, but requests only go to one of them (the other two remain unused).
I followed this tutorial and, after scaling the Deployment, I made multiple GET requests to the Service. I was expecting the requests to be distributed across the replicas, but only one replica received and handled all of them. I am not sure if that's how it works, or maybe I need to do something else?
I would like to add that my first test was a very simple endpoint that resolves the request immediately. When I tested adding load to the endpoint (a delay before resolving), it did start sending requests to the other replicas. I would love to understand how this works; I haven't been able to find any docs about it.
This is the deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-11-23T15:26:36Z"
generation: 2
labels:
app: express-echo
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:replicas: {}
manager: kubectl
operation: Update
subresource: scale
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"express-echo"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kubectl-create
operation: Update
time: "2022-11-23T15:26:36Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-11-23T15:28:18Z"
name: express-echo
namespace: default
resourceVersion: "5192"
uid: 32288873-1e30-44a1-9226-0214c1becd35
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: express-echo
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: express-echo
spec:
containers:
- image: gcr.io/gcp-project/express-echo:1.0.0
imagePullPolicy: IfNotPresent
name: express-echo
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-11-23T15:26:36Z"
lastUpdateTime: "2022-11-23T15:27:01Z"
message: ReplicaSet "express-echo-547f8bcfb5" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-11-23T15:28:18Z"
lastUpdateTime: "2022-11-23T15:28:18Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 2
readyReplicas: 3
replicas: 3
And this is the service
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
creationTimestamp: "2022-11-23T15:26:48Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: express-echo
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:allocateLoadBalancerNodePorts: {}
f:externalTrafficPolicy: {}
f:internalTrafficPolicy: {}
f:ports:
.: {}
k:{"port":80,"protocol":"TCP"}:
.: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
f:sessionAffinity: {}
f:type: {}
manager: kubectl-expose
operation: Update
time: "2022-11-23T15:26:48Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"service.kubernetes.io/load-balancer-cleanup": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-11-23T15:27:24Z"
name: express-echo
namespace: default
resourceVersion: "4765"
uid: 99346a8a-1e89-476e-a21f-0d9c98d86b7d
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.0.8.195
clusterIPs:
- 10.0.8.195
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 31123
port: 80
protocol: TCP
targetPort: 3001
selector:
app: express-echo
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 1.1.1.1

Getting "response 404 (backend NotFound), service rules for the path non-existent" Using Ingress Google Cloud

I want my backend service, which is deployed on Kubernetes, to be accessible through an Ingress with the path /sso-dev/. For that I have deployed my service on a Kubernetes container; the Deployment, Service and Ingress manifests are given below. But while accessing the Ingress load balancer API with the path /sso-dev/ it throws the error "response 404 (backend NotFound), service rules for the path non-existent".
I just need help accessing the backend service, which works fine via the Kubernetes container load balancer IP.
Here is my Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/backends: '{"k8s-be-30969--6d0e236a1c7d6409":"HEALTHY","k8s1-6d0e236a-default-sso-dev-service-80-849fdb46":"HEALTHY"}'
ingress.kubernetes.io/forwarding-rule: k8s2-fr-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/target-proxy: k8s2-tp-uwdva40x-default-my-ingress-h98d0sfl
ingress.kubernetes.io/url-map: k8s2-um-uwdva40x-default-my-ingress-h98d0sfl
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/backend-protocol":"HTTP","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"sso-dev-service","port":{"number":80}}},"path":"/sso-dev/*","pathType":"ImplementationSpecific"}]}}]}}
nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2022-06-22T12:30:49Z"
finalizers:
- networking.gke.io/ingress-finalizer-V2
generation: 1
managedFields:
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:nginx.ingress.kubernetes.io/backend-protocol: {}
f:nginx.ingress.kubernetes.io/rewrite-target: {}
f:spec:
f:rules: {}
manager: kubectl-client-side-apply
operation: Update
time: "2022-06-22T12:30:49Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:ingress.kubernetes.io/backends: {}
f:ingress.kubernetes.io/forwarding-rule: {}
f:ingress.kubernetes.io/target-proxy: {}
f:ingress.kubernetes.io/url-map: {}
f:finalizers:
.: {}
v:"networking.gke.io/ingress-finalizer-V2": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:32:13Z"
name: my-ingress
namespace: default
resourceVersion: "13073497"
uid: 253e067f-0711-4d24-a706-497692dae4d9
spec:
rules:
- http:
paths:
- backend:
service:
name: sso-dev-service
port:
number: 80
path: /sso-dev/*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- ip: 34.111.49.35
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2022-06-22T08:52:11Z"
generation: 1
labels:
app: sso-dev
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"cent-sha256-1"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:52:11Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T11:51:22Z"
name: sso-dev
namespace: default
resourceVersion: "13051665"
uid: c8732885-b7d8-450c-86c4-19769638eb2a
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: sso-dev
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: sso-dev
spec:
containers:
- image: us-east4-docker.pkg.dev/centegycloud-351515/sso/cent#sha256:64b50553219db358945bf3cd6eb865dd47d0d45664464a9c334602c438bbaed9
imagePullPolicy: IfNotPresent
name: cent-sha256-1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-22T08:52:11Z"
lastUpdateTime: "2022-06-22T08:52:25Z"
message: ReplicaSet "sso-dev-8566f4bc55" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2022-06-22T11:51:22Z"
lastUpdateTime: "2022-06-22T11:51:22Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-6d0e236a-default-sso-dev-service-80-849fdb46"},"zones":["us-central1-c"]}'
creationTimestamp: "2022-06-22T08:53:32Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: sso-dev
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:allocateLoadBalancerNodePorts: {}
f:externalTrafficPolicy: {}
f:internalTrafficPolicy: {}
f:ports:
.: {}
k:{"port":80,"protocol":"TCP"}:
.: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
f:sessionAffinity: {}
f:type: {}
manager: GoogleCloudConsole
operation: Update
time: "2022-06-22T08:53:32Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"service.kubernetes.io/load-balancer-cleanup": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2022-06-22T08:53:58Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cloud.google.com/neg-status: {}
manager: glbc
operation: Update
subresource: status
time: "2022-06-22T12:30:49Z"
name: sso-dev-service
namespace: default
resourceVersion: "13071362"
uid: 03b0cbe6-1ed8-4441-b2c5-93ae5803a582
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.32.6.103
clusterIPs:
- 10.32.6.103
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30584
port: 80
protocol: TCP
targetPort: 8080
selector:
app: sso-dev
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 104.197.93.226
You need to change the pathType to Prefix in your Ingress, as follows:
pathType: Prefix
I noted that you are using pathType: ImplementationSpecific. With this value, the matching depends on the IngressClass, so I think for your case the Prefix path type should be more helpful. Additionally, you can find more information about the Ingress path types supported in Kubernetes in this link.
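For reference, this is roughly how the rule from the Ingress above would look with that change; a sketch only, and it assumes the trailing wildcard is dropped, since Prefix paths are written without one (/sso-dev/* becomes /sso-dev):
spec:
  rules:
  - http:
      paths:
      - path: /sso-dev          # no trailing /* with the Prefix path type
        pathType: Prefix
        backend:
          service:
            name: sso-dev-service
            port:
              number: 80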

Kubernetes Cassandra Datacenter deletes PVC while deleting Datacenter

I have the Cassandra operator installed and I set up a Cassandra datacenter/cluster with 3 nodes.
I created a sample keyspace and table and inserted data. I see it has created 3 PVCs in my storage section. When I delete the datacenter it deletes the associated PVCs as well, so when I set up a datacenter/cluster with the same configuration it is completely new: no earlier keyspaces or tables.
How can I make the data persist for future use? I am using the sample YAML from below:
https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x
I don't find any persistentVolumeClaim configuration in it; it only has storageConfig:
cassandraDataVolumeClaimSpec:
Has anyone come across such a scenario?
Edit: Storage class details:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
description: Provides RWO and RWX Filesystem volumes with Retain Policy
storageclass.kubernetes.io/is-default-class: "false"
name: ocs-storagecluster-cephfs-retain
parameters:
clusterID: openshift-storage
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
Here is the Cassandra cluster YAML:
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
name: dc
generation: 2
spec:
size: 3
config:
cassandra-yaml:
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
jvm-options:
additional-jvm-opts:
- '-Ddse.system_distributed_replication_dc_names=dc1'
- '-Ddse.system_distributed_replication_per_dc=1'
initial_heap_size: 800M
max_heap_size: 800M
resources: {}
clusterName: cassandra
systemLoggerResources: {}
configBuilderResources: {}
serverVersion: 3.11.7
serverType: cassandra
storageConfig:
cassandraDataVolumeClaimSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: ocs-storagecluster-cephfs-retain
managementApiAuth:
insecure: {}
EDIT: PV Details:
oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
creationTimestamp: "2022-02-23T20:52:54Z"
finalizers:
- kubernetes.io/pv-protection
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/provisioned-by: {}
f:spec:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:claimRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:resourceVersion: {}
f:uid: {}
f:csi:
.: {}
f:controllerExpandSecretRef:
.: {}
f:name: {}
f:namespace: {}
f:driver: {}
f:nodeStageSecretRef:
.: {}
f:name: {}
f:namespace: {}
f:volumeAttributes:
.: {}
f:clusterID: {}
f:fsName: {}
f:storage.kubernetes.io/csiProvisionerIdentity: {}
f:subvolumeName: {}
f:volumeHandle: {}
f:persistentVolumeReclaimPolicy: {}
f:storageClassName: {}
f:volumeMode: {}
manager: csi-provisioner
operation: Update
time: "2022-02-23T20:52:54Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2022-02-23T20:52:54Z"
name: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
resourceVersion: "51684941"
selfLink: /api/v1/persistentvolumes/pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7
uid: 8ded2de5-6d4e-45a1-9b89-a385d74d6d4a
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: server-data-cstone-cassandra-cstone-dc-default-sts-1
namespace: dv01-cornerstone
resourceVersion: "51684914"
uid: 15def0ca-6cbc-4569-a560-7b9e89a7b7a7
csi:
controllerExpandSecretRef:
name: rook-csi-cephfs-provisioner
namespace: openshift-storage
driver: openshift-storage.cephfs.csi.ceph.com
nodeStageSecretRef:
name: rook-csi-cephfs-node
namespace: openshift-storage
volumeAttributes:
clusterID: openshift-storage
fsName: ocs-storagecluster-cephfilesystem
storage.kubernetes.io/csiProvisionerIdentity: 1645064620191-8081-openshift-storage.cephfs.csi.ceph.com
subvolumeName: csi-vol-92d5e07d-94ea-11ec-92e8-0a580a20028c
volumeHandle: 0001-0011-openshift-storage-0000000000000001-92d5e07d-94ea-11ec-92e8-0a580a20028c
persistentVolumeReclaimPolicy: Retain
storageClassName: ocs-storagecluster-cephfs-retain
volumeMode: Filesystem
status:
phase: Bound
According to the spec:
The storage configuration. This sets up a 100GB volume at /var/lib/cassandra
on each server pod. The user is left to create the server-storage storage
class by following these directions...
https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd
Before you deploy the Cassandra spec, first ensure your cluster already has the CSI driver installed and working properly, then proceed to create the StorageClass the spec requires:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: server-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-ssd
Re-deploying your Cassandra cluster now should retain the data disks upon deletion.
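The CassandraDatacenter's claim template would then reference that class; a sketch based on the storageConfig block already shown above (the size and access mode are simply the values used there):
storageConfig:
  cassandraDataVolumeClaimSpec:
    storageClassName: server-storage   # the Retain-policy class created above
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi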

why the postgresql kubernetes statefulset did not claim the PVC

Today I wanted to change the PostgreSQL StatefulSet's PVC name, but to my surprise I did not find any claim on the PVC in the Kubernetes definition. This is the Kubernetes StatefulSet definition of PostgreSQL:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: reddwarf-postgresql-postgresql
namespace: reddwarf-storage
uid: 787a18c8-f6fb-4deb-bb07-3c3d123cf6f9
resourceVersion: '21931453'
generation: 30
creationTimestamp: '2021-08-05T05:29:03Z'
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-10.9.1
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"meta.helm.sh/release-name":"reddwarf-postgresql","meta.helm.sh/release-namespace":"reddwarf-storage"},"creationTimestamp":"2021-08-05T05:29:03Z","generation":12,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1"},"managedFields":[{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{}}},"f:spec":{"f:podManagementPolicy":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:serviceName":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{},"f:role":{}},"f:name":{}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"BITNAMI_DEBUG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"PGDATA\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_CLIENT_MIN_MESSAGES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_LDAP\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_TLS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_CONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_DISCONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_HOSTNAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PGAUDIT_LOG_CATALOG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PORT_NUMBER\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_SHARED_PRELOAD_LIBRARIES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_VOLUME_DIR\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRES_PASSWORD\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:secretKeyRef":{".":{},"f:key":{},"f:name":{}}}},"k:{\"name\":\"POSTGRES_USER\"}":{".":{},"f:name":{},"f:value":{}}},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5432,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/bitnami/postgresql\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/dev/shm\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:fsGroup":{}},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\
"name\":\"dshm\"}":{".":{},"f:emptyDir":{".":{},"f:medium":{}},"f:name":{}}}}},"f:updateStrategy":{"f:type":{}},"f:volumeClaimTemplates":{}}},"manager":"Go-http-client","operation":"Update","time":"2021-08-05T05:29:03Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:image":{}}}}}}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2021-08-10T16:50:45Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:args":{}}}}}}},"manager":"kubectl","operation":"Update","time":"2021-08-11T01:46:21Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:collisionCount":{},"f:currentReplicas":{},"f:currentRevision":{},"f:observedGeneration":{},"f:replicas":{},"f:updateRevision":{},"f:updatedReplicas":{}}},"manager":"kube-controller-manager","operation":"Update","time":"2021-08-11T02:24:07Z"}],"name":"reddwarf-postgresql-postgresql","namespace":"reddwarf-storage","selfLink":"/apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql","uid":"787a18c8-f6fb-4deb-bb07-3c3d123cf6f9"},"spec":{"podManagementPolicy":"OrderedReady","replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql","role":"primary"}},"serviceName":"reddwarf-postgresql-headless","template":{"metadata":{"creationTimestamp":null,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1","role":"primary"},"name":"reddwarf-postgresql"},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql"}},"namespaces":["reddwarf-storage"],"topologyKey":"kubernetes.io/hostname"},"weight":1}]}},"automountServiceAccountToken":false,"containers":[{"env":[{"name":"BITNAMI_DEBUG","value":"false"},{"name":"POSTGRESQL_PORT_NUMBER","value":"5432"},{"name":"POSTGRESQL_VOLUME_DIR","value":"/bitnami/postgresql"},{"name":"PGDATA","value":"/bitnami/postgresql/data"},{"name":"POSTGRES_USER","value":"postgres"},{"name":"POSTGRES_PASSWORD","valueFrom":{"secretKeyRef":{"key":"postgresql-password","name":"reddwarf-postgresql"}}},{"name":"POSTGRESQL_ENABLE_LDAP","value":"no"},{"name":"POSTGRESQL_ENABLE_TLS","value":"no"},{"name":"POSTGRESQL_LOG_HOSTNAME","value":"false"},{"name":"POSTGRESQL_LOG_CONNECTIONS","value":"false"},{"name":"POSTGRESQL_LOG_DISCONNECTIONS","value":"false"},{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG","value":"off"},{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES","value":"error"},{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES","value":"pgaudit"}],"image":"docker.io/bitnami/postgresql:13.3.0-debian-10-r75","imagePullPolicy":"IfNotPresent","livenessProbe":{"exec":{"command":["/bin/sh","-c","exec
pg_isready -U \"postgres\" -h 127.0.0.1 -p
5432"]},"failureThreshold":6,"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"name":"reddwarf-postgresql","ports":[{"containerPort":5432,"name":"tcp-postgresql","protocol":"TCP"}],"readinessProbe":{"exec":{"command":["/bin/sh","-c","-e","exec
pg_isready -U \"postgres\" -h 127.0.0.1 -p 5432\n[ -f
/opt/bitnami/postgresql/tmp/.initialized ] || [ -f
/bitnami/postgresql/.initialized
]\n"]},"failureThreshold":6,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"resources":{"requests":{"cpu":"250m","memory":"256Mi"}},"securityContext":{"runAsUser":1001},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/dev/shm","name":"dshm"},{"mountPath":"/bitnami/postgresql","name":"data"}]}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{"fsGroup":1001},"terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{"medium":"Memory"},"name":"dshm"}]}},"updateStrategy":{"type":"RollingUpdate"},"volumeClaimTemplates":[{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"creationTimestamp":null,"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"8Gi"}},"volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}}
meta.helm.sh/release-name: reddwarf-postgresql
meta.helm.sh/release-namespace: reddwarf-storage
managedFields:
- manager: Go-http-client
operation: Update
apiVersion: apps/v1
time: '2021-08-05T05:29:03Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:meta.helm.sh/release-name': {}
'f:meta.helm.sh/release-namespace': {}
'f:labels':
.: {}
'f:app.kubernetes.io/component': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/name': {}
'f:helm.sh/chart': {}
'f:spec':
'f:podManagementPolicy': {}
'f:replicas': {}
'f:revisionHistoryLimit': {}
'f:selector': {}
'f:serviceName': {}
'f:template':
'f:metadata':
'f:labels':
.: {}
'f:app.kubernetes.io/component': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/name': {}
'f:helm.sh/chart': {}
'f:role': {}
'f:name': {}
'f:spec':
'f:affinity':
.: {}
'f:podAntiAffinity':
.: {}
'f:preferredDuringSchedulingIgnoredDuringExecution': {}
'f:automountServiceAccountToken': {}
'f:containers':
'k:{"name":"reddwarf-postgresql"}':
.: {}
'f:env':
.: {}
'k:{"name":"BITNAMI_DEBUG"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"PGDATA"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_ENABLE_LDAP"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_ENABLE_TLS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_CONNECTIONS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_DISCONNECTIONS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_HOSTNAME"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_PORT_NUMBER"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_VOLUME_DIR"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRES_PASSWORD"}':
.: {}
'f:name': {}
'f:valueFrom':
.: {}
'f:secretKeyRef':
.: {}
'f:key': {}
'f:name': {}
'k:{"name":"POSTGRES_USER"}':
.: {}
'f:name': {}
'f:value': {}
'f:imagePullPolicy': {}
'f:livenessProbe':
.: {}
'f:exec':
.: {}
'f:command': {}
'f:failureThreshold': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:successThreshold': {}
'f:timeoutSeconds': {}
'f:name': {}
'f:ports':
.: {}
'k:{"containerPort":5432,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:name': {}
'f:protocol': {}
'f:readinessProbe':
.: {}
'f:exec':
.: {}
'f:command': {}
'f:failureThreshold': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:successThreshold': {}
'f:timeoutSeconds': {}
'f:resources':
.: {}
'f:requests':
.: {}
'f:cpu': {}
'f:memory': {}
'f:securityContext':
.: {}
'f:runAsUser': {}
'f:terminationMessagePath': {}
'f:terminationMessagePolicy': {}
'f:volumeMounts':
.: {}
'k:{"mountPath":"/bitnami/postgresql"}':
.: {}
'f:mountPath': {}
'f:name': {}
'k:{"mountPath":"/dev/shm"}':
.: {}
'f:mountPath': {}
'f:name': {}
'f:dnsPolicy': {}
'f:restartPolicy': {}
'f:schedulerName': {}
'f:securityContext':
.: {}
'f:fsGroup': {}
'f:terminationGracePeriodSeconds': {}
'f:volumes':
.: {}
'k:{"name":"dshm"}':
.: {}
'f:emptyDir':
.: {}
'f:medium': {}
'f:name': {}
'f:updateStrategy':
'f:type': {}
'f:volumeClaimTemplates': {}
- manager: kubectl-client-side-apply
operation: Update
apiVersion: apps/v1
time: '2021-08-10T16:50:45Z'
fieldsType: FieldsV1
fieldsV1:
'f:spec':
'f:template':
'f:spec':
'f:containers':
'k:{"name":"reddwarf-postgresql"}':
'f:image': {}
- manager: kubectl
operation: Update
apiVersion: apps/v1
time: '2021-08-11T02:29:20Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:kubectl.kubernetes.io/last-applied-configuration': {}
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2021-11-27T03:07:58Z'
fieldsType: FieldsV1
fieldsV1:
'f:status':
'f:collisionCount': {}
'f:currentRevision': {}
'f:observedGeneration': {}
'f:replicas': {}
'f:updateRevision': {}
selfLink: >-
/apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql
status:
observedGeneration: 30
replicas: 0
currentRevision: reddwarf-postgresql-postgresql-5695cb9676
updateRevision: reddwarf-postgresql-postgresql-5695cb9676
collisionCount: 0
spec:
replicas: 0
selector:
matchLabels:
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/name: postgresql
role: primary
template:
metadata:
name: reddwarf-postgresql
creationTimestamp: null
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-10.9.1
role: primary
spec:
volumes:
- name: dshm
emptyDir:
medium: Memory
containers:
- name: reddwarf-postgresql
image: 'docker.io/bitnami/postgresql:13.3.0-debian-10-r75'
ports:
- name: tcp-postgresql
containerPort: 5432
protocol: TCP
env:
- name: BITNAMI_DEBUG
value: 'false'
- name: POSTGRESQL_PORT_NUMBER
value: '5432'
- name: POSTGRESQL_VOLUME_DIR
value: /bitnami/postgresql
- name: PGDATA
value: /bitnami/postgresql/data
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: reddwarf-postgresql
key: postgresql-password
- name: POSTGRESQL_ENABLE_LDAP
value: 'no'
- name: POSTGRESQL_ENABLE_TLS
value: 'no'
- name: POSTGRESQL_LOG_HOSTNAME
value: 'false'
- name: POSTGRESQL_LOG_CONNECTIONS
value: 'false'
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: 'false'
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: 'off'
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: error
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: pgaudit
resources:
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
livenessProbe:
exec:
command:
- /bin/sh
- '-c'
- exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/sh
- '-c'
- '-e'
- >
exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f
/bitnami/postgresql/.initialized ]
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1001
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
automountServiceAccountToken: false
securityContext:
fsGroup: 1001
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/name: postgresql
namespaces:
- reddwarf-storage
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
volumeClaimTemplates:
- kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: data
creationTimestamp: null
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
serviceName: reddwarf-postgresql-headless
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
revisionHistoryLimit: 10
This StatefulSet currently has the PVC named data-reddwarf-postgresql-postgresql-0 bound, but I did not find that PVC defined anywhere in the StatefulSet YAML. Where is the PVC binding defined? What should I do to change the PVC so that it binds to a new one? I installed this PostgreSQL into Kubernetes from a Helm chart.
A PVC that gets created as part of a StatefulSet has a name which is an amalgamation of 3 components joined by -:
the name defined in the volumeClaimTemplates section, here data
the name of the StatefulSet in the metadata section, which is reddwarf-postgresql-postgresql
its replica ordinal; for the first replica it is 0
So finally the name of the PVC that gets created when you create this StatefulSet is data-reddwarf-postgresql-postgresql-0, which is the PVC name you are also seeing in your setup.
Please note that when you delete the StatefulSet, the PVC is not deleted automatically; we need to delete the PVC separately. When you recreate or scale up the StatefulSet, if a PVC matching the above naming convention and spec does not already exist, it will create one.
This behaviour is described in the Kubernetes documentation.
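As a concrete illustration with the values from this chart (the kubectl command is just one way to confirm the claim exists and what it is bound to):
# <volumeClaimTemplates name> - <StatefulSet name>                - <ordinal>
#         data                - reddwarf-postgresql-postgresql    - 0
kubectl get pvc data-reddwarf-postgresql-postgresql-0 -n reddwarf-storage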

Kubernetes pods are failing with a volume attach issue

We are seeing some pods failing while mounting/attaching volumes. This happens intermittently, and after bouncing the kubelet service the pods are able to reattach the volumes and succeed. We see the error below when a pod gets stuck.
I created a single PV and a corresponding PVC to use as volume mounts for the pods:
Error:
Warning FailedMount 16m (x4 over 43m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[default-token-pwvpc podmetadata docker-sock workdir]: timed out waiting for the condition
Warning FailedMount 7m32s (x5 over 41m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[docker-sock workdir default-token-pwvpc podmetadata]: timed out waiting for the condition
Warning FailedMount 3m2s (x10 over 45m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[podmetadata docker-sock workdir default-token-pwvpc]: timed out waiting for the condition
Warning FailedMount 45s (x2 over 21m) kubelet, ca1md-k8w03-02 Unable to attach or mount volumes: unmounted volumes=[workdir], unattached volumes=[workdir default-token-pwvpc podmetadata docker-sock]: timed out waiting for the condition
Version:
Client Version: v1.17.2
Server Version: v1.17.2
Host OS:
Centos 7.7
CNI:
Weave
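For context, "bouncing the kubelet service" on these CentOS 7 hosts is assumed to mean restarting it with systemd and then re-checking the stuck pod's events; a sketch (pod and namespace names are placeholders):
# on the affected node (CentOS 7 / systemd)
systemctl restart kubelet
# then re-check the pod's mount events
kubectl describe pod <pod-name> -n <namespace>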
apiVersion: v1
items:
- apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: "2020-07-17T21:55:47Z"
generation: 1
labels:
logicmonitor.com/collectorset: kubernetes-03
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:logicmonitor.com/collectorset: {}
f:spec:
f:podManagementPolicy: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector:
f:matchLabels:
.: {}
f:logicmonitor.com/collectorset: {}
f:template:
f:metadata:
f:labels:
.: {}
f:logicmonitor.com/collectorset: {}
f:namespace: {}
f:spec:
f:affinity:
.: {}
f:podAntiAffinity:
.: {}
f:requiredDuringSchedulingIgnoredDuringExecution: {}
f:containers:
k:{"name":"collector"}:
.: {}
f:env:
.: {}
k:{"name":"COLLECTOR_IDS"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"access_id"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"access_key"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"account"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"collector_size"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"collector_version"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"kubernetes"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"use_ea"}:
.: {}
f:name: {}
f:value: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources:
.: {}
f:limits:
.: {}
f:memory: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:serviceAccount: {}
f:serviceAccountName: {}
f:terminationGracePeriodSeconds: {}
f:updateStrategy:
f:type: {}
f:status:
f:replicas: {}
manager: collectorset-controller
operation: Update
time: "2020-08-22T03:42:35Z"
name: kubernetes-03
namespace: default
resourceVersion: "10831902"
selfLink: /apis/apps/v1/namespaces/default/statefulsets/kubernetes-03
uid: 1296654b-77bc-4af8-9537-04f0a00bdd0c
spec:
podManagementPolicy: Parallel
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
logicmonitor.com/collectorset: kubernetes-03
serviceName: ""
template:
metadata:
creationTimestamp: null
labels:
logicmonitor.com/collectorset: kubernetes-03
namespace: default
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
logicmonitor.com/collectorset: kubernetes-03
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: account
valueFrom:
secretKeyRef:
key: account
name: collectorset-controller
optional: false
- name: access_id
valueFrom:
secretKeyRef:
key: accessID
name: collectorset-controller
optional: false
- name: access_key
valueFrom:
secretKeyRef:
key: accessKey
name: collectorset-controller
optional: false
- name: kubernetes
value: "true"
- name: collector_size
value: small
- name: collector_version
value: "0"
- name: use_ea
value: "false"
- name: COLLECTOR_IDS
value: "205"
image: logicmonitor/collector:latest
imagePullPolicy: Always
name: collector
resources:
limits:
memory: 2Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: collector
serviceAccountName: collector
terminationGracePeriodSeconds: 30
updateStrategy:
type: RollingUpdate
status:
collisionCount: 0
currentReplicas: 1
currentRevision: kubernetes-03-655b46ff69
observedGeneration: 1
readyReplicas: 1
replicas: 1
updateRevision: kubernetes-03-655b46ff69
updatedReplicas: 1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
kubectl get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-07-10T00:21:49Z"
finalizers:
- kubernetes.io/pv-protection
labels:
type: local
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:type: {}
f:spec:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:nfs:
.: {}
f:path: {}
f:server: {}
f:persistentVolumeReclaimPolicy: {}
f:storageClassName: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2020-07-10T00:21:49Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/bound-by-controller: {}
f:spec:
f:claimRef:
.: {}
f:apiVersion: {}
f:kind: {}
f:name: {}
f:namespace: {}
f:resourceVersion: {}
f:uid: {}
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-10T00:21:52Z"
name: pv-modeldata
resourceVersion: "15764"
selfLink: /api/v1/persistentvolumes/pv-data
uid: 68521e84-0aa9-4643-a047-441a61451599
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 5999Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: pvc-modeldata
namespace: default
resourceVersion: "15762"
uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7
nfs:
path: /k8fs03
server: kubemaster01.rms.com
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
volumeMode: Filesystem
status:
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
# kubectl get pvc -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-07-10T00:21:52Z"
finalizers:
- kubernetes.io/pvc-protection
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:pv.kubernetes.io/bind-completed: {}
f:pv.kubernetes.io/bound-by-controller: {}
f:spec:
f:volumeName: {}
f:status:
f:accessModes: {}
f:capacity:
.: {}
f:storage: {}
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-10T00:21:52Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:storageClassName: {}
f:volumeMode: {}
manager: kubectl
operation: Update
time: "2020-07-10T00:21:52Z"
name: pvc-modeldata
namespace: default
resourceVersion: "15766"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/pvc-data
uid: e4a1309e-339d-4ed5-8fe1-9ed32f779ea7
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5999Gi
storageClassName: manual
volumeMode: Filesystem
volumeName: pv-data
status:
accessModes:
- ReadWriteMany
capacity:
storage: 5999Gi
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
# kubectl get sc -o yaml
apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Thanks,
Chittu