2 pod has unbound immediate PersistentVolumeClaims - Kubernetes - mongodb

I am trying to attach PersistentVolumeClaims to my pods. The problem is that when the deployment succeeds, the pods stay in a Pending state. When I describe the pods, I get the error explaining why they are not spinning up:
Warning FailedScheduling 20s (x3 over 22s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pod has unbound immediate PersistentVolumeClaims.
This is the YAML for creating the PersistentVolumeClaim and referencing it in the deployments:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  namespace: mongo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  namespace: mongo
  labels:
    name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-password
          volumeMounts:
            - name: data
              mountPath: /data/db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  namespace: mongo
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-url
                  key: database_url
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-password
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc
---
.
.
.
I have removed the other YAML configurations from the above and kept only the necessary ones for easier reading.
When I check the status of the PVC using kubectl get pvc -n mongo, I get the Pending status below:
my-pvc Pending 9m54s
Can someone tell me where I am going wrong?

As described in the answer to pod has unbound PersistentVolumeClaims, if you use a PersistentVolumeClaim you typically need a volume provisioner for Dynamic Volume Provisioning. The bigger cloud providers typically have one, and Minikube also has one that can be enabled.
Unless you have a volume provisioner in your cluster, you need to create a PersistentVolume resource, and possibly also a StorageClass, and declare how to use your storage system.
Configure a Pod to Use a PersistentVolume for Storage describes how to create a PersistentVolume with a hostPath, which can be good for learning or development but is typically not used by applications in production.
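If you just want the claim above to bind without a provisioner, here is a minimal sketch of a statically created hostPath PersistentVolume for a single-node or test cluster. It assumes no default StorageClass is configured (so my-pvc falls back to static binding); the PV name and the /mnt/data path are arbitrary examples:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv                          # hypothetical name
spec:
  capacity:
    storage: 2Gi                       # must cover the 2Gi requested by my-pvc
  accessModes:
    - ReadWriteOnce                    # must match the claim's access mode
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data                    # example path on the node; dev/test only
After applying it, kubectl get pv,pvc -n mongo should show the claim move from Pending to Bound if the capacity and access modes match.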

Related

K8s mounting persistentVolume failed, "timed out waiting for the condition" on docker-desktop

When trying to bind a pod to an NFS persistentVolume hosted on another pod, it fails to mount when using docker-desktop. It works perfectly fine elsewhere, even with the exact same YAML.
The error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m59s default-scheduler Successfully assigned test-project/test-digit-5576c79688-zfg8z to docker-desktop
Warning FailedMount 2m56s kubelet Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[lagg-connection kube-api-access-h68w7]: timed out waiting for the condition
Warning FailedMount 37s kubelet Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[kube-api-access-h68w7 lagg-connection]: timed out waiting for the condition
The minified project which you can apply to test yourself:
apiVersion: v1
kind: Namespace
metadata:
  name: test-project
  labels:
    name: test-project
---
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: test
  name: test-lagg
  namespace: test-project
spec:
  clusterIP: 10.96.13.37
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    app: nfs-server
    environment: test
    scope: backend
---
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    environment: test
  name: test-lagg-volume
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi
  nfs:
    path: /
    server: 10.96.13.37
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    environment: test
  name: test-lagg-claim
  namespace: test-project
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: static
    environment: test
    scope: backend
  name: test-digit
  namespace: test-project
spec:
  selector:
    matchLabels:
      app: static
      environment: test
      scope: backend
  template:
    metadata:
      labels:
        app: static
        environment: test
        scope: backend
    spec:
      containers:
        - image: busybox
          name: digit
          imagePullPolicy: IfNotPresent
          command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
          volumeMounts:
            - mountPath: /cache
              name: lagg-connection
      volumes:
        - name: lagg-connection
          persistentVolumeClaim:
            claimName: test-lagg-claim
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    environment: test
  name: test-lagg
  namespace: test-project
spec:
  selector:
    matchLabels:
      app: nfs-server
      environment: test
      scope: backend
  template:
    metadata:
      labels:
        app: nfs-server
        environment: test
        scope: backend
    spec:
      containers:
        - image: gcr.io/google_containers/volume-nfs:0.8
          name: lagg
          ports:
            - containerPort: 2049
              name: lagg
            - containerPort: 20048
              name: mountd
            - containerPort: 111
              name: rpcbind
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: lagg-claim
      volumes:
        - emptyDir: {}
          name: lagg-claim
As well as emptyDir, I have also tried hostPath. This setup has worked before, and I'm not sure what, if anything, I've changed since it stopped working.
Updating my Docker for Windows installation from 4.0.1 to 4.1.1 fixed this problem.

postgres on k8s with glusterfs as storage

I deployed a Postgres database on k8s with GlusterFS as the volume. But every time I restart my pod, all of the data is lost. Why is that?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: gitlab
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres
          env:
            - name: POSTGRES_USERNAME
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_db
      volumes:
        - name: postgres
          glusterfs:
            endpoints: glusterfs-cluster
            path: gv
Define PVC and PV objects; see below for reference.
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Then bind the PVC to the pod as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          ...
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-pv-claim
      volumes:
        - name: postgres-pv-claim
          persistentVolumeClaim:
            claimName: postgres-pv-claim
As per the Kubernetes documentation (https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs): unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. I suggest raising an issue at https://github.com/kubernetes/kubernetes/issues/new/choose
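As a side note, the in-tree glusterfs volume in the Deployment above refers to endpoints: glusterfs-cluster, so an Endpoints object with that name has to exist in the same namespace for the mount to work at all. A minimal sketch, with placeholder Gluster node IPs:
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster     # must match the name used in the volume spec
  namespace: gitlab
subsets:
  - addresses:
      - ip: 192.0.2.10        # placeholder Gluster server IP
      - ip: 192.0.2.11        # placeholder Gluster server IP
    ports:
      - port: 1               # required by the API but not used by GlusterFS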
If you want to install GitLab with a PostgreSQL backend, it will be easier to use the Helm charts below:
https://docs.gitlab.com/charts/
https://artifacthub.io/packages/helm/bitnami/postgresql
https://artifacthub.io/packages/helm/bitnami/postgresql-ha
You can do this:
1. For stateful services such as databases, use a StatefulSet controller for the deployment.
2. Use shared storage rather than local volumes, because pods may be scheduled onto other nodes when they are recreated.
A sketch combining both points follows below.
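This is a minimal sketch of a StatefulSet whose volumeClaimTemplates request storage from a shared StorageClass; the StorageClass name gluster-sc and the headless Service name postgres are assumptions you would replace with your own:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: gitlab
spec:
  serviceName: postgres                 # assumes a matching headless Service exists
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata   # keep data in a subdirectory of the mount
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gluster-sc    # hypothetical shared StorageClass
        resources:
          requests:
            storage: 10Gi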

kubernetes Deployment PodName setting

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
        - name: server
          image: test_ml_server:2.3
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: hostpath-vol-testserver
              mountPath: /app/test/api
          # env:
          #   - name: POD_NAME
          #     valueFrom:
          #       fieldRef:
          #         fieldPath: template.metadata.name
        - name: testdb
          image: test_db:1.4
          ports:
            - name: testdb
              containerPort: 1433
          volumeMounts:
            - name: hostpath-vol-testdb
              mountPath: /var/opt/mssql/data
          # env:
          #   - name: POD_NAME
          #     valueFrom:
          #       fieldRef:
          #         fieldPath: template.metadata.name
      volumes:
        - name: hostpath-vol-testserver
          hostPath:
            path: /usr/testhostpath/testserver
        - name: hostpath-vol-testdb
          hostPath:
            path: /usr/testhostpath/testdb
I want to set the name of the pod because the containers communicate internally based on the pod name, but when a pod is created it cannot be used because a random suffix is appended to the end of the name.
How can I set the pod name?
It's better to use a StatefulSet instead of a Deployment. A StatefulSet's pod names will be like <statefulsetName>-0, <statefulsetName>-1, and so on. You will also need a headless ClusterIP Service to which you can bind your pods; see the docs for more details. Ref
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  labels:
    app: test
spec:
  ports:
    - port: 8080
      name: web
  clusterIP: None
  selector:
    app: test
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-statefulset
  labels:
    app: test
spec:
  replicas: 1
  serviceName: test-svc
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
        - name: server
          image: test_ml_server:2.3
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: hostpath-vol-testserver
              mountPath: /app/test/api
        - name: testdb
          image: test_db:1.4
          ports:
            - name: testdb
              containerPort: 1433
          volumeMounts:
            - name: hostpath-vol-testdb
              mountPath: /var/opt/mssql/data
      volumes:
        - name: hostpath-vol-testserver
          hostPath:
            path: /usr/testhostpath/testserver
        - name: hostpath-vol-testdb
          hostPath:
            path: /usr/testhostpath/testdb
Here, the pod name will be test-statefulset-0.
If you are using kind: Deployment it won't be possible; ideally, in this scenario you can use kind: StatefulSet.
Instead of pod-to-pod communication, you can use a Kubernetes Service for communication; see the Service sketch below.
Still, a StatefulSet manages the pod names in sequence:
statefulsetname-0
statefulsetname-1
statefulsetname-2
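For the Service-based approach, a plain ClusterIP Service in front of the Deployment gives other pods a stable DNS name regardless of the generated pod names; the Service name test-server below is a hypothetical example:
apiVersion: v1
kind: Service
metadata:
  name: test-server          # hypothetical name; becomes the stable DNS name
  labels:
    app: test
spec:
  selector:
    app: test                # matches the Deployment's pod template labels
  ports:
    - name: http
      port: 8080
      targetPort: 8080
Other pods in the same namespace can then reach the application at test-server:8080 instead of addressing a specific pod.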
You can't.
It is a property of the pods of a Deployment that they do not have a stable identity associated with them.
You could have a look at a StatefulSet instead of a Deployment if you want the pods to have a stable identity.
From the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
So, if you have a StatefulSet object named myapp with two replicas, the pods will be named myapp-0 and myapp-1.

Connecting to GKE POD running Postgres with client Postico 2

I want to connect to a Postgres instance that is in a pod in GKE.
I think a way to achieve this is with kubectl port forwarding.
Locally I have Docker for Desktop, and when I apply the YAML files I am able to connect to the database. The YAMLs I am using in GKE are almost identical.
secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: staging
  name: postgres-secrets
type: Opaque
data:
  MYAPPAPI_DATABASE_NAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_USERNAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_PASSWORD: XXXENCODEDXXX
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: staging
  name: db-data-pv
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/var/lib/postgresql/data"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: staging
  name: db-data-pvc
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgres-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-db
  template:
    metadata:
      labels:
        app: postgres-db
    spec:
      containers:
        - name: postgres-db
          image: postgres:12.4
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-db
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_USERNAME
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_NAME
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_PASSWORD
      volumes:
        - name: postgres-db
          persistentVolumeClaim:
            claimName: db-data-pvc
svc.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgresdb-service
spec:
  type: ClusterIP
  selector:
    app: postgres-db
  ports:
    - port: 5432
and it seems that everything is working.
Then I execute kubectl port-forward postgres-db-podname 5433:5432 -n staging, and when I try to connect it throws:
FATAL: role "myappuserdb" does not exist
UPDATE 1
This is from the GKE YAML:
spec:
  containers:
    - env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_NAME
              name: postgres-secrets
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_USERNAME
              name: postgres-secrets
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_PASSWORD
              name: postgres-secrets
UPDATE 2
I will explain what happened and how I solved it.
The first time I applied the files with kubectl apply -f k8s/, the environment variable POSTGRES_USER in the deployment was referencing the wrong secret key, MYAPPAPI_DATABASE_NAME, when it should have referenced MYAPPAPI_DATABASE_USERNAME.
After that first time, every time I did kubectl delete -f k8s/ the resources were deleted. However, when I created the resources again, the data created in the previous step was not cleaned up.
I deleted the cluster, created a new cluster, and everything worked. I need to check whether there is a way to clean the data in a Kubernetes volume.
In your deployment's env spec you have assigned the wrong value for POSTGRES_USER: you have assigned POSTGRES_USER = MYAPPAPI_DATABASE_NAME, but it should be POSTGRES_USER = MYAPPAPI_DATABASE_USERNAME.
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_NAME # <<< this is the value that needs to change >>>
Please try this one:
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_USERNAME

kubernetes StorageClass does not retain existing data

My Kubernetes StorageClass volume doesn't retain existing data when the pod is deleted and deployed back with my PostgreSQL database. When I delete the pod, a new pod is created but the database is empty.
I have followed variations of the different versions of the tutorials (https://kubernetes.io/docs/concepts/storage/persistent-volumes/) but nothing seems to work.
I paste all the YAML files because the problem might be in the combination.
storage-google.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spingular-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east4-a
jhipsterpress-postgresql.yml
apiVersion: v1
kind: Secret
metadata:
  name: jhipsterpress-postgresql
  namespace: default
  labels:
    app: jhipsterpress-postgresql
type: Opaque
data:
  postgres-password: NjY0NXJxd24=
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jhipsterpress-postgresql
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: spingular-pvc
      containers:
        - name: postgres
          image: postgres:10.4
          env:
            - name: POSTGRES_USER
              value: jhipsterpress
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jhipsterpress-postgresql
                  key: postgres-password
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/
---
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  selector:
    app: jhipsterpress-postgresql
  ports:
    - name: postgresqlport
      port: 5432
  type: LoadBalancer
jhipsterpress-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jhipsterpress
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jhipsterpress
      version: "v1"
  template:
    metadata:
      labels:
        app: jhipsterpress
        version: "v1"
    spec:
      initContainers:
        - name: init-ds
          image: busybox:latest
          command:
            - '/bin/sh'
            - '-c'
            - |
              while true
              do
                rt=$(nc -z -w 1 jhipsterpress-postgresql 5432)
                if [ $? -eq 0 ]; then
                  echo "DB is UP"
                  break
                fi
                echo "DB is not yet reachable;sleep for 10s before retry"
                sleep 10
              done
      containers:
        - name: jhipsterpress-app
          image: galore/jhipsterpress
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://jhipsterpress-postgresql.default.svc.cluster.local:5432/jhipsterpress
            - name: SPRING_DATASOURCE_USERNAME
              value: jhipsterpress
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jhipsterpress-postgresql
                  key: postgres-password
            - name: JAVA_OPTS
              value: " -Xmx256m -Xms256m"
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 120
jhipsterpress-service.yml
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress
  namespace: default
  labels:
    app: jhipsterpress
spec:
  selector:
    app: jhipsterpress
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
When I included a Retain Policy I was getting this error:
#cloudshell:~ (academic-veld-230622)$ kubectl apply -f storage-google.yaml
error: error validating "storage-google.yaml": error validating data:
ValidationError(PersistentVolumeClaim.spec): unknown field "persistentVolumeReclaimPolicy" in io.k8s.api.core.v1.PersistentVolumeClaimSpec; if you choose to ignore these errors, turn validation off with --validate=false
Please, if you know of a complete example with a public image that works (with PostgreSQL; I can make it work with Mongo), I would really appreciate it.
Thanks to all.
Note that for this to work, your PVC needs to dynamically provision a PV to satisfy its requirements; there will then be a permanent binding between the PVC and the PV, and every time your workload uses the PVC it will use the same PV. This is specifically indicated by this excerpt:
If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC
If in your case the Google Persistent Disk is being provisioned by the PVC, and you can verify on GCP that the same PV is used every time, then it's probably an issue with the pod startup process removing all the data. (Is there any reason why you are using /var/lib/postgresql/ vs /var/lib/postgresql?)
Also, persistentVolumeReclaimPolicy: Retain applies to a PV, not a PVC. For dynamically provisioned PVs the default value is Delete. In your case it wouldn't apply, because your dynamically provisioned volume should stay bound to your PVC; in other words, you are not reclaiming the volume.
Having said all that, the recommended way to deploy a DB is using StatefulSets, similar to this mysql example using a volumeClaimTemplate.
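For reference, this is a minimal sketch of the volumeClaimTemplates part such a StatefulSet could use with the standard StorageClass from the question, so each replica gets its own dynamically provisioned disk (the template name data is illustrative):
  volumeClaimTemplates:
    - metadata:
        name: data             # PVCs are created as data-<statefulset-name>-<ordinal>
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 7Gi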