CreateContainerError while creating postgresql in k8s - postgresql

I'm trying to run a PostgreSQL DB on k8s. There are no errors when creating everything from the file, but the pod in the deployment can't create its container.
Here is my YAML:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: adminpassword
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.18
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: postgres
After I run:
kubectl create -f filename
I get:
configmap/postgres-config created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
deployment.apps/postgres created
service/postgres created
But when I type:
kubectl get pods
There is an error:
postgres-78496cc865-85kt7 0/1 CreateContainerError 0 13m
Here are the PV and PVC; there was no more space in the question to add them as code :)

If you describe the pod, you'll see the warning message there:
Warning FailedScheduling 45s (x2 over 45s) default-scheduler persistentvolumeclaim "postgres-pv-claim" not found
On a high level, a database instance can run within a Kubernetes container. A database instance stores data in files, and those files live on a PersistentVolume that is requested through a PersistentVolumeClaim. A PersistentVolumeClaim must be created and made available to the PostgreSQL instance. To create the database instance as a container, you use a Deployment configuration. To provide an access interface that is independent of the particular container, you create a Service; the Service remains unchanged even if a container (or pod) is moved to a different node.
In your case, create a PVC resource and bind it to a PV so the pod can use it. Since the pod currently can't find the claim, it goes into a pending state. This can be achieved in multiple ways; for example, you can use hostPath as local storage:
$ k get pods
NAME READY STATUS RESTARTS AGE
postgres-795cfcd67b-khfgn 1/1 Running 0 18s
Sample PV and PVC configs below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nautilus
spec:
  storageClassName: manual
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/mohan"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
You can check the Persistent Volumes documentation for more details. Also read about storage classes and StatefulSets for deploying database applications in a Kubernetes cluster.
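As a sketch of the StatefulSet approach mentioned above (the image and ConfigMap names reuse the question's manifests; the claim size is an assumption), volumeClaimTemplates gives each replica its own automatically created PVC:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.18
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC per replica, created automatically
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi         # illustrative size
```

With this, there is no need to hand-create a PV/PVC pair per pod; the claim is templated and bound per replica.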

Thanks to everyone who tried to help me! The problem was in PersistentVolume.spec.hostPath.path: there was an invalid character in the path. I had tried to use "./path".

Related

Kubernetes Persistent Volume Claim doesn't save the data

I made a persistent volume claim on Kubernetes to save MongoDB data. After restarting the deployment, I found that the data no longer existed, although my PVC is in the Bound state.
Here is my YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
and I made a ClusterIP service for the deployment.
First off, if the PVC status is still Bound and the desired pod happens to start on another node, it will fail, as the PV can't be mounted into the pod. This happens because of the reclaimPolicy: Retain of the StorageClass (it can also be set on the PV directly with persistentVolumeReclaimPolicy: Retain). To fix this, you have to manually overwrite/delete the claimRef of the PV. Use kubectl patch pv PV_NAME -p '{"spec":{"claimRef": null}}' to do this; afterwards the PV's status should be Available.
To see whether your application writes any data to the desired path, run your application and exec into it (kubectl -n NAMESPACE exec -it POD_NAME -- /bin/sh) and check /data/db. You could also create a file with some random text, restart your application, and check again.
I'm fairly certain that if your PV isn't being recreated every time your application starts (which shouldn't be the case, because of Retain), then it's highly likely that your application isn't writing to the path specified. But you could also share your PersistentVolume config with us, as there might be some misconfiguration there as well.
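For illustration, a PV with the Retain policy discussed above might look like the sketch below; the PV name, capacity, and hostPath are assumptions, since the question does not include the PV config:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-mongo-pv                     # assumed name, not shown in the question
spec:
  persistentVolumeReclaimPolicy: Retain   # keep the data after the claim is deleted
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/mongo                 # assumed path
```

After clearing the claimRef with the kubectl patch command mentioned above, the PV's status should change from Released back to Available so a new claim can bind to it.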

Kubernetes mount volume keeps timing out even though volume can be mounted with sudo mount

I have a read-only persistent volume that I'm trying to mount onto the StatefulSet, but after making some changes to the program and re-creating the pods, the pod can no longer mount the volume.
PV YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: <ip>
    path: "/var/foo"
  claimRef:
    name: foo-pvc
    namespace: foo
PVC YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  volumeName: foo-pv
  resources:
    requests:
      storage: 2Gi
StatefulSet YAML:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
        - name: fooContainer
          image: <image>
          imagePullPolicy: Always
          volumeMounts:
            - name: writer-data
              mountPath: <path>
            - name: nfs-objectd
              mountPath: <path>
      volumes:
        - name: nfs-foo
          persistentVolumeClaim:
            claimName: foo-pvc
  volumeClaimTemplates:
    - metadata:
        name: writer-data
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "foo-sc"
        resources:
          requests:
            storage: 2Gi
kubectl describe pod reports "Unable to attach or mount volumes: unmounted volumes=[nfs-foo]: timed out waiting for the condition". There is a firewall between the machine running Kubernetes and the NFS server; however, the port has been unblocked, and the folder has been exported for mounting on the NFS side. Running sudo mount -t nfs :/var/foo /var/foo successfully mounts the NFS share, so I don't understand why Kubernetes isn't able to mount it anymore. It's been stuck failing to mount for several days now. Is there any other way to debug this?
Thanks!
Based on the error "unable to attach or mount volumes ... timed out waiting for condition", there were some similar issues reported to the product team and it is a known issue. This error is mostly observed on preemptible/spot nodes when the node is preempted. In similar occurrences of this issue for other users, upgrading the control plane version temporarily resolved it on preemptible/spot nodes.
Also, if you are not using any preemptible/spot nodes in your cluster, this issue might have happened when an old node was replaced by a new node. If you are still facing this issue, try upgrading the control plane to the same version, i.e. execute the following command:
$ gcloud container clusters upgrade CLUSTER_NAME --master --zone ZONE --cluster-version VERSION
Another workaround for this issue is to remove the stale VolumeAttachment with the following command:
$ kubectl delete volumeattachment [volumeattachment_name]
After running the command and thus removing the VolumeAttachment, the pod should eventually retry and start. You can read more about this issue and its cause here.

How to have data persist in GKE kubernetes StatefulSet with postgres?

So I'm just trying to get a web app running on GKE experimentally to familiarize myself with Kubernetes and GKE.
I have a StatefulSet (Postgres) with a persistent volume / persistent volume claim which is mounted into the Postgres pod as expected. The problem I'm having is making the Postgres data endure. If I mount the PV at /var/lib/postgresql, the data gets overridden with each pod update. If I mount at /var/lib/postgresql/data, I get the warning:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
Using Docker alone, having the volume mount point at /var/lib/postgresql/data works as expected and data endures, but I don't know what to do now in GKE. How does one set this up properly?
Setup file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sm-pd-volume-claim
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "postgis-db"
  namespace: "default"
  labels:
    app: "postgis-db"
spec:
  serviceName: "postgis-db"
  replicas: 1
  selector:
    matchLabels:
      app: "postgis-db"
  template:
    metadata:
      labels:
        app: "postgis-db"
    spec:
      terminationGracePeriodSeconds: 25
      containers:
        - name: "postgis"
          image: "mdillon/postgis"
          ports:
            - containerPort: 5432
              name: postgis-port
          volumeMounts:
            - name: sm-pd-volume
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: sm-pd-volume
          persistentVolumeClaim:
            claimName: sm-pd-volume-claim
You are getting this error because the Postgres container is using the volume's mount point directly as its data directory, which is not recommended.
You have to create a subdirectory to resolve this issue in the StatefulSet manifest:
volumeMounts:
  - name: sm-pd-volume
    mountPath: /var/lib/postgresql/data
    subPath: data

No way to apply changes to a Deployment bound to a ReadWriteOnce PV in gCloud Kubernetes engine?

As GCE Disks do not support ReadWriteMany, I have no way to apply a change to the Deployment without getting stuck at ContainerCreating with FailedAttachVolume.
So here's my setup:
1. PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
2. Service
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
3. Deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql/mysql-server
          name: mysql
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /mysql-data
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
These are all fine for creating the PVC, svc and deployment. The pod and container started successfully and worked as expected.
However, when I tried to apply a change with:
kubectl apply -f mysql_deployment.yaml
Firstly, the pods got stuck: the existing one did not terminate and the new one stayed in creating forever.
NAME READY STATUS RESTARTS AGE
mysql-nowhash 1/1 Running 0 2d
mysql-newhash 0/2 ContainerCreating 0 15m
Secondly, from the gCloud console, for the pod that was trying to be created, I got two crucial error logs:
1 of 2 FailedAttachVolume
Multi-Attach error for volume "pvc-<hash>" Volume is already exclusively attached to one node and can't be attached to another FailedAttachVolume
2 of 2 FailedMount
Unable to mount volumes for pod "<pod name and hash>": timeout expired waiting for volumes to attach/mount for pod "default"/"<pod name and hash>". list of unattached/unmounted volumes=[mysql-persistent-storage]
What I could immediately think of is the ReadWriteOnce capability of the gCloud PV: because the Kubernetes engine creates a new pod before terminating the existing one, under ReadWriteOnce the new pod can never attach the existing PVC...
Any idea or should I use some other way to perform deployment updates?
appreciate for any contribution and suggestion =)
Remark: my current work-around is to create an interim NFS pod to make it behave like a ReadWriteMany PVC. This works, but sounds silly... requiring additional storage I/O overhead just to facilitate deployment updates? =P
The reason is that with the default RollingUpdate strategy, k8s waits for the new container to become ready before shutting down the old one. You can change this behaviour by setting the Deployment's strategy type to Recreate.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
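A minimal sketch of that change, reusing the Deployment from the question; only the strategy field is new:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  strategy:
    type: Recreate   # the old pod is terminated before the new one is created,
                     # so the ReadWriteOnce disk can detach and re-attach
  # ... selector and template unchanged from the question
```

The trade-off is a short downtime during each rollout, which is usually acceptable for a single-replica database.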

Kubernetes - Pod which encapsulates DB is crashing

I am experiencing issues when I try to deploy my Django application to a Kubernetes cluster, more specifically when I try to deploy PostgreSQL.
Here is what my .yml deployment file looks like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
    tier: backend
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: /tmp/data/persistent-volume-1 # within node n
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
      tier: backend
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6.6
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: POSTGRES_DB
              value: agent_technologies_db
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-volume-mount
              mountPath: /var/lib/postgresql/data/db-files
      volumes:
        - name: postgres-volume-mount
          persistentVolumeClaim:
            claimName: postgres-pv-claim
        - name: postgres-credentials
          secret:
            secretName: postgres-credentials
Here is what I get when I run the kubectl get pods command:
NAME READY STATUS RESTARTS AGE
agent-technologies-deployment-7c7c6676ff-8p49r 1/1 Running 0 2m
agent-technologies-deployment-7c7c6676ff-dht5h 1/1 Running 0 2m
agent-technologies-deployment-7c7c6676ff-gn8lp 1/1 Running 0 2m
agent-technologies-deployment-7c7c6676ff-n9qql 1/1 Running 0 2m
postgres-8676b745bf-8f7jv 0/1 CrashLoopBackOff 4 3m
And here is what I get when I try to inspect what is going on with the PostgreSQL deployment by using kubectl logs $pod_name:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/data" or run initdb
with an argument other than "/var/lib/postgresql/data".
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
Note: I am using Google Cloud as a provider.
You can't have your DB in /var/lib/postgres/data/whatever. Change that path to /var/lib/postgres/whatever and it will work.
17.2.1. Use of Secondary File Systems
Many installations create their database clusters on file systems (volumes) other than the machine's "root" volume. If you choose to do this, it is not advisable to try to use the secondary volume's topmost directory (mount point) as the data directory. Best practice is to create a directory within the mount-point directory that is owned by the PostgreSQL user, and then create the data directory within that. This avoids permissions problems, particularly for operations such as pg_upgrade, and it also ensures clean failures if the secondary volume is taken offline.
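One common way to follow that advice with the official postgres image is the PGDATA environment variable, which points initdb at a subdirectory of the mount rather than the mount point itself. A sketch based on the question's container spec; the pgdata subdirectory name is an assumption:

```yaml
containers:
  - name: postgres-container
    image: postgres:9.6.6
    env:
      - name: PGDATA   # make initdb use a subdirectory, not the mount point
        value: /var/lib/postgresql/data/pgdata
    volumeMounts:
      - name: postgres-volume-mount
        mountPath: /var/lib/postgresql/data
```

This keeps lost+found out of the data directory and avoids the "exists but is not empty" failure.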
And, by the way, I had to create a secret, as it is not in the post:
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
data:
  user: cG9zdGdyZXM= # postgres
  password: cGFzc3dvcmQ= # password
Note that the username needs to be "postgres". I don't know if you are covering this...
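The base64 values above can be reproduced in a shell; the -n flag matters, since a trailing newline would otherwise be encoded into the secret:

```shell
# encode secret values for the Kubernetes manifest
echo -n 'postgres' | base64   # cG9zdGdyZXM=
echo -n 'password' | base64   # cGFzc3dvcmQ=
```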
Adding to what #suren answered.
I had this issue while running postgresql-setup --initdb on RHEL 8.4. I was getting this error:
Initializing database in '/var/lib/pgsql/data'
ERROR: Initializing database failed, possibly see /var/lib/pgsql/initdb_postgresql.log
So, following suren's suggestion, I deleted the data folder and ran the command again. Worked like a charm!