PostgreSQL storage on K3s

I am trying to set up a PostgreSQL StatefulSet on a single-node k3s cluster (Raspberry Pi 4, 8 GB) that will be shared by any services that end up needing PostgreSQL. Since this is just a single node for now, I am using the Rancher local-path storage provisioner pointing at an external HDD mount, though this will probably change as I add nodes.
My pod spins up (after figuring out a small headache with mounting the data directory, as per this issue), and I can access the PostgreSQL instance with kubectl port-forward -n common pod/postgresql-stateful-set-0 5432:5432 and make whatever changes I need.
At this point my problem manifests: I notice that no data is persisted in my data directory. I have found this issue, which is exactly the problem I've encountered, but I have followed all the suggested "fixes", from playing with the directory locations (which gave me chmod issues like those encountered with mounting the data directory) to changing to a PV I defined myself, and none of them helped.
Below is my YAML file, which I apply with kubectl apply -f postgres.yml and which is based on the bitnami Helm template:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgresql-storage
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgresql-pv
  namespace: common
spec:
  storageClassName: postgresql-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/storage/k3s/common/postgresql"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: common
  name: pvc-postgresql
spec:
  storageClassName: postgresql-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-password
  namespace: common
data:
  POSTGRES_PASSWORD: <PWD>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-configmap
  namespace: common
data:
  POSTGRESQL_PORT_NUMBER: "5432"
  PGDATA: /var/lib/postgresql/data/pgdata
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-stateful-set
  namespace: common
  labels:
    name: postgres-stateful-set
spec:
  replicas: 1
  serviceName: postgresql-stateful-set
  updateStrategy:
    rollingUpdate: {}
    type: RollingUpdate
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresql
          image: docker.io/postgres:14.2-alpine
          envFrom:
            - configMapRef:
                name: postgresql-configmap
            - secretRef:
                name: postgresql-password
          ports:
            - name: tcp-postgresql
              containerPort: 5432
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /bin/sh
                - -c
                - exec pg_isready -U "rootAdmin" -h 127.0.0.1 -p 5432 -d rootDefault
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /bin/sh
                - -c
                - -e
                - exec pg_isready -U "rootAdmin" -h 127.0.0.1 -p 5432 -d rootDefault
          resources:
            limits:
              memory: "300Mi"
              cpu: "300m"
            requests:
              cpu: 250m
              memory: 256Mi
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
            - name: postgresql-data
              mountPath: /var/lib/postgresql
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
  volumeClaimTemplates:
    - metadata:
        name: postgresql-data
      spec:
        storageClassName: postgresql-storage
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "20Gi"
---
kind: Service
apiVersion: v1
metadata:
  name: postgresql-svc
  namespace: common
spec:
  selector:
    app: postgresql
  type: ClusterIP
  clusterIP: None
  ports:
    - name: tcp-postgresql
      port: 5432
      protocol: TCP
A couple of things I should probably mention:
I added my own StorageClass for the purpose of retaining the volume rather than deleting it when the claim is removed.
I'm using the alpine version just for the sake of a smaller image size.
The base OS that k3s is hosted on is Ubuntu 20.04 LTS.
K3s was set up using the ansible playbook here.
I have exec-ed into the container to look at the directory defined by PGDATA and confirmed that data was written there.
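For reference, a minimal sketch of how to check which claim the pod actually bound and where the data landed; the claim name below assumes the usual volumeClaimTemplate-plus-pod naming and is not taken from the original output:
kubectl get pvc -n common
kubectl get pv
kubectl describe pvc -n common postgresql-data-postgresql-stateful-set-0   # assumed claim name
kubectl exec -n common postgresql-stateful-set-0 -- ls -la /var/lib/postgresql/data/pgdata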

Related

HostPath assign persistentVolume to the specific work node in cluster

Using kubeadm to create a cluster, I have a master and a worker node.
Now I want to share a persistentVolume on the worker node, which will be bound to the Postgres pod.
I expected the code to create the persistentVolume under the path /postgres on the worker node, but it seems hostPath does not work that way in a cluster. How should I assign this property to a specific node?
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-postgres
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/postgres"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
        - name: vol-postgres
          persistentVolumeClaim:
            claimName: pvc-postgres
      containers:
        - name: postgres
          image: postgres:12
          imagePullPolicy: Always
          env:
            - name: DB_USER
              value: postgres
            - name: DB_PASS
              value: postgres
            - name: DB_NAME
              value: postgres
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - mountPath: "/postgres"
              name: vol-postgres
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 5
            timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    app: postgres
As per the docs:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
In short, the hostPath type refers to a resource on the node (machine or VM) where the pod is scheduled. It means the folder already needs to exist on that node.
To assign a pod to a specific node you have to use nodeSelector (or node affinity) in your Deployment and PV.
Depending on the scenario, using hostPath is not the best idea; however, the example YAMLs below should show you the concept. They are based on your YAMLs but use the nginx image.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-postgres
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/postgres"   ## this folder needs to exist on your node. Keep in mind who has permissions to the folder. Used /tmp as it has rwx for everyone.
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ubuntu18-kubeadm-worker1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - mountPath: /home   ## path to folder inside container
              name: vol-postgres
      affinity:   ## node affinity to schedule all pods on the specific node named ubuntu18-kubeadm-worker1
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - ubuntu18-kubeadm-worker1
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
        - name: vol-postgres
          persistentVolumeClaim:
            claimName: pvc-postgres
persistentvolume/pv-postgres created
persistentvolumeclaim/pvc-postgres created
deployment.apps/postgres created
Unfortunately PV is bounded to PVC in 1:1 relationship, so for each time, you would need to create PV and PVC.
However if you are using hostPath it's enough to specify nodeAffinity, volumeMounts and volumes in Deployment YAML without PV and PVC.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: nginx:latest
          name: nginx
          volumeMounts:
            - mountPath: /home
              name: vol-postgres
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - ubuntu18-kubeadm-worker1
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
        - name: vol-postgres
          hostPath:
            path: /tmp/postgres
deployment.apps/postgres created
user@ubuntu18-kubeadm-master:~$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
postgres-77bc9c4566-jgxqq   1/1     Running   0          9s
user@ubuntu18-kubeadm-master:~$ kk exec -ti postgres-77bc9c4566-jgxqq /bin/bash
root@ubuntu18-kubeadm-worker1:/# cd home
root@ubuntu18-kubeadm-worker1:/home# ls
test.txt  txt.txt
There are ways to achieve this. You can mount your volume from a NAS, or create a storage cluster using disks and create a persistent volume and persistent volume claim for that. If your use case is persistence on local storage, you can create a local-storage StorageClass backed by one of your cluster nodes, and that volume space can then be used by any pod in your cluster. To create a local-storage StorageClass, refer to this: https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
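For illustration, a minimal sketch of that local-storage approach, following the linked Kubernetes docs; the PV name, path, and node name are placeholders:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local PVs are not dynamically provisioned
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1           # must already exist on the node
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1     # placeholder node name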

How to define PVC for specific path in a single node in kubernetes

I am running a local k8s cluster and defining the PV as hostPath for the mysql pods.
Sharing all the configuration details below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The problem I have is that when the mysql pod is deleted and recreated, the cluster can schedule it onto any node, while the mysql hostPath is always mounted on one specific node. Is it a good idea to pin mysql to that node, or are there other options? Please share any ideas.
You have the choices below:
Use a node selector or node affinity to ensure that the pod gets scheduled on the node where the mount is created (see the sketch after this list), OR
Use local persistent volumes; they are supported on Kubernetes 1.14 and above.
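A minimal sketch of the first option, with a placeholder node name; it adds a nodeSelector to the existing mysql Deployment's pod template:
# Added under spec.template.spec of the mysql Deployment ("my-storage-node" is a placeholder):
nodeSelector:
  kubernetes.io/hostname: my-storage-node
# Or, equivalently, as a one-off patch:
kubectl patch deployment mysql -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"my-storage-node"}}}}}'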
Why are you using a PVC and a PV? Actually, for hostPath you don't even need to create the PV object; the pod can use the host directory directly.
You should use a StatefulSet if you want a re-created pod to get the storage the previous one was using (its state).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        # storageClassName: "standard"
        resources:
          requests:
            storage: 2Gi
This StatefulSet fails to start, but that is a MySQL-specific issue; as a reference for the structure, it should serve.

Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>

I'm trying to install a Redis cluster (StatefulSet) outside of GKE, and when the PVC is provisioned I get:
Events:
  Type     Reason              Age  From                         Message
  ----     ------              ---  ----                         -------
  Warning  ProvisioningFailed  10s  persistentvolume-controller  Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>
I have already added "--cloud-provider=gce" to /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-apiserver.yaml and restarted, but it's still the same.
Can anyone help me, please? What's the trick to making k8s work on GCP?
My manifest, taken from here:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/data/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: "us-west2-a"
  reclaimPolicy: Retain
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  ports:
    - port: 6379
      targetPort: 6379
      name: client
    - port: 16379
      targetPort: 16379
      name: gossip
  clusterIP: None
  selector:
    app: redis-cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 5
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:5.0-rc
          ports:
            - containerPort: 6379
              name: client
            - containerPort: 16379
              name: gossip
          command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"]
          args:
            - --cluster-announce-ip
            - "$(POD_IP)"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "redis-cli -h $(hostname) ping"
            initialDelaySeconds: 15
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "redis-cli -h $(hostname) ping"
            initialDelaySeconds: 20
            periodSeconds: 3
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: conf
              mountPath: /conf
              readOnly: false
            - name: data
              mountPath: /data
              readOnly: false
      volumes:
        - name: conf
          configMap:
            name: redis-cluster
            defaultMode: 0755
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          name: redis-cluster
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: slow
        resources:
          requests:
            storage: 5Gi
Please verify your StorageClass "slow"; it seems there is an indentation problem (starting with reclaimPolicy):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: "us-west2-a"
reclaimPolicy: Retain
Update:
Please add --cloud-provider=gce to kube-apiserver.yaml, kube-controller-manager.yaml, and KUBELET_KUBECONFIG_ARGS, as sketched below. You can also turn on the admission plugin with enable-admission-plugins=DefaultStorageClass.
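A sketch of that first step, assuming a kubeadm-style static pod manifest; only the relevant lines of the container command list are shown:
# /etc/kubernetes/manifests/kube-controller-manager.yaml (abbreviated sketch)
spec:
  containers:
    - command:
        - kube-controller-manager
        - --cloud-provider=gce       # flag added per the update above
        # existing flags stay unchanged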
Verify the permissions under "Cloud API access scopes" in your "VM instance details".
Verify that your StorageClass, PV, and PVC are working properly.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: slow
  resources:
    requests:
      storage: 1Gi
Google offers two main types of persistent disk, which are provisioned automatically on kubernetes:
Standard storage (labeled pd-standard)
SSD storage (labeled pd-ssd)
By default, GKE will provision standard storage persistent disks. In fact, that’s the only storage class even available at first.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
You can tell a persistent volume claim to use the new SSD storage class with the following key/value pair: storageClassName: ssd.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ssd-storageclass
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 1Gi

kubernetes StorageClass does not retain existing data

My Kubernetes StorageClass volume doesn't retain existing data when the pod is deleted and redeployed with my postgresql database. When I delete the pod, the new pod is created but the database is empty.
I have followed variations of the different versions of the tutorials (https://kubernetes.io/docs/concepts/storage/persistent-volumes/) but nothing seems to work.
I am pasting all the YAML files because the problem might be in the combination.
storage-google.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spingular-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east4-a
jhipsterpress-postgresql.yml
apiVersion: v1
kind: Secret
metadata:
  name: jhipsterpress-postgresql
  namespace: default
  labels:
    app: jhipsterpress-postgresql
type: Opaque
data:
  postgres-password: NjY0NXJxd24=
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jhipsterpress-postgresql
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: spingular-pvc
      containers:
        - name: postgres
          image: postgres:10.4
          env:
            - name: POSTGRES_USER
              value: jhipsterpress
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jhipsterpress-postgresql
                  key: postgres-password
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/
---
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress-postgresql
  namespace: default
spec:
  selector:
    app: jhipsterpress-postgresql
  ports:
    - name: postgresqlport
      port: 5432
  type: LoadBalancer
jhipsterpress-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jhipsterpress
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jhipsterpress
      version: "v1"
  template:
    metadata:
      labels:
        app: jhipsterpress
        version: "v1"
    spec:
      initContainers:
        - name: init-ds
          image: busybox:latest
          command:
            - '/bin/sh'
            - '-c'
            - |
              while true
              do
                rt=$(nc -z -w 1 jhipsterpress-postgresql 5432)
                if [ $? -eq 0 ]; then
                  echo "DB is UP"
                  break
                fi
                echo "DB is not yet reachable;sleep for 10s before retry"
                sleep 10
              done
      containers:
        - name: jhipsterpress-app
          image: galore/jhipsterpress
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://jhipsterpress-postgresql.default.svc.cluster.local:5432/jhipsterpress
            - name: SPRING_DATASOURCE_USERNAME
              value: jhipsterpress
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jhipsterpress-postgresql
                  key: postgres-password
            - name: JAVA_OPTS
              value: " -Xmx256m -Xms256m"
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 120
jhipsterpress-service.yml
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress
  namespace: default
  labels:
    app: jhipsterpress
spec:
  selector:
    app: jhipsterpress
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
When I included a Retain Policy I was getting this error:
#cloudshell:~ (academic-veld-230622)$ kubectl apply -f storage-google.yaml
error: error validating "storage-google.yaml": error validating data:
ValidationError(PersistentVolumeClaim.spec): unknown field "persistentVolumeReclaimPolicy" in io.k8s.api.core.v1.PersistentVolumeClaimSpec; if you choose to ignore these errors, turn validation off with --validate=false
Please, if you know of a complete example with a public image that works (with postgresql; I can make it work with Mongo), I would really appreciate it.
Thanks to all.
Note that for this to work you need your PVC to dynamically provision a PV to satisfy its requirements; then there will be a permanent binding between the PVC and the PV, and every time your workload uses the PVC it will use the same PV. This is specifically indicated by this excerpt:
If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC
If in your case the Google Persistent Disk is being provisioned by the PVC, and you can verify that on GCP it's the same PV used every time, then it's probably an issue with the pod startup process where it's removing all the data. (Is there any reason why you are using /var/lib/postgresql/ vs /var/lib/postgresql?)
Also, persistentVolumeReclaimPolicy: Retain applies to a PV, not a PVC. For dynamically provisioned PVs the value is Delete. In your case, it wouldn't apply because your dynamically provisioned volume should be bound to your PVC. In other words, you are not reclaiming the volume.
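If the goal is simply to keep the dynamically provisioned disk, the reclaim policy of an already-bound PV can be patched after the fact; the PV name below is a placeholder (find the real one with kubectl get pv):
kubectl patch pv pvc-1234abcd-example -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'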
Having said all that, the recommended way to deploy a DB is using StatefulSets, similar to this mysql example using a volumeClaimTemplate.

Failed to attach volume ... already being used by

I am running Kubernetes in a GKE cluster using version 1.6.6 and another cluster with 1.6.4. Both are experiencing issues with failing over GCE compute disks.
I have been simulating failures using kill 1 inside the container or by killing the GCE node directly. Sometimes I get lucky and the pod gets created on the same node again, but most of the time this isn't the case.
Looking at the event log, it shows the error trying to mount three times and then it fails to do anything more. Without human intervention it never corrects itself. I am forced to kill the pod multiple times until it works. During maintenance this is a giant pain.
How do I get Kubernetes to fail over with volumes properly? Is there a way to tell the deployment to try a new node on failure? Is there a way to remove the 3-retry limit?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dev-postgres
  namespace: jolene
spec:
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        app: dev-postgres
        namespace: jolene
    spec:
      containers:
        - image: postgres:9.6-alpine
          imagePullPolicy: IfNotPresent
          name: dev-postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
          env:
            [ ** Removed, irrelevant environment variables ** ]
          ports:
            - containerPort: 5432
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready
            initialDelaySeconds: 30
            timeoutSeconds: 5
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready --host $POD_IP
            initialDelaySeconds: 5
            timeoutSeconds: 3
            periodSeconds: 5
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: dev-jolene-postgres
I have tried this with and without PersistentVolume / PersistentVolumeClaim.
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: dev-jolene-postgres
spec:
capacity:
storage: "1Gi"
accessModes:
- "ReadWriteOnce"
claimRef:
namespace: jolene
name: dev-jolene-postgres
gcePersistentDisk:
fsType: "ext4"
pdName: "dev-jolene-postgres"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dev-jolene-postgres
namespace: jolene
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
By default, every node is schedulable, so there is no need to mention it explicitly in the deployment. The feature for configuring retry limits is still in progress and can be tracked here: https://github.com/kubernetes/kubernetes/issues/16652