I'm trying to change the settings of my Postgres database inside my local Minikube cluster. I mistakenly deployed the database without specifying the Postgres user, password, and database.
The problem: when I add the new environment variables and run kubectl apply -f postgres-deployment.yml, Postgres does not create the user, password, or database specified by those variables.
This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: PGUSER
              value: admin
            - name: PGPASSWORD
              value: password
            - name: PGDATABSE
              value: testdb
How can I change the settings of postgres when I apply the deployment file?
Can you share the pod's logs?
kubectl logs <pod_name>
The Postgres image's init script uses these specific variable names:
POSTGRES_USER
POSTGRES_PASSWORD
POSTGRES_DB
Note that the init script only runs when the data directory is empty, so if your PersistentVolumeClaim still holds data from the earlier deployment, the new values won't take effect until that volume is cleared or recreated.
Try this one:
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          env:
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_DB
              value: testdb
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: pg-data
      volumes:
        - name: pg-data
          emptyDir: {}
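To confirm the init script picked up the values, apply the manifest and list the roles in the new database (the file name and pod name below are placeholders):

kubectl apply -f postgres-rc.yml
kubectl exec -it <pod_name> -- psql -U admin -d testdb -c '\du'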
Related
After a pod restart, all user session data is missing,
but all other data is still present after the restart (e.g. realms, users, realm settings).
Keycloak is running with Postgres as persistent storage in a single pod.
Below is the deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
        - name: postgres
          image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
          envFrom:
            - configMapRef:
                name: postgres-config
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "XXXXXXX"
            - name: KEYCLOAK_PASSWORD
              value: "XXXXXXX"
            - name: REALM
              value: "XXXXXXX"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_VENDOR
              value: "POSTGRES"
            - name: DB_ADDR
              value: "localhost"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "xxxxxxxxx"
            - name: DB_PASSWORD
              value: "xxxxxxxxx"
            - name: DB_SCHEMA
              value: "keycloak"
            - name: KEYCLOAK_IMPORT
              value: "/opt/jboss/keycloak/startup/elements/realm.json"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Can you please let me know what configuration is required to persist user sessions?
Please take a look at the existing chart.
Its High Availability and Clustering section shows that you need to set CACHE_OWNERS_COUNT to 2 or higher and CACHE_OWNERS_AUTH_SESSIONS_COUNT to 2 or higher.
It's also recommended to use a separate Infinispan cluster if you have a large number of sessions (more than 1 million, in my experience).
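For instance, a minimal sketch of those two variables on the keycloak container (the value "2" is just the documented minimum; tune it to your replica count):

env:
  - name: CACHE_OWNERS_COUNT
    value: "2"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "2"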
In a multi-container pod:
step 1: deploy the first container, the Postgres database, and create a schema
step 2: wait until the Postgres pod comes up
step 3: then start deploying the second container, Keycloak
I have written the deployment file below to run this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
        - name: postgres
          image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
          envFrom:
            - configMapRef:
                name: postgres-config
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: REALM
              value: "ntc"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: "localhost"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "xxxxxxxxx"
            - name: DB_PASSWORD
              value: "xxxxxxxxx"
            - name: DB_SCHEMA
              value: "keycloak"
            - name: KEYCLOAK_IMPORT
              value: "/opt/jboss/keycloak/startup/elements/realm.json"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
but Keycloak starts with the embedded H2 database instead of Postgres. And if I add an init container to the deployment to nslookup Postgres, like below:
initContainers:
  - name: init-postgres
    image: busybox
    command: ['sh', '-c', 'until nslookup postgres; do echo waiting for postgres; sleep 2; done;']
the pod gets stuck in "PodInitializing".
You forgot to add
- name: DB_VENDOR
  value: POSTGRES
to the deployment YAML file; without it, Keycloak falls back to the embedded H2 database by default. (The init container can't help here anyway: init containers must finish before any app container starts, so a pod cannot wait on a database that runs in one of its own containers.)
Reference YAML: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
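For context, the fix slots into the keycloak container's env list; a sketch using the values from the question:

env:
  - name: DB_VENDOR
    value: "POSTGRES"
  - name: DB_ADDR
    value: "localhost"
  - name: DB_PORT
    value: "5432"
  # ...remaining DB_* entries unchanged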
Hi, I have set up a small NFS server at home using my Raspberry Pi,
and I want to use it as the default storage for all of my Kubernetes containers.
However, I keep getting this error: chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted
Here is my config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg-ss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgredb
      volumes:
        - name: pv-data
          nfs:
            path: /mnt/infra-data/pg
            server: 192.168.1.150
            readOnly: false
I'm wondering what the cause of this is and how I can solve it.
Thanks.
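One common cause worth checking (an assumption, not confirmed from the question): the NFS export squashes root, so the chmod that the Postgres entrypoint runs as root is remapped to an unprivileged user and denied. An /etc/exports entry on the Raspberry Pi along these lines disables the squashing (the client subnet is a placeholder):

/mnt/infra-data/pg 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

Then re-export with exportfs -ra.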
I'm trying to set up PostgreSQL on Minikube with the data path being a host folder mounted into Minikube (I'd like to keep my data on the host).
With the Kubernetes object below created, I get a permission error, the same one as in "How to solve permission trouble when running Postgresql from minikube?", although that question doesn't answer the issue; it advises mounting a directory from Minikube's VM instead.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: storage
          env:
            - name: POSTGRES_PASSWORD
              value: user
            - name: POSTGRES_USER
              value: pass
            - name: POSTGRES_DB
              value: k8s
      volumes:
        - name: storage
          hostPath:
            path: /data/postgres
Is there any other way to do this, other than building my own image on top of Postgres and playing with the permissions somehow? I'm on macOS with Minikube 0.30.0, and I see this with both the VirtualBox and hyperkit drivers for Minikube.
Look at these lines from the hostPath documentation:
"the files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a hostPath volume"
So either you have to run as root, or you have to change the file permissions of the /data/postgres directory.
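If you'd rather fix the permissions, something like this inside the Minikube VM should work (999 is the postgres UID in the official Debian-based image, an assumption to verify for your tag):

minikube ssh
sudo chown -R 999:999 /data/postgres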
However, you can also run your Postgres container as root without rebuilding the Docker image.
You just have to add the following to your container:
securityContext:
  runAsUser: 0
Your YAML should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: storage
          env:
            - name: POSTGRES_PASSWORD
              value: user
            - name: POSTGRES_USER
              value: pass
            - name: POSTGRES_DB
              value: k8s
          securityContext:
            runAsUser: 0
      volumes:
        - name: storage
          hostPath:
            path: /data/postgres
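You can verify the container is now running as root (the pod name is a placeholder):

kubectl exec <pod_name> -- id -u
# prints 0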
For some reason, the Postgres instance isn't being locked down with a password when using the following Kubernetes manifest.
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
  labels:
    name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - resources: {}
          image: postgres:9.4
          name: postgres
          env:
            - name: DB_PASS
              value: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-persistent-storage
      volumes:
        - name: postgres-persistent-storage
          gcePersistentDisk:
            pdName: postgres-disk
            fsType: ext4
Any ideas?
According to the Docker Hub documentation for the postgres image, you should be using the environment variable POSTGRES_PASSWORD instead of DB_PASS.
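The relevant part of the container spec would then read (values copied from the question):

env:
  - name: POSTGRES_PASSWORD
    value: password
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata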