Postgres DB Password not being set in Kubernetes script

For some reason, the Postgres instance isn't being locked down with a password when deployed with the following Kubernetes manifest:
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
  labels:
    name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - resources:
          image: postgres:9.4
          name: postgres
          env:
            - name: DB_PASS
              value: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-persistent-storage
      volumes:
        - name: postgres-persistent-storage
          gcePersistentDisk:
            pdName: postgres-disk
            fsType: ext4
Any ideas?

According to the Docker Hub documentation for the postgres image, you should be using the environment variable POSTGRES_PASSWORD instead of DB_PASS.
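With that change, the container's env block would look like this (a minimal sketch; the literal password is a placeholder, and in practice it should come from a Kubernetes Secret):

env:
  - name: POSTGRES_PASSWORD
    value: password
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata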

Related

How to persist user session at keycloak running on kubernetes?

After a pod restart, all user session data is missing, but all other data (e.g. realms, users, realm settings) is still present after the restart.
Keycloak is running with Postgres as persistent storage in a single pod.
Below is the deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
        - name: postgres
          image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
          envFrom:
            - configMapRef:
                name: postgres-config
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "XXXXXXX"
            - name: KEYCLOAK_PASSWORD
              value: "XXXXXXX"
            - name: REALM
              value: "XXXXXXX"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_VENDOR
              value: "POSTGRES"
            - name: DB_ADDR
              value: "localhost"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "xxxxxxxxx"
            - name: DB_PASSWORD
              value: "xxxxxxxxx"
            - name: DB_SCHEMA
              value: "keycloak"
            - name: KEYCLOAK_IMPORT
              value: "/opt/jboss/keycloak/startup/elements/realm.json"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Can you please let me know what configuration is required to persist user sessions?
Please take a look at the existing chart.
In its High Availability and Clustering section, it shows that you need to set CACHE_OWNERS_COUNT to 2 or higher and CACHE_OWNERS_AUTH_SESSIONS_COUNT to 2 or higher.
It's also recommended to use a separate Infinispan cluster if you have a large number of sessions (more than 1 million, in my experience).
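Concretely, with the legacy Keycloak image used above, that would mean adding env entries along these lines (a sketch; the variable names come from the chart's documentation, and the values just need to be 2 or higher):

- name: CACHE_OWNERS_COUNT
  value: "2"
- name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
  value: "2"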

How to run Keycloak as a second container after the first container (the Postgres database) starts up, in a multi-container pod on Kubernetes?

In a multi-container pod:
Step 1: Deploy the first container, the Postgres database, and create a schema.
Step 2: Wait until the Postgres container comes up.
Step 3: Then start the second container, Keycloak.
I have written the deployment file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
        - name: postgres
          image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
          envFrom:
            - configMapRef:
                name: postgres-config
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: REALM
              value: "ntc"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: "localhost"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "xxxxxxxxx"
            - name: DB_PASSWORD
              value: "xxxxxxxxx"
            - name: DB_SCHEMA
              value: "keycloak"
            - name: KEYCLOAK_IMPORT
              value: "/opt/jboss/keycloak/startup/elements/realm.json"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
But Keycloak is starting with the embedded H2 database instead of Postgres. And if I add an init container to the deployment that does an nslookup on Postgres, like below:
initContainers:
  - name: init-postgres
    image: busybox
    command: ['sh', '-c', 'until nslookup postgres; do echo waiting for postgres; sleep 2; done;']
then the pod gets stuck in "PodInitializing".
You forgot to add

- name: DB_VENDOR
  value: POSTGRES

in the deployment YAML file; because of that, Keycloak defaults to the H2 database.
(The init-container approach can't work here anyway: init containers must finish before any regular containers start, and Postgres runs as a regular container in the same pod, so the lookup waits forever.)
Reference YAML: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
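If you still want Keycloak to wait for Postgres before starting, one hedged workaround is to wrap the keycloak container's own startup in a TCP wait loop instead of using an init container (a sketch, assuming the legacy image's /opt/jboss/tools/docker-entrypoint.sh entrypoint and its default -b 0.0.0.0 argument):

- name: keycloak
  image: quay.io/keycloak/keycloak:10.0.1
  command: ["/bin/bash", "-c"]
  args:
    - |
      # Wait until Postgres (in the same pod) accepts TCP connections
      # on 5432, then hand off to the image's normal entrypoint.
      until (echo > /dev/tcp/localhost/5432) 2>/dev/null; do
        echo "waiting for postgres"; sleep 2
      done
      exec /opt/jboss/tools/docker-entrypoint.sh -b 0.0.0.0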

chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted

Hi, I have set up a small NFS server at home using my Raspberry Pi, and I want to set it as the default storage for all of my Kubernetes containers.
However, I keep getting this:
chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted
Here is my config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg-ss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgredb
      volumes:
        - name: pv-data
          nfs:
            path: /mnt/infra-data/pg
            server: 192.168.1.150
            readOnly: false
I'm wondering what the cause of this is, and how I can solve it.
Thanks.
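For what it's worth, this error usually means the NFS export is squashing root: the postgres image chowns and chmods /var/lib/postgresql/data at startup, and a root-squashed export rejects that. A hedged sketch of an /etc/exports entry on the server that avoids it (the subnet is an assumption based on the server address above):

# /etc/exports on the Raspberry Pi: allow root in the container to chown the export
/mnt/infra-data/pg 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)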

Postgres / K8S : PANIC could not locate a valid checkpoint record / CrashLoopBackOff

Postgres can't start, giving the error:
PANIC could not locate a valid checkpoint record
On Google there are a lot of solutions, but all of them need you to connect to the pod to execute some pg commands.
But, as I use K8s, my pod falls into CrashLoopBackOff status, so I can't connect to my pod anymore.
What should I do to fix my Postgres DB?
EDIT:
I have tried to run the command:
pg_resetwal /var/lib/postgresql/data
with:
...
spec:
  containers:
    - args:
        - pg_resetwal
        - /var/lib/postgresql/data
But I get:
pg_resetwal: cannot be executed by "root"
You must run pg_resetwal as the PostgreSQL superuser.
Can't go further...
EDIT2:
I tried to run a new pod with the same volumes attached and the same postgres container, but changing the command to pg_resetwal /var/lib/postgresql/data. I also added:
securityContext:
  runAsUser: 0
Here is the YAML for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: metadata-postgres-fix
  name: metadata-postgres-fix
  namespace: metadata
spec:
  selector:
    matchLabels:
      app: metadata-postgres-fix
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: metadata-postgres-fix
    spec:
      containers:
        - args:
            - pg_resetwal
            - /var/lib/postgresql/data
          envFrom:
            - secretRef:
                name: metadata-env
          image: postgres:11.3
          name: metadata-postgres-fix
          securityContext:
            runAsUser: 0
          ports:
            - containerPort: 5432
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /etc/postgresql/postgresql.conf
              name: metadata-postgres-data
              subPath: postgres.conf
            - mountPath: /docker-entrypoint-initdb.d/init.sh
              name: metadata-postgres-data
              subPath: init.sh
            - mountPath: /var/lib/postgresql/data
              name: metadata-postgres-claim
              subPath: postgres
      restartPolicy: Always
      volumes:
        - name: metadata-postgres-data
          configMap:
            name: cfgmap-metadata-postgres
        - name: metadata-postgres-claim
          persistentVolumeClaim:
            claimName: metadata-postgres-claim
      nodeSelector:
        kops.k8s.io/instancegroup: nodes
I solved it by changing

- args:
  - pg_resetwal
  - /var/lib/postgresql/data

to a pause, so I could look up the UID of the postgres user:

- args:
  - sleep
  - "1000"

With
cat /etc/passwd
I could find that the postgres UID is 999, and finally I changed runAsUser: 0 to runAsUser: 999.
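Putting the pieces together, the fix pod's container spec would look roughly like this (a sketch based on the deployment above; UID 999 is the postgres user in the official image):

containers:
  - name: metadata-postgres-fix
    image: postgres:11.3
    # Run as the postgres user so pg_resetwal agrees to execute
    # and can write to the data directory.
    securityContext:
      runAsUser: 999
    args:
      - pg_resetwal
      - /var/lib/postgresql/data
    volumeMounts:
      - mountPath: /var/lib/postgresql/data
        name: metadata-postgres-claim
        subPath: postgres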

Reconfigure postgres with kubectl apply command

I'm trying to change the settings of my Postgres database inside my local Minikube cluster. I mistakenly deployed a database without specifying the Postgres user, password, and database.
The problem: when I add the new env variables and use kubectl apply -f postgres-deployment.yml, Postgres does not create the user, password, or database specified by the environment variables.
This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: PGUSER
              value: admin
            - name: PGPASSWORD
              value: password
            - name: PGDATABSE
              value: testdb
How can I change the settings of postgres when I apply the deployment file?
Can you share the pod's logs?
kubectl logs <pod_name>
Postgres's init script only reads these specific variable names:
POSTGRES_USER
POSTGRES_PASSWORD
POSTGRES_DB
Note also that they are applied only when the data directory is initialized for the first time, so an existing volume keeps its old settings.
Try this one out:
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          env:
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_DB
              value: testdb
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: pg-data
      volumes:
        - name: pg-data
          emptyDir: {}
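If you need to keep the existing PersistentVolumeClaim rather than switching to emptyDir, those variables won't take effect on the already-initialized data directory. A hedged alternative is to create the user and database by hand in the running pod (<pod_name> is a placeholder; "postgres" is the default superuser when POSTGRES_USER was never set):

kubectl exec -it <pod_name> -- psql -U postgres -c "CREATE USER admin WITH PASSWORD 'password';"
kubectl exec -it <pod_name> -- psql -U postgres -c "CREATE DATABASE testdb OWNER admin;"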