Running postgres command in pod definition doesn't work - postgresql

I'm just including the containers part of the spec. Everything else is set up and working fine, and values are hardcoded here. This is a simple Postgres pod that is part of a single-replica deployment with its own PVC to persist state. But the problem has nothing to do with my pod/deployment setup.
containers:
  - name: postgres-container
    image: postgres
    imagePullPolicy: Always
    volumeMounts:
      - name: postgres-internal-volume
        mountPath: /var/lib/postgresql/data
        subPath: postgres
    envFrom:
      - configMapRef:
          name: postgres-internal-cnf
    ports:
      - containerPort: 5432
    command: ['psql']
    args: [-U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'dominion'" | grep -q 1 || psql -h localhost -p 5432 -U postgres -c "CREATE DATABASE dominion"]
This command creates the database if it does not already exist. If I create the deployment, exec into the pod, and run this command, everything works fine. If I run it here, however, the pod fails to spin up and I get this error:
psql: error: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I was under the impression that this error comes from the default connection values being incorrect, but here I am hardcoding the localhost and the port number.

With your pod spec, you've replaced the default command -- which starts the postgres server -- with your own command, so the server never starts; psql then fails with the Unix-socket error because there is no server to connect to. The proper way to perform initialization tasks with the official Postgres image is described in the documentation.
You want to move your initialization commands into a ConfigMap, and then mount the scripts into /docker-entrypoint-initdb.d as described in those docs.
The docs have more details, but here's a short example. We want to run
CREATE DATABASE dominion when the postgres server starts (and only
if it is starting with an empty data directory). We can define a
simple SQL script in a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-scripts
data:
  create-dominion-db.sql: |
    CREATE DATABASE dominion;
And then mount that script into the appropriate location in the pod spec:
volumes:
  - name: postgres-init-scripts
    configMap:
      name: postgres-init-scripts
containers:
  - name: postgres-container
    image: postgres
    imagePullPolicy: Always
    volumeMounts:
      - name: postgres-internal-volume
        mountPath: /var/lib/postgresql/data
        subPath: postgres
      - name: postgres-init-scripts
        mountPath: /docker-entrypoint-initdb.d/create-dominion-db.sql
        subPath: create-dominion-db.sql
    envFrom:
      - configMapRef:
          name: postgres-internal-cnf
    ports:
      - containerPort: 5432
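When the pod starts with an empty data directory, the entrypoint runs every script in /docker-entrypoint-initdb.d before handing control to the regular postgres process. A quick way to confirm the script ran (a sketch; the pod name is a placeholder, and the psql check relies on the local trust auth the image configures by default):
# <postgres-pod-name> is a placeholder for your actual pod name
kubectl logs <postgres-pod-name> | grep docker-entrypoint-initdb.d
kubectl exec <postgres-pod-name> -- psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'dominion'"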

Related

Pass postgres parameter into Kubernetes deployment

I am trying to set a postgres parameter (shared_buffers) into my postgres database pod. I am trying to set an init container to set the db variable, but it is not working because the init container runs as the root user.
What is the best way to edit the db variable on the pods? I do not have the ability to make the change within the image, because the variable needs to be different for different instances. If it helps, the command I need to run is a "postgres -c" command.
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
You didn't share your Pod/Deployment definition, but I believe you want to set shared_buffers from the command line of the actual container (not the init container) in your Pod definition. Something like this if you are using a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: postgres
          image: postgres:12.2
          imagePullPolicy: "IfNotPresent"
          command: ["postgres"]                 # <-- add this
          args: ["-c", "shared_buffers=128MB"]  # <-- add this
          ports:
            - containerPort: 5432
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - name: postgresql-config-volume    # <-- use if you are using a ConfigMap (see below)
              mountPath: /var/lib/postgresql/data/postgresql.conf
              subPath: postgresql.conf
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim        # <-- note: you need to have this already predefined
        - name: postgresql-config-volume        # <-- use if you are using a ConfigMap (see below)
          configMap:
            name: postgresql-config
Notice that if you are using a ConfigMap you can also do this (note that you may want to add more configuration options besides shared_buffers):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-config
data:
  postgresql.conf: |
    shared_buffers=256MB
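Note that if you mount this file somewhere outside the data directory (for example /etc/postgresql/postgresql.conf, as one of the answers further down does), you also have to tell the server where to find it. A sketch of the relevant container fields under that assumption:
volumeMounts:
  - name: postgresql-config-volume
    mountPath: /etc/postgresql/postgresql.conf   # assumed location outside the data directory
    subPath: postgresql.conf
command: ["postgres"]
args: ["-c", "config_file=/etc/postgresql/postgresql.conf"]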
In my case, the @Rico answer didn't help me out of the box because I don't use postgres with a persistent storage mount, which means there is no /var/lib/postgresql/data folder or pre-existing database (so both proposed options failed in my case).
To successfully apply postgres settings, I used only args (without the command section).
In that case, k8s will pass these args to the default entrypoint defined in the docker image (docs), and the postgres entrypoint is made so that any options passed to the docker command will be passed along to the postgres server daemon (see the Database Configuration section at: https://hub.docker.com/_/postgres)
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - image: postgres:9.6.8
      name: postgres
      args: ["-c", "shared_buffers=256MB", "-c", "max_connections=207"]
To check that the settings applied:
$ kubectl exec -it postgres -- bash
root@postgres:/# su postgres
$ psql -c 'show max_connections;'
max_connections
-----------------
207
(1 row)
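If you'd rather not open an interactive shell, the same check can be done in one line (a sketch; it assumes the pod name postgres from the manifest above and the image's default local trust auth):
# assumes the pod is named 'postgres' as in the manifest above
kubectl exec postgres -- psql -U postgres -c 'show max_connections;'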

How to test if container is running postgres

I just deployed a Docker container with Postgres on it to AWS EKS.
Below are the details.
How do I access it or test whether postgres is working? I tried accessing both IPs with the port from within the VPC, from a worker node.
psql -h #IP -U #defaultuser -p 55432
Below is the deployment.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 55432
#          envFrom:
#            - configMapRef:
#                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: efs
Surprisingly, I am able to connect with psql, but on 5432. :( Not sure what I am doing wrong -- I passed containerPort as 55432.
In short, you need to run the following command to expose your database on port 55432.
kubectl expose deployment postgres --port=55432 --target-port=5432 --name internal-postgresql-svc
From now on, you can connect to it via port 55432 from inside your cluster by using the service name as a hostname, or via its ClusterIP address:
kubectl get svc internal-postgresql-svc
What you did in your deployment manifest file is only attach additional information about the network connections the container uses -- and somewhat misleadingly so, because your container exposes port 5432 only (you can verify this yourself); containerPort is purely informational. You should use a Kubernetes Service -- an abstraction which enables access to your Pods and does the necessary port mapping behind the scenes.
Please also check the different Service types if you want to expose your postgresql database outside of the Kubernetes cluster.
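For reference, a Service manifest equivalent to the kubectl expose command above might look like this (a sketch; the selector assumes the app: postgres label from your deployment):
apiVersion: v1
kind: Service
metadata:
  name: internal-postgresql-svc
spec:
  selector:
    app: postgres   # assumes the label used in the deployment above
  ports:
    - port: 55432
      targetPort: 5432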
To test whether postgres is running fine inside the Pod's container:
kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace default --image bitnami/postgresql --env="PGPASSWORD=<HERE_YOUR_PASSWORD>" --command -- psql --host <HERE_HOSTNAME=SVC_OR_IP> -U <HERE_USERNAME>

How to set environment variable in container from Kubernetes?

I want to set an environment variable (I'll just name it ENV_VAR_VALUE) to a container during deployment through Kubernetes.
I have the following in the pod yaml configuration:
...
...
spec:
  containers:
    - name: appname-service
      image: path/to/registry/image-name
      ports:
        - containerPort: 1234
      env:
        - name: "ENV_VAR_VALUE"
          value: "some.important.value"
...
...
The container needs to use the ENV_VAR_VALUE's value.
But in the container's application logs, its value always comes out empty.
So, I tried checking its value from inside the container:
$ kubectl exec -it appname-service bash
root@appname-service:/# echo $ENV_VAR_VALUE
some.important.value
root@appname-service:/#
So, the value was successfully set.
I imagine it's because the environment variables defined from Kubernetes are set after the container is already initialized.
So, I tried overriding the container's CMD from the pod yaml configuration:
...
...
spec:
  containers:
    - name: appname-service
      image: path/to/registry/image-name
      ports:
        - containerPort: 1234
      env:
        - name: "ENV_VAR_VALUE"
          value: "some.important.value"
      command: ["/bin/bash"]
      args: ["-c", "application-command"]
...
...
Even so, the value of ENV_VAR_VALUE is still empty during the execution of the command.
Thankfully, the application has a restart function -- because when I restart the app, ENV_VAR_VALUE gets used successfully -- so I can at least do some other tests in the meantime.
So, the question is...
How should I configure this in Kubernetes so it isn't a tad too late in setting the environment variables?
As requested, here is the Dockerfile.
I apologize for the abstraction...
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
COPY application-script.sh application-script.sh
RUN ./application-script.sh
# ENV_VAR_VALUE is set in this file which is populated when application-command is executed
COPY app-config.conf /etc/app/app-config.conf
CMD ["/bin/bash", "-c", "application-command"]
You can also try running two commands in the Kubernetes Pod spec:
(read in the env vars): "source /env/required_envs.env" (this file would come from a Secret mounted as a volume)
(main command): "application-command"
Like this:
containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
      - containerPort: 1234
    command: ["/bin/sh", "-c"]
    args:
      - source /env/required_envs.env;
        application-command;
Why don't you move the
RUN ./application-script.sh
below
COPY app-config.conf /etc/app/app-config.conf
It looks like the script is running before the env conf is available to it, as sketched below.
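A sketch of the suggested reordering (the same Dockerfile as above, with only the RUN line moved below the COPY of the config):
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
COPY application-script.sh application-script.sh

# copy the config first so it already exists when the script runs
COPY app-config.conf /etc/app/app-config.conf
RUN ./application-script.sh

CMD ["/bin/bash", "-c", "application-command"]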

How to backup a Postgres database in Kubernetes on Google Cloud?

What is the best practice for backing up a Postgres database running on Google Cloud Container Engine?
My thought is working towards storing the backups in Google Cloud Storage, but I am unsure of how to connect the Disk/Pod to a Storage Bucket.
I am running Postgres in a Kubernetes cluster using the following configuration:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.6.2-alpine
          imagePullPolicy: IfNotPresent
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data
            - name: POSTGRES_DB
              value: my-database-name
            - name: POSTGRES_PASSWORD
              value: my-password
            - name: POSTGRES_USER
              value: my-database-user
          name: postgres-container
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: my-postgres-volume
      volumes:
        - gcePersistentDisk:
            fsType: ext4
            pdName: my-postgres-disk
          name: my-postgres-volume
I have attempted to create a Job to run a backup:
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-dump-job
spec:
  template:
    metadata:
      labels:
        app: postgres-dump
    spec:
      containers:
        - command:
            - pg_dump
            - my-database-name
          # `env` value matches `env` from previous configuration.
          image: postgres:9.6.2-alpine
          imagePullPolicy: IfNotPresent
          name: my-postgres-dump-container
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: my-postgres-volume
              readOnly: true
      restartPolicy: Never
      volumes:
        - gcePersistentDisk:
            fsType: ext4
            pdName: my-postgres-disk
          name: my-postgres-volume
(As far as I understand) this should run the pg_dump command and output the backup data to stdout (which should appear in the kubectl logs).
As an aside, when I inspect the Pods (with kubectl get pods), it shows the Pod never gets out of the "Pending" state, which I gather is due to there not being enough resources to start the Job.
Is it correct to run this process as a Job?
How do I connect the Job to Google Cloud Storage?
Or should I be doing something completely different?
I'm guessing it would be unwise to run pg_dump in the database Container (with kubectl exec) due to a performance hit, but maybe this is ok in a dev/staging server?
As @Marco Lamina said, you can run pg_dump on the postgres pod like this:
DUMP
// pod-name name of the postgres pod
// postgres-user database user that is able to access the database
// database-name name of the database
kubectl exec [pod-name] -- bash -c "pg_dump -U [postgres-user] [database-name]" > database.sql
RESTORE
// pod-name name of the postgres pod
// postgres-user database user that is able to access the database
// database-name name of the database
cat database.sql | kubectl exec -i [pod-name] -- psql -U [postgres-user] -d [database-name]
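For larger databases it can be worth switching to pg_dump's custom (compressed) format and restoring with pg_restore. A sketch using the same placeholders as above:
// dump in custom (compressed) format
kubectl exec [pod-name] -- pg_dump -U [postgres-user] -Fc [database-name] > database.dump
// restore from the custom-format dump
kubectl exec -i [pod-name] -- pg_restore -U [postgres-user] -d [database-name] < database.dump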
You can have a Job pod that runs this command and exports the dump to a file storage system such as AWS S3.
I think running pg_dump as a job is a good idea, but connecting directly to your DB's persistent disk is not. Try having pg_dump connect to your DB over the network! You could then have a second disk onto which your pg_dump command dumps the backups. To be on the safe side, you can create regular snapshots of this second disk.
The reason the Job's Pod stays in the Pending state is that it forever tries to attach/mount the GCE persistent disk and fails, because the disk is already attached/mounted to another Pod.
Attaching a persistent disk to multiple Pods is only supported if all of them attach/mount the volume in ReadOnly mode. This is of course not a viable solution for you.
I never worked with GCE, but it should be possible to easily create a snapshot of the PD from within GCE. This would not give a very clean backup -- more like something in the state of "crashed in the middle, but recoverable" -- but this is probably acceptable for you.
Running pg_dump inside the database Pod is a viable solution, with a few drawbacks as you already noticed, especially performance. You'd also have to move the resulting backup out of the Pod afterwards, e.g. by using kubectl cp and another exec to clean up the backup in the Pod.
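A sketch of that sequence, with hypothetical pod and path names:
# my-postgres-pod, my-database-user, my-database-name and /tmp/backup.sql are placeholders
kubectl exec my-postgres-pod -- bash -c "pg_dump -U my-database-user my-database-name > /tmp/backup.sql"
kubectl cp my-postgres-pod:/tmp/backup.sql ./backup.sql
kubectl exec my-postgres-pod -- rm /tmp/backup.sql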
You can use the Minio Client.
First of all, use a simple Dockerfile to make a docker image that contains postgres along with the Minio client (let's name this image postgres_backup):
FROM postgres
RUN apt-get update && apt-get install -y wget
RUN wget https://dl.min.io/client/mc/release/linux-amd64/mc
RUN chmod +x mc
RUN ./mc alias set gcs https://storage.googleapis.com BKIKJAA5BMMU2RHO6IBB V8f1CwQqAcwo80UEIJEjc5gVQUSSx5ohQ9GSrr12
Now you can use the postgres_backup image in your CronJob (I assume you have made a backups bucket in your Google storage):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup-job
spec:
  # Backup the database every day at 2AM
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: postgres-backup
              image: postgres_backup
              env:
                - name: POSTGRES_HOST_AUTH_METHOD
                  value: trust
              command: ["/bin/sh"]
              args: ["-c", 'pg_dump -Fc -U [Your Postgres Username] -W [Your Postgres Password] -h [Your Postgres Host] [Your Postgres Database] | ./mc pipe gcs/backups/$(date -Iseconds).dump']
          restartPolicy: Never
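One caveat: pg_dump's -W flag only forces a password prompt and does not take the password as an argument, so for a non-interactive CronJob it is usually easier to supply it via the PGPASSWORD environment variable. A sketch of the relevant container fields, assuming a Secret named postgres-credentials (not part of the original answer):
env:
  - name: PGPASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials   # hypothetical Secret holding the database password
        key: password
command: ["/bin/sh"]
args: ["-c", 'pg_dump -Fc -U [Your Postgres Username] -h [Your Postgres Host] [Your Postgres Database] | ./mc pipe gcs/backups/$(date -Iseconds).dump']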
A lot of tutorials use kubectl cp or transfer the file inside the pod, but you can also pipe the pg_dump container output directly to another process.
kubectl run --env=PGPASSWORD=$PASSWORD --image=bitnami/postgresql postgresql -it --rm -- \
bash -c "pg_dump -U $USER -h $HOST -d $DATABASE" |\
gzip > backup.sql.gz
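The restore direction works the same way in reverse; a sketch, assuming the dump was made as above and the same $PASSWORD/$USER/$HOST/$DATABASE variables:
# assumes the same connection variables as the backup command above
gunzip -c backup.sql.gz |\
  kubectl run --env=PGPASSWORD=$PASSWORD --image=bitnami/postgresql postgresql -i --rm -- \
  bash -c "psql -U $USER -h $HOST -d $DATABASE"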
The easiest way to dump without storing any additional copies on your pod:
kubectl -n [namespace] exec -it [pod name] -- bash -c "export PGPASSWORD='[db password]'; pg_dump -U [db user] [db name]" > [database].sql

"root" execution of the PostgreSQL server is not permitted

When I try to start postgresql I get an error:
postgres
postgres does not know where to find the server configuration file.
You must specify the --config-file or -D invocation option or set the
PGDATA environment variable.
So then I try to set my config file:
postgres -D /usr/local/var/postgres
And I get the following error:
postgres cannot access the server configuration file "/usr/local/var/postgres/postgresql.conf": Permission denied
Hmm okay. Next, I try to perform that same action as an admin:
sudo postgres -D /usr/local/var/postgres
And I receive the following error:
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for more
information on how to properly start the server.
I googled around for that error message but cannot find a solution.
Can anyone provide some insight into this?
For those trying to run a custom command using the official docker image, use the following command. docker-entrypoint.sh handles switching the user and other permissions.
docker-entrypoint.sh -c 'shared_buffers=256MB' -c 'max_connections=200'
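In practice you rarely call the entrypoint by hand: with the official image, anything appended after the image name is forwarded to the entrypoint and from there to the server, so the plain-docker equivalent is (the POSTGRES_PASSWORD value here is just a placeholder):
# 'mysecret' is a placeholder password
docker run -e POSTGRES_PASSWORD=mysecret postgres -c shared_buffers=256MB -c max_connections=200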
Your command does not do what you think it does. To run something as system user postgres:
sudo -u postgres command
To run the command (also named postgres!):
sudo -u postgres postgres -D /usr/local/var/postgres
Your command does the opposite:
sudo postgres -D /usr/local/var/postgres
It runs the program postgres as the superuser root (sudo without the -u switch), and Postgres does not allow itself to be run with superuser privileges for security reasons. Hence the error message.
If you are going to run a couple of commands as system user postgres, change the user with:
sudo -u postgres -i
... and exit when you are done.
If you see this error message while operating as system user postgres, then something is wrong with permissions on the file or one of the containing directories.
postgres cannot access the server configuration file "/usr/local/var/postgres/postgresql.conf": Permission denied
Consider the instructions in the Postgres manual.
Also consider the wrapper pg_ctl - or pg_ctlcluster in Debian-based distributions.
And know the difference between su and sudo. Related:
PostgreSQL error: Fatal: role "username" does not exist
Muthukumar's answer is the best! After searching all day for a simpler way to change my Alpine Postgres deployment in Kubernetes, I found this simple answer.
Here is my complete description. Enjoy it!
First, I need to create/define a ConfigMap with the correct values. Save the following in a file named "custom-postgresql.conf":
# DB Version: 12
# OS Type: linux
# DB Type: oltp
# Total Memory (RAM): 16 GB
# CPUs num: 4
# Connections num: 9999
# Data Storage: ssd
# https://pgtune.leopard.in.ua/#/
# 2020-10-29
listen_addresses = '*'
max_connections = 9999
shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 209kB
min_wal_size = 2GB
max_wal_size = 8GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
Create the ConfigMap:
kubectl create configmap custom-postgresql-conf --from-file=custom-postgresql.conf
Please take care that the values in the custom settings are defined according to the Pod's resources, mainly its memory and CPU assignments.
Here is the manifest (postgres.yml):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: default
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 128Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: postgres
    tier: core
  ports:
    - name: port-5432-tcp
      port: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      tier: core
  template:
    metadata:
      labels:
        app: postgres
        tier: core
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
        - name: postgresql-conf
          configMap:
            name: custom-postgresql-conf
            items:
              - key: custom-postgresql.conf
                path: postgresql.conf
      containers:
        - name: postgres
          image: postgres:12-alpine
          resources:
            requests:
              memory: 128Mi
              cpu: 600m
            limits:
              memory: 16Gi
              cpu: 1500m
          readinessProbe:
            exec:
              command:
                - "psql"
                - "-w"
                - "-U"
                - "postgres"
                - "-d"
                - "postgres"
                - "-c"
                - "SELECT 1"
            initialDelaySeconds: 15
            timeoutSeconds: 2
          livenessProbe:
            exec:
              command:
                - "psql"
                - "-w"
                - "-U"
                - "postgres"
                - "-d"
                - "postgres"
                - "-c"
                - "SELECT 1"
            initialDelaySeconds: 45
            timeoutSeconds: 2
          imagePullPolicy: IfNotPresent
          # this was the problem !!!
          # I found the solution here: https://stackoverflow.com/questions/28311825/root-execution-of-the-postgresql-server-is-not-permitted
          command: [ "docker-entrypoint.sh", "-c", "config_file=/etc/postgresql/postgresql.conf" ]
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgresql
            - name: postgresql-conf
              mountPath: /etc/postgresql/postgresql.conf
              subPath: postgresql.conf
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: etldatasore-username
                  key: ETLDATASTORE__USERNAME
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: etldatasore-database
                  key: ETLDATASTORE__DATABASE
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: etldatasore-password
                  key: ETLDATASTORE__PASSWORD
You can apply with
kubectl apply -f postgres.yml
Go to your pod and check for applied settings:
kubectl get pods
kubectl exec -it postgres-548f997646-6vzv2 bash
bash-5.0# su - postgres
postgres-548f997646-6vzv2:~$ psql
postgres=# show config_file;
config_file
---------------------------------
/etc/postgresql/postgresql.conf
(1 row)
postgres=#
# if you want to check all custom settings, do
postgres=# SHOW ALL;
Thank you Muthukumar!
Please try it yourself, validate, and share!