The server must be started by the user that owns the data directory - postgresql

I am trying to get some persistent storage for a Docker instance of PostgreSQL running on Kubernetes. However, the pod fails with
FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
This is the NFS configuration:
% exportfs -v
/srv/nfs/postgresql/postgres-registry
kubehost*.example.com(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash)
$ ls -ldn /srv/nfs/postgresql/postgres-registry
drwxrwxrwx. 3 999 999 4096 Jul 24 15:02 /srv/nfs/postgresql/postgres-registry
$ ls -ln /srv/nfs/postgresql/postgres-registry
total 4
drwx------. 2 999 999 4096 Jul 25 08:36 pgdata
The full log from the pod:
2019-07-25T07:32:50.617532000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T07:32:50.618113000Z This user must also own the server process.
2019-07-25T07:32:50.619048000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T07:32:50.619496000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T07:32:50.619943000Z The default text search configuration will be set to "english".
2019-07-25T07:32:50.620826000Z Data page checksums are disabled.
2019-07-25T07:32:50.621697000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T07:32:50.647445000Z creating subdirectories ... ok
2019-07-25T07:32:50.765065000Z selecting default max_connections ... 20
2019-07-25T07:32:51.035710000Z selecting default shared_buffers ... 400kB
2019-07-25T07:32:51.062039000Z selecting default timezone ... Etc/UTC
2019-07-25T07:32:51.062828000Z selecting dynamic shared memory implementation ... posix
2019-07-25T07:32:51.218995000Z creating configuration files ... ok
2019-07-25T07:32:51.252788000Z 2019-07-25 07:32:51.251 UTC [79] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T07:32:51.253339000Z 2019-07-25 07:32:51.251 UTC [79] HINT: The server must be started by the user that owns the data directory.
2019-07-25T07:32:51.262238000Z child process exited with exit code 1
2019-07-25T07:32:51.263194000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T07:32:51.380205000Z running bootstrap script ...
The deployment has the following in:
securityContext:
  runAsUser: 999
  supplementalGroups: [999,1000]
  fsGroup: 999
What am I doing wrong?
Edit: Added storage.yaml file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.3.7
    path: /srv/nfs/postgresql/postgres-registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Edit: And the full deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              subPath: "pgdata"
              name: postgredb-registry-persistent-storage
      volumes:
        - name: postgredb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
Even more debugging, adding:
command: ["/bin/bash", "-c"]
args: ["id -u; ls -ldn /var/lib/postgresql/data"]
Which returned:
999
drwx------. 2 99 99 4096 Jul 25 09:11 /var/lib/postgresql/data
Clearly, the UID/GID are wrong. Why?
Even with the workaround suggested by Jakub Bujny, I get this:
2019-07-25T09:32:08.734807000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T09:32:08.735335000Z This user must also own the server process.
2019-07-25T09:32:08.736976000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T09:32:08.737416000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T09:32:08.737882000Z The default text search configuration will be set to "english".
2019-07-25T09:32:08.738754000Z Data page checksums are disabled.
2019-07-25T09:32:08.739648000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T09:32:08.766606000Z creating subdirectories ... ok
2019-07-25T09:32:08.852381000Z selecting default max_connections ... 20
2019-07-25T09:32:09.119031000Z selecting default shared_buffers ... 400kB
2019-07-25T09:32:09.145069000Z selecting default timezone ... Etc/UTC
2019-07-25T09:32:09.145730000Z selecting dynamic shared memory implementation ... posix
2019-07-25T09:32:09.168161000Z creating configuration files ... ok
2019-07-25T09:32:09.200134000Z 2019-07-25 09:32:09.199 UTC [70] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T09:32:09.200715000Z 2019-07-25 09:32:09.199 UTC [70] HINT: The server must be started by the user that owns the data directory.
2019-07-25T09:32:09.208849000Z child process exited with exit code 1
2019-07-25T09:32:09.209316000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T09:32:09.274741000Z running bootstrap script ... 999
2019-07-25T09:32:09.278124000Z drwx------. 2 99 99 4096 Jul 25 09:32 /var/lib/postgresql/data

Using your setup and ensuring the NFS mount is owned by 999:999, it worked just fine.
You're also missing an 's' in your volume name: postgredb-registry-persistent-storage
And with your subPath: "pgdata", do you need to change $PGDATA? I didn't include the subPath for this.
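For reference, the alternative pattern from the postgres image docs (just a sketch, not tested against this exact NFS setup) is to drop the subPath and instead point PGDATA at a subdirectory of the mount:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata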
$ sudo mount 172.29.0.218:/test/nfs ./nfs
$ sudo su -c "ls -al ./nfs" postgres
total 8
drwx------ 2 postgres postgres 4096 Jul 25 14:44 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
$ kubectl apply -f nfspv.yaml
persistentvolume/postgres-registry-pv-volume created
persistentvolumeclaim/postgres-registry-pv-claim created
$ kubectl apply -f postgres.yaml
deployment.extensions/postgres-registry created
$ sudo su -c "ls -al ./nfs" postgres
total 124
drwx------ 19 postgres postgres 4096 Jul 25 14:46 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
drwx------ 3 postgres postgres 4096 Jul 25 14:46 base
drwx------ 2 postgres postgres 4096 Jul 25 14:46 global
drwx------ 2 postgres postgres 4096 Jul 25 14:46 pg_commit_ts
. . .
I noticed using nfs: directly in the persistent volume took significantly longer to initialize the database, whereas using hostPath: to the mounted nfs volume behaved normally.
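For reference, that hostPath variant would be a PV along these lines (a sketch; /mnt/nfs/postgres-registry is a hypothetical path where the node already has the NFS export mounted):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/nfs/postgres-registry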
So after a few minutes:
$ kubectl logs postgres-registry-675869694-9fp52 | tail -n 3
2019-07-25 21:50:57.181 UTC [30] LOG: database system is ready to accept connections
done
server started
$ kubectl exec -it postgres-registry-675869694-9fp52 psql
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.
postgres=#
Checking the uid/gid
$ kubectl exec -it postgres-registry-675869694-9fp52 bash
postgres@postgres-registry-675869694-9fp52:/$ whoami && id -u && id -g
postgres
999
999
nfspv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.29.0.218
    path: /test/nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
postgres.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb-registry-persistent-storage
      volumes:
        - name: postgresdb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim

I cannot explain why those 2 IDs are different, but as a workaround I would try to override the postgres entrypoint with:
command: ["/bin/bash", "-c"]
args: ["chown -R 999:999 /var/lib/postgresql/data && ./docker-entrypoint.sh postgres"]

This type of error is quite common when you link an NTFS directory into your docker container. NTFS directories don't support ext3 file and directory access control. The only way to make it work is to link a directory from an ext3 drive into your container.
I got a bit desperate when I played around with Apache/PHP containers, linking the www folder. After I linked files residing on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on youtube; maybe it helps to understand this problem: https://www.youtube.com/watch?v=eS9O05TTFjM

Related

Pass postgres parameter into Kubernetes deployment

I am trying to set a postgres parameter (shared_buffers) for my postgres database pod. I am trying to use an init container to set the db variable, but it is not working because the init container runs as the root user.
What is the best way to edit the db variable on the pods? I do not have the ability to make the change within the image, because the variable needs to be different for different instances. If it helps, the command I need to run is a "postgres -c" command.
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
You didn't share your Pod/Deployment definition, but I believe you want to set shared_buffers from the command line of the actual container (not the init container) in your Pod definition. Something like this if you are using a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: postgres
          image: postgres:12.2
          imagePullPolicy: "IfNotPresent"
          command: ["postgres"] # <-- add this
          args: ["-c", "shared_buffers=128MB"] # <-- add this
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - name: postgresql-config-volume # <-- use if you are using a ConfigMap (see below)
              mountPath: /var/lib/postgres/data/postgresql.conf
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim # <-- note: you need to have this already predefined
        - name: postgresql-config-volume # <-- use if you are using a ConfigMap (see below)
          configMap:
            name: postgresql-config
Notice that if you are using a ConfigMap you can also do this (note that you may want to add more configuration options besides shared_buffers):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-config
data:
  postgresql.conf: |
    shared_buffers=256MB
In my case, the @Rico answer didn't help me out of the box, because I don't use postgres with a persistent storage mount, which means there is no /var/lib/postgresql/data folder and no pre-existing database (so both proposed options failed in my case).
To successfully apply postgres settings, I used only args (without the command section).
In that case, k8s passes these args to the default entrypoint defined in the Docker image (docs), and the postgres entrypoint is written so that any options passed to the docker command are passed along to the postgres server daemon (see the "Database Configuration" section at: https://hub.docker.com/_/postgres)
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - image: postgres:9.6.8
      name: postgres
      args: ["-c", "shared_buffers=256MB", "-c", "max_connections=207"]
To check that the settings were applied:
$ kubectl exec -it postgres -- bash
root@postgres:/# su postgres
$ psql -c 'show max_connections;'
max_connections
-----------------
207
(1 row)

Minikube volume write permissions?

The big picture is: I'm trying to install WordPress with plugins in Kubernetes, for development in Minikube.
I want to use the official wp-cli Docker image to install the plugins. I am trying to use a write-enabled persistent volume. In Minikube, I turn on the mount to the Minikube cluster with the command:
minikube mount ./src/plugins:/data/plugins
Now, the PV definition looks like this:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-install-plugins-pv
  labels:
    app: wordpress
    env: dev
spec:
  capacity:
    storage: 5Gi
  storageClassName: ""
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/plugins
The PVC looks like this:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-install-plugins-pvc
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""
  volumeName: wordpress-install-plugins-pv
Both the creation and the binding are successful. The Job definition for plugin installation looks like this:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: install-plugins
  labels:
    env: dev
    app: wordpress
spec:
  template:
    spec:
      securityContext:
        fsGroup: 82 # www-data
      volumes:
        - name: plugins-volume
          persistentVolumeClaim:
            claimName: wordpress-install-plugins-pvc
        - name: config-volume
          configMap:
            name: wordpress-plugins
      containers:
        - name: wpcli
          image: wordpress:cli
          volumeMounts:
            - mountPath: "/configmap"
              name: config-volume
            - mountPath: "/var/www/html/wp-content/plugins"
              name: plugins-volume
          command: ["sh", "-c", "id; \
            touch /var/www/html/wp-content/plugins/test; \
            ls -al /var/www/html/wp-content; \
            wp core download --skip-content --force && \
            wp config create --dbhost=mysql \
            --dbname=$MYSQL_DATABASE \
            --dbuser=$MYSQL_USER \
            --dbpass=$MYSQL_PASSWORD && \
            cat /configmap/wp-plugins.txt | xargs -I % wp plugin install % --activate" ]
          env:
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: password
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: dbname
      restartPolicy: Never
  backoffLimit: 3
Again, the creation looks fine and all the steps look fine. The problem I have is that apparently the permissions on the mounted volume do not allow the current user to write to the folder. Here are the log contents:
uid=82(www-data) gid=82(www-data) groups=82(www-data)
touch: /var/www/html/wp-content/plugins/test: Permission denied
total 9
drwxr-xr-x 3 root root 4096 Mar 1 20:15 .
drwxrwxrwx 3 www-data www-data 4096 Mar 1 20:15 ..
drwxr-xr-x 1 1000 1000 64 Mar 1 17:15 plugins
Downloading WordPress 5.3.2 (en_US)...
md5 hash verified: 380d41ad22c97bd4fc08b19a4eb97403
Success: WordPress downloaded.
Success: Generated 'wp-config.php' file.
Installing WooCommerce (3.9.2)
Downloading installation package from https://downloads.wordpress.org/plugin/woocommerce.3.9.2.zip...
Unpacking the package...
Warning: Could not create directory.
Warning: The 'woocommerce' plugin could not be found.
Error: No plugins installed.
Am I doing something wrong? I tried different minikube mount options, but nothing really helped! Did anyone run into this issue with minikube?
This is a long-term issue that prevents a non-root user from writing to a container when mounting a hostPath PersistentVolume in Minikube.
There are two common workarounds:
Simply use the root user.
Configure a Security Context for a Pod or Container using runAsUser, runAsGroup and fsGroup. You can find detailed info with an example in the link provided.
Please let me know if that helped.
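For option 2, a minimal sketch of what that securityContext could look like (the pod name is illustrative, and 82 is just the www-data UID/GID from your log output):
apiVersion: v1
kind: Pod
metadata:
  name: wpcli-debug
spec:
  securityContext:
    runAsUser: 82
    runAsGroup: 82
    fsGroup: 82
  containers:
    - name: wpcli
      image: wordpress:cli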
I looked deeper into the way the volume mount works in Minikube, and I think I came up with a solution.
TL;DR
minikube mount ./src/plugins:/data/mnt/plugins --uid 82 --gid 82
Explanation
There are two mounting moments:
minikube mounting the directory with minikube mount
the volume being mounted in Kubernetes
minikube mount sets up the directory in the VM with the UID and GID provided as parameters, with the default being the docker user and group.
When the volume is being mounted in the Pod as a directory, it gets mounted with the exact same UID and GID as the host one! You can see this in my question:
drwxr-xr-x 1 1000 1000 64 Mar 1 17:15 plugins
UID=1000 and GID=1000 refer to the docker UID and GID on the Minikube host, which gave me the idea that I should try mounting with the UID and GID of the user in the Pod.
82 is the ID of both the user and the group www-data in the wordpress:cli Docker image, and it works!
One last thing worth mentioning: the volume is mounted as a subdirectory in the Pod (wp-content in my case). It turned out that wp-cli actually needs access to that directory as well to create a temporary folder. What I ended up doing is adding an emptyDir volume, like this:
volumes:
  - name: content
    emptyDir: {}
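and mounting it over wp-content in the container, so wp-cli can create its temporary folder there (a sketch; the plugins PVC still mounts on top at wp-content/plugins as before):
volumeMounts:
  - mountPath: "/var/www/html/wp-content"
    name: content
  - mountPath: "/var/www/html/wp-content/plugins"
    name: plugins-volume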
I hope it helps somebody! For what it's worth, my version of minikube is 1.7.3, running on OS X with the VirtualBox driver.
Unfortunately, for Minikube today, option 2 (configuring a Security Context for a Pod or Container using runAsUser, runAsGroup and fsGroup) doesn't seem to be viable, because the hostPath provisioner, which is used under the hood, doesn't honor the Security Context. There seems to be a newer hostPath provisioner, which preemptively sets new mounts to 777, but the one that came with my 1.25 Minikube is still returning these paths as 755.

Write permissions on volume mount with OpenShift

Using OpenShift 3.11, I've mounted an nfs persistent volume, but the application cannot copy into the new volume, saying:
oc logs my-project-77858bc694-6kbm6
cp: cannot create regular file '/config/dbdata/resdb.lock.db': Permission denied
...
I've tried to change the ownership of the folder by doing a chown in an initContainer, but it tells me the operation is not permitted.
initContainers:
  - name: chowner
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
      - ls -alt /config/dbdata; chown 1001:1001 /config/dbdata;
    volumeMounts:
      - name: my-volume
        mountPath: /config/dbdata/
oc logs my-project-77858bc694-6kbm6 -c chowner
total 12
drwxr-xr-x 3 root root 4096 Nov 7 03:06 ..
drwxr-xr-x 2 99 99 4096 Nov 7 02:26 .
chown: /config/dbdata: Operation not permitted
I expect to be able to write to the mounted volume.
You can give your Pods permission to write into a volume by using fsGroup: GROUP_ID in a Security Context. fsGroup makes your volumes writable by GROUP_ID and makes all processes inside your container part of that group.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: POD_NAME
spec:
  securityContext:
    fsGroup: GROUP_ID
  ...

Mongodb container's data becomes "read-only" after restarting kubernetes, with glusterfs as storage?

My mongo is running as a Docker container on Kubernetes, with glusterfs providing the persistent volume. After I restart Kubernetes (the machine powers off and restarts), none of the mongo pods can come back; their logs:
chown: changing ownership of `/data/db/user_management.ns': Read-only file system
chown: changing ownership of `/data/db/storage.bson': Read-only file system
chown: changing ownership of `/data/db/local.ns': Read-only file system
chown: changing ownership of `/data/db/mongod.lock': Read-only file system
Here /data/db/ is the mounted gluster volume, and I can confirm it is mounted in rw mode:
# kubectl get pod mongoxxx -o yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - image: mongo:3.0.5
      imagePullPolicy: IfNotPresent
      name: mongo
      ports:
        - containerPort: 27017
          protocol: TCP
      volumeMounts:
        - mountPath: /data/db
          name: mongo-storage
  volumes:
    - name: mongo-storage
      persistentVolumeClaim:
        claimName: auth-mongo-data
# kubectl describe pod mongoxxx
...
Volume Mounts:
/data/db from mongo-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wdrfp (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
mongo-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: auth-mongo-data
ReadOnly: false
...
# kubectl get pv xxx
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  name: auth-mongo-data
  resourceVersion: "215201"
  selfLink: /api/v1/persistentvolumes/auth-mongo-data
  uid: fb74a4b9-e0a3-11e6-b0d1-5254003b48ea
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 4Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: auth-mongo-data
    namespace: default
  glusterfs:
    endpoints: glusterfs-cluster
    path: infra-auth-mongo
  persistentVolumeReclaimPolicy: Retain
status:
  phase: Bound
And when I run ls on the Kubernetes node:
# ls -ls /var/lib/kubelet/pods/fc6c9ef3-e0a3-11e6-b0d1-5254003b48ea/volumes/kubernetes.io~glusterfs/auth-mongo-data/
total 163849
4 drwxr-xr-x. 2 mongo input 4096 1月 22 21:18 journal
65536 -rw-------. 1 mongo input 67108864 1月 22 21:16 local.0
16384 -rw-------. 1 mongo root 16777216 1月 23 17:15 local.ns
1 -rwxr-xr-x. 1 mongo root 2 1月 23 17:15 mongod.lock
1 -rw-r--r--. 1 mongo root 69 1月 23 17:15 storage.bson
4 drwxr-xr-x. 2 mongo input 4096 1月 22 21:18 _tmp
65536 -rw-------. 1 mongo input 67108864 1月 22 21:18 user_management.0
16384 -rw-------. 1 mongo root 16777216 1月 23 17:15 user_management.ns
I cannot chown though the volume is mounted as rw.
My host is CentOS 7.3:
Linux c4v160 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux.
I guess that it is because the glusterfs volume I have provided is unclean. The glusterfs volume infra-auth-mongo may contain dirty directories. One solution is to remove this volume and create another.
Another solution is to modify the mongodb Dockerfile to force it to change the ownership of /data/db before starting the mongod process. Like this: https://github.com/harryge00/mongo/commit/143bfc317e431692010f09b5c0d1f28395d2055b
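As a rough sketch of that approach, the image's entrypoint can be wrapped with something like the script below (the script name and the path of the stock entrypoint are assumptions and depend on the mongo image tag):
#!/bin/bash
# fix-ownership-entrypoint.sh (hypothetical wrapper baked into a custom image)
# Take ownership of the data directory, then hand off to the image's normal entrypoint.
chown -R mongodb:mongodb /data/db
exec /entrypoint.sh "$@"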

"root" execution of the PostgreSQL server is not permitted

When I try to start postgresql I get an error:
postgres
postgres does not know where to find the server configuration file.
You must specify the --config-file or -D invocation option or set the
PGDATA environment variable.
So then I try to set my config file:
postgres -D /usr/local/var/postgres
And I get the following error:
postgres cannot access the server configuration file "/usr/local/var/postgres/postgresql.conf": Permission denied
Hmm okay. Next, I try to perform that same action as an admin:
sudo postgres -D /usr/local/var/postgres
And I receive the following error:
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for more
information on how to properly start the server.
I googled around for that error message but cannot find a solution.
Can anyone provide some insight into this?
For those trying to run a custom command using the official Docker image, use the following command. docker-entrypoint.sh handles switching the user and other permissions.
docker-entrypoint.sh -c 'shared_buffers=256MB' -c 'max_connections=200'
Your command does not do what you think it does. To run something as system user postgres:
sudo -u postgres command
To run the command (also named postgres!):
sudo -u postgres postgres -D /usr/local/var/postgres
Your command does the opposite:
sudo postgres -D /usr/local/var/postgres
It runs the program postgres as the superuser root (sudo without the -u switch), and Postgres does not allow itself to be run with superuser privileges for security reasons. Hence the error message.
If you are going to run a couple of commands as system user postgres, change the user with:
sudo -u postgres -i
... and exit when you are done.
If you see the following error message while operating as system user postgres, then something is wrong with permissions on the file or one of the containing directories:
postgres cannot access the server configuration file "/usr/local/var/postgres/postgresql.conf": Permission denied
/usr/local/var/postgres/postgresql.conf
Consider the instructions in the Postgres manual.
Also consider the wrapper pg_ctl - or pg_ctlcluster in Debian-based distributions.
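For example, as the postgres system user, the wrapper can start the server with something like (the log file path is just an example):
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start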
And know the difference between su and sudo. Related:
PostgreSQL error: Fatal: role "username" does not exist
Muthukumar's answer is the best! After searching all day for a simple way to change my Alpine Postgres deployment in Kubernetes, I found this simple answer.
Here is my complete description. Enjoy it!
First, create/define a ConfigMap with the correct values. Save it in the file "custom-postgresql.conf":
# DB Version: 12
# OS Type: linux
# DB Type: oltp
# Total Memory (RAM): 16 GB
# CPUs num: 4
# Connections num: 9999
# Data Storage: ssd
# https://pgtune.leopard.in.ua/#/
# 2020-10-29
listen_addresses = '*'
max_connections = 9999
shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 209kB
min_wal_size = 2GB
max_wal_size = 8GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
Create the ConfigMap:
kubectl create configmap custom-postgresql-conf --from-file=custom-postgresql.conf
Please take care that the values in the custom settings are defined according to the Pod resources, mainly the memory and CPU assignments.
Here is the manifest (postgres.yml):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: default
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 128Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: postgres
    tier: core
  ports:
    - name: port-5432-tcp
      port: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      tier: core
  template:
    metadata:
      labels:
        app: postgres
        tier: core
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
        - name: postgresql-conf
          configMap:
            name: custom-postgresql-conf
            items:
              - key: custom-postgresql.conf
                path: postgresql.conf
      containers:
        - name: postgres
          image: postgres:12-alpine
          resources:
            requests:
              memory: 128Mi
              cpu: 600m
            limits:
              memory: 16Gi
              cpu: 1500m
          readinessProbe:
            exec:
              command:
                - "psql"
                - "-w"
                - "-U"
                - "postgres"
                - "-d"
                - "postgres"
                - "-c"
                - "SELECT 1"
            initialDelaySeconds: 15
            timeoutSeconds: 2
          livenessProbe:
            exec:
              command:
                - "psql"
                - "-w"
                - "postgres"
                - "-U"
                - "postgres"
                - "-d"
                - "postgres"
                - "-c"
                - "SELECT 1"
            initialDelaySeconds: 45
            timeoutSeconds: 2
          imagePullPolicy: IfNotPresent
          # this was the problem !!!
          # I found the solution here: https://stackoverflow.com/questions/28311825/root-execution-of-the-postgresql-server-is-not-permitted
          command: [ "docker-entrypoint.sh", "-c", "config_file=/etc/postgresql/postgresql.conf" ]
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgresql
            - name: postgresql-conf
              mountPath: /etc/postgresql/postgresql.conf
              subPath: postgresql.conf
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: etldatasore-username
                  key: ETLDATASTORE__USERNAME
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: etldatasore-database
                  key: ETLDATASTORE__DATABASE
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: etldatasore-password
                  key: ETLDATASTORE__PASSWORD
You can apply it with:
kubectl apply -f postgres.yml
Go to your pod and check for applied settings:
kubectl get pods
kubectl exec -it postgres-548f997646-6vzv2 bash
bash-5.0# su - postgres
postgres-548f997646-6vzv2:~$ psql
postgres=# show config_file;
config_file
---------------------------------
/etc/postgresql/postgresql.conf
(1 row)
postgres=#
# if you want to check all custom settings, do
postgres=# SHOW ALL;
Thank you Muthukumar!
Please try it yourself, validate, and share!