I have a custom Kubernetes cluster with local-disk PersistentVolumes, and I am trying to deploy spring-cloud-dataflow using this guide.
However, none of the pods are able to write to the mounted persistent volumes. Here are the errors:
mariadb 12:55:19.88 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 12:55:19.89 INFO ==> Initializing mariadb database
mkdir: cannot create directory '/bitnami/mariadb/data': Permission denied

zookeeper 12:55:47.87 INFO ==> ** Starting ZooKeeper setup **
mkdir: cannot create directory '/bitnami/zookeeper/data': Permission denied
Stream closed EOF for default/spring-cdf-release-zookeeper-0 (zookeeper)
I have tried adding initContainers, but it did not help.
rabbitmq:
  enabled: false
mariadb:
  initContainers:
    - name: take-data-dir-ownership
      image: docker.io/bitnami/minideb:stretch
      command:
        - chown
        - -R
        - 777:777
        - /bitnami/mariadb
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: data-spring-cdf-release-mariadb-0
          mountPath: /bitnami/mariadb
kafka:
  enabled: true
  initContainers:
    - name: take-data-dir-ownership
      image: docker.io/bitnami/minideb:stretch
      command:
        - chown
        - -R
        - 777:777
        - /bitnami/kafka
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: data-spring-cdf-release-kafka-0
          mountPath: /bitnami/kafka
zookeeper:
  enabled: true
  initContainers:
    - name: take-data-dir-ownership
      image: docker.io/bitnami/minideb:stretch
      command:
        - chown
        - -R
        - 777:777
        - /bitnami/zookeeper
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: data-spring-cdf-release-zookeeper-0
          mountPath: /bitnami/zookeeper
Any suggestions on how I can make this volume writable by the pod?
Related
I need to change the values of AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER and AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY via CI/CD variables. These values are present in the airflow_template.yaml file. I tried substituting the CI/CD variables, but it is not working. If there is a better way to parameterize this, please let me know.
My project folder structure looks like below:
dataops
-- docker
   -- base
      -- airflow.cfg
      -- airflow_template.yaml
      -- Dockerfile
   -- dag-image
      -- Dockerfile
-- helm
   -- Chart.yaml
   -- values.yaml
   -- templates
      -- deployment.yaml
      -- svc.yaml
airflow_template.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels: {}
spec:
  containers:
    - args: []
      command: []
      env:
        - name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY
          value: $DEV_AIRFLOW_CONTAINER_REPO
        - name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
          value: $DEV_AIRFLOW_LOG_FOLDER
      envFrom: []
      imagePullPolicy: Always
      name: base
      ports: []
      volumeMounts:
        - mountPath: /usr/local/airflow/logs
          name: airflow-logs
  hostNetwork: false
  imagePullSecrets: []
  initContainers: []
  nodeSelector: {}
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  serviceAccountName: default
  volumes:
    - emptyDir: {}
      name: airflow-logs
gitlab-ci.yml:
stages:
  - build_and_upload
  - deploy_to_dev
  - tag_prod
  - deploy_to_prod
build_and_upload:
  stage: build_and_upload
  image: docker:latest
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  services:
    - docker:19.03.14-dind
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - mkdir -p edfi/operation
    - cp -r airflow_dags/ dataops/docker/dag-image/airflow_dags/
    - cd dataops/docker/dag-image/
    - docker build -t "$DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA" --build-arg COMMIT_HASH=$CI_COMMIT_SHORT_SHA .
    - docker tag $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA $DEV_DAGS_IMAGE:latest
    - docker push $DEV_DAGS_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $DEV_DAGS_IMAGE:latest
  only:
    refs:
      - develop
    # variables:
    #   - $CI_COMMIT_MESSAGE =~ /penguin/
deploy_to_dev:
  stage: deploy_to_dev
  image: $CI_REGISTRY_IMAGE:kube-image
  script:
    - echo $DEV_CREDENTIALS > service_account.json && cat service_account.json | docker login -u _json_key --password-stdin https://gcr.io
    - echo "as- $DEV_AIRFLOW_LOG_FOLDER"
    - export DEV_AIRFLOW_CONTAINER_REPO="${DEV_AIRFLOW_CONTAINER_REPO}"
    - export DEV_AIRFLOW_LOG_FOLDER="${DEV_AIRFLOW_LOG_FOLDER}"
    - gcloud auth activate-service-account $DEV_SERVICE_ACCOUNT --key-file=./service_account.json --project=$DEV_PROJECT_NAME
    - gcloud container clusters get-credentials $DEV_GKE_CLUSTER --region $REGION
    - echo $DEV_DB_CONN > dataops/helm/airflow-loadbalancer/files/secrets/airflow/AIRFLOW__CORE__SQL_ALCHEMY_CONN
    - cd dataops/helm/
    - helm upgrade airflow-dev airflow-loadbalancer/ --install --atomic --set dags_image.tag=$CI_COMMIT_SHORT_SHA
  only:
    refs:
      - develop
You could make it a jinja2 template and use a small Python program to interpolate the values into the template.
Then you also have all the flexibility to use environment variables or something else.
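For example, a minimal sketch of such a Python program, assuming jinja2 is installed in the CI image, that the two $DEV_* placeholders in airflow_template.yaml are rewritten as Jinja2 variables (e.g. {{ DEV_AIRFLOW_CONTAINER_REPO }}) in a copy named airflow_template.yaml.j2, and that the script name render_template.py is just an illustration:

#!/usr/bin/env python3
# render_template.py (hypothetical helper, run from dataops/docker/base/)
# Renders airflow_template.yaml.j2 into airflow_template.yaml using values
# taken from the CI/CD environment variables.
import os
from jinja2 import Environment, FileSystemLoader, StrictUndefined

env = Environment(loader=FileSystemLoader("."), undefined=StrictUndefined)
template = env.get_template("airflow_template.yaml.j2")

rendered = template.render(
    DEV_AIRFLOW_CONTAINER_REPO=os.environ["DEV_AIRFLOW_CONTAINER_REPO"],
    DEV_AIRFLOW_LOG_FOLDER=os.environ["DEV_AIRFLOW_LOG_FOLDER"],
)

with open("airflow_template.yaml", "w") as fh:
    fh.write(rendered)

You would then call something like python3 render_template.py in the CI script before building the image, so the rendered file is what gets copied in.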
I have made an NFS file share and am using it in Kubernetes pods, but when I start the pods, I get this error:
2020-05-31 03:00:06+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.30-1debian10 started.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
From searching the internet, I understand that NFS by default maps remote root logins to the nfsnobody account, and that this error can happen if the privileges are not correct, but I followed those steps and still have not solved it. These are the things I have tried:
1. Added the insecure no_root_squash option in /etc/exports:
/mnt/data/apollodb/apollopv *(rw,sync,no_subtree_check,no_root_squash)
2. Removed the PVC and PV and used nfs directly in the pod, like this:
volumes:
  - name: apollo-mysql-persistent-storage
    nfs:
      server: 192.168.64.237
      path: /mnt/data/apollodb/apollopv
containers:
  - name: mysql
    image: 'mysql:5.7'
    ports:
      - name: mysql
        containerPort: 3306
        protocol: TCP
    env:
      - name: MYSQL_ROOT_PASSWORD
        value: gfwge4LucnXwfefewegLwAd29QqJn4
    resources: {}
    volumeMounts:
      - name: apollo-mysql-persistent-storage
        mountPath: /var/lib/mysql
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
schedulerName: default-scheduler
This tells me the problem is not in the pod definition but in the NFS config itself.
3. Gave every privilege using this command:
chmod 777 /mnt/data/apollodb/apollopv
4. Changed ownership to nfsnobody like this:
sudo chown nfsnobody:nfsnobody -R apollodb/
sudo chown 999:999 -R apollodb
But the problem is still not solved, so what should I try to make it work?
You wouldn't set this via chown; you would use the fsGroup security context setting instead.
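For example, a minimal sketch of that suggestion applied to the pod spec above (the gid of 999 is an assumption based on the mysql user in the official mysql image, so verify the id the container actually runs as; note also that whether fsGroup is applied depends on the volume plugin, and for NFS the server-side squash/ownership settings still matter):

spec:
  securityContext:
    # ask kubelet to make the mounted volume group-owned and writable by gid 999
    fsGroup: 999
  containers:
    - name: mysql
      image: 'mysql:5.7'
      volumeMounts:
        - name: apollo-mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: apollo-mysql-persistent-storage
      nfs:
        server: 192.168.64.237
        path: /mnt/data/apollodb/apollopv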
Issue:
Deploying Artifactory as a Deployment in Kubernetes. The volume mounts are being mounted as root:artifact with permissions of drwxr-sr-x:
/var/opt/jfrog/artifactory
drwxr-sr-x 2 root artifact 4096 Jan 24 17:52 etc
/var/opt/jfrog/artifactory/etc
-rw-r--r-- 1 root artifact 1048 Jan 24 17:48 artifactory.config.import.yml
-rw-r--r-- 1 root artifact 12703 Jan 24 17:48 artifactory.system.properties
Expected:
The VolumeMount should be mounted as artifact:artifact with read and write permissions
The Kubernetes manifest file is incomplete due to restrictions:
spec:
  securityContext:
    runAsUser: 1030
    runAsGroup: 1030
    fsGroup: 1030
  volumeMounts:
    - name: artifactory-volume
      mountPath: "/var/opt/jfrog/artifactory"
    - name: bootstrap
      mountPath: "/var/opt/jfrog/artifactory/etc/artifactory.config.import.yml"
      subPath: bootstrap
    - name: artifactory-system-properties
      mountPath: "/var/opt/jfrog/artifactory/etc/artifactory.system.properties"
      subPath: artifactory.system.properties
  resources:
    limits:
      cpu: "3"
      memory: 6Gi
    requests:
      cpu: "2"
      memory: 4Gi
  volumes:
    - name: bootstrap
      secret:
        secretName: artifactory6170-artifactory
    - name: artifactory-system-properties
      configMap:
        name: artifactory6170-artifactory-system-properties
    - name: artifactory-volume
      persistentVolumeClaim:
        claimName: artifactory6170-artifactory
Kubernetes version:
Server Version: version.Info{
Major: "1",
Minor: "14",
GitVersion: "v1.14.1",
GitCommit: "b7394102d6ef778017f2ca4046abbaa23b88c290",
GitTreeState: "clean",
BuildDate: "2019-04-08T17:02:58Z",
GoVersion: "go1.12.1",
Compiler: "gc",
Platform: "linux/amd64"
}
I believe the security context covers what is required:
runAsUser: 1030
  runs the process as user 1030.
runAsGroup: 1030
  any files created will also be owned by user 1030 and group 1030 when runAsGroup is specified.
fsGroup: 1030
  the owner of any attached volume will be group ID 1030.
Not sure why the container comes up with the wrong user ownership; any help would be really appreciated.
Error:
kubectl logs artifactory6170-artifactory-756cffb9-68zjj
2020-01-26 12:28:13 [719 entrypoint-artifactory.sh] Preparing to run Artifactory in Docker
2020-01-26 12:28:13 [720 entrypoint-artifactory.sh] Running as uid=1030(artifactory) gid=1030(artifactory) groups=1030(artifactory)
2020-01-26 12:28:13 [57 entrypoint-artifactory.sh] Dockerfile for this image can found inside the container.
2020-01-26 12:28:13 [58 entrypoint-artifactory.sh] To view the Dockerfile: 'cat /docker/artifactory-pro/Dockerfile.artifactory'.
2020-01-26 12:28:13 [63 entrypoint-artifactory.sh] Checking open files and processes limits
2020-01-26 12:28:13 [66 entrypoint-artifactory.sh] Current max open files is 1048576
2020-01-26 12:28:13 [78 entrypoint-artifactory.sh] Current max open processes is unlimited
2020-01-26 12:31:13 [211 entrypoint-artifactory.sh] Testing directory /var/opt/jfrog/artifactory has read/write permissions for user 'artifactory' (id 1030)
/entrypoint-artifactory.sh: line 180: /var/opt/jfrog/artifactory/etc/test-permissions: Permission denied
2020-01-26 12:31:13 [229 entrypoint-artifactory.sh] ###########################################################
2020-01-26 12:31:13 [230 entrypoint-artifactory.sh] /var/opt/jfrog/artifactory DOES NOT have proper permissions for user 'artifactory' (id 1030)
2020-01-26 12:31:13 [231 entrypoint-artifactory.sh] Directory: /var/opt/jfrog/artifactory, permissions: 2775, owner: artifactory, group: artifactory
2020-01-26 12:31:13 [232 entrypoint-artifactory.sh] Mounted directory must have read/write permissions for user 'artifactory' (id 1030)
2020-01-26 12:31:13 [233 entrypoint-artifactory.sh] ###########################################################
2020-01-26 12:31:13 [47 entrypoint-artifactory.sh] ERROR: Directory /var/opt/jfrog/artifactory has bad permissions for user 'artifactory' (id 1030)
All I had to do was add an initContainer, mount the ConfigMaps to /tmp, and have the files moved to the necessary path /var/opt/jfrog/artifactory/etc/, instead of mounting the ConfigMap inside the volume mount at /var/opt/jfrog/artifactory.
Reason: ConfigMap mounts are read-only, hence /etc was and would always have been read-only.
initContainers:
  - name: "grant-permissions"
    image: "busybox:1.26.2"
    securityContext:
      runAsUser: 0
    imagePullPolicy: "IfNotPresent"
    command:
      - 'sh'
      - '-c'
      - 'mkdir /var/opt/jfrog/artifactory/etc ; cp -vf /tmp/artifactory* /var/opt/jfrog/artifactory/etc ; chown -R 1030:1030 /var/opt/jfrog/ ; rm -rfv /var/opt/jfrog/artifactory/lost+found'
    volumeMounts:
      - mountPath: "/var/opt/jfrog/artifactory"
        name: artifactory-volume
      - name: bootstrap
        mountPath: "/tmp/artifactory.config.import.yml"
        subPath: bootstrap
        readOnly: false
      - name: artifactory-system-properties
        mountPath: "/tmp/artifactory.system.properties"
        subPath: artifactory.system.properties
        readOnly: false
Then mount the volume to the main container that runs Artifactory:
containers:
  - name: artifactory
    image: "registry.eu02.dsg.arm.com/sqa/artifactory-pro:6.17.0"
    volumeMounts:
      - name: artifactory-volume
        mountPath: "/var/opt/jfrog/artifactory"
As explained here, here, here and here, you cannot change the permissions of a mounted directory.
As a workaround you can use an initContainer, which runs before the actual container, to change the permissions of the directory:
initContainers:
  - name: volume-mount
    image: busybox
    command: ["sh", "-c", "chown -R 1030:1030 <your_directory>"]
    volumeMounts:
      - name: <your volume>
        mountPath: <your mountPath>
I am trying to get some persistent storage for a Docker instance of PostgreSQL running on Kubernetes. However, the pod fails with:
FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
This is the NFS configuration:
% exportfs -v
/srv/nfs/postgresql/postgres-registry
kubehost*.example.com(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash)
$ ls -ldn /srv/nfs/postgresql/postgres-registry
drwxrwxrwx. 3 999 999 4096 Jul 24 15:02 /srv/nfs/postgresql/postgres-registry
$ ls -ln /srv/nfs/postgresql/postgres-registry
total 4
drwx------. 2 999 999 4096 Jul 25 08:36 pgdata
The full log from the pod:
2019-07-25T07:32:50.617532000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T07:32:50.618113000Z This user must also own the server process.
2019-07-25T07:32:50.619048000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T07:32:50.619496000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T07:32:50.619943000Z The default text search configuration will be set to "english".
2019-07-25T07:32:50.620826000Z Data page checksums are disabled.
2019-07-25T07:32:50.621697000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T07:32:50.647445000Z creating subdirectories ... ok
2019-07-25T07:32:50.765065000Z selecting default max_connections ... 20
2019-07-25T07:32:51.035710000Z selecting default shared_buffers ... 400kB
2019-07-25T07:32:51.062039000Z selecting default timezone ... Etc/UTC
2019-07-25T07:32:51.062828000Z selecting dynamic shared memory implementation ... posix
2019-07-25T07:32:51.218995000Z creating configuration files ... ok
2019-07-25T07:32:51.252788000Z 2019-07-25 07:32:51.251 UTC [79] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T07:32:51.253339000Z 2019-07-25 07:32:51.251 UTC [79] HINT: The server must be started by the user that owns the data directory.
2019-07-25T07:32:51.262238000Z child process exited with exit code 1
2019-07-25T07:32:51.263194000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T07:32:51.380205000Z running bootstrap script ...
The deployment has the following in:
securityContext:
  runAsUser: 999
  supplementalGroups: [999,1000]
  fsGroup: 999
What am I doing wrong?
Edit: Added storage.yaml file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.3.7
    path: /srv/nfs/postgresql/postgres-registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Edit: And the full deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              subPath: "pgdata"
              name: postgredb-registry-persistent-storage
      volumes:
        - name: postgredb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
Even more debugging, adding:
command: ["/bin/bash", "-c"]
args: ["id -u; ls -ldn /var/lib/postgresql/data"]
Which returned:
999
drwx------. 2 99 99 4096 Jul 25 09:11 /var/lib/postgresql/data
Clearly, the UID/GID are wrong. Why?
Even with the workaround suggested by Jakub Bujny, I get this:
2019-07-25T09:32:08.734807000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T09:32:08.735335000Z This user must also own the server process.
2019-07-25T09:32:08.736976000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T09:32:08.737416000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T09:32:08.737882000Z The default text search configuration will be set to "english".
2019-07-25T09:32:08.738754000Z Data page checksums are disabled.
2019-07-25T09:32:08.739648000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T09:32:08.766606000Z creating subdirectories ... ok
2019-07-25T09:32:08.852381000Z selecting default max_connections ... 20
2019-07-25T09:32:09.119031000Z selecting default shared_buffers ... 400kB
2019-07-25T09:32:09.145069000Z selecting default timezone ... Etc/UTC
2019-07-25T09:32:09.145730000Z selecting dynamic shared memory implementation ... posix
2019-07-25T09:32:09.168161000Z creating configuration files ... ok
2019-07-25T09:32:09.200134000Z 2019-07-25 09:32:09.199 UTC [70] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T09:32:09.200715000Z 2019-07-25 09:32:09.199 UTC [70] HINT: The server must be started by the user that owns the data directory.
2019-07-25T09:32:09.208849000Z child process exited with exit code 1
2019-07-25T09:32:09.209316000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T09:32:09.274741000Z running bootstrap script ... 999
2019-07-25T09:32:09.278124000Z drwx------. 2 99 99 4096 Jul 25 09:32 /var/lib/postgresql/data
Using your setup and ensuring the NFS mount is owned by 999:999, it worked just fine.
You're also missing an 's' in your volume name: postgredb-registry-persistent-storage.
And with your subPath: "pgdata", do you need to change $PGDATA? I didn't include the subPath for this.
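(For reference, an alternative to subPath that the official postgres image documents is to keep mounting the whole volume and point PGDATA at a subdirectory; a minimal sketch, not part of the test below:)

env:
  - name: PGDATA
    # initdb creates and owns this subdirectory itself, which also avoids
    # tripping over lost+found at the root of the mount
    value: /var/lib/postgresql/data/pgdata
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgredb-registry-persistent-storage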
$ sudo mount 172.29.0.218:/test/nfs ./nfs
$ sudo su -c "ls -al ./nfs" postgres
total 8
drwx------ 2 postgres postgres 4096 Jul 25 14:44 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
$ kubectl apply -f nfspv.yaml
persistentvolume/postgres-registry-pv-volume created
persistentvolumeclaim/postgres-registry-pv-claim created
$ kubectl apply -f postgres.yaml
deployment.extensions/postgres-registry created
$ sudo su -c "ls -al ./nfs" postgres
total 124
drwx------ 19 postgres postgres 4096 Jul 25 14:46 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
drwx------ 3 postgres postgres 4096 Jul 25 14:46 base
drwx------ 2 postgres postgres 4096 Jul 25 14:46 global
drwx------ 2 postgres postgres 4096 Jul 25 14:46 pg_commit_ts
. . .
I noticed using nfs: directly in the persistent volume took significantly longer to initialize the database, whereas using hostPath: to the mounted nfs volume behaved normally.
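(A minimal sketch of that hostPath variant, assuming the NFS export is already mounted on every node at some path such as /mnt/nfs/postgres; the path is illustrative, not from my test:)

kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/nfs/postgres   # node-local mount point of the NFS export (assumed)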
So after a few minutes:
$ kubectl logs postgres-registry-675869694-9fp52 | tail -n 3
2019-07-25 21:50:57.181 UTC [30] LOG: database system is ready to accept connections
done
server started
$ kubectl exec -it postgres-registry-675869694-9fp52 psql
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.
postgres=#
Checking the uid/gid
$ kubectl exec -it postgres-registry-675869694-9fp52 bash
postgres@postgres-registry-675869694-9fp52:/$ whoami && id -u && id -g
postgres
999
999
nfspv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.29.0.218
    path: /test/nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
postgres.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb-registry-persistent-storage
      volumes:
        - name: postgresdb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
I cannot explain why those two IDs are different, but as a workaround I would try to override postgres's entrypoint with:
command: ["/bin/bash", "-c"]
args: ["chown -R 999:999 /var/lib/postgresql/data && ./docker-entrypoint.sh postgres"]
This type of error is quite common when you link an NTFS directory into your Docker container. NTFS directories don't support ext3 file and directory access control. The only way to make it work is to link a directory from an ext3 drive into your container.
I got a bit desperate when I played around with Apache/PHP containers and linking the www folder. After I linked files residing on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on youtube, may it helps to understand this problem: https://www.youtube.com/watch?v=eS9O05TTFjM
When I try to start postgresql I get an error:
postgres
postgres does not know where to find the server configuration file.
You must specify the --config-file or -D invocation option or set the
PGDATA environment variable.
So then I try to set my config file:
postgres -D /usr/local/var/postgres
And I get the following error:
postgres cannot access the server configuration file "/usr/local/var/postgres/postgresql.conf": Permission denied
Hmm okay. Next, I try to perform that same action as an admin:
sudo postgres -D /usr/local/var/postgres
And I receive the following error:
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for more
information on how to properly start the server.
I googled around for that error message but cannot find a solution.
Can anyone provide some insight into this?
For those trying to run a custom command using the official Docker image, use the following command. docker-entrypoint.sh handles switching the user and other permissions.
docker-entrypoint.sh -c 'shared_buffers=256MB' -c 'max_connections=200'
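(For context, in the official postgres image docker-entrypoint.sh is already the image's entrypoint, so the same flags can simply be appended to the container command; a small example, with the container name, tag, and password chosen only for illustration:)

docker run -d --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword postgres:13 -c shared_buffers=256MB -c max_connections=200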
Your command does not do what you think it does. To run something as system user postgres:
sudo -u postgres command
To run the command (also named postgres!):
sudo -u postgres postgres -D /usr/local/var/postgres
Your command does the opposite:
sudo postgres -D /usr/local/var/postgres
It runs the program postgres as the superuser root (sudo without -u switch), and Postgres does not allow to be run with superuser privileges for security reasons. Hence the error message.
If you are going to run a couple of commands as system user postgres, change the user with:
sudo -u postgres -i
... and exit when you are done.
If you see this error message while operating as system user postgres:
postgres cannot access the server configuration file "/usr/local/var/postgres/postgresql.conf": Permission denied
then something is wrong with permissions on the file or one of the containing directories:
/usr/local/var/postgres/postgresql.conf
Consider the instructions in the Postgres manual.
Also consider the wrapper pg_ctl - or pg_ctlcluster in Debian-based distributions.
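(For instance, run as the user that owns the data directory, something along these lines; the log-file path is just an example:)

sudo -u postgres pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start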
And know the difference between su and sudo. Related:
PostgreSQL error: Fatal: role "username" does not exist
Muthukumar's answer is the best! After searching all day for a simpler way to change my Alpine Postgres deployment in Kubernetes, I found this simple answer.
Here is my complete description. Enjoy it!
First I need to create/define a ConfigMap with the correct values. Save this in the file "custom-postgresql.conf":
# DB Version: 12
# OS Type: linux
# DB Type: oltp
# Total Memory (RAM): 16 GB
# CPUs num: 4
# Connections num: 9999
# Data Storage: ssd
# https://pgtune.leopard.in.ua/#/
# 2020-10-29
listen_addresses = '*'
max_connections = 9999
shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 209kB
min_wal_size = 2GB
max_wal_size = 8GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
Create the ConfigMap:
kubectl create configmap custom-postgresql-conf --from-file=custom-postgresql.conf
Please take care that the values in the custom settings are defined according to the Pod resources, mainly the memory and CPU assignments.
Here is the manifest (postgres.yml):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: default
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 128Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: postgres
    tier: core
  ports:
    - name: port-5432-tcp
      port: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      tier: core
  template:
    metadata:
      labels:
        app: postgres
        tier: core
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
        - name: postgresql-conf
          configMap:
            name: postgresql-conf
            items:
              - key: custom-postgresql.conf
                path: postgresql.conf
      containers:
        - name: postgres
          image: postgres:12-alpine
          resources:
            requests:
              memory: 128Mi
              cpu: 600m
            limits:
              memory: 16Gi
              cpu: 1500m
          readinessProbe:
            exec:
              command:
                - "psql"
                - "-w"
                - "-U"
                - "postgres"
                - "-d"
                - "postgres"
                - "-c"
                - "SELECT 1"
            initialDelaySeconds: 15
            timeoutSeconds: 2
          livenessProbe:
            exec:
              command:
                - "psql"
                - "-w"
                - "postgres"
                - "-U"
                - "postgres"
                - "-d"
                - "postgres"
                - "-c"
                - "SELECT 1"
            initialDelaySeconds: 45
            timeoutSeconds: 2
          imagePullPolicy: IfNotPresent
          # this was the problem !!!
          # I found the solution here: https://stackoverflow.com/questions/28311825/root-execution-of-the-postgresql-server-is-not-permitted
          command: [ "docker-entrypoint.sh", "-c", "config_file=/etc/postgresql/postgresql.conf" ]
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgresql
            - name: postgresql-conf
              mountPath: /etc/postgresql/postgresql.conf
              subPath: postgresql.conf
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: etldatasore-username
                  key: ETLDATASTORE__USERNAME
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: etldatasore-database
                  key: ETLDATASTORE__DATABASE
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: etldatasore-password
                  key: ETLDATASTORE__PASSWORD
You can apply it with:
kubectl apply -f postgres.yml
Go to your pod and check for applied settings:
kubectl get pods
kubectl exec -it postgres-548f997646-6vzv2 bash
bash-5.0# su - postgres
postgres-548f997646-6vzv2:~$ psql
postgres=# show config_file;
config_file
---------------------------------
/etc/postgresql/postgresql.conf
(1 row)
postgres=#
# if you want to check all custom settings, do
postgres=# SHOW ALL;
Thank you Muthukumar!
Please try it yourself, validate, and share!