openshift postgres persistent volume permissions

The postgres image I am currently deploying with OpenShift is generally working great. However, I need to persistently store the database data (of course), so I created a persistent volume claim and mounted it to the postgres data directory like so:
- mountPath: /var/lib/pgsql/data/userdata
  name: db-storage-volume
and
- name: db-storage-volume
  persistentVolumeClaim:
    claimName: db-storage
The problem I am facing now is that the initdb script wants to change the permissions of that data folder, but it can't, and the directory is assigned to a very odd user/group, as the output of ls -la /var/lib/pgsql/data shows (including the output of the failing command):
total 12
drwxrwxr-x. 3 postgres root 21 Aug 30 13:06 .
drwxrwx---. 3 postgres root 17 Apr 5 09:55 ..
drwxrwxrwx. 2 nobody nobody 12288 Jun 26 11:11 userdata
chmod: changing permissions of '/var/lib/pgsql/data/userdata': Permission denied
How can I handle this? The permissions are enough to read and write, but initdb (and the base image's initialization functions) really wants to change the permissions of that folder.

Just as I had posted my question I had an idea, and it turns out it worked:
Change the mount to the parent folder /var/lib/pgsql/data/
Modify my entry script to run mkdir /var/lib/pgsql/data/userdata on first start, i.e. when the folder does not exist yet (see the sketch below)
Now it is:
total 16
drwxrwxrwx. 3 nobody nobody 12288 Aug 30 13:19 .
drwxrwx---. 3 postgres root 17 Apr 5 09:55 ..
drwxr-xr-x. 2 1001320000 nobody 4096 Aug 30 13:19 userdata
This works. Notice that the mounted folder itself is still owned by nobody:nobody with mode 777, but the userdata folder created inside it is owned by the correct user.
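For reference, here is a minimal sketch of the entry-script change. The guard and the wrapper layout are assumptions rather than the image's actual startup script; only the paths come from the setup above.
#!/bin/sh
# Sketch: the PVC is now mounted at /var/lib/pgsql/data, so create the userdata
# subdirectory on first start; it is then owned by the container user instead of
# by the volume's root directory owner.
if [ ! -d /var/lib/pgsql/data/userdata ]; then
    mkdir /var/lib/pgsql/data/userdata
fi
# ... continue with the image's normal startup (e.g. run-postgresql) ...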

Related

postgresql archive permission denied

We have installed postgres v12 on ubuntu 20.04 (with apt install -y postgresql postgresql-contrib) and wish to enable archiving to /data/db/postgres/archive by setting the following in postgresql.conf:
max_wal_senders=2
wal_keep_segments=256
wal_sender_timeout=60s
archive_mode=on
archive_command=cp %p /data/db/postgres/archive/%f
However the postgres service fails to write there:
2022-11-15 15:02:26.212 CET [392860] FATAL: archive command failed with exit code 126
2022-11-15 15:02:26.212 CET [392860] DETAIL: The failed archive command was: archive_command=cp pg_wal/000000010000000000000002 /data/db/postgres/archive/000000010000000000000002
2022-11-15 15:02:26.213 CET [392605] LOG: archiver process (PID 392860) exited with exit code 1
sh: 1: pg_wal/000000010000000000000002: Permission denied
This directory /data/db/postgres/archive/ is owned by the postgres user and when we su postgres we are able to create and delete files without a problem.
Why can the postgresql service (running as postgres) not write to a directory it owns?
Here are the permissions on all the parents of the archive directory:
drwxr-xr-x 2 postgres root 6 Nov 15 14:59 /data/db/postgres/archive
drwxr-xr-x 3 root root 21 Nov 15 14:29 /data/db/postgres
drwxr-xr-x 3 root root 22 Nov 15 14:29 /data/db
drwxr-xr-x 5 root root 44 Nov 15 14:29 /data
2022-11-15 15:02:26.212 CET [392860] DETAIL: The failed archive command was: archive_command=cp pg_wal/000000010000000000000002 /data/db/postgres/archive/000000010000000000000002
So, your archive_command is apparently set to the peculiar string archive_command=cp %p /data/db/postgres/archive/%f.
After the %variables are substituted, the result is passed to the shell. The shell does what it was told, which is to set the (unused) environment variable archive_command to cp, and then try to execute the file pg_wal/000000010000000000000002, which fails because that file doesn't have the execute bit set.
I don't know how you managed to get such a deformed archive_command, but it didn't come from anything you showed us.
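For comparison, the intended setting in postgresql.conf is just the command itself, quoted as a single value (same archive path as in the question); the parameter name is not part of the command:
# postgresql.conf
archive_mode = on
archive_command = 'cp %p /data/db/postgres/archive/%f'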

NGINX Ingress - Redis Module add on

I'm having some trouble adding a third-party module into a custom image of the Kubernetes NGINX-Ingress controller, and haven't been successful in figuring out how to do it.
I have seen that one other person has had this problem; they seem to have compiled it from scratch, but don't provide details on how they added the file.
I'm installing the ingress controller via Helm, with a simple values.yaml file to make the alterations shown below:
values.yaml
controller:
  image:
    registry: [registry]
    repository: [repo]
    image: [image]
    tag: 1.0.2
    pullPolicy: Always
  config:
    entries:
      main-snippets: "load_module /etc/nginx/modules/ngx_http_redis2_module.so"
prometheus:
  create: true
This was accepted when doing a helm install, and it pulls the custom image into the pod/container for the ingress controller. I have been able to dynamically compile the module into a .so file, which I'm keeping locally for now to add into the custom image. The issue I'm having is that when building a Dockerfile for this custom image, I seem to be unable to add the module file.
Dockerfile
FROM nginx/nginx-ingress:2.1.0
#Adds nginx redis2 module
USER root
COPY ngx_http_redis2_module.so /etc/nginx/modules/ngx_http_redis2_module.so
USER www-data
The Dockerfile above is what I'm using to attempt to add the file into the proper place, which should be the /etc/nginx/modules folder alongside the others. But after running the pod and shelling into it, I only see the following:
-rwxr-xr-x 1 www-data www-data 164256 Sep 26 17:40 ngx_http_auth_digest_module.so
-rwxr-xr-x 1 www-data www-data 5388256 Sep 26 17:40 ngx_http_brotli_filter_module.so
-rwxr-xr-x 1 www-data www-data 78152 Sep 26 17:40 ngx_http_brotli_static_module.so
-rwxr-xr-x 1 www-data www-data 100704 Sep 26 17:40 ngx_http_geoip2_module.so
-rwxr-xr-x 1 www-data www-data 113056 Sep 26 17:40 ngx_http_influxdb_module.so
-rwxr-xr-x 1 www-data www-data 267080 Sep 26 17:40 ngx_http_modsecurity_module.so
-rwxr-xr-x 1 www-data www-data 2468184 Sep 26 17:40 ngx_http_opentracing_module.so
-rwxr-xr-x 1 www-data www-data 74856 Sep 26 17:40 ngx_stream_geoip2_module.so
It seems that I'm building the image incorrectly, or something else is going wrong entirely; any help would be appreciated.
So I found that I was doing a few things wrong, but thanks to @jordanm for helping me out here.
I was able to run the image using docker run -it --entrypoint=/bin/sh and saw that my step adding the file was correct, but I was doing a few other things incorrectly. First off, I was using the wrong base image; I should have been using the controller image, so here is the proper Dockerfile:
Dockerfile
FROM k8s.gcr.io/ingress-nginx/controller:v1.1.1
#Adds nginx redis2 module
USER root
COPY --chmod=744 ngx_http_redis2_module.so /etc/nginx/modules/ngx_http_redis2_module.so
USER www-data
I also had to tweak my values.yml file for Helm to be a bit different:
values.yml
controller:
  image:
    registry: [registry]
    image: [image]
    tag: "1.0.5"
    digest: "sha256:[sha value]"
    pullPolicy: Always
  config:
    main-snippets: "load_module /etc/nginx/modules/ngx_http_redis2_module.so;"
prometheus:
  create: true
I had to add in the digest value, which didn't seem to be stated in the Helm instructions. To get the SHA256 value, use the command docker inspect --format='{{index .RepoDigests 0}}' [IMAGE]:[TAG] and use that output as the digest value.
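For example (registry, image and digest are placeholders), the command prints the repo digest; the sha256:... portion of the output is what goes into the digest field above:
$ docker inspect --format='{{index .RepoDigests 0}}' [registry]/[image]:1.0.5
[registry]/[image]@sha256:[sha value]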
What I hadn't noticed was that the container was failing so fast that a different one was spun up before I even saw it. So I went onto my test control plane / node, uninstalled NGINX first, then ran sudo docker system prune -af to remove unused images. This way I was guaranteed that my image was being pulled and deployed, and not reverting back to a different image.
I don't know why, but the pod description stated that it was deploying my image while, I believe, under the hood it was using another image.
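As a quick sanity check before deploying (a sketch; the image name is a placeholder), the module file can also be listed straight from the built image in a throwaway container, which avoids chasing a pod that restarts too quickly to inspect:
# Build the custom image and list the modules directory without starting the controller.
docker build -t [registry]/[image]:1.0.5 .
docker run --rm --entrypoint=/bin/sh [registry]/[image]:1.0.5 \
  -c 'ls -l /etc/nginx/modules/ | grep redis2'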

MongoDB fails to rotate logs, keeping deleted files open

I have a similar problem to what was reported in [1], in that the MongoDB log files are kept open as deleted when a log rotation happens. However, I believe the underlying causes are quite different, so I created a new question. The long and short of it is that when this happens, which is not all the time, I end up with no Mongo logs at all; and sometimes the deleted log file is kept for so long that its size becomes an issue.
Unlike [1], I have set up log rotation directly in Mongo [2]. It is done as follows:
systemLog:
  verbosity: 0
  destination: file
  path: "/SOME_PATH/mongo.log"
  logAppend: true
  timeStampFormat: iso8601-utc
  logRotate: reopen
In terms of the setup: I am running MongoDB 4.2.9 (WiredTiger) on RHEL 7.4. The database sits on an XFS filesystem. The mount options we have for XFS are as follows:
rw,nodev,relatime,seclabel,attr2,inode64,noquota
Any pointers as to what could be causing this behaviour would be greatly appreciated. Thanks in advance for your time.
Update 1
Thanks to everyone for all the pointers. I now understand the configuration better, but I still think there is something amiss. To recap, in addition to the settings telling MongoDB to reopen the file on log rotate, I am also using the logrotate command. The configuration is fairly similar to what is suggested in [3]:
# rotate log every day
daily
# or if size limit exceeded
size 100M
# number of rotations to keep
rotate 5
# don't error if log is missing
missingok
# don't rotate if empty
notifempty
# compress rotated log file
compress
# permissions of rotated logs
create 644 SOMEUSER SOMEGROUP
# run post-rotate once per rotation, not once per file (see 'man logrotate')
sharedscripts
# 1. Signal to MongoDB to start a new log file.
# 2. Delete the empty 0-byte files left from compression.
postrotate
/bin/kill -SIGUSR1 $(cat /SOMEDIR/PIDFILE.pid 2> /dev/null) > /dev/null 2>&1
find /SOMEDIR/ -size 0c -delete
endscript
The main difference really is the slightly more complex postrotate command, though it does seem semantically to do the same as in [3], e.g.:
kill -USR1 $(/usr/sbin/pidof mongod)
At any rate, what seems to be happening with the present log rotate configuration is that, very infrequently, MongoDB appears to get the SIGUSR1 but does not create a new log file. I stress "seems/appears" as I do not have any hard evidence of this, since it is a very tricky problem to replicate under a controlled environment. But we can compare the two scenarios. I see that the log rolling is working in the majority of cases:
-rw-r--r--. 1 SOMEUSER SOMEGROUP 34M May 5 10:51 mongo.log
-rw-------. 1 SOMEUSER SOMEGROUP 9.2M Feb 25 2020 mongo.log-20200225.gz
-rw-------. 1 SOMEUSER SOMEGROUP 8.3M Nov 17 03:39 mongo.log-20201117.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 8.6M Jan 30 03:19 mongo.log-20210130.gz
-rw-------. 1 SOMEUSER SOMEGROUP 8.6M Feb 27 03:31 mongo.log-20210227.gz
...
However, on occasion it seems that instead of creating a new log file, MongoDB keeps hold of the deleted file handle (note the missing mongo.log):
$ ls -lh
total 74M
-rw-r--r--. 1 SOMEUSER SOMEGROUP 18M Feb 17 03:29 mongo.log-20210217.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 18M Feb 18 03:11 mongo.log-20210218.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 18M Feb 19 03:41 mongo.log-20210219.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 15M Feb 20 03:07 mongo.log-20210220.gz
-rw-r--r--. 1 SOMEUSER SOMEGROUP 6.5M Mar 13 03:41 mongo.log-20210313.gz
$ lsof -p SOMEPID | grep deleted | numfmt --field=7 --to=iec
mongod SOMEPID SOMEUSER 679w REG 253,5 106M 1191182408 /SOMEDIR/mongo.log (deleted)
It's not entirely clear to me how one would get more information on what MongoDB is doing upon receiving the SIGUSR1 signal. I also noticed that I get a lot of successful rotations before hitting the issue - it may just be a coincidence, but I wonder if it's the final rotation that is causing the problem (e.g. rotate 5). I'll keep on investigating, but any additional pointers are most welcome. Thanks in advance.
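One way to gather evidence when the problem is suspected (a sketch reusing the commands above; SOMEDIR and PIDFILE.pid are the placeholders from this question) is to trigger the rotation by hand and immediately check whether mongod reopened a fresh mongo.log or is still holding the deleted handle:
# Send the same signal the postrotate script sends, then inspect the result.
kill -SIGUSR1 "$(cat /SOMEDIR/PIDFILE.pid)"
ls -lh /SOMEDIR/mongo.log*                              # a new mongo.log should appear
lsof -p "$(cat /SOMEDIR/PIDFILE.pid)" | grep deleted    # should print nothing if reopen worked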
[1] SO: MongoDB keeps deleted log files open after rotation
[2] MongoDB docs: Rotate Log Files
[3] How can I disable the logging of MongoDB?

How to prevent docker-compose up from changing folder permissions

I'm running a WSL2 Ubuntu terminal with docker for windows, and every time I run docker-compose up the permissions of the folder that contains the project get changed.
Before running docker-compose:
drwxr-xr-x 12 cesarvarela cesarvarela 4096 Jun 24 15:37 .
After:
drwxr-xr-x 12 999 cesarvarela 4096 Jun 24 15:37
This prevents me from changing git branches, editing files, etc. I have to chown the folder back to my user to do that, but I would like to not have to do this every time.
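For context, this is the workaround referred to above, run from the project folder after each docker-compose up (the user name is the one shown in the listing):
# Restore ownership of the project folder to my own user.
sudo chown -R cesarvarela:cesarvarela .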

Cannot run psql in PostgreSQL 9.5

I am using PostgreSQL 9.5 on Ubuntu 16.04 LTS.
I receive the below error when I type psql:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
On checking the logs in /var/log/postgresql/postgresql-9.5-main.log, I see the error as:
2018-11-26 13:17:41 IST [3508-1] FATAL: could not access private key file "/etc/ssl/private/ssl-cert-snakeoil.key": Permission denied
Below are the permissions of the /etc/ssl/private and ssl-cert-snakeoil.key files:
vivek@vivek-ThinkPad-E480:~$ ls -l /etc/ssl
total 36
drwxr-xr-x 2 root root 20480 Nov 22 13:06 certs
-rwxr-xr-x 1 root root 10835 Dec 8 2017 openssl.cnf
drwxr--r-- 2 root ssl-cert 4096 Nov 22 13:06 private
vivek@vivek-ThinkPad-E480:~$ sudo ls -l /etc/ssl/private
total 4
-rw-r----- 1 root ssl-cert 1704 Nov 22 13:06 ssl-cert-snakeoil.key
The postgres user is also added to the group ssl-cert.
vivek@vivek-ThinkPad-E480:~$ getent group ssl-cert
ssl-cert:x:112:postgres
NOTE: I found that there is no server.key present in /var/lib/postgresql/9.5/main.
I also posted this on DBA Stackexchange, but no response as yet.
Can anyone guide me in the right direction in setting permissions?
That can never work, and your server will not be able to start, because the OS user postgres has no permission to access files in /etc/ssl/private.
To allow users in the group ssl-cert to access files in the directory, run
chmod g+x /etc/ssl/private
While you're at it, make sure that /etc/ssl has the required permissions.
To test if everything works, become user postgres and try to read the file.
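A minimal sketch of the fix and the suggested test, using the paths from the question:
# Allow members of group ssl-cert (which includes postgres) to traverse the directory.
sudo chmod g+x /etc/ssl/private
# Become the postgres OS user and try to read the key; no "Permission denied" means it worked.
sudo -u postgres cat /etc/ssl/private/ssl-cert-snakeoil.key > /dev/null && echo readable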