Postgres database running in docker keeps hanging

I'm using the postgres docker image, and after months of running databases in docker containers without trouble, I'm now seeing behaviour where, after a certain period of time, they simply hang. I can exec in with bin/bash but can't do anything with postgres at all; commands don't return and the containers can't be brought down. Even docker kill -s SIGKILL <container_id> doesn't work; it takes a reboot of the docker host to stop them.
The only smoking gun I can see is the message:
WARNING: could not open statistics file "pg_stat_tmp/global.stat": Operation not permitted
on all containers. If anyone has any ideas I'd really appreciate it, as this is killing things at the moment.

This is happening due to a user permission mismatch in the docker container.
Listing the relevant files in the container:
$ docker exec <container> ls -l /var/lib/postgresql/data/pg_stat_tmp
-rw------- 1 root root [...] db_0.stat
-rw------- 1 root root [...] db_1.stat
-rw------- 1 root root [...] db_2.stat
-rw------- 1 postgres postgres [...] global.stat
We can see that all the db_*.stat files are owned by root:root, while global.stat is owned by postgres:postgres.
Checking the docker user gives us:
$ docker exec <container> whoami
root
So, we'd like all of these files to be owned by the postgres user.
Luckily, this is quite easy to fix! Just set the user to postgres and restart.
In a dockerfile:
USER postgres
Using docker-compose:
services:
  postgres:
    image: postgres:13
    user: postgres
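The same override also works without a compose file or a Dockerfile, since docker run accepts a --user flag. A sketch (the image tag and password value are illustrative):

```
docker run --user postgres -e POSTGRES_PASSWORD=secret -d postgres:13
```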

As Norling's and Dharman's answer did not work for me, I tried another way, which did work: leaving the temporary statistics files in the container's /tmp. I changed the PostgreSQL config with an inline command:
postgresdb:
  image: 'postgres'
  command: postgres -c stats_temp_directory=/tmp
  ...

Related

How do I run Mongodb container as root user

When I try to do a mongodump:
mongodump -u aaa -p abc123 --authenticationDatabase admin -d TestDb --gzip --out /var/backups/dump-25-05-22/mybackup.gz
inside my mongodb pod (kubectl exec -it <pod_name> -- bash), I am getting an error:
Failed: error dumping metadata: error creating directory for metadata file /var/backups/...: permission denied
For context I do not have access to the host machine where the k8s instance is running, I can only do kubectl commands so the issue can only be fixed from K8S side.
These are the current permissions on the mount path (/var/backups) :
ls -la
drwxr-xr-x 2 root root 4096 Jun 11 2021 .
drwxr-xr-x 1 root root 4096 Jun 11 2021 ..
I have also tried the following:
1. Changing the permissions directly:
sudo chmod 777 -R /var/backups/
bash: sudo: command not found
chmod 777 -R /var/backups/
error changing permissions of '/var/backups/': Operation not permitted
2. Enabling sudo mode with su - (it requires a password, and I don't know it).
Having done a lot of digging around this, the best solution I have is to tweak the mongo Dockerfile and build my own custom image.
The only feasible solution I found is at this link, which says: "Configure mongo to run as root via editing the mongo config and copying it during the docker build process. This assumes you're using a docker file to build that image. Then it will have no problem accessing the attached volume."
My question(s):
Given an example Dockerfile like the one below, how would I make the so-called mongodb user run as root?
FROM mongo:3.2
COPY entrypoint.sh /root/entrypoint.sh
RUN chown -R mongodb:mongodb /var/log /data/db
USER mongodb
ENTRYPOINT ["/root/entrypoint.sh"]
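Since the question is about a pod rather than a plain container, note that Kubernetes can also override the image's USER without rebuilding. A sketch of a pod spec fragment using securityContext (the container name is illustrative; only the image comes from the question):

```yaml
# Illustrative pod spec fragment: runAsUser: 0 forces the container
# to start as root regardless of the USER set in the Dockerfile.
spec:
  containers:
  - name: mongodb
    image: mongo:3.2
    securityContext:
      runAsUser: 0
```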
At which level is this mongodb user accessible? I have checked the users in the mongodb container but there is no entry for it (unless I'm missing something obvious):
cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats

Why is postgres container ignoring /docker-entrypoint-initdb.d/* in Gitlab CI

Gitlab CI keeps ignoring the sql-files in /docker-entrypoint-initdb.d/* in this project.
here is docker-compose.yml:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
here is .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image: debian:stable-slim
  script:
    - bash ./deploy.sh
The deployment script basically uses rsync to deploy the content of the repository to the server via SSH:
rsync -rav --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"
and then SSHes into the server to stop and restart the container:
ssh "gitlab-ci@$DEPLOY_SERVER" "cd test && docker-compose down && docker-compose up --build --detach"
This all goes well, but when the container starts up, it is supposed to run all the files that are in /docker-entrypoint-initdb.d/* as we can see here.
But instead, when doing docker logs -f lbsn-testdb on the server, I can see it stating
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
and I have no clue why that happens. When running this container locally, or even when I SSH to that server, clone the repo and bring up the containers manually, it all goes well and the SQL files are parsed. Just not when Gitlab CI does it.
Any ideas on why that is?
This has been easier than I expected, and ultimately had nothing to do with Gitlab CI, but with file permissions.
I passed --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw to rsync, which looked really secure because only the user can do stuff. I confess that I probably copy-pasted it from somewhere on the internet. But then the files are mounted into the Docker container, and in there they have those permissions as well:
-rw------- 1 1005 1004 314 May 8 15:48 100-create-database.sql
On the host my gitlab-ci user owns those files, so in the container they are owned by a user with UID 1005 as well, and no permissions are given to any other user.
Inside the container, though, the user who runs the init scripts is postgres, and it can't read those files. Instead of complaining about that, it just ignores them. That might be something to create an issue about…
Now that I pass --chmod=D755,F644 it looks like that:
-rw-r--r-- 1 1005 1004 314 May 8 15:48 100-create-database.sql
and the docker logs say
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/100-create-database.sql
Too easy to think of in the first place :-/
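The difference between the two --chmod values can be reproduced with plain chmod, outside of rsync. A minimal sketch (the file name is borrowed from the log above):

```shell
# Recreate the two permission modes: 600 (owner-only, what the strict
# rsync --chmod produced) vs 644 (world-readable, what F644 produces).
mkdir -p init
touch init/100-create-database.sql

chmod 600 init/100-create-database.sql
stat -c '%a' init/100-create-database.sql   # prints 600: unreadable for any other UID

chmod 644 init/100-create-database.sql
stat -c '%a' init/100-create-database.sql   # prints 644: readable by the container's postgres user
```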
If you have already run the postgres service before, the init files will be ignored when you restart it, so try using --build to build the image again:
docker-compose up --build -d
And before you run it again:
1. Check the existing volumes with
docker volume ls
2. Then remove the one used by your pg service with
docker volume rm {volume_name}
Make sure that the volume is not used by a container; if it is, remove that container as well.
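If the volume is declared in the compose file itself, the list/remove steps can usually be collapsed into one command, since docker-compose down has a --volumes flag. A sketch, run from the project directory:

```
docker-compose down --volumes
docker-compose up --build -d
```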
I found this topic discovering a similar problem with PostgreSQL installation using the docker-compose tool.
The solution is basically the same. For the provided configuration:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
Your deployment script should set 0755 permissions on your postgres container volume, like chmod -R 0755 ./testdb in this case. It is important to make all subdirectories accessible, so the -R option to chmod is required.
The official Postgres Alpine image (used below) runs under an internal postgres user with UID 70 (the Debian-based image uses UID 999). Your application user on the host most likely has a different UID, like 1000 or something similar. That is why the postgres init script misses installation steps: a permissions error. This issue has existed for several years and is still present in the latest PostgreSQL version (currently 12.1).
Please be aware of the security risk of init files that are readable by everyone on the system. It is good practice to use shell environment variables to pass secrets into the init script.
Here is a docker-compose example:
postgres:
  image: postgres:12.1-alpine
  container_name: app-postgres
  environment:
    - POSTGRES_USER
    - POSTGRES_PASSWORD
    - APP_POSTGRES_DB
    - APP_POSTGRES_SCHEMA
    - APP_POSTGRES_USER
    - APP_POSTGRES_PASSWORD
  ports:
    - '5432:5432'
  volumes:
    - $HOME/app/conf/postgres:/docker-entrypoint-initdb.d
    - $HOME/data/postgres:/var/lib/postgresql/data
The corresponding script create-users.sh for creating users may look like:
#!/bin/bash
set -o nounset
set -o errexit
set -o pipefail
POSTGRES_USER="${POSTGRES_USER:-postgres}"
POSTGRES_PASSWORD="${POSTGRES_PASSWORD}"
APP_POSTGRES_DB="${APP_POSTGRES_DB:-app}"
APP_POSTGRES_SCHEMA="${APP_POSTGRES_SCHEMA:-app}"
APP_POSTGRES_USER="${APP_POSTGRES_USER:-appuser}"
APP_POSTGRES_PASSWORD="${APP_POSTGRES_PASSWORD:-app}"
DATABASE="${APP_POSTGRES_DB}"
# Create single database.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE DATABASE ${DATABASE}"
# Create app user.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE USER ${APP_POSTGRES_USER} SUPERUSER PASSWORD '${APP_POSTGRES_PASSWORD}'"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "GRANT ALL PRIVILEGES ON DATABASE ${DATABASE} TO ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${DATABASE}" --command "CREATE SCHEMA ${APP_POSTGRES_SCHEMA} AUTHORIZATION ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "ALTER USER ${APP_POSTGRES_USER} SET search_path = ${APP_POSTGRES_SCHEMA},public"
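The ${VAR:-default} expansions at the top of the script are plain shell parameter expansion: with the :- form, the default applies when the variable is unset or empty. A quick illustration:

```shell
# ${VAR:-default}: use "default" when VAR is unset or empty.
unset APP_POSTGRES_USER
echo "${APP_POSTGRES_USER:-appuser}"    # prints: appuser

APP_POSTGRES_USER=admin
echo "${APP_POSTGRES_USER:-appuser}"    # prints: admin
```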

Can't start mongo with docker command, but can with /bin/bash inside container (with data volume)

This docker-compose.yml:
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    command: "mongod --dbpath=/usr/database"
    networks:
      - backend
    volumes:
      - dbdata:/usr/database

volumes:
  dbdata:
results in this error (snipped):
database_1 | 2016-11-28T06:30:29.864+0000 I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /usr/database/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
Ditto for just trying to run the command in a container using that image directly:
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
But, if I run /bin/bash when starting the container, and THEN start mongo, we're OK:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
root@8aab722fad89:/# mongod --dbpath=/usr/database
Based on the output, the difference seems to be that in the second scenario, the command is run as root.
So, my questions are:
Why does the /bin/bash method work, when the others do not?
How can I replicate that reason, in the docker-compose?
Note: I'm on OSX, since that seems to affect whether you can mount a host directory as a volume for Mongo to use - not that I'm doing that.
To clarify, the image hub.docker.com/_/mongo is an official MongoDB docker image from DockerHub, but NOT an official docker image from MongoDB.
Now to answer your questions,
Why does the /bin/bash method work, when the others do not?
This answer is based on Dockerfile v3.2. First, to point out: your volume mount flag -v /usr/database essentially creates a directory in the container owned by root.
Your command below failed with permission denied because the docker image runs the command as user mongodb (see this dockerfile line), while the directory /usr/database is owned by root.
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
While if you execute /bin/bash as below and then manually run mongod:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
you are logged in as root and executing mongod as root, so it has permission to create database files in /usr/database/.
Also, if you execute the line below, it works because you're pointing to the directory /data/db, whose ownership has been corrected for user mongodb (see this dockerfile line):
$ docker run -v db:/data/db mongo:3.2
How can I replicate that reason, in the docker-compose?
The easiest solution is to use command: "mongod --dbpath=/data/db" because the permission ownership has been corrected in the Dockerfile.
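Applied to the compose file from the question, that suggestion would look roughly like this (a sketch reusing the question's service, network, and volume names):

```yaml
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    # /data/db is chown'd to mongodb in the image, so mongod can write here
    command: "mongod --dbpath=/data/db"
    networks:
      - backend
    volumes:
      - dbdata:/data/db

volumes:
  dbdata:
```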
If you are intending to use a host volume, you would probably have to add a mongodb user on your OSX host and change the appropriate directory permissions. Modifying the ownership of a volume mount is outside the scope of docker-compose.

Transition PostgreSQL persistent storage on docker to modern docker storage only

With the advent of
docker volume create
for storage-only containers, I'm still using the old way of running postgres on my machine for small applications, without a dockerfile:
# MAKE MY DATA STORE
STORAGE_DIR=/home/username/mydockerdata/pgdata
docker create -v $STORAGE_DIR:/var/lib/postgresql/data --name mypgdata ubuntu true
# CREATE THE PG
docker run --name mypg -e POSTGRES_PASSWORD=password123 -d -p 5432:5432 --volumes-from mypgdata library/postgres:9.5.4
# RUN IT
docker start mypg
# docker stop mypg
I have 4 questions:
How can I move from the old way of storing my data in a local, persistent container to modern volumes?
The permissions my way produces have always seemed wacky:
$ ls -lah $STORAGE_DIR/..
drwx------ 19 999 root 4.0K Aug 28 10:04 pgdata
Should I do this differently?
Does my networking look correct here? Will this be visible only on the machine hosting docker, or is it also exposed to all machines on my wifi network?
Besides the weak password, standard ports, default usernames, for example here, are there other security fears in doing this for personal use only that I should be aware of?
Create a new volume and copy the data over. Then run your container with the new volume definition.
docker volume create --name mypgdata
docker run --rm \
  -v $STORAGE_DIR:/data \
  -v mypgdata:/datanew ubuntu \
  sh -c 'tar -C /data -cf - . | tar -C /datanew -xvf -'
docker run --rm -v mypgdata:/data ubuntu ls -l /data
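The tar pipeline in the middle command is ordinary shell and can be tried outside docker. A minimal sketch copying one directory tree into another while preserving modes (directory and file names are illustrative):

```shell
# Copy the contents of data/ into datanew/ via a tar pipe,
# the same technique the docker command above uses between volumes.
mkdir -p data/sub datanew
echo hello > data/sub/file.txt
chmod 600 data/sub/file.txt

tar -C data -cf - . | tar -C datanew -xf -

cat datanew/sub/file.txt            # prints: hello
stat -c '%a' datanew/sub/file.txt   # prints: 600 (modes preserved)
```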
The permissions are normal. UID 999 is the postgres user that the postgres image creates.
Port 5432 will be accessible on all your docker host's interfaces. If you only want it to be available on localhost, use -p 127.0.0.1:5432:5432.
Moving to listening on localhost mitigates most security issues, until someone gains access to your docker host. General security is a bit too broad a topic for a dot point.

Mongo docker with volume get error

After running docker run --name mongo -p 27017:27017 -v ~/Documents/store/mongo/:/data/db -d mongo --smallfiles, I get:
[initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
In the store directory, ls -l shows:
drwxr-xr-x 4 MeoBeoI staff 136 Dec 8 17:11 mongo
drwxr-xr-x 2 MeoBeoI staff 68 Dec 9 10:20 redis
I use OSX 10.11.1
Looking at the mongo Dockerfile:
a user mongodb is created
/data/db is created and protected for mongodb:mongodb
So try and make sure to start your container with that user:
docker run --user mongodb ...
If that is not working, then fall back to the original docker run command (which runs as root by default) and define a run.sh (that needs to be copied into your own image) which will:
do the right chown -R mongodb:mongodb /data/db
call entrypoint.sh mongod
That is:
docker run ... run.sh
That would be an approach similar to issue 7198 (Make uid & gid configurable for shared volumes).