macOS: Postgres database in Docker is not getting deleted after running docker-compose down

I am spinning up a Postgres database with Docker Compose, and it worked great the first time. I wanted to test some things, so I ran
docker-compose down
After making my changes, I ran docker-compose up. This time, I keep getting a message saying
PostgreSQL Database directory appears to contain a database; Skipping initialization
My docker-compose is as follows:
postgres_db:
  image: postgres:14.1
  restart: unless-stopped
  environment:
    - POSTGRES_PASSWORD=${BP_ADMIN_PASSWORD}
  volumes:
    - ./admin/${BP_ENV:-dev}/database:/docker-entrypoint-initdb.d
    - ./data/postgres:/var/lib/postgresql/data
  command: postgres -c logging_collector=on -c log_rotation_age=1d -c log_directory=/mnt/log -c
  ports:
    - "5432:5432"
I ran docker-compose down --volumes to try to get rid of everything, but it did not work.
Then I deleted the entire ./data/postgres folder on my local drive, but it still failed. After that, I commented out - ./data/postgres:/var/lib/postgresql/data, but it still did not work.
How do I get rid of the existing database?

Running docker-compose down --volumes will only delete Docker volumes. You're not using volumes; you're using bind mounts, which mount a host directory inside your container:
volumes:
  - ./admin/${BP_ENV:-dev}/database:/docker-entrypoint-initdb.d
  - ./data/postgres:/var/lib/postgresql/data
The way you delete that data is by using rm, as in:
rm -rf ./data/postgres/*
If you wanted to use a Docker volume for the database directory, that would look like:
version: "3"
services:
postgres_db:
image: postgres:14.1
restart: unless-stopped
environment:
- POSTGRES_PASSWORD=${BP_ADMIN_PASSWORD}
volumes:
- ./admin/${BP_ENV:-dev}/database:/docker-entrypoint-initdb.d
- pgdata:/var/lib/postgresql/data
command: postgres -c logging_collector=on -c log_rotation_age=1d -c log_directory=/mnt/log -c
ports:
- "5432:5432"
volumes:
pgdata:
In this case, running docker-compose down -v (or --volumes, if you prefer) would delete the pgdata volume along with the containers.
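As a quick sanity check, you can also manage that named volume by hand (a sketch; Compose prefixes the volume name with your project/directory name, so the exact name will differ):
docker volume ls                     # the named volume shows up as <project>_pgdata
docker-compose down --volumes        # removes the stack and its named volumes
docker volume rm <project>_pgdata    # or remove just that one volume manually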

Related

Docker compose: Error: role "hleb" does not exist

I'd kindly ask for your help with Docker and Postgres.
I have a local Postgres database and a NestJS project.
I killed the process that was using port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo - unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that will help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used by Docker Compose does not contain the Docker-specific environment variables.
So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. You will have to re-create the volume (delete it) to make this work, if the data directory already exists.
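To pick up the new variables, a fresh initialization is needed; a minimal sketch (note that this deletes the existing database data held in the named volume):
docker-compose down --volumes   # stop the stack and remove its named volumes
docker-compose up --build       # recreate everything; Postgres re-initializes with the new user/db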

Permission issue with PostgreSQL in docker container

I'm trying to run a docker image with PostgreSQL that has a volume configured for persisting data.
docker-compose.yml
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - ./data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: example
When I start the container I see the output
fixing permissions on existing directory /var/lib/postgresql/data ... ok
and the data folder is no longer readable for me.
If I elevate myself and access the data directory I can see that the files are there. Furthermore, the command ls -ld data gives me
drwx------ 19 systemd-coredump root 4096 May 17 16:22 data
I can manually set the directory permission with sudo chmod 755 data, but that only works until I restart the container.
Why does this happen, and how can I fix it?
The other answer indeed points to the root cause of the problem; however, the help page it points to does not contain a solution. Here is what I came up with to make this work for me:
Start the container using your normal docker-compose file; this creates the data directory owned by the hardcoded uid:gid (999:999):
version: '3.7'
services:
  db:
    image: postgres
    container_name: postgres
    volumes:
      - ./data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: fake_database_user
      POSTGRES_PASSWORD: fake_database_PASSWORD
Stop the container and manually change the ownership to the uid:gid you want (I'll use 1000:1000 for this example):
$ docker stop postgres
$ sudo chown -R 1000:1000 ./data
Edit your compose file to add the desired uid:gid and start it up again using docker-compose (notice the user: entry):
version: '3.7'
services:
  db:
    image: postgres
    container_name: postgres
    volumes:
      - ./data:/var/lib/postgresql/data
    user: 1000:1000
    environment:
      POSTGRES_USER: fake_database_user
      POSTGRES_PASSWORD: fake_database_password
The reason you can't just set user: from the start is that if the image runs as a different user, it fails to create the data files.
The image documentation page does mention a workaround: mount the /etc/passwd file read-only into the container when providing the --user option. However, that did not work for me with the latest image; I was getting the following error. In fact, none of the three proposed solutions worked for me.
initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
This is because of what is written in the Dockerfile of the postgres image.
On lines 15 to 18 you'll see that group 999 and user 999 are used. I'm guessing that on your host they map to systemd-coredump and root respectively.
Whenever an image uses a numeric user/group, if that uid/gid exists on your host, files created by the container will show up as owned by that host user/group.
You can read the documentation of the postgres image on Docker Hub here. There is a section, Arbitrary --user Notes, that explains how it works in the context of this image.
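To see the mapping for yourself, a couple of hedged checks (assuming a GNU/Linux host with coreutils and getent available):
docker run --rm postgres id postgres   # prints the uid/gid the image's postgres user has (999 in the official image)
getent passwd 999                      # shows which host account, if any, owns that uid
stat -c '%u:%g (%U:%G)' ./data         # shows the numeric and named owner of the bind-mounted directory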
An easier and permanent solution would be as follows:
Add these lines to ~/.bashrc:
export UID=$(id -u)
export GID=$(id -g)
Reload your shell:
$ source ~/.bashrc
Modify your docker-compose.yml as follows:
version: "3.7"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
user: "${UID}:${GID}"
...
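If you'd rather not edit ~/.bashrc, the same ids can be fed to Compose through its .env file, which Compose reads for variable substitution (a sketch; it assumes you don't already keep other values under these names in .env):
echo "UID=$(id -u)" >> .env   # current user id
echo "GID=$(id -g)" >> .env   # current group id
docker-compose up -d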
Here's what I did:
services:
  postgres:
    image: postgres:15.1
    restart: always
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_user
      - POSTGRES_DB=my_user
    user: root
    ports:
      - "5432:5432"
    volumes:
      - /home/my_user/volumes/postgres/data:/var/lib/postgresql/data
      - /home/my_user/volumes/postgres/config:/etc/postgresql
  postgres_setup:
    image: postgres:15.1
    user: root
    volumes:
      - /home/my_user/volumes/postgres/data:/var/lib/postgresql/data
      - /home/my_user/volumes/postgres/config:/etc/postgresql
    entrypoint: [ "bash", "-c", "chmod 750 -R /var/lib/postgresql/data && chmod 750 -R /etc/postgresql" ]
    depends_on:
      - postgres
  pgadmin4:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=my_user@admin.com
      - PGADMIN_DEFAULT_PASSWORD=my_user
      - PGADMIN_LISTEN_ADDRESS=0.0.0.0
    user: root
    ports:
      - "5050:80"
    volumes:
      - /home/my_user/volumes/pgadmin/data:/var/lib/pgadmin
    depends_on:
      - postgres_setup
The postgres_setup container just changes the permissions and then exits.
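You can verify that the setup container did its job and stopped cleanly, for example (a sketch using standard Compose and inspect commands):
docker-compose ps                                                                    # postgres_setup should show as exited
docker inspect -f '{{ .State.ExitCode }}' $(docker-compose ps -q postgres_setup)     # 0 means the chmod commands succeeded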
I had been struggling with a similar issue, and the answer hit me while working around Postgres (a static uid per container: 70 by default on the Alpine image, 999 on the standard image) and Docker's limitations (no uid translation for volumes).
The answer is to use Linux ACLs, without any change to user: in docker-compose.yml; just keep the container's default internal user id.
mkdir -p ./data
sudo setfacl -m u:$(id -u):rwx -R ./data/
docker-compose up -d
or
docker-compose up -d
sudo setfacl -m u:$(id -u):rwx -R ./data/
The order does not matter: as long as the ACL is set after the data directory is created, you as a user will be able to access it recursively. You can of course add further permissions, such as a default ACL (see the sketch after the check below).
To check who has access to the data folder, simply run:
getfacl ./data
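If you also want files that Postgres creates later to inherit the entry, a default ACL can be added on top (a sketch, assuming the host's acl tools and filesystem support default ACLs):
sudo setfacl -d -m u:$(id -u):rwx -R ./data/   # default ACL: new files/dirs under ./data inherit the permission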

Docker container shuts down giving 'data directory has wrong ownership' error when executed in windows 10

I have my docker installed in Windows. I am trying to install this application. It has given me the following docker-compose.yml file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "8085:80"
    networks:
      - attendizenet
    volumes:
      - .:/usr/share/nginx/html/attendize
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    networks:
      - attendizenet
  php-worker:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    command: php artisan queue:work --daemon
    networks:
      - attendizenet
  db:
    image: postgres
    environment:
      - POSTGRES_USER=attendize
      - POSTGRES_PASSWORD=attendize
      - POSTGRES_DB=attendize
    ports:
      - "5433:5432"
    volumes:
      - ./docker/pgdata:/var/lib/postgresql/data
    networks:
      - attendizenet
  maildev:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    networks:
      - attendizenet
  redis:
    image: redis
    networks:
      - attendizenet
networks:
  attendizenet:
    driver: bridge
All the installation goes well, but the PostgreSQL container stops after starting for a moment giving following error.
2018-03-07 08:24:47.927 UTC [1] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2018-03-07 08:24:47.927 UTC [1] HINT: The server must be started by the user that owns the data directory
A simple PostgreSQL container from Docker Hub works smoothly, but the error occurs when we try to attach a volume to the container.
I am new to Docker, so please excuse any incorrect use of terms.
This is a documented problem with the Postgres Docker image on Windows [1][2][3][4]. Currently, there doesn't appear to be a way to correctly mount Windows directories as volumes. You could instead use a persistent Docker volume, for example:
db:
  image: postgres
  environment:
    - POSTGRES_USER=attendize
    - POSTGRES_PASSWORD=attendize
    - POSTGRES_DB=attendize
  ports:
    - "5433:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data
  networks:
    - attendizenet
volumes:
  pgdata:
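With a named volume the data lives inside Docker's own storage (the Linux VM on Windows) rather than on an NTFS bind mount, which is what avoids the ownership problem. To see where it ends up (a sketch; the volume name is prefixed with your Compose project name, so adjust it):
docker volume ls
docker volume inspect <project>_pgdata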
Other things that didn't work:
Set PGDATA to a subdirectory (see PGDATA Setting):
environment:
  - PGDATA=/var/lib/postgresql/data/mnt
volumes:
  - ./pgdata:/var/lib/postgresql/data
Use a bind mount (docker-compose 3.2):
volumes:
  - type: bind
    source: ./pgdata
    target: /var/lib/postgresql/data
Running as POSTGRES_USER=root
More information:
GitHub: data directory "/var/lib/postgresql/data" has wrong ownership
Docker Forums: postgresql-data-pgdata-has-wrong-ownership, postgres-to-work-on-persistent-windows-mount
Please refer to reinierkors' answer here. The answer below is copied as-is from that link for the reader's convenience; it works for me.
I solved this by mapping my local volume one directory below the one Postgres needs:
version: '3'
services:
  postgres:
    image: postgres
    restart: on-failure
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_DB=postgres
    volumes:
      - ./postgres_data:/var/lib/postgresql
    ports:
      - 5432:5432
I was having the same issue after downgrading Docker from WSL 2 to WSL 1. In line with what Thomas Taylor describes, I solved the issue by using a named volume.
version: '3.8'
services:
  postgres:
    image: timescale/timescaledb:latest-pg12
    ...
    volumes:
      - pgdata:/var/lib/postgresql/data
    ...
volumes:
  pgdata:
Map the local volume (e.g. C:\docker\pgdata) to one level (one directory) above what PostgreSQL needs. You can also do it from the command line when starting the container:
docker run -itd -e POSTGRES_USER=pguser -e POSTGRES_PASSWORD=pgpasswd \
-e PGDATA=/var/lib/postgresql/data/pgdata -p 5432:5432 \
-v c:\docker\pgdata:/var/lib/postgresql --name postgresql postgres
I hit this issue when I re-installed Docker and used the WSL 1 backend.
Solution: switch Docker to the WSL 2 backend.
I had the same problem as well; I ended up copying the data directory out at regular intervals:
docker cp <container-name>:/var/lib/postgresql/data C:/docker/volumes/postgres
The owner of the Postgres data folder inside the container is the postgres user, so your current user may not have access to the mounted folder. You can grant the required permissions with the command below:
chmod 777 ./docker/pgdata
If this command does not resolve the issue, refer to the following link on mapping the user inside the container to a user outside the container:
https://docs.docker.com/engine/security/userns-remap/#prerequisites
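For reference, user-namespace remapping is a daemon-wide setting in /etc/docker/daemon.json; a minimal sketch, assuming you have no existing daemon.json that would need merging:
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json   # enable remapping with the default dockremap user
sudo systemctl restart docker                                             # the daemon must be restarted to apply it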

Why don't I lose PostgreSQL data when I rebuild the Docker image?

version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Why don't I lose data when running docker-compose build --force-rm --no-cache? If this is normal, why do we need to create a volume for the data folder?
Running docker-compose build --force-rm --no-cache only builds the web image from the Dockerfile, which in your case is in the same directory.
This command does not stop or remove the containers you previously started from this compose file, so you won't lose any data by running it. The postgres image also declares an anonymous volume for /var/lib/postgresql/data, and that volume stays attached to the existing container across rebuilds.
However, as soon as you remove the containers, using docker-compose down, or docker-compose rm once they are stopped, you won't find the postgres data when you restart the container.
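You can see that anonymous volume on the running container, for example (a sketch using docker inspect's Go templating):
docker inspect -f '{{ range .Mounts }}{{ .Type }} {{ .Name }} -> {{ .Destination }}{{ println }}{{ end }}' $(docker-compose ps -q db)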
If you want to persist the data and have the container pick it up when it is recreated, you need to give the postgres data volume a name, like so:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  pgdata:
Now the postgres data won't be lost when the containers are recreated.
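A quick way to convince yourself (a sketch; the data only disappears when the named volume itself is removed):
docker-compose down            # removes the containers but keeps the named volume
docker-compose up -d           # the db container reattaches to pgdata, data intact
docker-compose down --volumes  # only this removes the pgdata volume and its data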

Docker postgres does not run init file in docker-entrypoint-initdb.d

Based on Docker's Postgres documentation, I can create any *.sql file inside /docker-entrypoint-initdb.d and have it automatically run.
I have init.sql that contains CREATE DATABASE ronda;
In my docker-compose.yaml, I have
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn ronda.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
data:
  restart: always
  build: ./postgres/
  volumes:
    - /var/lib/postgresql
  command: "true"
and my postgres Dockerfile,
FROM library/postgres
RUN mkdir -p /docker-entrypoint-initdb.d
COPY init.sql /docker-entrypoint-initdb.d/
Running docker-compose build and docker-compose up work fine, but the database ronda is not created.
This is how I use postgres on my projects and preload the database.
file: docker-compose.yml
db:
  container_name: db_service
  build:
    context: .
    dockerfile: ./Dockerfile.postgres
  ports:
    - "5432:5432"
  volumes:
    - /var/lib/postgresql/data/
This Dockerfile loads a file named pg_dump.backup (binary dump) or pg_dump.sql (plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# Convert a binary dump to a plain SQL dump if one is present (|| true keeps the build going when it is not)
RUN [ -e "pg_dump.backup" ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload database on init
RUN [ -e "pg_dump.sql" ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
If you need to retry loading the dump, you can remove the current database container with:
docker-compose rm db
Then run docker-compose up to retry loading the database.
If your initialisation requirement is just to create the ronda database, then you could simply make use of the POSTGRES_DB environment variable, as described in the documentation.
The bit of your docker-compose.yml file for the postgres service would then be:
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: ronda
On a side note, do not use restart: always for your data container as this container does not run any service (just the true command). Doing this you are basically telling Docker to run the true command in an infinite loop.
I had the same problem with Postgres 11.
Some points that helped me:
run:
docker-compose rm
docker-compose build
docker-compose up
The obvious: don't run compose in detached mode. You want to see the logs.
After adding the docker-compose rm step to the mix, it finally worked.
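For context on why the rm step matters: the entrypoint only runs the files in /docker-entrypoint-initdb.d when the data directory is empty, so the old container and any anonymous data volume have to go before the init scripts will run again. A rough equivalent using Compose flags:
docker-compose rm -s -f -v   # stop and remove the containers plus their anonymous volumes
docker-compose build
docker-compose up            # watch the logs for the docker-entrypoint-initdb.d output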