Docker compose: Error: role "hleb" does not exist - postgresql

I'd like to ask for your help with Docker and Postgres. I have a local Postgres database and a NestJS project.
I killed the process that was listening on port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
Running sudo docker-compose build completes with NO ERRORS.
Running sudo docker-compose up --force-recreate fails with:
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!

Do not use sudo unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides several environment variables that help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used for Docker Compose does not contain the Docker-specific environment variables, so amend/extend it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. If the data directory already exists, you will have to delete the volume and re-create it for these variables to take effect.
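One way to do that, assuming nothing you care about lives in the pgdata volume, is a full reset:
docker-compose down -v        # stop containers and delete the named pgdata volume
docker-compose up --build     # with an empty data directory, the entrypoint re-runs initdb with the new variables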

Related

Docker postgres persisting data without volumes (should not persist)

I am using a docker container to run postgres for testing purposes, it should NOT persist data between different runs.
This is the dockerfile:
FROM postgres:alpine
ENV POSTGRES_PASSWORD=1234
EXPOSE 5432
And this is my compose file:
version: "3.9"
services:
web:
build:
context: ../../.
dockerfile: ./services/web/Dockerfile
ports:
- "3000:3000"
db:
build: ../db
ports:
- "5438:5432"
graphql:
build:
context: ../../.
dockerfile: ./services/graphql/Dockerfile
ports:
- "4000:4000"
indexer:
build:
context: ../../.
dockerfile: ./services/indexer-ts/Dockerfile
volumes:
- ~/.aws/:/root/.aws:ro
However, I find that all data is persisted between sessions and I have no clue why. This is totally messing up my tests and is not expected to happen.
Even after running docker system prune, all data still persists, meaning that the container is probably using a volume somehow.
Does anyone know why this is happening and how to not persist the data?
When you stop your docker-compose environment by typing CTRL-C or similar, the next time you run docker-compose up it will restart the same container if the configuration hasn't changed. So even absent volumes, any data that was there previously will continue to be there.
To ensure you're starting with fresh containers, always run:
docker-compose down
If you have explicit volumes defined in your configuration, adding -v will also delete those volumes:
docker-compose down -v
(That's not necessary in this situation.)
Unrelated to your question, but why are you building a custom postgres image? You could just set things up in your docker-compose.yaml file:
db:
  image: postgres:alpine
  environment:
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
  ports:
    - "5438:5432"
(And then set POSTGRES_PASSWORD in your .env file.)
You are correct, it is using a volume.
You can use the -v switch to clean up:
docker-compose rm -v db
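For context: the official postgres image declares VOLUME /var/lib/postgresql/data in its Dockerfile, so Docker creates an anonymous volume for each container even when your compose file defines none, and docker system prune does not remove volumes unless you pass --volumes. A quick way to confirm what is backing the data directory (the container name here is a placeholder):
docker inspect -f '{{ json .Mounts }}' <db-container-name>   # shows the anonymous volume mounted at /var/lib/postgresql/data
docker volume ls                                             # anonymous volumes appear under long hash names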

Starting Tryton server with docker-compose file

I am trying to link an external postgres to tryton/tryton from docker hub.
docker-compose.yaml
version: '3.7'
services:
  tryton-postgres:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=tryton
    restart: always
  gnuserver:
    image: tryton/tryton:4.6
    links:
      - tryton-postgres:postgres
    ports:
      - 8000:8000
    depends_on:
      - tryton-postgres
    entrypoint: /entrypoint.sh trytond
When I ssh into the container and run trytond-admin --all -d tryton, it seems to be looking for an SQLite file instead of the connected Postgres database. Are there some env variables I must set? What am I missing in my docker-compose file?
Instead of changing the configuration file, with Docker it is simpler to set environment variables like:
DB_USER=
DB_PASSWORD=
DB_HOSTNAME=tryton-postgres
DB_PORT=5432
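For example, these could be set on the gnuserver service in the compose file (a sketch; the credentials shown are the ones from the postgres service above, with the default postgres user):
gnuserver:
  image: tryton/tryton:4.6
  environment:
    - DB_HOSTNAME=tryton-postgres
    - DB_PORT=5432
    - DB_USER=postgres
    - DB_PASSWORD=password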
Alternatively, you can edit /etc/tryton/trytond.conf to point at PostgreSQL:
uri = postgresql://USERNAME:PASSWORD@tryton-postgres:5432/
See the docs.
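In trytond.conf that setting lives in the [database] section, e.g. (a sketch following the Tryton docs):
[database]
uri = postgresql://USERNAME:PASSWORD@tryton-postgres:5432/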

Knex Migration with Docker Compose Psql

I have a problem running Knex.js migrations inside my docker-compose container.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs before the database has even been created. Is there any way to run npm run db only after the database has been created?
NOTE: if I run these npm commands in the container's terminal after it has been built, everything works fine.
here is my docker-compose.yml
version: '3.6'
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
and here is my Dockerfile
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
First, on the docker-compose.yml file, wrap your command in sh so it runs in a contained shell environment, i.e. sh -c 'npm run db'.
Secondly, use depends_on to wait for the database container to start.
Your docker-compose file would then be:
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start postgres container before the server container. It will not however wait for postgres to be ready. In this case it shouldn't be problem, because postgres starts really quickly.
If you want something more solid, or depends_on doesn't do the trick, you can add an entrypoint wrapper script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. There are also links to tools there, so you don't have to write your own script from scratch.
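As a sketch of that more solid option: newer Compose versions also support combining a healthcheck on the database with a depends_on condition, so the server starts only once Postgres actually accepts connections (service names and credentials match the question; the healthcheck itself is an assumption):
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test -d interapp"]
      interval: 2s
      timeout: 3s
      retries: 15
  server:
    build: ./
    command: sh -c 'npm run db'
    depends_on:
      postgres:
        condition: service_healthy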

Docker container shuts down giving 'data directory has wrong ownership' error when executed in windows 10

I have Docker installed on Windows. I am trying to install this application, which provides the following docker-compose.yml file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "8085:80"
    networks:
      - attendizenet
    volumes:
      - .:/usr/share/nginx/html/attendize
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    networks:
      - attendizenet
  php-worker:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    command: php artisan queue:work --daemon
    networks:
      - attendizenet
  db:
    image: postgres
    environment:
      - POSTGRES_USER=attendize
      - POSTGRES_PASSWORD=attendize
      - POSTGRES_DB=attendize
    ports:
      - "5433:5432"
    volumes:
      - ./docker/pgdata:/var/lib/postgresql/data
    networks:
      - attendizenet
  maildev:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    networks:
      - attendizenet
  redis:
    image: redis
    networks:
      - attendizenet
networks:
  attendizenet:
    driver: bridge
The installation goes well, but the PostgreSQL container stops a moment after starting, giving the following error:
2018-03-07 08:24:47.927 UTC [1] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2018-03-07 08:24:47.927 UTC [1] HINT: The server must be started by the user that owns the data directory
A simple PostgreSQL container from Docker Hub works smoothly, but the error occurs when we try to attach a volume to the container.
I am new to Docker, so please excuse any misused terminology.
This is a documented problem with the Postgres Docker image on Windows [1][2][3][4]. Currently, there doesn't appear to be a way to correctly mount Windows directories as volumes. You could instead use a persistent Docker volume, for example:
db:
  image: postgres
  environment:
    - POSTGRES_USER=attendize
    - POSTGRES_PASSWORD=attendize
    - POSTGRES_DB=attendize
  ports:
    - "5433:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data
  networks:
    - attendizenet
volumes:
  pgdata:
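To check that the named volume exists and is reused across restarts (Compose prefixes it with the project name, shown here as a placeholder):
docker volume ls                          # look for <project>_pgdata
docker volume inspect <project>_pgdata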
Other things that didn't work:
Setting PGDATA to a subdirectory (see the PGDATA setting):
environment:
  - PGDATA=/var/lib/postgresql/data/mnt
volumes:
  - ./pgdata:/var/lib/postgresql/data
Using a bind mount (docker-compose 3.2):
volumes:
  - type: bind
    source: ./pgdata
    target: /var/lib/postgresql/data
Running as POSTGRES_USER=root
More information:
GitHub: data directory "/var/lib/postgresql/data" has wrong ownership
Docker Forums: postgresql-data-pgdata-has-wrong-ownership, postgres-to-work-on-persistent-windows-mount
Please refer to reinierkors' answer here; it is copied as-is below for the reader's convenience, and it worked for me.
I solved this by mapping my local volume one directory below the one Postgres needs:
version: '3'
services:
  postgres:
    image: postgres
    restart: on-failure
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_DB=postgres
    volumes:
      - ./postgres_data:/var/lib/postgresql
    ports:
      - 5432:5432
I was having the same issue after downgrading my Docker from WSL 2 to WSL 1. As Thomas Taylor described, I solved the issue by using a named volume.
version: '3.8'
services:
  postgres:
    image: timescale/timescaledb:latest-pg12
    ...
    volumes:
      - pgdata:/var/lib/postgresql/data
    ...
volumes:
  pgdata:
Map the local volume (e.g. C:\docker\pgdata) to one level (one directory) above what PostgreSQL needs. You can also do this from the command line when starting the container:
docker run -itd -e POSTGRES_USER=pguser -e POSTGRES_PASSWORD=pgpasswd \
-e PGDATA=/var/lib/postgresql/data/pgdata -p 5432:5432 \
-v c:\docker\pgdata:/var/lib/postgresql --name postgresql postgres
I met this issue after reinstalling Docker with the WSL 1 backend.
Solution: switch Docker to the WSL 2 backend.
I had the same problem; I had to copy the data directory out at regular intervals:
docker cp <container-name>:/var/lib/postgresql/data C:/docker/volumes/postgres
Inside the container the Postgres data folder is owned by the postgres user, and your current user may not have access rights on the mounted folder. You can grant full permissions with the command below:
chmod 777 ./docker/pgdata
If this command does not resolve the issue, refer to the following link to set up user-namespace mapping between the container and the host:
https://docs.docker.com/engine/security/userns-remap/#prerequisites
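Note that chmod 777 is a blunt instrument. As an alternative sketch, assuming the Debian-based official postgres image (where the postgres user has uid/gid 999), matching the ownership is usually enough:
sudo chown -R 999:999 ./docker/pgdata   # 999 = uid/gid of the postgres user inside the official image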

Docker postgres does not run init file in docker-entrypoint-initdb.d

Based on Docker's Postgres documentation, I can place any *.sql file inside /docker-entrypoint-initdb.d and have it run automatically.
I have init.sql that contains CREATE DATABASE ronda;
In my docker-compose.yaml, I have
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn ronda.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
data:
  restart: always
  build: ./postgres/
  volumes:
    - /var/lib/postgresql
  command: "true"
and my postgres Dockerfile,
FROM library/postgres
RUN mkdir -p /docker-entrypoint-initdb.d
COPY init.sql /docker-entrypoint-initdb.d/
Running docker-compose build and docker-compose up work fine, but the database ronda is not created.
This is how I use postgres in my projects and preload the database.
file: docker-compose.yml
db:
  container_name: db_service
  build:
    context: .
    dockerfile: ./Dockerfile.postgres
  ports:
    - "5432:5432"
  volumes:
    - /var/lib/postgresql/data/
This Dockerfile loads a file named pg_dump.backup (binary dump) or pg_dump.sql (plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# Convert a binary dump to SQL if present; use POSIX [ ] (the alpine shell is not bash)
# and || true so the build does not fail when the file is absent
RUN [ -e "pg_dump.backup" ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload database on init
RUN [ -e "pg_dump.sql" ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
If you need to retry loading the dump, you can remove the current database container with:
docker-compose rm db
Then run docker-compose up to load the database again.
If your initialisation requirement is just to create the ronda database, you can simply make use of the POSTGRES_DB environment variable as described in the documentation.
The bit of your docker-compose.yml file for the postgres service would then be:
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: ronda
On a side note, do not use restart: always for your data container as this container does not run any service (just the true command). Doing this you are basically telling Docker to run the true command in an infinite loop.
Had the same problem with postgres 11. Some points that helped me:
Run:
docker-compose rm
docker-compose build
docker-compose up
The obvious: don't run compose in detached mode; you want to see the logs.
After adding the docker-compose rm step to the mix, it finally worked.
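The underlying reason: the entrypoint only runs the scripts in /docker-entrypoint-initdb.d when the data directory is empty, so a full reset is what forces them to run again. A minimal reset sequence, assuming any volumes holding the data directory are disposable:
docker-compose down -v    # remove containers and their volumes, emptying the data directory
docker-compose build
docker-compose up         # initdb runs again and executes /docker-entrypoint-initdb.d/*.sql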