This question already has answers here:
How to create User/Database in script for Docker Postgres
(9 answers)
Closed 8 months ago.
I have a fairly complex environment: two services, named A and B, depend on the postgresql service, but A and B use different superusers. So once I start the postgresql service, I need to create another superuser for service B.
This is the relevant part of the docker-compose file:
postgres:
  image: postgres:13.4
  container_name: postgresql
  hostname: postgresql
  volumes:
    - 'postgres_data:/var/lib/postgresql/data'
  environment:
    POSTGRES_DB: keycloak
    POSTGRES_USER: keycloak
    POSTGRES_PASSWORD: password
  command: ["CREATE ROLE postgres LOGIN SUPERUSER; | ALTER USER postgres CREATEDB CREATEROLE LOGIN INHERIT REPLICATION BYPASSRLS;"]
In the environment section I already set up the default user, password, and DB in the postgresql service for service A, but I'd like to create another superuser for service B once the postgresql service has started. Does anyone know how to write the commands? Many thanks.
I think this syntax is wrong: command: ["CREATE ROLE postgres LOGIN SUPERUSER; | ALTER USER postgres CREATEDB CREATEROLE LOGIN INHERIT REPLICATION BYPASSRLS;"]
The issue with running commands on container startup is that you need the database to be up before you can run the commands. By overriding command: on the container, you replace the normal command, which is to start Postgres. So with a new command, Postgres will never start.
You can run the command from a different container once Postgres is started. The command-runner container uses psql to connect to the database and run the command. It also uses the postgres Docker image, but because it overrides the command on the container, this container doesn't start a database; it only runs the command.
Here's what I've come up with.
version: "3"
services:
postgres:
image: postgres
environment:
- POSTGRES_USER=keycloak
- POSTGRES_PASSWORD=password
- POSTGRES_DB=keycloak
command-runner:
image: postgres
command: /bin/sh -c 'sleep 10 && PGPASSWORD=password psql -U keycloak -h postgres -d keycloak -c "CREATE ROLE postgres LOGIN SUPERUSER; ALTER USER postgres CREATEDB CREATEROLE LOGIN INHERIT REPLICATION BYPASSRLS;"'
depends_on:
- postgres
I had to put in a sleep 10 command in the command-runner container for it to wait for Postgres to be ready to accept connections.
I also removed the | you had in your command before ALTER USER. I got a syntax error on it.
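If you'd rather not hard-code a sleep, a healthcheck on the Postgres service lets Compose delay the runner until the database actually accepts connections. A minimal sketch, assuming a Compose implementation that supports depends_on conditions (file version 2.1, or the Compose Spec with docker compose v2):

services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=keycloak
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=keycloak
    healthcheck:
      # pg_isready ships with the postgres image
      test: ["CMD-SHELL", "pg_isready -U keycloak -d keycloak"]
      interval: 2s
      timeout: 5s
      retries: 15
  command-runner:
    image: postgres
    depends_on:
      postgres:
        condition: service_healthy
    command: /bin/sh -c 'PGPASSWORD=password psql -U keycloak -h postgres -d keycloak -c "CREATE ROLE postgres LOGIN SUPERUSER;"'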
The problem
I am trying to connect to PostgreSQL from PhpStorm, but it returns the following error:
[28P01] FATAL: password authentication failed for user "app"
The situation
I have the following .env file setup:
POSTGRES_DB=app
POSTGRES_USER=app
POSTGRES_PASSWORD=password
POSTGRES_VERSION=15
And the following in docker-compose.yml:
version: '3'
services:
  database:
    image: postgres:${POSTGRES_VERSION}-alpine
    container_name: database
    env_file:
      - .env
    volumes:
      - db-data:/var/lib/postgresql/data:rw
    ports:
      - '5432'

volumes:
  db-data:
Running docker-compose up -d creates the container and volume successfully.
I then enter the connection details into PhpStorm, but the error pops up, and entering the password again doesn't fix anything.
I am running this on Ubuntu 22.04.1 LTS
What I've tried
I've rebuilt the container several times with different user & password combinations (making sure to use docker-compose down -v to get rid of the volume), all with the same result.
I've tried changing the password by executing docker exec -it database psql -U app and then running ALTER ROLE app WITH PASSWORD 'password', but this did not change anything.
I also saw online that it might have something to do with the user's authentication being set up as ident, but I cannot find a way to change this in the docker-compose.yml file.
The question
How could I set this up so I can connect my PhpStorm to the PostgreSQL database properly?
In my case I was running Windows 10 with WSL2 (Ubuntu). I had installed Postgres in the Ubuntu instance as part of setting up an app. I'd removed the app but the Postgres server was still running. When I attempted to connect to the Docker Postgres instance using localhost:5432 I was instead connecting to the WSL2 Postgres instance.
In my case, since I was no longer using Postgres in WSL2, I removed it and this resolved the issue. You could also stop it or use the host name/IP as mentioned by @jjanes.
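If you want to confirm what is actually listening on port 5432 on the host, something along these lines should work on Ubuntu/WSL2 (tool availability may vary):

# show which process owns port 5432
sudo ss -lntp | grep 5432
# if it is the distro's own Postgres, stop it (WSL2 usually has no systemd)
sudo service postgresql stop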
I have spent 3-4 hours on this and still have not found a solution.
I can successfully run the docker container and use psql from the container bash, however, when I try to call the db from my local machine I continue to get this error message:
error role "postgres" does not exist
I have already tried editing "listen_addresses" in the postgresql.conf file from the container bash.
My setup:
I am using a MacBook (Monterey 12.4).
My docker-compose file:
version: '3.4'
services:
  postgres:
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres_db
      - POSTGRES_USER=testUser
      - POSTGRES_PASSWORD=testPW
    volumes:
      - postgres-data:/var/lib/postgresql/db
But this issue occurs if I do it through the standard CLI command as well, i.e.:
docker run -d -p 5432:5432 --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword postgres
I tried to follow this tutorial but it didn't work:
https://betterprogramming.pub/connect-from-local-machine-to-postgresql-docker-container-f785f00461a7
When I try this command:
psql -h localhost -p 5432 -U postgres -W
it doesn't work:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: role "postgres" does not exist
Also, for reference, the user "postgres" does exist in Postgres, as a superuser.
Replace POSTGRES_USER=testUser with POSTGRES_USER=postgres in the compose configuration. Also use the password defined in POSTGRES_PASSWORD. Delete the old container and create a new one.
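Alternatively, keeping the compose file as posted, connecting as the superuser that was actually created (testUser) should also work; a sketch:

PGPASSWORD=testPW psql -h localhost -p 5432 -U testUser -d postgres_db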
Thank you all for your help on this.
It turns out the issue was that I was running Postgres on my local machine as well, so once I turned that off I was able to connect.
I appreciate your time!
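For anyone hitting the same thing on macOS, one way to spot a local Postgres shadowing the container is to check what owns the port; a sketch (the Homebrew formula name may differ, e.g. postgresql@14):

# list processes listening on TCP port 5432
lsof -nP -iTCP:5432 -sTCP:LISTEN
# stop a Homebrew-managed Postgres if that is what shows up
brew services stop postgresql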
I have a docker image that's not accepting credentials for a user that is defined in the yaml docker-compose file. When I go to the docker console for the container and check users it only lists postgres. Not sure what I am missing - here's the yaml file:
version: '3.8'
services:
  db:
    container_name: drewreport_container
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: mpassword
      POSTGRES_USER: thedrewreport
      POSTGRES_DB: thedrewreportdb
    ports:
      - "5432:5432"
    volumes:
      - thedrewreportdata:/var/lib/postgresql/data/

volumes:
  thedrewreportdata:
Any ideas?
I can't reproduce your problem. Running docker-compose up, I see:
Creating network "docker_default" with the default driver
Creating volume "docker_thedrewreportdata" with default driver
Creating docker_client_1 ...
Creating docker_db_1 ...
Creating docker_client_1 ... done
Creating docker_db_1 ... done
Attaching to docker_db_1, docker_client_1
[...]
db_1 | 2021-07-19 23:03:39.676 UTC [1] LOG: database system is ready to accept connections
If I then connect with psql, I can authenticate using the username and password you've defined in your docker-compose.yml:
# psql -h localhost -U thedrewreport thedrewreportdb
Password for user thedrewreport:
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
thedrewreportdb=# \du
                                      List of roles
   Role name   |                         Attributes                          | Member of
---------------+-------------------------------------------------------------+-----------
 thedrewreport | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

thedrewreportdb=#
Note that any volumes specified in your docker-compose.yml will persist between a docker-compose down and a docker-compose up, so if you ever brought your stack up with different credentials, those will never be replaced unless you explicitly destroy the volume by running docker-compose down -v.

You can tell that docker-compose is re-using a volume if you don't see a message like this when you run docker-compose up:

Creating volume "docker_thedrewreportdata" with default driver
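If stale credentials in the volume are the problem, a full reset along these lines makes the environment variables take effect again; note this deletes all database data:

# stop the stack and remove named volumes
docker-compose down -v
# recreate everything from scratch; init now runs against an empty volume
docker-compose up -d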
If the mounted data directory at /var/lib/postgresql/data/ already contains a database, that existing data takes precedence over the environment variables, which are only used to initialize an empty data directory.
You have 2 options:
Update the existing DB data to add your user / password / database. To do so you can use docker compose exec db bash and then connect with psql to make your changes.
Delete or move your existing thedrewreportdata local volume, for instance by updating the mount to ./thedrewreportdata_postgres:/var/lib/postgresql/data/
Once done, you can use docker compose exec db psql --username thedrewreport --dbname thedrewreportdb to double-check that you can connect to the updated DB with your credentials.
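For the first option, assuming a superuser already exists inside the old volume (postgres is a guess here), the missing role and database could be created in place, roughly:

docker compose exec db psql -U postgres -c "CREATE USER thedrewreport WITH PASSWORD 'mpassword' SUPERUSER;"
docker compose exec db psql -U postgres -c "CREATE DATABASE thedrewreportdb OWNER thedrewreport;"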
I'm trying to initialize a database without using the entry point directory.
Here is a minimal Dockerfile:
FROM postgres:latest
ENV POSTGRES_DB=db
ENV POSTGRES_USER=user
ENV POSTGRES_PASSWORD=password
ADD db.sql /directory/
ADD script.sh /directory/
CMD ["sh", "/directory/script.sh"]
# Or ENTRYPOINT ["/directory/script.sh"]?
And script.sh:
psql -d db -U user < /directory/db.sql
This does not work because postgres isn't up when the script is run.
How can I run db.sql without using /docker-entrypoint-initdb.d?
If you look at the standard postgres image's entrypoint script the mechanism to support the /docker-entrypoint-initdb.d directory is pretty intricate: it needs to bootstrap the database directory and initial user and database, then it starts the database server in the background and runs everything in that directory, and finally runs the database for real. If you're trying to replicate this setup, you have to do all of these steps yourself.
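A rough sketch of doing those steps yourself, reusing the image's own entrypoint and polling with pg_isready (this script.sh is hypothetical; the env variable names come from the Dockerfile above):

#!/bin/sh
# start the official entrypoint (initdb + server) in the background
docker-entrypoint.sh postgres &

# poll until the server accepts connections on the local socket
until pg_isready -U "$POSTGRES_USER"; do
  sleep 1
done

# run the one-off SQL, then keep the server process in the foreground
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f /directory/db.sql
wait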
There are other ways to set up a database, though. You can create an empty database and then run your application's migrations normally to create the initial schema. If you have a SQL file that you'd normally run against a database using the psql client tool, you can do the exact same thing with Docker:
docker run -d -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=secret postgres:12
PGPASSWORD=secret psql -h localhost -U postgres < db.sql
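If psql isn't installed on the host, the same file can be piped through the container instead, e.g.:

docker exec -i postgres psql -U postgres < db.sql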
I have a postgres:9.5.6-alpine container, and another container, named web, which has to be linked to it.
I want to run a script named create_db.sh in postgres container after it has been started and docker-entrypoint.sh has been executed, in order to create a db and a user and restore a backup.
My docker-compose.yml (postgres part):
postgres:
  build: ./postgres
  container_name: postgres
  volumes:
    - /shared_folder/postgresql:/var/lib/postgresql
  ports:
    - "5432:5432"
  command: sh /home/create_db.sh
The content of create_db.sh is:
#!/bin/sh
psql -d template1 -U postgres
psql --command "CREATE USER user WITH PASSWORD 'userpassword';"
psql --command "CREATE DATABASE userdb;"
psql --command "GRANT ALL PRIVILEGES ON DATABASE userdb to user;"
psql --command "\q;"
psql -U user -d userdb -f /var/lib/postgresql/backup.sql
exit
When I run docker-compose build and then docker-compose up I get this:
Attaching to postgres, web
postgres | psql: could not connect to server: No such file or directory
postgres | Is the server running locally and accepting
postgres | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I understand this happens because the Postgres server isn't ready when create_db.sh is launched, but then how can I run the script after the container's docker-entrypoint.sh?
You are overriding the original command and you do not start postgres in this script which is why your database is not available.
You can put your database initialization into the container's entrypoint directory: /docker-entrypoint-initdb.d. This executes all *.sh and *.sql files in this directory and does not touch the original command.
All files in this directory are automatically executed in the alphabetical order on container creation. Therefore, create a volume to add your scripts / sql files to the entrypoint and let the container execute them. This is described in the official postgres documentation, section "How to extend this image".
Your compose file then changes to something like this:
postgres:
  build: ./postgres
  volumes:
    - /shared_folder/postgresql:/var/lib/postgresql
    - ./db-init-scripts:/docker-entrypoint-initdb.d
  ports:
    - "5432:5432"
where a local directory, e.g. db-init-scripts, contains your initialization scripts (rename it if you want). Copy create_db.sh to this folder and it will be executed automatically when you create a new container.
Several database-images watch this entrypoint-directory, which is very convenient.
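As a concrete sketch, the create_db.sh from the question could be rewritten for the entrypoint directory along these lines (appuser is used instead of user, which is a reserved word in SQL; $POSTGRES_USER is set by the official image):

#!/bin/sh
set -e

# during first-time init the server is reachable via the local socket
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" <<EOSQL
CREATE USER appuser WITH PASSWORD 'userpassword';
CREATE DATABASE userdb OWNER appuser;
GRANT ALL PRIVILEGES ON DATABASE userdb TO appuser;
EOSQL

# restore the backup into the new database
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d userdb -f /var/lib/postgresql/backup.sql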
Your container_name: postgres seems redundant.
If you are getting an error /bin/sh: bad interpreter: Permission denied change execution permission to your script file first, since Docker copies over permissions:
chmod +x your/script.sh