How do I create a database within a docker container using only the docker-compose file? - postgresql

I'm trying to create a database and connect to it within my container network. I don't want to have to ssh into a box to create users/databases etc, as this is not a scalable or easily distributable process.
This is what I have so far:
# docker-compose.yml
db:
image: postgres:9.4
volumes:
- ./db/init.sql:/docker-entrypoint-initdb/10-init.sql
environment:
- PGDATA=/tmp
- PGDATABASE=web
- PGUSER=docker
- PGPASSWORD=password
This is my init.sql file:
CREATE DATABASE web;
CREATE USER docker WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE web TO docker;
When I start up the container and try to connect to it, I get this error:
db_1 | FATAL: role "docker" does not exist
db_1 | done
db_1 | server started
db_1 | FATAL: database "web" does not exist
db_1 | psql: FATAL: database "web" does not exist
The first time this happened, I tried to create a role like this:
CREATE ROLE docker with SUPERUSER PASSWORD password;
GRANT web TO docker;
But it did not have any effect. To make matters even more confusing, when I use node-postgres to connect to the db, I get this error:
Error: connect ECONNREFUSED
But how can the connection be refused if the db service isn't even up??
In a nutshell, these are the questions I'm trying to solve:
How can I create a database using only the files in my project (i.e. no manual commands)?
How do I create a user/role using only the files in my project?
How do I connect to this database?
Thank you in advance.

How can I create a database using only the files in my project (i.e. no manual commands)?
The minimal docker-compose.yml config for your defined user and database is:
postgres:
image: postgres:9.4
environment:
- POSTGRES_DB=web
- POSTGRES_USER=myuser
How do I create a user/role using only the files in my project?
To execute scripts on database initialization, take a look at the official docs for initdb.
To get you started with a quick-and-dirty solution, create a new file, e.g. init_conf.sh, in the same directory as your docker-compose.yml:
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
CREATE ROLE docker with SUPERUSER PASSWORD 'password';
EOSQL
And add the volumes directive to your docker-compose.yml.
volumes:
- .:/docker-entrypoint-initdb.d
Recreate your container, because otherwise you won't trigger a new database initialization. That means docker stop and docker rm the old one first before executing docker-compose up again. STDOUT now gives you some information about the newly introduced script.
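These recreate steps can be sketched as the following commands (a sketch, assuming the service is named postgres as in the minimal config above; the container id is a placeholder):

```shell
# Init scripts run only against an empty data directory,
# so remove the old container first, then bring the stack up again:
docker stop <container_id>   # placeholder: use the id/name from `docker ps`
docker rm <container_id>
docker-compose up
```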
How do I connect to this database?
To connect to your database with docker exec via the terminal:
docker exec -ti folder_postgres_1 psql -U myuser -d web
A docker-compose.yml in one of my production environments looks like the following:
services:
postgres:
logging: &logging
driver: json-file
options:
max-size: "10m"
max-file: "5"
build: ./docker/postgres # path to custom Dockerfile
volumes:
- postgres_data:/var/lib/postgresql/data
- postgres_backup:/backups
env_file: .env
restart: always
# ... other services like web, celery, redis, etc.
Dockerfile:
FROM postgres:latest
# ...
COPY *.sh /docker-entrypoint-initdb.d/
# ...

The environment variables you are using are wrong. Try this:
version: '3.3'
services:
db:
image: postgres:9.4
restart: always
environment:
- POSTGRES_USER=docker
- POSTGRES_PASSWORD=password
- POSTGRES_DB=web
volumes:
- db_data:/var/lib/postgresql/data
# optional port
ports: ["5555:5432"]
volumes:
db_data:
Then, from any other docker-compose service, you can access the DB at db:5432. From your host machine you can access Postgres on localhost:5555 if you also add the ports mapping.
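For example, a hypothetical companion service (the web service, image, and URL below are illustrative, not from the question) could reach the database through the compose network, where the service name db doubles as the hostname:

```yaml
web:
  image: node:18        # hypothetical app service, just for illustration
  environment:
    # inside the compose network: service name + container port (5432),
    # not the host-mapped 5555
    - DATABASE_URL=postgres://docker:password@db:5432/web
  depends_on:
    - db
```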

Related

weblate connect to aws rds postgres

I'm trying to connect weblate to an external RDS Postgres database.
I'm using a docker compose file that runs the weblate container. To this container I add the environment variables to connect to RDS Postgres.
The weblate container doesn't connect to RDS Postgres and gives me this error:
psql: error: connection to server at "XXXX.rds.amazonaws.com", port 5432 failed: FATAL: password authentication failed for user "postgres"
But if I try to connect to RDS Postgres from inside the weblate container via the CLI, it works.
docker compose file:
version: '3'
services:
weblate:
image: weblate/weblate
tmpfs:
- /app/cache
volumes:
- weblate-data:/app/data
env_file:
- ./environment
restart: always
ports:
- 80:8080
environment:
POSTGRES_PASSWORD: XXXX
POSTGRES_USER: myuser
POSTGRES_DATABASE: mydb
POSTGRES_HOST: XXX.rds.amazonaws.com
POSTGRES_PORT: 5432
It tries to connect as the postgres user while your configuration states myuser. Maybe the ./environment file overrides that?
I found the problem.
It was the $ character inside the password.
Maybe the library used to connect to Postgres has a bug or simply doesn't allow $ in the password string.
When I removed that character, it worked.
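If removing the character is not an option, one workaround (a sketch, assuming the client accepts a libpq-style connection URL) is to percent-encode $ as %24 in the password component of the URL:

```shell
# Keep the literal $ by using single quotes, then percent-encode it
# for use inside a connection URL:
RAW_PASSWORD='pa$$word'   # example password, not the real one
ENCODED=$(printf '%s' "$RAW_PASSWORD" | sed 's/\$/%24/g')
echo "postgres://myuser:${ENCODED}@XXXX.rds.amazonaws.com:5432/mydb"
```

This prints the URL with pa%24%24word as the password part, which a URL-aware client decodes back to the original password.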

Azure Data Studio connection error: FATAL: password authentication failed for user "postgres"

I have created a PostgreSQL database inside a docker-compose container with the following yaml config:
version: '3'
services:
...
db:
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_DB: my_db
POSTGRES_USER: postgres
POSTGRES_PASSWORD: my_password
volumes:
db:
driver: local
Without the volumes section, I was not able to access the database from outside the container, meaning that I was not able to add any data to it. With this added section, I am now able to run:
docker exec -it container_db_1 psql -U postgres
Which allows me to create databases, tables, add data, etc.
However I am now trying to connect to the database with Azure Data Studio but I get the error FATAL: password authentication failed for user "postgres". I've triple checked all the connection settings but I always get this same error.
In the past I was able to connect to a Postgres database created through a docker container (on its own, not with docker-compose). And I don't understand what is different this time, since I can connect through a terminal in the same way.

Docker compose PostgreSQL 14.4 (latest) run 3 databases for multi-tenant system

My Docker skills are limited. I am using Docker Desktop for Windows version 4.9.1 (81317) (the latest version at this time). My web app is a multi-tenant system and needs 3 databases. This is my docker-compose.yml (it runs 1 database OK):
version: '2.1'
services:
postgres:
image: postgres
ports:
- "5433:5432"
restart: always
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: acc_spring
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d acc_spring"]
interval: 10s
timeout: 5s
retries: 3
Please guide me on how to revise the above docker-compose.yml to run 3 databases at the same time:
tenant_master: port 5432, database name: tenant_master, username postgres, password postgres.
tenant_1: port 5432, database name: tenant_1, username postgres, password postgres.
tenant_2: port 5432, database name: tenant_2, username postgres, password postgres.
Notice: 3 databases in a database instance/server.
Please also explain what/how/when to use restart: always. Are there other options besides always?
Besides the port mapping (5433 seems wrong) you shouldn't need to change anything (well - probably the admin password shouldn't be 'postgres' in production).
You need to connect to this instance and create your databases (SQL or some app) and users. This is not related to Docker or docker compose. POSTGRES_DB is just the default database (not THE database).
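If you prefer the manual route, a sketch of the one-off commands (assuming the container is named postgres, as in the compose file in this answer, and the superuser is postgres) would be:

```shell
# Run SQL against the already-running container; database names from the question
docker exec -it postgres psql -U postgres -c "CREATE DATABASE tenant_1;"
docker exec -it postgres psql -U postgres -c "CREATE DATABASE tenant_2;"
```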
However, if you want to do this as part of your container setup, you could do it like follows. Create a new project directory with these files:
File create-databases.sh
#!/bin/bash
set -eu
function create_database() {
local database=$1
echo " Creating database '$database'"
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE DATABASE $database;
GRANT ALL PRIVILEGES ON DATABASE $database TO $POSTGRES_USER;
EOSQL
}
create_database $POSTGRES_DB2
create_database $POSTGRES_DB3
File docker-compose.yml
services:
postgres:
image: postgres:latest
container_name: postgres
ports:
- "5433:5432"
restart: unless-stopped
volumes:
- ./create-databases.sh:/docker-entrypoint-initdb.d/create-databases.sh
- ./db_persist:/var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: tenant_master
POSTGRES_DB2: tenant_1
POSTGRES_DB3: tenant_2
While this is not best practice (I'm not a Postgres user) it will get you started. By using volumes, an init script is put into a folder where it is picked up by container initialization and the database is persisted, just in case. I left out the health-check for better clarity. It can be re-added of course.
Having said this, there would be another solution: just duplicate the postgres service (postgres2, postgres3) with different ports (5434, 5435). Although this would run 3 independent database instances (and needs more resources), it might have its advantages as well.
As regards the restart policy: instead of "always" you might consider "unless-stopped" (see https://docs.docker.com/engine/reference/run/#restart-policies---restart), as is done in my suggestion above.
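For reference, the four restart policies from the linked Docker docs are (shown as alternative lines, not one mapping):

```yaml
restart: "no"            # default: never restart automatically (quoted, since YAML reads bare no as false)
restart: always          # always restart; comes back up with the daemon even after a manual stop
restart: on-failure      # restart only if the container exits with a non-zero status
restart: unless-stopped  # like always, but stays down after a manual stop
```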

Postgres, Local installed instance interfered with Docker instance

Windows 10-Pro. Have a local Postgres installed and working fine.
With it running, VSC terminal, docker-compose up the following ok:
version: '3.8'
services:
postgres:
image: postgres:10.4.2
ports:
- '5432:5432'
volumes:
- ./sql:/docker-entrypoint-initdb.d
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass1
POSTGRES_DB: db
But the PSQL shell always complains that password authentication failed for the user.
After stopping the Postgres service from Windows Services and running docker-compose up, PSQL shell authentication and queries work OK. But the VSC terminal keeps complaining about another thing:
FATAL: password authentication failed for user "postgres"
DETAIL: User "postgres" has no password assigned.
Connection matched pg_hba.conf line 95: "host all all all md5"
How do I stop the above error when the docker container's instance is running? Also, is it possible to co-run both the local and Docker instances?
Hope you are enjoying your containers journey!
I tried to execute your docker-compose as it was, but could not fetch the postgres:10.4.2 image:
❯ docker-compose up
[+] Running 0/1
⠿ postgres Error 2.1s
Error response from daemon: manifest for postgres:10.4.2 not found: manifest unknown: manifest unknown
So I decided to use postgres:14.2 instead. Since I don't have your SQL script, I'll comment out the volume section.
Here is what my docker-compose looks like:
version: '3.8'
services:
postgres:
image: postgres:14.2
ports:
- '5432:5432'
# volumes:
# - ./sql:/docker-entrypoint-initdb.d
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass1
POSTGRES_DB: db
So, when I execute the compose, I get this:
❯ docker-compose up -d
[+] Running 1/1
⠿ Container postgre-local-and-dockercompose-71984505-postgres-1 Started
❯ docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                    NAMES
4b90573f6108   postgres:14.2   "docker-entrypoint.s…"   18 seconds ago   Up 15 seconds   0.0.0.0:5432->5432/tcp   postgre-local-and-dockercompose-71984505-postgres-1
When I connect to the container with:
❯ docker exec -it postgre-local-and-dockercompose-71984505-postgres-1 bash
root@4b90573f6108:/#
and execute this command to connect to the created DB with the "pass1" password:
root@4b90573f6108:/# psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=$POSTGRES_DB
Password:
psql (14.2 (Debian 14.2-1.pgdg110+1))
Type "help" for help.
db=#
everything is fine.
So I advise you to use the same postgres:14.2 image I tried with (patched with the latest security fixes) and do the same test.
If you want me to test exactly what you are doing, just send your SQL scripts.
To answer your second question: yes, it is possible to co-run both local and Docker Postgres instances.
You just have to map the PostgreSQL port of your container to another host port, like this:
version: '3.8'
services:
postgres:
image: postgres:14.2
ports:
- '5433:5432'
# volumes:
# - ./sql:/docker-entrypoint-initdb.d
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass1
POSTGRES_DB: db
Since there is no port conflict (your local DB is running on 5432 and your Docker DB on 5433), everything will work fine (I used DBeaver to try to connect):
PERFECT !
Hope I answered your questions.
bguess.

Reusing postgresql database from volume in docker-compose

When I created volume in Docker using command:
docker volume create pg-data
Then I set up basic postgresql database from postgres image:
docker run --rm -v pg-data:/var/lib/postgresql/data --name pg-docker -e POSTGRES_PASSWORD=docker -p 5433:5432 postgres
Everything worked fine. The database persists and I can even access it directly from the host. I created several roles here, like app_user_1.
Then I wanted to spin up PostgreSQL in a container using docker-compose. I shut down the above PostgreSQL container beforehand.
I have this setting:
version: '3.7'
services:
db:
image: postgres
volumes:
- pg-data:/var/lib/postgresql/data/
expose:
- 5432
restart: always
environment:
- POSTGRES_PASSWORD=docker
- POSTGRES_USER=postgres
web:
build: .
volumes:
- ./app:/app
ports:
- 8001:8000
environment:
- ENVIRONMENT=dev
- TESTING=0
depends_on:
- db
volumes:
pg-data:
However, it seems that even though I mapped the same volume and used the same env settings as in the docker run command, the PostgreSQL instance in the container created with docker-compose has no databases and no roles at all.
I get the following error:
psql: error: FATAL: role "postgres" does not exist
or
psql: error: FATAL: role "app_user_1" does not exist
So it behaves as though it is a different instance of PostgreSQL.
When I restarted the first container with docker run everything was there (all the databases and roles).
Any idea why this is happening? How can I reuse the databases from the first container in the docker-compose?
You need to define the volume you wish to use (the one you created manually with docker volume create) as external to docker-compose, since it was created externally.
This is because volumes created by docker-compose are 'internal' to it, so ones created by plain docker are 'external'. =)
Ref the official docs at https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
The change to your compose file would be as follows:
...
volumes:
pg-data:
external: true
(Just that last line)
Hope that helps! =)
Additional Note
You can confirm this by running docker volume ls | grep pg-data, which lists all volumes and then shows only the ones referencing 'pg-data'.
On my system where I was testing before I gave my answer, I get the following:
docker volume ls | grep pg-data
local pg-data
local postgresstackoverflow_pg-data
As you can see, the docker volume create one is listed first, as a local volume called 'pg-data', then the docker-compose.yml created one is next prefixed by the naming convention of docker-compose with the directory name that it was in at the time.