how to start postgres prior to running docker - postgresql

I am super embarrassed to ask this question, because it seems a very basic one, but somehow I can't find the answer in docs.
I have a django app that uses postgres. In docker-compose.yaml there is the following requirement:
version: "2"
services:
database:
image: postgres:9.5
environment:
POSTGRES_DB: ${POSTGRES_DATABASE}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DATA: /var/lib/postgresql/data/pgdata
when I run my docker image:
docker run -it --name myapp myimage
it keeps repeating:
The database is not ready.
wait for postgres to start...
I ran postgres in detached mode: docker run -it -d postgres:9.5, but it does not help.

With Docker Compose 2.1 syntax, you can specify healthchecks to control container start-up:
version: '2.1'
services:
application:
depends_on:
database:
condition: service_healthy
Check out https://github.com/docker-library/healthcheck/tree/master/postgres for an example Dockerfile for building Postgres with healthchecks.
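For reference, here is a sketch of what the database side of such a file could look like; the service name and image are the ones from the question, and pg_isready ships with the official postgres image, so the healthcheck can be as simple as:
  database:
    image: postgres:9.5
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5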

Please have a look at this doc.
The second example is exactly what you need:
You create a shell script and add it to your app container using ADD or COPY:
#!/bin/bash
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
Then you modify your docker-compose.yaml like this:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-postgres.sh", "db", "python", "app.py"]
db:
image: postgres
With command you're overriding the default command of your container.
Of course the "python", "app.py" part depends on how you start your app.
For Java it would be, for example, "java", "-jar", "my-app.jar", etc.
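For completeness, the COPY step mentioned above could look like this in your app's Dockerfile (the destination path is illustrative and matches the ./wait-for-postgres.sh used in the compose command). Note the script calls psql, so the postgres client tools must also be installed in the app image:
# sketch: copy the wait script into the image and make it executable
COPY wait-for-postgres.sh ./wait-for-postgres.sh
RUN chmod +x ./wait-for-postgres.sh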

Everything is OK with the way you start the container. The issue you are experiencing is that the DB and the APP containers start one right after the other, but the DB container still needs to prepare the DB, run the migrations, and so on. By the time your app container tries to reach the DB container, it is not ready yet, which is why you get this error.
You have 2 choices: either edit your APP Dockerfile and add a wait of 30-60 s or so, or start the DB on its own, wait for it to be ready, and then start your APP container.
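As a rough sketch of the fixed-wait option in the app's Dockerfile (the 30 seconds and the Django start command are placeholders; adjust them to how your app actually starts):
# crude fixed wait: give the DB a head start before launching the app
CMD sh -c "sleep 30 && python manage.py runserver 0.0.0.0:8000"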

Related

Getting 'could not translate host name "db" to address' when trying to create database migration

I'm using FastAPI and postgresql to build a simple python api backend. When I try to create the database migration file (using Alembic), this error occurs:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
I have searched a lot about this problem, and it seems to be caused by my docker config, but I couldn't figure it out. Here is my docker-compose code:
version: '3.7'
services:
  web:
    build: ./src
    command: sh -c "alembic upgrade head && uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000"
    volumes:
      - ./src/:/usr/src/app/
    ports:
      - 8002:8000
    environment:
      - DATABASE_URL=postgresql://user:pass@db/my_db
  db:
    image: postgres:12.1-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=my_db
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 5432:5432
volumes:
  postgres_data:
I'm also puzzled because another project of mine on my local machine (the same setup with FastAPI, postgres, and docker) works fine and doesn't have any similar problem.
What should I check and change to fix the problem?
P.S: I'm a beginner in docker.
Here's what's happening:
Your web container is trying to connect to db, but db is not up yet, in simple words web cannot "see" db.
The short answer: start db first: docker-compose up -d db then start web: docker-compose up web or whatever you are running.
Now, the long answer. If you proceed with the short answer, that's ok, but it's cumbersome if you ask me. You could instead change the version to 2.x, e.g. 2.4, and add depends_on to web, for example:
version: '2.4'
services:
  web:
    ...
    depends_on:
      - db
  db:
    image: ...
    ...
If you just run docker-compose up ..., docker will start db first.
Now, you may hit another problem: postgres may not be ready to receive connections. In this case you will have to wait for it to be ready. You can achieve this by retrying the db connection on error (I don't remember the exact error; you will have to check the docs) or by using something like pg_isready. Assuming this is a dev environment, you can add it to your Dockerfile by installing postgresql-client and changing your command to:
command: >
  sh -c "
  until pg_isready -q -h db; do sleep 1; done
  &&
  alembic upgrade head
  &&
  uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000"
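For pg_isready to exist in the web image, the postgres client has to be installed there. A sketch of the Dockerfile change, assuming a Debian-based image (the official python images are):
# assumption: Debian-based base image; on Alpine use `apk add --no-cache postgresql-client` instead
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*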

Docker compose cannot run command service not running

I created a docker-compose.yml, whose content you can find below. I navigated to the folder where the file resides and ran this command:
docker-compose up -d
This was shown:
Starting postgres ... done
Then I ran this command:
docker-compose ps
Result:
  Name               Command               State    Ports
----------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Exit 1
Next I wanted to run this command:
docker exec -it postgres psql -h localhost -p 54320 -U robert
This is what I get:
Error response from daemon: Container ae1565a84bcf0b3662b47d4f277efd2830273554b6bcf4437129e33b31c88b35 is not running
Is my container not running? Please help.
docker-compose.yml:
version: "3"
services:
# Create a service named db.
db:
# Use the Docker Image postgres. This will pull the newest release.
image: "postgres"
# Give the container the name my_postgres. You can changes to something else.
container_name: "postgres"
# Setup the username, password, and database name. You can changes these values.
environment:
- POSTGRES_USER=robert
- POSTGRES_PASSWORD=robert
- POSTGRES_DB=mydb
# Maps port 54320 (localhost) to port 5432 on the container. You can change the ports to fix your needs.
ports:
- "54320:5432"
# Set a volume some that database is not lost after shutting down the container.
# I used the name postgres-data but you can changed it to something else.
volumes:
- ./volumes/postgres:/var/lib/postgresql/data
Can you try this instead of exec:
docker run -it postgres psql -h localhost -p 54320 -U robert
$ docker exec --help
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Since your container has the status exit, you can't use docker exec
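Before anything else, it's worth checking why the container exited; its logs usually show the startup error (the container name here is the one set via container_name in your compose file):
docker logs postgres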
Can you use this docker-compose file?
version: "3"
volumes:
postgres_app: ~
services:
# Create a service named db.
postgres:
image: "postgres"
environment:
POSTGRES_USER: robert
POSTGRES_PASSWORD: robert
POSTGRES_DB: "mydb"
volumes:
- "postgres_app:/var/lib/postgresql/data"
ports:
- "54320:5432"
restart: always
And then this command: docker-compose exec postgres psql -U robert -d mydb
I hope this will help!
I executed this file on my own computer.

Add custom config location to Docker Postgres image preserving its access parameters

I have written a Dockerfile like this:
FROM postgres:11.2-alpine
ADD ./db/postgresql.conf /etc/postgresql/postgresql.conf
CMD ["-c", "config_file=/etc/postgresql/postgresql.conf"]
It just adds a custom config location to the generic Postgres image.
Now I have the following docker-compose service description
db:
  build:
    context: .
    dockerfile: ./db/Dockerfile
  environment:
    POSTGRES_PASSWORD: passwordhere
    POSTGRES_USER: user
    POSTGRES_DB: db_name
  ports:
    - 5432:5432
  volumes:
    - ./run/db-data:/var/lib/db/data
The problem is that I can no longer connect remotely to the DB using these credentials once I add this config option. Without that CMD line it works just fine.
If I prepend "postgres" to the CMD it has the same effect, since the underlying entrypoint script prepends it itself.
Provided all the files are where they need to be, I believe the only problem with your setup is that you've omitted an actual executable from the CMD -- specifying just the option. You need to actually run postgres:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
That should work!
EDIT in response to OP's first comment below
First, I did confirm that behavior doesn't change whether "postgres" is in the CMD or not. It's exactly as you said. Onward!
Then I thought there must be a problem with the particular postgresql.conf in use. If we could just figure out what the default file is.. turns out we can!
How to get the existing postgres.conf out of the postgres image
1. Create docker-compose.yml with the following contents:
version: "3"
services:
db:
image: postgres:11.2-alpine
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
2. Spin up the service using
$ docker-compose run --rm --name=postgres db
3. In another terminal get the location of the file used in this release:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SHOW config_file"
                config_file
-------------------------------------------
 /var/lib/postgresql/data/postgresql.conf
(1 row)
4. View the contents of default postgresql.conf
$ docker exec -it postgres cat /var/lib/postgresql/data/postgresql.conf
5. Replace local config file
Now all we have to do is replace the local config file ./db/postgresql.conf with the contents of the known-working-state config and modify it as necessary.
Database objects are only created once!
Database objects are only created once by the postgres container (source). So when developing the database parameters we have to remove them to make sure we're in a clean state.
Here's a nuclear (be careful!) option to
(1) remove all exited Docker containers, and then
(2) remove all Docker volumes not attached to containers:
$ docker rm $(docker ps -a -q) -f && docker volume prune -f
So now we can be sure to start from a clean state!
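A less drastic alternative, scoped to the current project only, is to let Compose remove its own containers together with the named volumes it declared:
$ docker-compose down -v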
Final setup
Let's bring our Dockerfile back into the picture (just like you have in the question).
docker-compose.yml
version: "3"
services:
db:
build:
context: .
dockerfile: ./db/Dockerfile
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
Connect to the db
Now all we have to do is build from a clean state.
# ensure all volumes are deleted (see above)
$ docker-compose build
$ docker-compose run --rm --name=postgres db
We can now (still) connect to the database:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SELECT COUNT(1) FROM pg_database WHERE datname='db_name'"
Finally, we can edit the postgres.conf from a known working state.
As per this other discussion, your CMD command only has arguments and is missing a command. Try:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

Docker volume doesn't keep data after turned docker-compose down

I am using docker compose to combine 2 images (tomcat with my app and database - postgres).
My compose file looks like this:
version: '3'
services:
  tomcat:
    build: ./tomcat-img
    ports:
      - "8080:8080"
    depends_on:
      - "db"
  db:
    build: ./db-img
    volumes:
      - db-data:/var/lib/postgres/data
    ports:
      - "5433:5432"
volumes:
  db-data:
and here is dockerfile for database image:
FROM postgres:9.5-alpine
ENV POSTGRES_DB mydb
ENV POSTGRES_USER xxxx
ENV POSTGRES_PASSWORD xxxx
COPY init-db.sql /docker-entrypoint-initdb.d/
EXPOSE 5432
CMD ["postgres"]
Next I started my containers with the docker-compose CLI: docker-compose -f docker-compose.yml up
and ran the psql tool with:
docker exec -it container_id psql -d xxxx -U xxxx
and inserted a new record. After that I checked that it was really there:
select * from my_table;
After that I stopped docker compose and removed the containers with:
docker-compose -f docker-compose.yml down
and started it again:
docker-compose -f docker-compose.yml up
When I ran the psql tool of the db container again and selected the data in my_table, the previously inserted record was gone... Can you help me fix it, please? I need to init my db with init-db.sql just once and then use that persisted storage. Thanks for any answers.
In my dockerized PostgreSQL setup with a data volume, I bind to /var/lib/postgresql and not to /var/lib/postgres/data. Try changing your compose file to:
volumes:
  - db-data:/var/lib/postgresql
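If in doubt, you can ask the running container where its data directory actually is, mirroring the SHOW trick used elsewhere in this thread (the xxxx credentials are the placeholders from your Dockerfile):
docker exec -it container_id psql -U xxxx -d mydb -c "SHOW data_directory"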

Docker wait for postgresql to be running

I am using postgresql with django in my project. They run in different containers, and the problem is that I need to wait for postgres before running django. At the moment I am doing it with sleep 5 in the command.sh file for the django container. I also found that netcat can do the trick, but I would prefer a way without additional packages. curl and wget can't do this because they do not support the postgres protocol.
Is there a way to do it?
I've spent some hours investigating this problem and I got a solution.
Docker depends_on only considers service startup when ordering containers. As soon as db is started, the app service tries to connect to your db, but the db is not yet ready to receive connections. So you can check the db health status in the app service and wait for the connection. Here is my solution; it solved my problem. :)
Important: I'm using docker-compose version 2.1.
version: '2.1'
services:
  my-app:
    build: .
    command: su -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    volumes:
      - .:/app_directory
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
    volumes:
      - database:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  database:
In this case it's not necessary to create a .sh file.
This will successfully wait for Postgres to start (specifically, the command line below does the waiting). Just replace npm start with whatever command you'd like to run after Postgres has started.
services:
  practice_docker:
    image: dockerhubusername/practice_docker
    ports:
      - 80:3000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/practicedocker
      - PORT=3000
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=practicedocker
If you have psql you could simply add the following code to your .sh file:
RETRIES=5

until psql -h $PG_HOST -U $PG_USER -d $PG_DATABASE -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
  echo "Waiting for postgres server, $((RETRIES--)) remaining attempts..."
  sleep 1
done
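One thing the snippet leaves implicit: the loop also exits when the retries run out, even if the server never answered. A sketch of an explicit failure check to place after the loop:
# sketch: probe once more and fail loudly if postgres still isn't answering
if ! psql -h $PG_HOST -U $PG_USER -d $PG_DATABASE -c "select 1" > /dev/null 2>&1; then
  echo "Postgres did not become available in time" >&2
  exit 1
fi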
The simplest solution is a short bash script:
while ! nc -z HOST PORT; do sleep 1; done;
./run-smth-else;
The problem with your solution, tiziano, is that curl is not installed by default and I wanted to avoid installing additional stuff. Anyway, I did what bereal suggested. Here is the script, if anyone needs it:
import socket
import time
import os

port = int(os.environ["DB_PORT"])  # 5432

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

while True:
    try:
        s.connect(('myproject-db', port))
        s.close()
        break
    except socket.error as ex:
        time.sleep(0.1)
In your Dockerfile add wait and change your start command to use it:
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
CMD /wait && npm start
Then, in your docker-compose.yml add a WAIT_HOSTS environment variable for your api service:
services:
  api:
    depends_on:
      - postgres
    environment:
      WAIT_HOSTS: postgres:5432
  postgres:
    image: postgres
    ports:
      - "5432:5432"
This has the advantage that it supports waiting for multiple services:
environment:
  WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017
For more details, please read their documentation.
wait-for-it is a small wrapper script which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
It can be cloned in the Dockerfile with the command below:
RUN git clone https://github.com/vishnubob/wait-for-it.git
docker-compose.yml
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it/wait-for-it.sh", "db:5432", "--", "npm", "start"]
db:
image: postgres
Why not curl?
Something like this:
# Postgres doesn't speak HTTP, so once the port starts accepting connections
# curl reports "curl: (52) Empty reply from server" - grep '52' detects that.
while ! curl http://$POSTGRES_PORT_5432_TCP_ADDR:$POSTGRES_PORT_5432_TCP_PORT/ 2>&1 | grep '52'
do
  sleep 1
done
It works for me.
I managed to solve my issue by adding a health check to the docker-compose definition.
db:
  image: postgres:latest
  ports:
    - 5432:5432
  healthcheck:
    test: "pg_isready --username=postgres && psql --username=postgres --list"
    timeout: 10s
    retries: 20
then in the dependent service you can check the health status:
my-service:
  image: myApp:latest
  depends_on:
    kafka:
      condition: service_started
    db:
      condition: service_healthy
source: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
If the backend application itself has a PostgreSQL client, you can use the pg_isready command in an until loop. For example, suppose we have the following project directory structure,
.
├── backend
│   └── Dockerfile
└── docker-compose.yml
with a docker-compose.yml
version: "3"
services:
postgres:
image: postgres
backend:
build: ./backend
and a backend/Dockerfile
FROM alpine
RUN apk update && apk add postgresql-client
CMD until pg_isready --username=postgres --host=postgres; do sleep 1; done \
    && psql --username=postgres --host=postgres --list
where the 'actual' command is just a psql --list for illustration. Running docker-compose build and then docker-compose up shows that the result of the psql --list command only appears after pg_isready logs postgres:5432 - accepting connections, as desired.
By contrast, I have found that the nc -z approach does not work consistently. For example, if I replace the backend/Dockerfile with
FROM alpine
RUN apk update && apk add postgresql-client
CMD until nc -z postgres 5432; do echo "Waiting for Postgres..." && sleep 1; done \
&& psql --username=postgres --host=postgres --list
then docker-compose build followed by docker-compose up shows the psql command throwing a FATAL error that the database system is starting up.
In short, using an until pg_isready loop (as also recommended here) is the preferable approach IMO.
There are a couple of solutions, as the other answers mention.
But don't make it complicated: just let it fail fast, combined with restart: on-failure. Your service will open a connection to the db and may fail the first time. Just let it fail. Docker will restart your service until it goes green. Keep your service simple and business-focused.
version: '3.7'
services:
  postgresdb:
    hostname: postgresdb
    image: postgres:12.2
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=Ceo
  migrate:
    image: hanh/migration
    links:
      - postgresdb
    environment:
      - DATA_SOURCE=postgres://user:secret@postgresdb:5432/Ceo
    command: migrate sql --yes
    restart: on-failure # will restart until it succeeds
Check out restart policies.
None of the other solutions worked for me, except the following:
version: '3.8'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydbname
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "mydbname", "-U", "myusername"]
      interval: 5s
      timeout: 5s
      retries: 5
  otherservice:
    image: otherserviceimage
    depends_on:
      postgres:
        condition: service_healthy
Thanks to this thread: https://github.com/peter-evans/docker-compose-healthcheck/issues/16
Sleeping until pg_isready returns true is unfortunately not always reliable. If your postgres container has at least one initdb script specified, postgres restarts after it is started during its bootstrap procedure, so it might not be ready yet even though pg_isready has already returned true.
What you can do instead, is to wait until docker logs for that instance return a PostgreSQL init process complete; ready for start up. string, and only then proceed with the pg_isready check.
Example:
start_postgres() {
  docker-compose up -d --no-recreate postgres
}

wait_for_postgres() {
  until docker-compose logs | grep -q "PostgreSQL init process complete; ready for start up." \
      && docker-compose exec -T postgres sh -c "PGPASSWORD=\$POSTGRES_PASSWORD PGUSER=\$POSTGRES_USER pg_isready --dbname=\$POSTGRES_DB" > /dev/null 2>&1; do
    printf "\rWaiting for postgres container to be available ... "
    sleep 1
  done
  printf "\rWaiting for postgres container to be available ... done\n"
}

start_postgres
wait_for_postgres
You can use the manage.py command "check" to check if the database is available (and wait 2 seconds if not, and check again).
For instance, if you do this in your command.sh file before running the migration, Django has a valid DB connection while running the migration command:
...
echo "Waiting for db.."
python manage.py check --database default > /dev/null 2> /dev/null
until [ $? -eq 0 ]; do
  sleep 2
  python manage.py check --database default > /dev/null 2> /dev/null
done
echo "Connected."

# Migrate the last database changes
python manage.py migrate
...
PS: I'm not a shell expert, please suggest improvements.
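One possible tightening of the loop above, testing the command directly instead of inspecting $? (just a sketch of the same logic):
echo "Waiting for db.."
until python manage.py check --database default > /dev/null 2>&1; do
  sleep 2
done
echo "Connected."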
#!/bin/sh

POSTGRES_VERSION=9.6.11
CONTAINER_NAME=my-postgres-container

# start the postgres container
docker run --rm \
  --name $CONTAINER_NAME \
  -e POSTGRES_PASSWORD=docker \
  -d \
  -p 5432:5432 \
  postgres:$POSTGRES_VERSION

# wait until postgres is ready to accept connections
until docker run \
  --rm \
  --link $CONTAINER_NAME:pg \
  postgres:$POSTGRES_VERSION pg_isready \
  -U postgres \
  -h pg; do sleep 1; done
An example for a Node.js and Postgres API:
#!/bin/bash
# entrypoint.dev.sh

echo "Waiting for postgres to get up and running..."

while ! nc -z postgres_container 5432; do
  # where postgres_container is the host; in my case it is a Docker container.
  # You can use localhost, for example, if your database is running locally.
  echo "waiting for postgres to be listening..."
  sleep 0.1
done

echo "PostgreSQL started"

yarn db:migrate
yarn dev
# Dockerfile
FROM node:12.16.2-alpine
ENV NODE_ENV="development"
RUN mkdir -p /app
WORKDIR /app
COPY ./package.json ./yarn.lock ./
RUN yarn install
COPY . .
CMD ["/bin/sh", "./entrypoint.dev.sh"]
If you want to run it with a single-line command, you can just connect to the container and check if postgres is running:
docker exec -it $DB_NAME bash -c "\
  until psql -h $HOST -U $USER -d $DB_NAME -c 'select 1' > /dev/null 2>&1; do \
    echo 'Waiting for postgres server....'; \
    sleep 1; \
  done; \
  exit; \
"
echo "DB Connected !!"
Inspired by @tiziano's answer and lacking nc or pg_isready, it seems that in a recent Docker Python image (python:3.9 here) curl is installed by default, and I have the following check running in my entrypoint.sh:
postgres_ready() {
  $(which curl) http://$DBHOST:$DBPORT/ 2>&1 | grep '52'
}

until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available.'
I tried a lot of methods: a Dockerfile, docker-compose YAML, bash scripts. Only the last method helped me: a Makefile.
docker-compose up --build -d postgres
sleep 2
docker-compose up --build -d app
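Wrapped in a Makefile target, the same sequence could look like this (the target name is illustrative, and recipe lines must be indented with tabs):
up:
	docker-compose up --build -d postgres
	sleep 2
	docker-compose up --build -d app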