Connect to postgresql container from another container (Docker)

I am trying to follow this tutorial and set up a postgresql container.
I have the following script:
#!/bin/bash
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
And the following docker-compose.yml:
version: '2'
services:
  server:
    build: .
    ports:
      - 3030:3030
    depends_on:
      - database
    command: ["./setup/wait-for-postgres.sh", "localhost:5432", "--", "node", "src"]
  database:
    image: postgres
    environment:
      - "POSTGRES_USER=postgres"
      - "POSTGRES_PASSWORD=postgres"
      - "POSTGRES_DB=tide_server"
    ports:
      - 5432:5432
The problem is that when I run docker-compose up I get the following error:
server_1 | Postgres is unavailable - sleeping
server_1 | psql: could not translate host name "192.168.64.2:5432" to address: Name or service not known
server_1 | Postgres is unavailable - sleeping
server_1 | psql: could not translate host name "192.168.64.2:5432" to address: Name or service not known
server_1 | Postgres is unavailable - sleeping
server_1 | psql: could not translate host name "192.168.64.2:5432" to address: Name or service not known
Now I have tried setting the host as database, localhost, 0.0.0.0, and even the container's IP, but nothing works. I have no idea what it should be or how to debug it; I am not 100% sure how docker-compose links the containers.

Do not use depends_on; try it with links:
version: '2'
services:
  server:
    build: .
    ports:
      - 3030:3030
    links:
      - database
    # environment could be useful too
    environment:
      DATABASE_HOST: database
    command: ["./setup/wait-for-postgres.sh", "localhost:5432", "--", "node", "src"]
  database:
    image: postgres
    environment:
      - "POSTGRES_USER=postgres"
      - "POSTGRES_PASSWORD=postgres"
      - "POSTGRES_DB=tide_server"
    ports:
      - 5432:5432
For more information: https://docs.docker.com/compose/compose-file/#links
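As a quick sanity check that the name resolves (a sketch using the service names from the compose file above; getent is available in glibc-based images such as the node one, and pg_isready ships in the postgres image):

docker-compose exec server getent hosts database      # should print the database container's IP
docker-compose exec database pg_isready -U postgres   # is postgres accepting connections?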

This may be an old thread to answer, but I have been using depends_on with the following docker-compose file:
version: '3.4'

volumes:
  postgres_data:
    driver: local

services:
  postgres:
    image: postgres
    volumes:
      - ./postgres_data:/var/lib/postgresql:rw
      - ./deployments:/opt/jboss/wildfly/standalone/deployments:rw
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
    ports:
      - 5432:5432
  keycloak:
    image: jboss/keycloak
    environment:
      POSTGRES_ADDR: postgres
      POSTGRES_DATABASE: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: Pa55w0rd
    ports:
      - 8080:8080
      - 9990:9990
    depends_on:
      - postgres

The tutorial skips over a few things, and is confusing in that it mentions the wait-for-it.sh script, but then shows a much simplified version that doesn't work if you pass hostname:port as one argument to it.
I had a crack at getting this to work, and both for future me and for others I will add the steps below. I did this on macOS, with docker, docker-compose, and nodejs installed.
I don't have your node app handy, so I used the one described here: https://nodejs.org/de/docs/guides/nodejs-docker-webapp/
I have the following directory structure:
/src/package.json
/src/server.js
/.pgpass
/docker-compose.yml
/Dockerfile
/wait-for-postgres.sh
The contents of these files is listed below.
Steps
From the ./src directory run $ npm install (creates package-lock.json)
Fix pgpass permissions with $ chmod 600 .pgpass
Make the script executable $ chmod +x wait-for-postgres.sh
From the root directory $ docker-compose up
It will pull the postgres image and build the node app container.
When that's done it will wait for postgres and when postgres is up you'll see it ready.
Files
The src files are exactly as per the node js dockerize link above
/src/package.json
{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <first.last@example.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}
/src/server.js
'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
.pgpass
This uses the username:password postgres:postgres and is purely for development demo purposes. In the wild you will use some other method of secrets management, and never, ever commit a pgpass file to version control.
#host:port:db:user:pass
db:5432:*:postgres:postgres
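To sanity-check an entry like this, you can point psql at the file explicitly (a sketch; host db and port 5432 just mirror the line above, and with PGPASSFILE unset psql falls back to ~/.pgpass):

chmod 600 .pgpass   # psql silently ignores the file if permissions are looser
PGPASSFILE=./.pgpass psql -h db -p 5432 -U postgres -c '\conninfo'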
docker-compose.yml
I have added the wait-for-postgres.sh script as a mounted volume; in the original question it was bundled in with the app src, which was weird.
I have also mounted the .pgpass file in the root user's home directory, which psql will look in for auto-password completion. If you don't have some method of supplying this then you'll get an error:
psql: fe_sendauth: no password supplied
Notice the command for the server container refers to database, which is a valid docker-compose internal DNS name for the postgres container.
version: '2'
services:
  server:
    build: .
    ports:
      - 3030:3030
    depends_on:
      - database
    volumes:
      - ./wait-for-postgres.sh:/usr/app/setup/wait-for-postgres.sh
      - ./.pgpass:/Users/root/.pgpass
    command: ["/usr/app/setup/wait-for-postgres.sh", "database", "--", "node", "src"]
  database:
    image: postgres
    environment:
      - "POSTGRES_USER=postgres"
      - "POSTGRES_PASSWORD=postgres"
      - "POSTGRES_DB=tide_server"
    ports:
      - 5432:5432
Dockerfile
I have modified this from the node js tutorial, pinning it to the Debian "buster" version and also installing psql which it needs for that script.
FROM node:10-buster

RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8

RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" > /etc/apt/sources.list.d/pgdg.list && \
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -

RUN apt-get -y update && \
    apt-get -y install libpq-dev && \
    apt-get -y install postgresql-client-11
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
wait-for-postgres.sh
I have modified the script very slightly because I ran the "shellcheck" linter and it complained about a few things. I realise this script is from the docker tutorial page.
#!/bin/bash
# wait-for-postgres.sh
set -e

host="$1"
shift

export PGPASSFILE=./pgpass

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec "$@"

The problem here is the host itself.
psql -h "$host" -U "<USER>" -c '\l'
You are passing a wrong hostname: "localhost:5432" / "192.168.64.2:5432".
What I did was set up a ~/.pgpass that has:
localhost:5432:DB:USER:PASSWORD
Then, instead of passing "localhost:5432", omit the port and just use "localhost".
This works for me ...
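In other words, host and port are separate psql flags; a minimal sketch of the corrected call (service name database assumed from the compose file):

psql -h "database:5432" -U postgres -c '\l'   # wrong: the whole string is treated as a hostname
psql -h database -p 5432 -U postgres -c '\l'  # right: port passed separately (5432 is also the default)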

Related

problem with postgres docker container inside Gitlab CI

I've been blocked on this problem with my project for a few days; it works on localhost but not on GitLab CI.
I would like to build a test database on the postgres docker image in GitLab CI, but it doesn't work. I have tried a lot of things and lost a lot of hours before asking here :'(.
Below is my docker-compose.yml file:
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx
depends_on:
- postgres
- monapp
volumes:
- ./nginx-conf:/etc/nginx/conf.d
- ./util/certificates/certs:/etc/nginx/certs/localhost.crt
- ./util/certificates/private:/etc/nginx/certs/localhost.key
ports:
- 81:80
- 444:443
networks:
- monreseau
monapp:
image: monimage
container_name: monapp
depends_on:
- postgres
ports:
- "3000:3000"
networks:
- monreseau
command: "npm run local"
postgres:
image: postgres:9.6
container_name: postgres
environment:
POSTGRES_USER: postgres
POSTGRES_HOST: postgres
POSTGRES_PASSWORD: postgres
volumes:
- ./pgDatas:/var/lib/postgresql/data/
- ./db_dumps:/home/dumps/
ports:
- "5432:5432"
networks:
- monreseau
networks:
monreseau:
And below is my gitlab-ci.yml file:
stages:
  # - build
  - test

image:
  name: docker/compose:latest

services:
  - docker:dind

before_script:
  - docker version
  - docker-compose version

variables:
  DOCKER_HOST: tcp://docker:2375/

# build:
#   stage: build
#   script:
#     - docker build -t monimage .
#     - docker-compose up -d

test:
  stage: test
  script:
    - docker build -t monimage .
    - docker-compose up -d
    - docker ps
    - docker exec -i postgres psql -U postgres -h postgres -f /home/dumps/test/dump_test_001 -c \\q
    - exit
    - docker exec -i monapp ./node_modules/.bin/env-cmd -f ./env/.env.builded-test npx jasmine spec/auth_queries.spec.js
    - exit
This is the content of the docker ps log on the GitLab CI server:
[screenshot: docker ps output on GitLab CI]
I thought that putting postgres as the host would work, but no; I always get this in the GitLab CI terminal:
psql: could not connect to server: Connection refused
Is the server running on host "postgres" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
I also tried putting docker as the host, but I get an error:
psql: could not translate host name "docker" to address: Name or service not known
One precision: it works on my computer's localhost when I run make builded-test.
Below is my makefile:
builded-test:
	docker build -t monimage .
	docker-compose up -d
	docker ps
	docker exec -i postgres psql -U postgres -h postgres -f /home/dumps/test/dump_test_001 -c \\q
	exit
	docker exec -i monapp ./node_modules/.bin/env-cmd -f ./env/.env.builded-test npx jasmine spec/auth_queries.spec.js
	exit
	docker-compose down
I want to make the postgres image in my docker-compose work on GitLab CI to execute my tests. Help me please :) Thanks in advance.
UPDATE
Now it is working in gitlab-runner, but still not on GitLab when I push. I updated the files as follows.
I added:
variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""
  POSTGRES_HOST_AUTH_METHOD: trust
and changed:
test:
  stage: test
  script:
    - docker build -t monimage .
    - docker-compose up -d
    - docker ps
    - docker exec postgres psql -U postgres -h postgres -f /home/dumps/test/dump_test_001
    - docker exec monapp ./node_modules/.bin/env-cmd -f ./env/.env.builded-test npx jasmine spec/auth_queries.spec.js
in the .gitlab-ci.yml
But it still doesn't work when I push to GitLab; it gives me:
psql: could not connect to server: Connection refused
Is the server running on host "postgres" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
Any ideas? :)
Maybe you need to wait for the PostgreSQL service to be up and running.
Can you add a 10-second delay before trying the psql stuff? Something like:
- sleep 10
If it works, then you can use a more specific solution to wait for PostgreSQL to be initialized, like Docker wait for postgresql to be running
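For instance, a bounded wait before the psql step might look like this (a sketch; it assumes the postgres container is named postgres, as in the compose file, and pg_isready ships in the postgres image):

# retry pg_isready inside the postgres container for up to 30 seconds
for i in $(seq 1 30); do
  docker exec postgres pg_isready -U postgres && break
  echo "Postgres not ready yet ($i)"; sleep 1
done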

Connecting to a MongoCryptD instance in docker environment with Mongoose

I've searched over the web but couldn't find my answer anywhere.
I'm trying to run an API web service, using NestJS framework.
I'm running docker-compose that spins up the API server, a MongoDB instance, and a mongocryptd instance to allow Client-Side Field Level Encryption on my app.
I'm able to connect to the MongoDB instance, but not to the mongocryptd instance.
Docker-Compose file:
version: "3.7"
services:
api:
build:
context: .
dockerfile: Dockerfile
labels:
env: dev
args:
APP: appname
APP_PORT: 3000
ports:
- "3000:3000"
command: ["sh", "-c", "npm run start:app:dev"]
volumes:
- .:/app
mongodb:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
command: ["--auth"]
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: usr
MONGO_INITDB_ROOT_PASSWORD: pwd
ports:
- "27017:27017"
volumes: ["/private/var/services/mongodb:/data/db"]
mongocryptd:
build:
context: .
dockerfile: docker/MongoEP-Dockerfile
labels:
env: dev
args:
MONGO_PACKAGE: mongodb-enterprise
MONGO_REPO: repo.mongodb.com
image: mongo-enterprise:4.2.5
entrypoint: mongocryptd
restart: always
ports:
- "27020:27020"
volumes: ["/private/var/services/mongodb:/data/db"]
The Dockerfile used is mongo's official Dockerfile, but supplied with args to build an enterprise version of the image, which includes the enterprise features.
When trying to connect to the database from the app, I'm running:
MongooseModule.forRoot(`mongodb://usr:pwd@mongodb:27017`, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
  retryAttempts: 2,
  autoEncryption: {
    keyVaultNamespace,
    kmsProviders,
    extraOptions: {
      mongocryptdURI: `mongodb://mongocryptd:27020`,
      mongocryptdBypassSpawn: true
    }
  } as any
})
** This is the NestJS version of supplying the configs; it's similar to mongoose - the first argument is the URI and the second is the settings object.
Without the autoEncryption options, I'm able to connect without any problems. That means that my database address is correct.
With the autoEncryption options, I'm getting MongooseServerSelectionError: connect ECONNREFUSED 172.25.0.4:27020 (the mongocryptd address). That means that the IP is correct (DNS resolved), but the connection is refused. As I showed before, the port (27020) is published by the docker-compose file, and I even tried to add an EXPOSE step in the build itself.
BUT when I map the network of the containers to host (network_mode: "host"), the application is able to connect without any problems (changing the connection DNS to localhost:27017 and 27020, of course). So that must mean it's a docker-related problem.
Additional things I've tried && a recap of what I tried:
Attach a volume to replace /etc/mongod.conf.orig with the following network configurations:
net:
  port: 27017
  bindIp: 0.0.0.0
  bindIpAll: true
Instead of attaching a volume, replacing it ^ at the build step before launching the mongo service.
I also tried changing the bindIp to the specific application IP that was given by the docker network.
All types of connection strings with & without user credentials, auth source, and default database.
Port 27020 is published in docker-compose & exposed on docker file.
I ran out of ideas. Any help is appreciated! :)
EDIT:
After more debugging, I can see that mongod is running with --bind_ip_all by default so changing the conf file shouldn't have an effect.
I also tried running mongocryptd with mongod's docker-entrypoint.sh entrypoint instead of overriding it.
Verify mongocryptd is running (ps awwxu, etc.).
Verify you can connect to it from bash on the same container where it is running, using mongo.
Verify you can connect to it from the host system, using mongo.
Check the mongocryptd logs (it's basically a mongod with some extra functionality). A rough sketch of these checks follows below.
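A sketch of those checks, assuming the container is named mongocryptd as in the compose file, and that ps and the mongo shell are present in the image (both are assumptions about your setup):

docker exec -it mongocryptd ps awwxu | grep mongocryptd                              # 1. is it running?
docker exec -it mongocryptd mongo --port 27020 --eval 'db.runCommand({ ping: 1 })'   # 2. from the same container
mongo --host localhost --port 27020 --eval 'db.runCommand({ ping: 1 })'              # 3. from the host (port 27020 is published)
docker logs mongocryptd                                                              # 4. inspect its logs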
I had similar issues with mongocryptd, but I was working with a PHP node instead of npm. I tried different solutions but didn't manage to succeed. I even tried a docker-compose.yaml similar to Asaf Kfir's, with the pre-installed mongodb-enterprise-cryptd lib, but had the same issue. Keep in mind that in the PHP node, via the Dockerfile, I had already been able to install libmongocrypt-dev and mongodb-enterprise-cryptd. (I will leave the PHP Dockerfile below.)
I managed to link those three containers under the same IP address and tested with ncat that they were reachable. But when I tried to run tests from the PHP node, it started throwing:
MongoDB\Driver\Exception\BulkWriteException: Bulk write failed due to previous MongoDB\Driver\Exception\RuntimeException: key vault error: Invalid reply to find command.
and I had this issue for two weeks; I basically didn't know how to resolve it.
P.S. Remember these words: the automatic feature of field level encryption is only available in MongoDB Enterprise 4.2 or later.
At that time, my docker-compose.yaml file looked like this:
version: '3'
services:
  # PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: unless-stopped
    tty: true
    ports:
      - "27017:27017"
      - "27020:27020"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  # MongoDB Service
  mongodb:
    image: local-mongo-db
    container_name: mongodb
    restart: unless-stopped
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
  # MongoDB Service
  mongocryptd:
    image: local-mongocryptd
    container_name: mongocryptd
    entrypoint: mongocryptd
    restart: unless-stopped
    tty: true
    network_mode: service:php
    volumes: ["/tmp/mongodb:/data/db"]
volumes:
  dbdata:
    driver: local
The Mongo images were built from here: https://docs.mongodb.com/manual/tutorial/install-mongodb-enterprise-with-docker/
I managed to solve this issue like this:
Instead of building the mongo-enterprise image, I accidentally built with the official mongo image mongo:4.2, and everything worked well. I don't know why mongo says that enterprise is needed for encryption, because for me the mongo-enterprise encryption didn't work; the original mongo:4.2 image worked perfectly.
working docker-compose.yaml:
version: '3'
services:
  # PHP Service
  php:
    image: local-base-php
    container_name: app
    restart: always
    tty: true
    ports:
      - "27017:27017"
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www/projects
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
  # MongoDB Service
  mongodb:
    image: mongo:4.2
    container_name: mongodb
    restart: always
    tty: true
    environment:
      MONGO_INITDB_DATABASE: test
      MONGO_INITDB_USERNAME: root
      MONGO_INITDB_PASSWORD: rootpassword
    network_mode: service:php
php node Dockerfile:
FROM php:7.4
RUN apt-get update && apt-get install -y zip unzip libzip-dev git mercurial zlib1g-dev libicu-dev libcurl4-gnutls-dev libssl-dev libssh2-1-dev libgmp-dev libpng-dev uuid-dev
RUN cd /tmp && git clone https://github.com/php/pecl-networking-ssh2 && cd /tmp/pecl-networking-ssh2 \
&& phpize && ./configure && make && make install \
&& echo "extension=ssh2.so" > /usr/local/etc/php/conf.d/ext-ssh2.ini \
&& rm -rf /tmp/ssh2
RUN docker-php-ext-configure gmp
RUN docker-php-ext-install zip json pdo pdo_mysql curl opcache bcmath sockets gmp gd
RUN docker-php-ext-install -j$(nproc) intl
RUN pecl install uuid pcov redis mongodb
RUN docker-php-ext-enable uuid pcov redis mongodb
RUN curl -sS https://get.symfony.com/cli/installer | bash -s -- --install-dir /usr/local/bin
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN curl -L -sS "https://github.com/splitsh/lite/releases/download/v1.0.1/lite_linux_amd64.tar.gz" | tar xvz -C /usr/local/bin
RUN apt-get update
RUN apt-get install -y curl gpg wget
RUN sh -c 'curl -s https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg'
RUN echo "deb https://libmongocrypt.s3.amazonaws.com/apt/ubuntu bionic/libmongocrypt/1.0 universe" | tee /etc/apt/sources.list.d/libmongocrypt.list
RUN wget -qO - mongodb.org/static/pgp/server-4.2.asc | apt-key add -
RUN echo "deb http://repo.mongodb.com/apt/debian stretch/mongodb-enterprise/4.2 main" | tee /etc/apt/sources.list.d/mongodb-enterprise.list
RUN apt-get update
RUN apt-get install -y libmongocrypt-dev && apt-get install --no-install-recommends -y mongodb-enterprise-cryptd
I hope I helped. Cheers!
UPDATE:
My tests are running without libmongocrypt-dev lib, so I guess you only need mongodb-enterprise-cryptd.
I decided to use socat to forward traffic.
Dockerfile:
# Build stage
FROM ubuntu:focal
ENV ENTRY_FILE=docker-entrypoint.sh
ENV MONGODB_PATH=/usr/src/mongodb
ENV ENTRY_POINT=$MONGODB_PATH/$ENTRY_FILE
RUN apt-get update && apt-get install -y sudo
RUN sudo apt-get install -y curl telnet vim socat libcurl4 libgssapi-krb5-2 libldap-2.4-2 libwrap0 libsasl2-2 libsasl2-modules libsasl2-modules-gssapi-mit snmp openssl liblzma5
RUN curl -k -o mongodb.tgz "https://downloads.mongodb.com/linux/mongodb-linux-$(arch)-enterprise-ubuntu2004-5.0.10.tgz"
RUN tar -xf mongodb.tgz --strip-components=1
RUN sudo ln -s $MONGODB_PATH/bin/* /usr/local/bin/
RUN sudo mkdir -p /data/db
RUN sudo mkdir -p /data/log
RUN sudo chown `whoami` /data/db
RUN sudo chown `whoami` /data/log
COPY ./$ENTRY_FILE $ENTRY_POINT
RUN chmod +x $ENTRY_POINT
ENTRYPOINT $ENTRY_POINT
docker-entrypoint.sh:
#!/bin/sh
socat -d -d TCP-LISTEN:27017,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17017 &
socat -d -d TCP-LISTEN:27018,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17018 &
socat -d -d TCP-LISTEN:27019,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17019 &
socat -d -d TCP-LISTEN:27020,fork,bind=$(hostname -I | awk '{print $1}') TCP:127.0.0.1:17020 &
./bin/mongod --port 17017 --dbpath /data/db --logpath /data/log/mongod.log &
./bin/mongocryptd --port 17020 --logpath /data/log/mongocryptd.log
$(hostname -I | awk '{print $1}') is the remote IP (my docker host IP, 172.2.x.x); you can change it to your other container's IP.
docker-compose.yml:
version: "3.8"
networks:
ABC:
external: false
name: ABC
services:
mongo-database:
container_name: MongoDB
privileged: true
build:
context: .docker/db
dockerfile: Dockerfile
volumes:
- .mongo/db:/data/db
- .mongo/log:/data/log
ports:
- 27017-27020:27017-27020
networks:
- ABC

Receiving an error from a docker-compose that the user must own the data directory

Every time I try to build my image, I get the following error:
The server must be started by the user that owns the data directory.
The following is my docker-compose file:
version: "3.7"
services:
db:
image: postgres
container_name: xxxxxxxxxxxx
volumes:
- ./postgres-data:/var/lib/postgresql/data
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
nginx:
image: nginx:latest
restart: always
container_name: xxxxxxxxxxxx-nginx
volumes:
- ./deployment/nginx:/etc/nginx
logging:
driver: none
depends_on: ["radio"]
ports:
- 8080:80
- 8081:443
radio:
build:
context: .
dockerfile: "./deployment/Dockerfile"
image: test-radio
command: './manage.py runserver 0:3000'
container_name: xxxxxxxxxxxxxxx
restart: always
depends_on: ["db"]
volumes:
- type: bind
source: ./api
target: /app/api
- type: bind
source: ./xxxxxx
target: /app/xxxxx
environment:
POSTGRES_DB: $POSTGRES_DB
POSTGRES_USER: $POSTGRES_USER
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
POSTGRES_HOST: $POSTGRES_HOST
AWS_KEY_ID: $AWS_KEY_ID
AWS_ACCESS_KEY: $AWS_ACCESS_KEY
AWS_S3_BUCKET_NAME: $AWS_S3_BUCKET_NAME
networks:
default:
The image is built with the following run.sh file:
#!/usr/bin/env sh

if [ ! -f .pass ]; then
  openssl rand -base64 32 > .pass
fi

#export POSTGRES_DB="xxxxxxxxxxxxxxxxx"
#export POSTGRES_USER="xxxxxxxxxxxxxx"
#export POSTGRES_PASSWORD="xxxxxxxxxxxxxxxxxxxx"
#export POSTGRES_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

export POSTGRES_DB="xxxxxxxxxxxxxxxxxx"
export POSTGRES_USER="xxxxxxxxxxxxxxxxxxxx"
export POSTGRES_PASSWORD="`cat .pass`"
export POSTGRES_HOST="db"
export AWS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_S3_BUCKET_NAME=""

echo "Your psql password is in .pass do not commit this file."
echo "The app will be available on localhost:8080 shortly"

if [ -z "$1" ]; then
  docker-compose up
else
  docker-compose up $1
fi
I'm wondering if my error is being caused by attempting to use a bash script to deploy the service on a Windows machine?
Details on the issue
The behavior observed by the OP definitely comes from a UID/GID mismatch, given that the specification

volumes:
  - ./postgres-data:/var/lib/postgresql/data
(which can be viewed as a docker-compose equivalent of docker run -v "$PWD/postgres-data:/var/lib/postgresql/data" …) bind-mounts the $PWD/postgres-data folder inside the container, giving access to its files as is (including owner/group metadata).
Also, note that the handling of owner/group metadata between host and containers only relies on the numeric UID and GID, not on the owner and group names.
For more information about UIDs and GIDs in a Docker context, see also that article on Medium.
Workarounds if the bind-mount is necessary
For completeness, several possible solutions to workaround the bind-mount UID-mismatch issue (including the most straightforward one that consists in changing the files' UID :) are described in this answer on StackOverflow:
How to have host and container read/write the same files with Docker?
Other solutions
Following @ParanoidPenguin's comment, you may want to use a named volume, which mainly consists in using:
the docker volume command
and/or the docker run option -v …:….
Remarks:
docker run -v PATH1:PATH2 … triggers a bind-mount of PATH1 (host) to PATH2 (container) if and only if PATH1 is absolute (i.e., starts with a /) (e.g., -v "$PWD:$PWD" is a common idiom)
docker run -v NAME:PATH2 … mounts volume NAME to PATH2 (container) if and only if NAME does not contain any / (i.e., matches regexp [a-zA-Z0-9][a-zA-Z0-9_.-]).
even if we don't run docker volume create foo beforehand by hand, docker run -v foo:/data --rm -it debian will create the named volume foo if need be.
in order to populate the files of a named volume (or respectively, to back them up) you can use an ephemeral container of image debian, ubuntu or so, combining at the same time a bind-mount and a volume mount:
Add a file /home/user/bar.txt in a new volume foo
file1=/home/user/bar.txt  # initial file
uid=2000                  # target User-ID in the volume
gid=2000                  # target Group-ID in the volume
docker pull debian
docker run -v "$file1:$file1:ro" -v foo:/data \
  -e file1="$file1" -e uid="$uid" -e gid="$gid" \
  --rm -it debian bash -exc \
  'cp -v -- "$file1" /data/bar.txt && chown -v $uid:$gid /data/bar.txt'
docker volume ls
Backup the foo volume in a tarball
date=$(date +'%Y%m%d_%H%M%S')
back="backup_$date.tar.gz"
destdir=/home/user/backup
mkdir -p "$destdir"
docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
  --rm -it debian bash -exc 'tar cvzf "/backup/$back" /data'
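And, symmetrically, a restore sketch (the tarball name is hypothetical; tar stored the paths relative to /, so extracting at / recreates /data inside the volume):

back="backup_20200101_120000.tar.gz"  # pick an actual backup produced above
docker run -v foo:/data -v "$destdir:/backup" -e back="$back" \
  --rm -it debian bash -exc 'tar xvzf "/backup/$back" -C /'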

Docker - PostgreSQL could not connect to server: Connection refused 127.0.0.1:5432

Trying to dockerise my Symfony 4 app, running with PostgreSQL
But when I run:
$ sudo docker-compose build
I get this error:
In AbstractPostgreSQLDriver.php line 73:
  An exception occurred in driver: SQLSTATE[08006] [7] could not connect to server: Connection refused
  Is the server running on host "127.0.0.1" and accepting
  TCP/IP connections on port 5432?
Here's my docker-compose.yml file:
version: '3.7'
services:
  db:
    image: ${POSTGRES_IMAGE}
    restart: always
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_USER: ${DATABASE_USER}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
  php:
    build:
      context: .
      dockerfile: ./docker/php/Dockerfile
    restart: on-failure
    user: 1000:1000
    volumes:
      - ./docker/php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - .:/var/www/symfony
    working_dir: /var/www/symfony
    depends_on:
      - db
Content of my .env file:
DATABASE_NAME=db
DATABASE_HOST=127.0.0.1
DATABASE_PORT=5432
DATABASE_USER=postgres
DATABASE_PASSWORD=root
## Docker images (name and version)
PHP_IMAGE=php:7.3-fpm
POSTGRES_IMAGE=postgres
Also FYI:
debian@debian:~/dev/symfony$ sudo docker-compose config
services:
  db:
    environment:
      POSTGRES_DB: mrd
      POSTGRES_PASSWORD: root
      POSTGRES_USER: postgres
    image: postgres
    restart: always
  php:
    build:
      context: /home/debian/dev/symfony
      dockerfile: ./docker/php/Dockerfile
    depends_on:
      - db
    restart: on-failure
    user: 1000:1000
    volumes:
      - /home/debian/dev/symfony:/var/www/symfony:rw
    working_dir: /var/www/symfony
version: '3.7'
My Dockerfile:
FROM php:7.3-fpm

WORKDIR /var/www/symfony

RUN apt-get update

# Install Postgres PDO
RUN apt-get install -y libpq-dev \
    && apt-get install -y zip \
    && docker-php-ext-install pgsql pdo_pgsql \
    && apt-get install -y git

RUN pecl install apcu

RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === 'a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php --filename=composer \
    && php -r "unlink('composer-setup.php');" \
    && mv composer /usr/local/bin/composer

RUN ls -larth

COPY . /var/www/symfony

RUN PATH=$PATH:/var/www/symfony/vendor/bin:bin

RUN pwd \
    && ls \
    && composer install --no-interaction --no-ansi --optimize-autoloader \
    && php bin/console doctrine:database:create \
    && php bin/console doctrine:schema:update --no-interaction \
    && php bin/console doctrine:fixtures:load --no-interaction
Does anybody have a clue why it does this, and how to solve it?
I searched a lot and couldn't find anything that worked. I thought depends_on: db would do the trick, but no.
Try running:
docker-compose ps
and note the container name.
Then set DATABASE_HOST to that container name:
DATABASE_NAME=db
DATABASE_HOST=## paste Container name
DATABASE_PORT=5432
DATABASE_USER=postgres
DATABASE_PASSWORD=root
## Docker images (name and version)
PHP_IMAGE=php:7.3-fpm
POSTGRES_IMAGE=postgres
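A side note (not from the answer above): within the compose network, the service name itself also resolves, so pointing the host at the db service directly should work without looking up the container name:

# .env - the compose service name resolves on the default network
DATABASE_HOST=db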
The pastes in your question indicate a number of things to be addressed with your docker-compose.yml file:
You are missing an env_file: entry for your .env file
You are missing an image for the php service
You specify . as your build context (which should be a directory that indicates where your Dockerfile is located)
You specify a path as your Dockerfile (which should be a filename, not a directory) -- I guess this might all work, but it's a bit confusing to read
Indentation for -db and - /home/debian/dev/symfony:/var/www/symfony:rw is off -- this might not be a problem, but it's still difficult to read
From your question about build failing with an error regarding inability to connect to a database -- could you update your question and share your Dockerfile for symfony? I suspect that you need to remove a reference to a database connection.

Docker wait for postgresql to be running

I am using postgresql with django in my project. I've got them in different containers, and the problem is that I need to wait for postgres before running django. At the moment I am doing it with sleep 5 in the command.sh file for the django container. I also found that netcat can do the trick, but I would prefer a way without additional packages. curl and wget can't do this because they do not support the postgres protocol.
Is there a way to do it?
I've spent some hours investigating this problem and I got a solution.
Docker's depends_on only considers service startup when starting another service. That's a problem because, as soon as db is started, service-app tries to connect to your db, but it's not ready to receive connections yet. So you can check the db health status in the app service to wait for the connection. Here is my solution; it solved my problem. :)
Important: I'm using docker-compose version 2.1.
version: '2.1'
services:
  my-app:
    build: .
    command: su -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    volumes:
      - .:/app_directory
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
    volumes:
      - database:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  database:
In this case it's not necessary to create a .sh file.
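While debugging, you can also watch what the healthcheck reports; a quick check (the container name depends on your compose project, so this one is hypothetical):

docker inspect --format '{{ .State.Health.Status }}' myproject_db_1
# prints "starting", "healthy", or "unhealthy"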
This will successfully wait for Postgres to start (specifically, line 6 below). Just replace npm start with whatever command you'd like to happen after Postgres has started.
services:
  practice_docker:
    image: dockerhubusername/practice_docker
    ports:
      - 80:3000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/practicedocker
      - PORT=3000
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=practicedocker
If you have psql you could simply add the following code to your .sh file:
RETRIES=5

until psql -h $PG_HOST -U $PG_USER -d $PG_DATABASE -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
  echo "Waiting for postgres server, $((RETRIES--)) remaining attempts..."
  sleep 1
done
The simplest solution is a short bash script:
while ! nc -z HOST PORT; do sleep 1; done;
./run-smth-else;
The problem with your solution, tiziano, is that curl is not installed by default, and I wanted to avoid installing additional stuff. Anyway, I did what bereal said. Here is the script if anyone needs it.
import socket
import time
import os

port = int(os.environ["DB_PORT"])  # 5432

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

while True:
    try:
        s.connect(('myproject-db', port))
        s.close()
        break
    except socket.error as ex:
        time.sleep(0.1)
In your Dockerfile add wait and change your start command to use it:
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
CMD /wait && npm start
Then, in your docker-compose.yml add a WAIT_HOSTS environment variable for your api service:
services:
  api:
    depends_on:
      - postgres
    environment:
      - WAIT_HOSTS=postgres:5432
  postgres:
    image: postgres
    ports:
      - "5432:5432"
This has the advantage that it supports waiting for multiple services:
environment:
  - WAIT_HOSTS=postgres:5432, mysql:3306, mongo:27017
For more details, please read their documentation.
wait-for-it is a small wrapper script which you can include in your application's image to poll a given host and port until it's accepting TCP connections.
It can be cloned in the Dockerfile with the command below:
RUN git clone https://github.com/vishnubob/wait-for-it.git
docker-compose.yml
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it/wait-for-it.sh", "db:5432", "--", "npm", "start"]
db:
image: postgres
Why not curl?
Something like this:

while ! curl http://$POSTGRES_PORT_5432_TCP_ADDR:$POSTGRES_PORT_5432_TCP_PORT/ 2>&1 | grep '52'
do
  sleep 1
done

It works for me (the grep '52' matches curl's "(52) Empty reply from server" message, which appears once Postgres is accepting TCP connections).
I have managed to solve my issue by adding a health check to the docker-compose definition.
db:
  image: postgres:latest
  ports:
    - 5432:5432
  healthcheck:
    test: "pg_isready --username=postgres && psql --username=postgres --list"
    timeout: 10s
    retries: 20
then in the dependent service you can check the health status:
my-service:
  image: myApp:latest
  depends_on:
    kafka:
      condition: service_started
    db:
      condition: service_healthy
source: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
If the backend application itself has a PostgreSQL client, you can use the pg_isready command in an until loop. For example, suppose we have the following project directory structure,
.
├── backend
│   └── Dockerfile
└── docker-compose.yml
with a docker-compose.yml
version: "3"
services:
postgres:
image: postgres
backend:
build: ./backend
and a backend/Dockerfile
FROM alpine
RUN apk update && apk add postgresql-client
CMD until pg_isready --username=postgres --host=postgres; do sleep 1; done \
    && psql --username=postgres --host=postgres --list
where the 'actual' command is just a psql --list for illustration. Then running docker-compose build and docker-compose up will give you the following output:
[screenshot of docker-compose output]
Note how the result of the psql --list command only appears after pg_isready logs postgres:5432 - accepting connections, as desired.
By contrast, I have found that the nc -z approach does not work consistently. For example, if I replace the backend/Dockerfile with
FROM alpine
RUN apk update && apk add postgresql-client
CMD until nc -z postgres 5432; do echo "Waiting for Postgres..." && sleep 1; done \
    && psql --username=postgres --host=postgres --list
then docker-compose build followed by docker-compose up gives me the following result:
[screenshot of docker-compose output]
That is, the psql command throws a FATAL error that the database system is starting up.
In short, using an until pg_isready loop (as also recommended here) is the preferable approach IMO.
There are a couple of solutions, as other answers mentioned.
But don't make it complicated; just let it fail fast, combined with restart: on-failure. Your service will open a connection to the db and may fail the first time. Just let it fail. Docker will restart your service until it's green. Keep your service simple and business-focused.
version: '3.7'
services:
  postgresdb:
    hostname: postgresdb
    image: postgres:12.2
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=Ceo
  migrate:
    image: hanh/migration
    links:
      - postgresdb
    environment:
      - DATA_SOURCE=postgres://user:secret@postgresdb:5432/Ceo
    command: migrate sql --yes
    restart: on-failure # will restart until it succeeds
Check out restart policies.
None of the other solutions worked for me, except for the following:
version: '3.8'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydbname
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "mydbname", "-U", "myusername"]
      interval: 5s
      timeout: 5s
      retries: 5
  otherservice:
    image: otherserviceimage
    depends_on:
      postgres:
        condition: service_healthy
Thanks to this thread: https://github.com/peter-evans/docker-compose-healthcheck/issues/16
Sleeping until pg_isready returns true is unfortunately not always reliable. If your postgres container has at least one initdb script specified, postgres restarts after it is started during its bootstrap procedure, and so it might not be ready yet even though pg_isready already returned true.
What you can do instead is wait until the docker logs for that instance return a PostgreSQL init process complete; ready for start up. string, and only then proceed with the pg_isready check.
Example:
start_postgres() {
  docker-compose up -d --no-recreate postgres
}

wait_for_postgres() {
  until docker-compose logs | grep -q "PostgreSQL init process complete; ready for start up." \
      && docker-compose exec -T postgres sh -c "PGPASSWORD=\$POSTGRES_PASSWORD PGUSER=\$POSTGRES_USER pg_isready --dbname=\$POSTGRES_DB" > /dev/null 2>&1; do
    printf "\rWaiting for postgres container to be available ... "
    sleep 1
  done
  printf "\rWaiting for postgres container to be available ... done\n"
}

start_postgres
wait_for_postgres
You can use the manage.py command "check" to check if the database is available (and wait 2 seconds if not, and check again).
For instance, if you do this in your command.sh file before running the migration, Django has a valid DB connection while running the migration command:
...
echo "Waiting for db.."
python manage.py check --database default > /dev/null 2> /dev/null
until [ $? -eq 0 ]; do
  sleep 2
  python manage.py check --database default > /dev/null 2> /dev/null
done
echo "Connected."

# Migrate the last database changes
python manage.py migrate
...
PS: I'm not a shell expert, please suggest improvements.
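One possible simplification (a sketch under the same assumptions): let until test the command directly instead of re-reading $?:

echo "Waiting for db.."
until python manage.py check --database default > /dev/null 2>&1; do
  sleep 2
done
echo "Connected."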
#!/bin/sh

POSTGRES_VERSION=9.6.11
CONTAINER_NAME=my-postgres-container

# start the postgres container
docker run --rm \
  --name $CONTAINER_NAME \
  -e POSTGRES_PASSWORD=docker \
  -d \
  -p 5432:5432 \
  postgres:$POSTGRES_VERSION

# wait until postgres is ready to accept connections
until docker run \
  --rm \
  --link $CONTAINER_NAME:pg \
  postgres:$POSTGRES_VERSION pg_isready \
  -U postgres \
  -h pg; do sleep 1; done
An example for a Node.js and Postgres API:
#!/bin/bash
# entrypoint.dev.sh

echo "Waiting for postgres to get up and running..."

while ! nc -z postgres_container 5432; do
  # where postgres_container is the host; in my case, it is a Docker container.
  # You can use localhost, for example, in case your database is running locally.
  echo "waiting for postgres listening..."
  sleep 0.1
done

echo "PostgreSQL started"

yarn db:migrate
yarn dev

# Dockerfile
FROM node:12.16.2-alpine

ENV NODE_ENV="development"

RUN mkdir -p /app
WORKDIR /app

COPY ./package.json ./yarn.lock ./

RUN yarn install

COPY . .

CMD ["/bin/sh", "./entrypoint.dev.sh"]
If you want to run it with a single-line command, you can just connect to the container and check if postgres is running:
docker exec -it $DB_NAME bash -c "\
  until psql -h $HOST -U $USER -d $DB_NAME -c 'select 1' > /dev/null 2>&1; do \
    echo 'Waiting for postgres server....'; \
    sleep 1; \
  done; \
  exit; \
"
echo "DB Connected !!"
Inspired by @tiziano's answer and the lack of nc or pg_isready, it seems that in a recent docker python image (python:3.9 here) curl is installed by default, and I have the following check running in my entrypoint.sh:
postgres_ready() {
  $(which curl) http://$DBHOST:$DBPORT/ 2>&1 | grep '52'
}

until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available.'
>&2 echo 'PostgreSQL is available.'
I tried a lot of methods: a Dockerfile, docker compose yaml, a bash script. Only the last method helped me: a makefile.
docker-compose up --build -d postgres
sleep 2
docker-compose up --build -d app
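If the postgres service defines a healthcheck (see the healthcheck answers above), newer Compose versions can replace the fixed sleep; a sketch, assuming Compose v2's --wait flag:

docker compose up --build --wait postgres   # blocks until the service reports healthy
docker compose up --build -d app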