Docker-compose postgresql password authentication failed

I'm trying to get a setup going with a web service that consumes a Postgres database. It should be simple to set up, but I'm getting errors. So the first thing I want to make sure of is that the database I set up is actually there and running.
To test this I substitute the "consumer" or "client" with an interactive Alpine shell, like so:
version: '3'
services:
  db:
    image: postgres:10.1-alpine
    container_name: db
    expose:
      - 5432
    volumes:
      - "dbdata:/var/lib/postgresql/data"
    environment:
      - POSTGES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
  web:
    image: alpine:latest
    stdin_open: true
    tty: true
    entrypoint: /bin/sh
    depends_on:
      - db
volumes:
  dbdata:
Then I run the following command to get into the interactive shell:
docker-compose run web
and these commands to install the client and connect to the database:
apk --update add postgresql-client && rm -rf /var/cache/apk/*
psql -h db -U user db
I get a plain denial from postgresql:
psql: FATAL: password authentication failed for user "user"
I get the same error message for every combination of username/password/database name I try, which isn't very helpful.
What am I doing wrong here?

You have a typo in your docker-compose file. You misspelled POSTGRES here:
POSTGES_USER=user
That means the user "user" is never created. If I correct that typo, so that I have:
version: '3'
services:
  db:
    image: postgres:10.1-alpine
    expose:
      - 5432
    volumes:
      - "dbdata:/var/lib/postgresql/data"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
  web:
    image: alpine:latest
    stdin_open: true
    tty: true
    entrypoint: /bin/sh
    depends_on:
      - db
volumes:
  dbdata:
Start the environment:
docker-compose up -d
Attach to the web container and install the PostgreSQL client:
$ docker attach project_web_1
/ # apk add --update postgresql-client
Then I can connect without a problem:
/ # psql -h db -U user db
Password for user user:
psql (11.2, server 10.1)
Type "help" for help.
db=#
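One caveat worth adding: the postgres image only applies the POSTGRES_* variables the first time the dbdata volume is initialized, so if the volume was created while the typo was still present, fixing the spelling alone won't create the missing user. A quick way to check, and (if the data is disposable) to re-initialize, assuming the compose project above:
# list the roles that actually exist; with the typo in place the default superuser is "postgres"
docker-compose exec db psql -U postgres -c '\du'
# if "user" is missing and the data can be thrown away, recreate the volume too
docker-compose down -v && docker-compose up -d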

Related

Could not connect to local postgres DB from docker using pgadmin4

I tried connecting to my local Postgres DB from Docker using pgAdmin 4, but it failed with "unable to connect to server: timeout expired". I have my server running, and I used the same properties that are mentioned in the docker-compose.yml file while connecting to the server in pgAdmin 4.
This is my docker-compose.yml
version: "3"
services:
web:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=secretpassword
depends_on:
- db
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=secretpassword
This is the screenshot of the error I get in pgadmin4
I tried changing the host address to things like localhost, host.docker.internal, 127.0.0.1, and the IP address obtained by inspecting the Docker container, but I get the same result every time. I also tried adding pgAdmin 4 as a service in my docker-compose.yml file, but got the same result there too.
I am confused about what I am missing here.
Thanks in advance.
First, you didn't publish any port from the Postgres container (the default is 5432), so you can't reach the database from the host. Connecting on port 8000 won't work either, because that is the binding for your application, not for Postgres.
Here is the relevant description from the docker-compose ports documentation:
When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a base-60 value. For this reason, we recommend always explicitly specifying your port mappings as strings.
So you can try publishing port "5432:5432" from the Postgres container; pgAdmin can then connect to the database on port 5432.
version: "3"
services:
web:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=secretpassword
depends_on:
- db
db:
image: postgres:10-alpine
ports:
- "5432:5432"
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=secretpassword
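Once the port is published you can sanity-check it from the host before pointing pgAdmin at it; assuming psql is installed locally, this should prompt for secretpassword and connect:
# connect from the host to the published port, using the credentials from the compose file
psql -h localhost -p 5432 -U postgres -d app
In pgAdmin, the same values apply: host localhost (or 127.0.0.1), port 5432, user postgres.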

Unable to communicate with the postgreSQL database using Docker

I don't know why I am not able to fetch data from my PostgreSQL database. I always get the same error when I run docker logs web:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?
could not connect to server: Address not available
I tried a lot of things, but with no effect...
Here is my docker-compose :
version: '3.7'
services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 5432:5432
    stdin_open: true
    depends_on:
      - db
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
      - POSTGRES_HOST=localhost
volumes:
  postgres_data:
And this is the Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
# copy project
ADD . .
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
In my settings.py I have this for the database:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'admin',
        'PASSWORD': 'pass',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
Could you help me, please?
I have absolutely no idea how to solve this; I've tried a lot of things without any effect...
For instance, I tried changing this parameter in settings:
'HOST': 'localhost' to 'HOST': 'db'
But I get this error when I run docker logs web:
django.db.utils.OperationalError: FATAL: password authentication failed for user "admin"
Thank you very much
Have you tried to expose port 5432 on the database container?
Something like:
db:
  container_name: db
  image: postgres:12.0-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    - POSTGRES_HOST_AUTH_METHOD=trust
    - POSTGRES_USER=admin
    - POSTGRES_PASSWORD=pass
    - POSTGRES_DB=mydb
    - POSTGRES_HOST=localhost
  expose:
    - "5432"
Or, inside the Dockerfile: EXPOSE 5432/tcp
Here is a link where you can check out the difference between ports: and expose:
What is the difference between docker-compose ports vs expose
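If switching HOST to db then fails with password authentication for "admin", one likely cause (assuming the postgres_data volume was first created with different POSTGRES_USER/POSTGRES_PASSWORD values) is that the image only creates the user and sets its password when the data volume is initialized; later changes to those environment variables are ignored. If the data is disposable, re-initializing the volume is the quickest check:
# WARNING: this deletes the data stored in the named volume
docker-compose down -v
docker-compose up -d --build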

Docker losing Postgres server data when I shutdown the backend container

I've got the following docker-compose file and it serves up the application on port 80 fine.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  static:
  postgres_data:
Once in there I can log into the admin and add an extra user, which gets saved to the database: I can reload the page and the user is still there. Once I stop the backend Docker container, however, that user is gone. Given that Postgres is running in a different container and I'm not bringing it down, I'm unsure how stopping and restarting the backend container causes the data to become unavailable.
Thanks in advance.
EDIT:
I'm bringing up the docker container with the following command.
docker-compose -f docker-compose.prod.yml up -d
I'm bringing down the container by just using Docker Desktop to stop it.
I'm running Django 3 for the backend, and I've also tried adding a superuser in the terminal while the container is running:
# python manage.py createsuperuser
Username (leave blank to use 'root'): mikey
Email address:
Password:
Password (again):
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
This works and the user appears while the container is running. However, once again, when I shut the container down via Docker Desktop and then restart it, the user that was just created is gone.
FURTHER EDIT:
settings.py uses dotenv ("from dotenv import load_dotenv"):
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE"),
        "NAME": os.getenv("SQL_DATABASE"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST"),
        "PORT": os.getenv("SQL_PORT"),
    }
}
with the .env.prod file having the following values:
DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=db
SQL_PORT=5432
SOLUTION:
Read the comments to see the diagnosis by other legends, but the updated docker-compose file looks like this. Note the "depends_on" block.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"
volumes:
  static:
  postgres_data:
FINAL EDIT:
Added the following code to my entrypoint.sh file to ensure Postgres is ready to accept connections from the backend container.
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
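Note that this snippet checks a DATABASE variable that does not appear in the .env.prod shown above, and it relies on nc (netcat) being available in the backend image; assuming both are intended, the env file would need one extra line:
# added to .env.prod so the wait loop actually runs
DATABASE=postgres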

Swift Vapor 3 + PostgreSQL + Docker-Compose Correct configuration?

I'm currently building a package to test some DevOps configurations with AWS, building an application with Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds/tests/runs just fine with vapor build, vapor test, and vapor run, provided you have a local installation of PostgreSQL with username: test, password: test.
However, my API is not connecting to my DB and I am worried my configuration is wrong.
version: "3.5"
services:
api:
container_name: vapor_it_container
build:
context: .
dockerfile: web.Dockerfile
image: api:dev
networks:
- vapor-it
environment:
POSTGRES_PASSWORD: 'test'
POSTGRES_DB: 'test'
POSTGRES_USER: 'test'
POSTGRES_HOST: db
POSTGRES_PORT: 5432
ports:
- 8080:8080
volumes:
- .:/app
working_dir: /app
stdin_open: true
tty: true
entrypoint: bash
restart: always
depends_on:
- db
db:
container_name: postgres_container
image: postgres:11.2-alpine
restart: unless-stopped
networks:
- vapor-it
ports:
- 5432:5432
environment:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_HOST: db
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
volumes:
- database_data:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: test#test.com
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT:-5050}:80"
networks:
- vapor-it
restart: unless-stopped
networks:
vapor-it:
driver: bridge
volumes:
database_data:
pgadmin:
# driver: local
Also while reading the Docker postgres docs I came across this in the "Caveats" section.
If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously. (postgres Docker Hub page)
I have not made those changes because I am not sure how to go about creating that file or how the configuration would look. Has anyone with experience connecting to PostgreSQL and using Vapor as a back end done something like this?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That’s theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to come available, in which case you could use a wrapper script that polls your DB and only starts your main app after the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]

How to create postgres database and run migration when docker-compose up

I'm setting up a simple backend that performs CRUD actions against a Postgres database, and I want the database creation and migrations to run automatically when docker-compose up runs.
I have already tried adding the code below to the Dockerfile or to entrypoint.sh, but neither works.
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
createdb db:migrate
This code works if run separately after Docker is fully up.
I have also tried adding - ./db-init:/docker-entrypoint-initdb.d to the volumes, but that didn't work either.
This is the Dockerfile
FROM node:10.12.0
# Create app directory
RUN mkdir -p /restify-pg
WORKDIR /restify-pg
EXPOSE 1337
ENTRYPOINT [ "./entrypoint.sh" ]
This is my docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/psotgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
entrypoint.sh (in here I get createdb: command not found)
#!/bin/bash
cd app
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
sequelize db:migrate
npm install
npm run dev
I expect that when I run Docker, the database creation and migration will happen.
entrypoint.sh (in here I get createdb: command not found)
Running createdb in the Node.js container will not work because it is a Postgres-specific command and it's not installed by default in the Node.js image.
If you specify the POSTGRES_DB: pg_development env var on the postgres container, the database will be created automatically when the container starts. So there is no need to run createdb in the entrypoint.sh that is mounted in the Node.js container.
In order to make sequelize db:migrate work you need to:
add sequelize-cli to dependencies in package.json
run npm install so it gets installed
run npx sequelize db:migrate
Here is a proposal:
# docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    working_dir: /restify-pg
    entrypoint: ["/bin/bash", "./entrypoint.sh"]
    image: node:10.12.0
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
# package.json
{
  ...
  "dependencies": {
    ...
    "pg": "^7.9.0",
    "pg-hstore": "^2.3.2",
    "sequelize": "^5.2.9",
    "sequelize-cli": "^5.4.0"
  }
}
# entrypoint.sh
npm install
npx sequelize db:migrate
npm run dev
If you can run your migrations from Node.js instead of Docker, then consider this solution instead.
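Depending on startup timing, npx sequelize db:migrate in the entrypoint can still run before Postgres is accepting connections (the same caveat discussed in the Vapor answer above). A minimal sketch of a guard at the top of entrypoint.sh, assuming the db service name and port from the compose file and that the entrypoint runs under bash as in the proposal, would be:
# block until the db service accepts TCP connections, then continue with the migration
until (echo > /dev/tcp/db/5432) 2>/dev/null; do
  echo "waiting for postgres..."
  sleep 1
done
This is only a sketch; a more robust alternative is to poll from Node with the pg package before calling sequelize.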