Docker losing Postgres server data when I shutdown the backend container - postgresql

I've got the following docker-compose file and it serves up the application on port 80 fine.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  static:
  postgres_data:
Once in there I can log into the admin and add an extra user, which gets saved to the database; I can reload the page and the user is still there. Once I stop the backend Docker container, however, that user is gone. Given that Postgres is running in a different container and I'm not bringing it down, I'm unsure how stopping and restarting the backend container causes the data to become unavailable.
Thanks in advance.
EDIT:
I'm bringing up the Docker containers with the following command:
docker-compose -f docker-compose.prod.yml up -d
I'm bringing down the container by just using Docker Desktop and stopping it.
I'm running Django 3 for the backend, and I've also tried adding a superuser in the terminal while the container is running:
# python manage.py createsuperuser
Username (leave blank to use 'root'): mikey
Email address:
Password:
Password (again):
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
This works, and the user appears while the container is running. However, once again, when I shut the container down via Docker Desktop and then restart it, the user that was just created is gone.
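(For anyone hitting the same thing: a quick way to narrow down where the rows are actually going is to check the named volume and query Postgres directly while the stack is up. These are generic diagnostic commands, not from the original post:
docker volume ls
docker-compose -f docker-compose.prod.yml exec db psql -U postgres -c "\dt"
docker-compose -f docker-compose.prod.yml exec backend env | grep SQL_
If the Django tables never show up in Postgres, the backend is writing somewhere else, for example to a SQLite file inside its own container, which disappears when that container is recreated.)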
FURTHER EDIT:
settings.py uses dotenv ("from dotenv import load_dotenv"):
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE"),
        "NAME": os.getenv("SQL_DATABASE"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST"),
        "PORT": os.getenv("SQL_PORT"),
    }
}
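For completeness, the dotenv wiring at the top of settings.py would typically look something like this (a sketch; only the import line is quoted above, and inside the container the env_file values are injected directly, so load_dotenv mainly matters when running outside Docker):
import os
from dotenv import load_dotenv
load_dotenv()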
The .env.prod file has the following values:
DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=db
SQL_PORT=5432
SOLUTION:
Read the comments to see the diagnosis by other legends, but the updated docker-compose file looks like this. Note the "depends_on" block on the backend service.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"
volumes:
  static:
  postgres_data:
FINAL EDIT:
Added the following code to my entrypoint.sh file to ensure Postgres is ready to accept connections before the backend starts.
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi
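Note that this assumes nc (netcat) is available in the backend image and that a DATABASE=postgres variable is set; neither is shown above, so both are assumptions. On a Debian-based Python image the Dockerfile line would be roughly:
RUN apt-get update && apt-get install -y --no-install-recommends netcat-openbsd
(or apk add --no-cache netcat-openbsd on Alpine), and DATABASE=postgres would go into .env.prod alongside the SQL_ variables.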

Related

Could not connect to local postgres DB from docker using pgadmin4

I tried connecting to my local postgres DB from docker using pgadmin4 but it failed with unable to connect to server: timeout expired. I have my server running, and I used the same properties that are mentioned in the docker-compose.yml file while connecting to server in pgadmin4.
This is my docker-compose.yml
version: "3"
services:
  web:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=app
      - DB_USER=postgres
      - DB_PASS=secretpassword
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secretpassword
This is the screenshot of the error I get in pgadmin4
I tried changing the host address to things like localhost, host.docker.internal, 127.0.0.1, and the IP address found by inspecting the Docker container, but I get the same result every time. I also tried adding pgadmin4 as a service in my docker-compose.yml file, but got the same result there too.
I am confused about what I am missing here.
Thanks in advance.
First, you didn't publish any port from the Postgres container, which listens on 5432 by default. And you can't connect on 8000, because that port is the binding for your application from the host.
Here is some description from the docker-compose ports documentation:
When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a base-60 value. For this reason, we recommend always explicitly specifying your port mappings as strings.
So you can try publishing port "5432:5432" from the Postgres DB container; pgadmin can then connect on port 5432.
version: "3"
services:
  web:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=app
      - DB_USER=postgres
      - DB_PASS=secretpassword
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=app
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secretpassword
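With the port published, connections from the host should work on localhost:5432; for example (a quick check, assuming the compose file above and the psql client installed on the host):
psql -h localhost -p 5432 -U postgres -d app
In pgadmin4 running on the host, the host address would then be localhost (or 127.0.0.1) and the port 5432. If pgadmin4 instead runs as another service inside the same compose file, it should use the service name db as the host and port 5432.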

Docker / Postgres - Trying to run 2 databases and 2 apis, cannot connect

I am trying to use my docker-compose file to run 2 instances of both my database and my REST API, so that I can run tests on a test instance of the database.
version: "3.8"
services:
  db:
    image: postgres:13.2-alpine
    container_name: "db-prod"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    networks:
      - fullstack
    volumes:
      - database_postgres:/var/lib/postgresql/data
  db_test:
    image: postgres:13.2-alpine
    container_name: "db-test"
    ports:
      - "5433:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    networks:
      - fullstack-test
    volumes:
      - database_postgres_test:/var/lib/postgresql/data
  api:
    build: .
    container_name: "rest-api"
    environment:
      DB_USERNAME: "postgres"
      DB_PASSWORD: "password"
      DB_HOST: "db-prod"
      DB_TABLE: "postgres"
      DB_DB: "postgres"
      DB_PORT: "5432"
    ports:
      - "8080:8080"
    depends_on:
      - db
    networks:
      - fullstack
  api_test:
    build: .
    container_name: "rest-api-test"
    environment:
      DB_USERNAME: "postgres"
      DB_PASSWORD: "password"
      DB_HOST: "db-test"
      DB_TABLE: "postgres"
      DB_DB: "postgres"
      DB_PORT: "5433"
    ports:
      - "8081:8080"
    depends_on:
      - db_test
    networks:
      - fullstack-test
volumes:
  database_postgres:
  database_postgres_test:
networks:
  fullstack:
    driver: bridge
  fullstack-test:
    driver: bridge
When I run this, my prod database starts, and my regular API connects to it fine.
My test DB also starts, and I can connect to it using
psql -U postgres -h localhost -p 5433
However, my test REST API fails to connect, with the error:
dial tcp 192.168.112.2:5433: connect: connection refused
The goal is to set up my Go tests to run against the test DB and just clear it after each test as needed, without affecting the prod DB.
I am not sure if I am going about this the right way - perhaps there is a better construct for this - and if so please correct me. But regardless, I do not understand why I'm getting this error.
I don't get why one connection works well and the other fails?
Edit: Also interesting, I just noticed that if I change the api_test container to use:
DB_HOST: "host.docker.internal"
it works. But I still don't understand why one can use a container name and the other cannot? And I can't leave it this way, as it needs to work on a Mac as well, and host.docker.internal doesn't work on my Mac (hence why the first one was changed to the container name).
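A likely explanation (not confirmed in this thread): a mapping like "5433:5432" only publishes the port on the host. Container-to-container traffic on a shared Docker network goes directly to the other container, so it uses the container's own port, which is 5432 for both databases. Addressing db-test by service name would then look roughly like this:
DB_HOST: "db-test"
DB_PORT: "5432"  # the container port, not the 5433 host mapping
host.docker.internal works because that route loops back through the host, where the 5433 mapping does exist.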

Airflow via docker-compose keeps trying to access sqlite although postgres configured

I'm trying to set up a Dockerized Airflow instance, but whatever I do (so far...) it keeps trying to access some sqlite3 database, and I don't know where that instruction comes from. I point to the Postgres instance everywhere (deemed) possible through AIRFLOW__CORE__SQL_ALCHEMY_CONN, and even AIRFLOW_CONN_METADATA_DB.
A typical error message when starting up is like:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: job
Full docker-compose.yml:
version: '3'
x-airflow-common:
  &airflow-common
  image: apache/airflow:2.0.0
  environment:
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW_CONN_METADATA_DB=postgres+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW__CORE__FERNET_KEY=FB0o_zt4e3Ziq3LdUUO7F2Z95cvFFx16hU8jTeR1ASM=
    - AIRFLOW__CORE__LOAD_EXAMPLES=True
    - AIRFLOW__CORE__LOGGING_LEVEL=INFO
  volumes:
    - /home/x/docker/airflow/dags:/opt/airflow/dags
    - /home/x/docker/airflow/airflow-data/logs:/opt/airflow/logs
    - /home/x/docker/airflow/airflow-data/plugins:/opt/airflow/plugins
    - /home/x/docker/airflow/airflow-data/airflow.cfg:/opt/airlfow/airflow.cfg
  depends_on:
    - db
services:
  db:
    image: postgres:12
    #image: postgres:12.1-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=airflow
      - POSTGRES_PORT=9501
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 9501:9501
    command:
      - -p 9501
  airflow-init:
    << : *airflow-common
    container_name: airflow_init
    entrypoint: /bin/bash
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    command:
      - -c
      - airflow users list || ( airflow db init &&
        airflow users create
          --role Admin
          --username airflow
          --password airflow
          --email airflow@airflow.com
          --firstname airflow
          --lastname airflow )
    restart: on-failure
  airflow-webserver:
    << : *airflow-common
    command: airflow webserver
    ports:
      - 9500:8080
    container_name: airflow_webserver
    environment:
      - AIRFLOW_USERNAME=airflow
      - AIRFLOW_PASSWORD=airflow
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
  airflow-scheduler:
    << : *airflow-common
    command: airflow scheduler
    container_name: airflow_scheduler
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
Solved by following this docker-compose.yaml file:
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
And instead of trying to tweak the ports of Postgres (and Redis), I used the "expose" option, which avoids conflicts with other containers on the same host.
So not:
environment:
  POSTGRES_PORT: 9501
ports:
  - 9501:9501
But run it (internally) on the default port and do not try to publish it externally:
expose:
  - 5432
I'm still not sure what the problem was with using the higher ports. It may be some default fallback to sqlite when the configured DB cannot be connected to for some reason.
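One way to check which database Airflow actually resolves (assuming Airflow 2.x, where this CLI subcommand exists) is to print the effective setting inside a running container:
docker-compose exec airflow-webserver airflow config get-value core sql_alchemy_conn
If that prints a sqlite:// URL, the AIRFLOW__CORE__SQL_ALCHEMY_CONN variable is not reaching the process, for example because it is set on the wrong service or overridden elsewhere.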

Swift Vapor 3 + PostgreSQL + Docker-Compose Correct configuration?

Currently building a package to test some DevOps configurations with AWS. Building an application with Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds/tests/runs just fine with vapor build, vapor test, and vapor run, given that you have a local installation of PostgreSQL with username: test, password: test.
However, my API is not connecting to my DB and I'm worried my configuration is wrong.
version: "3.5"
services:
  api:
    container_name: vapor_it_container
    build:
      context: .
      dockerfile: web.Dockerfile
    image: api:dev
    networks:
      - vapor-it
    environment:
      POSTGRES_PASSWORD: 'test'
      POSTGRES_DB: 'test'
      POSTGRES_USER: 'test'
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
    ports:
      - 8080:8080
    volumes:
      - .:/app
    working_dir: /app
    stdin_open: true
    tty: true
    entrypoint: bash
    restart: always
    depends_on:
      - db
  db:
    container_name: postgres_container
    image: postgres:11.2-alpine
    restart: unless-stopped
    networks:
      - vapor-it
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      PGDATA: /var/lib/postgresql/data
    volumes:
      - database_data:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: test@test.com
      PGADMIN_DEFAULT_PASSWORD: admin
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - vapor-it
    restart: unless-stopped
networks:
  vapor-it:
    driver: bridge
volumes:
  database_data:
  pgadmin:
    # driver: local
Also while reading the Docker postgres docs I came across this in the "Caveats" section.
If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously. (postgres Docker Hub page)
I have not made those changes because I am not sure how to go about making that file or how the configuration would look. Has anyone with experience connecting to PostgreSQL and using Vapor as a back end done something like this?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That’s theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to come available, in which case you could use a wrapper script that polls your DB and only starts your main app after the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh
set -e
host="$1"
shift
cmd="$@"
until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
>&2 echo "Postgres is up - executing command"
exec $cmd
command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]
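One caveat, also noted in the Docker docs: the wrapper runs inside the api container, so the psql client has to be installed in that image. Since web.Dockerfile isn't shown here this is an assumption, but on a Debian/Ubuntu-based image it would mean something like:
RUN apt-get update && apt-get install -y --no-install-recommends postgresql-client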

How to connect to localhost postgres database from docker container?

I've configured my project to use Docker. I have a database that was used in the non-Docker period, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the Docker container simply doesn't see the database). If I'm doing something nonsensical please let me know. Maybe I should migrate my server DB into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p@ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use Postgres in Docker as the server DB, as you suggested.
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host IP.
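You can verify the mapping from inside the container (a quick check, not part of the original answer):
docker-compose exec web getent hosts db
which should print the IP configured in extra_hosts. Also make sure the Postgres server on the host listens on that interface (listen_addresses in postgresql.conf) and allows connections from the Docker network in pg_hba.conf; otherwise the container will reach the host but the connection will be refused.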
Regards