Currently building a package to test some DevOps configurations with AWS. The application is built with Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds/tests/runs just fine with vapor build, vapor test, and vapor run, provided you have a local PostgreSQL installation with username test and password test.
However, my API is not connecting to my DB and I'm worried my configuration is wrong.
version: "3.5"
services:
api:
container_name: vapor_it_container
build:
context: .
dockerfile: web.Dockerfile
image: api:dev
networks:
- vapor-it
environment:
POSTGRES_PASSWORD: 'test'
POSTGRES_DB: 'test'
POSTGRES_USER: 'test'
POSTGRES_HOST: db
POSTGRES_PORT: 5432
ports:
- 8080:8080
volumes:
- .:/app
working_dir: /app
stdin_open: true
tty: true
entrypoint: bash
restart: always
depends_on:
- db
db:
container_name: postgres_container
image: postgres:11.2-alpine
restart: unless-stopped
networks:
- vapor-it
ports:
- 5432:5432
environment:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_HOST: db
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
volumes:
- database_data:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: test#test.com
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT:-5050}:80"
networks:
- vapor-it
restart: unless-stopped
networks:
vapor-it:
driver: bridge
volumes:
database_data:
pgadmin:
# driver: local
Also, while reading the Docker postgres docs I came across this in the "Caveats" section:
If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously. (postgres on Docker Hub)
I have not made those changes because I am not sure how to go about writing that file or what the configuration would look like. Has anyone with experience connecting PostgreSQL to a Vapor backend done something like this?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That's theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to become available, in which case you could use a wrapper script that polls your DB and only starts your main app once the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]
Related
I have the Docker Compose file below. I'm trying to run the following:
Set up Postgres
Run Entity Framework to set up my schemas/tables
Set up PG Admin
Run some SQL scripts on the database.
I can get the first three items done, no problem, but I'm not sure where to put the running of my SQL scripts. Right now it's on the last line of the YAML, but I'm sure this is wrong. Where would I put it? I'm not sure how to reference the database I set up earlier to run the SQL against.
version: '3.8'
services:
  # SET UP POSTGRES
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: marmalade
      POSTGRES_PASSWORD: marmalade
      POSTGRES_DB: marmalade
    ports:
      - "15432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U marmalade"]
      interval: 5s
      timeout: 5s
      retries: 5
  # RUN ENTITY FRAMEWORK TO INITIALIZE DATABASE
  db-migrator:
    image: ${DOCKER_REGISTRY-}db-migrator
    build:
      context: ../../../
      dockerfile: src/marmalade/Dockerfile
    environment:
      - DOTNET_ENVIRONMENT=IntegrationTest
    depends_on:
      db:
        condition: service_healthy
  # SET UP PGADMIN
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: marmalade
    ports:
      - "5050:80"
    volumes:
      - ./servers.json:/pgadmin4/servers.json # preconfigured servers/connections
      - ./sql/admin_schema.sql:/docker-entrypoint-initdb.d/admin_schema.sql # <- WHERE DO I PUT THIS?
It's correct, but it needs to be in your db service.
Example:
services:
  my_db:
    image: postgres:latest
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
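Note that the postgres image runs /docker-entrypoint-initdb.d scripts only on the first start against an empty data directory; with an existing volume they are silently skipped. During development you can force a re-run by recreating the volume:

# WARNING: -v deletes the named volumes, i.e. all database data
docker-compose down -v
docker-compose up --build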
UPDATE:
The problem with running it in any other service is that that service won't have the credentials to connect to the database. So you can just create a shell script and run it the old-fashioned way, like so:
services:
  some_service:
    image: your_image
    volumes:
      - ./init.sh:/init.sh
    entrypoint: sh -c "/init.sh"
assuming, of course, that you have a shell installed in your image.
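For illustration, a minimal init.sh in that spirit might look like the sketch below — hypothetical paths, reusing the marmalade credentials from the question, and assuming the psql client is installed in your_image and the SQL file is mounted at /sql/admin_schema.sql:

#!/bin/sh
set -e

# wait until the db service accepts connections
until PGPASSWORD=marmalade psql -h db -U marmalade -d marmalade -c '\q'; do
  >&2 echo "waiting for postgres..."
  sleep 1
done

# apply the schema script
PGPASSWORD=marmalade psql -h db -U marmalade -d marmalade -f /sql/admin_schema.sql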
I have tried searching other questions, but the solutions aren't cutting it.
I have a java spring boot application running inside docker, using the command below:
docker run -p 8080:80 -v C:/Users/USER/Desktop/brapi:/home/brapi/properties --network=brapi_network -d brapicoordinatorselby/brapi-java-server:v2
The container is running. However, when I click 'open in browser', the browser says:
This page isn’t working
localhost didn’t send any data. (ERR_EMPTY_RESPONSE)
What am I missing here? I tried to find my YAML file but I couldn't (beginner here).
Any help would be much appreciated.
Spring Boot's default port is 8080.
My best guess is you're trying to map the container's port 8080 to your host's port 80, in which case flip your port syntax.
If you have any other arguments, don't forget to include them too, in this syntax:
docker run \
-p 80:8080 \
-e JAVA_OPTS="-Dspring.profiles.active=dev" \
-v C:/Users/USER/Desktop/brapi:/home/brapi/properties \
--network=brapi_network \
-d brapicoordinatorselby/brapi-java-server:v2
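To confirm the mapping took effect, check the published ports and hit the app from the host:

docker ps --format 'table {{.Names}}\t{{.Ports}}'
curl -i http://localhost/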
I faced the same issue.
You need to have pgAdmin (a graphical client for Postgres) in your docker-compose.yml file.
In my case I wrote my docker-compose.yml like this and it works properly:
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: amigoscode
      POSTGRES_PASSWORD: password
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
    networks:
      - postgres
    restart: unless-stopped
networks:
  postgres:
    driver: bridge
volumes:
  postgres:
  pgadmin:
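Once both containers are up, pgAdmin is reachable at http://localhost:5050. When registering the server inside pgAdmin, use the compose service name as the host, since both containers sit on the same postgres network (values below match the compose file above):

Host name/address: postgres
Port: 5432
Username: amigoscode
Password: password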
I am trying to use my docker-compose file to run two instances each of my database and my REST API, so that I can run tests against a test instance of the database.
version: "3.8"
services:
db:
image: postgres:13.2-alpine
container_name: "db-prod"
ports:
- "5432:5432"
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
networks:
- fullstack
volumes:
- database_postgres:/var/lib/postgresql/data
db_test:
image: postgres:13.2-alpine
container_name: "db-test"
ports:
- "5433:5432"
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
networks:
- fullstack-test
volumes:
- database_postgres_test:/var/lib/postgresql/data
api:
build: .
container_name: "rest-api"
environment:
DB_USERNAME: "postgres"
DB_PASSWORD: "password"
DB_HOST: "db-prod"
DB_TABLE: "postgres"
DB_DB: "postgres"
DB_PORT: "5432"
ports:
- "8080:8080"
depends_on:
- db
networks:
- fullstack
api_test:
build: .
container_name: "rest-api-test"
environment:
DB_USERNAME: "postgres"
DB_PASSWORD: "password"
DB_HOST: "db-test"
DB_TABLE: "postgres"
DB_DB: "postgres"
DB_PORT: "5433"
ports:
- "8081:8080"
depends_on:
- db_test
networks:
- fullstack-test
volumes:
database_postgres:
database_postgres_test:
networks:
fullstack:
driver: bridge
fullstack-test:
driver: bridge
When I run this, my prod database starts, and my regular API connects to it fine.
My test DB also starts, and I can connect to it using
psql -U postgres -h localhost -p 5433
however my test REST API fails to connect, with:
dial tcp 192.168.112.2:5433: connect: connection refused
The goal is to set up my Go tests to run against the test DB and just clear it after each test as needed, without affecting the prod DB.
I am not sure if I am going about this the right way - perhaps there is a better construct for this - and if so please correct me. But regardless, I do not understand why I'm getting this error.
I don't get why one connection works fine and the other fails.
Edit: Also interesting, I just noticed that if I change the api_test container to use:
DB_HOST: "host.docker.internal"
it works. But I still don't understand why one can use a container name and the other cannot. And I can't leave it this way, as it needs to work on a Mac as well, and host.docker.internal doesn't work on my Mac (hence why the first one was changed to the container name).
I've got the following docker-compose file and it serves up the application on port 80 fine.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  static:
  postgres_data:
Once in there I can log into the admin and add an extra user, which gets saved to the database - I can reload the page and the user is still there. Once I stop the backend docker container, however, that user is gone. Given that Postgres runs in a different container and I'm not bringing it down, I'm unsure how stopping and restarting the backend container causes the data to become unavailable.
Thanks in advance.
EDIT:
I'm bringing up the containers with the following command:
docker-compose -f docker-compose.prod.yml up -d
I'm bringing the backend container down by just stopping it in Docker Desktop.
I'm running Django 3 for the backend and I've also tried adding a superuser in the terminal while the container is running:
# python manage.py createsuperuser
Username (leave blank to use 'root'): mikey
Email address:
Password:
Password (again):
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
Which works and the user appears while the container is running. However, once again when I shut the container down via docker desktop and then restart it that user that was just created is gone.
FURTHER EDIT:
settings.py reads its values with python-dotenv ("from dotenv import load_dotenv"):
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE"),
        "NAME": os.getenv("SQL_DATABASE"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST"),
        "PORT": os.getenv("SQL_PORT"),
    }
}
with the .env.prod file having the following values:
DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=db
SQL_PORT=5432
SOLUTION:
Read the comments to see the diagnosis by other legends, but the updated docker-compose file looks like this. Note the "depends_on" block.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"
volumes:
  static:
  postgres_data:
FINAL EDIT:
Added the following code to my entrypoint.sh file to ensure Postgres is ready to accept connections from the backend container:
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
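For completeness, a script like that only helps if it is actually the container's entrypoint and hands control to the main command afterwards. A minimal sketch of that wiring, assuming the file is named entrypoint.sh and netcat (nc) is available in the image:

# Dockerfile (excerpt)
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

# end of entrypoint.sh: run whatever command docker-compose passed in
exec "$@"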
Here's my simple scenario: I have a simple Flask app that connects to Postgres this way:
SQLALCHEMY_DATABASE_URI='postgresql://username:secretpassword@postgres:5432/myproj'
And I have a simple docker-compose.yml:
version: '2'
services:
  postgres:
    image: postgres:latest
    volumes_from:
      - data
    environment:
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_USER: username
      POSTGRES_DB: myproj
    ports:
      - "5432:5432"
  web:
    build: .
    volumes_from:
      - app
    ports:
      - "5000:5000"
    depends_on:
      - postgres
  data:
    image: postgres:latest
    volumes:
      - /var/lib/postgresql/data
    command: "true"
  app:
    build: .
    volumes:
      - .:/myproj
    command: "true"
I need to launch a Flask script I made myself that creates the tables for my app:
export FLASK_APP='./myproj/__init__.py'
flask createdbs
I have put these two operations in the Dockerfile of my web service, but because my service and the postgres service have a depends_on relationship, the postgres DB host is not available during the build phase.
Any suggestion on the best way to achieve this? I want to avoid hacks; I would prefer to respect a correct Docker workflow.
One way to do it is to use the "command" keyword:
https://docs.docker.com/compose/compose-file/#/command
(see also the entrypoint keyword)
web:
  build: .
  volumes_from:
    - app
  ports:
    - "5000:5000"
  depends_on:
    - postgres
  command: sh -c "export FLASK_APP='./myproj/__init__.py' && flask createdbs"
or use command just to launch your Flask script and put the export in your Dockerfile (e.g. as an ENV instruction).
Note that "depends_on" only starts one container before the other; it does not wait for your postgres database to be ready. If you want to wait until postgres is ready to answer, you can use a script like "wait-for-it.sh postgres:5432", as explained in the docker-compose docs: https://docs.docker.com/compose/startup-order/