Here is my simple scenario: I have a simple Flask app that connects to Postgres this way:
SQLALCHEMY_DATABASE_URI='postgresql://username:secretpassword@postgres:5432/myproj'
And I have a simple docker-compose.yml:
version: '2'
services:
  postgres:
    image: postgres:latest
    volumes_from:
      - data
    environment:
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_USER: username
      POSTGRES_DB: myproj
    ports:
      - "5432:5432"
  web:
    build: .
    volumes_from:
      - app
    ports:
      - "5000:5000"
    depends_on:
      - postgres
  data:
    image: postgres:latest
    volumes:
      - /var/lib/postgresql/data
    command: "true"
  app:
    build: .
    volumes:
      - .:/myproj
    command: "true"
I need to launch a Flask script I wrote myself that creates the tables for my app:
export FLASK_APP='./myproj/__init__.py'
flask createdbs
I have put these 2 operations in the Dockerfile of my web service, but because my web service and the postgres service have a depends_on relationship, the postgres host is not available during the build phase.
Any suggestion on the best way to achieve this? I want to avoid hacks; I would prefer to respect a correct Docker workflow.
One way to do it is to use the "command" keyword:
https://docs.docker.com/compose/compose-file/#/command
(see also the entrypoint keyword)
web:
  build: .
  volumes_from:
    - app
  ports:
    - "5000:5000"
  depends_on:
    - postgres
  command: bash -c "export FLASK_APP='./myproj/__init__.py' && flask createdbs"
(Note: the plain string form of command is not run through a shell, so the export and && need an explicit bash -c.)
or use command just to launch your Flask script and keep the export in your Dockerfile, as sketched below.
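For that variant, a minimal Dockerfile sketch (hypothetical: the base image, paths, and requirements.txt are assumptions; the key point is using ENV rather than a RUN export, since ENV persists into the running container):

FROM python:3
WORKDIR /myproj
COPY . /myproj
RUN pip install -r requirements.txt    # assumes your dependencies are listed here
ENV FLASK_APP=./myproj/__init__.py     # persists at runtime, unlike "RUN export ..."
# No "flask createdbs" here: the database is only reachable at run time,
# so table creation stays in the compose "command:".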
Note that depends_on only starts one container before the other; it does not wait for your postgres database to be ready. If you want to wait until postgres is ready to accept connections, you can use a script like "wait-for-it.sh postgres:5432", which is well explained in the docker-compose docs: https://docs.docker.com/compose/startup-order/
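As a sketch, wiring that wrapper in could look like this (assumptions: wait-for-it.sh is copied into the image and executable, FLASK_APP is set in the Dockerfile, and "flask run" stands in for however you normally start the app):

web:
  build: .
  volumes_from:
    - app
  ports:
    - "5000:5000"
  depends_on:
    - postgres
  # wait-for-it.sh blocks until postgres:5432 accepts TCP connections,
  # then runs the command given after "--"
  command: bash -c "./wait-for-it.sh postgres:5432 -- flask createdbs && flask run --host=0.0.0.0"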
I am new to Azure cloud services, so excuse me if this is a dumb question.
I have a docker-compose file with a .NET Core Web API and a Postgres database. I have it running on Azure as a web app and it's working (I can see when I query the API that there's data in the database). However, I would like to access the database remotely so that I can inspect the data via pgAdmin or something similar.
I did bind a port to my pgAdmin site in my docker-compose, but that port does not seem to be open. I've read somewhere that only ports 80 and 443 can be exposed from Azure web apps when using multi-image containers. (This docker-compose works 100% locally, and I can access the pgAdmin site and see the database with all its tables.)
So my question is: how do I run my Web API with my Postgres database on Azure and keep visibility into my database?
Docker-compose file:
version: '3.8'
services:
  web:
    container_name: 'bootcampapi'
    image: 'myimage'
    build:
      context: .
      dockerfile: backend.dockerfile
    restart: always
    ports:
      - 8080:80
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - bootcampbackend-network
  postgres:
    container_name: 'postgres'
    restart: always
    image: 'postgres:latest'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
      - POSTGRES_DB=database-name
      - PGDATA=database-data
    networks:
      - bootcampbackend-network
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - bootcampbackend-network
    volumes:
      - database-other:/var/lib/pgadmin/
networks:
  bootcampbackend-network:
    driver: bridge
volumes:
  database-data:
  database-other:
As you have found, App Service only listens on one port. One solution around that is to use a reverse proxy like Nginx to route the traffic to both your containers.
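A minimal sketch of such an Nginx config (hypothetical: the upstream names come from your compose services, and path-based routing is only one possible layout; pgAdmin needs the X-Script-Name header when served under a sub-path):

server {
    listen 80;

    # API traffic goes to the web container (listening on port 80 internally)
    location /api/ {
        proxy_pass http://web:80/;
        proxy_set_header Host $host;
    }

    # pgAdmin under /pgadmin/ (dpage/pgadmin4 listens on port 80 internally)
    location /pgadmin/ {
        proxy_pass http://pgadmin:80/;
        proxy_set_header X-Script-Name /pgadmin;
        proxy_set_header Host $host;
    }
}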
By the way, build, depends_on and networks are not supported in App Service's multi-container (Docker Compose) configuration; see the documentation.
I have the Docker Compose file below. I'm trying to run the following:
Set up Postgres
Run Entity Framework to set up my schemas/tables
Set up PG Admin
Run some SQL scripts on the database.
I can get the first three items done, no problem, but I'm not sure where to put the running of my SQL scripts. Right now it's on the last line of the YAML, but I'm sure this is wrong. Where should it go? I'm also not sure how to reference the database I set up earlier so I can run the SQL against it.
version: '3.8'
services:
  # SET UP POSTGRES
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: marmalade
      POSTGRES_PASSWORD: marmalade
      POSTGRES_DB: marmalade
    ports:
      - "15432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U marmalade"]
      interval: 5s
      timeout: 5s
      retries: 5
  # RUN ENTITY FRAMEWORK TO INITIALIZE DATABASE
  db-migrator:
    image: ${DOCKER_REGISTRY-}db-migrator
    build:
      context: ../../../
      dockerfile: src/marmalade/Dockerfile
    environment:
      - DOTNET_ENVIRONMENT=IntegrationTest
    depends_on:
      db:
        condition: service_healthy
  # SET UP PGADMIN
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: marmalade
    ports:
      - "5050:80"
    volumes:
      - ./servers.json:/pgadmin4/servers.json # preconfigured servers/connections
      - ./sql/admin_schema.sql:/docker-entrypoint-initdb.d/admin_schema.sql # <- WHERE DO I PUT THIS?
It's correct, but it needs to be in your db service.
Example:
services:
  my_db:
    image: postgres:latest
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
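Note that the postgres entrypoint only runs scripts in /docker-entrypoint-initdb.d/ when the data directory is empty, i.e. on the very first initialization. A hypothetical init.sql, just to illustrate the shape:

-- Runs once, on first initialization, against the database named by POSTGRES_DB
CREATE TABLE IF NOT EXISTS example (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);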
UPDATE:
The problem with running it in any other service is that that service won't have the credentials to connect to the database. So you can just create a shell script and run it the old-fashioned way, like so:
services:
  some_service:
    image: your_image
    volumes:
      - ./init.sh:/init.sh
    entrypoint: sh -c "/init.sh"
assuming, of course, that you have a shell installed in your image.
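A rough sketch of such an init.sh (assumptions: psql is installed in the image, the POSTGRES_* variables are passed to this service via environment:, and the SQL file path is hypothetical):

#!/bin/sh
set -e
# Credentials have to be provided to this container explicitly,
# e.g. via the service's environment: section.
export PGPASSWORD="$POSTGRES_PASSWORD"
psql -h my_db -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f /admin_schema.sql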
I'm currently building a package to test some DevOps configurations with AWS, building an application with Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds/tests/runs just fine with vapor build, vapor test, and vapor run, provided you have a local installation of PostgreSQL with username: test, password: test.
However, my API is not connecting to my DB, and I'm worried my configuration is wrong.
version: "3.5"
services:
  api:
    container_name: vapor_it_container
    build:
      context: .
      dockerfile: web.Dockerfile
    image: api:dev
    networks:
      - vapor-it
    environment:
      POSTGRES_PASSWORD: 'test'
      POSTGRES_DB: 'test'
      POSTGRES_USER: 'test'
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
    ports:
      - 8080:8080
    volumes:
      - .:/app
    working_dir: /app
    stdin_open: true
    tty: true
    entrypoint: bash
    restart: always
    depends_on:
      - db
  db:
    container_name: postgres_container
    image: postgres:11.2-alpine
    restart: unless-stopped
    networks:
      - vapor-it
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      PGDATA: /var/lib/postgresql/data
    volumes:
      - database_data:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: test@test.com
      PGADMIN_DEFAULT_PASSWORD: admin
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - vapor-it
    restart: unless-stopped
networks:
  vapor-it:
    driver: bridge
volumes:
  database_data:
  pgadmin:
    # driver: local
Also, while reading the Docker Postgres docs, I came across this in the "Caveats" section (from the postgres page on Docker Hub):
"If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously."
I have not made those changes because I am not sure how to go about creating that file or what the configuration should look like. Has anyone with experience connecting Vapor to PostgreSQL done something like this?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That’s theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to come available, in which case you could use a wrapper script that polls your DB and only starts your main app after the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]
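Hooking that into your compose file could look roughly like this (a sketch, not verbatim from the docs: it assumes wait-for-postgres.sh is copied into the image and marked executable, and note that your current "entrypoint: bash" would swallow the command, so it has to go):

api:
  build:
    context: .
    dockerfile: web.Dockerfile
  environment:
    POSTGRES_PASSWORD: 'test'   # the wrapper script reads this to authenticate
  depends_on:
    - db
  # replaces "entrypoint: bash"; the script execs "vapor-app run" once db answers
  command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]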
I've configured my project to use Docker. I have a database that was in use before Docker, and now I want my docker-compose db service to connect to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the Docker container simply doesn't see the database). If I'm doing something nonsensical, please let me know. Maybe I should migrate my server DB into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p#ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # Mount the 'postgres-data' volume into the location where Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
volumes:
  postgres-data:
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use Postgres in Docker as the server DB, as you suggested.
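Migrating the existing data into the containerized Postgres could then be a one-off dump and restore, roughly like this (a sketch: the old server's port and user are assumptions you'd adjust; the container's 5432 is published as in your file, and psql will prompt for the password):

# Dump the pre-Docker database from the old server on the host
pg_dump -h localhost -p 5433 -U olduser mydb > mydb.sql   # 5433: wherever the old server listens

# Restore into the Dockerized Postgres (published on host port 5432)
psql -h localhost -p 5432 -U root -d mydb < mydb.sql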
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host's IP.
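To sanity-check the alias from inside the web container, something like this can help (assuming psql is available in the image; the user and database names are whatever your app actually uses):

# extra_hosts adds a "db" entry to /etc/hosts inside the container
docker-compose exec web getent hosts db
# then try an actual connection with your app's credentials
docker-compose exec web psql -h db -U youruser yourdb -c 'SELECT 1;'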
Regards
I have a multi-container application that uses the postgres image in its docker-compose.yml file. The Postgres container has a volume on the host machine for persistent storage.
When I run docker-compose up the first time, all is fine; Postgres creates its db files in my host folder.
After that, I need to shut down the application temporarily with docker-compose down whenever I change the code of the web container.
When I run docker-compose up a second time, Postgres overwrites all the db files, but I need that data to remain unchanged. How can I solve this issue?
My docker-compose.yml
version: '2'
services:
  web:
    build: ./web
    command: python3 main.py
    volumes:
      - ./web:/app
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
    links:
      - db:db
      - redis:redis
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=0000
    volumes:
      - ./pgdb:/var/lib/postgresql/data
  redis:
    image: redis
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - ./redisdb:/data
I solved this problem. It probably occurred because I had changed permissions on the pgdb directory as the host root user. By default I couldn't open pgdb on the host machine because its owner is the postgres user. I could be wrong, but after I stopped changing the permissions, the problem went away.
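If you run into the same thing, a quick way to check ownership without changing it (for the official postgres image, the in-container postgres user is typically uid 999):

# Show the numeric owner of the mounted data dir and its contents; leave it
# owned by the container's postgres user rather than chown-ing it on the host.
ls -ldn ./pgdb
sudo ls -ln ./pgdb | head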