How do I run SQL scripts after the DB is initialized from Docker Compose?

I have the Docker Compose file below. I'm trying to run the following:
1. Set up Postgres
2. Run Entity Framework to set up my schemas/tables
3. Set up pgAdmin
4. Run some SQL scripts on the database

I can get the first three items done no problem, but I'm not sure where to put the running of my SQL scripts. Right now it's on the last line of the YAML, but I'm sure this is wrong. Where would I put this? I'm not sure how to reference the database I'd set up earlier to run the SQL on.
version: '3.8'
services:
  # SET UP POSTGRES
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: marmalade
      POSTGRES_PASSWORD: marmalade
      POSTGRES_DB: marmalade
    ports:
      - "15432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U marmalade"]
      interval: 5s
      timeout: 5s
      retries: 5
  # RUN ENTITY FRAMEWORK TO INITIALIZE DATABASE
  db-migrator:
    image: ${DOCKER_REGISTRY-}db-migrator
    build:
      context: ../../../
      dockerfile: src/marmalade/Dockerfile
    environment:
      - DOTNET_ENVIRONMENT=IntegrationTest
    depends_on:
      db:
        condition: service_healthy
  # SET UP PGADMIN
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: marmalade
    ports:
      - "5050:80"
    volumes:
      - ./servers.json:/pgadmin4/servers.json # preconfigured servers/connections
      - ./sql/admin_schema.sql:/docker-entrypoint-initdb.d/admin_schema.sql # <- WHERE DO I PUT THIS?

It's correct, but it needs to be in your db service.
Example:
services:
  my_db:
    image: postgres:latest
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
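If you have several scripts, you can also mount the whole directory; the postgres image executes the *.sql, *.sql.gz, and *.sh files it finds in /docker-entrypoint-initdb.d in alphabetical order, and only on first start with an empty data directory. A minimal sketch (the ./sql directory name is illustrative):

services:
  my_db:
    image: postgres:latest
    volumes:
      - ./sql:/docker-entrypoint-initdb.d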
UPDATE:
The problem with running it in any other service is that it's not going to have the credentials to connect to the database. So you can just create a shell script and run it the old-fashioned way, like so:
services:
  some_service:
    image: your_image
    volumes:
      - ./init.sh:/init.sh
    entrypoint: sh -c "/init.sh"
assuming, of course, that you have a shell installed in your image.
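For reference, a minimal sketch of what such an init.sh could look like, assuming psql and pg_isready are available in the image; the host, credentials, and /sql/admin_schema.sql path reuse the question's values and are illustrative:

#!/bin/sh
set -e
# wait until the db service accepts connections
until pg_isready -h db -p 5432 -U marmalade; do
  sleep 1
done
# run the schema script against the already-initialized database
PGPASSWORD=marmalade psql -h db -p 5432 -U marmalade -d marmalade -f /sql/admin_schema.sql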

Related

Docker - Airflow: How can I init my Postgres scripts in the Airflow DB when I docker compose up?

I am testing some stuff where I have to init my Postgres DDL into the Airflow Postgres DB. When I compose up, it should automatically run the init once, since the data will be cached afterward, the way the Airflow DB usually works. Thanks
As requested in the last comment: Adding your own database to the Airflow Docker-compose file:
Put this piece of code as a service somewhere amongst the other services:
mypostgres:
  image: postgres:13
  environment:
    POSTGRES_USER: mydbuser
    POSTGRES_PASSWORD: securepassword
    POSTGRES_DB: mydb
  volumes:
    - ./database:/var/lib/postgresql/data
    - ./init-database.sh:/docker-entrypoint-initdb.d/init-database.sh
  restart: always
Make sure you have a database directory and an init-database.sh file in the current directory (otherwise the volume mappings fail).
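For reference, a minimal sketch of what init-database.sh could look like; the DDL is a placeholder. The postgres entrypoint runs it only once, when the data directory is empty, with the POSTGRES_* variables already in the environment:

#!/bin/bash
set -e
# runs during the image's first-start initialization
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<EOSQL
CREATE TABLE IF NOT EXISTS example_table (
    id   serial PRIMARY KEY,
    name text   NOT NULL
);
EOSQL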
I have found a solution that works and inits your scripts when you docker compose up.
Pro tip:
If you want to add more files and you have already initialized the Airflow DB (or your own DB), run docker-compose down --volumes. This automatically removes all the data in the data directory, and for the init to work the Postgres data directory has to be empty.
postgres:
  image: postgres:13
  environment:
    POSTGRES_USER: airflow
    POSTGRES_PASSWORD: airflow
    POSTGRES_DB: airflow
  ports:
    - "5432:5432"
  volumes:
    - postgres-db-volume:/var/lib/postgresql/data
    - /path/to/my/host/folder/filename.sql:/docker-entrypoint-initdb.d/filename.sql
  healthcheck:
    test: ["CMD", "pg_isready", "-U", "airflow"]
    interval: 5s
    retries: 5
  restart: always

volumes:
  postgres-db-volume:
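To apply the pro tip above, the reset flow would be the following; note that this deletes everything stored in postgres-db-volume:

docker-compose down --volumes
docker-compose up -d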

Why is the PGDATA directory empty?

I created a docker image with the following docker-compose.yml file
version: "3.7"
services:
db:
image: postgres:alpine
container_name: db
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_INITDB_ARGS: '-A md5'
volumes:
- ./pgdata:/var/lib/postgressql/data
ports:
- "5432:5432"
healthcheck:
test: ["CMD", "psql", "postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}#db/${POSTGRES_DB}"]
interval: 30s
timeout: 5s
retries: 5
api:
build: api
container_name: api
volumes:
- ./api/migrations:/migrations
ports:
- "8080:8080"
links:
- db
depends_on:
db:
condition: service_healthy
When I do docker-compose up, everything works fine. I am able to connect to Postgres, I can create tables, and I can query those tables.
The only problem is that the ./pgdata directory is empty! Why is that? Since I mounted volumes: - ./pgdata:/var/lib/postgressql/data, I should see files created in this directory as I create databases and tables, right?
I ran ls -al in the pgdata directory and it shows nothing.
I went to the postgres page on Docker Hub and checked the data directory.
https://hub.docker.com/_/postgres
In the postgres image, the data directory is set by the PGDATA environment variable, as follows.
Dockerfile
...
ENV PGDATA=/var/lib/postgresql/data
...
In other words, there is a typo (postgressql) in your docker-compose.yaml, and it will be fixed if you modify it as follows.
version: "3.7"
services:
db:
image: postgres:alpine
container_name: db
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_INITDB_ARGS: '-A md5'
volumes:
# - ./pgdata:/var/lib/postgressql/data
- ./pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
healthcheck:
test: ["CMD", "psql", "postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}#db/${POSTGRES_DB}"]
interval: 30s
timeout: 5s
retries: 5
[NOTE]
Apart from the question: the depends_on: condition: form that consumes the healthcheck result is only supported in Compose file version 2; it was dropped from the version 3 format (and only reinstated later by the Compose Specification).
Please refer to stackoverflow/docker-compose-healthcheck-does-not-work
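For reference, a quick way to verify the fix: recreate the stack and list the host directory, which should now contain base/, pg_wal/, postgresql.conf, and so on:

docker-compose down
docker-compose up -d
ls -al ./pgdata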

How to connect with DBeaver using Docker in a Windows 10 environment?

I am studying Docker, but I have an issue: I can't see the DB using DBeaver.
When I was using Linux before, I could write the same thing and it worked, but now I use Windows and it does not.
I made this Docker Compose file:
version: "3.8"
services:
backend:
build:
context: .
dockerfile: Dockerfile
# network: auth-module
volumes:
- ./src:/server/src
ports:
- 4000:4000
env_file:
- ./.env.dev
# command: "npm run prod"
links:
- postgres
postgres:
image: postgres:12
environment:
POSTGRES_USERNAME: "postgres"
POSTGRES_DB: "auth"
POSTGRES_PASSWORD: "1234"
ports:
- 5432:5432
networks:
default:
external:
name: auth-module
What is the problem, and how do I fix it?

Swift Vapor 3 + PostgreSQL + Docker Compose: correct configuration?

Currently building a package to test some DevOps configurations with AWS. Building an application with Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds/tests/runs just fine with vapor build, vapor test, and vapor run, given that you have a local installation of PostgreSQL with username test and password test.
However, my API is not connecting to my DB, and I am worried my configuration is wrong.
version: "3.5"
services:
api:
container_name: vapor_it_container
build:
context: .
dockerfile: web.Dockerfile
image: api:dev
networks:
- vapor-it
environment:
POSTGRES_PASSWORD: 'test'
POSTGRES_DB: 'test'
POSTGRES_USER: 'test'
POSTGRES_HOST: db
POSTGRES_PORT: 5432
ports:
- 8080:8080
volumes:
- .:/app
working_dir: /app
stdin_open: true
tty: true
entrypoint: bash
restart: always
depends_on:
- db
db:
container_name: postgres_container
image: postgres:11.2-alpine
restart: unless-stopped
networks:
- vapor-it
ports:
- 5432:5432
environment:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_HOST: db
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
volumes:
- database_data:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: test#test.com
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT:-5050}:80"
networks:
- vapor-it
restart: unless-stopped
networks:
vapor-it:
driver: bridge
volumes:
database_data:
pgadmin:
# driver: local
Also, while reading the Docker postgres docs, I came across this in the "Caveats" section:
If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously. (postgres on Docker Hub)
I have not made those changes because I am not sure how to go about making that file or how the configuration would look. Has anyone done something like this that has some experience with connecting to Postgresql and using vapor as a back end?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That’s theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to come available, in which case you could use a wrapper script that polls your DB and only starts your main app after the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh

set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd

command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]

How to initialize a database on a data volume container?

Here's my simple scenario: I have a simple Flask app that connects to Postgres this way:
SQLALCHEMY_DATABASE_URI='postgresql://username:secretpassword@postgres:5432/myproj'
And I have a simple docker-compose.yml:
version: '2'
services:
  postgres:
    image: postgres:latest
    volumes_from:
      - data
    environment:
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_USER: username
      POSTGRES_DB: myproj
    ports:
      - "5432:5432"
  web:
    build: .
    volumes_from:
      - app
    ports:
      - "5000:5000"
    depends_on:
      - postgres
  data:
    image: postgres:latest
    volumes:
      - /var/lib/postgresql/data
    command: "true"
  app:
    build: .
    volumes:
      - .:/myproj
    command: "true"
I need to launch a Flask script I made myself that creates the tables for my app:
export FLASK_APP='./myproj/__init__.py'
flask createdbs
I have put these two operations in the Dockerfile of my web service, but because my service and the postgres service have a depends_on relationship, the postgres host is not available during the build phase.
Any suggestions on the best way to achieve this? I want to avoid hacks; I would prefer to respect a correct Docker workflow.
One way to do it is to use the "command" keyword:
https://docs.docker.com/compose/compose-file/#/command
(look also at the entrypoint keyword)
web:
  build: .
  volumes_from:
    - app
  ports:
    - "5000:5000"
  depends_on:
    - postgres
  # command is not run through a shell by default, so wrap it in sh -c
  command: sh -c "export FLASK_APP='./myproj/__init__.py' && flask createdbs"
or use command just to launch your Flask script, and leave the export in your Dockerfile.
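In Dockerfile terms, that second option could look like this sketch:

ENV FLASK_APP=./myproj/__init__.py

With that in place, the compose command shrinks to command: "flask createdbs", since flask reads FLASK_APP from the environment.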
Note that "depends_on" only start one container before the other, but do not wait your postgres database to be ready. If you want to wait until postgres is ready to answer, you can use scripts like "wait-for-it.sh postgres:5432" that are well explained in docker-compose doc: https://docs.docker.com/compose/startup-order/