I am working on a project which uses Postgres as the backend. Jenkins is used for CI. So far only unit tests were included in the daily build, but now we want to include integration tests as well. This requires an API to access a database. The database is Postgres, and Liquibase is used as a publisher.
What I want to do, via Jenkins, is:
1. Create database,
2. Publish it using liquibase
3. Run Integration tests
I created this docker-compose file:
version: '3.4'
services:
  xyz.api:
    image: xyz.api
    build:
      context: .
      dockerfile: Dockerfile.ci
    ports:
      - 3001:3001
    depends_on:
      - publisher
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "xyz"
      POSTGRES_DB: "xyz_db"
  publisher:
    image: publish
    build:
      context: .
      dockerfile: Dockerfile-publish.ci
    entrypoint: ""
    depends_on:
      - postgresql
    command: ["./wait-for-it.sh", "db:5433"]
  postgresql:
    image: postgres:10.2
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "xyz"
      POSTGRES_DB: "xyz_db"
    ports:
      - target: 5432
        published: 5433
        protocol: tcp
networks:
  xyzapi:
Dockerfile-publish.ci:
FROM sequenceiq/liquibase:latest
WORKDIR /workspace
# copy project and restore as distinct layers
RUN liquibase --driver=org.postgresql.Driver \
    --classpath=/usr/local/bin/postgresql-9.3-1102.jdbc41.jar \
    --changeLogFile=xyz_ChangeLog.xml \
    --url=jdbc:postgresql://postgresql:5433/xyz_db \
    --username=postgres --password=xyz update
Running docker-compose up throws:
---> Running in 18c564dac2de
Unexpected error running Liquibase: org.postgresql.util.PSQLException: The connection attempt failed.
After trying a number of options, including docker-compose changes, I still could not get it working. I don't think I am on the right track here, and I am asking for a hint or direction on how to run integration tests in Jenkins for a project that depends on a database.
Related
I have the Docker Compose file below. I'm trying to run the following:
Set up Postgres
Run Entity Framework to set up my schemas/tables
Set up PG Admin
Run some SQL scripts on the database.
I can get the first three items done, no problem, but I'm not sure where to put the running of my SQL scripts. Right now it's on the last line of the YAML, but I'm sure this is wrong. Where would I put this? I'm not sure how to reference the database I set up earlier in order to run the SQL on it.
version: '3.8'
services:
  # SET UP POSTGRES
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: marmalade
      POSTGRES_PASSWORD: marmalade
      POSTGRES_DB: marmalade
    ports:
      - "15432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U marmalade"]
      interval: 5s
      timeout: 5s
      retries: 5
  # RUN ENTITY FRAMEWORK TO INITIALIZE DATABASE
  db-migrator:
    image: ${DOCKER_REGISTRY-}db-migrator
    build:
      context: ../../../
      dockerfile: src/marmalade/Dockerfile
    environment:
      - DOTNET_ENVIRONMENT=IntegrationTest
    depends_on:
      db:
        condition: service_healthy
  # SET UP PGADMIN
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: marmalade
    ports:
      - "5050:80"
    volumes:
      - ./servers.json:/pgadmin4/servers.json # preconfigured servers/connections
      - ./sql/admin_schema.sql:/docker-entrypoint-initdb.d/admin_schema.sql # <- WHERE DO I PUT THIS?
It's correct, but it needs to be in your db service.
Example:
services:
  my_db:
    image: postgres:latest
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
UPDATE:
The problem with running it in any other service is that it's not going to have the credentials to connect to the database. So you can just create a shell script and run it the old-fashioned way, like so:
services:
  some_service:
    image: your_image
    volumes:
      - ./init.sh:/init.sh
    entrypoint: sh -c "/init.sh"
assuming, of course, that you already have a shell installed in your image.
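For illustration, a minimal init.sh sketch. The host name db, the /sql/admin_schema.sql path, and the POSTGRES_* variables are assumptions here; adapt them to your own compose file. It waits until the database accepts connections, then applies the SQL:

#!/bin/sh
# init.sh - sketch: wait for the db service, then apply a SQL file.
# Assumes psql is installed in the image and credentials arrive via env vars.
set -e

until PGPASSWORD="$POSTGRES_PASSWORD" psql -h db -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

# Hypothetical path; mount your SQL file wherever you prefer.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h db -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f /sql/admin_schema.sql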
I am trying to make a docker-compose file that includes SonarQube and a Postgres database, and deploy it to Azure App Service.
Below is the docker-compose file:
version: '3.3'
services:
  db:
    image: postgres
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
  sonarqube:
    depends_on:
      - db
    image: sonarqube
    ports:
      - "9000:9000"
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    restart: always
    environment:
      SONARQUBE_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar
volumes:
  postgresql:
  postgresql_data:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
On my local machine everything is working as expected and I can access SonarQube. However, once I try to apply the docker-compose file in Azure App Service, I get the following entries in the log:
I tried to check if I can increase vm.max_map_count in App Service, but I didn't find a way to do so.
How can I resolve this issue? And is there at least a way to bypass this bootstrap check of vm.max_map_count?
It's not possible to increase vm.max_map_count in Azure App Service. You can bypass this bootstrap check by adding the following line to the environment variables section of the SonarQube service:
SONAR_ES_BOOTSTRAP_CHECKS_DISABLE: 'true'
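In the compose file above, that would sit in the sonarqube service, for example (only the relevant section shown; the rest stays as before):

sonarqube:
  # ... image, ports and volumes as before ...
  environment:
    SONARQUBE_JDBC_URL: jdbc:postgresql://db:5432/sonar
    SONARQUBE_JDBC_USERNAME: sonar
    SONARQUBE_JDBC_PASSWORD: sonar
    SONAR_ES_BOOTSTRAP_CHECKS_DISABLE: 'true'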
I am currently building a package to test some DevOps configurations with AWS, using an application built with Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds/tests/runs just fine with vapor build, vapor test, and vapor run, provided you have a local installation of PostgreSQL with username test and password test.
However, my API is not connecting to my DB, and I am worried my configuration is wrong.
version: "3.5"
services:
api:
container_name: vapor_it_container
build:
context: .
dockerfile: web.Dockerfile
image: api:dev
networks:
- vapor-it
environment:
POSTGRES_PASSWORD: 'test'
POSTGRES_DB: 'test'
POSTGRES_USER: 'test'
POSTGRES_HOST: db
POSTGRES_PORT: 5432
ports:
- 8080:8080
volumes:
- .:/app
working_dir: /app
stdin_open: true
tty: true
entrypoint: bash
restart: always
depends_on:
- db
db:
container_name: postgres_container
image: postgres:11.2-alpine
restart: unless-stopped
networks:
- vapor-it
ports:
- 5432:5432
environment:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_HOST: db
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
volumes:
- database_data:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: test#test.com
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT:-5050}:80"
networks:
- vapor-it
restart: unless-stopped
networks:
vapor-it:
driver: bridge
volumes:
database_data:
pgadmin:
# driver: local
Also, while reading the Docker postgres docs I came across this in the "Caveats" section:
If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously. (postgres on Docker Hub)
I have not made those changes because I am not sure how to go about making that file or how the configuration would look. Has anyone with experience connecting to PostgreSQL from Vapor as a back end done something like this?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That’s theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to come available, in which case you could use a wrapper script that polls your DB and only starts your main app after the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]
I have a problem running migrations with Knex.js inside my docker-compose container.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs right before the database is even created. Is there any way to say that I would only like to run npm run db after the database has been created?
NOTE: if I run these npm commands in the docker terminal after everything has been built, it all works fine. Just FYI.
here is my docker-compose.yml
version: '3.6'
services:
  # Backend API
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
and here is my Dockerfile
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
First, on the docker-compose.yml file, use sh to give your command a contained shell context to run in, i.e. command: sh -c 'npm run db'.
Secondly, use depends_on so the database container is started before the server.
Your docker-compose file would now be:
services:
  # Backend API
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start the postgres container before the server container. It will not, however, wait for postgres to be ready. In this case it shouldn't be a problem, because postgres starts really quickly.
If you want something more solid, or if depends_on doesn't do the trick, you can add an entrypoint wrapper script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. There are also links to existing tools there, so you don't have to write your own script from scratch.
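For example, a sketch that combines a healthcheck on postgres with the long depends_on syntax (this assumes a Compose version that supports condition: service_healthy; service names and credentials follow the question):

services:
  server:
    build: ./
    command: sh -c 'npm run db'
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test -d interapp"]
      interval: 5s
      timeout: 5s
      retries: 5

With this, the server container only starts once pg_isready succeeds, so npm run db runs against a database that is actually accepting connections.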
Here is my simple scenario: I have a simple Flask app that connects to Postgres this way:
SQLALCHEMY_DATABASE_URI='postgresql://username:secretpassword@postgres:5432/myproj'
And I have a simple docker-compose.yml:
version: '2'
services:
  postgres:
    image: postgres:latest
    volumes_from:
      - data
    environment:
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_USER: username
      POSTGRES_DB: myproj
    ports:
      - "5432:5432"
  web:
    build: .
    volumes_from:
      - app
    ports:
      - "5000:5000"
    depends_on:
      - postgres
  data:
    image: postgres:latest
    volumes:
      - /var/lib/postgresql/data
    command: "true"
  app:
    build: .
    volumes:
      - .:/myproj
    command: "true"
I need to launch a Flask script I made myself that creates the tables for my app:
export FLASK_APP='./myproj/__init__.py'
flask createdbs
I have put these two operations in the Dockerfile of my web service, but because my web service and the postgres service have a depends_on relationship, the postgres DB host is not available during the build phase.
Any suggestion on the best way to achieve this? I want to avoid hacks; I would prefer to respect a correct Docker workflow.
One way to do it is to use the "command" keyword:
https://docs.docker.com/compose/compose-file/#/command
(look also at the entrypoint keyword)
web:
  build: .
  volumes_from:
    - app
  ports:
    - "5000:5000"
  depends_on:
    - postgres
  command: sh -c "export FLASK_APP='./myproj/__init__.py' && flask createdbs"
Or use command just to launch your Flask script, and keep the export in your Dockerfile.
Note that "depends_on" only start one container before the other, but do not wait your postgres database to be ready. If you want to wait until postgres is ready to answer, you can use scripts like "wait-for-it.sh postgres:5432" that are well explained in docker-compose doc: https://docs.docker.com/compose/startup-order/