Start database container with mounted config file in GitHub Actions workflow - github

I'd like to run some integration tests against a real database, but I'm failing to start the additional container (for the db), because I need to mount a config file from my repo before it starts up.
This is how I use the database on my local computer (docker-compose):
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
I guess I cannot use a service container, as the repository code is not yet available at the time the service container is started, so it wouldn't pick up my configuration.
That's why I tried to run the container myself inside the job using --network host (see below). The container seems to be running fine, but I'm still not able to curl it.
- name: Start DB for tests
run: |
docker run -d \
--network host \
-v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
-v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
tinkerpop/gremlin-server:3.5
- name: Test connection
run: |
curl "localhost:8182/gremlin?gremlin=g.V().valueMap()"
According to the documentation on the job context, the id of the container network should be available as ${{ job.container.network }}, but it is empty if you don't use any job-level service or container.
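For reference, this is roughly how that network id would be passed to docker run if the job did define a container or service (just a sketch; with neither defined, the expression below evaluates to an empty string):
- name: Start DB for tests
  run: |
    docker run -d \
      --network ${{ job.container.network }} \
      -v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
      -v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
      tinkerpop/gremlin-server:3.5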
Any ideas what I could try next?

This is what I ended up with: I'm now using docker-compose to run the integration tests (on my local computer as well as on GitHub Actions), and I'm simply mounting the entire directory/repo into the test container. Pulling the node:14-slim image delays the build by a few seconds, but I guess it's still the best option:
version: "3.2"
services:
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./data/:/opt/gremlin-server/data/
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
- ./conf/initData.groovy:/opt/gremlin-server/scripts/initData.groovy
test:
image: node:14-slim
working_dir: /app
depends_on:
- gremlin-server
networks:
- graphdb_net
volumes:
- ../:/app
environment:
- NEPTUNE_CONNECTION_STRING=ws://gremlin-server:8182
command:
yarn test
networks:
graphdb_net:
driver: bridge
and I'm running them like this in my workflow:
- name: Spin up test environment
run: |
docker compose -f dev/docker-compose.yaml pull
docker compose -f dev/docker-compose.yaml build
- name: Run tests
run: |
docker compose -f dev/docker-compose.yaml run test
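If you also want to tear the environment down at the end of the job (not part of the original workflow, just a common follow-up), a step along these lines could be appended:
- name: Tear down test environment
  if: always()
  run: |
    docker compose -f dev/docker-compose.yaml down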
It's based on @DannyB's suggestion and his answer here, so all props go to him.

Related

Bug: Gitlab-runner always restarts

I have a problem with the deployment of gitlab-runner on my infrastructure with docker-compose.
I want to register my gitlab-runner automatically, but when I start my compose everything is fine at first, and then my container is destroyed.
This is my compose file:
version: '3.6'
services:
gitlab-runner:
image: gitlab/gitlab-runner:latest
container_name: gitlab-runner
restart: 'no'
depends_on:
- gitlab
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /myrepository/gitlab-runner:/etc/gitlab-runner
command:
- register
- --non-interactive
- --url=MY_GITLAB_URL
- --registration-token=MY_TOKEN
- --executor=docker
- --docker-image=ruby:2.7
- --name=myrunner
- --docker-pull-policy=always
- --locked=false
- --run-untagged=false
- --docker-privileged=false
- --limit=0
- --tag-list=general,test
networks:
- gitlab
If I launch my runner without the "command" block in my docker-compose file, it's OK: it stays alive and I can docker exec "gitlab-runner register" without losing my container.
If I launch my docker-compose file with the "command" block, the container registers a new runner (I can see the runner created on my GitLab), but the gitlab-runner container is instantly destroyed.
Do you have any explanation for that, and a solution?
Thanks
I guess the container is destroyed as soon as the command ends, because that's how containers work: you must have a foreground process running inside the container if you want it to stay alive.
Try something like this (since the image's default entrypoint forwards its arguments to gitlab-runner, you may additionally need to override the entrypoint so that /bin/sh runs directly):
tty: true
command:
- /bin/sh
- -c
- |
gitlab-runner register --non-interactive --url=MY_GITLAB_URL --registration-token=MY_TOKEN --executor=docker --docker-image=ruby:2.7 --name=myrunner --docker-pull-policy=always --locked=false --run-untagged=false --docker-privileged=false --limit=0 --tag-list=general,test &&
sleep infinity
More info:
Docker Compose keep container running
How chain sleep command in docker compose?

How to copy mongo data that's stored in a remote server to my local docker container?

I'm new to mongo, and I have a web app that uses mongo to store data. I can get the app to run with docker-compose, but the data gets left out when I do. The mongo data lives on a remote host, and I need to copy all of it into the mongo container so that the dockerized app runs with the same data.
I've attempted to dump the data from the remote host onto the container, based on some code I found while researching this.
# Backup DB
docker run \
--rm \
--link running_mongo:mongo \
-v /data/mongo/backup:/backup \
mongo \
bash -c 'mongodump --out /backup --host 10.22.150.7:27017'
# Download the dump
scp -r jsikala@10.22.150.7:/data/mongo/backup ./backup
The result I got from doing that is:
[jsikala@koala-jsikala koala]$ docker run --rm --link running_mongo:3.2.0 -v /data/mongo/backup:/backup mongo bash -c 'mongodump --out /backup --host 10.22.150.7:27017'
Unable to find image 'mongo:latest' locally
latest: Pulling from library/mongo
Digest: sha256:93c98ffc714faa1fa501297d35670a62835dbb7e62243cee0c491433ea523f30
Status: Image is up to date for mongo:latest
docker: Error response from daemon: could not get container for running_mongo: No such container: running_mongo.
See 'docker run --help'.
I assume I did something trivially wrong.
This is my docker-compose file, for a bit of context on what is supposed to happen:
version: "3"
volumes:
data:
external:
name: ${MONGO_VOLUME_NAME}
services:
rails:
image: rails2
container_name: koala_rails_${USER}
environment:
- KOALA_ENV
- RAILS_PORT
- KOALA_INGEST_URL=${INGEST_PROTOCOL}://ingest:${INGEST_PORT}
- KOALA_MONGO_URL=mongo_service:27017
- KOALA_REDIS_URL=redis_service:6379
- KOALA_PKI_IN_DEV
- KOALA_USER_ID_HEADER
- USER
- USERNAME
- KOALA_REGISTER_USER_URL
- KOALA_SECURITY_VALIDATOR_URL
- CERT_FILE_PEM=/usr/src/app/certs/public.pem
- PRIVATE_CERT_FILE_PEM=/usr/src/app/certs/private-key.pem
- SSL_CA_FILE=/usr/src/app/certs/ca.pem
- LOGNAME
- KOALA_SECRET_KEY_BASE
- KOALA_MONGO_USERNAME
- KOALA_MONGO_PASSWORD
- KOALA_HELP_URL
- KOALA_CONTACT_EMAIL
- KOALA_USE_CERTS
- BUNDLE_GEMFILE
- KOALA_SERVER_URL
- RAILS_SERVE_STATIC_FILES
- RAILS_LOG_TO_STDOUT
ports:
- "${RAILS_PORT}:${RAILS_PORT}"
volumes:
- ${CERT_FILE_PEM}:/usr/src/app/certs/public.pem
- ${PRIVATE_CERT_FILE_PEM}:/usr/src/app/certs/private-key.pem
- ${SSL_CA_FILE}:/usr/src/app/certs/ca.pem
links:
- mongo_service
- redis_service
- ingest
depends_on:
- mongo_service
- redis_service
mongo_service:
image: mongo:3.2.0
volumes:
- data:/data/db
ports:
- "27017:27017"
redis_service:
image: redis
restart: always
ports:
- "6379:6379"
ingest:
image: ingest
container_name: koala_ingest_${USER}
extra_hosts:
- csie.as.northgrum.com:10.8.131.12
environment:
- KOALA_ENV
- KOALA_CONFIG_FILE=/go/config.yml
- INGEST_PORT
- LOGNAME
- KOALA_JIRA_URL
- KOALA_JIRA_SESSION_URL
- CERT_FILE_PEM=/go/certs/public.pem
- PRIVATE_CERT_FILE_PEM=/go/certs/private-key.pem
- SSL_CA_FILE=/go/certs/ca.pem
- KOALA_REDIS_URL=redis_service:6379
- KOALA_MONGO_URL=mongo_service:27017
- KOALA_USE_CERTS
- KOALA_MONGO_USERNAME
- KOALA_MONGO_PASSWORD
- JIRA_USERNAME=jsikala
- JIRA_PASSWORD=changeme123
ports:
- "${INGEST_PORT}:${INGEST_PORT}"
volumes:
- ${CERT_FILE_PEM}:/go/certs/public.pem
- ${PRIVATE_CERT_FILE_PEM}:/go/certs/private-key.pem
- ${SSL_CA_FILE}:/go/certs/ca.pem
links:
- mongo_service
depends_on:
- mongo_service
- redis_service
Essentially, once the docker-compose file is run, the app should deploy with the same data it has on the remote host. Since I can't seem to get the data that's on the remote host exported/dumped onto my container, the app doesn't have the data it needs.
docker run is the command you'd use to start a new mongo container. You shouldn't need to start a new container just to dump data from an existing one.
If you want to run a command within the container, run docker ps to find the name of your container and then docker exec to run a command within it (or to connect to a shell inside it).
You shouldn't need to connect to the container at all to run mongoexport, though -- you should be able to just run mongoexport with the correct host, port and creds to dump your data.
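As a minimal sketch of that last suggestion, using mongodump/mongorestore as in the question (assuming the remote mongod at 10.22.150.7:27017 is reachable from your machine, the MongoDB database tools are installed locally, and the compose stack above is running so mongo_service is published on localhost:27017):
# dump everything from the remote mongod straight into a local directory
mongodump --host 10.22.150.7 --port 27017 --out ./backup
# restore that dump into the dockerized mongo started by docker-compose
mongorestore --host localhost --port 27017 ./backup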

Gitlab CI with Docker images - Flask microservice testing database

I am trying to deploy a Flask app/service, which is built into a docker container, to GitLab CI. I am able to get everything working via docker-compose, except when I try to run tests against the postgres database I get the error below:
Is the server running on host "events_db" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
Presumably this is because the containers can't see each other. I've tried many different methods; below is my latest. I have attempted to have docker-compose spin up both containers (just like it does locally), to run the postgres db as a GitLab service, to run from a python image instead of a docker image, and to use a docker.prod.yml where I remove the volumes and variables.
Nothing is working. I've checked just about every link that shows up on Google when you search for 'gitlab ci docker flask postgres', and I believe that I am massively misunderstanding the implementation.
I do have a GitLab Runner up and going.
.gitlab-ci.yml
image: docker:latest
services:
- docker:dind
- postgres:latest
stages:
- test
variables:
POSTGRES_DB: events_test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
DATABASE_URL: postgres://postgres@postgres:5432/events_test
FLASK_ENV: development
APP_SETTINGS: app.config.TestingConfig
DOCKER_COMPOSE_VERSION: 1.23.2
before_script:
#- rm /usr/local/bin/docker-compose
- apk add --no-cache py-pip python-dev libffi-dev openssl-dev gcc libc-dev make
- pip install docker-compose
#- mv docker-compose /usr/local/bin
- docker-compose up -d --build
test:
stage: test
#coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
script:
- docker-compose exec -T events python manage.py test
after_script:
- docker-compose down
docker-compose.yml
version: '3.3'
services:
events:
build:
context: ./services/events
dockerfile: Dockerfile
volumes:
- './services/events:/usr/src/app'
ports:
- 5001:5000
environment:
- FLASK_ENV=development
- APP_SETTINGS=app.config.DevelopmentConfig
- DATABASE_URL=postgres://postgres:postgres@events_db:5432/events_dev # new
- DATABASE_TEST_URL=postgres://postgres:postgres@events_db:5432/events_test # new
events_db:
build:
context: ./services/events/app/db
dockerfile: Dockerfile
ports:
- 5435:5432
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
What is the executor type of your Gitlab Runner?
If you're using the Kubernetes executor, add this variable:
DOCKER_HOST: tcp://localhost:2375/
For non-Kubernetes executors, we use tcp://docker:2375/
DOCKER_HOST: tcp://docker:2375/
Also, the Gitlab Runner should be in "privileged" mode.
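As a concrete sketch for a non-Kubernetes (docker) executor, the variables block of the .gitlab-ci.yml above would only gain the DOCKER_HOST line; the existing values stay as they are:
variables:
  # point docker/docker-compose inside the job at the docker:dind service
  DOCKER_HOST: tcp://docker:2375/
  POSTGRES_DB: events_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  DATABASE_URL: postgres://postgres@postgres:5432/events_test
  FLASK_ENV: development
  APP_SETTINGS: app.config.TestingConfig
  DOCKER_COMPOSE_VERSION: 1.23.2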
More info:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#help-and-feedback
Hope that helps!

Get docker-compose up to only run certain containers

So I currently can use "docker-compose up test", which only runs my database and my testing scripts. I want to be able to use, say, "docker-compose up app" or something like that, which runs everything besides testing. That way I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible I'd appreciate some links to projects that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
webapp:
build: ./literate-app
command: nodemon -e vue,js,css start.js
depends_on:
- postgres
links:
- postgres
environment:
- DB_HOST=postgres
ports:
- "3000:3000"
networks:
- literate-net
server:
build: ./readability-server
command: nodemon -L --inspect=0.0.0.0:5555 server.js
networks:
- literate-net
redis_db:
image: redis:alpine
networks:
- literate-net
postgres:
restart: 'always'
#image: 'bitnami/postgresql:latest'
volumes:
- /bitnami
ports:
- "5432:5432"
networks:
- literate-net
environment:
- "FILLA_DB_USER=my_user"
- "FILLA_DB_PASSWORD=password123"
- "FILLA_DB_DATABASE=my_database"
- "POSTGRES_PASSWORD=password123"
build: './database-creation'
test:
image: node:latest
build: ./test
working_dir: /literate-app/test
volumes:
- .:/literate-app
command:
npm run mocha
networks:
- literate-net
depends_on:
- postgres
environment:
- DB_HOST=postgres
networks:
literate-net:
driver: bridge
I can run docker-compose up test, which only runs postgres and the tests. Though I'd like to be able to run just my app without having to run my testing container.
Edit
Thanks to @ideam for the link
I was able to create an additional yaml file for just testing.
For those that don't want to look it up: simply create a new yaml file, like so:
docker-compose.dev.yml
Replace dev with whatever you like, except override, which docker-compose up picks up automatically unless told otherwise.
To run the new file simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a specific compose file to run. You can also pass several files at once to combine them for different environments, as shown in the sketch below.
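For example (assuming a base docker-compose.yml next to the docker-compose.dev.yml above; when several -f flags are given, later files extend and override earlier ones):
# base configuration only
docker-compose up
# base configuration with the additional dev/test file layered on top
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up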
Appreciate the help
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also name multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, therefore you can start only the last containers of the chain. For example, take the following compose file:
version: '3.7'
services:
frontend:
image: efrat19/vuejs
ports:
- "80:8080"
depends_on:
- backend
backend:
image: nginx:alpine
depends_on:
- fpm
fpm:
image: php:7.2
testing:
image: hze∂ƒxhbd
depends_on:
- frontend
All the services are chained via the depends_on option, with the testing container sitting at the end of the chain after the frontend. So when you hit docker-compose up frontend, Docker will run fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made just for that! With profiles you can mark services to be started only in specific profiles:
services:
webapp:
# ...
server:
# ...
redis_db:
# ...
postgres:
# ...
test:
profiles: ["test"]
# ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations have a look at https://docs.docker.com/compose/extends/#example-use-case which explains the combination of multiple configuration files for reuse of configs for different use cases (test, production, etc.)

How to move a docker-compose environment to other computer

I am developing a service using docker-compose and I deploy the containers to a remote host using these commands:
eval $(docker-machine env digitaloceanserver)
docker-compose build && docker-compose stop && docker-compose rm -f && docker-compose up -d
My problem is that I'm changing laptops. I exported the docker-machines to the new laptop and I can activate them.
But when I try to deploy new changes, it raises these errors:
Creating postgres ... error
Creating redis    ... error

ERROR: for postgres  Cannot create container for service postgres: b'Conflict. The container name "/postgres" is already in use by container "612f3887544224ae79f67e29552b4d97e246104b8a057b3a03d39f6546dbbd38". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: for redis  Cannot create container for service redis: b'Conflict. The container name "/redis" is already in use by container "01875947f0ce7ba3978238525923e54e0c800fa0a4b419dd2a28cc07c285eb78". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: Encountered errors while bringing up the project.
My docker-compose.yml is this:
services:
nginx:
build: './docks/nginx/.'
ports:
- '80:80'
- "443:443"
volumes:
- letsencrypt_certs:/etc/nginx/certs
- letsencrypt_www:/var/www/letsencrypt
volumes_from:
- web:ro
depends_on:
- web
letsencrypt:
build: './docks/certbot/.'
command: /bin/true
volumes:
- letsencrypt_certs:/etc/letsencrypt
- letsencrypt_www:/var/www/letsencrypt
web:
build: './sources/.'
image: 'websource'
ports:
- '127.0.0.1:8000:8000'
env_file: '.env'
command: 'gunicorn cuidum.wsgi:application -w 2 -b :8000 --reload --capture-output --enable-stdio-inheritance --log-level=debug --access-logfile=- --log-file=-'
volumes:
- 'cachedata:/cache'
- 'mediadata:/media'
depends_on:
- postgres
- redis
celery_worker:
image: 'websource'
env_file: '.env'
command: 'python -m celery -A cuidum worker -l debug'
volumes_from:
- web
depends_on:
- web
celery_beat:
container_name: 'celery_beat'
image: 'websource'
env_file: '.env'
command: 'python -m celery -A cuidum beat --pidfile= -l debug'
volumes_from:
- web
depends_on:
- web
postgres:
container_name: 'postgres'
image: 'mdillon/postgis'
ports:
- '127.0.0.1:5432:5432'
volumes:
- 'pgdata:/var/lib/postgresql/data/'
redis:
container_name: 'redis'
image: 'redis:3.2.0'
ports:
- '127.0.0.1:6379:6379'
volumes:
- 'redisdata:/data'
volumes:
pgdata:
redisdata:
cachedata:
mediadata:
staticdata:
letsencrypt_certs:
letsencrypt_www:
You’re seeing those errors because you’re explicitly setting container_name:, and those same container names are used elsewhere. Remove those explicit settings. (You don’t need them even for inter-container DNS; Docker Compose automatically creates an alias for you using the name of the service block.)
There are still potential issues from port conflicts. If your other PostgreSQL container is listening on the same (default) host port 5432 then the one you declare in this docker-compose.yml file will conflict with it. You might be able to just not expose your database container ports, or you might need to change the port numbers in this file.
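For example, a minimal sketch of those two services with the explicit names removed and the host port bindings dropped (the other services still reach them as postgres and redis over the Compose network; keep the ports only if you need to reach the databases from the host, and pick free host ports in that case):
postgres:
  # no container_name: Compose generates a unique one, and the service
  # is still reachable as "postgres" from the other containers
  image: 'mdillon/postgis'
  volumes:
    - 'pgdata:/var/lib/postgresql/data/'
redis:
  image: 'redis:3.2.0'
  volumes:
    - 'redisdata:/data'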