I have a problem deploying GitLab Runner on my infrastructure with Docker Compose.
I want to register my gitlab-runner automatically, but when I start my compose everything looks fine, and then my container is destroyed.
Here is my compose file:
version: '3.6'
services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    container_name: gitlab-runner
    restart: 'no'
    depends_on:
      - gitlab
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /myrepository/gitlab-runner:/etc/gitlab-runner
    command:
      - register
      - --non-interactive
      - --url=MY_GITLAB_URL
      - --registration-token=MY_TOKEN
      - --executor=docker
      - --docker-image=ruby:2.7
      - --name=myrunner
      - --docker-pull-policy=always
      - --locked=false
      - --run-untagged=false
      - --docker-privileged=false
      - --limit=0
      - --tag-list=general,test
    networks:
      - gitlab
If I launch my runner without the command block in my docker-compose, it's OK: the container stays alive and I can docker exec gitlab-runner register without losing my container.
If I launch my docker-compose with the command block, the container registers a new runner (I can see the runner created on my GitLab), but the gitlab-runner container is destroyed right away.
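For reference, the manual registration that works looks roughly like this (a sketch; the URL and token are placeholders):

docker exec -it gitlab-runner \
  gitlab-runner register --non-interactive --url=MY_GITLAB_URL --registration-token=MY_TOKEN --executor=docker --docker-image=ruby:2.7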
Do you have any explanation for this, and a solution?
Thanks
I guess the container is destroyed as soon as the command ends, because that's the way containers work: you must have a process running inside the container if you want it to stay alive.
Try something like this:
tty: true
# override the image entrypoint so the shell script below runs directly
entrypoint: ["/bin/sh", "-c"]
command:
  - |
    gitlab-runner register --non-interactive --url=MY_GITLAB_URL --registration-token=MY_TOKEN --executor=docker --docker-image=ruby:2.7 --name=myrunner --docker-pull-policy=always --locked=false --run-untagged=false --docker-privileged=false --limit=0 --tag-list=general,test &&
    sleep infinity
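If the runner should not only register but also pick up jobs, the register step can be chained with the runner's normal run command instead of sleep infinity. A rough sketch of the shell part (flags trimmed, adjust to your setup):

gitlab-runner register --non-interactive --url=MY_GITLAB_URL --registration-token=MY_TOKEN --executor=docker --docker-image=ruby:2.7 --name=myrunner &&
exec gitlab-runner run --user=gitlab-runner --working-directory=/home/gitlab-runner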
More info:
Docker Compose keep container running
How chain sleep command in docker compose?
I'm running Rundeck from Docker (default backend), but I noticed there are no logs. This documentation seems incomplete / not valid for the Docker deployment: https://docs.rundeck.com/docs/administration/maintenance/logs.html
All the logs inside the container at /home/rundeck/server/logs have 0 size.
How can I review the logs when running in Docker?
Thanks,
The execution logs are stored at the /home/rundeck/var/logs/rundeck path, so a good idea is to mount it as a volume (to see them in your local filesystem). Take a look at this docker-compose example:
version: '3'
services:
  rundeck:
    image: rundeck/rundeck:4.2.1
    environment:
      RUNDECK_GRAILS_URL: http://localhost:4440
      RUNDECK_DATABASE_DRIVER: org.mariadb.jdbc.Driver
      RUNDECK_DATABASE_USERNAME: rundeck
      RUNDECK_DATABASE_PASSWORD: rundeck
      RUNDECK_DATABASE_URL: jdbc:mariadb://mysql/rundeck?autoReconnect=true&useSSL=false&allowPublicKeyRetrieval=true
      RUNDECK_LOGGING_STRATEGY: FILE
    volumes:
      - ./data/logs/:/home/rundeck/var/logs/rundeck/
    ports:
      - 4440:4440
    tty: true
  mysql:
    image: mysql:8
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=rundeck
      - MYSQL_USER=rundeck
      - MYSQL_PASSWORD=rundeck
The service.log is available via the docker logs command; to see it, just run docker logs <container_id> -f.
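For example, with the compose file above, the execution logs show up in the mounted directory and the service log can be followed directly (a small sketch):

ls ./data/logs/                 # execution logs from the mounted volume
docker logs -f <container_id>   # service.log output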
I'd like to run some integration tests against a real database, but I fail to start an additional container (for the DB), because I need to mount a config file from my repo before it starts up.
This is how I use the database on my local computer (docker-compose):
gremlin-server:
  image: tinkerpop/gremlin-server:3.5
  container_name: 'gremlin-server'
  entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
  networks:
    - graphdb_net
  ports:
    - 8182:8182
  volumes:
    - ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
    - ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
I guess I cannot use a service container, as the code is not available at the time the service container is started, so it won't pick up my configuration.
That's why I tried to run a container from within my job using --network host (see below). The container seems to be running fine, but I'm still not able to curl it.
- name: Start DB for tests
  run: |
    docker run -d \
      --network host \
      -v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
      -v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
      tinkerpop/gremlin-server:3.5

- name: Test connection
  run: |
    curl "localhost:8182/gremlin?gremlin=g.V().valueMap()"
According to the documentation about the job context, the id of the container network should be available ({{job.container.network}}), but it is empty if you don't use any job-level service or container.
Any ideas what I could try next?
This is what I ended up with: I'm now using docker-compose to run the integration tests (on my local computer as well as on GitHub Actions). I'm just mounting the entire directory/repo in the test container. Pulling node:14-slim delays the build by a few seconds, but I guess it's still the best option:
version: "3.2"
services:
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./data/:/opt/gremlin-server/data/
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
- ./conf/initData.groovy:/opt/gremlin-server/scripts/initData.groovy
test:
image: node:14-slim
working_dir: /app
depends_on:
- gremlin-server
networks:
- graphdb_net
volumes:
- ../:/app
environment:
- NEPTUNE_CONNECTION_STRING=ws://gremlin-server:8182
command:
yarn test
networks:
graphdb_net:
driver: bridge
and I'm running them like this in my workflow:
- name: Spin up test environment
  run: |
    docker compose -f dev/docker-compose.yaml pull
    docker compose -f dev/docker-compose.yaml build

- name: Run tests
  run: |
    docker compose -f dev/docker-compose.yaml run test
It's based on @DannyB's suggestion and his answer here, so all props go to him.
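If needed, a teardown step can clean up afterwards; and since docker compose run returns the exit code of the test command, the "Run tests" step fails automatically when the tests fail. A sketch using the same compose file:

docker compose -f dev/docker-compose.yaml down -v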
So currently I can use docker-compose up test, which only runs my database and my testing scripts. I want to be able to say docker-compose up app, or something like that, which runs everything besides the tests. That way I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible, I'd appreciate some links to examples that already do that, and I can figure out the rest. Basically, can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net

  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net

  redis_db:
    image: redis:alpine
    networks:
      - literate-net

  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'

  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command:
      npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres

networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs postgres and the test container. Though I'd like to be able to just run my app without having to run my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional YAML file just for testing.
For those that don't want to look it up, simply create a new YAML file like so:
docker-compose.dev.yml
Replace dev with whatever you like, besides override, which causes docker-compose up to pick that file up automatically unless otherwise specified.
To run the new file, simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a certain file to run. You can use multiple files to have different environments set up.
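For example, to layer a base file with an environment-specific one (later files override and extend the earlier ones):

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up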
Appreciate the help
docker-compose up <service_name> will start only the service you have specified and its dependencies (those listed in its depends_on option).
You may also specify multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, therefore you can start only the last containers of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained via the depends_on option, while the testing container hangs below the frontend. So when you hit docker-compose up frontend, Docker will run fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made just for that! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains how to combine multiple configuration files to reuse configs for different use cases (test, production, etc.).
I am using the docker-compose up command to spin up a few containers on an AWS AMI RHEL 7.6 instance. I observe that whichever containers have a volume mount exit with status Exited (1) immediately after starting, while the remaining containers stay up. I tried using tty: true and stdin_open: true, but it didn't help. Surprisingly, the setup works fine on another instance, which is basically what I am trying to replicate on this new one.
The stopped containers are the Fabric v1.2 peers, CAs and orderer.
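For reference, the exit reason of a stopped container can be checked with the usual Docker commands (the container name comes from the compose file below):

docker ps -a --filter status=exited
docker logs ca_peerorg1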
docker-compose.yml file, which is in the root folder where I run the docker-compose up command:
version: '2.1'

networks:
  gcsbc:
    name: gcsbc

services:
  ca.org1.example.com:
    extends:
      file: fabric/docker-compose.yml
      service: ca.org1.example.com
fabric/docker-compose.yml
version: '2.1'

networks:
  gcsbc:

services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    ports:
      - '7054:7054'
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerorg1
    networks:
      - gcsbc
    hostname: ca.org1.example.com
I have a main service in my docker-compose file that uses postgres's image and, though I seem to be successfully connecting to the database, the data that I'm writing to it is not being kept beyond the lifetime of the container (what I did is based on this tutorial).
Here's my docker-compose file:
main:
  build: .
  volumes:
    - .:/code
  links:
    - postgresdb
  command: python manage.py insert_into_database
  environment:
    - DEBUG=true

postgresdb:
  build: utils/sql/
  volumes_from:
    - postgresdbdata
  ports:
    - "5432"
  environment:
    - DEBUG=true

postgresdbdata:
  build: utils/sql/
  volumes:
    - /var/lib/postgresql
  command: true
  environment:
    - DEBUG=true
and here's the Dockerfile I'm using for the postgresdb and postgresdbdata services (which essentially creates the database and adds a user):
FROM postgres
ADD make-db.sh /docker-entrypoint-initdb.d/
How can I get the data to stay after the main service has finished running, in order to be able to use it in the future (such as when I call something like python manage.py retrieve_from_database)? Is /var/lib/postgresql even the right directory, and would boot2docker have access to it given that it's apparently limited to /Users/?
Thank you!
The problem is that Compose creates a new version of the postgresdbdata container each time it restarts, so the old container and its data get lost.
A secondary issue is that your data container shouldn't actually be running; data containers are really just a namespace for a volume that can be imported with --volumes-from, which still works with stopped containers.
For the time being the best solution is to take the postgresdbdata container out of the Compose config. Do something like:
$ docker run --name postgresdbdata postgresdb echo "Postgres data container"
Postgres data container
The echo command will run and the container will exit, but as long as you don't docker rm it, you will still be able to refer to it in --volumes-from and your Compose application should work fine.
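For example, the volume from the stopped data container can still be mounted elsewhere (a sketch using the base image and path from the setup above):

docker run --rm --volumes-from postgresdbdata postgres ls /var/lib/postgresql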