Getting an error while running Hyperledger using docker-compose - docker-compose

Actually, I want to set up a Hyperledger Fabric network using a Fabric CA, but my CA container exits every time I bring up docker-compose.
I found the root cause: the file referenced by "FABRIC_CA_SERVER_CA_KEYFILE" changes every time, whereas I have set it manually:
" - FABRIC_CA_SERVER_CA_KEYFILE=/var/hyperledger/fabric-ca-server-config/bac7a78da8de5b5071c62cefa3ada1c978dadcce333cb92b5ea9e7c3462ca477_sk"
The part that changes is "bac7a78da8de5b5071c62cefa3ada1c978dadcce333cb92b5ea9e7c3462ca477_sk".
How can I replace it with some variable in the docker-compose file?
My docker-compose file:
version: '2'
networks:
  dfarm:
services:
  # CA
  ca.dfarmadmin.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/var/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.dfarmadmin.com
      - FABRIC_CA_SERVER_CA_CERTFILE=/var/hyperledger/fabric-ca-server-config/localhost-7054.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/var/hyperledger/fabric-ca-server-config/bac7a78da8de5b5071c62cefa3ada1c978dadcce333cb92b5ea9e7c3462ca477_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw'
    volumes:
      - ${PWD}/caserver/admin/msp/cacerts:/var/hyperledger/fabric-ca-server-config
    container_name: ca.dfarmadmin.com
    networks:
      - dfarm
  # Orderer
  orderer.dfarmadmin.com:
    container_name: orderer.dfarmadmin.com
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      - FABRIC_CFG_PATH=/var/hyperledger/config
      # - ORDERER_GENERAL_LOGLEVEL=DEBUG
      - FABRIC_LOGGING_SPEC=INFO
      - ORDERER_GENERAL_LISTENADDRESS=orderer.dfarmadmin.com
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/genesis/dfarm-genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/msp
      - ORDERER_FILELEDGER_LOCATION=/var/ledger
    working_dir: $HOME
    command: orderer
    volumes:
      # Folder with genesis block
      - ${PWD}/config/orderer:/var/hyperledger/genesis
      # Map the folder with MSP for orderer
      - ${PWD}/client/orderer/orderer/msp:/var/hyperledger/msp
      # Map the current folder to cfg
      - ${PWD}/config/orderer:/var/hyperledger/config
      - ${HOME}/ledgers/ca/orderer.dfarmadmin.com:/var/ledger
    ports:
      - 7050:7050
    networks:
      - dfarm
  # Dfarmadmin peer1
  dfarmadmin-peer1.dfarmadmin.com:
    container_name: dfarmadmin-peer1.dfarmadmin.com
    image: hyperledger/fabric-peer:$IMAGE_TAG
    environment:
      - FABRIC_CFG_PATH=/var/hyperledger/config
      # - CORE_LOGGING_LEVEL=debug
      - FABRIC_LOGGING_SPEC=DEBUG
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_PEER_ID=dfarmadmin-peer1.dfarmadmin.com
      # - CORE_PEER_LISTENADDRESS=dfarmretail-peer1.dfarmretail.com:7051
      - CORE_PEER_ADDRESS=dfarmadmin-peer1.dfarmadmin.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=dfarmadmin-peer1.dfarmadmin.com:7051
      # - CORE_PEER_ADDRESS=0.0.0.0:7051
      # - CORE_PEER_GOSSIP_EXTERNALENDPOINT=0.0.0.0:7051
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_PEER_TLS_ENABLED=false
      # - CORE_PEER_GOSSIP_USELEADERELECTION=true
      # - CORE_PEER_GOSSIP_ORGLEADER=false
      # - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_FILESYSTEMPATH=/var/ledger
    working_dir: $HOME
    # command: peer node start --peer-chaincodedev=true
    command: peer node start
    volumes:
      # Folder with channel create tx file
      - ${PWD}/config:/var/hyperledger/channeltx
      # Map the folder with MSP for Peer
      - ${PWD}/client/dfarmadmin/peer1/msp:/var/hyperledger/msp
      # Map the current folder to cfg
      - ${PWD}/config:/var/hyperledger/config
      - /var/run/:/host/var/run/
      # Ledger folder for the peer
      - ${HOME}/ledgers/ca/dfarmadmin-peer1.dfarmadmin.com/:/var/ledger
    depends_on:
      - orderer.dfarmadmin.com
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    networks:
      - dfarm
  # Dfarmretail peer1
  dfarmretail-peer1.dfarmretail.com:
    container_name: dfarmretail-peer1.dfarmretail.com
    image: hyperledger/fabric-peer:$IMAGE_TAG
    environment:
      - FABRIC_CFG_PATH=/var/hyperledger/config
      # - CORE_LOGGING_LEVEL=debug
      - FABRIC_LOGGING_SPEC=INFO
      - CORE_CHAINCODE_LOGGING_LEVEL=info
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_PEER_ID=dfarmretail-peer1.dfarmretail.com
      - CORE_PEER_ADDRESS=dfarmretail-peer1.dfarmretail.com:8051
      # - CORE_PEER_LISTENADDRESS=dfarmretail-peer1.dfarmretail.com:8051
      - CORE_PEER_LISTENADDRESS=dfarmretail-peer1.dfarmretail.com:8051
      - CORE_PEER_CHAINCODELISTENADDRESS=dfarmretail-peer1.dfarmretail.com:8052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=dfarmretail-peer1.dfarmretail.com:8051
      - CORE_PEER_LOCALMSPID=DfarmretailMSP
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_PEER_TLS_ENABLED=false
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # - CORE_PEER_GOSSIP_USELEADERELECTION=true
      # - CORE_PEER_GOSSIP_ORGLEADER=false
      # - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_FILESYSTEMPATH=/var/ledger
    working_dir: $HOME
    # command: peer node start --peer-chaincodedev=true
    command: peer node start
    volumes:
      # Folder with channel create tx file
      - ${PWD}/config:/var/hyperledger/channeltx
      # Map the folder with MSP for Peer
      - ${PWD}/client/dfarmretail/peer1/msp:/var/hyperledger/msp
      # Map the current folder to cfg
      - ${PWD}/config:/var/hyperledger/config
      - /var/run/:/host/var/run/
      # Ledger folder for the peer
      - ${HOME}/ledgers/ca/dfarmretail-peer1.dfarmretail.com:/var/ledger
    depends_on:
      - orderer.dfarmadmin.com
    ports:
      - 8051:8051
      - 8052:8052
      - 8053:8053
    networks:
      - dfarm
  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 5984:5984
    networks:
      - dfarm
  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=info
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=dfarmadmin-peer1.dfarmadmin.com:7051
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      # - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_CHAINCODE_KEEPALIVE=10
    working_dir: $HOME
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    networks:
      - dfarm
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - couchdb

There is a file called docker-template.yaml in fabric-samples/first-network/; check it. One template file acts as a backup, while the docker-compose.yaml file gets regenerated with the new certificates. If you still face an error, let me know and I can help you from there 😊
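In other words, you keep a template with a placeholder and fill in the generated key file name before bringing the network up. A minimal sketch of the same idea using docker-compose variable substitution, assuming the host paths from your compose file (the CA_KEYFILE variable name is my own, not a Fabric convention):

# assumes exactly one _sk file was generated this run
export CA_KEYFILE=$(basename caserver/admin/msp/cacerts/*_sk)
docker-compose up -d

with the environment entry in the compose file changed to:

      - FABRIC_CA_SERVER_CA_KEYFILE=/var/hyperledger/fabric-ca-server-config/${CA_KEYFILE}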

Related

volume mounting clickhouse docker image to override config.xml to connect with tabix

clickhouse:
  build: ./db/clickhouse
  restart: unless-stopped
  volumes:
    # Store data to HDD
    - ./clickhouse-data:/var/lib/clickhouse/
    # Base Clickhouse cfg
    - ./clickhouse/config.xml:/etc/clickhouse-server/config.xml
    - ./clickhouse/users.xml:/etc/clickhouse-server/users.xml
  ports:
    - "8123:8123" # for http clients
    - "9000:9000" # for console client
  environment:
    - CLICKHOUSE_USER=oussema
    - CLICKHOUSE_PASSWORD=root
    - CLICKHOUSE_DB=DWH
    - CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
  ulimits:
    nofile:
      soft: 262144
      hard: 262144
tabix:
  image: spoonest/clickhouse-tabix-web-client
  ports:
    - "8080:80"
  depends_on:
    - clickhouse
  restart: unless-stopped
  environment:
    - CH_NAME=clickhouse
    - CH_HOST=https://127.0.0.1:8123
    - CH_LOGIN=oussema
    - CH_PASSWORD=root
Here is a working example for test purposes:
docker-compose.yml
version: "3.0"
services:
clickhouse:
image: yandex/clickhouse-server
ports:
- "8123:8123"
healthcheck:
test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
interval: 2s
timeout: 2s
retries: 16
environment:
- CLICKHOUSE_USER=default
- CLICKHOUSE_PASSWORD=12345
tabix:
image: spoonest/clickhouse-tabix-web-client
ports:
- "8080:80"
depends_on:
- clickhouse
restart: unless-stopped
environment:
- CH_NAME=clickhouse
- CH_HOST=http://localhost:8123
- CH_LOGIN=default
- CH_PASSWORD=12345
To run it:
# run container
docker compose up
# browse the tabix endpoint
http://localhost:8080/
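Note that the plain depends_on above only waits for the ClickHouse container to start, not for the server to answer; the healthcheck pays off if you gate tabix on it. A minimal sketch, assuming a recent Docker Compose that supports condition-based depends_on:

  tabix:
    image: spoonest/clickhouse-tabix-web-client
    depends_on:
      clickhouse:
        condition: service_healthy   # wait until the /ping healthcheck passes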

Add route in docker compose

I have a VM with docker containers in a cloud.
It has 2 containers: wireguard and redmine.
I have LDAP authorization in redmine.
The LDAP server is located in a private LAN (behind NAT), and I have a VPN to this LAN via wireguard.
I need to add a route in the redmine container so that redmine has access to the private LAN via the wireguard container.
Right now I do it by hand: after the containers start, I run docker-compose exec redmine ip route add 192.168.42.0/23 via 172.20.0.50
Could you advise me how to implement this in my pipeline?
P.S. The redmine container already has entrypoint and cmd directives in its Dockerfile.
version: '3.9'
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - ./wireguard-config:/config
      - /lib/modules:/lib/modules
    networks:
      default:
        ipv4_address: 172.20.0.50
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1 # for clients mode
    restart: unless-stopped
  postgres:
    image: postgres:14.2-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_PASSWORD=MySUperSecret'
      - 'POSTGRES_DB=redmine'
  redmine:
    image: redmine:5.0.1-alpine
    cap_add:
      - NET_ADMIN
    volumes:
      - redmine-files:/usr/src/redmine/files
      - ./redmine-plugins:/usr/src/redmine/plugins
      - ./configuration.yml:/usr/src/redmine/config/configuration.yml
    ports:
      - 80:3000
    depends_on:
      - postgres
    environment:
      - 'REDMINE_DB_POSTGRES=postgres'
      - 'REDMINE_DB_DATABASE=redmine'
      - 'REDMINE_DB_PASSWORD=MySUperSecret'
      - 'REDMINE_PLUGINS_MIGRATE=true'
    restart: unless-stopped
networks:
  default:
    ipam:
      config:
        - subnet: 172.20.0.0/24
volumes:
  postgres-data:
  redmine-files:
I solved my problem:
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:
      - 3000:3000
    environment:
      - TZ=Europe/Moscow
    volumes:
      - ./wireguard-config:/config
      - /lib/modules:/lib/modules
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1 # for clients mode
    restart: unless-stopped
  postgres:
    image: postgres:14.2-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_PASSWORD=MySUperSecret'
      - 'POSTGRES_DB=redmine'
  redmine:
    image: redmine:5.0.2-alpine
    network_mode: service:wireguard
    volumes:
      - redmine-files:/usr/src/redmine/files
      - ./redmine-plugins:/usr/src/redmine/plugins
      - ./configuration.yml:/usr/src/redmine/config/configuration.yml
    # ports:
    #   - 80:3000
    depends_on:
      - postgres
    environment:
      - 'REDMINE_DB_POSTGRES=postgres'
      - 'REDMINE_DB_DATABASE=redmine'
      - 'REDMINE_DB_PASSWORD=MySUperSecret'
      - 'REDMINE_PLUGINS_MIGRATE=true'
    restart: unless-stopped
volumes:
  postgres-data:
  redmine-files:
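For reference, the original ask (adding the route at startup instead of sharing the network namespace) can also be done without a custom image. A rough sketch, assuming the static-IP network config from the first compose file and the official redmine image's entrypoint (/docker-entrypoint.sh followed by rails server -b 0.0.0.0; check your image before relying on this):

  redmine:
    image: redmine:5.0.1-alpine
    cap_add:
      - NET_ADMIN   # required for `ip route add` inside the container
    entrypoint:
      - sh
      - -c
      # add the route, then hand off to the image's normal entrypoint and cmd
      - ip route add 192.168.42.0/23 via 172.20.0.50 && exec /docker-entrypoint.sh rails server -b 0.0.0.0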

AirFlow 1.10: the scheduler does not appear to be running

I run AirFlow on my local machine with docker-compose:
version: '2'
services:
  postgresql:
    image: bitnami/postgresql:10
    volumes:
      - postgresql_data:/bitnami/postgresql
    environment:
      - POSTGRESQL_DATABASE=bitnami_airflow
      - POSTGRESQL_USERNAME=bn_airflow
      - POSTGRESQL_PASSWORD=bitnami1
      - ALLOW_EMPTY_PASSWORD=yes
  redis:
    image: bitnami/redis:5.0
    volumes:
      - redis_data:/bitnami
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  airflow-scheduler:
    image: bitnami/airflow-scheduler:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_scheduler_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow-worker:
    image: bitnami/airflow-worker:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_worker_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow:
    image: bitnami/airflow:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_USERNAME=user
      - AIRFLOW_PASSWORD=password
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=yes
    ports:
      - '8080:8080'
    volumes:
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
volumes:
  airflow_scheduler_data:
    driver: local
  airflow_worker_data:
    driver: local
  airflow_data:
    driver: local
  postgresql_data:
    driver: local
  redis_data:
    driver: local
But when I sign in to the UI, I see:
"The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled."
Why? I use official docker images, so there should be no problem with them.
Another problem: unless I toggle AIRFLOW_LOAD_EXAMPLES between yes and no and restart docker-compose, I don't see an updated DAG list.
When I used Puckel's docker-compose for AirFlow 1, everything worked: https://github.com/puckel/docker-airflow/blob/master/README.md

Launching more containers with a Traefik+Odoo+Postgres .yml

I'm using a .yml to launch an Odoo instance and its PostgreSQL behind Traefik, and it works perfectly fine: it assigns the domain and subdomain with a Let's Encrypt certificate, perfect.
My issue now is that I wish to launch more of these, but I haven't succeeded in doing so.
I've tried:
- Changing the ports: in the odoo section
- Removing the ports: section from the odoo section
- Editing the ports: in the traefik section
The .yml:
version: "3"
services:
odoo:
image: odoo:12.0
depends_on:
- db
restart: unless-stopped
networks:
- internal
ports:
- "8069:8069"
- "8072:8072"
environment:
- HOST=db
- USER=${ODOO_USER}
- PASSWORD=${ODOO_PASS}
volumes:
- ./odoo/odoo-web-data:/var/lib/odoo
- ./odoo/config:/etc/odoo
- ./odoo/addons:/mnt/extra-addons
- ./odoo/logs:/var/log/odoo
labels:
- 'traefik.http.routers.odoo.rule=Host(`${ODOO_TRAEFIK_URL}`)'
- 'traefik.http.routers.odoo.entrypoints=websecure'
- 'traefik.http.routers.odoo.tls.certresolver=odoo'
- 'traefik.port=8069'
- "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
- "traefik.http.routers.http-catchall.entrypoints=web"
- "traefik.http.routers.http-catchall.middlewares=redirect-to-https#docker"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
db:
image: postgres:10
restart: unless-stopped
networks:
- internal
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=${ODOO_USER}
- POSTGRES_PASSWORD=${ODOO_PASS}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- ./pgdata:/var/lib/postgresql/data/pgdata
traefik:
image: traefik:v2.0
networks:
- internal
- web
ports:
# The HTTP port
- "80:80"
- "443:443"
# The Web UI (enabled by --api.insecure=true)
- "8080:8080"
volumes:
- "./traefik/letsencrypt:/letsencrypt"
- "./traefik/traefik.yml:/etc/traefik.yml"
- "/var/run/docker.sock:/var/run/docker.sock"
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker"
- "--providers.docker.defaultRule=Host(`{{ trimPrefix `/` .Name }}.${TRAEFIK_DEFAULT_DOMAIN}`)"
- "--entryPoints.web.address=:80"
- "--entryPoints.websecure.address=:443"
- "--certificatesResolvers.odoo.acme.httpchallenge=true"
- "--certificatesresolvers.odoo.acme.httpchallenge.entrypoint=web"
- "--certificatesresolvers.odoo.acme.email=${ACME_EMAIL}"
- "--certificatesresolvers.odoo.acme.storage=/letsencrypt/acme.json"
networks:
internal:
web:
external: true
Assuming I already have Traefik + Odoo + Postgres running, how could I go about launching more instances of these together?

Auto-register GitLab runner

I have a docker-compose.yml file that sets up Gitlab, Container Registry and a Gitlab Runner.
version: '2'
services:
  redis:
    restart: always
    image: sameersbn/redis:latest
    command:
      - --loglevel warning
    volumes:
      - redis:/var/lib/redis:Z
  postgresql:
    restart: always
    image: sameersbn/postgresql:9.5-3
    volumes:
      - postgresql:/var/lib/postgresql:Z
    environment:
      - DB_USER=gitlab
      - DB_PASS=password
      - DB_NAME=gitlabhq_production
      - DB_EXTENSION=pg_trgm
  gitlab:
    restart: always
    image: sameersbn/gitlab:10.1.1
    volumes:
      - gitlab-data:/home/git/data:Z
      - gitlab-logs:/var/log/gitlab
      - ./certs:/certs
    depends_on:
      - redis
      - postgresql
    ports:
      - "80:80"
      - "2222:22"
    external_links:
      - "registry:registry"
    environment:
      - DEBUG=false
      - DB_ADAPTER=postgresql
      - DB_HOST=postgresql
      - DB_PORT=5432
      - DB_USER=gitlab
      - DB_PASS=password
      - DB_NAME=gitlabhq_production
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - GITLAB_HTTPS=false # <---
      - SSL_SELF_SIGNED=true # <---
      - GITLAB_HOST=192.168.99.100 # <---
      - GITLAB_PORT=80
      - GITLAB_SSH_PORT=2222
      - GITLAB_SHELL_SSH_PORT=2222
      - GITLAB_RELATIVE_URL_ROOT=
      - GITLAB_SECRETS_DB_KEY_BASE=secret
      - GITLAB_SECRETS_SECRET_KEY_BASE=secret
      - GITLAB_SECRETS_OTP_KEY_BASE=secret
      - GITLAB_REGISTRY_ENABLED=true
      - GITLAB_REGISTRY_HOST=localhost # <---
      - GITLAB_REGISTRY_PORT=4567
      - GITLAB_REGISTRY_API_URL=https://localhost:4567/ # Internal address to the registry, will be used by GitLab to directly communicate with API.
      - GITLAB_REGISTRY_CERT_PATH=/certs/localhost-auth.crt # <---
      - GITLAB_REGISTRY_KEY_PATH=/certs/localhost-auth.key # <---
  # Read here --> https://hub.docker.com/r/sameersbn/gitlab-ci-multi-runner/
  runner:
    restart: always
    image: gitlab/gitlab-runner:latest
    external_links:
      - "gitlab:gitlab" # <---
    environment:
      - CI_SERVER_URL=http://192.168.99.100:80/ci/
      - RUNNER_TOKEN=1XoJuQeyyN3EZxAt7pkn # < ------------------- different every time
      - RUNNER_DESCRIPTION=default_runner
      - RUNNER_EXECUTOR=shell
  registry:
    restart: always
    image: registry:2.4.1
    ports:
      - "4567:5000" # <---
    volumes:
      - registry-data:/var/lib/registry
      - ./certs:/certs
    external_links:
      - "gitlab:gitlab" # <---
    environment:
      - REGISTRY_LOG_LEVEL=info
      - REGISTRY_STORAGE_DELETE_ENABLED=true
      - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry
      - REGISTRY_AUTH_TOKEN_REALM=http://localhost/jwt/auth # <---
      - REGISTRY_AUTH_TOKEN_SERVICE=container_registry
      - REGISTRY_AUTH_TOKEN_ISSUER=localhost
      - REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/certs/localhost-auth.crt # <---
      - SSL_REGISTRY_KEY_PATH=/certs/localhost-auth.key # <---
      - SSL_REGISTRY_CERT_PATH=/certs/localhost-auth.crt # <---
      - REGISTRY_HTTP_TLS_CERTIFICATE=/certs/localhost-auth.crt # <---
      - REGISTRY_HTTP_TLS_KEY=/certs/localhost-auth.key # <---
      - REGISTRY_HTTP_SECRET=secret
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/opt/portainer:/data"
volumes:
  gitlab-data:
  gitlab-logs:
  postgresql:
  redis:
  registry-data:
The problem is that the runner is not registered, and I have to do it manually every time (I haven't succeeded yet, though). I would like it to be registered automatically with the auto-generated token, so that I [or any dev who uses the docker-compose.yml file] don't have to care about it.
I am trying to find a way to grab the token and feed it to the runner. Is that possible in any way?
You can either (1) mount your /etc/gitlab-runner directory and keep it persistent or (2) create an entrypoint script that registers the runner every time the container starts.
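A minimal sketch of option (1), which simply persists the registration across container restarts (the host directory name is my own):

  runner:
    restart: always
    image: gitlab/gitlab-runner:latest
    volumes:
      - ./runner-config:/etc/gitlab-runner   # config.toml, including the registration, survives restarts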
For option (2), you may have an entrypoint script like this:
#!/usr/bin/env bash
# entrypoint.sh

# Register this runner non-interactively, using the variables from the compose file.
gitlab-runner register \
  --non-interactive \
  --url "${CI_SERVER_URL}/" \
  --registration-token "${RUNNER_TOKEN}" \
  --executor "${RUNNER_EXECUTOR}" \
  --description "${RUNNER_DESCRIPTION}" \
  --config "/etc/gitlab-runner/config.toml"

# call the original gitlab-runner entrypoint with the CMD args
exec /usr/bin/dumb-init /entrypoint "$@"
And a Dockerfile for the runner like this:
FROM gitlab/gitlab-runner:v14.8.2
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
# Need to redefine the original CMD provided by the parent image after setting ENTRYPOINT
CMD ["run", "--user=gitlab-runner", "--working-directory=/home/gitlab-runner"]
This is just one way of expressing the solution. In principle, you don't need to custom-build the image -- you could make an equivalent entrypoint: key in your compose file and skip the custom dockerfile.
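For instance, a sketch of that compose-only variant (my own illustration: it bind-mounts the script above instead of baking it into an image, and reuses the variables already defined on the runner service):

  runner:
    restart: always
    image: gitlab/gitlab-runner:latest
    volumes:
      - ./entrypoint.sh:/docker-entrypoint.sh:ro
    entrypoint: ["/bin/bash", "/docker-entrypoint.sh"]
    # overriding the entrypoint clears the image's CMD, so it must be restated
    command: ["run", "--user=gitlab-runner", "--working-directory=/home/gitlab-runner"]
    environment:
      - CI_SERVER_URL=http://192.168.99.100:80/ci/
      - RUNNER_TOKEN=1XoJuQeyyN3EZxAt7pkn
      - RUNNER_DESCRIPTION=default_runner
      - RUNNER_EXECUTOR=shell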