Auto-register GitLab runner - docker-compose

I have a docker-compose.yml file that sets up Gitlab, Container Registry and a Gitlab Runner.
version: '2'
services:
  redis:
    restart: always
    image: sameersbn/redis:latest
    command:
      - --loglevel warning
    volumes:
      - redis:/var/lib/redis:Z
  postgresql:
    restart: always
    image: sameersbn/postgresql:9.5-3
    volumes:
      - postgresql:/var/lib/postgresql:Z
    environment:
      - DB_USER=gitlab
      - DB_PASS=password
      - DB_NAME=gitlabhq_production
      - DB_EXTENSION=pg_trgm
  gitlab:
    restart: always
    image: sameersbn/gitlab:10.1.1
    volumes:
      - gitlab-data:/home/git/data:Z
      - gitlab-logs:/var/log/gitlab
      - ./certs:/certs
    depends_on:
      - redis
      - postgresql
    ports:
      - "80:80"
      - "2222:22"
    external_links:
      - "registry:registry"
    environment:
      - DEBUG=false
      - DB_ADAPTER=postgresql
      - DB_HOST=postgresql
      - DB_PORT=5432
      - DB_USER=gitlab
      - DB_PASS=password
      - DB_NAME=gitlabhq_production
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - GITLAB_HTTPS=false # <---
      - SSL_SELF_SIGNED=true # <---
      - GITLAB_HOST=192.168.99.100 # <---
      - GITLAB_PORT=80
      - GITLAB_SSH_PORT=2222
      - GITLAB_SHELL_SSH_PORT=2222
      - GITLAB_RELATIVE_URL_ROOT=
      - GITLAB_SECRETS_DB_KEY_BASE=secret
      - GITLAB_SECRETS_SECRET_KEY_BASE=secret
      - GITLAB_SECRETS_OTP_KEY_BASE=secret
      - GITLAB_REGISTRY_ENABLED=true
      - GITLAB_REGISTRY_HOST=localhost # <---
      - GITLAB_REGISTRY_PORT=4567
      - GITLAB_REGISTRY_API_URL=https://localhost:4567/ # Internal address to the registry, will be used by GitLab to directly communicate with API.
      - GITLAB_REGISTRY_CERT_PATH=/certs/localhost-auth.crt # <---
      - GITLAB_REGISTRY_KEY_PATH=/certs/localhost-auth.key # <---
  # Read here --> https://hub.docker.com/r/sameersbn/gitlab-ci-multi-runner/
  runner:
    restart: always
    image: gitlab/gitlab-runner:latest
    external_links:
      - "gitlab:gitlab" # <---
    environment:
      - CI_SERVER_URL=http://192.168.99.100:80/ci/
      - RUNNER_TOKEN=1XoJuQeyyN3EZxAt7pkn # <--- different every time
      - RUNNER_DESCRIPTION=default_runner
      - RUNNER_EXECUTOR=shell
  registry:
    restart: always
    image: registry:2.4.1
    ports:
      - "4567:5000" # <---
    volumes:
      - registry-data:/var/lib/registry
      - ./certs:/certs
    external_links:
      - "gitlab:gitlab" # <---
    environment:
      - REGISTRY_LOG_LEVEL=info
      - REGISTRY_STORAGE_DELETE_ENABLED=true
      - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry
      - REGISTRY_AUTH_TOKEN_REALM=http://localhost/jwt/auth # <---
      - REGISTRY_AUTH_TOKEN_SERVICE=container_registry
      - REGISTRY_AUTH_TOKEN_ISSUER=localhost
      - REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/certs/localhost-auth.crt # <---
      - SSL_REGISTRY_KEY_PATH=/certs/localhost-auth.key # <---
      - SSL_REGISTRY_CERT_PATH=/certs/localhost-auth.crt # <---
      - REGISTRY_HTTP_TLS_CERTIFICATE=/certs/localhost-auth.crt # <---
      - REGISTRY_HTTP_TLS_KEY=/certs/localhost-auth.key # <---
      - REGISTRY_HTTP_SECRET=secret
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/opt/portainer:/data"
volumes:
  gitlab-data:
  gitlab-logs:
  postgresql:
  redis:
  registry-data:
The problem is that the runner is not registered, and I have to do it manually every time (I have not succeeded yet, either). I would like it to register automatically with the GitLab server using the auto-generated token, so that I [or any developer who uses the docker-compose.yml file] don't have to care about it.
I am trying to find a way to grab the token and feed it to the runner. Is this possible in any way?

You can either (1) mount your /etc/gitlab-runner directory and keep it persistent, or (2) create an entrypoint script that registers the runner every time the container starts.
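For option (1), the only change needed is a persistent volume on the runner's config directory. A minimal sketch (the volume name `runner-config` is illustrative):

```yaml
runner:
  restart: always
  image: gitlab/gitlab-runner:latest
  volumes:
    # config.toml written by a one-time `gitlab-runner register` survives restarts
    - runner-config:/etc/gitlab-runner
```

You still register once manually, but the resulting config.toml outlives the container.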
For example, you may have an entrypoint script like this:
#!/usr/bin/env bash
# entrypoint.sh
gitlab-runner register \
  --non-interactive \
  --url "${CI_SERVER_URL}/" \
  --registration-token "${RUNNER_TOKEN}" \
  --executor "${RUNNER_EXECUTOR}" \
  --description "${RUNNER_DESCRIPTION}" \
  --config "/etc/gitlab-runner/config.toml"

# call the original gitlab-runner entrypoint with the CMD args
exec /usr/bin/dumb-init /entrypoint "$@"
And a dockerfile for the runner like this:
FROM gitlab/gitlab-runner:v14.8.2
COPY entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
# Need to redefine original CMD provided by the parent image after setting ENTRYPOINT
CMD ["run", "--user=gitlab-runner", "--working-directory=/home/gitlab-runner"]
This is just one way of expressing the solution. In principle, you don't need to custom-build the image: you could set an equivalent entrypoint: key directly in your Compose file and skip the custom Dockerfile.
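A sketch of that Compose-only variant (note the `$$` so Compose passes the variables through to the container instead of interpolating them itself; the dumb-init invocation mirrors the parent image's CMD):

```yaml
runner:
  restart: always
  image: gitlab/gitlab-runner:latest
  entrypoint: ["/bin/bash", "-c"]
  command:
    - >
      gitlab-runner register --non-interactive
      --url "$${CI_SERVER_URL}/"
      --registration-token "$${RUNNER_TOKEN}"
      --executor "$${RUNNER_EXECUTOR}"
      --description "$${RUNNER_DESCRIPTION}"
      && exec /usr/bin/dumb-init /entrypoint run
      --user=gitlab-runner --working-directory=/home/gitlab-runner
```

The trade-off is readability: the logic now lives inline in the compose file rather than in a versioned script.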

Related

How To Pass Environment variables in Compose ( docker compose )

There are multiple parts of Compose that deal with environment variables in one sense or another. So how do I pass environment variables in Compose (docker-compose)?
According to the documentation, if you have multiple environment variables, you can substitute them by adding them to a default environment variable file named .env, or by providing a path to your environment variables file using the --env-file command line option.
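For example, a .env file placed next to the compose file (names and values purely illustrative) is picked up automatically and substituted into ${DB_USER} and friends below:

```
DB_USER=app
DB_PASSWORD=secret
DB_DATABASE=appdb
```

Alternatively, point Compose at a specific file: docker-compose --env-file ./config/.env.prod up.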
version: '3.9'
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  postgres:
    container_name: postgres
    image: postgres:13-alpine
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
    volumes:
      - ./pgdata:/var/lib/postgresql/data
      - ./database/app.sql:/docker-entrypoint-initdb.d/app.sql
    restart: always
    ports:
      - "35000:5432"
    networks:
      - app_network
  app-api:
    container_name: app-api
    build:
      dockerfile: Dockerfile
      context: ./app-api
      target: production
    environment:
      - DB_TYPE=${DATABASE_TYPE}
      - POSTGRES_HOST=${DB_HOST}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASS=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_PORT=${DB_PORT}
      - APP_PORT=${SERVER_PORT}
      - NODE_ENV=production
      ## AWS
      - AWS_S3_ACCESS_KEY=${AWS_S3_ACCESS_KEY}
      - AWS_S3_SECRET_ACCESS_KEY=${AWS_S3_SECRET_ACCESS_KEY}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - AWS_S3_REGION=${AWS_S3_REGION}
    ports:
      - "5050:80"
    volumes:
      - ./pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
      - PGADMIN_LISTEN_PORT=${PGADMIN_LISTEN_PORT}
    ports:
      - "5400:5400"
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
Alternatively, you can pass a literal value for each environment variable directly:
server:
  environment:
    - AWS_S3_ACCESS_KEY=ABCJQHWEQJHWQ
    - AWS_S3_SECRET_ACCESS_KEY=ASKJHDAKJHNAWKLHEN
    - AWS_S3_BUCKET=abc-text
    - AWS_S3_REGION=eu-west-1
  ports:
    - "5050:80"
  volumes:
    - ./pgadmin-data:/var/lib/pgadmin
  depends_on:
    - postgres
  links:
    - postgres
  networks:
    - app_network
When you run docker-compose up, the services defined above use these resolved values. You can verify that you are passing the proper environment variables with the convert command, which prints your resolved application config to the terminal:
$ docker compose convert
https://docs.docker.com/compose/environment-variables/

AirFlow 1.10: the scheduler does not appear to be running

I run AirFlow on a local machine with docker-compose:
version: '2'
services:
  postgresql:
    image: bitnami/postgresql:10
    volumes:
      - postgresql_data:/bitnami/postgresql
    environment:
      - POSTGRESQL_DATABASE=bitnami_airflow
      - POSTGRESQL_USERNAME=bn_airflow
      - POSTGRESQL_PASSWORD=bitnami1
      - ALLOW_EMPTY_PASSWORD=yes
  redis:
    image: bitnami/redis:5.0
    volumes:
      - redis_data:/bitnami
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  airflow-scheduler:
    image: bitnami/airflow-scheduler:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_scheduler_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow-worker:
    image: bitnami/airflow-worker:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_LOAD_EXAMPLES=no
    volumes:
      - airflow_worker_data:/bitnami
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
  airflow:
    image: bitnami/airflow:1
    environment:
      - AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - AIRFLOW_DATABASE_NAME=bitnami_airflow
      - AIRFLOW_DATABASE_USERNAME=bn_airflow
      - AIRFLOW_DATABASE_PASSWORD=bitnami1
      - AIRFLOW_USERNAME=user
      - AIRFLOW_PASSWORD=password
      - AIRFLOW_EXECUTOR=CeleryExecutor
      - AIRFLOW_LOAD_EXAMPLES=yes
    ports:
      - '8080:8080'
    volumes:
      - ./airflow/dags:/opt/bitnami/airflow/dags
      - ./airflow/plugins:/opt/bitnami/airflow/plugins
volumes:
  airflow_scheduler_data:
    driver: local
  airflow_worker_data:
    driver: local
  airflow_data:
    driver: local
  postgresql_data:
    driver: local
  redis_data:
    driver: local
But when I sign in to the UI, I see:
"The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled."
Why?
And another problem: unless I switch AIRFLOW_LOAD_EXAMPLES between yes and no and restart docker-compose, I don't see an updated DAGs list.
When I used the AirFlow 1 docker-compose by Puckel, everything worked: https://github.com/puckel/docker-airflow/blob/master/README.md
I use the official Docker images and there is no problem with this.

Setting up localstack resources in docker compose file results in connection aborted failure

I have a docker compose file that looks like the following:
version: "3"
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4567-4597:4567-4597"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=s3
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/private${TMPDIR}:/tmp/localstack"
    networks:
      - my_awesome_network
  setup-resources:
    image: mesosphere/aws-cli
    volumes:
      - ./dev_env:/project/dev_env
    environment:
      - AWS_ACCESS_KEY_ID=dummyaccess
      - AWS_SECRET_ACCESS_KEY=dummysecret
      - AWS_DEFAULT_REGION=us-east-1
    entrypoint: /bin/sh -c
    command: >
      "
      sleep 10;
      # aws kinesis create-stream --endpoint-url=http://localstack:4568 --stream-name my_stream --shard-count 1;
      aws --endpoint-url=http://localhost:4572 s3 mb s3://demo-bucket
      "
    networks:
      - my_awesome_network
    depends_on:
      - localstack
networks:
  my_awesome_network:
which I copied from a blog post I found. But when I run docker-compose up, the bucket fails to create with the following error: ('Connection aborted.', error(99, 'Address not available'))
I ran it with small changes and it works correctly: change localhost to localstack.
version: "3"
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - '4568-4576:4568-4576'
      - '8055:8080'
    environment:
      - SERVICES=s3
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DEFAULT_REGION=us-east-1
      - DEBUG=1
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/private${TMPDIR}:/tmp/localstack"
    networks:
      - my_awesome_network
  setup-resources:
    image: mesosphere/aws-cli
    volumes:
      - ./dev_env:/project/dev_env
    environment:
      - AWS_ACCESS_KEY_ID=dummyaccess
      - AWS_SECRET_ACCESS_KEY=dummysecret
      - AWS_DEFAULT_REGION=us-east-1
    entrypoint: /bin/sh -c
    command: >
      "
      sleep 10;
      aws --endpoint-url=http://localstack:4572 s3 mb s3://demo-bucket
      "
    networks:
      - my_awesome_network
    depends_on:
      - localstack
networks:
  my_awesome_network:
It's a small detail, but it should not be aws --endpoint-url=http://localhost:4572 s3 mb s3://demo-bucket;
it should instead be aws --endpoint-url=http://localstack:4572 s3 mb s3://demo-bucket. That's right: localhost becomes localstack, the service name on the shared network.
Starting with version 0.11.0, all APIs are exposed via a single edge service, which is accessible on http://localhost:4566 by default (EDGE_PORT=4566).
Found in this article.
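Under that newer edge-port setup, the setup-resources command above would target the single edge port instead of the per-service port. A sketch, reusing the bucket name from the example:

```yaml
command: >
  "
  sleep 10;
  aws --endpoint-url=http://localstack:4566 s3 mb s3://demo-bucket
  "
```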

Getting an error while running Hyperledger using docker-compose

Actually, I want to set up a Hyperledger network using Fabric CA; however, my CA container exits whenever I bring up docker-compose.
I found the root cause: "FABRIC_CA_SERVER_CA_KEYFILE" changes every time, even though I set it manually:
"- FABRIC_CA_SERVER_CA_KEYFILE=/var/hyperledger/fabric-ca-server-config/bac7a78da8de5b5071c62cefa3ada1c978dadcce333cb92b5ea9e7c3462ca477_sk". The variable part is
"bac7a78da8de5b5071c62cefa3ada1c978dadcce333cb92b5ea9e7c3462ca477_sk".
How can I make it fixed with some stable ID in the docker-compose file?
My docker-compose file:
version: '2'
networks:
  dfarm:
services:
  # ca
  ca.dfarmadmin.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/var/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.dfarmadmin.com
      - FABRIC_CA_SERVER_CA_CERTFILE=/var/hyperledger/fabric-ca-server-config/localhost-7054.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/var/hyperledger/fabric-ca-server-config/bac7a78da8de5b5071c62cefa3ada1c978dadcce333cb92b5ea9e7c3462ca477_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw'
    volumes:
      - ${PWD}/caserver/admin/msp/cacerts:/var/hyperledger/fabric-ca-server-config
    container_name: ca.dfarmadmin.com
    networks:
      - dfarm
  # Orderer
  orderer.dfarmadmin.com:
    container_name: orderer.dfarmadmin.com
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      - FABRIC_CFG_PATH=/var/hyperledger/config
      # - ORDERER_GENERAL_LOGLEVEL=DEBUG
      - FABRIC_LOGGING_SPEC=INFO
      - ORDERER_GENERAL_LISTENADDRESS=orderer.dfarmadmin.com
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/genesis/dfarm-genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/msp
      - ORDERER_FILELEDGER_LOCATION=/var/ledger
    working_dir: $HOME
    command: orderer
    volumes:
      # Folder with genesis block
      - ${PWD}/config/orderer:/var/hyperledger/genesis
      # Map the folder with MSP for orderer
      - ${PWD}/client/orderer/orderer//msp:/var/hyperledger/msp
      # Map the current folder to cfg
      - ${PWD}/config/orderer:/var/hyperledger/config
      - ${HOME}/ledgers/ca/orderer.dfarmadmin.com:/var/ledger
    ports:
      - 7050:7050
    networks:
      - dfarm
  # Dfarmadmin peer1
  dfarmadmin-peer1.dfarmadmin.com:
    container_name: dfarmadmin-peer1.dfarmadmin.com
    image: hyperledger/fabric-peer:$IMAGE_TAG
    environment:
      - FABRIC_CFG_PATH=/var/hyperledger/config
      # - CORE_LOGGING_LEVEL=debug
      - FABRIC_LOGGING_SPEC=DEBUG
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_PEER_ID=dfarmadmin-peer1.dfarmadmin.com
      # - CORE_PEER_LISTENADDRESS=dfarmretail-peer1.dfarmretail.com:7051
      - CORE_PEER_ADDRESS=dfarmadmin-peer1.dfarmadmin.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=dfarmadmin-peer1.dfarmadmin.com:7051
      # - CORE_PEER_ADDRESS=0.0.0.0:7051
      # - CORE_PEER_GOSSIP_EXTERNALENDPOINT=0.0.0.0:7051
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_PEER_TLS_ENABLED=false
      # - CORE_PEER_GOSSIP_USELEADERELECTION=true
      # - CORE_PEER_GOSSIP_ORGLEADER=false
      # - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_FILESYSTEMPATH=/var/ledger
    working_dir: $HOME
    # command: peer node start --peer-chaincodedev=true
    command: peer node start
    volumes:
      # Folder with channel create tx file
      - ${PWD}/config:/var/hyperledger/channeltx
      # Map the folder with MSP for Peer
      - ${PWD}/client/dfarmadmin/peer1/msp:/var/hyperledger/msp
      # Map the current folder to cfg
      - ${PWD}/config:/var/hyperledger/config
      - /var/run/:/host/var/run/
      # Ledger folder for the peer
      - ${HOME}/ledgers/ca/dfarmadmin-peer1.dfarmadmin.com/:/var/ledger
    depends_on:
      - orderer.dfarmadmin.com
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    networks:
      - dfarm
  # Dfarmretail peer1
  dfarmretail-peer1.dfarmretail.com:
    container_name: dfarmretail-peer1.dfarmretail.com
    image: hyperledger/fabric-peer:$IMAGE_TAG
    environment:
      - FABRIC_CFG_PATH=/var/hyperledger/config
      # - CORE_LOGGING_LEVEL=debug
      - FABRIC_LOGGING_SPEC=INFO
      - CORE_CHAINCODE_LOGGING_LEVEL=info
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_dfarm
      - CORE_PEER_ID=dfarmretail-peer1.dfarmretail.com
      - CORE_PEER_ADDRESS=dfarmretail-peer1.dfarmretail.com:8051
      - CORE_PEER_LISTENADDRESS=dfarmretail-peer1.dfarmretail.com:8051
      - CORE_PEER_CHAINCODELISTENADDRESS=dfarmretail-peer1.dfarmretail.com:8052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=dfarmretail-peer1.dfarmretail.com:8051
      - CORE_PEER_LOCALMSPID=DfarmretailMSP
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_PEER_TLS_ENABLED=false
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # - CORE_PEER_GOSSIP_USELEADERELECTION=true
      # - CORE_PEER_GOSSIP_ORGLEADER=false
      # - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_FILESYSTEMPATH=/var/ledger
    working_dir: $HOME
    # command: peer node start --peer-chaincodedev=true
    command: peer node start
    volumes:
      # Folder with channel create tx file
      - ${PWD}/config:/var/hyperledger/channeltx
      # Map the folder with MSP for Peer
      - ${PWD}/client/dfarmretail/peer1/msp:/var/hyperledger/msp
      # Map the current folder to cfg
      - ${PWD}/config:/var/hyperledger/config
      - /var/run/:/host/var/run/
      # Ledger folder for the peer
      - ${HOME}/ledgers/ca/dfarmretail-peer1.dfarmretail.com:/var/ledger
    depends_on:
      - orderer.dfarmadmin.com
    ports:
      - 8051:8051
      - 8052:8052
      - 8053:8053
    networks:
      - dfarm
  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 5984:5984
    networks:
      - dfarm
  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=info
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=dfarmadmin-peer1.dfarmadmin.com:7051
      - CORE_PEER_LOCALMSPID=DfarmadminMSP
      # - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
      - CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp
      - CORE_CHAINCODE_KEEPALIVE=10
    working_dir: $HOME
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    networks:
      - dfarm
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - couchdb
There is a file called docker-template.yaml in fabric-samples/first-network/; check it. The template file acts as a backup, and the docker-compose.yaml file gets updated with the new certificates. If you still face the error, let me know and I can help you with this. 😊
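Another common workaround is to copy the randomly named private key to a fixed name before starting the stack, so the compose file can reference it verbatim. A sketch (the keystore path is simulated here with a temp directory; in your setup it would be the directory mounted into the CA container, e.g. ${PWD}/caserver/admin/msp/cacerts):

```shell
#!/usr/bin/env bash
set -eu

# Simulate the MSP keystore holding a key with a random <hash>_sk name
KEYSTORE_DIR="$(mktemp -d)"
echo "dummy-key" > "${KEYSTORE_DIR}/bac7a78d_sk"

# Give the key a stable name so docker-compose never needs editing
cp "${KEYSTORE_DIR}"/*_sk "${KEYSTORE_DIR}/ca-key_sk"

# The compose file then points at the fixed name:
#   - FABRIC_CA_SERVER_CA_KEYFILE=/var/hyperledger/fabric-ca-server-config/ca-key_sk
```

Run this copy step in whatever script regenerates your crypto material, before docker-compose up.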

Environment variable set twice by docker-compose up command

I have defined an environment variable JAVA_TOOL_OPTIONS in my YAML file. When I start up my containers with "docker-compose -f <file> up", I get the following error:
Picked up JAVA_TOOL_OPTIONS: -agentlib:jdwp=transport=dt_socket,address=8010,server=y,suspend=n
Listening for transport dt_socket at address: 8010
Picked up JAVA_TOOL_OPTIONS: -agentlib:jdwp=transport=dt_socket,address=8010,server=y,suspend=n
ERROR: Cannot load this JVM TI agent twice, check your java command line for duplicate jdwp options.
Error occurred during initialization of VM
agent library failed to init: jdwp
It looks like JAVA_TOOL_OPTIONS is set twice. Any help would be very much appreciated.
Below is my .yaml file:
version: '3.1'
services:
  blumeglobal:
    build:
      context: .
      dockerfile: Dockerfile
    image: corda-3.2
    environment:
      - JAVA_OPTIONS=-Xmx512m
    ports:
      - "10003:10003"
      - "10004:10004"
      - "10005:10005"
    image: corda:3.2
    container_name: blumeglobal
    volumes:
      - ./build/nodes/BlumeGlobal/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/BlumeGlobal/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/BlumeGlobal/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/BlumeGlobal/node.conf:/opt/corda/node.conf
      - ./build/nodes/BlumeGlobal/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/BlumeGlobal/certificates/:/opt/corda/certificates/
    networks:
      - mynetwork
  bestbuy:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - JAVA_OPTIONS=-Xmx512m
    ports:
      - "10012:10002"
      - "10013:10003"
      - "10014:10004"
      - "10015:10015"
    image: corda:3.2
    container_name: bestbuy
    volumes:
      - ./build/nodes/Bestbuy/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/Bestbuy/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/Bestbuy/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/Bestbuy/node.conf:/opt/corda/node.conf
      - ./build/nodes/Bestbuy/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/Bestbuy/certificates/:/opt/corda/certificates/
    depends_on:
      - blumeglobal
    networks:
      - mynetwork
  expeditors:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - JAVA_OPTIONS=-Xmx512m
      - JAVA_TOOL_OPTIONS=-agentlib:jdwp=transport=dt_socket,address=8010,server=y,suspend=n
    ports:
      - "10022:10002"
      - "10023:10003"
      - "10024:10004"
      - "10025:10025"
      - "8010:8010"
    image: corda:3.2
    container_name: expeditors
    volumes:
      - ./build/nodes/Expeditors/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/Expeditors/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/Expeditors/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/Expeditors/node.conf:/opt/corda/node.conf
      - ./build/nodes/Expeditors/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/Expeditors/certificates/:/opt/corda/certificates/
    depends_on:
      - blumeglobal
    networks:
      - mynetwork
  motorcarrier:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - JAVA_OPTIONS=-Xmx512m
    ports:
      - "10032:10002"
      - "10033:10003"
      - "10034:10004"
      - "10035:10035"
    image: corda:3.2
    container_name: motorcarrier
    volumes:
      - ./build/nodes/DTDC/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/DTDC/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/DTDC/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/DTDC/node.conf:/opt/corda/node.conf
      - ./build/nodes/DTDC/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/DTDC/certificates/:/opt/corda/certificates/
    depends_on:
      - blumeglobal
    networks:
      - mynetwork
  one:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - JAVA_OPTIONS=-Xmx512m
    ports:
      - "10042:10002"
      - "10043:10003"
      - "10044:10004"
      - "10045:10045"
    image: corda:3.2
    container_name: one
    volumes:
      - ./build/nodes/ONE/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/ONE/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/ONE/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/ONE/node.conf:/opt/corda/node.conf
      - ./build/nodes/ONE/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/ONE/certificates/:/opt/corda/certificates/
    depends_on:
      - blumeglobal
    networks:
      - mynetwork
  cisco:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - JAVA_OPTIONS=-Xmx512m
    ports:
      - "10052:10002"
      - "10053:10003"
      - "10054:10004"
      - "10055:10055"
    image: corda:3.2
    container_name: cisco
    volumes:
      - ./build/nodes/Cisco/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/Cisco/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/Cisco/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/Cisco/node.conf:/opt/corda/node.conf
      - ./build/nodes/Cisco/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/Cisco/certificates/:/opt/corda/certificates/
    depends_on:
      - blumeglobal
    networks:
      - mynetwork
  foxconn:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - JAVA_OPTIONS=-Xmx512m
    ports:
      - "10062:10002"
      - "10063:10003"
      - "10064:10004"
      - "10065:10065"
    image: corda:3.2
    container_name: foxconn
    volumes:
      - ./build/nodes/Foxconn/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/Foxconn/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/Foxconn/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/Foxconn/node.conf:/opt/corda/node.conf
      - ./build/nodes/Foxconn/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/Foxconn/certificates/:/opt/corda/certificates/
    depends_on:
      - blumeglobal
    networks:
      - mynetwork
  toshiba:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - JAVA_OPTIONS=-Xmx512m
    ports:
      - "10072:10002"
      - "10073:10003"
      - "10074:10004"
      - "10075:10075"
    image: corda:3.2
    container_name: toshiba
    volumes:
      - ./build/nodes/Toshiba/network-parameters:/opt/corda/network-parameters
      - ./build/nodes/Toshiba/persistence.mv.db:/opt/corda/persistence.mv.db
      - ./build/nodes/Toshiba/additional-node-infos/:/opt/corda/additional-node-infos/
      - ./build/nodes/Toshiba/node.conf:/opt/corda/node.conf
      - ./build/nodes/Toshiba/cordapps/:/opt/corda/cordapps/
      - ./build/nodes/Toshiba/certificates/:/opt/corda/certificates/
    depends_on:
      - blumeglobal
    networks:
      - mynetwork
networks:
  mynetwork:
    external: true
Below is my Dockerfile
FROM openjdk:8u151-jre-alpine

# Override default value with 'docker build --build-arg BUILDTIME_CORDA_VERSION=version'
# example: 'docker build --build-arg BUILDTIME_CORDA_VERSION=1.0.0 -t corda/node:1.0 .'
ARG BUILDTIME_CORDA_VERSION=3.2-corda
ARG BUILDTIME_JAVA_OPTIONS
ENV CORDA_VERSION=${BUILDTIME_CORDA_VERSION}
ENV JAVA_OPTIONS=${BUILDTIME_JAVA_OPTIONS}

# Set image labels
LABEL net.corda.version=${CORDA_VERSION} \
      maintainer="<blockchainservice@infosys.com>" \
      vendor="infosys"

RUN apk upgrade --update && \
    apk add --update --no-cache bash iputils && \
    rm -rf /var/cache/apk/* && \
    # Add user to run the app
    addgroup corda && \
    adduser -G corda -D -s /bin/bash corda && \
    # Create /opt/corda directory
    mkdir -p /opt/corda/plugins && \
    mkdir -p /opt/corda/logs

# Copy corda jar
ADD --chown=corda:corda https://dl.bintray.com/r3/corda/net/corda/corda/${CORDA_VERSION}/corda-${CORDA_VERSION}.jar /opt/corda/corda.jar
COPY run-corda.sh /run-corda.sh
RUN chmod 777 /opt/corda && chmod +x /run-corda.sh && \
    sync && \
    chown -R corda:corda /opt/corda

# Expose port for corda (default is 10002) and RPC
EXPOSE 10002
EXPOSE 10003
EXPOSE 10004
EXPOSE 10005 10015 10025 10035 10045 10055 10065 10075

# Working directory for Corda
WORKDIR /opt/corda
ENV HOME=/opt/corda
USER corda

# Start it
CMD ["/run-corda.sh"]
Try another base Java image for your Dockerfile (e.g. 'FROM azul/zulu-openjdk:8u192').
Here's an example of what you're trying to do, which you can find here: https://github.com/EricMcEvoyR3/corda-docker-compose (you'll notice he doesn't even set the JAVA_TOOL_OPTIONS variable).
You'll also want to note there's a dockerform task in Corda to generate the setup for you: https://docs.corda.net/docs/corda-os/4.4/generating-a-node.html#the-dockerform-task
It looks like you're copying over the correct information in the docker-compose file; just take another look at these examples and see if you're able to get this to work.
Best of luck on the adventure of building blockchain.