Create a docker-compose with a loop in Ansible

I need to build a docker-compose file based on a YAML file. The following YAML holds the name, image, and version of each service.
"services":
- "service": "front"
"image": "acalls-caselog-web-app"
"version": "latest"
- "service": "back"
"image": "acalls-caselog-web-service"
"version": "latest"
- "service": "vb"
"image": "acalls-caselog-vb-service"
"version": "latest"
- "service": "salesforce"
"image": "acalls-caselog-salesforce-app-service"
"version": "latest"
- "service": "tts"
"image": "ydilo-tts-service"
"version": "latest"
- "service": "ai classifier"
"image": "acalls-caselog-ai-classifier-service"
"version": "latest"
Up to now I have been using array positions to reference each image in the docker-compose file, like this:
version: "3.3"
services:
front:
image: url/{{services[0].image}}:{{services[0].version}}
ports:
- "81:81"
extra_hosts:
- "backend:172.32.3.46"
environment:
profile: preproduction
back:
image: url/{{services[1].image}}:{{services[1].version}}
ports:
- "20101:20101"
environment:
profile: preproduction
saleforce:
image: url/{{services[2].image}}:{{services[2].version}}
ports:
- "20103:20103"
environment:
profile: preproduction
But I need a way to generate this dynamically with a loop in the Ansible task, without hardcoding array positions in the docker-compose file.
Main.yml
---
- name: stop container
  ignore_errors: yes
  become: True
  shell:
    cmd: "docker-compose down"
    chdir: dir

- name: set docker-compose
  template:
    src: docker-compose-acalls.yml.j2
    dest: dir/docker-compose.yml
    mode: 0700

- name: Run container
  become: True
  shell:
    cmd: "nohup docker-compose -f docker-compose.yml up -d"
    chdir: dir
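As a side note, a minimal sketch (not part of the original playbook): instead of shelling out to docker-compose, the community.docker.docker_compose module can manage the stack declaratively, assuming that collection (1.x parameter names shown) and the docker-compose Python bindings are available on the target. The dir path is the same placeholder used above.

- name: Recreate the stack from the rendered compose file
  become: true
  community.docker.docker_compose:
    project_src: dir        # directory containing docker-compose.yml
    state: present          # 'absent' would replace the docker-compose down task
    restarted: true         # force the containers to be recreated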

You create a template file. You will have to play with whitespace and the {%- / -%} markers to adjust the positioning; this is just the general idea:
version: "3.3"
services:
{% for item in services %}
{{ item.service }}:
{% if item.image is defined %}
image: url/{{item.image}}:{{item.version}}
{%- endif %}
{% if item.ports is defined %}
ports:
- "{{ item.ports[0] }}"
{%- endif %}
{% if item.extra_hosts is defined %}
extra_hosts:
- "{{ item.extra_hosts[0] }}"
{%- endif %}
{% if item.environment is defined %}
environment:
- profile: {{ item.environment.profile }}
{% endif %}
{% endfor %}
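If the whitespace juggling gets tedious, note that Ansible's template module honors a #jinja2: override header at the top of the template; enabling lstrip_blocks there strips the indentation in front of every {% %} tag, so fewer -%} markers are needed. A minimal sketch of the idea (the header line is the documented override mechanism; the body just mirrors the image branch above):

#jinja2: trim_blocks: True, lstrip_blocks: True
version: "3.3"
services:
{% for item in services %}
  {{ item.service }}:
  {% if item.image is defined %}
    image: url/{{ item.image }}:{{ item.version }}
  {% endif %}
{% endfor %}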
your playbook:
- name: test
  hosts: localhost
  vars_files:
    - reference.yml
  tasks:
    - template:
        src: fileconf.j2
        dest: composedocker.yml
and your reference.yml file:
services:
  - service: "front"
    image: "acalls-caselog-web-app"
    version: "latest"
    ports:
      - "81:81"
    extra_hosts:
      - "backend:172.32.3.46"
    environment:
      profile: preproduction
  - service: "back"
    image: "acalls-caselog-web-service"
    version: "latest"
    ports:
      - "20101:20101"
    environment:
      profile: preproduction
  - service: "vb"
    image: "acalls-caselog-vb-service"
    version: "latest"
  - service: "salesforce"
    image: "acalls-caselog-salesforce-app-service"
    version: "latest"
    ports:
      - "20103:20103"
    environment:
      profile: preproduction
  - service: "tts"
    image: "ydilo-tts-service"
    version: "latest"
  - service: "ai classifier"
    image: "acalls-caselog-ai-classifier-service"
    version: "latest"
result:
version: "3.3"
services:
front:
image: url/acalls-caselog-web-app:latest
ports:
- "81:81"
extra_hosts:
- "backend:172.32.3.46"
environment:
- profile: preproduction
back:
image: url/acalls-caselog-web-service:latest
ports:
- "20101:20101"
environment:
- profile: preproduction
vb:
image: url/acalls-caselog-vb-service:latest
salesforce:
image: url/acalls-caselog-salesforce-app-service:latest
ports:
- "20103:20103"
environment:
- profile: preproduction
tts:
image: url/ydilo-tts-service:latest
ai classifier:
image: url/acalls-caselog-ai-classifier-service:latest
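An alternative worth knowing (a sketch, not from the original answer): build the compose structure as data inside the template and let Ansible's to_nice_yaml filter handle all indentation, so there is no whitespace to fight at all. This only covers the image key; ports, extra_hosts, and environment could be merged into each entry the same way:

{# docker-compose.yml.j2 — hypothetical data-driven variant #}
{% set svc_map = {} %}
{% for item in services %}
{%   set _ = svc_map.update({item.service: {'image': 'url/' ~ item.image ~ ':' ~ item.version}}) %}
{% endfor %}
{{ {'version': '3.3', 'services': svc_map} | to_nice_yaml(indent=2) }}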

Related

Ansible loop to create docker networks

Suppose I have my docker networks defined in a variable such as:
docker_networks:
  - name: default
    driver: bridge
  - name: proxy
    driver: bridge
    ipam_options:
      subnet: '192.168.100.0/24'
  - name: socket_proxy
    driver: bridge
    ipam_options:
      subnet: '192.168.101.0/24'
How would I go about running this with a loop to create these docker networks?
I tried the following; however, the ipam_config parameter causes it to fail if no subnet is defined:
- name: Create networks
  docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config:
      - subnet: '{{ item.ipam_options.subnet | default(omit) }}'
  loop: '{{ docker_networks }}'
If you modify your docker_networks variable so that the value of the ipam_options key is a list of dictionaries:
docker_networks:
  - name: proxy
    driver: bridge
    ipam_options:
      - subnet: '192.168.100.0/24'
  - name: socket_proxy
    driver: bridge
    ipam_options:
      - subnet: '192.168.101.0/24'
  - name: no_subnet
    driver: bridge
Then you can rewrite your task like this:
- name: Create networks
  community.docker.docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config: "{{ item.ipam_options | default(omit) }}"
  loop: '{{ docker_networks }}'
(I would also just rename the ipam_options key to ipam_config, so that it matches the parameter name.)
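If you would rather keep the original variable shape (ipam_options as a single dict), an inline conditional can omit the whole parameter when no subnet is given. A sketch under that assumption:

- name: Create networks
  community.docker.docker_network:
    name: '{{ item.name }}'
    driver: '{{ item.driver | default(omit) }}'
    ipam_config: "{{ [{'subnet': item.ipam_options.subnet}] if item.ipam_options is defined else omit }}"
  loop: '{{ docker_networks }}'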

How to get Dapr Service to Service Invocation to work when running under docker-compose?

I am receiving the following error when trying to call a service using the Dapr SDK.
System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:3500)
---> System.Net.Sockets.SocketException (111): Connection refused
Here are my docker-compose settings for the service I am trying to call:
quest-service:
  image: ${DOCKER_REGISTRY-gamification}/quest-service:${TAG:-latest}
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - SeqServerUrl=http://seq
  build:
    context: .
    dockerfile: Services/LW.Gamification.QuestService/Dockerfile
  ports:
    - "5110:80"
    - "50010:50001"

quest-service-dapr:
  image: "daprio/daprd:latest"
  command: ["./daprd",
    "-app-id", "Quest-Service",
    "-app-port", "80",
    "-components-path", "/Components",
    "-config", "/Configuration/config.yaml"
  ]
  volumes:
    - "./Dapr/Components/:/Components"
    - "./Dapr/Configuration/:/Configuration"
  depends_on:
    - quest-service
  network_mode: "service:quest-service"
And the settings for the caller:
player-service:
  image: ${DOCKER_REGISTRY-gamification}/player-service:${TAG:-latest}
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://0.0.0.0:80
    - SeqServerUrl=http://seq
  build:
    context: .
    dockerfile: Services/LW.Gamificaiton.PlayerService/Dockerfile
  ports:
    - "5109:80"
    - "50009:50001"

player-service-dapr:
  image: "daprio/daprd:latest"
  command: ["./daprd",
    "-app-id", "Player-Service",
    "-app-port", "80",
    "-components-path", "/Components",
    "-config", "/Configuration/config.yaml"
  ]
  volumes:
    - "./Dapr/Components/:/Components"
    - "./Dapr/Configuration/:/Configuration"
  depends_on:
    - player-service
  network_mode: "service:player-service"
And here is the code that is failing to work:
// demo service to service call
var httpClient = DaprClient.CreateInvokeHttpClient("Quest-Service");
var requestUri = $"api/v1/Quest";
var result = await httpClient.GetFromJsonAsync<IEnumerable<string>>(requestUri);
Note: Messaging is working fine. :-)
I am new to Dapr, so I must be doing something silly wrong, maybe something to do with ports. I just don't know!
Following this question: Dapr Client Docker Compose Issue,
I managed to get this partly working using the following docker-compose config:
services:
  placement:
    image: "daprio/dapr"
    command: ["./placement", "-port", "50000", "-log-level", "debug"]
    ports:
      - "50000:50000"

  quest-service:
    image: ${DOCKER_REGISTRY-gamification}/quest-service:${TAG:-latest}
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://0.0.0.0:80
      - SeqServerUrl=http://seq
      - DAPR_GRPC_PORT=50010
    build:
      context: .
      dockerfile: Services/LW.Gamification.QuestService/Dockerfile
    ports:
      - "5110:80"
      - "50010:50010"
    depends_on:
      - placement
      - rabbitmq
      - redis
      - seq
      - zipkin

  quest-service-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd",
      "-app-id", "Quest-Service",
      "-app-port", "80",
      "-placement-host-address", "placement:50000",
      "-dapr-grpc-port", "50010",
      "-components-path", "/Components",
      "-config", "/Configuration/config.yaml"
    ]
    volumes:
      - "./Dapr/Components/:/Components"
      - "./Dapr/Configuration/:/Configuration"
    depends_on:
      - quest-service
    network_mode: "service:quest-service"

  generatetraffic:
    image: ${DOCKER_REGISTRY-gamification}/generatetraffic:${TAG:-latest}
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://0.0.0.0:80
      - SeqServerUrl=http://seq
      - DAPR_GRPC_PORT=50017
    build:
      context: .
      dockerfile: Services/LW.Gamification.GenerateTraffic/Dockerfile
    ports:
      - "5117:80"
      - "50017:50017"
    depends_on:
      - placement
      - rabbitmq
      - redis
      - seq
      - zipkin

  generatetraffic-dapr:
    image: "daprio/daprd:latest"
    command: ["./daprd",
      "-app-id", "Generate-Traffic",
      "-app-port", "80",
      "-placement-host-address", "placement:50000",
      "-dapr-grpc-port", "50017",
      "-components-path", "/Components",
      "-config", "/Configuration/config.yaml"
    ]
    volumes:
      - "./Dapr/Components/:/Components"
      - "./Dapr/Configuration/:/Configuration"
    depends_on:
      - generatetraffic
    network_mode: "service:generatetraffic"
However, I still have issues with some of the documented APIs not working.
var httpClient = DaprClient.CreateInvokeHttpClient("Quest-Service");
var requestUri = $"api/v1/Quest";
var result = await httpClient.GetAsync(requestUri);
Still fails?
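For what it's worth, the 127.0.0.1:3500 in the original error is the Dapr SDK's default HTTP port: DaprClient assumes the sidecar is on localhost:3500 unless DAPR_HTTP_PORT (or DAPR_GRPC_PORT, for gRPC) says otherwise. Because network_mode: "service:..." puts the sidecar in the app container's network namespace, the fix is making those values line up, which is what the working config above does. The sketch below just isolates that pairing (the port numbers are illustrative):

quest-service:
  environment:
    - DAPR_GRPC_PORT=50010              # must match the sidecar's -dapr-grpc-port
quest-service-dapr:
  command: ["./daprd",
    "-app-id", "Quest-Service",
    "-app-port", "80",
    "-dapr-grpc-port", "50010"          # sidecar listens here, in the shared namespace
  ]
  network_mode: "service:quest-service" # share quest-service's network namespace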

Filter or map in jinja2

I'm trying to filter or map this YAML file. Up to now I was able to access it with this:
{{services[0].version}}
But I need to access each entry by its name, without the position.
YAML file
"services":
- "service": "front"
"image": "acalls-caselog-web-app"
"version": "latest"
- "service": "back"
"image": "acalls-caselog-web-service"
"version": "latest"
docker-compose.yml.j2
version: "3.3"
services:
front:
image: url/{{services[0].image}}:{{services[0].version}}
ports:
- "81:81"
extra_hosts:
- "backend:172.32.3.46"
environment:
profile: preproduction
back:
image: url/ {{services[1].image}} : {{services[1].version}}
ports:
- "82:82"
extra_hosts:
- "backend:172.32.3.46"
environment:
profile: preproduction
I really don't know if I need to use map or filter, or whether there is another form like service.service(front).image.
You can do this using the selectattr filter. Note that selectattr returns a generator, so pipe it through first to get the single matching entry:
{{ services | selectattr("service", "equalto", "front") | first }}
In your YAML, you can apply it like this:
version: "3.3"
services:
front:
image: {% with front=services|selectattr("service", "equalto", "front") -%}
url/{{front.image}}:{{front.version}}
{%- endwith %}
ports:
- "81:81"
extra_hosts:
- "backend:172.32.3.46"
environment:
profile: preproduction
back:
image: {% with back = services|selectattr("service", "equalto", "back") -%}
url/ {{back.image}} : {{back.version}}
{%- endwith %}
ports:
- "82:82"
extra_hosts:
- "backend:172.32.3.46"
environment:
profile: preproduction
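If the with block feels heavy, the same lookup can be done inline; a short sketch of an equivalent one-liner:

front:
  image: url/{{ (services | selectattr('service', 'equalto', 'front') | first).image }}:{{ (services | selectattr('service', 'equalto', 'front') | first).version }}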

Multiple Orderer redundancy in Kafka based network

We're stuck configuring a Fabric network based on 3 orgs with 1 peer each and 2 Kafka-based orderers. For Kafka ordering we use 4 Kafka nodes with 3 ZooKeepers. It's deployed on AWS EC2 instances as follows:
1: Org1
2: Org2
3: Org3
4: orderer0, orderer1, kafka0, kafka1, kafka2, kafka3, zookeeper0, zookeeper1, zookeeper2
All of the ordering nodes plus the Kafka cluster are located on the same machine for connectivity reasons (we read somewhere that they must be on the same machine to avoid connection problems).
During our tests, we take down the first orderer (orderer0) with docker stop to check redundancy. We expected the network to keep working through orderer1, but instead it dies and stops working.
Looking at the peer's console, I can see some errors.
Could not connect to any of the endpoints: [orderer0.example.com:7050, orderer1.example.com:8050]
Below are the relevant configuration files for the system.
Orderer + Kafka + ZooKeeper
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

services:
  zookeeper0.example.com:
    container_name: zookeeper0.example.com
    extends:
      file: docker-compose-base.yaml
      service: zookeeper0.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"

  zookeeper1.example.com:
    container_name: zookeeper1.example.com
    extends:
      file: docker-compose-base.yaml
      service: zookeeper1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"

  zookeeper2.example.com:
    container_name: zookeeper2.example.com
    extends:
      file: docker-compose-base.yaml
      service: zookeeper2.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"

  kafka0.example.com:
    container_name: kafka0.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka0.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"

  kafka1.example.com:
    container_name: kafka1.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka1.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"

  kafka2.example.com:
    container_name: kafka2.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka2.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"

  kafka3.example.com:
    container_name: kafka3.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka3.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"

  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer:x86_64-1.1.0
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_LISTEN_PORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt, /etc/hyperledger/crypto/peerOrg3/tls/ca.crt]
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./channel:/etc/hyperledger/configtx
      - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/:/etc/hyperledger/crypto/orderer
      - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/crypto/peerOrg1
      - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peerOrg2
      - ./channel/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/:/etc/hyperledger/crypto/peerOrg3
    depends_on:
      - kafka0.example.com
      - kafka1.example.com
      - kafka2.example.com
      - kafka3.example.com

  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer:x86_64-1.1.0
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_LISTEN_PORT=8050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt, /etc/hyperledger/crypto/peerOrg3/tls/ca.crt]
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
    command: orderer
    ports:
      - 8050:7050
    volumes:
      - ./channel:/etc/hyperledger/configtx
      - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/:/etc/hyperledger/crypto/orderer
      - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/crypto/peerOrg1
      - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peerOrg2
      - ./channel/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/:/etc/hyperledger/crypto/peerOrg3
    depends_on:
      - kafka0.example.com
      - kafka1.example.com
      - kafka2.example.com
      - kafka3.example.com
Peer and CA from Org2
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

services:
  ca.org2.example.com:
    image: hyperledger/fabric-ca:x86_64-1.1.0
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/efa7d0819b7083f6c06eb34da414acbcde79f607b9ce26fb04dee60cf79a389a_sk
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/efa7d0819b7083f6c06eb34da414acbcde79f607b9ce26fb04dee60cf79a389a_sk
    ports:
      - "8054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./channel/crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg2

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org2.example.com
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
    ports:
      - 8051:7051
      - 8053:7053
    volumes:
      - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peer
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
    extra_hosts:
      - "orderer0.example.com:xxx.xxx.xxx.xxx"
      - "orderer1.example.com:xxx.xxx.xxx.xxx"
      - "kafka0.example.com:xxx.xxx.xxx.xxx"
      - "kafka1.example.com:xxx.xxx.xxx.xxx"
      - "kafka2.example.com:xxx.xxx.xxx.xxx"
      - "kafka3.example.com:xxx.xxx.xxx.xxx"
Orderer base
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

services:
  orderer-base:
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # kafka
      - CONFIGTX_ORDERER_ORDERERTYPE=kafka
      - CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka0.example.com,kafka1.example.com,kafka2.example.com,kafka3.example.com]
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
Kafka base
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

services:
  zookeeper:
    image: hyperledger/fabric-zookeeper
    environment:
      - ZOO_SERVERS=server.1=zookeeper0.example.com:2888:3888 server.2=zookeeper1.example.com:2888:3888 server.3=zookeeper2.example.com:2888:3888
    restart: always

  kafka:
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
configtx.yaml
Organizations:
  - &OrdererOrg
    Name: OrdererMSP
    ID: OrdererMSP
    MSPDir: crypto-config/ordererOrganizations/example.com/msp

  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    AnchorPeers:
      - Host: peer0.org1.example.com
        Port: 7051

  - &Org2
    Name: Org2MSP
    ID: Org2MSP
    MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
    AnchorPeers:
      - Host: peer0.org2.example.com
        Port: 7051

  - &Org3
    Name: Org3MSP
    ID: Org3MSP
    MSPDir: crypto-config/peerOrganizations/org3.example.com/msp
    AnchorPeers:
      - Host: peer0.org3.example.com
        Port: 7051

################################################################################
Orderer: &OrdererDefaults
  OrdererType: kafka
  Addresses:
    - orderer0.example.com:7050
    - orderer1.example.com:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 98 MB
    PreferredMaxBytes: 512 KB
  Kafka:
    Brokers:
      - kafka0.example.com:9092
      - kafka1.example.com:9092
      - kafka2.example.com:9092
      - kafka3.example.com:9092
  Organizations:

################################################################################
Application: &ApplicationDefaults
  Organizations:

################################################################################
Profiles:
  ThreeOrgsOrdererGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
          - *Org2
          - *Org3
  ThreeOrgsChannel:
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
        - *Org2
        - *Org3
Could it be a configuration error? Connection problems are mostly ruled out, because running the same network on a local machine gives the same result.
Thanks in advance.
Regards
Finally got it running smoothly. It turns out the problem wasn't in the docker-compose files, but in the version of the Fabric SDK used by the web service. I was using fabric-client and fabric-ca-client, both on version 1.1, and orderer failover was missing until 1.2 (more info: https://jira.hyperledger.org/browse/FABN-90).
Just to clarify: I was able to see transactions happening on both orderers because of the interconnection between them, but I was only targeting the first one. When that orderer went down, the network would go dark.
This is how Fabric deals with orderers: it points to the first orderer in the list; if that one is down or unreachable, it moves it to the bottom of the list and targets the next one. That is what happens since 1.2; in older versions you have to write your own error handler so that the client switches to the next orderer.
I'm not sure, but it could be a network issue. Since these are separate compose files, Docker creates a separate network for each of them.
Also, I don't see any networks mentioned in the YAML files.
Please check the list of networks with "docker network list".
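Expanding on that: each compose file gets its own default network, so containers from one file cannot resolve service names from another. A common pattern (a sketch; the network name is illustrative) is to pre-create one external network and attach every service in every compose file to it:

# once, on the host:
#   docker network create fabric_net

# then, in each docker-compose file:
networks:
  fabric_net:
    external: true

services:
  orderer0.example.com:
    # ...existing service definition...
    networks:
      - fabric_net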

How to set password for Traefik dashboard with CLI argument?

There's a manual for this here, but it's heavily tied to TOML. I need a CLI argument, as I'm on docker swarm with a Consul setup and high availability:
consul:
  image: consul
  command: agent -server -bootstrap-expect=1
  volumes:
    - consul-data:/consul/data
  environment:
    - CONSUL_LOCAL_CONFIG={"datacenter":"ams3","server":true}
    - CONSUL_BIND_INTERFACE=eth0
    - CONSUL_CLIENT_INTERFACE=eth0
  deploy:
    replicas: 1
    placement:
      constraints:
        - node.role == manager
    restart_policy:
      condition: on-failure
  networks:
    - traefik

proxy_init:
  image: traefik:1.6.3-alpine
  command: >
    storeconfig
    --api
    --entrypoints=Name:http Address::80 Redirect.EntryPoint:https
    --entrypoints=Name:api Address::8080 Auth.Basic.Users:test:$$apr1$$H6uskkkW$$IgXLP6ewTrSuBkTrqE8wj/ Auth.HeaderField:X-WebAuth-User
    --entrypoints=Name:https Address::443 TLS
    --defaultentrypoints=http,https
    --acme
    --acme.storage="traefik/acme/account"
    --acme.entryPoint=https
    --acme.httpChallenge.entryPoint=http
    --acme.onHostRule=true
    --acme.acmelogging=true
    --acme.onDemand=false
    --acme.caServer="https://acme-staging-v02.api.letsencrypt.org/directory"
    --acme.email="whatever@gmail.com"
    --docker
    --docker.swarmMode
    --docker.domain=swarm.xxx.io
    --docker.endpoint=unix://var/run/docker.sock
    --docker.watch
    --consul
    --consul.watch
    --consul.endpoint=consul:8500
    --consul.prefix=traefik
    --logLevel=DEBUG
    --accesslogsfile=/dev/stdout
  networks:
    - traefik
  deploy:
    placement:
      constraints:
        - node.role == manager
    restart_policy:
      condition: on-failure
  depends_on:
    - consul

proxy:
  image: traefik:1.6.3-alpine
  depends_on:
    - traefik_init
    - consul
  command: >
    --consul
    --consul.endpoint=consul:8500
    --consul.prefix=traefik
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - webgateway
    - traefik
  ports:
    - 80:80
    - 443:443
    - 8080:8080
  deploy:
    mode: replicated
    replicas: 2
    restart_policy:
      condition: on-failure
    placement:
      constraints:
        - node.role == manager
    update_config:
      parallelism: 1
      delay: 10s
You can also set labels for the Traefik container itself. Traefik can manage its own container, so you can set HTTP basic auth through a label just like you do with any other container. The only problem I've had is that the DNS challenge from the ACME client fails, but it works with self-signed certificates.
deploy:
  labels:
    - "traefik.docker.network=infra_traefik"
    - "traefik.port=8080"
    - "traefik.tags=monitoring"
    - "traefik.backend.loadbalancer.stickiness=true"
    - "traefik.frontend.passHostHeader=true"
    - "traefik.frontend.rule=Host:proxy01.swarm.lympo.io,proxy.swarm.lympo.io"
    - "traefik.frontend.auth.basic=admin:$$apr1$$Xv0Slw4m$$MqFgCq4Do83fcKIsPTDGu/"
  restart_policy:
    condition: on-failure
  placement:
    constraints:
      - node.role == manager
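To generate the hash used in traefik.frontend.auth.basic (and in the entrypoint flags below), the Traefik docs suggest htpasswd, with every $ doubled so YAML/compose does not treat it as a variable; for example:

echo $(htpasswd -nb admin secure_password) | sed -e s/\\$/\\$\\$/g
# -> admin:$$apr1$$...$$...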
This is the configuration I use. I have two different entrypoints: one for ping (8082) and one for the API/dashboard (8081, with basic auth):
version: "3.4"
services:
traefik_init:
image: traefik:1.7.9
command:
- "storeconfig"
- "--api"
- "--api.entrypoint=foo"
- "--ping"
- "--ping.entrypoint=bar"
- "--accessLog"
- "--logLevel=INFO"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 TLS"
- "--entrypoints=Name:foo Address::8081 Auth.Basic.Users:admin:$$2a$$10$$i9SzMNSHJlab7zKH28z17uicrnXbHfIicWJVPanNBxf6aiNyoMare"
- "--entrypoints=Name:bar Address::8082"
- "--defaultentrypoints=http,https"
Warning: the $ character has to be escaped with another $ in YAML.