Error: got unexpected status: FORBIDDEN -- implicit policy evaluation failed - docker-compose

I'm trying to build a network with 3 orgs (each with 3 peers) and two orderer nodes using Kafka and ZooKeeper, on Fabric 1.4.3.
When I create the channel with
docker exec cli peer channel create -o orderer0.example.com:7050 -c $CHANNEL_NAME -f $ARTIFACTS_DIR/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
the following error occurs in the cli container:
Error: got unexpected status: FORBIDDEN -- implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
and this is the docker log of orderer0:
2019-10-12 09:01:16.513 UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 011 [channel: channel.first] Setting up the channel consumer for this channel (start offset: -2)...
2019-10-12 09:01:16.524 UTC [orderer.consensus.kafka] startThread -> INFO 012 [channel: channel.first] Channel consumer set up successfully
2019-10-12 09:01:16.543 UTC [orderer.consensus.kafka] startThread -> INFO 013 [channel: channel.first] Start phase completed successfully
2019-10-12 09:01:18.537 UTC [orderer.common.broadcast] ProcessMessage -> WARN 014 [channel: channel.first] Rejecting broadcast of config message from 172.18.0.29:35290 because of error: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
2019-10-12 09:01:18.537 UTC [comm.grpc.server] 1 -> INFO 015 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.18.0.29:35290 grpc.code=OK grpc.call_duration=1.888934ms
2019-10-12 09:01:18.541 UTC [common.deliver] Handle -> WARN 016 Error reading from 172.18.0.29:35288: rpc error: code = Canceled desc = context canceled
2019-10-12 09:01:18.542 UTC [comm.grpc.server] 1 -> INFO 017 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.18.0.29:35288 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=10.552989ms
Directory structure:
directories
├── artifacts
│   ├── channel.tx
│   └── genesis.block
├── bin
│   ├── crypto-config
│   │   └── ...
│   └── ...
└── network
    ├── docker-compose-mq.yaml
    ├── docker-compose-orderer.yaml
    └── ...
I've read some solutions to similar problems here, but I haven't solved mine yet.
These are the relevant parts of my configtx.yaml:
Organizations:
  - &OrdererOrg
    Name: OrdererOrg
    ID: OrdererMSP
    MSPDir: ./crypto-config/ordererOrganizations/example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('OrdererMSP.admin')"
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: ./crypto-config/peerOrganizations/org1.example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org1MSP.admin')"
    AnchorPeers:
      - Host: peer0.org1.example.com
        Port: 7051
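(Aside: to see exactly which consortium and org the channel transaction carries, and therefore whose 'Writers' sub-policies it will be checked against, the artifact can be decoded with the Fabric 1.4.3 configtxgen binary; a quick check, assuming configtxgen is on your PATH:)
configtxgen -inspectChannelCreateTx ./artifacts/channel.tx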
and this is docker-compose-cli.yaml:
cli:
  container_name: cli
  image: hyperledger/fabric-tools:1.4.3
  tty: true
  stdin_open: true
  environment:
    - SYS_CHANNEL=$SYS_CHANNEL
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    #- FABRIC_LOGGING_SPEC=DEBUG
    - FABRIC_LOGGING_SPEC=INFO
    - CORE_PEER_ID=cli
    - CORE_PEER_ADDRESS=peer0.org1.example.com
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
    - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: /bin/bash
  volumes:
    - /var/run/:/host/var/run/
    - ../chaincode/:/opt/gopath/src/github.com/chaincode
    - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    - ../artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    - ../chaincode:/opt/gopath/src/github.com/hyperledger/fabric/chaincode
  #- ./:/etc/hyperledger/fabric
crypto-config.yaml:
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    EnableNodeOUs: true
    Specs:
      - Hostname: orderer
    Template:
      Count: 2
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    EnableNodeOUs: true
    Template:
      Count: 3
    Users:
      Count: 1
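(Aside: if the MSP material and the channel artifacts were generated at different times, the admin's signature won't satisfy the channel's Writers policy; regenerating both from the configs above would look roughly like the following — the profile names are placeholders for whatever your configtx.yaml actually defines, and the channel ID is taken from your orderer log:)
cryptogen generate --config=./crypto-config.yaml
configtxgen -profile OrdererGenesisProfile -channelID sys-channel -outputBlock ./artifacts/genesis.block
configtxgen -profile ChannelProfile -outputCreateChannelTx ./artifacts/channel.tx -channelID channel.first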
docker-compose-orderer.yaml:
version: '2'
networks:
  blockchain_network:
services:
  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer:1.4.3
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ../artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
    networks:
      - blockchain_network
  # orderer1 is the same as above
I want to know why this error occurs and how to solve it.

What's your channel configuration inside configtx.yaml?
Have you tried to run the peer command inside the client bash? (I'm not sure your MSP-related environment variables are active the way you are using "docker exec".)
docker exec -it cli bash
peer channel create -o orderer0.example.com:7050 -c $CHANNEL_NAME -f $ARTIFACTS_DIR/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
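Also note that in your one-liner, $CHANNEL_NAME and $ARTIFACTS_DIR are expanded by the host shell, not inside the container, so if either is unset on the host the command won't point at the channel or file you expect. A sketch that forces expansion inside the container (the channel name here is an example taken from your orderer log, and channel-artifacts is the mount point from your cli compose service):
docker exec -e CHANNEL_NAME=channel.first cli bash -c 'peer channel create -o orderer0.example.com:7050 -c $CHANNEL_NAME -f channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem'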

Related

Multiple Mongo containers in ECS

I have made this task definition in Ansible to create two containers in an ECS cluster:
- name: Create task definition
  community.aws.ecs_taskdefinition:
    containers:
      - name: mongo-db
        essential: true
        image: "mongo:5.0.9"
        environment:
          - name: MONGO_INITDB_ROOT_USERNAME
            value: "{{ lookup('env', 'DB_USER') }}"
          - name: MONGO_INITDB_ROOT_PASSWORD
            value: "{{ lookup('env', 'DB_PASSWORD') }}"
        mountPoints:
          - containerPath: "/data"
            sourceVolume: mongodb-data-local
            readOnly: true
          - containerPath: "/dump"
            sourceVolume: mongodb-dump
          - containerPath: "/docker-entrypoint-initdb.d/init-mongo.sh"
            sourceVolume: db-init
            readOnly: true
        portMappings:
          - containerPort: 27017
            hostPort: 27017
      - name: roadmap-mongo-db
        essential: false
        image: "mongo:5.0.9"
        command: ["mongod", "--port", "27018"]
        environment:
          - name: MONGO_INITDB_ROOT_USERNAME
            value: "{{ lookup('env', 'DB_USER') }}"
          - name: MONGO_INITDB_ROOT_PASSWORD
            value: "{{ lookup('env', 'DB_PASSWORD') }}"
        mountPoints:
          - containerPath: "/data"
            sourceVolume: roadmap-mongodb-data-local
            readOnly: true
          - containerPath: "/dump"
            sourceVolume: roadmap-mongodb-dump
          - containerPath: "/docker-entrypoint-initdb.d/init-mongo.sh"
            sourceVolume: roadmap-db-init
        portMappings:
          - containerPort: 27018
            hostPort: 27018
    volumes:
      - name: mongodb-data-local
      - name: roadmap-mongodb-data-local
      - name: mongodb-dump
        host:
          sourcePath: "/home/ec2-user/dump"
      - name: roadmap-mongodb-dump
        host:
          sourcePath: "/home/ec2-user/roadmap-dump"
      - name: db-init
        host:
          sourcePath: "/home/ec2-user/init-mongo.sh"
      - name: roadmap-db-init
        host:
          sourcePath: "/home/ec2-user/roadmapdb/init-mongo.sh"
    cpu: 128
    memory: 340
    family: showcase-task
    force_create: True
    state: present
Basically I am creating two MongoDB containers. The two containers start just fine; the first container (mongo-db) runs properly, I can exec into it and run mongo -u root to connect, and the services that interact with it communicate just fine, so no problem with the first container.
The second container, however, is giving me problems: even though it runs OK and I can exec into it, I cannot run mongo -u root.
mongo -u root gives this:
MongoDB shell version v5.0.9
Enter password:
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:372:17
@(connect):2:6
exception: connect failed
exiting with code 1
It seems it still wants to connect to 27017 instead of 27018. What am I doing wrong?
However, the container logs seem to be fine:
{"t":{"$date":"2022-11-03T12:46:27.654+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27018.sock"}}
{"t":{"$date":"2022-11-03T12:46:27.753+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2022-11-03T12:46:27.756+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27018,"ssl":"off"}}

fabric 2.3.2: network.sh cannot create all the containers

[screenshot: most org1 peer containers stuck at "creating container"]
I wrote a docker-compose.yaml to deploy 4 peers for org1 and 1 peer for org2, and it worked. But when I wrote a docker-compose-100peer.yaml to start 100 peers for org1 and 1 peer for org2, it always gets stuck in the state shown in the picture. It never starts more than 30 org1 peers, even if I wait a whole afternoon and a whole night. The original yaml file (docker-compose-test-net.yaml) doesn't limit memory or CPU resources.
peer5.org1.example.com:
  container_name: peer5.org1.example.com
  image: hyperledger/fabric-peer:latest
  labels:
    service: hyperledger-fabric
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric_test
    - FABRIC_LOGGING_SPEC=INFO
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_PROFILE_ENABLED=false
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - CORE_PEER_ID=peer5.org1.example.com
    - CORE_PEER_ADDRESS=peer5.org1.example.com:6010
    - CORE_PEER_LISTENADDRESS=0.0.0.0:6010
    - CORE_PEER_CHAINCODEADDRESS=peer5.org1.example.com:6011
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:6011
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer5.org1.example.com:6010
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer5.org1.example.com:6010
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:16010
  volumes:
    - ${DOCKER_SOCK}/:/host/var/run/docker.sock
    - ../organizations/peerOrganizations/org1.example.com/peers/peer5.org1.example.com/msp:/etc/hyperledger/fabric/msp
    - ../organizations/peerOrganizations/org1.example.com/peers/peer5.org1.example.com/tls:/etc/hyperledger/fabric/tls
    - peer5.org1.example.com:/var/hyperledger/production
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: peer node start
  ports:
    - 6010:6010
    - 16010:16010
  networks:
    - test
peer6.org1.example.com:
  container_name: peer6.org1.example.com
  image: hyperledger/fabric-peer:latest
  labels:
    service: hyperledger-fabric
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric_test
    - FABRIC_LOGGING_SPEC=INFO
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_PROFILE_ENABLED=false
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - CORE_PEER_ID=peer6.org1.example.com
    - CORE_PEER_ADDRESS=peer6.org1.example.com:6012
    - CORE_PEER_LISTENADDRESS=0.0.0.0:6012
    - CORE_PEER_CHAINCODEADDRESS=peer6.org1.example.com:6013
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:6013
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer6.org1.example.com:6012
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer6.org1.example.com:6012
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:16012
  volumes:
    - ${DOCKER_SOCK}/:/host/var/run/docker.sock
    - ../organizations/peerOrganizations/org1.example.com/peers/peer6.org1.example.com/msp:/etc/hyperledger/fabric/msp
    - ../organizations/peerOrganizations/org1.example.com/peers/peer6.org1.example.com/tls:/etc/hyperledger/fabric/tls
    - peer6.org1.example.com:/var/hyperledger/production
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: peer node start
  ports:
    - 6012:6012
    - 16012:16012
  networks:
    - test
......
cli:
  container_name: cli
  image: hyperledger/fabric-tools:latest
  labels:
    service: hyperledger-fabric
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    - FABRIC_LOGGING_SPEC=INFO
    #- FABRIC_LOGGING_SPEC=DEBUG
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: /bin/bash
  volumes:
    - ../organizations:/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations
    - ../scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
  depends_on:
    - peer0.org1.example.com
    - peer0.org2.example.com
    - peer1.org1.example.com
    - peer2.org1.example.com
    - peer3.org1.example.com
    - peer4.org1.example.com
    - peer5.org1.example.com
    - peer6.org1.example.com
    - peer7.org1.example.com
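(One workaround sketch, not a root-cause fix: compose can bring the peers up in smaller batches by naming services explicitly, which avoids launching all 100 at once. The file name and batch size below are illustrative:)
# start org1 peers ten at a time
for i in $(seq 0 99); do
  docker-compose -f docker-compose-100peer.yaml up -d "peer${i}.org1.example.com"
  if [ $(( (i + 1) % 10 )) -eq 0 ]; then sleep 5; fi
done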

Could not find the provided job class (com.job.ClassName) in the user lib directory when deploying Apache Flink 1.11 on Kubernetes

Today I am trying to deploy Apache Flink 1.11 on my Kubernetes v1.16 cluster following the official docs. This is my job deployment yaml with a little tweak (originally from the official content, re-exported from Kubernetes):
[root@k8smaster flink]# cat flink-jobmanager.yaml
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: flink
    component: jobmanager
    job-name: flink-jobmanager
  name: flink-jobmanager
  namespace: infrastructure
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: flink
        component: jobmanager
        job-name: flink-jobmanager
    spec:
      containers:
        - args:
            - standalone-job
            - --job-classname
            - com.job.ClassName
            - <optional arguments>
            - <job arguments>
          image: flink:1.11.0-scala_2.11
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 60
            successThreshold: 1
            tcpSocket:
              port: 6123
            timeoutSeconds: 1
          name: jobmanager
          ports:
            - containerPort: 6123
              name: rpc
              protocol: TCP
            - containerPort: 6124
              name: blob-server
              protocol: TCP
            - containerPort: 8081
              name: webui
              protocol: TCP
          securityContext:
            runAsUser: 9999
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/flink/conf
              name: flink-config-volume
            - mountPath: /opt/flink/usrlib
              name: job-artifacts-volume
      dnsPolicy: ClusterFirst
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
            name: flink-config
          name: flink-config-volume
        - hostPath:
            path: /home
          name: job-artifacts-volume
The job is created successfully, but when I log in to the pod and check the log, it shows this:
Starting Job Manager
sed: couldn't open temporary file /opt/flink/conf/sedktEwGn: Read-only file system
sed: couldn't open temporary file /opt/flink/conf/sedvZ7Znn: Read-only file system
/docker-entrypoint.sh: 72: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml: Permission denied
/docker-entrypoint.sh: 91: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
Starting standalonejob as a console application on host flink-jobmanager-dfhtt.
2020-08-19 16:50:46,850 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-08-19 16:50:46,852 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Preconfiguration:
2020-08-19 16:50:46,852 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] -
JM_RESOURCE_PARAMS extraction logs:
jvm_params: -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456
logs: INFO [] - Loading configuration property: jobmanager.rpc.address, flink-jobmanager
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 2
INFO [] - Loading configuration property: blob.server.port, 6124
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123
INFO [] - Loading configuration property: taskmanager.rpc.port, 6122
INFO [] - Loading configuration property: queryable-state.proxy.ports, 6125
INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m
INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m
INFO [] - Loading configuration property: parallelism.default, 2
INFO [] - The derived from fraction jvm overhead memory (160.000mb (167772162 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
INFO [] - Final Master Memory configuration:
INFO [] - Total Process Memory: 1.563gb (1677721600 bytes)
INFO [] - Total Flink Memory: 1.125gb (1207959552 bytes)
INFO [] - JVM Heap: 1024.000mb (1073741824 bytes)
INFO [] - Off-heap: 128.000mb (134217728 bytes)
INFO [] - JVM Metaspace: 256.000mb (268435456 bytes)
INFO [] - JVM Overhead: 192.000mb (201326592 bytes)
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneApplicationClusterEntryPoint (Version: 1.11.0, Scala: 2.11, Rev:d04872d, Date:2020-06-29T16:13:14+02:00)
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - OS current user: flink
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Current Hadoop/Kerberos user: <no hadoop dependency found>
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.262-b10
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Maximum heap size: 989 MiBytes
2020-08-19 16:50:46,853 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JAVA_HOME: /usr/local/openjdk-8
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - No Hadoop Dependency available
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM Options:
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xmx1073741824
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xms1073741824
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:MaxMetaspaceSize=268435456
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog.file=/opt/flink/log/flink--standalonejob-0-flink-jobmanager-dfhtt.log
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-console.properties
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Program Arguments:
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --configDir
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - /opt/flink/conf
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --job-classname
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - com.job.ClassName
2020-08-19 16:50:46,854 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - <optional arguments>
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - <job arguments>
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Classpath: /opt/flink/lib/flink-csv-1.11.0.jar:/opt/flink/lib/flink-json-1.11.0.jar:/opt/flink/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/lib/flink-table-blink_2.11-1.11.0.jar:/opt/flink/lib/flink-table_2.11-1.11.0.jar:/opt/flink/lib/log4j-1.2-api-2.12.1.jar:/opt/flink/lib/log4j-api-2.12.1.jar:/opt/flink/lib/log4j-core-2.12.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.12.1.jar:/opt/flink/lib/flink-dist_2.11-1.11.0.jar:::
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-08-19 16:50:46,855 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Registered UNIX signal handlers for [TERM, HUP, INT]
2020-08-19 16:50:46,875 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Could not create application program.
org.apache.flink.util.FlinkException: Could not find the provided job class (com.job.ClassName) in the user lib directory (/opt/flink/usrlib).
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getJobClassNameOrScanClassPath(ClassPathPackagedProgramRetriever.java:140) ~[flink-dist_2.11-1.11.0.jar:1.11.0]
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getPackagedProgram(ClassPathPackagedProgramRetriever.java:123) ~[flink-dist_2.11-1.11.0.jar:1.11.0]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:110) ~[flink-dist_2.11-1.11.0.jar:1.11.0]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:78) [flink-dist_2.11-1.11.0.jar:1.11.0]
My questions are:
Why does the log report "Read-only file system" at startup, and what should I do to fix it?
Why does it show "Could not find the provided job class (com.job.ClassName) in the user lib directory (/opt/flink/usrlib)", and what should I do to fix it?
I have already tried on 2 Kubernetes clusters (a production v1.15 cluster and my local v1.18 cluster) and the error is the same. I have no clue how to fix it now.
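(Two hedged observations: the sed/"Permission denied" lines at the top come from docker-entrypoint.sh trying to rewrite flink-conf.yaml on the read-only ConfigMap mount, as the log itself shows. For the missing job class, the standalone-job entrypoint scans /opt/flink/usrlib for the job jar, so the jar must actually be present there; one way to guarantee that is to bake it into a custom image instead of relying on the hostPath mount — the jar path below is a placeholder:)
FROM flink:1.11.0-scala_2.11
COPY ./target/my-job.jar /opt/flink/usrlib/my-job.jar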

Multiple Orderer redundancy in Kafka based network

We're stuck configuring a Fabric network with 3 orgs (1 peer each) and 2 Kafka-based orderers. For Kafka ordering we use 4 Kafka nodes with 3 ZooKeepers. It's deployed on AWS EC2 instances as follows:
1: Org1
2: Org2
3: Org3
4: orderer0, orderer1, kafka0, kafka1, kafka2, kafka3, zookeeper0, zookeeper1, zookeeper2
All the ordering nodes plus the Kafka cluster are located on the same machine for connectivity reasons (I read somewhere they must be on the same machine to avoid these problems).
During our tests, we take down the first orderer (orderer0) with docker stop to test redundancy. We expected the network to continue working through orderer1, but instead it dies and stops working.
Looking at the peer's console, I can see some errors:
Could not connect to any of the endpoints: [orderer0.example.com:7050, orderer1.example.com:8050]
Attached below are the configuration files of the system.
Orderer + Kafka + ZK:
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
  zookeeper0.example.com:
    container_name: zookeeper0.example.com
    extends:
      file: docker-compose-base.yaml
      service: zookeeper0.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
  zookeeper1.example.com:
    container_name: zookeeper1.example.com
    extends:
      file: docker-compose-base.yaml
      service: zookeeper1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
  zookeeper2.example.com:
    container_name: zookeeper2.example.com
    extends:
      file: docker-compose-base.yaml
      service: zookeeper2.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
  kafka0.example.com:
    container_name: kafka0.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka0.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
  kafka1.example.com:
    container_name: kafka1.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka1.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
  kafka2.example.com:
    container_name: kafka2.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka2.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
  kafka3.example.com:
    container_name: kafka3.example.com
    extends:
      file: docker-compose-base.yaml
      service: kafka3.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - orderer0.example.com
      - orderer1.example.com
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer:x86_64-1.1.0
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_LISTEN_PORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt, /etc/hyperledger/crypto/peerOrg3/tls/ca.crt]
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./channel:/etc/hyperledger/configtx
      - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/:/etc/hyperledger/crypto/orderer
      - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/crypto/peerOrg1
      - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peerOrg2
      - ./channel/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/:/etc/hyperledger/crypto/peerOrg3
    depends_on:
      - kafka0.example.com
      - kafka1.example.com
      - kafka2.example.com
      - kafka3.example.com
  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer:x86_64-1.1.0
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_LISTEN_PORT=8050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt, /etc/hyperledger/crypto/peerOrg3/tls/ca.crt]
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
    command: orderer
    ports:
      - 8050:7050
    volumes:
      - ./channel:/etc/hyperledger/configtx
      - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/:/etc/hyperledger/crypto/orderer
      - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/crypto/peerOrg1
      - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peerOrg2
      - ./channel/crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/:/etc/hyperledger/crypto/peerOrg3
    depends_on:
      - kafka0.example.com
      - kafka1.example.com
      - kafka2.example.com
      - kafka3.example.com
Peer and CA from Org2:
#
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
  ca.org2.example.com:
    image: hyperledger/fabric-ca:x86_64-1.1.0
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/efa7d0819b7083f6c06eb34da414acbcde79f607b9ce26fb04dee60cf79a389a_sk
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/efa7d0819b7083f6c06eb34da414acbcde79f607b9ce26fb04dee60cf79a389a_sk
    ports:
      - "8054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./channel/crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg2
  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org2.example.com
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
    ports:
      - 8051:7051
      - 8053:7053
    volumes:
      - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peer
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
    extra_hosts:
      - "orderer0.example.com:xxx.xxx.xxx.xxx"
      - "orderer1.example.com:xxx.xxx.xxx.xxx"
      - "kafka0.example.com:xxx.xxx.xxx.xxx"
      - "kafka1.example.com:xxx.xxx.xxx.xxx"
      - "kafka2.example.com:xxx.xxx.xxx.xxx"
      - "kafka3.example.com:xxx.xxx.xxx.xxx"
Orderer base:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
  orderer-base:
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # kafka
      - CONFIGTX_ORDERER_ORDERERTYPE=kafka
      - CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka0.example.com,kafka1.example.com,kafka2.example.com,kafka3.example.com]
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
Kafka base:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
  zookeeper:
    image: hyperledger/fabric-zookeeper
    environment:
      - ZOO_SERVERS=server.1=zookeeper0.example.com:2888:3888 server.2=zookeeper1.example.com:2888:3888 server.3=zookeeper2.example.com:2888:3888
    restart: always
  kafka:
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "3"
configtx.yaml:
Organizations:
  - &OrdererOrg
    Name: OrdererMSP
    ID: OrdererMSP
    MSPDir: crypto-config/ordererOrganizations/example.com/msp
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    AnchorPeers:
      - Host: peer0.org1.example.com
        Port: 7051
  - &Org2
    Name: Org2MSP
    ID: Org2MSP
    MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
    AnchorPeers:
      - Host: peer0.org2.example.com
        Port: 7051
  - &Org3
    Name: Org3MSP
    ID: Org3MSP
    MSPDir: crypto-config/peerOrganizations/org3.example.com/msp
    AnchorPeers:
      - Host: peer0.org3.example.com
        Port: 7051
################################################################################
Orderer: &OrdererDefaults
  OrdererType: kafka
  Addresses:
    - orderer0.example.com:7050
    - orderer1.example.com:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 98 MB
    PreferredMaxBytes: 512 KB
  Kafka:
    Brokers:
      - kafka0.example.com:9092
      - kafka1.example.com:9092
      - kafka2.example.com:9092
      - kafka3.example.com:9092
  Organizations:
################################################################################
Application: &ApplicationDefaults
  Organizations:
################################################################################
Profiles:
  ThreeOrgsOrdererGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
          - *Org2
          - *Org3
  ThreeOrgsChannel:
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
        - *Org2
        - *Org3
Could it be a configuration error? Connection problems are almost ruled out, because running the same network on a local machine gives the same result.
Thanks in advance.
Regards
Finally got it running smoothly. It turns out the problem wasn't in the docker-compose files but in the version of the Fabric SDK used by the web service. I was using fabric-client and fabric-ca-client at version 1.1, and orderer failover was missing until 1.2. (More info: https://jira.hyperledger.org/browse/FABN-90)
Just to clarify: I was able to see transactions happening on both orderers because of the interconnection between them, but I was only hitting the first one. When that orderer went down, the network would go dark.
I now understand how Fabric deals with orderers: it points at the first orderer in the list, and if it is down or unreachable, it moves it to the bottom of the list and targets the next one. This is what happens since 1.2; in older versions you had to code your own error handler so that it switches to the next orderer.
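So the practical fix was just moving the web service's SDK to a version that has this failover, e.g. in its package.json (a sketch; these are the versions that introduced the behavior per FABN-90):
"dependencies": {
  "fabric-client": "~1.2.0",
  "fabric-ca-client": "~1.2.0"
}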
I'm not sure, but it could be because of different network layers. Since each compose file is separate, Docker creates a different network for each compose project.
Also, I don't see a network mentioned in the yaml files.
Please check the list of networks using "docker network list".
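If the services really are split across compose files, one way to keep them on a single network is to create it once on the host (docker network create my-fabric-net) and declare it external in every compose file; the network name here is illustrative:
networks:
  my-fabric-net:
    external: true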

How to set password for Traefik dashboard with CLI argument?

There's a manual here for that, but it's heavily tied to TOML. I need CLI arguments, as I'm running docker swarm with a Consul setup in high availability:
consul:
  image: consul
  command: agent -server -bootstrap-expect=1
  volumes:
    - consul-data:/consul/data
  environment:
    - CONSUL_LOCAL_CONFIG={"datacenter":"ams3","server":true}
    - CONSUL_BIND_INTERFACE=eth0
    - CONSUL_CLIENT_INTERFACE=eth0
  deploy:
    replicas: 1
    placement:
      constraints:
        - node.role == manager
    restart_policy:
      condition: on-failure
  networks:
    - traefik
proxy_init:
  image: traefik:1.6.3-alpine
  command: >
    storeconfig
    --api
    --entrypoints=Name:http Address::80 Redirect.EntryPoint:https
    --entrypoints=Name:api Address::8080 Auth.Basic.Users:test:$$apr1$$H6uskkkW$$IgXLP6ewTrSuBkTrqE8wj/ Auth.HeaderField:X-WebAuth-User
    --entrypoints=Name:https Address::443 TLS
    --defaultentrypoints=http,https
    --acme
    --acme.storage="traefik/acme/account"
    --acme.entryPoint=https
    --acme.httpChallenge.entryPoint=http
    --acme.onHostRule=true
    --acme.acmelogging=true
    --acme.onDemand=false
    --acme.caServer="https://acme-staging-v02.api.letsencrypt.org/directory"
    --acme.email="whatever@gmail.com"
    --docker
    --docker.swarmMode
    --docker.domain=swarm.xxx.io
    --docker.endpoint=unix://var/run/docker.sock
    --docker.watch
    --consul
    --consul.watch
    --consul.endpoint=consul:8500
    --consul.prefix=traefik
    --logLevel=DEBUG
    --accesslogsfile=/dev/stdout
  networks:
    - traefik
  deploy:
    placement:
      constraints:
        - node.role == manager
    restart_policy:
      condition: on-failure
  depends_on:
    - consul
proxy:
  image: traefik:1.6.3-alpine
  depends_on:
    - traefik_init
    - consul
  command: >
    --consul
    --consul.endpoint=consul:8500
    --consul.prefix=traefik
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - webgateway
    - traefik
  ports:
    - 80:80
    - 443:443
    - 8080:8080
  deploy:
    mode: replicated
    replicas: 2
    restart_policy:
      condition: on-failure
    placement:
      constraints:
        - node.role == manager
    update_config:
      parallelism: 1
      delay: 10s
You can also set labels for the Traefik container itself. Traefik can manage its own container, so you can set HTTP basic auth through a label just as you would for any other container. The only problem I've had is that the DNS challenge from the ACME client fails, but it works with self-signed certificates.
deploy:
  labels:
    - "traefik.docker.network=infra_traefik"
    - "traefik.port=8080"
    - "traefik.tags=monitoring"
    - "traefik.backend.loadbalancer.stickiness=true"
    - "traefik.frontend.passHostHeader=true"
    - "traefik.frontend.rule=Host:proxy01.swarm.lympo.io,proxy.swarm.lympo.io"
    - "traefik.frontend.auth.basic=admin:$$apr1$$Xv0Slw4m$$MqFgCq4Do83fcKIsPTDGu/"
  restart_policy:
    condition: on-failure
  placement:
    constraints:
      - node.role == manager
This is the configuration I use. I have two different entrypoints: ping (8082) and API/Dashboard (8081, with basic auth):
version: "3.4"
services:
traefik_init:
image: traefik:1.7.9
command:
- "storeconfig"
- "--api"
- "--api.entrypoint=foo"
- "--ping"
- "--ping.entrypoint=bar"
- "--accessLog"
- "--logLevel=INFO"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 TLS"
- "--entrypoints=Name:foo Address::8081 Auth.Basic.Users:admin:$$2a$$10$$i9SzMNSHJlab7zKH28z17uicrnXbHfIicWJVPanNBxf6aiNyoMare"
- "--entrypoints=Name:bar Address::8082"
- "--defaultentrypoints=http,https"
Warning: the $ character must be escaped with another $ in YAML.
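A convenient way to produce a hash that is already escaped for YAML (assuming htpasswd from apache2-utils is installed; replace admin/secret with your own credentials):
htpasswd -nb admin secret | sed -e 's/\$/\$\$/g'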