Hyperledger - MSP error: the supplied identity is not valid: x509: certificate signed by unknown authority - docker-compose

I am currently working from the Hyperledger fabric-samples. I've successfully run first-network and fabcar based on the available tutorials. I'm now trying to combine the two to make a network with 3 peers in a single org and use the Node SDK to query, etc. A repo of my current fabric-samples dir is available here. I've been able to use byfn.sh to build the network and to run enrollAdmin.js and registerUser.js. When attempting to query or invoke, I run into this issue:
Store path:/home/victor/fabric-samples/first-network/hfc-key-store
Successfully loaded user1 from persistence
Assigning transaction_id: bc0240f672d075de2f84d50b292ed0e2214dacc0ef2888d0fa7e25d872a99b03
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 2 UNKNOWN: access denied: channel [mychannel] creator org [Org1MSP]
at new createStatusError (/home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:64:15)
at /home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:583:15
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 2 UNKNOWN: access denied: channel [mychannel] creator org [Org1MSP]
at new createStatusError (/home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:64:15)
at /home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:583:15
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 2 UNKNOWN: access denied: channel [mychannel] creator org [Org1MSP]
at new createStatusError (/home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:64:15)
at /home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:583:15
HERE
Transaction proposal was bad
Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
Failed to invoke successfully :: Error: Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
Using docker logs I looked at the logs for one of the peers and found this:
2018-04-24 19:05:09.370 UTC [msp] getMspConfig -> INFO 001 Loading NodeOUs
2018-04-24 19:05:09.392 UTC [nodeCmd] serve -> INFO 002 Starting peer:
Version: 1.1.0
Go version: go1.9.2
OS/Arch: linux/amd64
Experimental features: false
Chaincode:
Base Image Version: 0.4.6
Base Docker Namespace: hyperledger
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2018-04-24 19:05:09.392 UTC [ledgermgmt] initialize -> INFO 003 Initializing ledger mgmt
2018-04-24 19:05:09.393 UTC [kvledger] NewProvider -> INFO 004 Initializing ledger provider
2018-04-24 19:05:12.811 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 005 Created state database _users
2018-04-24 19:05:13.215 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 006 Created state database _replicator
2018-04-24 19:05:14.086 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 007 Created state database _global_changes
2018-04-24 19:05:14.433 UTC [kvledger] NewProvider -> INFO 008 ledger provider Initialized
2018-04-24 19:05:14.433 UTC [ledgermgmt] initialize -> INFO 009 ledger mgmt initialized
2018-04-24 19:05:14.433 UTC [peer] func1 -> INFO 00a Auto-detected peer address: 172.18.0.9:7051
2018-04-24 19:05:14.433 UTC [peer] func1 -> INFO 00b Returning peer0.org1.example.com:7051
2018-04-24 19:05:14.433 UTC [peer] func1 -> INFO 00c Auto-detected peer address: 172.18.0.9:7051
2018-04-24 19:05:14.434 UTC [peer] func1 -> INFO 00d Returning peer0.org1.example.com:7051
2018-04-24 19:05:14.435 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00e Entering computeChaincodeEndpoint with peerHostname: peer0.org1.example.com
2018-04-24 19:05:14.436 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00f Exit with ccEndpoint: peer0.org1.example.com:7052
2018-04-24 19:05:14.436 UTC [nodeCmd] createChaincodeServer -> WARN 010 peer.chaincodeListenAddress is not set, using peer0.org1.example.com:7052
2018-04-24 19:05:14.436 UTC [eventhub_producer] start -> INFO 011 Event processor started
2018-04-24 19:05:14.437 UTC [chaincode] NewChaincodeSupport -> INFO 012 Chaincode support using peerAddress: peer0.org1.example.com:7052
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 013 system chaincode cscc(github.com/hyperledger/fabric/core/scc/cscc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 014 system chaincode lscc(github.com/hyperledger/fabric/core/scc/lscc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 015 system chaincode escc(github.com/hyperledger/fabric/core/scc/escc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 016 system chaincode vscc(github.com/hyperledger/fabric/core/scc/vscc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 017 system chaincode qscc(github.com/hyperledger/fabric/core/chaincode/qscc) registered
2018-04-24 19:05:14.440 UTC [gossip/service] func1 -> INFO 018 Initialize gossip with endpoint peer0.org1.example.com:7051 and bootstrap set [peer1.org1.example.com:7051]
2018-04-24 19:05:14.442 UTC [msp] DeserializeIdentity -> INFO 019 Obtaining identity
2018-04-24 19:05:14.444 UTC [gossip/discovery] NewDiscoveryService -> INFO 01a Started {peer0.org1.example.com:7051 [] [98 55 107 77 184 123 189 240 183 227 157 211 146 161 226 74 43 48 67 169 32 99 66 147 109 71 222 49 249 172 59 136] peer0.org1.example.com:7051 <nil>} incTime is 1524596714444440316
2018-04-24 19:05:14.444 UTC [gossip/gossip] NewGossipService -> INFO 01b Creating gossip service with self membership of {peer0.org1.example.com:7051 [] [98 55 107 77 184 123 189 240 183 227 157 211 146 161 226 74 43 48 67 169 32 99 66 147 109 71 222 49 249 172 59 136] peer0.org1.example.com:7051 <nil>}
2018-04-24 19:05:14.447 UTC [gossip/gossip] start -> INFO 01c Gossip instance peer0.org1.example.com:7051 started
2018-04-24 19:05:14.449 UTC [cscc] Init -> INFO 01d Init CSCC
2018-04-24 19:05:14.449 UTC [sccapi] deploySysCC -> INFO 01e system chaincode cscc/(github.com/hyperledger/fabric/core/scc/cscc) deployed
2018-04-24 19:05:14.449 UTC [sccapi] deploySysCC -> INFO 01f system chaincode lscc/(github.com/hyperledger/fabric/core/scc/lscc) deployed
2018-04-24 19:05:14.450 UTC [escc] Init -> INFO 020 Successfully initialized ESCC
2018-04-24 19:05:14.450 UTC [sccapi] deploySysCC -> INFO 021 system chaincode escc/(github.com/hyperledger/fabric/core/scc/escc) deployed
2018-04-24 19:05:14.450 UTC [sccapi] deploySysCC -> INFO 022 system chaincode vscc/(github.com/hyperledger/fabric/core/scc/vscc) deployed
2018-04-24 19:05:14.451 UTC [qscc] Init -> INFO 023 Init QSCC
2018-04-24 19:05:14.451 UTC [sccapi] deploySysCC -> INFO 024 system chaincode qscc/(github.com/hyperledger/fabric/core/chaincode/qscc) deployed
2018-04-24 19:05:14.451 UTC [nodeCmd] initSysCCs -> INFO 025 Deployed system chaincodes
2018-04-24 19:05:14.451 UTC [nodeCmd] serve -> INFO 026 Starting peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[peer0.org1.example.com:7051]
2018-04-24 19:05:14.452 UTC [nodeCmd] serve -> INFO 027 Started peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[peer0.org1.example.com:7051]
2018-04-24 19:05:14.452 UTC [nodeCmd] func7 -> INFO 028 Starting profiling server with listenAddress = 0.0.0.0:6060
2018-04-24 19:05:16.371 UTC [ledgermgmt] CreateLedger -> INFO 029 Creating ledger [mychannel] with genesis block
2018-04-24 19:05:16.409 UTC [fsblkstorage] newBlockfileMgr -> INFO 02a Getting block information from block storage
2018-04-24 19:05:16.757 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 02b Created state database mychannel_
2018-04-24 19:05:16.945 UTC [kvledger] CommitWithPvtData -> INFO 02c Channel [mychannel]: Committed block [0] with 1 transaction(s)
2018-04-24 19:05:17.557 UTC [ledgermgmt] CreateLedger -> INFO 02d Created ledger [mychannel] with genesis block
2018-04-24 19:05:17.626 UTC [cscc] Init -> INFO 02e Init CSCC
2018-04-24 19:05:17.627 UTC [sccapi] deploySysCC -> INFO 02f system chaincode cscc/mychannel(github.com/hyperledger/fabric/core/scc/cscc) deployed
2018-04-24 19:05:17.628 UTC [sccapi] deploySysCC -> INFO 030 system chaincode lscc/mychannel(github.com/hyperledger/fabric/core/scc/lscc) deployed
2018-04-24 19:05:17.628 UTC [escc] Init -> INFO 031 Successfully initialized ESCC
2018-04-24 19:05:17.628 UTC [sccapi] deploySysCC -> INFO 032 system chaincode escc/mychannel(github.com/hyperledger/fabric/core/scc/escc) deployed
2018-04-24 19:05:17.629 UTC [sccapi] deploySysCC -> INFO 033 system chaincode vscc/mychannel(github.com/hyperledger/fabric/core/scc/vscc) deployed
2018-04-24 19:05:17.629 UTC [qscc] Init -> INFO 034 Init QSCC
2018-04-24 19:05:17.629 UTC [sccapi] deploySysCC -> INFO 035 system chaincode qscc/mychannel(github.com/hyperledger/fabric/core/chaincode/qscc) deployed
2018-04-24 19:05:27.629 UTC [deliveryClient] try -> WARN 036 Got error: rpc error: code = Canceled desc = context canceled , at 1 attempt. Retrying in 1s
2018-04-24 19:05:27.629 UTC [blocksProvider] DeliverBlocks -> WARN 037 [mychannel] Receive error: Client is closing
2018-04-24 19:05:28.925 UTC [gossip/service] updateEndpoints -> WARN 038 Failed to update ordering service endpoints, due to Channel with mychannel id was not found
2018-04-24 19:05:29.302 UTC [kvledger] CommitWithPvtData -> INFO 039 Channel [mychannel]: Committed block [1] with 1 transaction(s)
2018-04-24 19:05:33.013 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 03a Created state database mychannel_lscc
2018-04-24 19:05:33.016 UTC [lscc] executeInstall -> INFO 03b Installed Chaincode [fabcar] Version [1.0] to peer
2018-04-24 19:05:34.803 UTC [golang-platform] GenerateDockerBuild -> INFO 03c building chaincode with ldflagsOpt: '-ldflags "-linkmode external -extldflags '-static'"'
2018-04-24 19:05:34.804 UTC [golang-platform] GenerateDockerBuild -> INFO 03d building chaincode with tags:
2018-04-24 19:06:06.351 UTC [cceventmgmt] HandleStateUpdates -> INFO 03e Channel [mychannel]: Handling LSCC state update for chaincode [fabcar]
2018-04-24 19:06:06.868 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 03f Created state database mychannel_fabcar
2018-04-24 19:06:07.278 UTC [kvledger] CommitWithPvtData -> INFO 040 Channel [mychannel]: Committed block [2] with 1 transaction(s)
2018-04-24 19:06:15.476 UTC [protoutils] ValidateProposalMessage -> WARN 041 channel [mychannel]: MSP error: the supplied identity is not valid: x509: certificate signed by unknown authority
2018-04-24 19:06:15.689 UTC [protoutils] ValidateProposalMessage -> WARN 042 channel [mychannel]: MSP error: the supplied identity is not valid: x509: certificate signed by unknown authority
These errors lead me to believe I have the certificates configured incorrectly, but searching for information on the issue hasn't been fruitful so far. How can I locate the source of this error? I'll post my docker-compose-cli.yaml here:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

volumes:
  orderer.example.com:
  ca.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer2.org1.example.com:

networks:
  byfn:

services:

  ca.example.com:
    image: hyperledger/fabric-ca:x86_64-1.1.0
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.example.com
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.example.com-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/4239aa0dcd76daeeb8ba0cda701851d14504d31aad1b2ddddbac6a57365e497c_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.example.com
    networks:
      - byfn

  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - byfn

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    depends_on:
      - orderer.example.com
      - couchdb0
    networks:
      - byfn

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    depends_on:
      - orderer.example.com
      - couchdb1
    networks:
      - byfn

  peer2.org1.example.com:
    container_name: peer2.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer2.org1.example.com
    depends_on:
      - orderer.example.com
      - couchdb2
    networks:
      - byfn

  couchdb0:
    container_name: couchdb0
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 5984:5984
    networks:
      - byfn

  couchdb1:
    container_name: couchdb1
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 6984:5984
    networks:
      - byfn

  couchdb2:
    container_name: couchdb2
    image: hyperledger/fabric-couchdb
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 7984:5984
    networks:
      - byfn

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer2.org1.example.com
    networks:
      - byfn

Your SDK is getting its certificate from a CA that is not configured properly.
Suggestions:
1. Check that your CA server is started with the correct pem certificate file.
2. Check that it uses the correct _sk (private key) file.
If you are using cryptogen, both of these files are generated inside the corresponding organisation's ca folder; provide the correct files when bootstrapping the CA and it will work fine.
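As a rough sketch, assuming the cryptogen layout used elsewhere in this question (adjust the org name and file paths to your setup; for SDK-enrolled identities the certificate to check is the one in your key-value store, e.g. hfc-key-store, rather than the cryptogen User1 cert):
ORG_CA_DIR=./crypto-config/peerOrganizations/org1.example.com/ca
ls $ORG_CA_DIR   # note the actual *_sk key and CA cert filenames
# rewrite the compose file so FABRIC_CA_SERVER_CA_KEYFILE matches the generated key
SK_FILE=$(basename $(ls $ORG_CA_DIR/*_sk))
sed -i "s#FABRIC_CA_SERVER_CA_KEYFILE=.*#FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/$SK_FILE#" docker-compose-cli.yaml
# verify that a user cert really chains up to this CA (should print "OK")
openssl verify -CAfile $ORG_CA_DIR/ca.org1.example.com-cert.pem \
    ./crypto-config/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem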

You must go to the container that is complaining about the certificates, open a terminal there, and add the CA's certificate to the system's trusted CA store, for example like this.
On Ubuntu:
1. Go to /usr/local/share/ca-certificates/
2. Create a new folder, e.g. sudo mkdir HyperledgerCerts
3. Copy the .crt file into the HyperledgerCerts folder
4. Make sure the permissions are OK (755 for the folder, 644 for the file)
5. Run sudo update-ca-certificates
This should solve the problem.
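Collected into one hedged shell sequence (the source certificate path is a placeholder for wherever your CA cert actually lives; update-ca-certificates only picks up files ending in .crt):
sudo mkdir -p /usr/local/share/ca-certificates/HyperledgerCerts
sudo cp /path/to/ca.example.com-cert.pem /usr/local/share/ca-certificates/HyperledgerCerts/ca.example.com.crt
sudo chmod 755 /usr/local/share/ca-certificates/HyperledgerCerts
sudo chmod 644 /usr/local/share/ca-certificates/HyperledgerCerts/ca.example.com.crt
sudo update-ca-certificates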
Hope this helps

Make sure that FABRIC_CA_CLIENT_HOME is set to the correct directory, especially when using fabric-ca-client outside docker containers.
For example, before calling fabric-ca-client register or fabric-ca-client enroll, you should set
export FABRIC_CA_CLIENT_HOME=/path/to/organizations/peerOrganizations/org1.example.com/
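For example, a minimal enroll/register sequence under that assumption (the URL, port, and admin:adminpw credentials match the non-TLS sample CA started above; adjust to your setup):
export FABRIC_CA_CLIENT_HOME=/path/to/organizations/peerOrganizations/org1.example.com/
# enroll the bootstrap admin first so its MSP lands under FABRIC_CA_CLIENT_HOME
fabric-ca-client enroll -u http://admin:adminpw@localhost:7054
# register a new identity, then enroll it into its own MSP directory
fabric-ca-client register --id.name user1 --id.secret user1pw --id.type client
fabric-ca-client enroll -u http://user1:user1pw@localhost:7054 -M /tmp/user1/msp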

Related

Docker compose with NestJS PostgreSQL not connecting

I have tried to dockerize a NestJS app with PostgreSQL. Postgres is refusing connections and also shows a log line saying database system was shut down at <<some timestamp>>. Here are my docker-compose.yml and the logs.
version: '3'
services:
  postgres:
    image: postgres
    restart: always
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_DB=gm
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=admin
      - PGADMIN_LISTEN_PORT=5050
    ports:
      - "5050:5050"
  api:
    image: gm-server
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      - .:/home/node
    ports:
      - '8081:4001'
    depends_on:
      - postgres
    env_file: .env
    command: npm run start:prod
volumes:
  pgdata:
server-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
server-postgres-1 |
server-postgres-1 | 2023-01-04 09:36:45.249 UTC [1] LOG: starting PostgreSQL 15.0 (Debian 15.0-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
server-postgres-1 | 2023-01-04 09:36:45.250 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
server-postgres-1 | 2023-01-04 09:36:45.250 UTC [1] LOG: listening on IPv6 address "::", port 5432
server-postgres-1 | 2023-01-04 09:36:45.255 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
server-postgres-1 | 2023-01-04 09:36:45.261 UTC [29] LOG: database system was shut down at 2023-01-04 09:36:27 UTC
server-postgres-1 | 2023-01-04 09:36:45.274 UTC [1] LOG: database system is ready to accept connections
server-api-1 |
server-api-1 | > nestjs-app#0.0.1 start:prod
server-api-1 | > node dist/main
server-api-1 |
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [NestFactory] Starting Nest application...
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] MulterModule dependencies initialized +61ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] MulterModule dependencies initialized +1ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +0ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] AppModule dependencies initialized +1ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] ConfigModule dependencies initialized +0ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM ERROR [ExceptionHandler] connect ECONNREFUSED 127.0.0.1:5432
server-api-1 | Error: connect ECONNREFUSED 127.0.0.1:5432
server-api-1 | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1300:16)
server-api-1 exited with code 1
And I have also tried most of the relevant answers (before Stack Overflow users mark this as a duplicate) and they didn't work. Yes, I have tried changing the host to host.docker.internal as suggested in the previous answers.
For more clarity, here is my TypeORM data source config in NestJS:
import { DataSource } from 'typeorm';

export const typeOrmConnectionDataSource = new DataSource({
  type: 'postgres',
  host: 'host.docker.internal',
  port: 5432,
  username: 'postgres',
  password: 'postgres',
  database: 'gm',
  entities: [__dirname + '/**/*.entity{.ts,.js}'],
  migrations: [__dirname + '/migrations/**/*{.ts,.js}'],
  logging: true,
  synchronize: false,
  migrationsRun: false,
});
Why is this problem different from the other "duplicate" questions?
This problem is different because:
the other threads don't actually solve the issue;
even if we assume they do, their solutions didn't work for me.
More evidence?
I have tried all the solutions for this one.
Another solution which didn't work.
Another one, where I even followed the chat.
Apparently the problem lay in NestJS not compiling properly because of my customizations to its scripts. Check whether that's an issue for you.
After fixing this issue, just follow the instructions to use "postgres" as the host, and it will work (if you are facing the same issue).
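If you hit the same symptom, a quick hedged check from the host (service names as in the compose file above) to confirm Postgres is reachable by service name inside the compose network:
# is Postgres up and accepting connections?
docker compose exec postgres pg_isready -U postgres
# can the api container resolve and reach the 'postgres' service by name?
docker compose exec api getent hosts postgres
docker compose exec api bash -c 'echo > /dev/tcp/postgres/5432 && echo reachable'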

Postgres not accepting connection in Docker

I am running the project with Docker Toolbox on Windows 7 SP1. The project does not give any error, but because Postgres is not accepting connections the whole project is not functional.
The Docker Compose file is:
version: '3'
services:
  postgres:
    image: 'postgres:latest'
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: "db"
      POSTGRES_PASSWORD: postgres_password
      POSTGRES_HOST_AUTH_METHOD: "trust"
      DATABASE_URL: postgresql://postgres:p3wrd@postgres:5432/postgres
    deploy:
      restart_policy:
        condition: on-failure
        window: 15m
  redis:
    image: 'redis:latest'
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    depends_on:
      - "postgres"
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - ./server/copy:/usr/src/app/data
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  client:
    depends_on:
      - "postgres"
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - ./client/copy:/usr/src/app/data
      - /usr/src/app/node_modules
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - ./worker/copy:/usr/src/app/data
      - /usr/src/app/node_modules
    depends_on:
      - "postgres"
But when I run the project I get this:
redis_1 | 1:C 29 May 2020 05:07:37.909 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 29 May 2020 05:07:37.910 # Redis version=6.0.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 29 May 2020 05:07:37.911 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 29 May 2020 05:07:37.922 * Running mode=standalone, port=6379.
redis_1 | 1:M 29 May 2020 05:07:37.928 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 29 May 2020 05:07:37.929 # Server initialized
redis_1 | 1:M 29 May 2020 05:07:37.929 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 29 May 2020 05:07:37.933 * Loading RDB produced by version 6.0.3
redis_1 | 1:M 29 May 2020 05:07:37.934 * RDB age 8 seconds
redis_1 | 1:M 29 May 2020 05:07:37.934 * RDB memory usage when created 0.81 Mb
redis_1 | 1:M 29 May 2020 05:07:37.934 * DB loaded from disk: 0.001 seconds
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2020-05-29 05:07:38.927 UTC [1] LOG: starting PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-29 05:07:38.928 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-05-29 05:07:38.929 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-05-29 05:07:38.933 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-05-29 05:07:38.993 UTC [24] LOG: database system was shut down at 2020-05-29 05:07:29 UTC
api_1 |
api_1 | > # dev /usr/src/app
api_1 | > nodemon
api_1 |
api_1 | [nodemon] 1.18.3
api_1 | [nodemon] to restart at any time, enter `rs`
api_1 | [nodemon] watching: *.*
The same error appears with or without data volumes, and the project does not run. Please help.
I would assume that once the API starts, it attempts to connect to Postgres. If so, this is a typical error many Docker developers experience: the Postgres DB is not yet ready to accept connections while apps are already trying to connect to it.
You can solve the problem by trying either of the following approaches:
Make your API layer wait for a certain amount of time (enough for the Postgres DB to boot up), e.g. with a crude sleep before starting the app:
sleep 60   # should be enough for the Postgres DB to start
Implement a retry mechanism that waits, say, 10 seconds every time the connection fails to establish (see the sketch below).
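A hedged wrapper-script version of the retry idea, using pg_isready from the Postgres client tools (the script name, host, and credentials are placeholders):
#!/bin/sh
# wait-for-postgres.sh -- retry until the DB accepts connections, then start the app
until pg_isready -h postgres -p 5432 -U postgres; do
    echo "Postgres is unavailable - retrying in 10s"
    sleep 10
done
exec npm run start:prod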
If this doesn't work, I would recommend checking whether a Postgres instance installed outside the container already owns the port you are trying to access.
Along with Allan Chua's answer, please declare a startup dependency on the Postgres service in your docker-compose file:
depends_on:
  - postgres
Add this to your api service.

Weird behavior on channel creation with Kafka ordering service on Hyperledger Fabric

So I was creating a channel using the cli container with the command peer channel create -f config/allarewelcome.tx -o orderer.example.com:7050 -c allarewelcome and I got these messages. The channel ends up being created, as you can see, but I wonder what is happening. Does anyone have an idea?
UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
UTC [cli.common] readBlock -> INFO 002 Got status: &{SERVICE_UNAVAILABLE}
UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
UTC [cli.common] readBlock -> INFO 004 Got status: &{SERVICE_UNAVAILABLE}
UTC [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
UTC [cli.common] readBlock -> INFO 006 Got status: &{SERVICE_UNAVAILABLE}
UTC [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
UTC [cli.common] readBlock -> INFO 008 Got status: &{SERVICE_UNAVAILABLE}
UTC [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
UTC [cli.common] readBlock -> INFO 00a Received block: 0
Here are my Kafka and ZooKeeper definitions from my docker-compose.yaml file:
kafka0.example.com:
  container_name: kafka0.example.com
  image: hyperledger/fabric-kafka
  restart: always
  environment:
    - KAFKA_MESSAGE_MAX_BYTES=103809024
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=2
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    - KAFKA_BROKER_ID=1
  ports:
    - 9092:9092
    - 9093:9093
  networks:
    - basic
  depends_on:
    - zookeeper1.example.com
    - zookeeper2.example.com
    - zookeeper3.example.com

kafka1.example.com:
  container_name: kafka1.example.com
  image: hyperledger/fabric-kafka
  restart: always
  environment:
    - KAFKA_MESSAGE_MAX_BYTES=103809024
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=2
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    - KAFKA_BROKER_ID=2
  ports:
    - 10092:9092
    - 10093:9093
  networks:
    - basic
  depends_on:
    - zookeeper1.example.com
    - zookeeper2.example.com
    - zookeeper3.example.com

zookeeper1.example.com:
  container_name: zookeeper0.example.com
  image: hyperledger/fabric-zookeeper
  environment:
    - ZOO_MY_ID=1
    - ZOO_SERVERS=server.1=zookeeper1.example.com:2888:3888 server.2=zookeeper2.example.com:2888:3888 server.3=zookeeper3.example.com:2888:3888
  ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
  networks:
    - basic

zookeeper2.example.com:
  container_name: zookeeper1.example.com
  image: hyperledger/fabric-zookeeper
  environment:
    - ZOO_MY_ID=2
    - ZOO_SERVERS=server.1=zookeeper1.example.com:2888:3888 server.2=zookeeper2.example.com:2888:3888 server.3=zookeeper3.example.com:2888:3888
  ports:
    - 12181:2181
    - 12888:2888
    - 13888:3888
  networks:
    - basic

zookeeper3.example.com:
  container_name: zookeeper2.example.com
  image: hyperledger/fabric-zookeeper
  environment:
    - ZOO_MY_ID=3
    - ZOO_SERVERS=server.1=zookeeper1.example.com:2888:3888 server.2=zookeeper2.example.com:2888:3888 server.3=zookeeper3.example.com:2888:3888
  ports:
    - 22181:2181
    - 22888:2888
    - 23888:3888
  networks:
    - basic
And here are the logs from the orderer:
UTC [fsblkstorage] newBlockfileMgr -> INFO 014 Getting block information from block storage
UTC [comm.grpc.server] 1 -> INFO 015 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.0.18:59044 grpc.code=OK grpc.call_duration=107.803049ms
UTC [orderer.consensus.kafka] newChain -> INFO 016 [channel: allarewelcome] Starting chain with last persisted offset -3 and last recorded block [0]
UTC [orderer.commmon.multichannel] newChain -> INFO 017 Created and starting new chain allarewelcome
UTC [orderer.consensus.kafka] setupTopicForChannel -> INFO 018 [channel: allarewelcome] Setting up the topic for this channel...
UTC [common.deliver] deliverBlocks -> WARN 019 [channel: allarewelcome] Rejecting deliver request for 192.168.0.18:59042 because of consenter error
UTC [comm.grpc.server] 1 -> INFO 01a streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.0.18:59042 grpc.code=OK grpc.call_duration=112.653714ms
UTC [orderer.consensus.kafka] setupProducerForChannel -> INFO 01b [channel: allarewelcome] Setting up the producer for this channel...
2020-01-08 14:56:58.253 UTC [orderer.consensus.kafka] startThread -> INFO 01c [channel: allarewelcome] Producer set up successfully
UTC [orderer.consensus.kafka] sendConnectMessage -> INFO 01d [channel: allarewelcome] About to post the CONNECT message...
UTC [common.deliver] deliverBlocks -> WARN 01e [channel: allarewelcome] Rejecting deliver request for 192.168.0.18:59046 because of consenter error
UTC [comm.grpc.server] 1 -> INFO 01f streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.0.18:59046 grpc.code=OK grpc.call_duration=186.949863ms
UTC [common.deliver] deliverBlocks -> WARN 020 [channel: allarewelcome] Rejecting deliver request for 192.168.0.18:59052 because of consenter error
UTC [comm.grpc.server] 1 -> INFO 021 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.0.18:59052 grpc.code=OK grpc.call_duration=201.05081ms
UTC [common.deliver] deliverBlocks -> WARN 022 [channel: allarewelcome] Rejecting deliver request for 192.168.0.18:59054 because of consenter error
UTC [comm.grpc.server] 1 -> INFO 023 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.0.18:59054 grpc.code=OK grpc.call_duration=201.212849ms
UTC [orderer.consensus.kafka] startThread -> INFO 024 [channel: allarewelcome] CONNECT message posted successfully
UTC [orderer.consensus.kafka] setupParentConsumerForChannel -> INFO 025 [channel: allarewelcome] Setting up the parent consumer for this channel...
UTC [orderer.consensus.kafka] startThread -> INFO 026 [channel: allarewelcome] Parent consumer set up successfully
UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 027 [channel: allarewelcome] Setting up the channel consumer for this channel (start offset: -2)...
UTC [orderer.consensus.kafka] startThread -> INFO 028 [channel: allarewelcome] Channel consumer set up successfully
UTC [orderer.consensus.kafka] startThread -> INFO 029 [channel: allarewelcome] Start phase completed successfully
UTC [comm.grpc.server] 1 -> INFO 02a streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.0.18:59056 grpc.code=OK grpc.call_duration=203.740188ms
Thanks in advance!
You can get this error if the channel has already been created.
In this case try:
./byfn.sh -m restart -c channelName
if you are using byfn; otherwise just restart the network, clearing its data (a sketch follows).
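A hedged sketch of the manual teardown (the compose file name and filters depend on your project; the dev-peer* containers and images cache chaincode built against the old channel state):
docker-compose -f docker-compose.yaml down --volumes --remove-orphans
# remove leftover chaincode containers and images as well
docker rm -f $(docker ps -aq --filter name=dev-peer) 2>/dev/null
docker rmi -f $(docker images -q --filter reference='dev-peer*') 2>/dev/null
docker-compose -f docker-compose.yaml up -d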
Also, be sure that when you create the network and the channel, you are passing the channel name to peer channel create.
Anyway, this could also be an error related to Kafka. I personally got this message too when creating the network, even though I am not using Kafka, but I remember reading something about this issue being related to a missing Kafka environment variable.
Consider that these are warnings, so you can proceed; your channel seems to be created anyway.

"Principal deserialization failure" when attempting to register a new channel on my blank hyperledger network

I'm trying to set up a kafka based hyperledger configuration with a single organization.
I have a working Docker network that allows the containers to communicate. The kafka/zookeeper setup also seems to be working fine; the orderer logs show it recognizes all four kafka nodes. The issue I'm having occurs when trying to create a new channel on an otherwise blank setup.
The commands I run to bootstrap the network are:
cryptogen generate --config=crypto-config.yaml
configtxgen -profile MyFakeOrgGenesis -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile Channel1 -outputCreateChannelTx ./channel-artifacts/channel1.tx -channelID channel1
configtxgen -profile Channel2 -outputCreateChannelTx ./channel-artifacts/channel2.tx -channelID channel2
# has to be done to update the CA certificate filename
SK_FILE=$(basename $(ls ./crypto-config/peerOrganizations/myfakeorg.si/ca/*_sk))
sed -i "s#FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/.*#FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/$SK_FILE#" dockers/docker-compose.yml
docker-compose -f dockers/kafka-compose.yml up -d
docker-compose -f dockers/docker-compose.yml up -d
Then, inside the cli container, I run:
peer channel create -o orderer.myfakeorg.si:7050 channel1 -f /etc/hyperledger/channel-artifacts/channel1.tx --cafile $PWD/crypto/ordererOrganizations/myfakeorg.si/msp/tlscacerts/tlsca.myfakeorg.si-cert.pem -c channel1
This invariably fails with:
2018-05-03 12:34:56.518 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-03 12:34:56.518 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-05-03 12:34:56.529 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-05-03 12:34:56.529 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2018-05-03 12:34:56.531 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2018-05-03 12:34:56.532 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2018-05-03 12:34:56.532 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2018-05-03 12:34:56.532 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0AFE050A084C756369734D535012F105...0A0E47445052436F6E736F727469756D
2018-05-03 12:34:56.533 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: CBAC9B6A3A06426802DFB81D1196DCF28534A52BA369095151BE6DB954A6A7E3
2018-05-03 12:34:56.534 UTC [msp] GetLocalMSP -> DEBU 00a Returning existing local MSP
2018-05-03 12:34:56.534 UTC [msp] GetDefaultSigningIdentity -> DEBU 00b Obtaining default signing identity
2018-05-03 12:34:56.534 UTC [msp] GetLocalMSP -> DEBU 00c Returning existing local MSP
2018-05-03 12:34:56.534 UTC [msp] GetDefaultSigningIdentity -> DEBU 00d Obtaining default signing identity
2018-05-03 12:34:56.535 UTC [msp/identity] Sign -> DEBU 00e Sign: plaintext: 0AB4060A1408021A0608F083ACD70522...C96F9F62C4F53749F47ECFC3A16F88F4
2018-05-03 12:34:56.535 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: 388F95E877FB3442AA20001CC212777DB2BC38F70799B9D463C19488B64C2B77
Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
As these errors are purposefully vague, as another answer states, here's the parallel log from the orderer container:
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 158 Manager Channel looking up path []
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 159 Manager Channel has managers Application
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 15a Manager Channel has managers Orderer
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 15b Manager Channel looking up path [Application]
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 15c Manager Channel has managers Application
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 15d Manager Channel has managers Orderer
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 15e Manager Channel/Application looking up path []
2018-05-03 11:56:41.344 UTC [policies] Manager -> DEBU 15f Manager Channel/Application has managers MyFakeOrg
2018-05-03 11:56:41.344 UTC [policies] Evaluate -> DEBU 160 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy ==
2018-05-03 11:56:41.345 UTC [policies] Evaluate -> DEBU 161 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
2018-05-03 11:56:41.345 UTC [policies] Evaluate -> DEBU 162 == Evaluating *cauthdsl.policy Policy /Channel/Application/MyFakeOrg/Admins ==
2018-05-03 11:56:41.345 UTC [msp] DeserializeIdentity -> INFO 163 Obtaining identity
2018-05-03 11:56:41.345 UTC [msp/identity] newIdentity -> DEBU 164 Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICADCCAaegAwIBAgIQH+Mp3PmmcdYcZoN8XriCajAKBggqhkjOPQQDAjBjMQsw
CQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy
YW5jaXNjbzERMA8GA1UEChMIbHVjaXMuc2kxFDASBgNVBAMTC2NhLmx1Y2lzLnNp
MB4XDTE4MDUwMzExNTAzMVoXDTI4MDQzMDExNTAzMVowUzELMAkGA1UEBhMCVVMx
EzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBGcmFuY2lzY28xFzAV
BgNVBAMMDkFkbWluQGx1Y2lzLnNpMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE
LsW17T+/PpQgw18Ay1AvP7UAzgq69Sy7wb1GenjldLY9Hl8t7MhsIFBbLFkKmFNt
C46y7GzQxE8i+0MmLEHQ4qNNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQC
MAAwKwYDVR0jBCQwIoAgMvtJKVZDbuxG0YsjgcPEL/VcD9eFFsHjZ00YvQ/r7ccw
CgYIKoZIzj0EAwIDRwAwRAIgBz1Sy+XFOqO7jv9qUh6q8KPkHnmtNu3Ru2E3sMHG
rGoCIHm9wfwK5jm2bsZUvYNvV0Skch2swe7Fc+EBlwQsVbKR
-----END CERTIFICATE-----
2018-05-03 11:56:41.346 UTC [cauthdsl] deduplicate -> ERRO 165 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.myfakeorg.si")) for identity 0a084c756369734d535012f1052d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d49494341444343416165674177494241674951482b4d7033506d6d636459635a6f4e3858726943616a414b42676771686b6a4f50515144416a426a4d5173770a435159445651514745774a56557a45544d4245474131554543424d4b5132467361575a76636d3570595445574d4251474131554542784d4e5532467549455a790a5957356a61584e6a627a45524d4138474131554543684d496248566a61584d7563326b784644415342674e5642414d5443324e684c6d783159326c7a4c6e4e700a4d423458445445344d4455774d7a45784e54417a4d566f58445449344d44517a4d4445784e54417a4d566f77557a454c4d416b474131554542684d4356564d780a457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e6862694247636d467559326c7a59323878467a41560a42674e5642414d4d446b466b62576c755147783159326c7a4c6e4e704d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a304441516344516741450a4c73573137542b2f507051677731384179314176503755417a6771363953793777623147656e6a6c644c5939486c3874374d6873494642624c466b4b6d464e740a4334367937477a51784538692b304d6d4c45485134714e4e4d45737744675944565230504151482f42415144416765414d41774741315564457745422f7751430a4d4141774b7759445652306a42435177496f41674d76744a4b565a44627578473059736a676350454c2f5663443965464673486a5a30305976512f72376363770a436759494b6f5a497a6a3045417749445277417752414967427a3153792b58464f714f376a76397155683671384b506b486e6d744e75335275324533734d48470a72476f4349486d397766774b356a6d3262735a5576594e765630536b6368327377653746632b45426c77517356624b520a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a
2018-05-03 11:56:41.346 UTC [cauthdsl] func1 -> DEBU 166 0xc42000ee48 gate 1525348601346256695 evaluation starts
2018-05-03 11:56:41.346 UTC [cauthdsl] func2 -> DEBU 167 0xc42000ee48 signed by 0 principal evaluation starts (used [false])
2018-05-03 11:56:41.346 UTC [cauthdsl] func2 -> DEBU 168 0xc42000ee48 principal evaluation fails
2018-05-03 11:56:41.346 UTC [cauthdsl] func1 -> DEBU 169 0xc42000ee48 gate 1525348601346256695 evaluation fails
2018-05-03 11:56:41.346 UTC [policies] Evaluate -> DEBU 16a Signature set did not satisfy policy /Channel/Application/MyFakeOrg/Admins
2018-05-03 11:56:41.346 UTC [policies] Evaluate -> DEBU 16b == Done Evaluating *cauthdsl.policy Policy /Channel/Application/MyFakeOrg/Admins
2018-05-03 11:56:41.346 UTC [policies] func1 -> DEBU 16c Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ MyFakeOrg.Admins ]
2018-05-03 11:56:41.347 UTC [policies] Evaluate -> DEBU 16d Signature set did not satisfy policy /Channel/Application/ChannelCreationPolicy
2018-05-03 11:56:41.347 UTC [policies] Evaluate -> DEBU 16e == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy
And all the relevant configuration files:
configtx.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
################################################################################
#
# Profile
#
# - Different configuration profiles may be encoded here to be specified
# as parameters to the configtxgen tool
#
################################################################################
Profiles:

    MyFakeOrgGenesis:
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *MyFakeOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            ABCDConsortium:
                Organizations:
                    - *MyFakeOrg

    Channel1:
        Consortium: ABCDConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *MyFakeOrg

    Channel2:
        Consortium: ABCDConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *MyFakeOrg

################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:

    # SampleOrg defines an MSP using the sampleconfig. It should never be used
    # in production but may be used as a template for other definitions
    - &MyFakeOrg
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: MyFakeOrg

        # ID to load the MSP definition as
        ID: MyFakeOrgMSP

        # MSPDir is the filesystem path which contains the MSP configuration
        MSPDir: crypto-config/ordererOrganizations/myfakeorg.si/msp

################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: kafka

    Addresses:
        - orderer.myfakeorg.si:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 4s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 1000

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 99 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 4096 KB

    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects
        # NOTE: Use IP:port notation
        Brokers:
            - kafka0:9092
            - kafka1:9092
            - kafka2:9092
            - kafka3:9092

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:
        - *MyFakeOrg

Capabilities:
    Global: &ChannelCapabilities
        V1_1: true

    Orderer: &OrdererCapabilities
        V1_1: true
crypto-config.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: MyFakeOrg
    Domain: myfakeorg.si
    # -------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # -------------------------------------------------------------------------
    Specs:
      - Hostname: orderer
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: MyFakeOrg
    Domain: myfakeorg.si
    # -------------------------------------------------------------------------
    # "Specs"
    # -------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration. Most users will want to use Template, below
    #
    # Specs is an array of Spec entries. Each Spec entry consists of two fields:
    #   - Hostname: (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #     the CN. By default, this is the template:
    #
    #     "{{.Hostname}}.{{.Domain}}"
    #
    #     which obtains its values from the Spec.Hostname and
    #     Org.Domain, respectively.
    # -------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    # -------------------------------------------------------------------------
    # "Template"
    # -------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive. You may define both
    # sections and the aggregate nodes will be created for you. Take care with
    # name collisions
    # -------------------------------------------------------------------------
    Template:
      Count: 1
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # -------------------------------------------------------------------------
    # "Users"
    # -------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # -------------------------------------------------------------------------
    Users:
      Count: 0
docker-compose.yml:
#
# Copyright IBM Corp All Rights Reserved
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

networks:
  default:
    external:
      name: hledger-myfakeorg

services:

  ca.myfakeorg.si:
    image: hyperledger/fabric-ca:x86_64-1.1.0
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.myfakeorg.si
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.myfakeorg.si-cert.pem
      - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/32fb492956436eec46d18b2381c3c42ff55c0fd78516c1e3674d18bd0febedc7_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./../crypto-config/peerOrganizations/myfakeorg.si/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.myfakeorg.si
    networks:
      - default

  orderer.myfakeorg.si:
    container_name: orderer.myfakeorg.si
    image: hyperledger/fabric-orderer:x86_64-1.1.0
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=MyFakeOrgMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./../channel-artifacts/:/etc/hyperledger/configtx
      - ./../crypto-config/ordererOrganizations/myfakeorg.si/orderers/orderer.myfakeorg.si/:/etc/hyperledger/msp/orderer
      - ./../crypto-config/peerOrganizations/myfakeorg.si/peers/peer0.myfakeorg.si/:/etc/hyperledger/msp/peer0
    networks:
      - default

  peer0.myfakeorg.si:
    container_name: peer0.myfakeorg.si
    image: hyperledger/fabric-peer:x86_64-1.1.0
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.myfakeorg.si
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_LOCALMSPID=MyFakeOrgMSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.myfakeorg.si:7051
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_default
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
      # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
      # provide the credentials for ledger to connect to CouchDB. The username and password must
      # match the username and password set for the associated CouchDB.
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start
    # command: peer node start --peer-chaincodedev=true
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./../crypto-config/peerOrganizations/myfakeorg.si/peers/peer0.myfakeorg.si/msp:/etc/hyperledger/msp/peer
      - ./../crypto-config/peerOrganizations/myfakeorg.si/users:/etc/hyperledger/msp/users
      - ./../channel-artifacts/:/etc/hyperledger/configtx
    depends_on:
      - orderer.myfakeorg.si
      - couchdb
    networks:
      - default

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb:x86_64-0.4.7
    # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
    # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
    environment:
      - COUCHDB_USER=
      - COUCHDB_PASSWORD=
    ports:
      - 5984:5984
    networks:
      - default

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:x86_64-1.1.0
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.myfakeorg.si:7051
      - CORE_PEER_LOCALMSPID=MyFakeOrgMSP
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/myfakeorg.si/users/Admin@myfakeorg.si/msp
      - CORE_CHAINCODE_KEEPALIVE=10
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - ./../../chaincode/:/opt/gopath/src/github.com/
      - ./../crypto-config/:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./../channel-artifacts:/etc/hyperledger/channel-artifacts
    networks:
      - default
    #depends_on:
    #  - orderer.myfakeorg.si
    #  - peer0.myfakeorg.si
    #  - couchdb
kafka-compose-base.yml:
version: '2'

services:
  zookeeper:
    image: hyperledger/fabric-zookeeper
    ports:
      - 2181
      - 2888
      - 3888

  kafka:
    image: hyperledger/fabric-kafka
    environment:
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_DEFAULT_REPLICATION_FACTOR=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
    ports:
      - 9092
kafka-compose.yml:
version: '2'

networks:
  default:
    external:
      name: hledger-myfakeorg

services:
  zookeeper0:
    extends:
      file: kafka-compose-base.yml
      service: zookeeper
    container_name: zookeeper0
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      - default

  zookeeper1:
    extends:
      file: kafka-compose-base.yml
      service: zookeeper
    container_name: zookeeper1
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      - default

  zookeeper2:
    extends:
      file: kafka-compose-base.yml
      service: zookeeper
    container_name: zookeeper2
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    networks:
      - default

  kafka0:
    extends:
      file: kafka-compose-base.yml
      service: kafka
    container_name: kafka0
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
    networks:
      - default

  kafka1:
    extends:
      file: kafka-compose-base.yml
      service: kafka
    container_name: kafka1
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
    networks:
      - default

  kafka2:
    extends:
      file: kafka-compose-base.yml
      service: kafka
    container_name: kafka2
    environment:
      - KAFKA_BROKER_ID=2
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
    networks:
      - default

  kafka3:
    extends:
      file: kafka-compose-base.yml
      service: kafka
    container_name: kafka3
    environment:
      - KAFKA_BROKER_ID=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
    networks:
      - default
There are two specifics I can think of regarding this network.
There is only one "organization"; or rather, the peer and orderer organizations are one and the same (the cryptographic material is seemingly still generated twice).
There are no anchor peers declared for the channels. This feature, as far as I understand it, is intended for cross-org communication, which isn't necessary in this case.
Could either of these two architectural decisions be the cause of my problem? Or is there some other technical issue? A similar configuration worked in "solo" mode.
Any help would be greatly appreciated.
There's an issue with your admin certificate: upon decoding it, it is clear that the admin certificate you're using to create the channel is signed by some other CA (ca.lucis.si, lucis.si) not known to the ordering service and the other components, hence the error saying the certificate is signed by an unknown authority. Channel creation is done via a transaction through one of the system chaincodes, so the identity of the transaction submitter is validated, and its certificate must be signed by a certificate authority registered with the network; in your case the certificate is issued/signed by an unregistered certificate authority. That's why you can't create the channel.
You can decode any PEM-encoded certificate to see who signed it using an online decoder such as This Online Cert Decoder. Just paste your PEM-encoded certificate there and you'll see the issuer information, which confirms that the issuing CA is unknown to your network, leading to the failure in channel creation.
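Equivalently, a hedged local check with openssl (the paths assume the cryptogen layout shown in this question; adjust to your tree):
# print who issued the admin cert vs. who it claims to be
openssl x509 -noout -issuer -subject \
    -in crypto-config/peerOrganizations/myfakeorg.si/users/Admin@myfakeorg.si/msp/signcerts/Admin@myfakeorg.si-cert.pem
# verify it chains to the org CA the network actually trusts (should print "OK")
openssl verify -CAfile crypto-config/peerOrganizations/myfakeorg.si/ca/ca.myfakeorg.si-cert.pem \
    crypto-config/peerOrganizations/myfakeorg.si/users/Admin@myfakeorg.si/msp/signcerts/Admin@myfakeorg.si-cert.pem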

Hyperledger-fabric not working with docker swarm

I'm using the Fabric 1.1 alpha release and trying to set it up using Docker Swarm. I'm using docker compose files with docker stack to deploy the containers.
The issue I'm having is that my chaincode listening port, 7052, which is hardcoded somewhere in the peer container, is not accepting connections under Docker Swarm.
The same compose file with minor changes works if I don't use Docker Swarm.
I'm not sure if it's something wrong with the peer itself or with docker swarm.
This is from my peer container, which clearly is not allowing any connections on port 7052:
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# telnet 10.0.0.6 7051
Trying 10.0.0.6...
Connected to 10.0.0.6.
Escape character is '^]'.
^CConnection closed by foreign host.
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# telnet 10.0.0.6 7052
Trying 10.0.0.6...
telnet: Unable to connect to remote host: Connection refused
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# telnet 10.0.0.6 7053
Trying 10.0.0.6...
Connected to 10.0.0.6.
Escape character is '^]'.
^CConnection closed by foreign host.
But it is listening on port 7052 inside the container:
root@71c1b8f22052:/opt/gopath/src/github.com/hyperledger/fabric/peer# netstat -nalp | grep 7052
tcp        0      0 10.0.0.6:7052      0.0.0.0:*      LISTEN      7/peer
I'm getting this in my chaincode container logs when I'm instantiating the chaincode:
2018-02-06 09:45:11.886 UTC [bccsp] initBCCSP -> DEBU 001 Initialize BCCSP [SW]
2018-02-06 09:45:11.906 UTC [grpc] Printf -> DEBU 002 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.0.10:7052: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7052 <nil>}
2018-02-06 09:45:12.905 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.0.10:7052: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7052 <nil>}
2018-02-06 09:45:14.612 UTC [grpc] Printf -> DEBU 004 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.0.0.10:7052: getsockopt: connection refused"; Reconnecting to {peer0.org1.example.com:7052 <nil>}
2018-02-06 09:45:14.904 UTC [shim] userChaincodeStreamGetter -> ERRO 005 context deadline exceeded
error trying to connect to local peer
github.com/hyperledger/fabric/core/chaincode/shim.userChaincodeStreamGetter
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:111
github.com/hyperledger/fabric/core/chaincode/shim.Start
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/chaincode.go:150
main.main
/chaincode/input/src/github.com/chaincode/alepomm/alepomm.go:355
runtime.main
/opt/go/src/runtime/proc.go:195
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2337
Error creating new Smart Contract: error trying to connect to local peer: context deadline exceeded
(Ignore the timestamps and IPs here; the logs are from different runs, but the same thing happens every time.)
Here is my compose section for the peer.
peer0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    # the following setting starts chaincode containers on the same
    # bridge network as the peers
    # https://docs.docker.com/compose/networking/
    - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    - CORE_PEER_ID=peer0.org1.example.com
    - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
    - CORE_PEER_LOCALMSPID=Org1MSP
    # - CORE_PEER_ADDRESSAUTODETECT=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
  ports:
    - 7051:7051
    - 7053:7053
  expose:
    - 7051
    - 7053
  command: peer node start
  depends_on:
    - couchdb0
  networks:
    fabric:
      aliases:
        - "peer0.org1.example.com"
  deploy:
    placement:
      constraints:
        - node.hostname == ip-172-31-22-132
This is really weird: if I remove the deploy section (which is stack-specific) everything works.
My network is an overlay network with swarm scope.
@yacovm, thank you so much for the help! The issue was that the chaincode container was not launching on the same network as my peer container, and hence it was not able to connect to it. To fix it,
CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
was added to the env of my peer containers. Now it works like a charm.
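To sanity-check the fix, a hedged look inside the peer container (container name as used above): the chaincode port should now be bound to all interfaces rather than a single overlay IP:
docker exec -it peer0.org1.example.com sh -c 'netstat -nalp | grep 7052'
# expected: tcp   0   0 0.0.0.0:7052   0.0.0.0:*   LISTEN   .../peer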