Hyperledger Fabric v1.0.0 instantiate chaincode on Kubernetes failed

I was testing Hyperledger Fabric v1.0.0 with Kubernetes. The fabric network contains 2 orgs, 4 peers, 1 orderer, and a cli. Things go well until I instantiate the chaincode from the cli. The peer's error message is in the screenshot below. It says the image is missing, but the image was just created successfully. What's the problem and how can I solve it?
(screenshot: peer's error message)

The answer can be found in the Hyperledger RocketChat on the #fabric-kubernetes channel:
"you basically need the peer to surface its dynamic IP (that's what AUTOADDRESS does) and then tell the chaincode to basically ignore the x509 CN, that's what SERVERHOSTOVERRIDE does (and the other part is you need the peer pod to be privileged so it has the rights to drive the docker-api)."
Basically, there's a lot to be learned from following the discussion from that point.
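As a rough illustration of those two requirements (not from the RocketChat thread; the deployment name "peer0-org1" is made up, and the exact Fabric environment variables depend on your peer image and configuration):
# Hedged sketch: let the peer detect and publish its dynamic pod IP
# (CORE_PEER_ADDRESSAUTODETECT maps to peer.addressAutoDetect in core.yaml):
kubectl set env deployment/peer0-org1 CORE_PEER_ADDRESSAUTODETECT=true
# Run the peer container privileged so it has the rights to drive the Docker
# API it uses to build and start chaincode images:
kubectl patch deployment peer0-org1 --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/securityContext",
   "value": {"privileged": true}}
]'
# The SERVERHOSTOVERRIDE part is normally passed to the chaincode side as a TLS
# host-override environment variable; the exact name depends on your Fabric image.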

Related

Deploy API REST IBM Hyperledger Composer Blockchain (bad flag in substitute command: 'U' ERROR)

I'm getting this error while trying to deploy a card to a working blockchain on the cloud; any idea? Thanks in advance. I'm using a mac, following the guide (Kubernetes is installed and configured correctly, I think):
https://ibm-blockchain.github.io/interacting/
./create/create_composer-rest-server.sh --paid --business-network-card /Users/sm/jsblock/tutorial-network/PeerAdmin#fabric-network.card
Configured to setup a paid storage on ibm-cs
Preparing yaml file for create composer-rest-server
sed: 1: "s/%COMPOSER_CARD%//User ...": bad flag in substitute command: 'U'
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
error: no objects passed to create
Composer rest server created successfully
There is an error in that document, and you are also specifying the Peer Admin card when you need to use a Network Admin Card.
There are 2 sets of 'parallel' documents for paid and free clusters. The command you are using includes --paid in error. If you remove --paid and use the Network Admin Card, I think it will solve the problem.
Your command will look something like this: ./create/create_composer-rest-server.sh --business-network-card admin#YOUR-network
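Incidentally, the "bad flag in substitute command: 'U'" line in your output is a side effect of passing a card path: the script appears to substitute the card value into the yaml template with sed, using "/" as the delimiter, so a value starting with /Users breaks the expression. A sketch of the failure and of one slash-safe way to write the substitution (the template file name is only assumed from the output above):
# What the script effectively runs when given a path - BSD sed then reads the
# 'U' of '/Users' as a substitute flag and fails with "bad flag ... 'U'":
sed -e "s/%COMPOSER_CARD%//Users/sm/jsblock/tutorial-network/PeerAdmin#fabric-network.card/" composer-rest-server.yaml
# A delimiter other than "/" (or a card name with no slashes, as suggested
# above) avoids the clash:
sed -e "s|%COMPOSER_CARD%|admin#YOUR-network|" composer-rest-server.yaml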
I tried again with the install process (the deployed blockchain instance seemed to work well): https://ibm-blockchain.github.io/simple/
But I noticed that the script:
./create_all.sh
...never ended (mostly an internal kubernetes problem, I guess), so I reset the installation:
./delete_all.sh
./create_all.sh
I tried again, and now everything is OK. I could get my awesome REST API talking with my Hyperledger blockchain, deployed on IBM Cloud. Ready to develop something amazing.

Why do we need to use the PeerAdmin#byfn-network-org1 card for network start?

A question regarding the "Deploying a Hyperledger Composer blockchain business network to Hyperledger Fabric (multiple organizations)" tutorial. On Step Seventeen, why do we need to use the PeerAdmin#byfn-network-org1 card instead of the PeerAdmin#byfn-network-org1-only card?
I am trying to apply those instructions to a multi-organization network on the IBM Blockchain Platform and getting an error when I try to use the card with all the peers. Things seem to work okay if I use the card with the single org's peers. But I am wondering if there is a specific reason to use the multi-org peers card for "composer network start".
Thanks,
Naveen
As you know (from the tutorial you referred to), Org1 requires two connection profiles. One connection profile will contain just the peer nodes that belong to Org1 (-only), and the other connection profile will contain the peer nodes that belong to Org1 and Org2.
The composer network start in Step Seventeen instantiates the business network on all peers (defined in the profile) for the shared ledger/channel. That channel is defined in the connection.json (which is part of the business network card), i.e. it instantiates the network across the peers of all (two) orgs on the 'blockchain network'. A prior 'composer runtime install' has already been done on those peers. The 'start' only needs to be done once for the business network (e.g. by the Org1 admin in this case). So the connection profile must contain the peer node info that belongs to both Org1 and Org2 (i.e. a component part of the 'PeerAdmin#byfn-network-org1' card imported into the wallet). This is as opposed to the card called byfn-network-org1-only, which only has Org1's peers defined (because typically, in the real world, you would only be allowed to install the Composer runtime on a peer or peers in your 'own' organisation, and not another's). PeerAdmin has the role/authority to do the runtime install and network start.
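As a rough sketch of that sequence (not taken verbatim from the tutorial; the business network and file names are illustrative, card references use the name@network form on the CLI, and the exact flag names changed between Composer releases):
# Each org installs the runtime on its OWN peers, using its '-only' card:
composer runtime install --card PeerAdmin@byfn-network-org1-only --businessNetworkName my-network
# The network is then started ONCE, using the card whose connection profile
# lists the peers of both orgs:
composer network start --card PeerAdmin@byfn-network-org1 --archiveFile my-network.bna \
  --networkAdmin alice --networkAdminCertificateFile alice/admin-pub.pem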
It sounds like your cards may actually be 'the wrong way around' - purely based on what you wrote. Because you will definitely need both peers defined in a card to be able to do Step Seventeen and you would not be able to do it with the '-only' card. I would check your connection profiles and see what's where.

Issue connecting composer to Blockchain on Bluemix - identity or token does not match

I have Fabric Composer 0.7.2 installed on my mac, and I was able to follow this thread to get it connected to my Blockchain (v0.6.1 of Fabric) on Bluemix.
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an ubuntu (16.04) docker container and run composer-rest-server there. When I try to connect to my blockchain service from my docker container (using the same id, WebAppAdmin, that I used on my mac) I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17 code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my mac to my docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error that says "signature does not verify". I did some additional testing, and I discovered that if I used an id that I had not previously used with composer (i.e. user_type1_0) then I could connect, and I could see a new cert in my .composer-credentials directory.
I tried deleting that container and building a new one (I dorked something else up), but I could not use that same userid again.
Does anybody know how security and these certs are supposed to work? It would seem as though something to do with certificate generation/validation is tied to the client (i.e. hardware address), such that if I try to re-use an id on a different machine, the certs or keys or something don't match. I have a way to make things work, but it doesn't seem like it's the right way if I can't use the same id from different machines.
Thanks!
Hi, I tried to recreate this by having blockchain running on a unix machine, then copying my connection profile and certificate to my mac and editing the connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using composer v0.7.4, so you could try that?
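A sketch of that copy-and-ping flow with the 0.7.x CLI (the profile and network names are placeholders, the paths are Composer's defaults, and the exact flags vary between Composer releases):
# Copy the Composer artefacts from the machine where the connection already works:
scp -r otherhost:~/.composer-connection-profiles ~/
scp -r otherhost:~/.composer-credentials ~/
# Edit the copied connection.json so the membership service and peer addresses
# point at the Bluemix service and keyValStore points at ~/.composer-credentials,
# then verify the connection:
composer network ping -p bluemix-profile -n my-network -i WebAppAdmin -s <enrollSecret>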
I have also faced this issue, and concluded that there is inconsistent behaviour when deploying a network using Composer on a cloud environment, including Bluemix. The problem is not with Composer, but with Fabric 0.6.
I am assuming that this issue is also indirectly related to the following known bugs in Fabric 0.6, which will not be fixed in Fabric 0.6.
ERROR:
"
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
"
So far, we have understood that the following three JIRA issues are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity and the fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
FAB-2787
Conclusion:
There is no alternative way of fixing this issue on Bluemix or any other cloud environment with Fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as all the above-mentioned defects are not fixed yet.

Can I use multiple chaincodes with a single Bluemix blockchain service?

I'm new to the IBM Bluemix Blockchain service. I wonder if I can create multiple chaincodes. This is because I got the following error.
! looks like an error loading the chaincode or network, app will fail
{ name: 'register() error',
code: 401,
details: { Error: 'rpc error: code = 13 desc = \'server closed the stream without sending trailers\'' } }
Here is what I did:
Created a blockchain service, named 'blockchain'.
Run cp-web example => Success
Run marbles demo using existing blockchain service ('blockchain'). => Gives me the above error
Newly created a blockchain service, named 'mbblochchain'
Repush marbles demo with new service name => Success
So I wonder whether I can put multiple chaincodes into a peer network or not. It is likely I am misunderstanding how it works or how it should behave.
Yes you can deploy multiple chaincodes on the same network. The issue you are having is because each app is registering users differently.
Currently only 1 username (aka enrollID) can be registered against 1 peer. If you try to register the same username against two peers, the 2nd registration will fail. This is what is happening to you.
The Bluemix blockchain service is returning two type1 usernames (type1 is the type of enrollID these apps want to use).
cp-web will register the first and second enrollID against peer vp1
marbles will register the first enrollID against vp1 and the 2nd enrollID against vp2
Therefore when you ran marbles after cp-web it tried to register the 2nd enrollID against vp2 when it had already been registered with vp1. Thus giving you an error.
In general, you can deploy multiple chaincode apps to a single instance of the Bluemix Blockchain service and more broadly speaking multiple chaincode apps to a single peer network.
Were you deploying the web apps directly using "cf push" and trying to bind to an existing Blockchain service instance, or were you trying to use the "deploy to Bluemix" functionality?
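If it is the "cf push" route, the usual pattern is to push without starting, bind the existing service instance, and then start, along these lines (the app names are placeholders; 'blockchain' is the service instance named in the question):
cf push cp-web --no-start
cf bind-service cp-web blockchain
cf start cp-web
cf push marbles --no-start
cf bind-service marbles blockchain
cf start marbles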

Not able to connect to cluster. Facing Certificate signed by unknown authority

I am not sure whether what I am trying to do is possible or is the correct way.
One of my colleagues spun up a Kubernetes GCE cluster (with 1 master and 4 minions) in a project which is shared with me with owner access.
After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the kubernetes workspace.
I also set the project name as the default project in geconfig, and now I can connect to the master and the minions using 'gcutil ssh --zone us-central1-b kubernetes-master'
But when I try to list of existing pods using 'cluster/kubecfg.sh list pods'
I see
"F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe")
I tried to debug from my side but failed to come to any conclusion. Any sort of clue will be helpful.
You can also copy the cert files off of the master again. They are located in /usr/share/nginx on the master.
It is probably due to a not-yet-implemented feature, see this issue:
https://github.com/GoogleCloudPlatform/kubernetes/issues/1886
You can copy the files from /usr/share/nginx/... on the master into your home dir and try again.
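A sketch of that copy, assuming you can SSH to the master's external IP (the one in your error message) with your GCE key; the exact file names under /usr/share/nginx may differ on your cluster:
ssh <user>@107.178.208.109 'sudo cat /usr/share/nginx/ca.crt' > ~/.kubecfg.ca.crt
ssh <user>@107.178.208.109 'sudo cat /usr/share/nginx/kubecfg.crt' > ~/.kubecfg.crt
ssh <user>@107.178.208.109 'sudo cat /usr/share/nginx/kubecfg.key' > ~/.kubecfg.key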
I figured out a workaround: set the -insecure_skip_tls_verify option
In kubecfg.sh, change the code near the bottom to:
else
    auth_config=(
        "-insecure_skip_tls_verify"
    )
fi
Obviously this is insecure and you are putting yourself at risk of a man-in-the-middle attack, etc.