Hyperledger Fabric Custom Chaincode Instantiation: how to fix 'context deadline exceeded' - docker-compose

I'm working with the "first-network" example of the Hyperledger Fabric framework as found here, but I am not able to successfully instantiate a custom Java chaincode file. Here is the command I am running, which is a slightly modified version of the command that the successful start script runs:
peer chaincode instantiate -o orderer.example.com:7050 --tls false --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n javacc -l java -v 1.4 -c '{"Args":["init","a","{10}","b","{20}"]}' --connTimeout 600s -P 'AND ('\''Org1MSP.peer'\'','\''Org2MSP.peer'\'')'
When I run this command, I get the following error:
Error: error getting broadcast client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: context deadline exceeded
After some Googling, the error seems fairly straightforward: for some reason the client container I'm using isn't able to connect to my orderer container. However, this doesn't make sense to me for the following reasons:
First: I was successfully able to run ./byfn.sh up -v -c mychannel -s couchdb, which initially installed and instantiated some golang chaincode from the client container with the following commands (the -v 1.0 was later upgraded by the byfn.sh script to version 1.4):
peer chaincode install -n mycc -v 1.0 -l golang -p github.com/chaincode/chaincode_example02/go/
peer chaincode instantiate -o orderer.example.com:7050 --tls false --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n mycc -l golang -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P 'AND ('\''Org1MSP.peer'\'','\''Org2MSP.peer'\'')'
Second: I was successfully able to install (though not instantiate) my custom Java chaincode from within the same client container: peer chaincode install -n javacc -v 1.4 -l java -p /chaincode-application (/chaincode-application is the location of my Java chaincode source)
I have examined a few possible avenues so far. The website mentioned above does have a note about the length of time: "Please note, Java chaincode instantiation might take time as it compiles chaincode and downloads docker container with java environment." Therefore, I used the --connTimeout 600s (600 seconds = 10 minutes) as part of my command, but I still get the error after waiting for the specified amount of time. Besides, would it really take that long inside of a peer container when it takes about 3 seconds to build in my local IntelliJ?
I have also tried hitting the orderer node from inside my client container: root@7bbadf11e755:/opt/gopath/src/github.com/hyperledger/fabric/peer# curl orderer.example.com:7050, which fails with the following message: curl: (56) Recv failure: Connection reset by peer. This seems to indicate that my client container can connect to the orderer container, but the orderer container doesn't know how to handle the empty request path. As far as I know this is to be expected, which likely means the issue isn't in my Docker Compose setup. The only thing I have changed in my Docker Compose files is turning off TLS.
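(As an aside, if nc and openssl are available in the client image, a raw TCP probe and a TLS handshake attempt separate "port unreachable" from "TLS mismatch" more cleanly than curl - a hedged suggestion, not something from the original troubleshooting:)
# does a plain TCP connection to the orderer succeed at all?
nc -vz orderer.example.com 7050
# if the orderer still expects TLS, a handshake attempt will show a certificate;
# a plaintext orderer will reset the connection instead
openssl s_client -connect orderer.example.com:7050 </dev/null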
Finally, I've also followed the example Maven project setup for the custom Java chaincode file located inside of my client. I have a pom.xml file that specifies my main class name, and a few Java source files inside of a 'src' directory, including a Java class that implements the proper Chaincode interface as specified in Hyperledger Fabric's documentation. These files build just fine when I run the build with maven outside of the container.
Interestingly, I see the following message on only one of the peer nodes:
2019-04-11 02:11:36.056 UTC [endorser] callChaincode -> INFO 119 [][dc951a0d] Entry chaincode: name:"cscc"
2019-04-11 02:11:36.060 UTC [endorser] callChaincode -> INFO 11a [][dc951a0d] Exit chaincode: name:"cscc" (3ms)
2019-04-11 02:11:36.060 UTC [comm.grpc.server] 1 -> INFO 11b unary call completed {"grpc.start_time": "2019-04-11T02:11:36.055Z", "grpc.service": "protos.Endorser", "grpc.method": "ProcessProposal", "grpc.peer_address": "172.27.0.11:42744", "grpc.code": "OK", "grpc.call_duration": "5.7837ms"}
The other peer node has no entry from today, and neither does the orderer node, which I find curious.
I've tried Googling around for the error. I did find this issue, which seems to be exactly my problem, but there is only one answer that doesn't solve the problem and I'm not allowed to comment or vote it up. I also found this, but wasn't able to glean any insights into my own issue since I'm running all of the Hyperledger Fabric nodes locally. I'm out of ideas for how to troubleshoot this issue. Does anyone have any idea why I'm still getting the 'context deadline exceeded' message?

I re-enabled TLS in my network and the error went away (another error occurred instead, but that's outside the scope of my question :) ). It seems that maybe I didn't disable all the TLS settings like I thought.
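In hindsight, a quick grep across the compose files would probably have caught the stray setting. A sketch, assuming the standard first-network layout (file names may differ in your checkout):
# every TLS_ENABLED-style setting across the compose files should agree
# (all true or all false) for the peers, orderer, and CLI
grep -rn "TLS_ENABLED" base/ docker-compose-cli.yaml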
Thanks to @Gari Singh for putting me on the right track!

Related

Pi4 k3s install: server currently unable to handle the request

I'm trying to install and run a single-node lightweight Kubernetes cluster on my Raspberry Pi 4 to play around with, and k3s is what I found. However, I'm probably missing something, because I haven't found any reference to the exact problem I'm getting (testing with a simple kubectl command after installation):
$ kubectl get nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request
The installations that I've referenced:
Turing Pis, multi-node cluster
-> The part about knowing and using Ansible currently seems like a bit of overkill.
Pi setup & k3s install -> Good tutorial, but I'm not getting similar config responses:
$ sudo k3s server
INFO[2020-09-30T06:58:13.488363192+01:00] Starting k3s v1.18.9+k3s1 (630bebf9)
INFO[2020-09-30T06:58:13.489450500+01:00] Cluster bootstrap already complete
FATA[2020-09-30T06:58:13.535582640+01:00] starting kubernetes: preparing server: start cluster and https: listen tcp :6443: bind: address already in use
I presumed that this isn't necessary anymore, based on the newer installation version.
Complete k3s 101 YouTube -> Still not magically working, as shown.
So I'd appreciate it if anyone could help me, or point me in a direction to better debug and surface the problem so that I can understand and fix it.
Feedback from the installation didn't indicate that anything went wrong:
$ sudo curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 664" sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.9+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/sha256sum-arm.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.9+k3s1/k3s-armhf
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
After that, trying commands:
$ k3s --version
k3s version v1.18.9+k3s1 (630bebf9)
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9+k3s1", GitCommit:"630bebf94b9dce6b8cd3d402644ed023b3af8f90", GitTreeState:"clean", BuildDate:"2020-09-17T19:04:57Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/arm"}
Error from server (ServiceUnavailable): the server is currently unable to handle the request
$ sudo kubectl get nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request
$ sudo k3s kubectl get nodes
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
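(For reference, a couple of commands that can narrow down whether the API server ever came up - this assumes the default systemd-based k3s install:)
# is anything listening on the API server port?
sudo ss -tlnp | grep 6443
# what is the k3s service doing right now?
systemctl status k3s
sudo journalctl -u k3s --no-pager | tail -n 50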
And looking with htop, 'something' is definitely happening with the k3s server processes (screenshot omitted).
Not sure if anything is missing, or if anything must be changed in the hosts file, for the k3s server + agent on the device:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 raspberrypi
... No clue what to debug further.
After learning a bit more about the installation process by watching this video (k3s install on Pi4 - live walkthrough), I noticed that k3s runs as a service on Raspbian, meaning you're able to:
# see all listed services, to find the name of the running k3s service
$ systemctl --type=service
# service name ironically being 'k3s', and being able to follow the logs for service
$ journalctl -u k3s -f
However, looking in '/boot/cmdline.txt', these cgroup values were in the file, but after an end-of-line character, which prevented the k3s service from reading them (the kernel only reads the first line of this file). The file content needs to be:
$ sudo cat /boot/cmdline.txt
console=serial0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_enable=1 cgroup_memory=1 cgroup_enable=memory
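A quick way to verify the file is intact and that the flags took effect after a reboot (my own addition, not part of the original fix):
# cmdline.txt must be a single line; the kernel ignores anything after a newline
wc -l /boot/cmdline.txt
# after a reboot, the memory cgroup should report enabled=1 in the last column
grep memory /proc/cgroups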
With that done, I checked journalctl again and noticed significantly different logs, now about pods, containers, etc. The master node is functional:
$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
raspberrypi Ready master 3m52s v1.18.9+k3s1
If this still doesn't work, I also saw a recent blog post regarding the same issue (due to a Raspbian kernel update), where a fix is also suggested -> post

Keycloak server in docker fails to start in standalone mode?

Well, as the title suggests, this is more of an issue record. I was trying to follow the instructions in this README file of the Keycloak docker server image, but encountered a few blockers.
After pulling the image, the below command to start a standalone instance failed:
docker run jboss/keycloak
The error stack trace:
-b 0.0.0.0
=========================================================================
Using PostgreSQL database
=========================================================================
...
04:45:06,084 INFO [io.smallrye.metrics] (MSC service thread 1-5) Converted [2] config entries and added [4] replacements
04:45:06,096 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 33) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "datasources"),
("data-source" => "KeycloakDS")
]) - failure description: "WFLYCTL0113: '' is an invalid value for parameter user-name. Values must have a minimum length of 1 characters"
...
Caused by: java.lang.RuntimeException: Failed to connect to database
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:382)
...
Caused by: javax.naming.NameNotFoundException: datasources/KeycloakDS -- service jboss.naming.context.java.jboss.datasources.KeycloakDS
at org.jboss.as.naming.ServiceBasedNamingStore.lookup(ServiceBasedNamingStore.java:106)
...
I was wondering how it uses a PostgreSQL database, and assumed it might spin up its own instance. But the error looks like it has a problem connecting to the database.
Changing to the embedded H2 DB made it work.
docker run -e DB_VENDOR="h2" --name docker-keycloak-h2 jboss/keycloak
The docker-entrypoint.sh file shows that it uses the below logic to determine which DB to use:
if (getent hosts postgres &>/dev/null); then
export DB_VENDOR="postgres"
...
And further down the flow, this change-database.cli file indicates that it actually expects a running PostgreSQL instance:
connection-url=jdbc:postgresql://${env.DB_ADDR:postgres}:${env.DB_PORT:5432}/${env.DB_DATABASE:keycloak}${env.JDBC_PARAMS:}
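With no environment variables set, those ${env.VAR:default} placeholders expand to the following, i.e. it expects a reachable host literally named postgres:
jdbc:postgresql://postgres:5432/keycloak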
So I began wondering how PostgreSQL was chosen as the default initially. Executing the below commands in a running Keycloak docker container revealed some interesting things:
[root@71961b81189c bin]# getent hosts postgres
69.172.201.153 postgres.mbox.com
[root@71961b81189c bin]# echo $?
0
Not sure what this postgres.mbox.com is, but apparently it's not an expected PostgreSQL server for getent to resolve. Not sure whether this is a recent Linux issue either. The hosts entry in the Name Service Switch configuration file /etc/nsswitch.conf looks like below inside the container:
hosts: files dns myhostname
It is the dns data source that resolved postgres to postgres.mbox.com.
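An easy way to test for this kind of wildcard resolution (my own suggestion) is to resolve a name that certainly shouldn't exist and compare:
# a nonsense hostname should not resolve at all; if it returns the same
# address as 'postgres', the resolver is doing wildcard redirection
getent hosts postgres
getent hosts this-host-should-not-exist-12345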
This is why the DB vendor determination logic failed, which eventually caused the container to fail to start. The instructions in this README file do not work as of the day this post is published.
Below are the working commands to start a Keycloak server in docker properly with PostgreSQL as the database.
docker network create keycloak-network
docker run -d --name postgres --net keycloak-network -e POSTGRES_DB=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password postgres
docker run --name docker-keycloak-postgres --net keycloak-network -e DB_USER=keycloak -e DB_PASSWORD=password jboss/keycloak
I ran into the same issue. As it turned out, the key to the solution was the missing parameter "DB_USER=keycloak".
The application tried to authenticate against the database using the username ''. This was indicated by the first error message:
WFLYCTL0113: '' is an invalid value for parameter user-name
Possibly the 4.x and 5.0.0 versions set the default user name to "keycloak", which is no longer the case in 6.0.0.
After adding the parameter DB_USER=keycloak to the list of environment variables, keycloak started up without any problems.
I've also made an interesting observation on this issue, even in version 7.0.0. As the author mentions, postgres is selected if the hostname resolves:
$ getent hosts postgres
92.242.140.21
What I've noticed is that if I issue a ping command for anything bizarre, even foobar, it resolves to that same IP address. Example:
$ ping foobar
PING foobar (92.242.140.21): 56 data bytes
It seems that my ISP sends everything to a common endpoint. I've solved the problem by using -e DB_VENDOR=h2 to select the H2 DB, and then had no issues. Alternatively, you can always spin up your own postgres instance, or point to a legitimate endpoint (not something fake provided by your ISP for DNS error handling).
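In other words, pinning the vendor explicitly takes the resolver out of the detection path entirely. A sketch using the environment variables from the image's README (names real, values placeholders):
# bypass hostname-based detection by naming the vendor explicitly
docker run -e DB_VENDOR=h2 --name keycloak-h2 jboss/keycloak
# or point at a real PostgreSQL instance on a shared docker network
docker run --net keycloak-network -e DB_VENDOR=postgres -e DB_ADDR=postgres -e DB_USER=keycloak -e DB_PASSWORD=password jboss/keycloak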

Concourse 5.0 Installation in AWS

We have been trying to set up Concourse 5.0.0 (we already set up 4.2.2) in AWS. We have created two instances, one for web and another for the worker. We are able to see the site up and running, but we are not able to run our pipeline. We checked the logs and noticed the worker throwing the below error:
worker.beacon.forward-conn.failed-to-dial","data":{"addr":"127.0.0.1:7777","error":"dial tcp 127.0.0.1:7777: connect: connection refused","network":"tcp","session":"9.1.4"}}
We are assuming the worker is struggling to connect to the web instance and are wondering if this could be due to missing gdn configuration. The Concourse 5.0.0 release included both the concourse and gdn binaries. We want to try the --garden-config file to see if that fixes the problem.
Can somebody suggest how to write the garden config file?
I had this same problem and solved it using @umamaheswararao-meka's answer. (Using Ubuntu 18.04 on EC2)
Also had a problem with containers not being able to resolve domain names (https://github.com/docker/libnetwork/issues/2187). Here is the error message:
resource script '/opt/resource/check []' failed: exit status 1
stderr:
failed to ping registry: 2 error(s) occurred:
* ping https: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* ping http: Get http://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
What I did:
sudo apt-get install resolvconf -y
# These are Cloudflare's DNS servers; append via tee so the write runs as root
# (a plain `sudo echo ... >>` applies the redirect as the calling user and fails)
echo "nameserver 1.1.1.1" | sudo tee -a /etc/resolvconf/resolv.conf.d/tail
echo "nameserver 1.0.0.1" | sudo tee -a /etc/resolvconf/resolv.conf.d/tail
sudo resolvconf -u
cat /etc/resolv.conf # just to make sure changes are in place
# restart concourse service
Containers make use of resolv.conf, and as the file is generated dynamically on Ubuntu 18.04, this was the easiest way of making containers inherit this configuration.
Also, relevant snippets from man resolvconf:
-u Just run the update scripts (if updating is enabled).
/etc/resolvconf/resolv.conf.d/tail
File to be appended to the dynamically generated resolver configuration file. To append
nothing, make this an empty file. This file is a good place to put a resolver options
line if one is needed, e.g.,
It was an issue with gdn (the garden binary), which was not configured. We had to include CONCOURSE_BIND_IP=xx.xx.x.x (the IP where your gdn is located) and CONCOURSE_BIND_PORT=7777 (gdn's port) in the worker.env file, which solved the problem for us.
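For anyone else hitting this, here is a sketch of the relevant worker.env lines. Only the two BIND lines come from the fix above; the rest are typical worker settings with placeholder values:
CONCOURSE_WORK_DIR=/opt/concourse/worker
CONCOURSE_TSA_HOST=<web-instance-ip>:2222
CONCOURSE_TSA_PUBLIC_KEY=/opt/concourse/keys/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/opt/concourse/keys/worker_key
CONCOURSE_BIND_IP=xx.xx.x.x   # IP where gdn listens
CONCOURSE_BIND_PORT=7777      # gdn's port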

Docker dotnet run port not mapping, windows 10 host, linux container

I'm following a Pluralsight course (https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) which uses the older microsoft/aspnetcore-build image, but I'm running Core 2.1, so I'm using microsoft/dotnet:2.1-sdk instead.
The command I'm running is:
docker run -it -p 8080:5001 -v ${pwd}:/app -w "/app" microsoft/dotnet:2.1-sdk
and then once inside the TTY I do a dotnet run which gives me the following output:
Using launch settings from /app/Properties/launchSettings.json...
info:
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
User profile is available. Using '/root/.aspnet/DataProtection-Keys'
as key repository; keys will not be encrypted at rest.
info:
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[58]
Creating key {5445e854-c1d9-4261-82f4-0fc3a7543e0a} with creation date
2018-12-14 10:41:13Z, activation date 2018-12-14 10:41:13Z, and
expiration date 2019-03-14 10:41:13Z.
warn:
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key
{5445e854-c1d9-4261-82f4-0fc3a7543e0a} may be persisted to storage in
unencrypted form.
info:
Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[39]
Writing data to file
'/root/.aspnet/DataProtection-Keys/key-5445e854-c1d9-4261-82f4-0fc3a7543e0a.xml'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to bind to https://localhost:5001 on the IPv6 loopback
interface: 'Cannot assign requested address'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to bind to http://localhost:5000 on the IPv6 loopback
interface: 'Cannot assign requested address'.
Hosting environment: Development
Content root path: /app
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Then, when I open browser on my host and navigate to http://localhost:8080 I get a "This page isn't working" "localhost didn't send any data" " ERR_EMPTY_RESPONSE"
I've tried a couple different port combinations too with the same result.
Can anyone spot where I went wrong? Or have any ideas / suggestions?
Not sure if this question is still relevant for you, but I also encountered this issue, so I'll leave my solution here for others. I used PowerShell with the following docker command (almost the same as yours; I just used internal port 90 instead of 5000 and added the --rm switch, which automatically removes the container when it exits):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" microsoft/dotnet /bin/bash
And after that, I got the interactive bash shell, and when typing dotnet run I got the same output as you and could not reach my site in the container via localhost:8080.
I resolved it by using the UseUrls method or the --urls command-line argument. Both indicate the IP addresses or host addresses, with ports and protocols, that the server should listen on for requests. Below are descriptions of the solutions that worked for me.
Edit the CreateWebHostBuilder method in Program.cs like below:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseUrls("http://+:90") //for your case you should use 5000 instead of 90
.UseStartup<Startup>();
You can specify several ports if needed using the following syntax: .UseUrls("http://+:90;http://+:5000")
With this approach, you just type dotnet run in the bash shell and your container will be reachable at localhost:8080.
But with the previous approach you alter the default behavior of your source code, which you might forget about and have to debug and fix in the future. So I prefer the second approach, which leaves the source code unchanged. After typing the docker command and getting an interactive bash shell, instead of plain dotnet run, type it with the --urls argument like below (in your case use port 5000 instead of 90):
dotnet run --urls="http://+:90"
In the documentation there is also a third approach where you can use the ASPNETCORE_URLS environment variable, but this approach didn't work for me. I used the following command (with the -e switch):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" -e "ASPNETCORE_URLS=http://+:90" microsoft/dotnet /bin/bash
If you type printenv in bash you will see that the ASPNETCORE_URLS environment variable was passed to the container, but for some reason dotnet run ignores it.
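A possible explanation - my assumption, not something I've verified: the dotnet run output above says "Using launch settings from /app/Properties/launchSettings.json", and an applicationUrl set in a launch profile takes precedence over ASPNETCORE_URLS. If that is what's happening, skipping the launch profile should let the variable through:
# hypothetical fix: bypass launchSettings.json so ASPNETCORE_URLS wins
ASPNETCORE_URLS="http://+:90" dotnet run --no-launch-profile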

Hyperledger Composer v0.16.2 REST server error

I am working with Composer v0.16.2. I am getting an error when I try to reconnect to the composer-rest-server.
I am using this command:
composer-rest-server -c admin@mynetwork -n always -a true -m true -w true -t true -e /home /.nvm/versions/node/v8.9.3/lib/node_modules/composer-rest-server/cert.pem -k /home /.nvm/versions/node/v8.9.3/lib/node_modules/composer-rest-server/key.pem
Whatever options I set, it works fine the first time, but when I reconnect with the same command I need to restart Fabric and deploy the business network again, otherwise it shows this error:
Discovering types from business network definition ...
Connection fails:
Error: Error trying to ping.
Error: Error trying to query business network.
Error: Connect Failed It will be retried for the next request.
Exception: Error: Error trying to ping.
Error: Error trying to query business network.
Error: Connect Failed Error: Error trying to ping.
Error: Error trying to query business network.
Error: Connect Failed at _checkRuntimeVersions.then.catch (/home/.../.nvm/versions/node/v8.9.1/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:713:34)
at <anonymous>
Hyperledger Composer v0.16.0 network start error
I found a similar question at this link, but I still need to start Fabric again when this error comes, and deploying the network archive again is what lets the REST server start.
My question is: how can I remove this error without starting Fabric again whenever I need to start the REST server?
The first action of the REST server is to 'discover' the business network using the admin@mynetwork card. So you can simplify testing here by not using the REST server, and instead issuing a simpler command: composer network ping -c admin@mynetwork or composer network list -c admin@mynetwork.
When your admin@mynetwork card is created (when you deploy the business network) and then imported, BEFORE you use it, try the command composer card list --name admin@mynetwork - at the bottom of the output you should see:
secretSet: Secret set
credentialsSet: Credentials not set
After you use the card for the first time with a composer network ping or list, redo the composer card list --name admin@mynetwork and you should see a change in the output, with Credentials set.
This is important because the card is created with a one-time secret, and when it is first used the certificates are downloaded - Credentials set. The failure of the REST server the second time you use it suggests that the certificates needed for that second use are not present.
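If a card has already lost its one-time secret without ever downloading credentials, one way to recover without redeploying the whole network (a sketch using the standard composer card commands; untested against this exact setup) is to capture the credentials right after the first successful use:
# the first successful ping downloads the certificates into the card
composer network ping -c admin@mynetwork
# export the now-credentialed card so it can be restored later
composer card export -c admin@mynetwork -f admin-with-creds.card
# if the in-store card ever goes stale, replace it with the exported copy
composer card delete -c admin@mynetwork
composer card import -f admin-with-creds.card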