I have deployed Hyperledger Composer over Docker Swarm following this article: https://medium.com/@drakshayani7/hi-i-have-deployed-hyperledger-composer-application-over-docker-swarm-f54089d2ed7a. Everything is working perfectly.
Now I'm going one step further by adding a REST server with the passport-google strategy to the current Docker Swarm network.
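For context, composer-rest-server picks up passport strategies from the COMPOSER_PROVIDERS environment variable; my configuration looks roughly like this (a sketch; client ID and secret elided):

export COMPOSER_PROVIDERS='{
  "google": {
    "provider": "google",
    "module": "passport-google-oauth2",
    "clientID": "REPLACE_WITH_CLIENT_ID",
    "clientSecret": "REPLACE_WITH_CLIENT_SECRET",
    "authPath": "/auth/google",
    "callbackURL": "/auth/google/callback",
    "successRedirect": "/",
    "failureRedirect": "/"
  }
}'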
But it's giving me this error:
Exception: Error: Error trying to ping. Error: No peers available to query. last error was Error: 14 UNAVAILABLE: Connect Failed Error: Error trying to ping. Error: No peers available to query. last error was Error: 14 UNAVAILABLE: Connect Failed at _checkRuntimeVersions.then.catch (/home/composer/.npm-global/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:813:34)
at
Currently, in the REST server card's connection.json file, I have used the Docker service names. I have also tried the host names and the Docker IP addresses, but it still shows the same error.
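For reference, the card's connection.json is an hlfv1-style profile along these lines (a sketch; the service names are the ones from my compose file):

{
  "type": "hlfv1",
  "orderers": [
    { "url": "grpc://orderer0:7050" }
  ],
  "peers": [
    {
      "requestURL": "grpc://peer0:7051",
      "eventURL": "grpc://peer0:7053"
    }
  ],
  "channel": "composerchannel",
  "mspID": "Org1MSP"
}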
Can anyone please guide me on this...
After running the command "docker-compose up" I get the error below. Is there any solution?
Your config is trying to access port 5432, which is unavailable. It is probably busy with another application; for example, PostgreSQL is already running.
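If the port really is taken by a local PostgreSQL, one common workaround is to remap the host side of the published port in docker-compose.yml, along these lines (a hypothetical fragment; service and image names are examples):

services:
  db:
    image: postgres
    ports:
      - "5433:5432"   # host port 5433 avoids the local PostgreSQL on 5432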
I'm trying to install OpenStack on CentOS Stream 9 by following the official OpenStack installation guide for Yoga, available at: https://docs.openstack.org/install-guide/
When I try to bootstrap keystone I get the following error:
/etc/keystone/fernet-keys/ does not exist (see the first screenshot).
When I tried to create a domain using openstack domain create --description "An Example Domain" example, it failed. Upon pinging controller I found that the machine could not resolve the controller hostname, so I added an entry to /etc/hosts that explicitly resolved controller to my machine's IP, as shown below.
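The entry I added was of this form (10.0.0.11 is just an example address; the install guide uses it for the controller node):

10.0.0.11    controller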
Pinging the controller then succeeded, but I was still not able to create a domain.
I tried creating a project using openstack project create --domain default --description "Service Project" service. This command failed with an internal server error.
What should I do to resolve these errors?
I have created a Mongo container using only the base mongo:3.6.4 official docker image and deployed it to my OpenShift OKD cluster, but cannot connect to this MongoDB instance using a Mongo client from outside the cluster.
I can access the pod at http://mongodb.my.domain and successfully get the "It looks like you are trying to access MongoDB over HTTP on the native driver port." message.
When using the terminal on the pod I can successfully log in using:
mongo "mongodb://mongoadmin:pass@localhost" --authenticationDatabase admin
But when trying to connect from outside OKD the connection fails.
My client needs to pass through a proxy before it can access the OKD pods and I do have a .der certificate file but am unsure if this is related to the issue.
Some commands I have tried:
mongo "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
mongo --ssl "mongodb://mongoadmin:pass#mongodb.my.domain:80" --authenticationDatabase admin
I expected to be able to connect successfully but instead get this error message:
MongoDB shell version v3.4.20
connecting to: mongodb://mongoadmin:pass@mongodb.my.domain:80
2019-05-15T11:32:25.514+0100 I NETWORK [thread1] recv(): message len 1347703880 is invalid. Min 16 Max: 48000000
2019-05-15T11:32:25.514+0100 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongodb.my.domain:80' :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
I am unsure if it is an issue with how I am using my MongoDB client or potentially some proxy settings on my OKD cluster. Any help would be appreciated.
The problem here is that external OpenShift routes aren't great at handling database connections. When you attempt to connect to the Mongo pod via the route, the route accepts the connection and transmits it to the Mongo service. I believe this transmission wraps the connection in an HTTP wrapper, which Mongo doesn't like to handle. The OKD documentation highlights that path-based route traffic should be HTTP-based, which will cause the connection to fail.
You can see evidence of this when trying to connect to a MongoDB database and it returns "It looks like you are trying to access MongoDB over HTTP on the native driver port." to the browser. The user relief.malone explains this and has proposed a couple of solutions / workarounds in their answer to this question.
To add to relief.malone's answer, I would suggest that you port-forward from the MongoDB pod to your local machine for development and debugging. In production, you could deploy an application to OKD that references the MongoDB service via its internal DNS name, which will look something like this: mongodb.project_namespace.svc:27017. This way you will avoid the route interfering with the connection.
The OpenShift OKD documentation on port forwarding isn't that informative, but since oc runs the kubectl command under the hood, you can read the Kubernetes guide on port forwarding to get some more information.
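To illustrate the port-forward approach, a minimal sketch (the pod name here is hypothetical, and the credentials are the ones from the question):

# forward local port 27017 to the MongoDB pod
oc port-forward mongodb-1-abcde 27017:27017
# then, in a second terminal, connect through the tunnel
mongo "mongodb://mongoadmin:pass@localhost:27017" --authenticationDatabase admin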
I am working with Composer v0.16.2, and I am getting an error when I try to reconnect to composer-rest-server.
I am using this command:
composer-rest-server -c admin@mynetwork -n always -a true -m true -w true -t true -e /home /.nvm/versions/node/v8.9.3/lib/node_modules/composer-rest-server/cert.pem -k /home /.nvm/versions/node/v8.9.3/lib/node_modules/composer-rest-server/key.pem
Whatever options I set, it works fine the first time, but when I reconnect with the same command I have to restart Fabric and deploy the business network again; otherwise it shows this error:
Discovering types from business network definition ...
Connection fails:
Error: Error trying to ping.
Error: Error trying to query business network.
Error: Connect Failed It will be retried for the next request.
Exception: Error: Error trying to ping.
Error: Error trying to query business network.
Error: Connect Failed Error: Error trying to ping.
Error: Error trying to query business network.
Error: Connect Failed at _checkRuntimeVersions.then.catch (/home/.../.nvm/versions/node/v8.9.1/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:713:34)
at <anonymous>
I found a similar question, "Hyperledger Composer v0.16.0 network start error", but I still have to restart Fabric when this error appears and deploy the business network archive again before the REST server will start.
My question is: how can I avoid this error, without restarting Fabric, whenever I need to start the REST server?
The first action of the REST server is to 'Discover' the Business Network using the admin@mynetwork Card. So you can simplify testing here by not using the REST server, but by issuing a simpler command: composer network ping -c admin@mynetwork or composer network list -c admin@mynetwork.
When your admin@mynetwork card is created (when you deploy the business network) and then imported, BEFORE you use it, try the command composer card list --name admin@mynetwork - at the bottom of the output you should see:
secretSet: Secret set
credentialsSet: Credentials not set
After you use the card for the first time with a composer network ping or list, redo the composer card list --name admin@mynetwork and you should see a change in the output to Credentials set.
This is important because the Card is created with a one-time secret, and when it is first used the certificates are downloaded - Credentials set. The failure of the REST server the second time you use it suggests that the certificates needed for that second use are not present.
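As a quick check, the sequence looks roughly like this (a sketch using the card name from the question; the comments describe the expected output):

composer card list --name admin@mynetwork    # expect: credentialsSet: Credentials not set
composer network ping -c admin@mynetwork     # first use downloads the certificates
composer card list --name admin@mynetwork    # expect: credentialsSet: Credentials set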
I am at my wits' end and have been searching everywhere for a solution to this problem, but it seems like I am the only one with it.
I have tried multiple installation methods and multiple versions of Spinnaker, but I cannot seem to restore the state of my Spinnaker installation after I reboot the machine. I SSH in:
gcloud compute ssh $HALYARD_HOST --project=$GCP_PROJECT -- -L 9000:localhost:9000 -L 8084:localhost:8084
I then redirect my browser to the spinnaker UI
http://localhost:9000
But I get the following showing up on the ssh terminal:
channel 5: open failed: connect failed: Connection refused
channel 5: open failed: connect failed: Connection refused
channel 6: open failed: connect failed: Connection refused
...
It just continues like that for as long as I keep the GUI open, which sits at the loading screen.
It sometimes lets me proceed past this point, but then the UI is completely useless. Clicking on different menu options just shows a massive spinner which doesn't go anywhere and everything I did before the reboot is now gone.
I have tried the prebaked system provided by Google's 1-click deploy. I have also tried both the Spinnaker compute and container codelabs provided by Spinnaker. I have searched through a whole host of GitHub issues, but no one seems to be running into this problem.
TL;DR: On Google's Kubernetes environment, install and configure Halyard as root, and install your Spinnaker instance to the cluster, not to the Halyard VM.
So I figured out the issue. I had not noticed that the gcloud ssh command creates a new user on login when I changed workstations; my users on all of my different machines have different usernames (Windows, Linux, Mac, home, and office environments).
Secondly, I completed the installation by installing Spinnaker directly to the Kubernetes cluster from the Halyard VM, and I performed the Halyard installation and configuration as root.
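For reference, the cluster-targeted deployment I configured as root looked roughly like this (the Kubernetes account name is an example):

sudo hal config deploy edit --type distributed --account-name my-k8s-account
sudo hal deploy apply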
After trawling through a whole lot of GitHub issues and answers, I noticed that my Spinnaker installation had been done by the initial user I logged into the machine as, and I would often end up reconnecting to the instance as a different user and wondering why nothing was suddenly working.