IBM Hyperledger Fabric business network deployment on Enterprise plan - ibm-cloud

composer network install -c adminCard -a hyperledger-fabric-network.bna
The network install command fails with the following error:
Installing business network. This may take a minute...E1115 11:51:11.667324200 30359 ssl_transport_security.cc:599] Could not load any root certificate.
E1115 11:51:11.667359374 30359 ssl_transport_security.cc:1400] Cannot load server root certificates.
E1115 11:51:11.667373715 30359 security_connector.cc:1025] Handshaker factory creation failed with TSI_INVALID_ARGUMENT.
E1115 11:51:11.667384067 30359 secure_channel_create.cc:111] Failed to create secure subchannel for secure name 'ldn-zbc03a.4.secure.blockchain.ibm.com:20355'
E1115 11:51:11.667390697 30359 secure_channel_create.cc:142] Failed to create subchannel arguments during subchannel creation.
E1115 11:51:11.668097853 30359 ssl_transport_security.cc:599] Could not load any root certificate.
E1115 11:51:11.668109600 30359 ssl_transport_security.cc:1400] Cannot load server root certificates.
E1115 11:51:11.668118612 30359 security_connector.cc:1025] Handshaker factory creation failed with TSI_INVALID_ARGUMENT.
E1115 11:51:11.668123679 30359 secure_channel_create.cc:111] Failed to create secure subchannel for secure name 'ldn-zbc03a.4.secure.blockchain.ibm.com:20355'
E1115 11:51:11.668129626 30359 secure_channel_create.cc:142] Failed to create subchannel arguments during subchannel creation.
✖ Installing business network. This may take a minute...
Error: Error trying install business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: Failed to connect before the deadline

Please check the adminCard. It seems the certificate isn't correct.
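To see what the card actually contains, a minimal sketch (assuming the Composer CLI is installed and the card is named adminCard, as in the command above) is to export the card and inspect the connection profile inside it:

composer card list
composer card export -c adminCard -f adminCard.card
# a .card file is a zip archive containing connection.json, metadata.json and the credentials;
# the peer/orderer endpoints and TLS certificate settings live in connection.json
unzip -o adminCard.card -d adminCard-contents
cat adminCard-contents/connection.json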

Related

tls: bad certificate after certificate updates

I have a Hyperledger Fabric (1.3) network whose certificates had expired, so I was not able to execute peer chaincode commands.
I regenerated the certificates using the same CA server and replaced them. Now I am able to run query commands, but I still get the following error on the peer for invoke:
2022-11-23 15:07:01.440 UTC [grpc] createTransport -> DEBU 0be grpc:
addrConn.createTransport failed to connect to {orderer1:7050 0 <nil>}. Err :connection error:
desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
Kindly help. Any suggestion will be appreciated.
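One thing worth checking: queries are served by the peer alone, while an invoke also has to reach the orderer, so the remaining failure sits on the peer-to-orderer TLS connection. A rough diagnostic sketch (host and port taken from the log above; adjust if the orderer is only reachable from inside the network) to see which server certificate orderer1 presents and whether it has expired:

# print the subject, issuer and expiry of the certificate orderer1 presents
openssl s_client -connect orderer1:7050 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -enddate

If mutual TLS is enabled, the orderer's trusted root certificates and the TLS client certificate the peer presents also have to match the regenerated CA material.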

Puppet: Issuer certificate error when requesting a certificate from an Ubuntu 18 node to a CentOS 8 master

I am trying to configure a demo, cross-platform Puppet setup, that is:
Puppet master on CentOS 8 running puppetserver version 6.11.1
Puppet node on Ubuntu 18.04 running Puppet v5.4.0
However, when I try a test run with puppet agent --test, I get the error below.
It looks like the agent expects the issuer certificate of the Puppet CA, that is, the certificate of the root CA which signed the Puppet CA's certificate. This error does not occur with a CentOS master and a CentOS node.
Can anyone help out here? Can I copy this cert from the Puppet server's files on CentOS to the Ubuntu node that is expecting it? If so, what would be the location on the CentOS host?
root#node04-ubuntu:/# puppet agent --test --ca_server=puppetmaster.localdomain.com --no-daemonize --waitforcert=20 --verbose
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate): [unable to get issuer certificate for /CN=Puppet CA: puppetmaster.localdomain.com]
Info: Retrieving pluginfacts
Error: /File[/var/cache/puppet/facts.d]: Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate): [unable to get issuer certificate for /CN=Puppet CA: puppetmaster.localdomain.com]
Error: /File[/var/cache/puppet/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet:///pluginfacts: SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate): [unable to get issuer certificate for /CN=Puppet CA: puppetmaster.localdomain.com]
Info: Retrieving plugin
Error: /File[/var/cache/puppet/lib]: Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate): [unable to get issuer certificate for /CN=Puppet CA: puppetmaster.localdomain.com]
Error: /File[/var/cache/puppet/lib]: Could not evaluate: Could not retrieve file metadata for puppet:///plugins: SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate): [unable to get issuer certificate for /CN=Puppet CA: puppetmaster.localdomain.com]
Error: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate): [unable to get issuer certificate for /CN=Puppet CA: puppetmaster.localdomain.com]
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get issuer certificate): [unable to get issuer certificate for /CN=Puppet CA: puppetmaster.localdomain.com
Fixed
The issue was a version mismatch. I brought the client up to the same (latest) version as the master and it is working now. Cheers
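A rough sketch of aligning the agent with the master on the Ubuntu 18.04 node, assuming the official Puppet 6 apt packages are acceptable there (the distro ships Puppet 5.4.0; the repo URL and package name below are Puppet's standard bionic repo, and the hostname is taken from the question):

wget https://apt.puppet.com/puppet6-release-bionic.deb
dpkg -i puppet6-release-bionic.deb
apt-get update && apt-get install -y puppet-agent
# the vendor package installs under /opt/puppetlabs; clear any certs left over from the old agent first
rm -rf "$(/opt/puppetlabs/bin/puppet config print ssldir)"
/opt/puppetlabs/bin/puppet agent --test --server=puppetmaster.localdomain.com --waitforcert=20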

Error restoring Rancher: This cluster is currently Unavailable; areas that interact directly with it will not be available until the API is ready

I am trying to back up and restore a Rancher server (single node install), as described here.
After the backup, I turned off the Rancher server node, ran a new Rancher container on a new node (in the same network, but with another IP address), and then restored using the backup file.
After restoring, I logged in to the Rancher UI and it showed the error below:
So I checked the logs of the Rancher server, which showed the following:
2019-10-05 16:41:32.197641 I | http: TLS handshake error from 127.0.0.1:38388: EOF
2019-10-05 16:41:32.202442 I | http: TLS handshake error from 127.0.0.1:38380: EOF
2019-10-05 16:41:32.210378 I | http: TLS handshake error from 127.0.0.1:38376: EOF
2019-10-05 16:41:32.211106 I | http: TLS handshake error from 127.0.0.1:38386: EOF
2019/10/05 16:42:26 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019/10/05 16:44:34 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019/10/05 16:48:50 [ERROR] ClusterController c-4pgjl [user-controllers-controller] failed with : failed to start user controllers for cluster c-4pgjl: failed to contact server: Get https://192.168.94.154:6443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
2019-10-05 16:50:19.114475 I | mvcc: store.index: compact 75951
2019-10-05 16:50:19.137825 I | mvcc: finished scheduled compaction at 75951 (took 22.527694ms)
2019-10-05 16:55:19.120803 I | mvcc: store.index: compact 76282
2019-10-05 16:55:19.124813 I | mvcc: finished scheduled compaction at 76282 (took 2.746382ms)
After that, I checked the logs of the master nodes and found that the Rancher agent still tries to connect to the old Rancher server (old IP address), not the new one, which makes the cluster unavailable.
How can I fix this?
You need to re-register the node in Rancher using the following steps.
Update the server-url in Rancher by going to Global -> Settings -> server-url
This should be the full URL with https://
Then use this script to re-register the node in Rancher https://github.com/mattmattox/cluster-agent-tool
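Before re-registering, it can help to confirm which server URL the agents are still pointing at. A rough sketch, assuming you still have kubectl access to the downstream cluster and it runs the standard Rancher agents in the cattle-system namespace:

# CATTLE_SERVER is the Rancher URL the agents were registered against
kubectl -n cattle-system get deployment cattle-cluster-agent -o yaml | grep -A1 'name: CATTLE_SERVER'
kubectl -n cattle-system get daemonset cattle-node-agent -o yaml | grep -A1 'name: CATTLE_SERVER'

If these still show the old IP address, re-registering (or re-running the agents against the new server-url) is what updates them.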

Cannot join new node to k8s cluster

I want to join my new server to the k8s cluster, but it failed and I do not know why.
# kubeadm join 10.100.1.20:6443 --token xxxxxx --discovery-token-ca-cert-hash sha256:xxxxxx
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I1126 10:30:33.608681 7238 kernel_validator.go:81] Validating kernel version
I1126 10:30:33.608737 7238 kernel_validator.go:96] Validating kernel config
[WARNING Hostname]: hostname "t-k8s-b1" could not be reached
[WARNING Hostname]: hostname "t-k8s-b1" lookup t-k8s-b1 on 103.224.222.222:53: no such host
[discovery] Trying to connect to API Server "10.100.1.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.100.1.20:6443"
[discovery] Requesting info from "https://10.100.1.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.100.1.20:6443"
[discovery] Successfully established connection with API Server "10.100.1.20:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
Unauthorized
I cannot find the new node:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
t-k8s-a1 Ready master 6d v1.11.3
t-k8s-b2 Ready <none> 6d v1.11.3
In /var/log/messages:
Nov 26 10:40:39 t-k8s-b1 systemd: Configuration file /etc/systemd/system/kubelet.service is marked executable. Please remove executable permission bits. Proceeding anyway.
I changed /etc/systemd/system/kubelet.service from 0755 to 0644 and the warning disappeared, and I ran modprobe for the ip_vs, ip_vs_rr, ip_vs_wrr and ip_vs_sh modules (the exact commands are sketched after the output below), but the join still fails with Unauthorized:
[preflight] running pre-flight checks
I1126 10:48:03.529871 8416 kernel_validator.go:81] Validating kernel version
I1126 10:48:03.529927 8416 kernel_validator.go:96] Validating kernel config
[WARNING Hostname]: hostname "t-k8s-b1" could not be reached
[WARNING Hostname]: hostname "t-k8s-b1" lookup t-k8s-b1 on 103.224.222.222:53: no such host
[discovery] Trying to connect to API Server "10.100.1.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.100.1.20:6443"
[discovery] Requesting info from "https://10.100.1.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.100.1.20:6443"
[discovery] Successfully established connection with API Server "10.100.1.20:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
Unauthorized
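For reference, the missing IPVS modules from the preflight warning can be loaded and made persistent across reboots with something along these lines (module names are taken from the warning output; modprobe -a and /etc/modules-load.d are standard tooling, not anything kubeadm-specific):

# load the modules now
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4
# and have systemd load them on every boot
cat <<'EOF' > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF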
Solution
The reason was that the token had expired. I created a new token and joined with it, and everything is fine now:
# kubeadm token create
new token
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ca cert hash
# kubeadm join 10.100.1.20:6443 --token new_token --discovery-token-ca-cert-hash sha256:ca_cert_hash
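As a side note, the token and hash steps can be collapsed into a single command on the master, which prints a ready-to-use join line:

# kubeadm token create --print-join-command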

Kubeadm alpha phase certs causes join command token to be invalid

I have used kubeadm alpha phase certs to recreate the certificates used in my Kubernetes cluster, and also used the alpha phase for kubeconfig. Now, when trying to join a new worker, I get errors that my token is invalid, even though the token has been regenerated three times using kubeadm token create --print-join-command.
The error that I keep getting is:
[discovery] Created cluster-info discovery client, requesting info from "https://x.x.x.x:6443"
[discovery] Failed to connect to API Server "x.x.x.x:6443": token id "bvw4cz" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
Anyone run into the same problems or have a suggestion?
Thanks!
EDIT--
This is the tail end of /var/log/syslog --
Nov 5 09:40:01 master01 kubelet[755]: E1105 09:40:01.892304 755 kubelet.go:2236] node "master01" not found
Nov 5 09:40:01 master01 kubelet[755]: E1105 09:40:01.928937 755 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://x.x.x.x:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetserver&limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Nov 5 09:40:01 master01 kubelet[755]: E1105 09:40:01.992427 755 kubelet.go:2236] node "master01" not found
EDIT 2 - 1. Now the real question is: if regenerating the certs does not make the cluster trust its own CA again, how do you fix this problem? 2. Is this a well-known problem?
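Not a full answer, but two checks that follow directly from the log above (paths are the kubeadm defaults; adjust if your layout differs): verify that the API server certificate still chains to the current ca.crt, and recompute the discovery hash from that same ca.crt so the hash passed to the worker matches the CA the cluster is actually using:

# does apiserver.crt chain to the CA currently on the master?
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver.crt
# recompute the --discovery-token-ca-cert-hash from the same ca.crt
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'

The kubelet error ("certificate signed by unknown authority") also suggests the kubeconfig files regenerated by the alpha phase may still embed the old CA, which is worth comparing against /etc/kubernetes/pki/ca.crt.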