SERVICE UNAVAILABLE - No raft leader when trying to create channel in Hyperledger Fabric setup in Kubernetes

Start_orderer.sh file:
# edit the *values.yaml file to be used with the helm chart and deploy the orderer through it
consensus_type=etcdraft
# change the variables below to configure the persistent volume size
persistence_status=true
persistent_volume_size=2Gi
while getopts "i:o:O:d:" c
do
  case $c in
    i) network_id=$OPTARG ;;
    o) number=$OPTARG ;;
    O) org_name=$OPTARG ;;
    d) domain=$OPTARG ;;
  esac
done
network_path=/etc/zeeve/fabric/${network_id}
source status.sh
cp ../yaml-files/orderer.yaml $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
sed -i "s/persistence_status/$persistence_status/; s/persistent_volume_size/$persistent_volume_size/; s/consensus_type/$consensus_type/; s/number/$number/g; s/org_name/${org_name}/; s/domain/$domain/; " $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
helm install orderer-${number}${org_name} --namespace blockchain-${org_name} -f $network_path/yaml-files/orderer-${number}${org_name}_values.yaml `pwd`/../helm-charts/hlf-ord
cmd_success $? orderer-${number}${org_name}
# update state of the deployed component, used for pod-level operations like start, stop, restart etc.
update_statusfile helm orderer_${number}${org_name} orderer-${number}${org_name}
update_statusfile persistence orderer_${number}${org_name} $persistence_status
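For reference, the script would be invoked along these lines; the network ID, orderer number, org name, and domain below are placeholder values, not the actual ones used in this setup:
# hypothetical invocation: -i network id, -o orderer number, -O org name, -d domain
./start_orderer.sh -i my-network-id -o 1 -O originator -d demointainabs.emulya.com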
Configtx.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
Organizations:
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.demointainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.demointainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.demointainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.demointainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.demointainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.demointainabs.emulya.com
        Port: 443
  - &Orginvestor
    Name: investor
    ID: investorMSP
    MSPDir: ./crypto-config/investor/msp
    AnchorPeers:
      - Host: peer1.investor.intainabs.emulya.com
        Port: 443
  - &Orgtrustee
    Name: trustee
    ID: trusteeMSP
    MSPDir: ./crypto-config/trustee/msp
    AnchorPeers:
      - Host: peer1.trustee.intainabs.emulya.com
        Port: 443
  - &Orgwhlender
    Name: whlender
    ID: whlenderMSP
    MSPDir: ./crypto-config/whlender/msp
    AnchorPeers:
      - Host: peer1.whlender.intainabs.emulya.com
        Port: 443
  - &Orgservicer
    Name: servicer
    ID: servicerMSP
    MSPDir: ./crypto-config/servicer/msp
    AnchorPeers:
      - Host: peer1.servicer.intainabs.emulya.com
        Port: 443
  - &Orgissuer
    Name: issuer
    ID: issuerMSP
    MSPDir: ./crypto-config/issuer/msp
    AnchorPeers:
      - Host: peer1.issuer.intainabs.emulya.com
        Port: 443
  - &Orgoriginator
    Name: originator
    ID: originatorMSP
    MSPDir: ./crypto-config/originator/msp
    AnchorPeers:
      - Host: peer1.originator.intainabs.emulya.com
        Port: 443
Orderer: &OrdererDefaults
  OrdererType: etcdraft
  Addresses:
    - orderer1.originator.demointainabs.emulya.com:443
    - orderer2.trustee.demointainabs.emulya.com:443
    - orderer2.issuer.demointainabs.emulya.com:443
    - orderer1.trustee.demointainabs.emulya.com:443
    - orderer1.issuer.demointainabs.emulya.com:443
    - orderer1.originator.intainabs.emulya.com:443
    - orderer2.trustee.intainabs.emulya.com:443
    - orderer2.issuer.intainabs.emulya.com:443
    - orderer1.trustee.intainabs.emulya.com:443
    - orderer1.issuer.intainabs.emulya.com:443
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Kafka:
    Brokers:
      - kafka-hlf.blockchain-kz.svc.cluster.local:9092
  EtcdRaft:
    Consenters:
      - Host: orderer1.originator.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.demointainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
      - Host: orderer1.originator.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
        ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
      - Host: orderer2.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
      - Host: orderer2.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
      - Host: orderer1.trustee.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
        ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
      - Host: orderer1.issuer.intainabs.emulya.com
        Port: 443
        ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
        ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
  Organizations:
Application: &ApplicationDefaults
  Organizations:
Profiles:
  BaseGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
        - *Orgoriginator
        - *Orgtrustee
        - *Orgissuer
    Consortiums:
      MyConsortium:
        Organizations:
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
          - *Orginvestor
          - *Orgtrustee
          - *Orgwhlender
          - *Orgservicer
          - *Orgissuer
          - *Orgoriginator
  BaseChannel:
    Consortium: MyConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
        - *Orgoriginator
        - *Orgissuer
        - *Orgservicer
        - *Orgwhlender
        - *Orgtrustee
        - *Orginvestor
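For context, the channel artifacts mentioned below would typically be generated from this configtx.yaml with configtxgen, roughly as follows; the profile names come from the file, while the channel IDs and output paths are assumptions:
# generate the orderer genesis block from the BaseGenesis profile (assumed channel ID and paths)
configtxgen -profile BaseGenesis -channelID system-channel -outputBlock ./channel-artifacts/genesis.block
# generate a channel creation transaction from the BaseChannel profile
configtxgen -profile BaseChannel -channelID mychannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx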
I am currently setting up a Hyperledger Fabric network in Kubernetes. My network includes 6 organizations and 5 orderer nodes, and the orderers use Raft consensus. I have done the following:
- Set up CA and TLS CA servers
- Set up the ingress controller
- Generated crypto material for peers and orderers
- Generated channel artifacts
- Started peers and orderers
The next step is to create the channel on the orderer for each org and join the peers in each org to the channel. I am unable to create the channel; when requesting channel creation, I get the following error:
SERVICE UNAVAILABLE - No raft leader.
How can I fix this issue?
Can anyone please guide me on this? Thanks in advance.
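For anyone hitting the same error, a minimal diagnostic sketch (the namespace and workload names below are assumptions derived from the helm release names above) is to check each orderer's logs for Raft election activity and to confirm that the consenter endpoint is reachable over TLS on 443:
# look for leader election / step messages in one orderer's logs
kubectl logs -n blockchain-originator deploy/orderer-1originator | grep -iE "raft|leader|election"
# confirm the consenter endpoint terminates TLS as expected through the ingress
openssl s_client -connect orderer1.originator.demointainabs.emulya.com:443 \
  -servername orderer1.originator.demointainabs.emulya.com </dev/null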

Related

Default Traefik install from k3s, how is port forwarding done?

I am trying to add entry points to the Traefik installed by default with my k3s...
I see Traefik listens on ports 8000 and 8443...
Part of the default deployment:
containers:
  - name: traefik
    image: rancher/mirrored-library-traefik:2.6.1
    args:
      - '--global.checknewversion'
      - '--global.sendanonymoususage'
      - '--entrypoints.metrics.address=:9100/tcp'
      - '--entrypoints.traefik.address=:9000/tcp'
      - '--entrypoints.web.address=:8000/tcp'
      - '--entrypoints.websecure.address=:8443/tcp'
      - '--api.dashboard=true'
      - '--ping=true'
      - '--metrics.prometheus=true'
      - '--metrics.prometheus.entrypoint=metrics'
      - '--providers.kubernetescrd'
      - '--providers.kubernetesingress'
      - >-
        --providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/traefik
      - '--entrypoints.websecure.http.tls=true'
    ports:
      - name: metrics
        containerPort: 9100
        protocol: TCP
      - name: traefik
        containerPort: 9000
        protocol: TCP
      - name: web
        containerPort: 8000
        protocol: TCP
      - name: websecure
        containerPort: 8443
        protocol: TCP
How does this work?
client => xxx.com:80 => ???? => traefik:8000 => ???? => pod:80 ....
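The missing hop can be seen from the Traefik Service itself; a minimal check, assuming the default k3s install where Traefik is exposed by a LoadBalancer Service named traefik in kube-system:
# show the Service port mapping, e.g. port 80/443 -> targetPort web/websecure (8000/8443)
kubectl get svc traefik -n kube-system -o wide
kubectl get svc traefik -n kube-system -o jsonpath='{range .spec.ports[*]}{.name}{" "}{.port}{" -> "}{.targetPort}{"\n"}{end}'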

gRPC requests not connecting using AWS NLB with ACM cert

I have a k8s cluster running a gRPC service behind an Envoy proxy; all gRPC and web requests are collected by Envoy and passed to the backend. The Envoy Service is exposed through an NLB, and the NLB has an ACM certificate attached. Without the NLB certificate, requests are passed to the backend correctly and I get a response, but I need to use nlb-url:443; when I attach the ACM cert to the NLB, I get no response at all. Why?
Or do I need another ingress to handle SSL and routing?
envoy-svc.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:12345676789:certificate/ss304s07-3ss2-4s73-8744-bs2sss123460
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  creationTimestamp: "2021-06-11T02:50:24Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  name: envoy-service
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
    - nodePort: 31156
      port: 443
      protocol: TCP
      targetPort: 80
  selector:
    name: envoy
  sessionAffinity: None
  type: LoadBalancer
envoy-conf
admin:
  access_log_path: /dev/stdout
  address:
    socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 80
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                http2_protocol_options: {}
                access_log:
                  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
                  #
                  # You can also configure this extension with the qualified
                  # name envoy.access_loggers.http_grpc
                  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
                  - name: envoy.access_loggers.file
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                      # Console output
                      path: /dev/stdout
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: /
                            grpc:
                          route:
                            cluster: greeter_service
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        # custom-header-1 is just an example. the grpc-web
                        # repository was missing grpc-status-details-bin header
                        # which used in a richer error model.
                        # https://grpc.io/docs/guides/error/#richer-error-model
                        allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type
                        expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
                        max_age: "1728000"
                http_filters:
                  - name: envoy.filters.http.grpc_web
                    # This line is optional, but adds clarity to the configuration.
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                  - name: envoy.filters.http.cors
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                  - name: envoy.filters.http.grpc_json_transcoder
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
                      proto_descriptor: "/etc/envoy-sync/sync.pb"
                      ignore_unknown_query_parameters: true
                      services:
                        - "com.tk.system.sync.Synchronizer"
                      print_options:
                        add_whitespace: true
                        always_print_primitive_fields: true
                        always_print_enums_as_ints: true
                        preserve_proto_field_names: true
                  - name: envoy.filters.http.router
                    typed_config:
                      # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
    - name: greeter_service
      type: LOGICAL_DNS
      connect_timeout: 0.25s
      lb_policy: round_robin
      load_assignment:
        cluster_name: greeter_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: micro-deployment
                      port_value: 8081
      http2_protocol_options: {} # Force HTTP/2
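A quick way to narrow down where the connection breaks, assuming grpcurl is available and the backend exposes gRPC reflection (hostnames below are placeholders):
# through the NLB, TLS handled by the ACM certificate
grpcurl nlb-url.example.com:443 list
# from inside the cluster, plaintext straight to the Envoy Service (bypassing the NLB)
grpcurl -plaintext envoy-service.default.svc.cluster.local:443 list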

Authentication in MongoDB is not working in Azure Container Instance when deploying with Azure File Share as mount volume

I am deploying a MongoDB container as an Azure Container Instance in a group using an Azure CLI YAML script. I am using an Azure file share as a mount volume; with this mount volume, authentication is not enabled in MongoDB, though without the mount volume it works fine. Here is my YAML script:
apiVersion: '2019-12-01'
location: eastus
name: api-service
properties:
  containers:
    - name: mongo-db
      properties:
        image: serviceregistry.azurecr.io/apiservice_mongo-db:latest
        ports:
          - port: 27017
        environmentVariables:
          - name: MONGO_INITDB_DATABASE
            value: logsdb
          - name: MONGO_INITDB_ROOT_USERNAME
            value: root
          - name: MONGO_INITDB_ROOT_PASSWORD
            secureValue: mypass
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
        volumeMounts:
          - mountPath: /data/mongodata
            name: mongofileshare
        command: ['mongod', '--dbpath', '/data/mongodata'] #if i comment this line works fine
    - name: log-api
      properties:
        environmentVariables: []
        image: serviceregistry.azurecr.io/apiservice_log-api:latest
        ports:
          - port: 3100
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
    - name: mongo-express
      properties:
        image: mongo-express:latest
        ports:
          - port: 8081
        environmentVariables:
          - name: ME_CONFIG_MONGODB_SERVER
            value: 127.0.0.1
          - name: ME_CONFIG_MONGODB_PORT
            value: 27017
          - name: ME_CONFIG_MONGODB_ADMINUSERNAME
            value: root
          - name: ME_CONFIG_MONGODB_ADMINPASSWORD
            secureValue: mypass
          - name: ME_CONFIG_BASICAUTH_USERNAME
            value: admin
          - name: ME_CONFIG_BASICAUTH_PASSWORD
            secureValue: mypass
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
  osType: Linux
  restartPolicy: OnFailure
  ipAddress:
    type: Public
    ports:
      - protocol: tcp
        port: 3100
      - protocol: tcp
        port: 8081
      - protocol: tcp
        port: 27017
    dnsNameLabel: serviceapidns
  volumes:
    - name: mongofileshare
      azureFile:
        sharename: mysharefile
        storageAccountName: myaccount
        storageAccountKey: mykey
  imageRegistryCredentials:
    - server: serviceregistry.azurecr.io
      username: MyUserName
      password: MyPass
tags: {}
type: Microsoft.ContainerInstance/containerGroups
I got this:
msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","principalName":"root","authenticationDatabase":"admin","client":"127.0.0.1:44566","result":"UserNotFound: Could not find user "root" for db "admin""}}

server closed the stream without sending trailers

I'm trying to communicate from Envoy to Envoy using gRPC on Kubernetes (Amazon EKS).
I have an Envoy sidecar and I am using grpcurl to validate the request.
The request is delivered to the application container and there are no errors, but the console returns the following result:
server closed the stream without sending trailers
I don't know what the reason for the above problem is; what could be the cause of this result?
I was able to confirm that the response came back fine when I hit a single service before connecting it with Envoy.
This is my Envoy config:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 127.0.0.1
      port_value: 10000
static_resources:
  listeners:
    - name: listener_secure_grpc
      address:
        socket_address:
          protocol: TCP
          address: 0.0.0.0
          port_value: 8443
      traffic_direction: INBOUND
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: service_grpc
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: cluster_grpc
                            max_stream_duration:
                              grpc_timeout_header_max: 30s
                tracing: {}
                http_filters:
                  - name: envoy.filters.http.health_check
                    typed_config:
                      "@type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
                      pass_through_mode: false
                      headers:
                        - name: ":path"
                          exact_match: "/healthz"
                  - name: envoy.filters.http.router
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext"
              common_tls_context:
                tls_certificates:
                  - certificate_chain:
                      filename: /etc/ssl/grpc/tls.crt
                    private_key:
                      filename: /etc/ssl/grpc/tls.key
    - name: listener_stats
      address:
        socket_address:
          protocol: TCP
          address: 0.0.0.0
          port_value: 10001
      traffic_direction: INBOUND
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
                codec_type: AUTO
                stat_prefix: ingress_http
                route_config:
                  virtual_hosts:
                    - name: backend
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: /stats
                          route:
                            cluster: cluster_admin
                http_filters:
                  - name: envoy.filters.http.router
    - name: listener_healthcheck
      address:
        socket_address:
          protocol: TCP
          address: 0.0.0.0
          port_value: 10010
      traffic_direction: INBOUND
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
                codec_type: AUTO
                stat_prefix: ingress_http
                route_config: {}
                http_filters:
                  - name: envoy.filters.http.health_check
                    typed_config:
                      "@type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
                      pass_through_mode: false
                      headers:
                        - name: ":path"
                          exact_match: "/healthz"
                  - name: envoy.filters.http.router
  clusters:
    - name: cluster_grpc
      connect_timeout: 1s
      type: STATIC
      http2_protocol_options: {}
      upstream_connection_options:
        tcp_keepalive: {}
      load_assignment:
        cluster_name: cluster_grpc
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 1443
    - name: cluster_admin
      connect_timeout: 1s
      type: STATIC
      load_assignment:
        cluster_name: cluster_grpc
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 10000
P.S. 2021.03.19
Here's what else I found out.
When I request from the ingress host, I get the above failure, but when I request from the service, I get a normal response!
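For comparison, both paths can be probed directly with grpcurl; a sketch, assuming reflection is enabled and with placeholder hostnames:
# via the ingress host (what currently fails)
grpcurl -insecure ingress-host.example.com:443 list
# via the service / sidecar TLS listener directly (what currently works)
grpcurl -insecure my-service.my-namespace.svc.cluster.local:8443 list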

AWS ELB redirect HTTP to HTTPS

I am using this CloudFormation template https://github.com/widdix/aws-cf-templates/blob/master/jenkins/jenkins2-ha-agents.yaml to set up a Jenkins server.
I now want to add an SSL certificate to the ELB and have modified https://github.com/widdix/aws-cf-templates/blob/master/jenkins/jenkins2-ha-agents.yaml#L511-L519 to the following:
MasterELBListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - Type: "redirect"
        RedirectConfig:
          Protocol: "HTTPS"
          Port: "443"
          Host: "#{host}"
          Path: "/#{path}"
          Query: "#{query}"
          StatusCode: "HTTP_301"
    LoadBalancerArn: !Ref MasterELB
    Port: 80
    Protocol: HTTP
MasterHTTPSListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    Certificates:
      # - CertificateArn: !Ref CertificateARN
      - CertificateArn: !FindInMap
          - SSLmapping
          - ssl1
          - !FindInMap
            - AWSRegionsNameMapping
            - !Ref 'AWS::Region'
            - RegionName
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MasterELBTargetGroup
    LoadBalancerArn: !Ref MasterELB
    Port: 443
    Protocol: HTTPS
But when I try to access the site, it just times out.
Any advice is much appreciated.
OK, I needed to open access to 443 on the ELB, with:
MasterELBHTTPSSGInWorld:
  Type: 'AWS::EC2::SecurityGroupIngress'
  Condition: HasNotAuthProxySecurityGroup
  Properties:
    GroupId: !Ref MasterELBSG
    IpProtocol: tcp
    FromPort: 443
    ToPort: 443
    CidrIp: '0.0.0.0/0'
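With the 443 ingress rule in place, the listener chain can be sanity-checked with curl (the hostname is a placeholder):
# expect a 301 redirect with a Location: https://... header from the port-80 listener
curl -I http://jenkins.example.com/
# expect the Jenkins response from the HTTPS listener
curl -kI https://jenkins.example.com/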