"server closed the stream without sending trailers" - Kubernetes

I’m trying to communicate from Envoy to Envoy using gRPC on Kubernetes (Amazon EKS).
I have Envoy running as a sidecar and I am using grpcurl to validate the request.
The request is delivered to the application container with no errors, but the console returns the following result:
server closed the stream without sending trailers
I don’t know what is causing this problem. What could be the reason for this result?
I was able to confirm that the response came back fine when I hit the single service directly, before putting Envoy in front of it.
This is my Envoy config:
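For reference, a grpcurl call against the sidecar's TLS listener on port 8443 would look roughly like this (the package, service, and method names are placeholders, and server reflection is assumed; otherwise the .proto files would have to be passed with -proto):
# -insecure skips certificate verification, which is fine for testing the sidecar directly
grpcurl -insecure -d '{"name": "test"}' my-service.example.com:8443 mypackage.MyService/MyMethod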
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address:
protocol: TCP
address: 127.0.0.1
port_value: 10000
static_resources:
listeners:
- name: listener_secure_grpc
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 8443
traffic_direction: INBOUND
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: service_grpc
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: cluster_grpc
max_stream_duration:
grpc_timeout_header_max: 30s
tracing: {}
http_filters:
- name: envoy.filters.http.health_check
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
pass_through_mode: false
headers:
- name: ":path"
exact_match: "/healthz"
- name: envoy.filters.http.router
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext"
common_tls_context:
tls_certificates:
- certificate_chain:
filename: /etc/ssl/grpc/tls.crt
private_key:
filename: /etc/ssl/grpc/tls.key
- name: listener_stats
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 10001
traffic_direction: INBOUND
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
codec_type: AUTO
stat_prefix: ingress_http
route_config:
virtual_hosts:
- name: backend
domains:
- "*"
routes:
- match:
prefix: /stats
route:
cluster: cluster_admin
http_filters:
- name: envoy.filters.http.router
- name: listener_healthcheck
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 10010
traffic_direction: INBOUND
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
codec_type: AUTO
stat_prefix: ingress_http
route_config: {}
http_filters:
- name: envoy.filters.http.health_check
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck"
pass_through_mode: false
headers:
- name: ":path"
exact_match: "/healthz"
- name: envoy.filters.http.router
clusters:
- name: cluster_grpc
connect_timeout: 1s
type: STATIC
http2_protocol_options: {}
upstream_connection_options:
tcp_keepalive: {}
load_assignment:
cluster_name: cluster_grpc
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 1443
- name: cluster_admin
connect_timeout: 1s
type: STATIC
load_assignment:
cluster_name: cluster_grpc
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 10000
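One way to narrow this down is the Envoy admin interface on 127.0.0.1:10000 defined above, which reports cluster membership and HTTP/2 protocol errors. A rough sketch, assuming curl is available in the sidecar container and the pod name is adjusted to your deployment:
# Check that cluster_grpc has a healthy endpoint at 127.0.0.1:1443
kubectl exec -it <pod-name> -c envoy -- curl -s http://127.0.0.1:10000/clusters | grep cluster_grpc
# Protocol problems between Envoy and the app usually show up in the HTTP/2 and upstream stats
kubectl exec -it <pod-name> -c envoy -- curl -s http://127.0.0.1:10000/stats | grep -E 'http2|upstream_rq'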
P.S. (2021-03-19)
Here's what else I found out.
When I request from the ingress host, I get the above failure, but when I request from the service, I get a normal response!
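One way to compare the two paths is to run the same call with grpcurl's verbose flag, which prints the response headers and trailers (or their absence) for each hop; hostnames below are placeholders, and the second command would be run from a pod inside the cluster:
# Via the ingress host (fails with "server closed the stream without sending trailers")
grpcurl -v -insecure ingress.example.com:443 mypackage.MyService/MyMethod
# Via the Kubernetes service directly (returns a normal response)
grpcurl -v -insecure my-service.default.svc.cluster.local:8443 mypackage.MyService/MyMethod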

Related

How to verify a JWT token with Envoy

I'm using Envoy as the gateway of my backend and I'm trying to use its JWT mechanism.
Here is what I've done:
Generate a private/public key pair with openssl:
openssl genrsa -out keyprv.pem 512
openssl rsa -in keyprv.pem -pubout -out keypub.crt
Let's say that the private key is PRVKEY and the public key is PUBKEY.
Generate a token with Python3. Here is my code:
#!/usr/bin/env python3
import jwt
from datetime import datetime
from datetime import timedelta
due_date = datetime.now() + timedelta(minutes=30)
expiry = int(due_date.timestamp())
payload = {"iss": "ImIss", "sub": "ImSub", "user_id": 12345, "exp": expiry}
private_key=b"-----BEGIN RSA PRIVATE KEY-----\nPRVKEY\n-----END RSA PRIVATE KEY-----"
jwt_token = jwt.encode(payload, private_key, algorithm="RS256", headers={"alg": "RS256", "typ": "JWT"})
print(jwt_token)
After executing the Python3 script, I get a JWT token aaa.bbb.ccc.
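As a sanity check, the token can be verified locally against the public key before involving Envoy. A minimal sketch using the same pyjwt library (the cryptography package must be installed for RS256):
python3 -c '
import jwt
token = "aaa.bbb.ccc"                     # the token printed by the script above
public_key = open("keypub.crt").read()    # the PEM public key generated earlier
print(jwt.decode(token, public_key, algorithms=["RS256"]))
'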
Configure my Envoy to verify the JWT token above. Here is the config file of Envoy:
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 9989
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
http_filters:
- name: envoy.filters.http.jwt_authn
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
providers:
remote_jwt_provider:
issuer: "ImIss"
local_jwks:
inline_string: |
{
"keys": [
{
"kty": "RSA",
"kid": "my-key-id",
"use": "sig",
"n": "PUBKEY", # is this correct?
"e": "AQAB",
}
]
}
forward: true
from_params:
- jwt_token
rules:
- match:
prefix: "/"
requires:
provider_name: remote_jwt_provider
- name: envoy.filters.http.router
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/" }
route:
cluster: cluster_example
clusters:
- name: auth_service
connect_timeout: 30s
type: STATIC
dns_lookup_family: V4_ONLY
load_assignment:
cluster_name: auth_service
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 172.16.0.44
port_value: 8081
- name: cluster_example
connect_timeout: 30s
type: STATIC
dns_lookup_family: V4_ONLY
load_assignment:
cluster_name: cluster_example
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 172.16.0.226
port_value: 11111
But the config file would generate an Envoy error: Jwks RSA [n] or [e] field is missing or has a parse error. It seems that the characters / and = in the PUBKEY cause this error. So I'm thinking I may need to encode the public key with base64. So I use the output of base64.b64encode(PUBKEY) in the config file of Envoy. Now Envoy won't generate any error but it can't verify the token: Jwt verification fails.
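For reference, the n field of a JWK is not the PEM text itself but the base64url-encoded big-endian RSA modulus, and e is normally AQAB (the exponent 65537). A rough sketch for extracting n from the public key, assuming openssl, xxd, and base64 are available:
# Print the modulus as hex, convert to raw bytes, then base64url-encode without padding
openssl rsa -pubin -in keypub.crt -noout -modulus \
  | cut -d= -f2 \
  | xxd -r -p \
  | base64 | tr -d '\n=' | tr '+/' '-_'
Note also that the inline_string JSON above has a trailing comma after "e": "AQAB", which a strict JSON parser will reject.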

gRPC request not connecting using AWS NLB with ACM cert

I have a k8s cluster running a gRPC service behind an Envoy proxy; all gRPC and web requests go to Envoy and are passed to the backend. The Envoy Service is exposed through an NLB, and the NLB has an ACM certificate attached. Without the NLB certificate, requests are passed to the backend correctly and I get a response, but I need to use nlb-url:443. As soon as I attach the ACM cert to the NLB, I get no response at all. Why?
Or do I need to use another ingress to terminate SSL and do the routing?
envoy-svc.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:12345676789:certificate/ss304s07-3ss2-4s73-8744-bs2sss123460
service.beta.kubernetes.io/aws-load-balancer-type: nlb
creationTimestamp: "2021-06-11T02:50:24Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
name: envoy-service
namespace: default
spec:
externalTrafficPolicy: Cluster
ports:
- nodePort: 31156
port: 443
protocol: TCP
targetPort: 80
selector:
name: envoy
sessionAffinity: None
type: LoadBalancer
envoy-conf
admin:
access_log_path: /dev/stdout
address:
socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 80
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
http2_protocol_options: {}
access_log:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
#
# You can also configure this extension with the qualified
# name envoy.access_loggers.http_grpc
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
- name: envoy.access_loggers.file
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
# Console output
path: /dev/stdout
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: /
grpc:
route:
cluster: greeter_service
cors:
allow_origin_string_match:
- prefix: "*"
allow_methods: GET, PUT, DELETE, POST, OPTIONS
# custom-header-1 is just an example. the grpc-web
# repository was missing grpc-status-details-bin header
# which is used in a richer error model.
# https://grpc.io/docs/guides/error/#richer-error-model
allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type
expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
max_age: "1728000"
http_filters:
- name: envoy.filters.http.grpc_web
# This line is optional, but adds clarity to the configuration.
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
- name: envoy.filters.http.cors
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
- name: envoy.filters.http.grpc_json_transcoder
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
proto_descriptor: "/etc/envoy-sync/sync.pb"
ignore_unknown_query_parameters: true
services:
- "com.tk.system.sync.Synchronizer"
print_options:
add_whitespace: true
always_print_primitive_fields: true
always_print_enums_as_ints: true
preserve_proto_field_names: true
- name: envoy.filters.http.router
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
- name: greeter_service
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: round_robin
load_assignment:
cluster_name: greeter_service
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: micro-deployment
port_value: 8081
http2_protocol_options: {} # Force HTTP/2
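One thing to check is what the NLB's TLS listener actually negotiates with the client: gRPC requires HTTP/2 end to end, so if ALPN h2 is not negotiated on port 443 the gRPC calls may fail even though plain web requests still reach the port 80 listener. A rough check with openssl (the NLB hostname is a placeholder):
# Does the NLB present the ACM certificate and negotiate HTTP/2 via ALPN?
openssl s_client -connect nlb-url.example.com:443 -alpn h2 </dev/null 2>/dev/null | grep -i 'alpn'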

OPA (running as a separate host-level service) policies are not getting enforced via Envoy when calling the API

My scenario: one example-app service exposed through a NodePort in a K8s cluster.
The service has an Envoy sidecar. OPA is running as a separate service on the same node.
My app deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-app
labels:
app: example-app
spec:
replicas: 1
selector:
matchLabels:
app: example-app
template:
metadata:
labels:
app: example-app
spec:
initContainers:
- name: proxy-init
image: openpolicyagent/proxy_init:v5
# Configure the iptables bootstrap script to redirect traffic to the
# Envoy proxy on port 8000, specify that Envoy will be running as user
# 1111, and that we want to exclude port 8282 from the proxy for the
# OPA health checks. These values must match up with the configuration
# defined below for the "envoy" and "opa" containers.
args: ["-p", "8000", "-u", "1111"]
securityContext:
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
containers:
- name: app
image: openpolicyagent/demo-test-server:v1
ports:
- containerPort: 8080
- name: envoy
image: envoyproxy/envoy:v1.14.4
securityContext:
runAsUser: 1111
volumeMounts:
- readOnly: true
mountPath: /config
name: proxy-config
args:
- "envoy"
- "--config-path"
- "/config/envoy.yaml"
volumes:
- name: proxy-config
configMap:
name: proxy-config
My Envoy config (loaded through a ConfigMap):
static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 8000
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: backend
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: service
http_filters:
- name: envoy.filters.http.ext_authz
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
grpc_service:
envoy_grpc:
cluster_name: opa
timeout: 0.250s
- name: envoy.filters.http.router
typed_config: {}
clusters:
- name: service
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
load_assignment:
cluster_name: service
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 8080
- name: opa
connect_timeout: 0.250s
type: STRICT_DNS
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
load_assignment:
cluster_name: opa
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: opa
port_value: 9191
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8001
OPA deployment spec (exposed as a NodePort service):
apiVersion: apps/v1
kind: Deployment
metadata:
name: opa
labels:
app: opa
spec:
replicas: 1
selector:
matchLabels:
app: opa
template:
metadata:
labels:
app: opa
name: opa
spec:
containers:
- name: opa
image: openpolicyagent/opa:latest-envoy
ports:
- name: http
containerPort: 8181
securityContext:
runAsUser: 1111
args:
- "run"
- "--server"
- "--log-level=info"
- "--log-format=json-pretty"
- "--set=decision_logs.console=true"
- "--set=plugins.envoy_ext_authz_grpc.addr=:9191"
- "--set=plugins.envoy_ext_authz_grpc.query=data.envoy.authz.allow"
- "--ignore=.*"
- "/policy/policy.rego"
volumeMounts:
- readOnly: true
mountPath: /policy
name: opa-policy
volumes:
- name: opa-policy
configMap:
name: opa-policy
Sample policy.rego (loaded through a ConfigMap):
package envoy.authz
import input.attributes.request.http as http_request
default allow = false
token = {"valid": valid, "payload": payload} {
[_, encoded] := split(http_request.headers.authorization, " ")
[valid, _, payload] := io.jwt.decode_verify(encoded, {"secret": "secret"})
}
allow {
is_token_valid
action_allowed
}
is_token_valid {
token.valid
now := time.now_ns() / 1000000000
token.payload.nbf <= now
now < token.payload.exp
}
action_allowed {
http_request.method == "GET"
token.payload.role == "guest"
glob.match("/people*", [], http_request.path)
}
action_allowed {
http_request.method == "GET"
token.payload.role == "admin"
glob.match("/people*", [], http_request.path)
}
action_allowed {
http_request.method == "POST"
token.payload.role == "admin"
glob.match("/people", [], http_request.path)
lower(input.parsed_body.firstname) != base64url.decode(token.payload.sub)
}
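For reference, the policy above verifies an HMAC-signed token (io.jwt.decode_verify with {"secret": "secret"}) whose payload carries nbf, exp, and role claims (sub is only needed by the POST rule). A rough sketch for minting a guest test token, assuming python3 with pyjwt 2.x is available:
export ALICE_TOKEN=$(python3 -c '
import jwt, time
now = int(time.time())
payload = {"role": "guest", "nbf": now - 60, "exp": now + 3600}  # sub omitted; only the POST rule needs it
print(jwt.encode(payload, "secret", algorithm="HS256"))
')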
When calling the REST API, I get the following response:
curl -i -H "Authorization: Bearer $ALICE_TOKEN" http://$SERVICE_URL/people
HTTP/1.1 403 Forbidden
date: Sat, 13 Feb 2021 08:50:29 GMT
server: envoy
content-length: 0
OPA decision logs:
{
"addrs": [
":8181"
],
"diagnostic-addrs": [],
"level": "info",
"msg": "Initializing server.",
"time": "2021-02-13T08:48:27Z"
}
{
"level": "info",
"msg": "Starting decision logger.",
"plugin": "decision_logs",
"time": "2021-02-13T08:48:27Z"
}
{
"addr": ":9191",
"dry-run": false,
"enable-reflection": false,
"level": "info",
"msg": "Starting gRPC server.",
"path": "",
"query": "data.envoy.authz.allow",
"time": "2021-02-13T08:48:27Z"
}
What am I doing wrong here? There is nothing in the decision logs for my REST call to the API. The OPA policy should be invoked through the Envoy ext_authz filter, but that is not happening.
Please help.
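Since nothing reaches the OPA decision logs, it is worth confirming whether Envoy is invoking the ext_authz filter at all and whether it can reach the opa cluster. A rough sketch using the admin port 8001 from the Envoy config above (the deployment name is an assumption):
# In one terminal: forward the Envoy admin port from the app pod
kubectl port-forward deploy/example-app 8001:8001
# In another terminal: look at the ext_authz filter stats and the opa cluster state
curl -s http://localhost:8001/stats | grep ext_authz
curl -s http://localhost:8001/clusters | grep '^opa::'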

SERVICE UNAVAILABLE - No raft leader when trying to create a channel in a Hyperledger Fabric setup in Kubernetes

Start_orderer.sh file:
#edit *values.yaml file to be used with helm chart and deploy orderer through it
consensus_type=etcdraft
#change below instantiated variable for changing configuration of persistent volume sizes
persistence_status=true
persistent_volume_size=2Gi
while getopts "i:o:O:d:" c
do
case $c in
i) network_id=$OPTARG ;;
o) number=$OPTARG ;;
O) org_name=$OPTARG ;;
d) domain=$OPTARG ;;
esac
done
network_path=/etc/zeeve/fabric/${network_id}
source status.sh
cp ../yaml-files/orderer.yaml $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
sed -i "s/persistence_status/$persistence_status/; s/persistent_volume_size/$persistent_volume_size/; s/consensus_type/$consensus_type/; s/number/$number/g; s/org_name/${org_name}/; s/domain/$domain/; " $network_path/yaml-files/orderer-${number}${org_name}_values.yaml
helm install orderer-${number}${org_name} --namespace blockchain-${org_name} -f $network_path/yaml-files/orderer-${number}${org_name}_values.yaml `pwd`/../helm-charts/hlf-ord
cmd_success $? orderer-${number}${org_name}
#update state of deployed component, used for pod-level operations like start, stop, restart, etc.
update_statusfile helm orderer_${number}${org_name} orderer-${number}${org_name}
update_statusfile persistence orderer_${number}${org_name} $persistence_status
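For reference, given the getopts flags above (-i network id, -o orderer number, -O org name, -d domain), the script would be invoked roughly like this (values are placeholders):
./Start_orderer.sh -i mynetwork -o 1 -O trustee -d demointainabs.emulya.com
With those arguments it copies ../yaml-files/orderer.yaml into the network path, substitutes the variables with sed, and installs a helm release named orderer-1trustee into the blockchain-trustee namespace.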
Configtx.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
Organizations:
- &Orginvestor
Name: investor
ID: investorMSP
MSPDir: ./crypto-config/investor/msp
AnchorPeers:
- Host: peer1.investor.demointainabs.emulya.com
Port: 443
- &Orgtrustee
Name: trustee
ID: trusteeMSP
MSPDir: ./crypto-config/trustee/msp
AnchorPeers:
- Host: peer1.trustee.demointainabs.emulya.com
Port: 443
- &Orgwhlender
Name: whlender
ID: whlenderMSP
MSPDir: ./crypto-config/whlender/msp
AnchorPeers:
- Host: peer1.whlender.demointainabs.emulya.com
Port: 443
- &Orgservicer
Name: servicer
ID: servicerMSP
MSPDir: ./crypto-config/servicer/msp
AnchorPeers:
- Host: peer1.servicer.demointainabs.emulya.com
Port: 443
- &Orgissuer
Name: issuer
ID: issuerMSP
MSPDir: ./crypto-config/issuer/msp
AnchorPeers:
- Host: peer1.issuer.demointainabs.emulya.com
Port: 443
- &Orgoriginator
Name: originator
ID: originatorMSP
MSPDir: ./crypto-config/originator/msp
AnchorPeers:
- Host: peer1.originator.demointainabs.emulya.com
Port: 443
- &Orginvestor
Name: investor
ID: investorMSP
MSPDir: ./crypto-config/investor/msp
AnchorPeers:
- Host: peer1.investor.intainabs.emulya.com
Port: 443
- &Orgtrustee
Name: trustee
ID: trusteeMSP
MSPDir: ./crypto-config/trustee/msp
AnchorPeers:
- Host: peer1.trustee.intainabs.emulya.com
Port: 443
- &Orgwhlender
Name: whlender
ID: whlenderMSP
MSPDir: ./crypto-config/whlender/msp
AnchorPeers:
- Host: peer1.whlender.intainabs.emulya.com
Port: 443
- &Orgservicer
Name: servicer
ID: servicerMSP
MSPDir: ./crypto-config/servicer/msp
AnchorPeers:
- Host: peer1.servicer.intainabs.emulya.com
Port: 443
- &Orgissuer
Name: issuer
ID: issuerMSP
MSPDir: ./crypto-config/issuer/msp
AnchorPeers:
- Host: peer1.issuer.intainabs.emulya.com
Port: 443
- &Orgoriginator
Name: originator
ID: originatorMSP
MSPDir: ./crypto-config/originator/msp
AnchorPeers:
- Host: peer1.originator.intainabs.emulya.com
Port: 443
Orderer: &OrdererDefaults
OrdererType: etcdraft
Addresses:
- orderer1.originator.demointainabs.emulya.com:443
- orderer2.trustee.demointainabs.emulya.com:443
- orderer2.issuer.demointainabs.emulya.com:443
- orderer1.trustee.demointainabs.emulya.com:443
- orderer1.issuer.demointainabs.emulya.com:443
- orderer1.originator.intainabs.emulya.com:443
- orderer2.trustee.intainabs.emulya.com:443
- orderer2.issuer.intainabs.emulya.com:443
- orderer1.trustee.intainabs.emulya.com:443
- orderer1.issuer.intainabs.emulya.com:443
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- kafka-hlf.blockchain-kz.svc.cluster.local:9092
EtcdRaft:
Consenters:
- Host: orderer1.originator.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
- Host: orderer2.trustee.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
- Host: orderer2.issuer.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
- Host: orderer1.trustee.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
- Host: orderer1.issuer.demointainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
- Host: orderer1.originator.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
ServerTLSCert: crypto-config/originator/orderer-1originator/tls/server.crt
- Host: orderer2.trustee.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-2trustee/tls/server.crt
- Host: orderer2.issuer.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-2issuer/tls/server.crt
- Host: orderer1.trustee.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
ServerTLSCert: crypto-config/trustee/orderer-1trustee/tls/server.crt
- Host: orderer1.issuer.intainabs.emulya.com
Port: 443
ClientTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
ServerTLSCert: crypto-config/issuer/orderer-1issuer/tls/server.crt
Organizations:
Application: &ApplicationDefaults
Organizations:
Profiles:
BaseGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *Orgoriginator
- *Orgtrustee
- *Orgissuer
- *Orgoriginator
- *Orgtrustee
- *Orgissuer
Consortiums:
MyConsortium:
Organizations:
- *Orginvestor
- *Orgtrustee
- *Orgwhlender
- *Orgservicer
- *Orgissuer
- *Orgoriginator
- *Orginvestor
- *Orgtrustee
- *Orgwhlender
- *Orgservicer
- *Orgissuer
- *Orgoriginator
BaseChannel:
Consortium: MyConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Orgoriginator
- *Orgissuer
- *Orgservicer
- *Orgwhlender
- *Orgtrustee
- *Orginvestor
- *Orgoriginator
- *Orgissuer
- *Orgservicer
- *Orgwhlender
- *Orgtrustee
- *Orginvestor
I am currently setting up a Hyperledger Fabric network in Kubernetes. My network includes 6 organizations and 5 orderer nodes. The orderers use Raft consensus. I have done the following:
Set up CA and TLS CA servers
Set up the ingress controller
Generated crypto material for peers and orderers
Generated channel artifacts
Started peers and orderers
The next step is to create the channel on the orderer for each org and join the peers in each org to the channel. I am unable to create the channel. When requesting channel creation, I get the following error:
SERVICE UNAVAILABLE - No raft leader.
How can I fix this issue?
Can anyone please guide me on this. Thanks in advance.
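A first debugging step for "no Raft leader" is usually to check each orderer's logs for election activity or TLS handshake failures between the consenters, since leader election can only succeed if the orderers can reach each other on the addresses and certificates listed in configtx.yaml. A rough sketch (the namespace follows the naming used in the script above; the pod name is a placeholder):
kubectl logs -n blockchain-trustee <orderer-pod> --tail=200 | grep -iE 'raft|leader|tls handshake'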

AWS ELB redirect HTTP to HTTPS

I am using this CloudFormation template https://github.com/widdix/aws-cf-templates/blob/master/jenkins/jenkins2-ha-agents.yaml to set up a Jenkins server.
I now want to add an SSL certificate to the ELB and have modified https://github.com/widdix/aws-cf-templates/blob/master/jenkins/jenkins2-ha-agents.yaml#L511-L519 as follows:
MasterELBListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: "redirect"
RedirectConfig:
Protocol: "HTTPS"
Port: "443"
Host: "#{host}"
Path: "/#{path}"
Query: "#{query}"
StatusCode: "HTTP_301"
LoadBalancerArn: !Ref MasterELB
Port: 80
Protocol: HTTP
MasterHTTPSListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
Certificates:
# - CertificateArn: !Ref CertificateARN
- CertificateArn: !FindInMap
- SSLmapping
- ssl1
- !FindInMap
- AWSRegionsNameMapping
- !Ref 'AWS::Region'
- RegionName
DefaultActions:
- Type: forward
TargetGroupArn: !Ref MasterELBTargetGroup
LoadBalancerArn: !Ref MasterELB
Port: 443
Protocol: HTTPS
But when I try to access the site, it just times out.
Any advice is much appreciated.
OK, I needed to open access to port 443 on the ELB, with:
MasterELBHTTPSSGInWorld:
Type: 'AWS::EC2::SecurityGroupIngress'
Condition: HasNotAuthProxySecurityGroup
Properties:
GroupId: !Ref MasterELBSG
IpProtocol: tcp
FromPort: 443
ToPort: 443
CidrIp: '0.0.0.0/0'
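With the security group opened, the HTTP-to-HTTPS redirect and the HTTPS listener can be verified from outside; a quick check with curl (the ELB DNS name is a placeholder):
# Expect a 301 from the HTTP listener...
curl -sI http://my-elb.example.com/ | head -n 3
# ...and a normal response from the HTTPS listener
curl -sIk https://my-elb.example.com/ | head -n 3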