How to change the launch type of an ECS cluster in my CloudFormation template? - aws-cloudformation

I have a CloudFormation template that creates an ECS cluster (Fargate launch type), a service, and the other required resources. Now I want to change the launch type from Fargate to EC2. Here is my CloudFormation template:
AWSTemplateFormatVersion: 2010-09-09
Description: The CloudFormation template for the Fargate ECS Cluster.
Parameters:
Stage:
Type: String
ContainerPort:
Type: Number
ImageURI:
Type: String
Resources:
# Create an ECS Cluster
Cluster:
Type: AWS::ECS::Cluster
Properties:
ClusterName: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'Cluster']]
# Create a VPC
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 172.10.0.0/16
EnableDnsHostnames: True
EnableDnsSupport: True
# Create a Subnet
SubnetA:
Type: AWS::EC2::Subnet
Properties:
CidrBlock: 172.10.1.0/24
VpcId: !Ref VPC
AvailabilityZone: !Join ['', [!Ref "AWS::Region", 'a']]
# Create a Subnet
SubnetB:
Type: AWS::EC2::Subnet
Properties:
CidrBlock: 172.10.2.0/24
VpcId: !Ref VPC
AvailabilityZone: !Join ['', [!Ref "AWS::Region", 'b']]
# Create a route table to allow access to internet
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
# Create a Route to allow access to internet using an internet gateway
PublicRoute:
Type: AWS::EC2::Route
DependsOn: VPCInternetGatewayAttachment
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
# Attach Public Route to SubnetA
SubnetAPublicRouteAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PublicRouteTable
SubnetId: !Ref SubnetA
# Attach Public Route to SubnetB
SubnetBPublicRouteAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PublicRouteTable
SubnetId: !Ref SubnetB
# Create an Internet Gateway
InternetGateway:
Type: AWS::EC2::InternetGateway
# Attach the internet gateway to the VPC
VPCInternetGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId: !Ref InternetGateway
VpcId: !Ref VPC
# Create Access Role for ECS-Tasks
ExecutionRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'ExecutionRole']]
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: ecs-tasks.amazonaws.com
Action: 'sts:AssumeRole'
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
# Create a TaskDefinition with container details
TaskDefinition:
Type: AWS::ECS::TaskDefinition
Properties:
Memory: 1024
Cpu: 512
NetworkMode: awsvpc
RequiresCompatibilities:
- 'FARGATE'
TaskRoleArn: !Ref ExecutionRole
ExecutionRoleArn: !Ref ExecutionRole
ContainerDefinitions:
- Name: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'Container']]
Image: !Ref ImageURI
PortMappings:
- ContainerPort: !Ref ContainerPort
HostPort: !Ref ContainerPort
# Create a security group for the load balancer and open port 80 inbound from the internet
LoadBalancerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'LoadBalancerSecurityGroup']]
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
# Create a security group for the containers and allow inbound traffic on the container port from the load balancer security group
ContainerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'ContainerSecurityGroup']]
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: !Ref ContainerPort
ToPort: !Ref ContainerPort
SourceSecurityGroupId: !Ref LoadBalancerSecurityGroup
# Create a LoadBalancer and attach the Security group and Subnets
LoadBalancer:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
IpAddressType: ipv4
Name: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'LoadBalancer']]
Scheme: internet-facing
SecurityGroups:
- !Ref LoadBalancerSecurityGroup
Subnets:
- !Ref SubnetA
- !Ref SubnetB
Type: application
# Create a TargetGroup for HTTP port 80
TargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Name: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'TargetGroup']]
Port: 80
Protocol: HTTP
TargetType: ip
VpcId: !Ref VPC
# Create a LoadBalancerListener and attach the TargetGroup and LoadBalancer
LoadBalancerListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- TargetGroupArn: !Ref TargetGroup
Type: forward
LoadBalancerArn: !Ref LoadBalancer
Port: 80
Protocol: HTTP
# Create an ECS Service and attach the Cluster, TaskDefinition, Subnets, TargetGroup and SecurityGroup created above
ECSService:
Type: AWS::ECS::Service
DependsOn: LoadBalancerListener
Properties:
ServiceName: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'ECSService']]
Cluster: !Ref Cluster
TaskDefinition: !Ref TaskDefinition
DesiredCount: 2
LaunchType: FARGATE
NetworkConfiguration:
AwsvpcConfiguration:
AssignPublicIp: ENABLED
Subnets:
- !Ref SubnetA
- !Ref SubnetB
SecurityGroups:
- !Ref ContainerSecurityGroup
LoadBalancers:
- ContainerName: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'Container']]
ContainerPort: !Ref ContainerPort
TargetGroupArn: !Ref TargetGroup
Can someone guide me on what changes I have to make in this template to convert it to the EC2 launch type? I am new to CloudFormation and really don't have much idea how to do this.
I can't use any other template because this CloudFormation stack is linked to another stack. I am actually following this tutorial, which uses the Fargate launch type, but I want the EC2 launch type.

The main thing is that LaunchType: FARGATE needs to be changed to LaunchType: EC2.
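For example, only the launch type line in the existing ECSService resource changes (all other properties stay as they are):
ECSService:
  Type: AWS::ECS::Service
  Properties:
    # ...all other properties unchanged...
    LaunchType: EC2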
The second biggest thing is that you would need to add EC2 resources to the cluster to be able to land your tasks (with Fargate you don't need that but if you opt to use the EC2 launch type you have to have a cluster with EC2 instances).
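Something along these lines could work as a starting point (a rough sketch rather than a drop-in addition: the ECSAMI parameter, the ECSInstanceRole / ECSInstanceProfile / ECSLaunchTemplate / ECSAutoScalingGroup names, and the t3.small instance type are placeholders I picked for illustration; the instances also need outbound Internet access to register with ECS and pull images):
# Under Parameters: resolve the ECS-optimized AMI from the public SSM parameter
ECSAMI:
  Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
  Default: /aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id
# Under Resources: IAM role/profile, launch template and Auto Scaling group
# that provide EC2 container instances for the cluster
ECSInstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: 'sts:AssumeRole'
    ManagedPolicyArns:
      - 'arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role'
ECSInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - !Ref ECSInstanceRole
ECSLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      ImageId: !Ref ECSAMI              # ECS-optimized AMI from the SSM parameter above
      InstanceType: t3.small            # placeholder size
      IamInstanceProfile:
        Arn: !GetAtt ECSInstanceProfile.Arn
      SecurityGroupIds:
        - !Ref ContainerSecurityGroup
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Register the instance with the ECS cluster
          echo ECS_CLUSTER=${Cluster} >> /etc/ecs/ecs.config
ECSAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier:
      - !Ref SubnetA
      - !Ref SubnetB
    LaunchTemplate:
      LaunchTemplateId: !Ref ECSLaunchTemplate
      Version: !GetAtt ECSLaunchTemplate.LatestVersionNumber
    MinSize: '2'
    MaxSize: '4'
    DesiredCapacity: '2'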
Third, you may need to add EC2 to the RequiresCompatibilities section of your task definition:
RequiresCompatibilities:
- 'FARGATE'
- 'EC2'
Fourth, assigning public IPs to tasks (AssignPublicIp: ENABLED) is not a best practice, and it actually won't work with the EC2 launch type (see here for example). You should disable this, BUT this means you will need to add a NAT gateway to your VPC so your tasks can reach the Internet (and pull the container image from ECR). An alternative would be to add ECR private (VPC) endpoints to your VPC to avoid the Internet "long haul".
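A rough sketch of the NAT gateway option, fitted to the resources in your template (the NatGatewayEIP / NatGateway / PrivateSubnet / PrivateRouteTable names and the 172.10.3.0/24 CIDR are illustrative additions, not part of the original stack); the service would then use the private subnet(s) with AssignPublicIp: DISABLED:
NatGatewayEIP:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc
NatGateway:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt NatGatewayEIP.AllocationId
    SubnetId: !Ref SubnetA              # the NAT gateway lives in a public subnet
PrivateSubnet:
  Type: AWS::EC2::Subnet
  Properties:
    CidrBlock: 172.10.3.0/24            # illustrative CIDR inside the existing VPC
    VpcId: !Ref VPC
    AvailabilityZone: !Join ['', [!Ref "AWS::Region", 'a']]
PrivateRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
PrivateRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway
PrivateSubnetRouteAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    SubnetId: !Ref PrivateSubnet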
There may be other things that need tuning but these are the biggest.
PS why do you need to move to EC2 out of curiosity?

Related

How do I deploy envoy proxy as kubernetes load balancer

How do I configure an Envoy proxy as a load balancer to redirect the necessary traffic to pods?
Here is the Kubernetes Service file:
kind: Service
metadata:
name: files
spec:
type: ClusterIP
selector:
app: filesservice
ports:
- name: filesservice
protocol: TCP
port: 80
targetPort: 80
And for the envoy configuration file
listeners:
- name: listener_0
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 10000
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
access_log:
- name: envoy.file_access_log
config:
path: /var/log/envoy/access.log
stat_prefix: ingress_http
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/f" }
route: {host_rewrite: files, cluster: co_clusters, timeout: 60s}
http_filters:
- name: envoy.router
clusters:
- name: co_clusters
connect_timeout: 0.25s
type: STRICT_DNS
dns_lookup_family: V4_ONLY
lb_policy: LEAST_REQUEST
hosts:
- socket_address:
address: files
I have tried to change the cluster configuration to
- name: co_clusters
connect_timeout: 0.25s
type: STRICT_DNS
lb_policy: ROUND_ROBIN
hostname: files.default.svc.cluster.local
However, none of this works; from the error logs I am getting this output:
[2023-01-09 04:15:53.250][9][critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': Protobuf message (type envoy.config.bootstrap.v3.Bootstrap reason INVALID_ARGUMENT:(static_resources.clusters[0]) hosts: Cannot find field.) has unknown fields
[2023-01-09 04:15:53.250][9][info][main] [source/server/server.cc:961] exiting
This is the tutorial I tried following but still no joy.
The error is due to changes made in Envoy across upgrades; you can see that in this issue.
It looks like you are following an outdated tutorial, so the config format may differ depending on the version.
Attaching a newer version of the yaml file from GitHub; cross-check it with your existing yaml file and note the changes. You can also check this on the Envoy website using this doc.
Try the below yaml file and let me know if it works.
listeners:
- name: listener_0
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 8443
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
access_log:
- name: envoy.access_loggers.stdout
typed_config:
"#type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
codec_type: AUTO
stat_prefix: ingress_https
clusters:
- name: echo-grpc
connect_timeout: 0.5s
type: STRICT_DNS
dns_lookup_family: V4_ONLY
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
load_assignment:
cluster_name: echo-grpc
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: echo-grpc.default.svc.cluster.local
port_value: 8081
Note: I added @type by referring to the links, so make changes if any are required.
Attaching a similar issue for your reference.

EnvoyFilter is not applied when a readiness gate exists and the health check fails in Istio

We have an EnvoyFilter for route http request to upstream application port as below:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: my-envoy-filter
spec:
workloadSelector:
labels:
app: my-app
configPatches:
- applyTo: ROUTE_CONFIGURATION
match:
context: SIDECAR_INBOUND
routeConfiguration:
portNumber: 80
vhost:
name: "inbound|http|80"
patch:
operation: MERGE
value:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*.myservice.com"
routes:
- match: { prefix: "/" }
route:
cluster: mycluster
priority: HIGH
- applyTo: CLUSTER
match:
context: SIDECAR_INBOUND
patch:
operation: ADD
value:
name: mycluster
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: mycluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 9999
It works properly when I apply it to workloads that don't have any readiness gate property.
However, if a workload has its own readiness gate and the readiness check failed, then the EnvoyFilter doesn't seem to be applied properly.
Is this an intended result? Are the proxy configurations applied after the readiness gate has confirmed the health of the proxy?
Is there any way to apply proxy configurations such as an EnvoyFilter before the readiness gate confirmation?

gRPC and gRPC-web backend not connecting through kubernetes nginx ingress

I have a gRPC server set up in AWS EKS and use the Nginx ingress controller with a network load balancer, with Envoy in front of the gRPC service, so the chain looks like this: NLB >> Ingress >> Envoy >> gRPC.
The problem is that when we make a request from BloomRPC, the request is not landing in Envoy.
What you expected to happen:
It should connect requests from outside to the gRPC service. I need to use gRPC and gRPC-web with SSL and am looking for the best solution for this.
How to reproduce it (as minimally and precisely as possible):
Spin up a normal gRPC and gRPC-web service and connect to the gRPC service using Envoy; below is the conf I used for Envoy and the nginx ingress controller. I also tried the nginx-ingress-controller:0.30.0 image, because it helps to connect HTTP2 and gRPC with an nginx ingress rule.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/use-http2: enabled
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
name: tk-ingress
spec:
tls:
- hosts:
- test.domain.com
secretName: tls-secret
rules:
- host: test.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: envoy
port:
number: 80
Envoy - conf
admin:
access_log_path: /dev/stdout
address:
socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 8803
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
access_log:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
#
# You can also configure this extension with the qualified
# name envoy.access_loggers.http_grpc
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
- name: envoy.access_loggers.file
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
# Console output
path: /dev/stdout
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "test.domain.com"
routes:
- match:
prefix: /
grpc:
route:
cluster: tkmicro
cors:
allow_origin_string_match:
- prefix: "*"
allow_methods: GET, PUT, DELETE, POST, OPTIONS
# custom-header-1 is just an example. the grpc-web
# repository was missing grpc-status-details-bin header
# which used in a richer error model.
# https://grpc.io/docs/guides/error/#richer-error-model
allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type,channel,api-key,lang
expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
max_age: "1728000"
http_filters:
- name: envoy.filters.http.grpc_web
# This line is optional, but adds clarity to the configuration.
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
- name: envoy.filters.http.cors
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
- name: envoy.filters.http.grpc_json_transcoder
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
proto_descriptor: "/home/ubuntu/envoy/sync.pb"
ignore_unknown_query_parameters: true
services:
- "com.tk.system.sync.Synchronizer"
print_options:
add_whitespace: true
always_print_primitive_fields: true
always_print_enums_as_ints: true
preserve_proto_field_names: true
- name: envoy.filters.http.router
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
common_tls_context:
alpn_protocols: "h2"
clusters:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
- name: tkmicro
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: round_robin
load_assignment:
cluster_name: tkmicro
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 172.20.120.201
port_value: 8081
http2_protocol_options: {} # Force HTTP/2
Anything else we need to know?:
From BloomRPC I am getting this error:
"error": "14 UNAVAILABLE: Trying to connect an http1.x server"
Environment:
Kubernetes version (use kubectl version): GitVersion:"v1.21.1"
Cloud provider or hardware configuration: AWS -EKS

How to measure Hyperledger Fabric performance using Hyperledger Caliper in Kubernetes setting

My Fabric network is deployed in a local Kubernetes cluster (Vagrant) using the following tutorial:
https://medium.com/swlh/how-to-implement-hyperledger-fabric-external-chaincodes-within-a-kubernetes-cluster-fd01d7544523
The pods are up and running, and I was able to insert/read marbles from fabric-cli.
I was not able to configure caliper to measure the performance of my deployment. I ran the caliper 0.4.2 docker image in the same 'hyperledger' namespace.
the caliper deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: caliper
name: caliper
namespace: hyperledger
spec:
selector:
matchLabels:
app: caliper
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: caliper
spec:
containers:
- env:
- name: CALIPER_BIND_SUT
value: fabric:2.2
- name: CALIPER_BENCHCONFIG
value: benchmarks/myAssetBenchmark.yaml
- name: CALIPER_NETWORKCONFIG
value: networks/networkConfig3.yaml
- name: CALIPER_FABRIC_GATEWAY_ENABLED
value: "true"
- name: CALIPER_FLOW_ONLY_TEST
value: "true"
image: hyperledger/caliper:0.4.2
name: caliper
command:
- caliper
args:
- launch
- manager
tty: true
volumeMounts:
- mountPath: /hyperledger/caliper/workspace
name: caliper-workspace
- mountPath: /hyperledger/caliper/fabric-samples
name: fabric-workspace
workingDir: /hyperledger/caliper/workspace
restartPolicy: Always
volumes:
- name: caliper-workspace
hostPath:
path: /home/vagrant/caliper-workspace
type: Directory
- name: fabric-workspace
hostPath:
path: /home/vagrant/fabric-external-chaincodes/
type: Directory
I followed https://hyperledger.github.io/caliper/v0.4.2/fabric-tutorial/tutorials-fabric-existing/ documentation on running caliper alongside existing fabric network.
the networkconfig3.yaml file
name: Fabric
version: '2.0.0'
mutual-tls: true
caliper:
blockchain: fabric
sutOptions:
mutualTls: true
channels:
- channelName: mychannel
contracts:
- id: marbles
organizations:
- mspid: org1MSP
identities:
certificates:
- name: 'Admin'
admin: true
clientPrivateKey:
path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/Admin@org1/msp/keystore/priv_sk'
clientSignedCert:
path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/Admin@org1/msp/signcerts/Admin@org1-cert.pem'
- name: 'User1'
clientPrivateKey:
path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/User1@org1/msp/keystore/priv_sk'
clientSignedCert:
path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/User1@org1/msp/signcerts/User1@org1-cert.pem'
connectionProfile:
path: 'networks/profile-org1.yaml'
discover: true
the org1 connection profile will look like
name: Fabric
version: '1.0.0'
client:
organization: org1
connection:
timeout:
peer:
endorser: '300'
organizations:
org1:
mspid: org1MSP
peers:
- peer0-org1
peers:
peer0-org1:
url: grpcs://peer0-org1:7051
grpcOptions:
ssl-target-name-override: peer0-org1
grpc.keepalive_time_ms: 600000
tlsCACerts:
path: ../fabric-samples/crypto-config/peerOrganizations/org1/peers/peer0-org1/msp/tlscacerts/tlsca.org1-cert.pem
the myAssetBenchmark.yaml file
test:
name: marble-benchmark
description: test benchmark
workers:
type: local
number: 2
rounds:
- label: initMarble
description: init marbles benchmark
txNumber: 100
rateControl:
type: fixed-load
opts:
tps: 25
workload:
module: workload/init.js
monitor:
type:
- none
observer:
type: local
interval: 1
The caliper is failing because the connection to the peers is not going through.
2021-01-05T04:37:55.592Z - ^[[32minfo^[[39m: [NetworkConfig]: buildPeer - Unable to connect to the endorser peer0-org1 due to Error: Failed to connect before the deadline on Endorser- name: peer0-org1, url:grpcs://peer0-org1:7051, connected:false, connectAttempted:true
some more error logs
2021-01-04T01:08:35.466Z - error: [DiscoveryService]: send[mychannel] - no discovery results
2021-01-04T01:08:38.473Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Discoverer- name: peer0-org1, url:grpcs://peer0-org1:7051, connected:false, connectAttempted:true
2021-01-04T01:08:38.473Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer0-org1 url:grpcs://peer0-org1:7051 timeout:3000
2021-01-04T01:08:38.473Z - error: [ServiceEndpoint]: ServiceEndpoint grpcs://peer0-org1:7051 reset connection failed :: Error: Failed to connect before the deadline on Discoverer- name: peer0-org1, url:grpcs://peer0-org1:7051, connected:false, connectAttempted:true
What are the issues with my current configuration?
Is there any blog or documentation I can look at for more details?
Your connection profile doesn't look correct, as you haven't specified the tlsCACerts information correctly. Since you need to use a connection profile that works with the node SDK 2.2, the following might work:
name: Fabric
organizations:
org1:
mspid: org1MSP
peers:
- peer0-org1
peers:
peer0-org1:
url: grpcs://peer0-org1:7051
tlsCACerts:
path: ../fabric-samples/crypto-config/peerOrganizations/org1/peers/peer0-org1/msp/tlscacerts/tlsca.org1-cert.pem
There are some details about the format the node SDK 2.2 expects for the connection profile here, but I'm not sure how correct they are: https://hyperledger.github.io/fabric-sdk-node/release-2.2/tutorial-commonconnectionprofile.html

Using RBAC Network Filter to block ingress or egress to/from service in Envoy Proxy

I want to try to configure a filter in Envoy Proxy to block ingress and egress to/from the service based on some IPs, hostname, routing table, etc.
I have searched the documentation and see it's possible, but didn't find any examples of its usage.
Can someone point out an example of how it can be done?
One configuration example is present on this page:
https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto
But this is for a service account, like in Kubernetes.
The closest to what I want, I can see here in this page:
https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics
Mentioned as, "The filter supports configuration with either a safe-list (ALLOW) or block-list (DENY) set of policies based on properties of the connection (IPs, ports, SSL subject)."
But it doesn't show how to do it.
I have figured out something like this:
network_filters:
- name: service-access
config:
rules:
action: ALLOW
policies:
"service-access":
principals:
source_ip: 192.168.135.211
permissions:
- destination_ip: 0.0.0.0
- destination_port: 443
But I am not able to apply this network filter; all the configurations give me a configuration error.
I would recommend Istio. You can set up a Rule that will deny all traffic not originating from 192.168.0.1 IP.
apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
name: denyreviewsv3handler
spec:
status:
code: 7
message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
metadata:
name: denyreviewsv3request
spec:
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: denyreviewsv3
spec:
match: source.ip != ip("192.168.0.1")
actions:
- handler: denyreviewsv3handler.denier
instances: [ denyreviewsv3request.checknothing ]
You can match other attributes specified in the Attribute Vocabulary; for example, to block the curl command: match: match(request.headers["user-agent"], "curl*")
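For example, reusing the denier handler and instance defined above, a rule that blocks curl clients could look roughly like this (an untested sketch; the denycurl name is just a placeholder):
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: denycurl
spec:
  # deny any request whose User-Agent starts with "curl"
  match: match(request.headers["user-agent"], "curl*")
  actions:
  - handler: denyreviewsv3handler.denier
    instances: [ denyreviewsv3request.checknothing ]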
More about Traffic Management and Denials and White/Black Listing can be found in Istio documentation.
I can also recommend you this istio-workshop published by szihai.
This is a complete RBAC filter config given to me by the Envoy team in their GitHub issue. I haven't tested it out though.
static_resources:
listeners:
- name: "ingress listener"
address:
socket_address:
address: 0.0.0.0
port_value: 9001
filter_chains:
filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: local_service
per_filter_config:
envoy.filters.http.rbac:
rbac:
rules:
action: ALLOW
policies:
"per-route-rule":
permissions:
- any: true
principals:
- any: true
http_filters:
- name: envoy.filters.http.rbac
config:
rules:
action: ALLOW
policies:
"general-rules":
permissions:
- any: true
principals:
- any: true
- name: envoy.router
config: {}
access_log:
name: envoy.file_access_log
config: {path: /dev/stdout}
clusters:
- name: local_service
connect_timeout: 0.250s
type: static
lb_policy: round_robin
http2_protocol_options: {}
hosts:
- socket_address:
address: 127.0.0.1
port_value: 9000
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8080