Using RBAC Network Filter to block ingress or egress to/from service in Envoy Proxy - kubernetes

I want to configure a filter in Envoy Proxy to block ingress and egress to/from a service based on IPs, hostname, routing table, etc.
I have searched the documentation and see it's possible, but I didn't find any examples of its usage.
Can someone point out an example of how it can be done?
One configuration example is present on this page:
https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto
But that one is based on a service account, as in Kubernetes.
The closest to what I want is on this page:
https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics
It says: "The filter supports configuration with either a safe-list (ALLOW) or block-list (DENY) set of policies based on properties of the connection (IPs, ports, SSL subject)."
But it doesn't show how to do it.
I have figured out something like this:
network_filters:
- name: service-access
  config:
    rules:
      action: ALLOW
      policies:
        "service-access":
          principals:
            source_ip: 192.168.135.211
          permissions:
          - destination_ip: 0.0.0.0
          - destination_port: 443
But I am not able to apply this network filter; every configuration I try gives me a configuration error.
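For what it's worth, the network-level RBAC filter described in that doc normally sits inside a listener's filter_chains rather than under a top-level network_filters key. A minimal, untested sketch using the v3 API (the stat prefix is arbitrary, and the IP/port values simply mirror the attempt above):

filter_chains:
- filters:
  - name: envoy.filters.network.rbac
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.rbac.v3.RBAC
      stat_prefix: service_access
      rules:
        action: ALLOW
        policies:
          "service-access":
            permissions:
            # allow connections to port 443 only...
            - destination_port: 443
            principals:
            # ...and only from this source address
            - remote_ip:
                address_prefix: 192.168.135.211
                prefix_len: 32

Being a network filter, it can only act on connection-level properties (IPs, ports, SSL subject); hostname- or header-based rules need the HTTP RBAC filter instead.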

I would recommend Istio. You can set up a rule that denies all traffic not originating from the IP 192.168.0.1:
apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
name: denyreviewsv3handler
spec:
status:
code: 7
message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
metadata:
name: denyreviewsv3request
spec:
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: denyreviewsv3
spec:
match: source.ip != ip("192.168.0.1")
actions:
- handler: denyreviewsv3handler.denier
instances: [ denyreviewsv3request.checknothing ]
You can match other attributes specified in the Attribute Vocabulary; for example, to block the curl command: match: match(request.headers["user-agent"], "curl*").
More about Traffic Management and Denials and White/Black Listing can be found in the Istio documentation.
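For instance, reusing the denier handler and checknothing instance defined above, a rule that blocks curl clients would look roughly like this (the rule name denycurl is just a placeholder):

apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: denycurl
spec:
  # deny any request whose user-agent looks like curl
  match: match(request.headers["user-agent"], "curl*")
  actions:
  - handler: denyreviewsv3handler.denier
    instances: [ denyreviewsv3request.checknothing ]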
I can also recommend this istio-workshop published by szihai.

This is a complete RBAC filter config given to me by the Envoy team in their GitHub issue. I haven't tested it out, though.
static_resources:
  listeners:
  - name: "ingress listener"
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 9001
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: local_service
                per_filter_config:
                  envoy.filters.http.rbac:
                    rbac:
                      rules:
                        action: ALLOW
                        policies:
                          "per-route-rule":
                            permissions:
                            - any: true
                            principals:
                            - any: true
          http_filters:
          - name: envoy.filters.http.rbac
            config:
              rules:
                action: ALLOW
                policies:
                  "general-rules":
                    permissions:
                    - any: true
                    principals:
                    - any: true
          - name: envoy.router
            config: {}
          access_log:
          - name: envoy.file_access_log
            config: {path: /dev/stdout}
  clusters:
  - name: local_service
    connect_timeout: 0.250s
    type: static
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 9000
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8080
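Note that both policies above allow everything (any: true for permissions and principals), so the filter passes all traffic through. To actually restrict by client IP, as the question asks, the principals list can presumably be narrowed to a CIDR range instead, for example:

principals:
# only connections from this address are allowed
- source_ip:
    address_prefix: 192.168.135.211
    prefix_len: 32

Also, running envoy --mode validate -c envoy.yaml checks a configuration for errors without starting the proxy, which helps narrow down the kind of configuration error mentioned in the question.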

Related

How do I deploy Envoy proxy as a Kubernetes load balancer

How do I create an Envoy proxy as a load balancer to redirect the necessary traffic to pods?
Here is the Kubernetes Service file:
apiVersion: v1
kind: Service
metadata:
  name: files
spec:
  type: ClusterIP
  selector:
    app: filesservice
  ports:
  - name: filesservice
    protocol: TCP
    port: 80
    targetPort: 80
And here is the Envoy configuration file:
listeners:
- name: listener_0
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 10000
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      config:
        access_log:
        - name: envoy.file_access_log
          config:
            path: /var/log/envoy/access.log
        stat_prefix: ingress_http
        codec_type: AUTO
        route_config:
          name: local_route
          virtual_hosts:
          - name: local_service
            domains: ["*"]
            routes:
            - match: { prefix: "/f" }
              route: {host_rewrite: files, cluster: co_clusters, timeout: 60s}
        http_filters:
        - name: envoy.router
clusters:
- name: co_clusters
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: LEAST_REQUEST
  hosts:
  - socket_address:
      address: files
I have tried to change the cluster configuration to
- name: co_clusters
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  hostname: files.default.svc.cluster.local
However, none of this works; from the error logs I am getting this output:
[2023-01-09 04:15:53.250][9][critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': Protobuf message (type envoy.config.bootstrap.v3.Bootstrap reason INVALID_ARGUMENT:(static_resources.clusters[0]) hosts: Cannot find field.) has unknown fields
[2023-01-09 04:15:53.250][9][info][main] [source/server/server.cc:961] exiting
This is the tutorial I tried following but still no joy.
The error is due to changes made to Envoy across upgrades; you can see that in this issue.
It looks like you are following an outdated tutorial, and the config differs depending on the version.
Below is a newer version of the YAML file from GitHub; cross-check it with your existing YAML file and note the changes. You can also check this on the Envoy website using this doc.
Try the YAML below and let me know if it works.
listeners:
- name: listener_0
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 8443
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        access_log:
        - name: envoy.access_loggers.stdout
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
        codec_type: AUTO
        stat_prefix: ingress_https
clusters:
- name: echo-grpc
  connect_timeout: 0.5s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  http2_protocol_options: {}
  load_assignment:
    cluster_name: echo-grpc
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: echo-grpc.default.svc.cluster.local
              port_value: 8081
Note: I added "@type" by referring to the linked docs, so make changes if any are required.
Attaching a similar issue for your reference.
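Applied to the question's files Service, the same load_assignment pattern would replace the unsupported hosts/hostname fields on the cluster. A sketch (the default namespace and port 80 are assumptions taken from the Service manifest above):

clusters:
- name: co_clusters
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: co_clusters
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              # Kubernetes Service DNS name; assumes the default namespace
              address: files.default.svc.cluster.local
              port_value: 80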

How to create Traefik IngressRoute out of Traefik configuration?

I want to deploy Zitadel in my Kubernetes cluster, but I'm struggling to get the Traefik IngressRoute to work with Zitadel. It's a problem with HTTP/2 and gRPC forwarding, but I can't figure out which options are needed.
I created a Zitadel Helm deployment with these options:
replicaCount: 1
zitadel:
  masterkey: "changeM3"
  configmapConfig:
    ExternalPort: 443
    ExternalDomain: 'id.example.com'
    ExternalSecure: true
    TLS:
      Enabled: false
  secretConfig:
    Database:
      cockroach:
        User:
          Password: "cockroach-password"
cockroachdb:
  single-node: true
  statefulset:
    replicas: 1
For the reverse proxy configuration, the Zitadel docs have a configuration for Traefik, but only as a static configuration file, not as a Kubernetes configuration:
entrypoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
tls:
  stores:
    default:
      defaultCertificate:
providers:
  file:
    filename: /etc/traefik/traefik.yaml
http:
  middlewares:
    zitadel:
      headers:
        isDevelopment: false
        allowedHosts:
        - 'localhost'
    redirect-to-https:
      redirectScheme:
        scheme: https
        port: 443
        permanent: true
  routers:
    router0:
      entryPoints:
      - web
      middlewares:
      - redirect-to-https
      rule: 'HostRegexp(`localhost`, `{subdomain:[a-z]+}.localhost`)'
      service: zitadel
    router1:
      entryPoints:
      - websecure
      service: zitadel
      middlewares:
      - zitadel
      rule: 'HostRegexp(`localhost`, `{subdomain:[a-z]+}.localhost`)'
      tls:
        domains:
        - main: "localhost"
          sans:
          - "*.localhost"
          - "localhost"
  services:
    zitadel:
      loadBalancer:
        servers:
        - url: h2c://localhost:8080
        passHostHeader: true
I tried to convert this configuration to an IngressRoute, but the dashboard only loads the site's skeleton and gives an "Unknown Content-type received" error like the one described in this GitHub issue.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: zitadel
  namespace: apps
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`id.example.com`)
    kind: Rule
    services:
    - name: zitadel
      namespace: apps
      port: 8080
      scheme: h2c
      passHostHeader: true
  - match: Host(`id.example.com`)
    kind: Rule
    services:
    - name: zitadel
      namespace: apps
      port: 8080
      scheme: http
      passHostHeader: true
  tls:
    certResolver: letsencrypt-prod
    domains:
    - main: id.example.com
Am I missing something in my IngressRoute that causes that error?
The problem was the two routes of the IngressRoute overlapping. Removing the second route solves the problem:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: zitadel
  namespace: apps
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`id.example.com`)
    kind: Rule
    services:
    - name: zitadel
      namespace: apps
      port: 8080
      scheme: h2c
      passHostHeader: true
  tls:
    certResolver: letsencrypt-prod
    domains:
    - main: id.example.com

EnvoyFilter is not applied when a readiness gate exists and the health check fails in Istio

We have an EnvoyFilter to route HTTP requests to an upstream application port, as below:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: my-envoy-filter
spec:
  workloadSelector:
    labels:
      app: my-app
  configPatches:
  - applyTo: ROUTE_CONFIGURATION
    match:
      context: SIDECAR_INBOUND
      routeConfiguration:
        portNumber: 80
        vhost:
          name: "inbound|http|80"
    patch:
      operation: MERGE
      value:
        name: local_route
        virtual_hosts:
        - name: local_service
          domains:
          - "*.myservice.com"
          routes:
          - match: { prefix: "/" }
            route:
              cluster: mycluster
              priority: HIGH
  - applyTo: CLUSTER
    match:
      context: SIDECAR_INBOUND
    patch:
      operation: ADD
      value:
        name: mycluster
        type: LOGICAL_DNS
        connect_timeout: 0.25s
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: mycluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 9999
It works properly when applied to workloads that don't have any readiness gate.
However, if a workload has its own readiness gate and the readiness check fails, then the EnvoyFilter doesn't seem to be applied properly.
Is this an intended result? Are the proxy configurations applied only after the readiness gate has confirmed the health of the proxy?
Is there any way to apply proxy configurations such as EnvoyFilter before the readiness gate confirmation?

gRPC and gRPC-web backend not connecting through Kubernetes nginx ingress

I have a gRPC server set up in AWS EKS and use the NGINX Ingress Controller with a Network Load Balancer in front of gRPC, with Envoy in between, so the chain is: NLB >> Ingress >> Envoy >> gRPC.
The problem is that when we make a request from BloomRPC, the request does not land in Envoy.
What you expected to happen:
It should accept requests from outside to the gRPC service. I need to use gRPC and gRPC-web with SSL, and I'm looking for the best solution for this.
How to reproduce it (as minimally and precisely as possible):
Spin up normal gRPC and gRPC-web services and connect the gRPC service using Envoy; the Envoy config I used is below. For the NGINX ingress controller I also tried the nginx-ingress-controller:0.30.0 image, because it helps connect HTTP/2 and gRPC with an nginx ingress rule.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-http2: enabled
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  name: tk-ingress
spec:
  tls:
  - hosts:
    - test.domain.com
    secretName: tls-secret
  rules:
  - host: test.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: envoy
            port:
              number: 80
Envoy - conf
admin:
  access_log_path: /dev/stdout
  address:
    socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8803
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          access_log:
          # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
          #
          # You can also configure this extension with the qualified
          # name envoy.access_loggers.http_grpc
          # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
          - name: envoy.access_loggers.file
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              # Console output
              path: /dev/stdout
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "test.domain.com"
              routes:
              - match:
                  prefix: /
                  grpc:
                route:
                  cluster: tkmicro
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                # custom-header-1 is just an example. the grpc-web
                # repository was missing grpc-status-details-bin header
                # which used in a richer error model.
                # https://grpc.io/docs/guides/error/#richer-error-model
                allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type,channel,api-key,lang
                expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
                max_age: "1728000"
          http_filters:
          - name: envoy.filters.http.grpc_web
            # This line is optional, but adds clarity to the configuration.
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
          - name: envoy.filters.http.cors
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
              "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
          - name: envoy.filters.http.grpc_json_transcoder
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
              proto_descriptor: "/home/ubuntu/envoy/sync.pb"
              ignore_unknown_query_parameters: true
              services:
              - "com.tk.system.sync.Synchronizer"
              print_options:
                add_whitespace: true
                always_print_primitive_fields: true
                always_print_enums_as_ints: true
                preserve_proto_field_names: true
          - name: envoy.filters.http.router
            typed_config:
              # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            alpn_protocols: "h2"
  clusters:
  # https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
  - name: tkmicro
    type: LOGICAL_DNS
    connect_timeout: 0.25s
    lb_policy: round_robin
    load_assignment:
      cluster_name: tkmicro
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.20.120.201
                port_value: 8081
    http2_protocol_options: {} # Force HTTP/2
Anything else we need to know?:
From BloomRPC I am getting this error:
"error": "14 UNAVAILABLE: Trying to connect an http1.x server"
Environment:
Kubernetes version (use kubectl version): GitVersion:"v1.21.1"
Cloud provider or hardware configuration: AWS -EKS

Istio VirtualService: only allow certain APIs to be accessed by a list of IP addresses

I have two VirtualService config files that get merged into one by Istio.
I want a specific API (accounts/v1/invites) to only be accessible from a list of client IP addresses.
This API will only be called by an external backend server, and I want to restrict calls to that API to only the IP addresses I list.
My assumption was that listing the IP addresses in the hosts parameter would enforce this restriction, but instead I am not able to access the API at all.
Am I configuring it correctly, or am I making a grossly incorrect assumption?
--- Virtual Service yaml ---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mp-server-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - mp-server-gateway
  http:
  - match:
    - uri:
        exact: /private/api
    - uri:
        exact: /private/graphiql
    - uri:
        exact: /public/api
    route:
    - destination:
        host: mp-server
        port:
          number: 4000
    corsPolicy:
      allowOrigin:
      - 'https://xxxxxxx.com'
      allowMethods:
      - POST
      - GET
      - OPTIONS
      allowHeaders:
      - content-type
      - namespace
      - authorization
      maxAge: 500s
      allowCredentials: true
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mp-server-virtualservice-machine2machine
spec:
  hosts:
  - "138.91.154.99"
  - "54.183.64.135"
  - "54.67.77.38"
  - "54.67.15.170"
  - "54.183.204.205"
  - "54.173.21.107"
  - "54.85.173.28"
  - "35.167.74.121"
  - "35.160.3.103"
  - "35.166.202.113"
  - "52.14.40.253"
  - "52.14.38.78"
  - "52.14.17.114"
  - "52.71.209.77"
  - "34.195.142.251"
  - "52.200.94.42"
  gateways:
  - mp-server-gateway
  http:
  - match:
    - uri:
        exact: /accounts/v1/invites
    route:
    - destination:
        host: mp-server
        port:
          number: 4000
--- Gateway yaml ---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mp-server-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "*"
  - port:
      number: 443
      name: https-443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
I'm afraid that Istio's way of IP-based white/black listing access to services inside the mesh is through the use of a listchecker of IP_ADDRESSES type. Please check the example here.
According to the documentation (Traffic Management), the entries in a VirtualService's hosts field should be DNS names (not IP addresses) that the cluster DNS server can resolve to an FQDN.
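As a rough sketch of that listchecker approach (Mixer-era API; the resource names and the CIDR entries are placeholders to be filled in with the list above, and the match on request.path is an assumption):

apiVersion: config.istio.io/v1alpha2
kind: listchecker
metadata:
  name: whitelistip
spec:
  # check the caller IP against this safe list
  overrides: ["138.91.154.99/32", "54.183.64.135/32"]
  blacklist: false
  entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: listentry
metadata:
  name: sourceip
spec:
  value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
spec:
  # only run the check for the invites API
  match: request.path == "/accounts/v1/invites"
  actions:
  - handler: whitelistip.listchecker
    instances: [ sourceip.listentry ]

Keep in mind this Mixer-based API was deprecated and later removed, so it only applies to older Istio releases.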