How do I deploy Envoy proxy as a Kubernetes load balancer?

How do I deploy Envoy proxy as a load balancer that routes traffic to the right pods?
Here is the Kubernetes Service file:
apiVersion: v1
kind: Service
metadata:
  name: files
spec:
  type: ClusterIP
  selector:
    app: filesservice
  ports:
    - name: filesservice
      protocol: TCP
      port: 80
      targetPort: 80
And here is the Envoy configuration file:
listeners:
  - name: listener_0
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
      - filters:
          - name: envoy.http_connection_manager
            config:
              access_log:
                - name: envoy.file_access_log
                  config:
                    path: /var/log/envoy/access.log
              stat_prefix: ingress_http
              codec_type: AUTO
              route_config:
                name: local_route
                virtual_hosts:
                  - name: local_service
                    domains: ["*"]
                    routes:
                      - match: { prefix: "/f" }
                        route: { host_rewrite: files, cluster: co_clusters, timeout: 60s }
              http_filters:
                - name: envoy.router
clusters:
  - name: co_clusters
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: LEAST_REQUEST
    hosts:
      - socket_address:
          address: files
I have tried to change the cluster configuration to
- name: co_clusters
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  hostname: files.default.svc.cluster.local
However, none of this works. From the error logs I am getting this output:
[2023-01-09 04:15:53.250][9][critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': Protobuf message (type envoy.config.bootstrap.v3.Bootstrap reason INVALID_ARGUMENT:(static_resources.clusters[0]) hosts: Cannot find field.) has unknown fields
[2023-01-09 04:15:53.250][9][info][main] [source/server/server.cc:961] exiting
This is the tutorial I tried following but still no joy.

The error is due to changes Envoy made to its configuration API across upgrades; you can see that in this issue.
It looks like you are following an outdated tutorial, where the config may differ depending on the version.
Attaching a newer version of the YAML file from GitHub; cross-check it with your existing YAML file and note the changes. You can also check this on the Envoy website using this doc.
Try the below yaml file and let me know if it works.
listeners:
  - name: listener_0
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 8443
    filter_chains:
      - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              access_log:
                - name: envoy.access_loggers.stdout
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
              codec_type: AUTO
              stat_prefix: ingress_https
clusters:
  - name: echo-grpc
    connect_timeout: 0.5s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: echo-grpc
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: echo-grpc.default.svc.cluster.local
                    port_value: 8081
Note: the @type fields were added by referring to the links above, so make changes if required.
Attaching a similar issue for your reference.
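For your original co_clusters cluster specifically, the deprecated hosts field is what triggers the "Cannot find field" error; in the v3 API, endpoints are declared under load_assignment instead. A minimal sketch of that change, assuming your files Service stays in the default namespace on port 80:
clusters:
  - name: co_clusters
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: LEAST_REQUEST
    load_assignment:
      cluster_name: co_clusters
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    # assumes the files Service is in the default namespace and listens on port 80
                    address: files.default.svc.cluster.local
                    port_value: 80
Note that the route in your listener would also need host_rewrite renamed to host_rewrite_literal under the v3 API.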

Related

Asking for tips on how to debug envoy for kubernetes deployment

I want to use Envoy as the service proxy for my Kubernetes deployment, and my application uses gRPC to communicate with the client side.
My steps:
Write a yaml file for envoy configuration.
Envoy configuration:
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
listeners:
- name: http_listener
address:
socket_address: { address: 0.0.0.0, port_value: 8080 }
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: [ "*" ]
routes:
- match:
{ prefix: "/" }
route:
cluster: my_app_prod_service
timeout: 30s
max_grpc_timeout: 30s
cors:
allow_origin_string_match:
- safe_regex: { regex: ".*", google_re2: { } }
allow_methods: GET, PUT, DELETE, POST, OPTIONS
allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
max_age: "1728000"
expose_headers: custom-header-1,grpc-status,grpc-message
http_filters:
- name: envoy.filters.http.router
clusters:
- name: my_app_prod_service
connect_timeout: 0.5s
type: strict_dns
http2_protocol_options: {}
lb_policy: round_robin
load_assignment:
cluster_name: my_app_prod_service
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: my-app-service-staging
port_value: 30015
Deploy it as a ConfigMap.
kubectl create configmap envoy-config-prod \
--from-file=envoy_config_prod.yaml \
-o yaml --dry-run=client | kubectl replace --force -f -
Deploy the Envoy Deployment and Service, and mount the ConfigMap.
apiVersion: apps/v1
kind: Deployment
metadata:
name: envoy-server-prod
spec:
replicas: 3
selector:
matchLabels:
app: envoy-server-prod
template:
metadata:
labels:
app: envoy-server-prod
spec:
containers:
- name: envoy-server-prod
image: envoyproxy/envoy:v1.18.2
args:
- -c
- /etc/envoy/envoy_config_prod.yaml
- --log-path
- /tmp/envoy_info.log
ports:
- name: http
containerPort: 8080
- name: envoy-admin
containerPort: 9901
resources:
requests:
cpu: 5
memory: 5Gi
volumeMounts:
- mountPath: /etc/envoy
name: envoy-config-prod
volumes:
- name: envoy-config-prod
configMap:
name: envoy-config-prod
---
kind: Service
apiVersion: v1
metadata:
name: envoy-service-prod
labels:
app: envoy-service-prod
spec:
selector:
app: envoy-server-prod
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
type: ClusterIP
externalIPs:
- 10.1.4.63
Make a headless service and its deployment.
apiVersion: v1
kind: Service
metadata:
name: my-app-service-staging
labels:
app: my-app-service-staging
spec:
clusterIP: None
ports:
- name: grpc
port: 30015
targetPort: 30015
protocol: TCP
selector:
app: my-app-deploy-staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deploy-staging
spec:
replicas: 1
selector:
matchLabels:
app: my-app-deploy-staging
template:
metadata:
labels:
app: my-app-deploy-staging
spec:
containers:
- name: my-app-deploy-staging
image: $IMAGE_SHA
resources:
requests:
memory: 2G
cpu: 1
I checked that in the Envoy deployment both /etc/envoy/envoy_config_prod.yaml and /tmp/envoy_info.log exist, and I don't see error messages in the log.
I tried to make HTTP connections to the Envoy service, hoping it would forward them to my application deployment.
> curl -v 10.1.4.63:8080
* Trying 10.1.4.63:8080...
* TCP_NODELAY set
* connect to 10.1.4.63 port 8080 failed: Connection timed out
* Failed to connect to 10.1.4.63 port 8080: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to 10.1.4.63 port 8080: Connection timed out
But it just times out.
I tried to get the services and deployments.
> k get svc
my-app-service-staging ClusterIP None <none> 30015/TCP 3h54m
envoy-service-prod ClusterIP 10.43.157.121 10.1.4.63 8080/TCP 6h55m
> k get deploy
my-deploy-deploy-staging 1/1 1 1 29d
I'm wondering, how should I debug this issue?
Envoy supports a wide range of timeouts that may need to be configured depending on the deployment. The Configure timeouts doc summarizes the most important timeouts used in various scenarios.
Refer to Debug Envoy Proxy.
Please also go through these similar questions, SO1 & SO2, which may help to resolve your issue.
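As a concrete starting point, one way to debug is to port-forward to Envoy's admin interface (port 9901 in the deployment above) and check what Envoy actually loaded and whether the upstream cluster resolved; a rough sketch using standard kubectl and the Envoy admin endpoints:
# Forward the Envoy admin port from one of the envoy-server-prod pods
kubectl port-forward deploy/envoy-server-prod 9901:9901

# In another shell: dump the configuration Envoy actually loaded
curl -s localhost:9901/config_dump

# Check whether the my_app_prod_service cluster has resolved, healthy endpoints
curl -s localhost:9901/clusters | grep my_app_prod_service

# Check listener and upstream request stats
curl -s localhost:9901/stats | grep -e ingress_http -e my_app_prod_service
Since the curl to 10.1.4.63 times out before ever reaching Envoy, it may also be worth testing the ClusterIP (10.43.157.121:8080) from inside the cluster to separate externalIPs routing problems from Envoy itself.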

EnvoyFilter is not applied when a readiness gate exists and the health check fails in Istio

We have an EnvoyFilter to route HTTP requests to the upstream application port, as below:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: my-envoy-filter
spec:
workloadSelector:
labels:
app: my-app
configPatches:
- applyTo: ROUTE_CONFIGURATION
match:
context: SIDECAR_INBOUND
routeConfiguration:
portNumber: 80
vhost:
name: "inbound|http|80"
patch:
operation: MERGE
value:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*.myservice.com"
routes:
- match: { prefix: "/" }
route:
cluster: mycluster
priority: HIGH
- applyTo: CLUSTER
match:
context: SIDECAR_INBOUND
patch:
operation: ADD
value:
name: mycluster
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: mycluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 9999
It works properly when I apply it to workloads which don't have any readiness gate property.
However, if a workload has its own readiness gate and the readiness check fails, then the EnvoyFilter doesn't seem to be applied properly.
Is this an intended result? Are the proxy configurations applied only after the readiness gate has confirmed the health of the proxy?
Is there any way to apply proxy configurations such as EnvoyFilter before the readiness gate confirmation?

gRPC and gRPC-web backend not connecting through Kubernetes nginx ingress

I have a gRPC server set up in AWS EKS and use the Nginx ingress controller with a network load balancer in front of it; Envoy sits in front of the gRPC service, so the path is NLB >> Ingress >> Envoy >> gRPC.
The problem is that when we make a request from BloomRPC, the request is not landing in Envoy.
What you expected to happen:
It should connect requests from outside to the gRPC service; I need to use gRPC and gRPC-web with SSL, and I'm looking for the best solution for this.
How to reproduce it (as minimally and precisely as possible):
Spin up a normal gRPC and gRPC-web service and connect to the gRPC service using Envoy; below is the conf I used for Envoy. For the ingress I also tried the nginx-ingress-controller:0.30.0 image, because it helps connect HTTP/2 and gRPC with an nginx ingress rule.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/use-http2: enabled
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
name: tk-ingress
spec:
tls:
- hosts:
- test.domain.com
secretName: tls-secret
rules:
- host: test.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: envoy
port:
number: 80
Envoy config:
admin:
access_log_path: /dev/stdout
address:
socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 8803
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
access_log:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
#
# You can also configure this extension with the qualified
# name envoy.access_loggers.http_grpc
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
- name: envoy.access_loggers.file
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
# Console output
path: /dev/stdout
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "test.domain.com"
routes:
- match:
prefix: /
grpc:
route:
cluster: tkmicro
cors:
allow_origin_string_match:
- prefix: "*"
allow_methods: GET, PUT, DELETE, POST, OPTIONS
# custom-header-1 is just an example. the grpc-web
# repository was missing grpc-status-details-bin header
# which used in a richer error model.
# https://grpc.io/docs/guides/error/#richer-error-model
allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type,channel,api-key,lang
expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
max_age: "1728000"
http_filters:
- name: envoy.filters.http.grpc_web
# This line is optional, but adds clarity to the configuration.
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
- name: envoy.filters.http.cors
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
- name: envoy.filters.http.grpc_json_transcoder
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
proto_descriptor: "/home/ubuntu/envoy/sync.pb"
ignore_unknown_query_parameters: true
services:
- "com.tk.system.sync.Synchronizer"
print_options:
add_whitespace: true
always_print_primitive_fields: true
always_print_enums_as_ints: true
preserve_proto_field_names: true
- name: envoy.filters.http.router
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
common_tls_context:
alpn_protocols: "h2"
clusters:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
- name: tkmicro
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: round_robin
load_assignment:
cluster_name: tkmicro
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 172.20.120.201
port_value: 8081
http2_protocol_options: {} # Force HTTP/2
Anything else we need to know?:
From BloomRPC I am getting this error:
"error": "14 UNAVAILABLE: Trying to connect an http1.x server"
Environment:
Kubernetes version (use kubectl version): GitVersion:"v1.21.1"
Cloud provider or hardware configuration: AWS EKS

Connect to external Kafka brokers via istio egress gateway

My app, deployed in an OpenShift cluster, needs to connect to 2 external Kafka brokers. Since the application is on the Istio mesh, all outbound traffic must go through the egress gateway. The connection to Kafka is via the log4j2 appender over SSL.
I made the following Istio config:
kind: ServiceEntry
metadata:
name: se-kafka
spec:
hosts:
- kafka1.host.com
- kafka2.host.com
addresses:
- 10.200.200.1
- 10.200.200.2
ports:
- name: kafka-port
number: 9093
protocol: TCP
location: MESH_EXTERNAL
resolution: NONE
exportTo:
- .
=====================
kind: DestinationRule
metadata:
name: dr-kafka
spec:
host: egressgateway #name egressgateway deployment
subsets:
- name: se-kafka
=====================
kind: Gateway
metadata:
name: gw-kafka
spec:
servers:
- hosts:
- kafka1.host.com
port:
name: kafka1-egress-port
number: 16001
protocol: TCP
- hosts:
- kafka2.host.com
port:
name: kafka2-egress-port
number: 16002
protocol: TCP
selector:
istio: egressgateway
=======================
kind: VirtualService
metadata:
name: vs-kafka
spec:
hosts:
- kafka1.host.com
- kafka2.host.com
gateways:
- mesh
- gw-kafka
tls:
- match:
- gateways:
- mesh
port: 9093
sniHosts:
- kafka1.host.com
route:
- destination:
host: egressgateway
port:
number: 16001
- match:
- gateways:
- mesh
port: 9093
sniHosts:
- kafka2.host.com
route:
- destination:
host: egressgateway
port:
number: 16002
- match:
- gateways:
- gw-kafka
port: 16001
sniHosts:
- kafka1.host.com
route:
- destination:
host: kafka1.host.com
port:
number: 9093
- match:
- gateways:
- gw-kafka
port: 16002
sniHosts:
- kafka2.host.com
route:
- destination:
host: kafka2.host.com
port:
number: 9093
========================
It works. But I think that traffic bypasses the Istio egress gateway. There is no connection in Kiali between the ServiceEntry and the egress gateway. And if you look at the egress gateway logs, you can see the following warning:
gRPC config for envoy.api.v2.ClusterLoadAssigment rejected: malformed IP address: kafka1.host.com. Consider setting resolver_name or setting cluster type to 'STRICT_DNS' or 'LOGICAL_DNS'
What is the problem, and how do I properly configure the egress gateway?
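For what it's worth, the quoted warning itself points at the resolution mode: with resolution: NONE the egress gateway's cluster has no resolvable endpoints, and the Istio counterpart of Envoy's STRICT_DNS is resolution: DNS. A sketch of the ServiceEntry with only that field changed (an assumption drawn from the warning, not a verified fix):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: se-kafka
spec:
  hosts:
    - kafka1.host.com
    - kafka2.host.com
  addresses:
    - 10.200.200.1
    - 10.200.200.2
  ports:
    - name: kafka-port
      number: 9093
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS   # was NONE; DNS produces an Envoy STRICT_DNS cluster, as the warning suggests
  exportTo:
    - .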

Using RBAC Network Filter to block ingress or egress to/from service in Envoy Proxy

I want to try to configure a filter in Envoy Proxy to block ingress and egress to/from a service based on IPs, hostnames, the routing table, etc.
I have searched the documentation and see it's possible, but I didn't find any examples of its usage.
Can someone point out an example of how it can be done?
One configuration example is present on this page:
https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto
But this is for a service account, like in Kubernetes.
The closest to what I want is here on this page:
https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics
Mentioned as, "The filter supports configuration with either a safe-list (ALLOW) or block-list (DENY) set of policies based on properties of the connection (IPs, ports, SSL subject)."
But it doesn't show how to do it.
I have figured out something like this:
network_filters:
- name: service-access
config:
rules:
action: ALLOW
policies:
"service-access":
principals:
source_ip: 192.168.135.211
permissions:
- destination_ip: 0.0.0.0
- destination_port: 443
But I am not able to apply this network filter. All the configurations give me a configuration error.
I would recommend Istio. You can set up a Rule that will deny all traffic not originating from 192.168.0.1 IP.
apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
name: denyreviewsv3handler
spec:
status:
code: 7
message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
metadata:
name: denyreviewsv3request
spec:
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: denyreviewsv3
spec:
match: source.ip != ip("192.168.0.1")
actions:
- handler: denyreviewsv3handler.denier
instances: [ denyreviewsv3request.checknothing ]
You can match other attributes specified in the Attribute Vocabulary; for example, to block the curl command: match: match(request.headers["user-agent"], "curl*"). A sketch of that match wired into a full rule is shown below.
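As a sketch only, reusing the denier handler and instance defined above (the rule name denycurl is just an illustrative placeholder):
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: denycurl
spec:
  # Deny any request whose User-Agent header starts with "curl"
  match: match(request.headers["user-agent"], "curl*")
  actions:
    - handler: denyreviewsv3handler.denier
      instances: [ denyreviewsv3request.checknothing ]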
More about Traffic Management and Denials and White/Black Listing can be found in Istio documentation.
I can also recommend you this istio-workshop published by szihai.
This is a complete RBAC filter config given to me by the Envoy team in their GitHub issue. I haven't tested it out, though.
static_resources:
listeners:
- name: "ingress listener"
address:
socket_address:
address: 0.0.0.0
port_value: 9001
filter_chains:
filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: local_service
per_filter_config:
envoy.filters.http.rbac:
rbac:
rules:
action: ALLOW
policies:
"per-route-rule":
permissions:
- any: true
principals:
- any: true
http_filters:
- name: envoy.filters.http.rbac
config:
rules:
action: ALLOW
policies:
"general-rules":
permissions:
- any: true
principals:
- any: true
- name: envoy.router
config: {}
access_log:
name: envoy.file_access_log
config: {path: /dev/stdout}
clusters:
- name: local_service
connect_timeout: 0.250s
type: static
lb_policy: round_robin
http2_protocol_options: {}
hosts:
- socket_address:
address: 127.0.0.1
port_value: 9000
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8080
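If it helps, one way to check a config like this before deploying it is Envoy's validate mode, which parses the file and exits without serving traffic; a minimal sketch, assuming the config is saved locally as envoy.yaml:
# Validate the configuration and exit (no listeners are started)
envoy --mode validate -c envoy.yaml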