gRPC and gRPC-Web backend not connecting through Kubernetes NGINX ingress - kubernetes

I have a gRPC server set up in AWS EKS, and I use the NGINX Ingress Controller with a Network Load Balancer in front of it. Envoy sits in front of the gRPC service, so traffic flows like this: NLB >> Ingress >> Envoy >> gRPC.
The problem is that when we make a request from BloomRPC, the request is not landing in Envoy.
What you expected to happen:
It should connect requests from outside to the gRPC service. I need to use gRPC and gRPC-Web with SSL, and I am looking for the best solution for this.
How to reproduce it (as minimally and precisely as possible):
Spin up a normal gRPC and gRPC-Web service and connect the gRPC service through Envoy; the configs I used for Envoy and for the NGINX ingress are below. I also tried the nginx-ingress-controller:0.30.0 image, because it is supposed to support HTTP/2 and gRPC through an NGINX Ingress rule.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/use-http2: enabled
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
name: tk-ingress
spec:
tls:
- hosts:
- test.domain.com
secretName: tls-secret
rules:
- host: test.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: envoy
port:
number: 80
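Worth noting: with backend-protocol: "GRPC", ingress-nginx proxies to the backend over cleartext HTTP/2 (h2c), while the Envoy listener below terminates TLS with a DownstreamTlsContext. Assuming Envoy keeps that TLS listener, the annotation would presumably need the GRPCS variant instead, roughly:

metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"   # TLS from nginx to the Envoy listener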
Envoy config:
admin:
access_log_path: /dev/stdout
address:
socket_address: { address: 0.0.0.0, port_value: 8801 }
static_resources:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/listener/v3/listener.proto#config-listener-v3-listener
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 8803
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#extensions-filters-network-http-connection-manager-v3-httpconnectionmanager
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: ingress_http
access_log:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto
#
# You can also configure this extension with the qualified
# name envoy.access_loggers.http_grpc
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/grpc/v3/als.proto
- name: envoy.access_loggers.file
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/access_loggers/file/v3/file.proto#extensions-access-loggers-file-v3-fileaccesslog
"#type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
# Console output
path: /dev/stdout
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "test.domain.com"
routes:
- match:
prefix: /
grpc:
route:
cluster: tkmicro
cors:
allow_origin_string_match:
- prefix: "*"
allow_methods: GET, PUT, DELETE, POST, OPTIONS
# custom-header-1 is just an example. the grpc-web
# repository was missing grpc-status-details-bin header
# which is used in the richer error model.
# https://grpc.io/docs/guides/error/#richer-error-model
allow_headers: accept-language,accept-encoding,user-agent,referer,sec-fetch-mode,origin,access-control-request-headers,access-control-request-method,accept,cache-control,pragma,connection,host,name,x-grpc-web,x-user-agent,grpc-timeout,content-type,channel,api-key,lang
expose_headers: grpc-status-details-bin,grpc-status,grpc-message,authorization
max_age: "1728000"
http_filters:
- name: envoy.filters.http.grpc_web
# This line is optional, but adds clarity to the configuration.
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/grpc_web/v3/grpc_web.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
- name: envoy.filters.http.cors
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/cors/v3/cors.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
- name: envoy.filters.http.grpc_json_transcoder
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
proto_descriptor: "/home/ubuntu/envoy/sync.pb"
ignore_unknown_query_parameters: true
services:
- "com.tk.system.sync.Synchronizer"
print_options:
add_whitespace: true
always_print_primitive_fields: true
always_print_enums_as_ints: true
preserve_proto_field_names: true
- name: envoy.filters.http.router
typed_config:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/extensions/filters/http/router/v3/router.proto
"#type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
common_tls_context:
alpn_protocols: "h2"
clusters:
# https://www.envoyproxy.io/docs/envoy/v1.15.0/api-v3/config/cluster/v3/cluster.proto#config-cluster-v3-cluster
- name: tkmicro
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: round_robin
load_assignment:
cluster_name: tkmicro
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 172.20.120.201
port_value: 8081
http2_protocol_options: {} # Force HTTP/2
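For completeness, the Ingress above targets a Service named envoy on port 80, while the listener binds 8803; a Service roughly like the following (the selector label is an assumption) would tie the two together:

apiVersion: v1
kind: Service
metadata:
  name: envoy              # backend service name referenced by the Ingress
spec:
  selector:
    app: envoy             # assumed label on the Envoy pods
  ports:
  - name: grpc
    port: 80               # port the Ingress backend points at
    targetPort: 8803       # Envoy listener_0 port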
Anything else we need to know?:
From BloomRPC I am getting this error:
"error": "14 UNAVAILABLE: Trying to connect an http1.x server"
Environment:
Kubernetes version (use kubectl version): GitVersion:"v1.21.1"
Cloud provider or hardware configuration: AWS EKS

Related

Istio OAuth2 EnvoyFilter is not applied while others are

I'm trying to apply mandatory authentication through Okta before accessing the apps running on the cluster (GKE on GCP), by applying the Envoy OAuth2 filter at the Istio ingress gateway level. However, after applying the EnvoyFilter, nothing changes, and I can still access the application without being redirected to Okta first.
Istioctl version: 1.16.2
Kubernetes version: v1.25.5-gke.2000
I did two things to diagnose the issue:
1. Add another EnvoyFilter with a Lua script that adds a header, to see whether the EnvoyFilter was properly applied:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.lua
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
default_source_code:
inline_string: |
function envoy_on_request(request_handle)
request_handle:headers():add("authorization", "it works!")
end
function envoy_on_response(response_handle)
filter_name = "ENVOY"
response_handle:headers():add("my_Filter", filter_name)
end
Running a curl on my cluster endpoint, I can see that the filter is applied and the test header added:
curl -s -I -X HEAD https://www.mytestdomain.com/productpage
HTTP/2 200
content-type: text/html; charset=utf-8
content-length: 5294
server: istio-envoy
date: Thu, 09 Feb 2023 13:23:56 GMT
x-envoy-upstream-service-time: 51
my_filter: ENVOY
via: 1.1 google
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
2. Watch the logs of the ingress gateway for any problems. Indeed, there is a warning about the secrets not being found:
"2023-02-09T12:26:18.535534Z warning envoy config gRPC config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) 0.0.0.0_8080: paths must refer to an existing path in the system: '/etc/istio/config/token-secret.yaml' does not exist"
Running kubectl exec istio-ingressgateway-pod -n istio-system -c istio-proxy -- ls /etc/istio/config, I do not see any secret files.
This problem is mentioned here, but the workaround did not fix the issue for me. I switched between inline_bytes and inline_string and nothing changed.
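For context, the path-based sds_config entries in the config below point at files inside the gateway container, so the istio-oauth2 ConfigMap has to be mounted there for those paths to exist. A sketch of one way to do that, assuming the stock istio-ingressgateway Deployment in istio-system:

# Deployment overlay sketch; container and volume names are assumptions.
spec:
  template:
    spec:
      containers:
      - name: istio-proxy
        volumeMounts:
        - name: istio-oauth2
          mountPath: /etc/istio/config
          readOnly: true
      volumes:
      - name: istio-oauth2
        configMap:
          name: istio-oauth2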
Please find below my full config:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: ingressgateway-settings
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: NETWORK_FILTER # trusts the GCLB (two hops) and does not sanitize the X-Forwarded headers
match:
context: GATEWAY
listener:
filterChain:
filter:
name: envoy.filters.network.http_connection_manager
patch:
operation: MERGE
value:
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
xff_num_trusted_hops: 2
- applyTo: CLUSTER
match:
cluster:
service: oauth
patch:
operation: ADD
value:
name: oauth
dns_lookup_family: V4_ONLY
type: LOGICAL_DNS
connect_timeout: 10s
lb_policy: ROUND_ROBIN
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
sni: myorg.okta.com
load_assignment:
cluster_name: oauth
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: myorg.okta.com
port_value: 443
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.lua
typed_config:
"#type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
default_source_code:
inline_string: |
function envoy_on_request(request_handle)
request_handle:headers():add("authorization", "it works!")
end
function envoy_on_response(response_handle)
filter_name = "ENVOY"
response_handle:headers():add("my_Filter", filter_name)
end
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.jwt_authn"
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.oauth2
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
config:
token_endpoint:
cluster: oauth
uri: "https://myorg.okta.com/oauth2/v1/token"
timeout: 5s
authorization_endpoint: "https://myorg.okta.com/oauth2/v1/authorize"
redirect_uri: "http://localhost:8080/authorization-code/callback"
redirect_path_matcher:
path:
exact: /callback
signout_path:
path:
exact: /signout
auth_scopes:
- openid
- profile
credentials:
client_id: xxx
token_secret:
name: token
sds_config:
path: "/etc/istio/config/token-secret.yaml"
hmac_secret:
name: hmac
sds_config:
path: "/etc/istio/config/hmac-secret.yaml"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-oauth2
namespace: istio-system
data:
token-secret.yaml: |-
resources:
- "#type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: token
generic_secret:
secret:
inline_bytes: xxx
hmac-secret.yaml: |-
resources:
- "#type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: hmac
generic_secret:
secret:
# generated using `head -c 32 /dev/urandom | base64`
inline_bytes: xxx
Any help is welcome. Thank you!

How do I deploy Envoy proxy as a Kubernetes load balancer

How do I create an Envoy proxy as a load balancer to redirect the necessary traffic to pods?
Here is the Kubernetes Service file:
apiVersion: v1
kind: Service
metadata:
name: files
spec:
type: ClusterIP
selector:
app: filesservice
ports:
- name: filesservice
protocol: TCP
port: 80
targetPort: 80
And the Envoy configuration file:
listeners:
- name: listener_0
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 10000
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
access_log:
- name: envoy.file_access_log
config:
path: /var/log/envoy/access.log
stat_prefix: ingress_http
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/f" }
route: {host_rewrite: files, cluster: co_clusters, timeout: 60s}
http_filters:
- name: envoy.router
clusters:
- name: co_clusters
connect_timeout: 0.25s
type: STRICT_DNS
dns_lookup_family: V4_ONLY
lb_policy: LEAST_REQUEST
hosts:
- socket_address:
address: files
I have tried changing the cluster configuration to:
- name: co_clusters
connect_timeout: 0.25s
type: STRICT_DNS
lb_policy: ROUND_ROBIN
hostname: files.default.svc.cluster.local
However, none of this works; from the error logs I am getting this output:
[2023-01-09 04:15:53.250][9][critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': Protobuf message (type envoy.config.bootstrap.v3.Bootstrap reason INVALID_ARGUMENT:(static_resources.clusters[0]) hosts: Cannot find field.) has unknown fields
[2023-01-09 04:15:53.250][9][info][main] [source/server/server.cc:961] exiting
This is the tutorial I tried following but still no joy.
The error is due to changes made in Envoy across version upgrades; you can see that in this issue.
It looks like you are following an outdated tutorial, where the config may differ depending on the version.
Attaching a newer version of the YAML file from GitHub; cross-check it against your existing YAML file and note the changes. You can also check this on the Envoy website using this doc.
Try the YAML file below and let me know if it works.
listeners:
- name: listener_0
address:
socket_address:
protocol: TCP
address: 0.0.0.0
port_value: 8443
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
access_log:
- name: envoy.access_loggers.stdout
typed_config:
"#type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
codec_type: AUTO
stat_prefix: ingress_https
clusters:
- name: echo-grpc
connect_timeout: 0.5s
type: STRICT_DNS
dns_lookup_family: V4_ONLY
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
load_assignment:
cluster_name: echo-grpc
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: echo-grpc.default.svc.cluster.local
port_value: 8081
Note: I added @type by referring to the linked docs, so make changes if any are required.
Attaching a similar issue for your reference.
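For comparison with the cluster in the question: the removed top-level hosts field maps to load_assignment in the v3 API, so a rough v3 sketch of co_clusters pointing at the files Service (address and port taken from the Service in the question) would be:

clusters:
- name: co_clusters
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: LEAST_REQUEST
  load_assignment:            # v3 replacement for the removed "hosts" field
    cluster_name: co_clusters
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: files.default.svc.cluster.local
              port_value: 80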

How to create Traefik IngressRoute out of Traefik configuration?

I want to deploy Zitadel in my Kubernetes cluster, but I'm struggling to get the Traefik IngressRoute to work with Zitadel. It's a problem with HTTP/2 and gRPC forwarding, but I can't figure out which options are needed.
I created a Zitadel Helm deployment with these options:
replicaCount: 1
zitadel:
masterkey: "changeM3"
configmapConfig:
ExternalPort: 443
ExternalDomain: 'id.example.com'
ExternalSecure: true
TLS:
Enabled: false
secretConfig:
Database:
cockroach:
User:
Password: "cockroach-password"
cockroachdb:
single-node: true
statefulset:
replicas: 1
For reverse proxy configuration, the Zitadel docs have a configuration for Traefik, but only as a static configuration file and not as a Kubernetes configuration:
entrypoints:
web:
address: ":80"
websecure:
address: ":443"
tls:
stores:
default:
defaultCertificate:
providers:
file:
filename: /etc/traefik/traefik.yaml
http:
middlewares:
zitadel:
headers:
isDevelopment: false
allowedHosts:
- 'localhost'
redirect-to-https:
redirectScheme:
scheme: https
port: 443
permanent: true
routers:
router0:
entryPoints:
- web
middlewares:
- redirect-to-https
rule: 'HostRegexp(`localhost`, `{subdomain:[a-z]+}.localhost`)'
service: zitadel
router1:
entryPoints:
- websecure
service: zitadel
middlewares:
- zitadel
rule: 'HostRegexp(`localhost`, `{subdomain:[a-z]+}.localhost`)'
tls:
domains:
- main: "localhost"
sans:
- "*.localhost"
- "localhost"
services:
zitadel:
loadBalancer:
servers:
- url: h2c://localhost:8080
passHostHeader: true
I tried to convert this configuration to an IngressRoute, but the dashboard only loads the site's skeleton and gives an "Unknown Content-type received" error, as described in this GitHub issue.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: zitadel
namespace: apps
spec:
entryPoints:
- websecure
routes:
- match: Host(`id.example.com`)
kind: Rule
services:
- name: zitadel
namespace: apps
port: 8080
scheme: h2c
passHostHeader: true
- match: Host(`id.example.com`)
kind: Rule
services:
- name: zitadel
namespace: apps
port: 8080
scheme: http
passHostHeader: true
tls:
certResolver: letsencrypt-prod
domains:
- main: id.example.com
Am I missing something in my IngressRoute that causes that error?
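For reference, the redirect-to-https middleware from the static configuration above would map to a Traefik Middleware resource roughly like this (a sketch only; the namespace is assumed from the IngressRoute):

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-to-https
  namespace: apps
spec:
  redirectScheme:
    scheme: https
    port: "443"
    permanent: true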
The problem was the two routes of the IngressRoute overlapping. Removing the second route solves the problem:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: zitadel
namespace: apps
spec:
entryPoints:
- websecure
routes:
- match: Host(`id.example.com`)
kind: Rule
services:
- name: zitadel
namespace: apps
port: 8080
scheme: h2c
passHostHeader: true
tls:
certResolver: letsencrypt-prod
domains:
- main: id.example.com

EnvoyFilter is not applied when a readiness gate exists and the health check fails in Istio

We have an EnvoyFilter that routes HTTP requests to an upstream application port, as below:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: my-envoy-filter
spec:
workloadSelector:
labels:
app: my-app
configPatches:
- applyTo: ROUTE_CONFIGURATION
match:
context: SIDECAR_INBOUND
routeConfiguration:
portNumber: 80
vhost:
name: "inbound|http|80"
patch:
operation: MERGE
value:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*.myservice.com"
routes:
- match: { prefix: "/" }
route:
cluster: mycluster
priority: HIGH
- applyTo: CLUSTER
match:
context: SIDECAR_INBOUND
patch:
operation: ADD
value:
name: mycluster
type: LOGICAL_DNS
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: mycluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 9999
It works properly when I apply it to workloads that don't have any readiness gate property.
However, if a workload has its own readiness gate and the readiness check fails, the EnvoyFilter doesn't seem to be applied properly.
Is this an intended result? Are the proxy configurations applied only after the readiness gate confirms the health of the proxy?
Is there any way to apply proxy configurations such as an EnvoyFilter before the readiness gate confirmation?
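For context, the readiness gate referred to above is a pod-level readinessGates entry; a minimal example looks like this (the condition type is just an illustration):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  readinessGates:
  - conditionType: "example.com/my-condition"   # custom condition reported by an external controller
  containers:
  - name: app
    image: my-app:latest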

Using RBAC Network Filter to block ingress or egress to/from service in Envoy Proxy

I want to configure a filter in Envoy Proxy to block ingress and egress to/from the service based on some IPs, hostnames, routing tables, etc.
I have searched the documentation and see that it's possible, but I didn't find any examples of its usage.
Can someone point out an example of how it can be done?
One configuration example is present on this page:
https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto
But this is for a service account, like in Kubernetes.
The closest to what I want, I can see here in this page:
https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics
Mentioned as, "The filter supports configuration with either a safe-list (ALLOW) or block-list (DENY) set of policies based on properties of the connection (IPs, ports, SSL subject)."
But it doesn't show how to do it.
I have figured out something like this:
network_filters:
- name: service-access
config:
rules:
action: ALLOW
policies:
"service-access":
principals:
source_ip: 192.168.135.211
permissions:
- destination_ip: 0.0.0.0
- destination_port: 443
But I am not able to apply this network filter; all my configuration attempts give me a configuration error.
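A minimal sketch of the same intent in current v3 syntax, reusing the IPs and port from the attempt above (the stat_prefix and the /32 prefix length are assumptions), would be along these lines:

filter_chains:
- filters:
  - name: envoy.filters.network.rbac
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.rbac.v3.RBAC
      stat_prefix: service_access
      rules:
        action: ALLOW
        policies:
          "service-access":
            permissions:
            - destination_port: 443
            principals:
            - direct_remote_ip:
                address_prefix: 192.168.135.211
                prefix_len: 32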
I would recommend Istio. You can set up a rule that will deny all traffic not originating from the IP 192.168.0.1.
apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
name: denyreviewsv3handler
spec:
status:
code: 7
message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
metadata:
name: denyreviewsv3request
spec:
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: denyreviewsv3
spec:
match: source.ip != ip("192.168.0.1")
actions:
- handler: denyreviewsv3handler.denier
instances: [ denyreviewsv3request.checknothing ]
You can match other attributes specified in the Attribute Vocabulary; for example, to block the curl command: match: match(request.headers["user-agent"], "curl*")
More about Traffic Management and Denials and White/Black Listing can be found in Istio documentation.
I can also recommend you this istio-workshop published by szihai.
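Note that the Mixer-based denier/rule resources above were removed in later Istio releases; the modern equivalent of an IP-based deny is an AuthorizationPolicy, roughly (selector and namespace are assumptions):

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-other-ips
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-app              # assumed workload to protect
  action: DENY
  rules:
  - from:
    - source:
        notIpBlocks: ["192.168.0.1/32"]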
This is a complete RBAC filter config given to me by the Envoy team in their GitHub issue. I haven't tested it out, though.
static_resources:
listeners:
- name: "ingress listener"
address:
socket_address:
address: 0.0.0.0
port_value: 9001
filter_chains:
filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains:
- "*"
routes:
- match:
prefix: "/"
route:
cluster: local_service
per_filter_config:
envoy.filters.http.rbac:
rbac:
rules:
action: ALLOW
policies:
"per-route-rule":
permissions:
- any: true
principals:
- any: true
http_filters:
- name: envoy.filters.http.rbac
config:
rules:
action: ALLOW
policies:
"general-rules":
permissions:
- any: true
principals:
- any: true
- name: envoy.router
config: {}
access_log:
name: envoy.file_access_log
config: {path: /dev/stdout}
clusters:
- name: local_service
connect_timeout: 0.250s
type: static
lb_policy: round_robin
http2_protocol_options: {}
hosts:
- socket_address:
address: 127.0.0.1
port_value: 9000
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8080
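The config above still uses the legacy config: blocks; in the v3 API the same listener-level RBAC and router filters would use typed_config instead, roughly:

http_filters:
- name: envoy.filters.http.rbac
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.rbac.v3.RBAC
    rules:
      action: ALLOW
      policies:
        "general-rules":
          permissions:
          - any: true
          principals:
          - any: true
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router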