EnvoyFilter to exclude specific hosts - kubernetes

I need to exclude specific hosts from an EnvoyFilter that looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: authn-filter
spec:
  workloadLabels:
    istio: ingressgateway
  filters:
  - filterConfig:
      httpService:
        serverUri:
          uri: http://authservice.$(namespace).svc.cluster.local
          cluster: outbound|8080||authservice.$(namespace).svc.cluster.local
          failureModeAllow: false
          timeout: 10s
        authorizationRequest:
          allowedHeaders:
            patterns:
            - exact: "cookie"
            - exact: "X-Auth-Token"
        authorizationResponse:
          allowedUpstreamHeaders:
            patterns:
            - exact: "kubeflow-userid"
      statusOnError:
        code: GatewayTimeout
    filterName: envoy.ext_authz
    filterType: HTTP
    insertPosition:
      index: FIRST
    listenerMatch:
      listenerType: GATEWAY
The problem is that the filter applies to the default Istio ingress gateway, which affects all traffic coming through that gateway. I would like to have some hosts that could be excluded / whitelisted from the filter.

I found my answer here. The question asks about excluding some paths, but I was successful with hosts as well. This is what I used:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: bypass-authn
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      routeConfiguration:
        vhost:
          name: subdomain.example.org:80 # <== your host goes here
    patch:
      operation: MERGE
      value:
        name: envoy.ext_authz_disabled
        typed_per_filter_config:
          envoy.ext_authz:
            "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute
            disabled: true
More information is in the Istio documentation. Specifically, the documentation says you should also include the port in the name: field, but I think it should work without it as well.
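If you are unsure of the exact virtual host name Envoy uses (host plus port), you can dump the route configuration from the running gateway and look at the vhost names; a minimal sketch, where the pod name is a placeholder:
# Pick one ingress gateway pod; the pod name is a placeholder
istioctl proxy-config routes <istio-ingressgateway-pod> -n istio-system -o json | grep '"name"'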


How to access Prometheus & Grafana via Istio ingress gateway? I have installed Prometheus and Grafana through Helm

I used the below command to bring up the pod:
kubectl create deployment grafana --image=docker.io/grafana/grafana:5.4.3 -n monitoring
Then I used the below command to create a ClusterIP service:
kubectl expose deployment grafana --type=ClusterIP --port=80 --target-port=3000 --protocol=TCP -n monitoring
Then I have used below virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "*"
  gateways:
  - cogtiler-gateway.skydeck
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana
kubectl apply -f grafana-virtualservice.yaml -n monitoring
Output:
virtualservice.networking.istio.io/grafana created
Now, when I try to access it, I get the below error from Grafana:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_path setting includes subpath
3. If you have a local dev build make sure you build frontend using: npm run dev, npm run watch, or npm run build
4. Sometimes restarting grafana-server can help
The easiest solution, working out of the box, is to configure a dedicated Grafana host and the / prefix.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "grafana.example.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
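To test this before a DNS record for grafana.example.com exists, you can set the Host header explicitly; a quick check, where the ingress IP is a placeholder:
curl -v -H "Host: grafana.example.com" http://<INGRESS_IP>/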
As you mentioned in the comments that you want to use path-based routing, something like my.com/grafana, that's also possible to configure. You can use an Istio rewrite for that.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
But, according to this GitHub issue, you would also have to configure Grafana accordingly, as without the proper Grafana configuration that won't work correctly.
I found a way to configure Grafana with a different URL using the GF_SERVER_ROOT_URL environment variable in the Grafana deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana
    spec:
      containers:
      - image: docker.io/grafana/grafana:5.4.3
        name: grafana
        env:
        - name: GF_SERVER_ROOT_URL
          value: "%(protocol)s://%(domain)s/grafana/"
        resources: {}
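If Grafana is already running, the same variable can also be set in place without editing the manifest; a sketch, assuming the deployment above in the monitoring namespace:
kubectl set env deployment/grafana -n monitoring GF_SERVER_ROOT_URL='%(protocol)s://%(domain)s/grafana/'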
There are also a VirtualService and a Gateway for that deployment.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana/
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
You need to create a Gateway to allow routing between the istio-ingressgateway and your VirtualService.
Something along the lines of:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: istio-system
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
You also need a DNS entry for your domain (my.domain.com) that points to the IP address of your istio-ingressgateway.
When your browser hits my.domain.com, it will be routed to the istio-ingressgateway. The istio-ingressgateway inspects the Host field of the request and routes the request to Grafana (according to the VirtualService rules).
You can check kubectl get svc -n istio-system | grep istio-ingressgateway to get the public IP of your ingress gateway.
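For example, to print only the external IP of a LoadBalancer-type gateway service:
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'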
If you want to enable TLS, then you need to provision a TLS certificate for your domain (easiest with cert-manager). Then you can use an HTTPS redirect in your gateway, like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: whatever
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - my.domain.com
    tls:
      mode: SIMPLE
      # Name of the secret containing the TLS certificate + keys. The secret must exist
      # in the same namespace as the istio-ingressgateway (probably istio-system).
      # This secret can be created by cert-manager, or you can create a self-signed
      # certificate and add it manually to the browser's trusted certificates.
      credentialName: my-domain-tls
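For reference, a minimal cert-manager Certificate that would produce the my-domain-tls secret could look like the sketch below; the ClusterIssuer name letsencrypt-prod is an assumption and must exist in your cluster:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-domain-tls
  namespace: istio-system # must be the namespace of the istio-ingressgateway
spec:
  secretName: my-domain-tls # referenced by credentialName above
  dnsNames:
  - my.domain.com
  issuerRef:
    name: letsencrypt-prod # assumption: such a ClusterIssuer exists
    kind: ClusterIssuer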
Then your VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "my.domain.com"
  gateways:
  - ingress
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana

k8s, Istio: remove transfer-encoding header

In the application's responses we see doubled transfer-encoding headers.
We suspect that because of this we get a 503 in the UI, while at the same time the application returns 201 in the pod's logs.
Besides HTTP code 201, there are both transfer-encoding=chunked and Transfer-Encoding=chunked headers in the logs, so that could be the reason for the 503.
We've tried to remove transfer-encoding via an Istio VirtualService and an EnvoyFilter, but no luck.
Here are the samples we tried:
VS definition:
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: test
  namespace: my-ns
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
      headers:
        response:
          remove:
          - transfer-encoding
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: test
  namespace: istio-system
spec:
  gateways:
  - wildcard-api-gateway
  hosts:
  - my-ns_domain
  http:
  - match:
    - uri:
        prefix: /operator/api/my-service
    rewrite:
      uri: /my-service
    route:
    - destination:
        host: my-service.my-ns.svc.cluster.local
        port:
          number: 8080
      headers:
        response:
          remove:
          - transfer-encoding
EnvoyFilter definition:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: test
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
    patch:
      operation: ADD
      value:
        name: envoy.filters.http.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_response(response_handle)
              response_handle:headers():remove("transfer-encoding")
            end
In older Envoy versions I see that envoy.reloadable_features.reject_unsupported_transfer_encodings=false was a workaround. Unfortunately, it was deprecated.
Please advise what is wrong with the VirtualService/filter, or is there an alternative to the reject_unsupported_transfer_encodings option?
Istio v1.8.2
Envoy v1.16.1
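A note on why header manipulation may not help here: Envoy validates Transfer-Encoding at the codec level, before any HTTP filter (including Lua) or VirtualService header rule gets to see the response, which would explain why both attempts above had no effect. If you still want to try the filter route, one untested variant is to attach it to the inbound side of the application's own sidecar instead of SIDECAR_OUTBOUND in istio-system; a sketch, where the app: my-service label is an assumption:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: test-inbound
  namespace: my-ns
spec:
  workloadSelector:
    labels:
      app: my-service # assumption: the pods carry this label
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_response(response_handle)
              response_handle:headers():remove("transfer-encoding")
            end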
Decision so far: we created a requirement for the dev team to remove the duplicated chunked encoding.

Istio: RequestAuthentication jwksUri does not resolve internal services names

Notice
The root cause of this is the same as in Istio: Health check / sidecar fails when I enable the JWT RequestAuthentication, but after further diagnosis I have reworded it more simply (trying to get help).
Problem
I'm trying to configure RequestAuthentication (and AuthorizationPolicy) in an Istio mesh. My JWK tokens are provided by an internal OAuth server (based on CloudFoundry) that works fine for other services.
My problem comes when I configure the URI for fetching the signing keys, pointing at the internal service. At that point, Istio does not resolve the name of the internal service. I'm getting confused because the microservices are able to connect to all my internal services (mongodb, mysql, rabbitmq), including the uaa. Why is RequestAuthentication not able to do the same?
UAA service configuration (notice: I'm also creating a VirtualService for external access, and that works fine):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uaa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uaa
  template:
    metadata:
      labels:
        app: uaa
    spec:
      containers:
      - name: uaa
        image: example/uaa
        imagePullPolicy: Never
        env:
        - name: LOGGING_LEVEL_ROOT
          value: DEBUG
        ports:
        - containerPort: 8090
        resources:
          limits:
            memory: 350Mi
---
apiVersion: v1
kind: Service
metadata:
  name: uaa
spec:
  selector:
    app: uaa
  ports:
  - port: 8090
    name: http
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-auth
spec:
  hosts:
  - "kubernetes.example.com"
  gateways:
  - gw-ingress
  http:
  - match:
    - uri:
        prefix: /oauth
    rewrite:
      uri: "/uaa/oauth"
    route:
    - destination:
        port:
          number: 8090
        host: uaa
  - match:
    - uri:
        prefix: /uaa
    rewrite:
      uri: "/uaa"
    route:
    - destination:
        port:
          number: 8090
        host: uaa
RequestAuthentication (notice the jwksUri parameter, pointing at the uaa hostname):
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "ra-product-composite"
spec:
selector:
matchLabels:
app: "product-composite"
jwtRules:
- issuer: "http://uaa:8090/uaa/oauth/token"
jwksUri: "http://uaa:8090/uaa/token_keys"
forwardOriginalToken: true
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: "ap-product-composite"
spec:
selector:
matchLabels:
app: "product-composite"
action: ALLOW
rules:
- to:
- operation:
methods: ["GET","POST","DELETE","HEAD","PUT"]
paths: ["*"]
Error log (in istiod pod)
2021-03-17T09:56:18.833731Z error Failed to fetch jwt public key from "http://uaa:8090/uaa/token_keys": Get "http://uaa:8090/uaa/token_keys": dial tcp: lookup uaa on 10.96.0.10:53: no such host
2021-03-17T09:56:18.838233Z info ads LDS: PUSH for node:product-composite-5cbf8498c7-nhxtj.chp18 resources:29 size:134.8kB
2021-03-17T09:56:18.856277Z warn ads ADS:LDS: ACK ERROR sidecar~10.1.4.2~product-composite-5cbf8498c7-nhxtj.chp18~chp18.svc.cluster.local-8 Internal:Error adding/updating listener(s) virtualInbound: Provider 'origins-0' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
Workaround
For the moment, I have declared the OAuth server as external and redirected the request, but this is totally inefficient.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: se-auth
spec:
  hosts:
  - "host.docker.internal"
  ports:
  - number: 8090
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-auth
spec:
  hosts:
  - "kubernetes.example.com"
  gateways:
  - gw-ingress
  http:
  - match:
    - uri:
        prefix: /oauth
    rewrite:
      uri: "/uaa/oauth"
    route:
    - destination:
        port:
          number: 8090
        host: "host.docker.internal"
  - match:
    - uri:
        prefix: /uaa
    rewrite:
      uri: "/uaa"
    route:
    - destination:
        port:
          number: 8090
        host: "host.docker.internal"
---
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "ra-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  jwtRules:
  - issuer: "http://uaa:8090/uaa/oauth/token"
    jwksUri: "http://host.docker.internal:8090/uaa/token_keys"
    forwardOriginalToken: true
Workaround 2:
I have worked around the issue by using the FQDN (fully qualified domain name) in the host name. But this does not solve my problem, because it ties the configuration file to a namespace (I use multiple namespaces and I need to have only one configuration file).
In any case, my current line is:
jwksUri: "http://uaa.mynamespace.svc.cluster.local:8090/uaa/token_keys"
I'm totally sure it is a silly configuration parameter, but I'm not able to find it! Thanks in advance.
jwksUri: "http://uaa:8090/uaa/token_keys" will not work from istiod, because http://uaa will be interpreted as http://uaa.istio-system.svc.cluster.local. That's why your workaround are solving the problem.
I don't understand why your workaround 2 is not a sufficient solution. Let's say your uaa service runs in namespace auth. If you configure the jwksUri with uaa.auth.svc.cluster.local, every kubernetes pod is able to call it, regardless of it's namespace.
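A minimal sketch of the corrected rule under that assumption (uaa running in namespace auth):
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "ra-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  jwtRules:
  - issuer: "http://uaa:8090/uaa/oauth/token"
    jwksUri: "http://uaa.auth.svc.cluster.local:8090/uaa/token_keys" # assumption: uaa runs in namespace auth
    forwardOriginalToken: true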
I had a very similar issue which was caused by a PeerAuthentication that set mtls.mode = STRICT for all pods. This caused the istiod pod to fail to retrieve the keys (istiod seems not to use mTLS when it performs the HTTP GET on the jwksUri).
The solution was to set a PeerAuthentication with mtls.mode = PERMISSIVE on the pod hosting the jwksUri (which in my case was dex).
Some example YAML:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default-mtls
  namespace: my-namespace
spec:
  mtls:
    ## the empty `selector` applies STRICT to all Pods in `my-namespace`
    mode: STRICT
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: dex-mtls
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      my_label: dex
  mtls:
    ## the dex pods must allow incoming non-mTLS traffic because istiod reads the JWKS keys from:
    ## http://dex.my-namespace.svc.cluster.local:5556/dex/keys
    mode: PERMISSIVE

How to bind gateway to a specific namespace?

I have the following scenario:
When user A enters the address foo.example1.example.com in the browser, it should call the service FOO in the namespace example1.
When user B enters the address foo.example2.example.com in the browser, it should call the service FOO in the namespace example2.
I am using Istio; the question is how to configure the gateway so that it is bound to a specific namespace.
Look at an example of istio gateway configuration:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ns-example1
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example1.example.com"
EOF
When I deploy the gateway, it applies to the current namespace, but I would like to specify a namespace.
How do I assign a gateway to a specific namespace?
I think this link should answer your question.
There are many things you won't need, but it contains the idea you want to apply to your Istio cluster.
So you need one gateway and two virtual services.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foocorp-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 80
      name: http-example1
      protocol: HTTP
    hosts:
    - "example1.example.com"
  - port:
      number: 80
      name: http-example2
      protocol: HTTP
    hosts:
    - "example2.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example1
  namespace: ex1
spec:
  hosts:
  - "example1.example.com"
  gateways:
  - foocorp-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: example1.ex1.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example2
  namespace: ex2
spec:
  hosts:
  - "example2.example.com"
  gateways:
  - foocorp-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: example2.ex2.svc.cluster.local
        port:
          number: 80
EDIT
You can create the gateway in namespace ex1 and ex2, then just change the gateways field in the virtual service and it should work.
Remember to reference it as namespace/gateway, not only the gateway name, like here:
gateways:
- some-config-namespace/gateway-name
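For example, with the gateway deployed in namespace ex1 (names are illustrative):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example2
  namespace: ex2
spec:
  hosts:
  - "example2.example.com"
  gateways:
  - ex1/foocorp-gateway # namespace/gateway-name, not just the name
  http:
  - route:
    - destination:
        host: example2.ex2.svc.cluster.local
        port:
          number: 80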
Let me know if that helps.

Granular policy over Istio egress traffic

I have a Kubernetes cluster with Istio installed. I have two pods, for example sleep1 and sleep2 (containers with curl installed). I want to configure Istio to permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com.
So, I created a ServiceEntry:
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  - google.com
  ports:
  - name: http-port
    protocol: HTTP
    number: 80
  resolution: DNS
a Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http-port
      protocol: HTTP
    hosts:
    - "*"
and two VirtualServices (mesh -> egress, egress -> google):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mesh-to-egress
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: egress-to-google-int
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - istio-egressgateway
  http:
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: google.com
        port:
          number: 80
      weight: 100
As a result, I can curl Google from both pods.
And the question again: can I permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com? I know this is possible with Kubernetes NetworkPolicy and black/white lists (https://istio.io/docs/tasks/policy-enforcement/denial-and-list/), but both methods forbid (permit) traffic to specific IPs only, or maybe I missed something?
You can create different service accounts for sleep1 and sleep2. Then you create an RBAC policy to limit access to the istio-egressgateway, so sleep2 will not be able to send any egress traffic through the egress gateway. This works together with forbidding any egress traffic from the cluster that does not originate from the egress gateway. See https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations.
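A minimal sketch of the two service accounts; you would then set serviceAccountName: sleep1 (or sleep2) in the corresponding pod spec:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep2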
If you want to allow sleep2 to access other services, but not www.google.com, you can use Mixer rules and handlers; see this blog post. It shows how to allow a certain URL path to a specific service account.
I think you're probably on the right track with the denial option.
It is also not limited to IPs, as you can see in the attribute-based examples for Simple Denial and Attribute-based Denial.
So, for example, if we write a simple denial rule for sleep2 -> www.google.com:
apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
name: denySleep2Google
spec:
compiledAdapter: denier
params:
status:
code: 7
message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
name: denySleep2GoogleRequest
spec:
compiledTemplate: checknothing
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: denySleep2
spec:
match: destination.service.host == "www.google.com" && source.labels["app"]=="sleep2"
actions:
- handler: denySleep2Google
instances: [ denySleep2GoogleRequest ]
Please check and see if this helps.
Also, the "match" field in the "rule" entry is based on the Istio expression language over attributes. Some vocabulary can be found in this doc.
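To verify the rule from both pods, something like the following should show the difference (the container name sleep is an assumption; the denier's code 7 corresponds to PERMISSION_DENIED, surfaced as HTTP 403):
kubectl exec sleep1 -c sleep -- curl -sI http://www.google.com | head -n1 # expect HTTP/1.1 200 OK
kubectl exec sleep2 -c sleep -- curl -sI http://www.google.com | head -n1 # expect HTTP/1.1 403 Forbidden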