How to require JWT only on external requests (Istio 1.4)

How can I require a JWT only for external requests (Istio 1.4)?
When a request is made through the internal network, the system should not demand a JWT.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: example-jwt
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
  origins:
  - jwt:
      issuer: {{issuer}}
      jwksUri: {{uri}}
      jwtHeaders:
      - "Authorization"
      trigger_rules:
      - included_paths:
        - exact: {{include}}
      - excluded_paths:
        - exact: {{excludes}}
  principalBinding: USE_ORIGIN
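Note that the alpha `Policy` API used above was removed after Istio 1.5. On newer releases, the same idea (JWT enforced only at the ingress gateway, with chosen paths exempted) can be sketched with the `security.istio.io/v1beta1` API; the issuer, jwksUri, and paths below are placeholders, not values from this setup:

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: example-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "<issuer>"
    jwksUri: "<uri>"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        # deny requests that carry no valid request principal (no valid JWT)...
        notRequestPrincipals: ["*"]
    to:
    - operation:
        # ...except on the paths listed here, which stay JWT-free
        notPaths: ["<excluded-path>"]
```

Since the policy is attached to the ingress gateway only, in-mesh (east-west) traffic that never crosses the gateway is unaffected either way.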

Related

Istio Gateway and Kubernetes Ingress on same hostname because of cert-manager HTTP01 ACME challenge: can this work?

I deployed an Istio service mesh and use its gateway controller for ingress. I set up cert-manager, which passes SSL certificates to the gateways. With self-signed certificates this setup works fine, but when using Let's Encrypt I have a conflict between cert-manager's automated temporary Ingress and the Istio gateway.
Here's the resulting setup, for httpbin:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    meta.helm.sh/release-name: httpbin-ingress
    meta.helm.sh/release-namespace: httpbin
  creationTimestamp: "2022-10-13T08:07:33Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: httpbin-ingress
  namespace: istio-ingress
  resourceVersion: "5243"
  uid: d4087649-2609-40c0-8d4a-55b9a420fda9
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - httpbin.example.com
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - httpbin.example.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: httpbin-ssl-certificate-secret
      mode: SIMPLE
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    meta.helm.sh/release-name: httpbin-ingress
    meta.helm.sh/release-namespace: httpbin
  creationTimestamp: "2022-10-13T08:07:33Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: httpbin-ingress
  namespace: istio-ingress
  resourceVersion: "5246"
  uid: ef5b6397-2c7a-408c-b142-4528e8f28a20
spec:
  gateways:
  - httpbin-ingress
  hosts:
  - httpbin.example.com
  http:
  - match:
    - uri:
        prefix: /outpost.goauthentik.io
    route:
    - destination:
        host: authentik.authentik.svc.cluster.local
        port:
          number: 80
  - match:
    - uri:
        regex: ^\/[^\.]+.*
    - uri:
        exact: /
    route:
    - destination:
        host: httpbin.httpbin.svc.cluster.local
        port:
          number: 14001
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
  creationTimestamp: "2022-10-13T08:07:38Z"
  generateName: cm-acme-http-solver-
  generation: 1
  labels:
    acme.cert-manager.io/http-domain: "1703151793"
    acme.cert-manager.io/http-token: "1233129203"
    acme.cert-manager.io/http01-solver: "true"
  name: cm-acme-http-solver-gtgxg
  namespace: istio-ingress
  ownerReferences:
  - apiVersion: acme.cert-manager.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Challenge
    name: httpbin-ssl-certificate-ct48l-1136457683-1300359052
    uid: dd19a50c-5944-46b8-ae09-8345bef9c114
  resourceVersion: "5308"
  uid: 5d5578a5-3371-4705-9a8c-e031be5f4d7c
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - backend:
          service:
            name: cm-acme-http-solver-rkr2g
            port:
              number: 8089
        path: /.well-known/acme-challenge/YKCZwQz6T9HezJtPwzev-esq-Q4WaLHoUC_CafmPJUk
        pathType: ImplementationSpecific
status:
  loadBalancer: {}
The problem I face is the following. With this setup:
- curl --resolve httpbin.example.com:443:127.0.0.1 https://httpbin.example.com/ -k works.
- curl --resolve httpbin.example.com:443:127.0.0.1 https://httpbin.example.com/.well-known/acme-challenge/YKCZwQz6T9HezJtPwzev-esq-Q4WaLHoUC_CafmPJUk -Ik gives http code 404.
- if I delete the gateway httpbin-ingress, curl --resolve httpbin.example.com:80:127.0.0.1 http://httpbin.example.com/.well-known/acme-challenge/YKCZwQz6T9HezJtPwzev-esq-Q4WaLHoUC_CafmPJUk -Ik works as expected with http code 200.
The Certificate resource for cert-manager is annotated with
cert-manager.io/issue-temporary-certificate: "true"
and that works (the gateway is setup with a self-signed certificate until letsencrypt succeeds), so the fact that I use httpsRedirect: true should not be the culprit.
My question is: is it possible to have the gateway in place and still have cert-manager succeed with the HTTP01 challenge? My thinking is that there must be something I am overlooking in getting the gateway to forward traffic for "/.well-known/..." to cert-manager's ingress.
I looked at this question, Using Gateway + VirtualService + http01 + SDS, but I have not been able to find where my configuration differs. I tried changing the gateway's protocol on port 80 from HTTP to HTTP2, and curling the .well-known path with --http1.1, but this did not solve the issue.
The solution for me was in the end:
- to add a * host to the servers on port 80 in each Gateway,
- to drop the use of the built-in httpsRedirect: true function,
- and to write a slightly more manual http-to-https redirect rule which only matches the non-ACME paths.
It seems that you cannot have multiple Istio Gateways for the same port and hostname unless the hostnames include *. Someone correct me if I'm wrong.
My configuration which works is now:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    meta.helm.sh/release-name: httpbin-ingress
    meta.helm.sh/release-namespace: httpbin
  creationTimestamp: "2022-10-13T13:24:27Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: httpbin-ingress
  namespace: istio-ingress
  resourceVersion: "54782"
  uid: d36977db-20a2-4d43-a137-ba4cbfeccf8d
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    - httpbin.example.com
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - httpbin.example.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: httpbin-ssl-certificate-secret
      mode: SIMPLE
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    meta.helm.sh/release-name: httpbin-ingress
    meta.helm.sh/release-namespace: httpbin
  creationTimestamp: "2022-10-13T13:24:27Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: httpbin-ingress
  namespace: istio-ingress
  resourceVersion: "54783"
  uid: 3a1d988c-c287-49a8-942a-9aaf41a4b2b5
spec:
  gateways:
  - httpbin-ingress
  hosts:
  - httpbin.example.com
  http:
  - match:
    - headers:
        x-forwarded-proto:
          exact: https
      uri:
        prefix: /outpost.goauthentik.io
    route:
    - destination:
        host: authentik.authentik.svc.cluster.local
        port:
          number: 80
  - match:
    - headers:
        x-forwarded-proto:
          exact: https
      uri:
        regex: ^\/[^\.]+.*
    - headers:
        x-forwarded-proto:
          exact: https
      uri:
        exact: /
    route:
    - destination:
        host: httpbin.httpbin.svc.cluster.local
        port:
          number: 14001
  - match:
    - headers:
        x-forwarded-proto:
          exact: http
      uri:
        regex: ^\/[^\.]+.*
    - headers:
        x-forwarded-proto:
          exact: http
      uri:
        exact: /
    redirect:
      scheme: https
And no need to repost the generated ingress from cert-manager, as that did not change!
As you can see, I elaborated the VirtualService quite a bit: each match now explicitly verifies the protocol (http vs https), and the http-only matches apply solely to paths which do not match the ACME challenge (/.well-known...).
If your service uses paths that start with a dot, you will have to add more rules to the matching to avoid matching ACME paths while still allowing your dot-paths.
PS If someone knows a smarter RE2-regex such that it fits in one line, please do tell!
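Regarding the PS: since Envoy's RE2 dialect has no lookahead, one candidate single-line pattern is `^/([^.].*)?$`, which folds the `exact: /` and `regex` matches into one (it accepts `/` and any path whose first character after the slash is not a dot). A quick sanity check in Python; the pattern uses no lookarounds, so it stays RE2-compatible:

```python
import re

# One-line candidate replacing the pair of matches (`exact: /` plus the
# regex): accepts "/" and any path whose first character after "/" is not
# a dot. Uses no lookarounds, so it fits Envoy's RE2 dialect.
pattern = re.compile(r"^/([^.].*)?$")

print(bool(pattern.fullmatch("/")))                                  # True
print(bool(pattern.fullmatch("/api/v1/info")))                       # True
print(bool(pattern.fullmatch("/.well-known/acme-challenge/token")))  # False
```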

k8s, Istio: remove transfer-encoding header

In the application's responses we see doubled transfer-encoding headers.
We suspect that this is why we get a 503 in the UI, even though the application returns a 201 in the pod's logs.
Besides http code: 201, the logs show both transfer-encoding=chunked and Transfer-Encoding=chunked headers, which could be the reason for the 503.
We've tried to remove transfer-encoding via an Istio virtual service and an envoy filter, but no luck.
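For illustration (a sketch with a hard-coded response, not output from the live service): HTTP/1.1 header field names are case-insensitive, so `transfer-encoding` and `Transfer-Encoding` are the same header sent twice, which would explain why Envoy's codec flags the response before any filter can strip it:

```python
from collections import Counter

# A raw response shaped like the one described above; the two header lines
# differ only in case, so they are one header name appearing twice.
raw = (
    "HTTP/1.1 201 Created\r\n"
    "transfer-encoding: chunked\r\n"
    "Transfer-Encoding: chunked\r\n"
    "\r\n"
)

# Count header names case-insensitively to surface the duplication.
counts = Counter(
    line.split(":", 1)[0].strip().lower()
    for line in raw.split("\r\n")[1:]
    if ":" in line
)
duplicates = sorted(h for h, n in counts.items() if n > 1)
print(duplicates)  # ['transfer-encoding']
```

This also hints at why the header-removal attempts below cannot work: the codec validates Transfer-Encoding before HTTP filters (Lua included) ever see the response.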
Here are samples we tried:
VS definition:
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: test
  namespace: my-ns
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
    headers:
      response:
        remove:
        - transfer-encoding
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: test
  namespace: istio-system
spec:
  gateways:
  - wildcard-api-gateway
  hosts:
  - my-ns_domain
  http:
  - match:
    - uri:
        prefix: /operator/api/my-service
    rewrite:
      uri: /my-service
    route:
    - destination:
        host: my-service.my-ns.svc.cluster.local
        port:
          number: 8080
    headers:
      response:
        remove:
        - transfer-encoding
EnvoyFilter definition:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: test
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
    patch:
      operation: ADD
      value:
        name: envoy.filters.http.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
          inlineCode: |
            function envoy_on_response(response_handle)
              response_handle:headers():remove("transfer-encoding")
            end
In older Envoy versions, envoy.reloadable_features.reject_unsupported_transfer_encodings=false was a workaround. Unfortunately, it has been deprecated.
Please advise what is wrong with the VS/filter, or whether there is an alternative to the reject_unsupported_transfer_encodings option.
Istio v1.8.2
Envoy v1.16.1
Decision so far: we filed a requirement for the dev team to remove the duplicated chunked encoding.

Istio: RequestAuthentication jwksUri does not resolve internal services names

Notice
The root cause of this is the same as in Istio: Health check / sidecar fails when I enable the JWT RequestAuthentication, but after further diagnosis I have reworded it more simply (trying to get help).
Problem
I'm trying to configure RequestAuthentication (and AuthorizationPolicy) in an Istio mesh. My JWT tokens are provided by an internal OAuth server (based on CloudFoundry's UAA) that works fine for other services.
My problem comes when I configure the URI for fetching the signing key, pointing at the internal service. At that point, Istio does not resolve the name of the internal service. I'm confused because the microservices are able to connect to all my internal pods (mongodb, mysql, rabbitmq), including the uaa. Why is the RequestAuthentication not able to do the same?
UAA service configuration (notice: I'm also creating a virtual service for external access, and that works fine):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uaa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uaa
  template:
    metadata:
      labels:
        app: uaa
    spec:
      containers:
      - name: uaa
        image: example/uaa
        imagePullPolicy: Never
        env:
        - name: LOGGING_LEVEL_ROOT
          value: DEBUG
        ports:
        - containerPort: 8090
        resources:
          limits:
            memory: 350Mi
---
apiVersion: v1
kind: Service
metadata:
  name: uaa
spec:
  selector:
    app: uaa
  ports:
  - port: 8090
    name: http
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-auth
spec:
  hosts:
  - "kubernetes.example.com"
  gateways:
  - gw-ingress
  http:
  - match:
    - uri:
        prefix: /oauth
    rewrite:
      uri: "/uaa/oauth"
    route:
    - destination:
        port:
          number: 8090
        host: uaa
  - match:
    - uri:
        prefix: /uaa
    rewrite:
      uri: "/uaa"
    route:
    - destination:
        port:
          number: 8090
        host: uaa
RequestAuthentication: notice the jwksUri parameter, which points at the uaa hostname.
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "ra-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  jwtRules:
  - issuer: "http://uaa:8090/uaa/oauth/token"
    jwksUri: "http://uaa:8090/uaa/token_keys"
    forwardOriginalToken: true
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "ap-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET","POST","DELETE","HEAD","PUT"]
        paths: ["*"]
Error log (in istiod pod)
2021-03-17T09:56:18.833731Z error Failed to fetch jwt public key from "http://uaa:8090/uaa/token_keys": Get "http://uaa:8090/uaa/token_keys": dial tcp: lookup uaa on 10.96.0.10:53: no such host
2021-03-17T09:56:18.838233Z info ads LDS: PUSH for node:product-composite-5cbf8498c7-nhxtj.chp18 resources:29 size:134.8kB
2021-03-17T09:56:18.856277Z warn ads ADS:LDS: ACK ERROR sidecar~10.1.4.2~product-composite-5cbf8498c7-nhxtj.chp18~chp18.svc.cluster.local-8 Internal:Error adding/updating listener(s) virtualInbound: Provider 'origins-0' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
Workaround
For the moment, I have declared the OAUTH server as external, and redirected the request, but this is totally inefficient.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: se-auth
spec:
  hosts:
  - "host.docker.internal"
  ports:
  - number: 8090
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-auth
spec:
  hosts:
  - "kubernetes.example.com"
  gateways:
  - gw-ingress
  http:
  - match:
    - uri:
        prefix: /oauth
    rewrite:
      uri: "/uaa/oauth"
    route:
    - destination:
        port:
          number: 8090
        host: "host.docker.internal"
  - match:
    - uri:
        prefix: /uaa
    rewrite:
      uri: "/uaa"
    route:
    - destination:
        port:
          number: 8090
        host: "host.docker.internal"
---
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "ra-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  jwtRules:
  - issuer: "http://uaa:8090/uaa/oauth/token"
    jwksUri: "http://host.docker.internal:8090/uaa/token_keys"
    forwardOriginalToken: true
Workaround 2:
I have worked around this by using the FQDN (the fully qualified domain name) as the host name. But this does not solve my problem, because it ties the configuration file to the namespace (I use multiple namespaces and I need to have only one configuration file).
In any case, my current line is:
jwksUri: "http://uaa.mynamespace.svc.cluster.local:8090/uaa/token_keys"
I'm totally sure this is a silly configuration parameter, but I'm not able to find it! Thanks in advance.
jwksUri: "http://uaa:8090/uaa/token_keys" will not work from istiod, because http://uaa will be interpreted as http://uaa.istio-system.svc.cluster.local. That's why your workarounds solve the problem.
I don't understand why your workaround 2 is not a sufficient solution. Let's say your uaa service runs in namespace auth. If you configure the jwksUri with uaa.auth.svc.cluster.local, every kubernetes pod is able to call it, regardless of its namespace.
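If the remaining concern is keeping a single manifest across namespaces, the namespace portion of the FQDN can be templated rather than hard-coded. A sketch using Helm (this assumes the RequestAuthentication is deployed per-namespace by a chart; the surrounding fields are from the question):

```yaml
jwtRules:
- issuer: "http://uaa:8090/uaa/oauth/token"
  # Helm substitutes the namespace the release is installed into:
  jwksUri: "http://uaa.{{ .Release.Namespace }}.svc.cluster.local:8090/uaa/token_keys"
  forwardOriginalToken: true
```

Kustomize overlays or any other templating mechanism that can inject the namespace would work the same way.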
I had a very similar issue which was caused by a PeerAuthentication that set mtls.mode = STRICT for all pods. This caused the istiod pod to fail to retrieve the keys (as istiod seems to not use MTLS when it performs the HTTP GET on the jwksUri).
The solution was to set a PeerAuthentication with mtls.mode = PERMISSIVE on the Pod hosting the jwksUri (which in my case was dex).
Some example YAML:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default-mtls
  namespace: my-namespace
spec:
  ## the empty `selector` applies STRICT to all Pods in `my-namespace`
  mtls:
    mode: STRICT
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: dex-mtls
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      my_label: dex
  mtls:
    ## the dex pods must allow incoming non-MTLS traffic because istiod reads the JWKS keys from:
    ## http://dex.my-namespace.svc.cluster.local:5556/dex/keys
    mode: PERMISSIVE

Istio 1.5 cors not working - Response to preflight request doesn't pass access control check

CORS preflight requests do not work when a JWT Policy is configured on the istio-ingressgateway target.
Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: api-gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "api.example.com"
    tls:
      httpsRedirect: true # sends 301 redirects for http requests
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "api.example.com"
Virtual Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend-vs
  namespace: foo
spec:
  hosts:
  - "api.example.com"
  gateways:
  - api-gateway
  http:
  - match:
    - uri:
        prefix: /api/v1/info
    route:
    - destination:
        host: backend.foo.svc.cluster.local
    corsPolicy:
      allowOrigin:
      - "https://app.example.com"
      allowMethods:
      - POST
      - GET
      - PUT
      - DELETE
      - PATCH
      - OPTIONS
      allowHeaders:
      - authorization
      - content-type
      - accept
      - origin
      - user-agent
      allowCredentials: true
      maxAge: 300s
Security
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "jwt-example"
  namespace: foo
spec:
  selector:
    matchLabels:
      app: backend
  jwtRules:
  - issuer: "http://keycloak.foo/auth/realms/example"
    jwksUri: "http://keycloak.foo/auth/realms/example/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt-example
  namespace: foo
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["http://keycloak.foo/auth/realms/example/http://keycloak.foo/auth/realms/example"]
    when:
    - key: request.auth.claims[groups]
      values: ["group1"]
When I test the web application in Firefox it works fine, but in other browsers like Opera, Chrome, and Safari it fails with the following error:
Access to XMLHttpRequest at 'https://api.example.com/api/v1/info' from origin 'https://app.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
What puzzles me most is that it works well in Firefox but fails in the other browsers.
NOTE: To validate that the CORS policy in Istio was correct, I disabled the policy and tested in Firefox to see what would happen; a CORS problem did indeed appear, and when I re-enabled CORS in Istio and reran the request in Firefox, it worked fine again.
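To see what the reported error actually means, it helps to recall what a browser verifies on a preflight response. A minimal Python sketch of that browser-side check (illustrative only; the header values are examples, not captured from this cluster):

```python
# Sketch of the check a browser applies to a CORS preflight response when
# the request uses credentials (as allowCredentials: true implies).
def preflight_passes(resp_headers, origin, method):
    # Header names are case-insensitive, so normalize them first.
    h = {k.lower(): v for k, v in resp_headers.items()}
    allow_origin = h.get("access-control-allow-origin")
    if allow_origin != origin:  # with credentials, "*" is also rejected
        return False
    methods = [m.strip() for m in h.get("access-control-allow-methods", "").split(",")]
    return method in methods

# Missing headers (the failure reported above) -> blocked:
print(preflight_passes({}, "https://app.example.com", "GET"))  # False

# Headers echoing the origin -> allowed:
print(preflight_passes(
    {"Access-Control-Allow-Origin": "https://app.example.com",
     "Access-Control-Allow-Methods": "GET, POST, OPTIONS"},
    "https://app.example.com", "GET"))  # True
```

The error message ("No 'Access-Control-Allow-Origin' header is present") corresponds to the first case: the preflight response reaching the browser carried no CORS headers at all.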
After running segmented tests to see what was causing the error, I found that the problem appeared when I created the Keycloak gateway (keycloak.example.com) on the same gateway port as the backend service (backend.example.com), which by default is 443 for https and 80 for http.
What I did was expose Keycloak on another port of the ingress gateway. With that change, the Angular application stopped having CORS problems.
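A sketch of what such a separate server entry might look like in the Gateway (the port number 8443 and the certificate paths are hypothetical, chosen only so Keycloak no longer shares 443 with the backend host):

```yaml
# Additional entry under spec.servers of the ingress Gateway:
- port:
    number: 8443
    name: https-keycloak
    protocol: HTTPS
  hosts:
  - "keycloak.example.com"
  tls:
    mode: SIMPLE
    serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
    privateKey: /etc/istio/ingressgateway-certs/tls.key
```

The port also has to be exposed on the istio-ingressgateway Service for external clients to reach it.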

Access istio/k8s service via HTTPS

I'm a bit new to Kubernetes and istio. I'm trying to create a service and access it over HTTPS.
- Over HTTP everything looks great
- I've used cert-manager with Let's Encrypt to generate the certificate
- The Certificate has been generated successfully
- I've generated the secret using the following command:
kubectl create secret generic clouddns --namespace=cert-manager --from-literal=GCP_PROJECT=<PROJECT> --from-file=/etc/keys/<KEY>.json
These are my configurations files of the Gateway, Virtual Service, Cluster Issuer, and Certificate.
Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: messaging-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "<HOST>"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "<HOST>"
    tls:
      credentialName: messaging-certificate
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
Virtual Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: messaging
spec:
  hosts:
  - "<HOST>"
  gateways:
  - messaging-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: messaging
        port:
          number: 8082
Cluster Issuer
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: messaging-cluster-issuer
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <EMAIL>
    privateKeySecretRef:
      name: messaging-letsencrypt
    solvers:
    - dns01:
        clouddns:
          serviceAccountSecretRef:
            name: clouddns
            key: <KEY>.json
          project: <PROJECT>
Certificate
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: messaging-certificate
spec:
  secretName: messaging-certificate
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  organization:
  - RELE.AI
  commonName: <HOST>
  isCA: false
  keySize: 2048
  keyAlgorithm: rsa
  keyEncoding: pkcs1
  usages:
  - server auth
  - client auth
  dnsNames:
  - <HOST>
  issuerRef:
    name: messaging-cluster-issuer
    kind: ClusterIssuer
When I'm running kubectl get secrets messaging-certificate -o yaml, I can see both the tls.crt and the tls.key content.
Any ideas why I can't get to a point where I can access the service over HTTPS?
---- Edit
Full istio manifest - I have generated the manifest using istioctl manifest generate. Hopefully that's the correct way
You should do the following:
Enable SDS - see the first step in https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-sds/#configure-a-tls-ingress-gateway-using-sds
Remove the serverCertificate and privateKey fields from the Gateway's tls field, as in https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-sds/#configure-a-tls-ingress-gateway-for-a-single-host
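With those two fields removed, the HTTPS server block of the Gateway would look roughly like this (a sketch; messaging-certificate is the secret produced by the Certificate above and must live in the namespace watched by the ingress gateway):

```yaml
- port:
    number: 443
    name: https
    protocol: HTTPS
  hosts:
  - "<HOST>"
  tls:
    mode: SIMPLE
    credentialName: messaging-certificate  # serverCertificate/privateKey dropped
```

With SDS enabled, the gateway then fetches the key and certificate from the secret directly instead of expecting files mounted at fixed paths.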