Istio: RequestAuthentication jwksUri does not resolve internal service names - kubernetes

Notice
The root cause of this is the same as in Istio: Health check / sidecar fails when I enable the JWT RequestAuthentication, but after further diagnosis I have reworded it more simply (trying to get help).
Problem
I'm trying to configure RequestAuthentication (and AuthorizationPolicy) in an Istio mesh. My JWT tokens are provided by an internal OAuth server (based on CloudFoundry UAA) that works fine for other services.
My problem comes when I configure the URI for fetching the signing key, pointing at the internal service. At that point, Istio does not resolve the name of the internal service. I'm getting confused because the microservices are able to connect to all my internal services (mongodb, mysql, rabbitmq), including the uaa. Why is the RequestAuthentication not able to do the same?
UAA service configuration (note: I'm also creating a VirtualService for external access, and this is working fine):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uaa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uaa
  template:
    metadata:
      labels:
        app: uaa
    spec:
      containers:
        - name: uaa
          image: example/uaa
          imagePullPolicy: Never
          env:
            - name: LOGGING_LEVEL_ROOT
              value: DEBUG
          ports:
            - containerPort: 8090
          resources:
            limits:
              memory: 350Mi
---
apiVersion: v1
kind: Service
metadata:
  name: uaa
spec:
  selector:
    app: uaa
  ports:
    - port: 8090
      name: http
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-auth
spec:
  hosts:
    - "kubernetes.example.com"
  gateways:
    - gw-ingress
  http:
    - match:
        - uri:
            prefix: /oauth
      rewrite:
        uri: "/uaa/oauth"
      route:
        - destination:
            port:
              number: 8090
            host: uaa
    - match:
        - uri:
            prefix: /uaa
      rewrite:
        uri: "/uaa"
      route:
        - destination:
            port:
              number: 8090
            host: uaa
RequestAuthentication: note the jwksUri parameter, which points at the uaa hostname.
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "ra-product-composite"
spec:
selector:
matchLabels:
app: "product-composite"
jwtRules:
- issuer: "http://uaa:8090/uaa/oauth/token"
jwksUri: "http://uaa:8090/uaa/token_keys"
forwardOriginalToken: true
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "ap-product-composite"
spec:
  selector:
    matchLabels:
      app: "product-composite"
  action: ALLOW
  rules:
    - to:
        - operation:
            methods: ["GET","POST","DELETE","HEAD","PUT"]
            paths: ["*"]
Error log (in istiod pod)
2021-03-17T09:56:18.833731Z error Failed to fetch jwt public key from "http://uaa:8090/uaa/token_keys": Get "http://uaa:8090/uaa/token_keys": dial tcp: lookup uaa on 10.96.0.10:53: no such host
2021-03-17T09:56:18.838233Z info ads LDS: PUSH for node:product-composite-5cbf8498c7-nhxtj.chp18 resources:29 size:134.8kB
2021-03-17T09:56:18.856277Z warn ads ADS:LDS: ACK ERROR sidecar~10.1.4.2~product-composite-5cbf8498c7-nhxtj.chp18~chp18.svc.cluster.local-8 Internal:Error adding/updating listener(s) virtualInbound: Provider 'origins-0' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
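The first log line shows istiod itself failing the DNS lookup (10.96.0.10 is the cluster DNS service); the invalid-JWKS listener rejection below it is just the consequence. A minimal way to reproduce what istiod sees is a one-off curl Pod in istio-system; this is a hedged sketch (the image choice and the namespace mynamespace are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: jwks-debug
  namespace: istio-system   # istiod resolves short names relative to its own namespace
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl   # hypothetical image choice
      # "http://uaa:8090/..." fails from here just like in the istiod log;
      # the FQDN below succeeds from any namespace.
      args: ["-sv", "http://uaa.mynamespace.svc.cluster.local:8090/uaa/token_keys"]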
Workaround
For the moment, I have declared the OAuth server as external and redirected the requests, but this is totally inefficient.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: se-auth
spec:
  hosts:
    - "host.docker.internal"
  ports:
    - number: 8090
      name: http
      protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-auth
spec:
  hosts:
    - "kubernetes.example.com"
  gateways:
    - gw-ingress
  http:
    - match:
        - uri:
            prefix: /oauth
      rewrite:
        uri: "/uaa/oauth"
      route:
        - destination:
            port:
              number: 8090
            host: "host.docker.internal"
    - match:
        - uri:
            prefix: /uaa
      rewrite:
        uri: "/uaa"
      route:
        - destination:
            port:
              number: 8090
            host: "host.docker.internal"
---
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "ra-product-composite"
spec:
selector:
matchLabels:
app: "product-composite"
jwtRules:
- issuer: "http://uaa:8090/uaa/oauth/token"
jwksUri: "http://host.docker.internal:8090/uaa/token_keys"
forwardOriginalToken: true
Workaround 2:
I have worked around the issue by using the FQDN (the fully qualified domain name) in the host name. But this does not solve my problem, because it ties the configuration file to a namespace (I use multiple namespaces and I need to have only one configuration file).
In any case, my current line is:
jwksUri: "http://uaa.mynamespace.svc.cluster.local:8090/uaa/token_keys"
I'm sure this is a silly configuration parameter, but I'm not able to find it! Thanks in advance.

jwksUri: "http://uaa:8090/uaa/token_keys" will not work from istiod, because http://uaa will be interpreted as http://uaa.istio-system.svc.cluster.local. That's why your workaround are solving the problem.
I don't understand why your workaround 2 is not a sufficient solution. Let's say your uaa service runs in namespace auth. If you configure the jwksUri with uaa.auth.svc.cluster.local, every kubernetes pod is able to call it, regardless of it's namespace.
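If the only objection to the FQDN is that it hard-codes the namespace, the namespace part can be templated at render time instead. A minimal sketch using Helm ({{ .Release.Namespace }} is Helm templating, an assumption about your tooling, not plain YAML):
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: ra-product-composite
spec:
  selector:
    matchLabels:
      app: product-composite
  jwtRules:
    - issuer: "http://uaa:8090/uaa/oauth/token"
      # the namespace is filled in per release, so one template serves all namespaces
      jwksUri: "http://uaa.{{ .Release.Namespace }}.svc.cluster.local:8090/uaa/token_keys"
      forwardOriginalToken: true
kustomize overlays or envsubst would achieve the same: only the rendering step needs to know the namespace, not the Istio configuration itself.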

I had a very similar issue, which was caused by a PeerAuthentication that set mtls.mode = STRICT for all pods. This caused the istiod pod to fail to retrieve the keys (istiod does not seem to use mTLS when it performs the HTTP GET on the jwksUri).
The solution was to set a PeerAuthentication with mtls.mode = PERMISSIVE on the Pod hosting the jwksUri (which in my case was dex).
Some example YAML:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default-mtls
  namespace: my-namespace
spec:
  ## no `selector` is set, so STRICT applies to all Pods in `my-namespace`
  mtls:
    mode: STRICT
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: dex-mtls
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      my_label: dex
  mtls:
    ## the dex pods must allow incoming non-mTLS traffic because istiod reads the JWKS keys from:
    ## http://dex.my-namespace.svc.cluster.local:5556/dex/keys
    mode: PERMISSIVE

Related

How to access Prometheus & Grafana via the Istio ingress gateway?

I have installed Prometheus and Grafana through Helm.

I used the command below to bring up the pod:
kubectl create deployment grafana --image=docker.io/grafana/grafana:5.4.3 -n monitoring
Then I used the command below to create a ClusterIP service:
kubectl expose deployment grafana --type=ClusterIP --port=80 --target-port=3000 --protocol=TCP -n monitoring
Then I used the VirtualService below:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
    - "*"
  gateways:
    - cogtiler-gateway.skydeck
  http:
    - match:
        - uri:
            prefix: /grafana
      route:
        - destination:
            port:
              number: 3000
            host: grafana
kubectl apply -f grafana-virtualservice.yaml -n monitoring
Output:
virtualservice.networking.istio.io/grafana created
Now, when I try to access it, I get the error below from Grafana:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_path setting includes subpath
3. If you have a local dev build make sure you build frontend using: npm run dev, npm run watch, or npm run build
4. Sometimes restarting grafana-server can help
The easiest solution that works out of the box would be to configure this with a grafana host and the / prefix.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http-grafana
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
    - "grafana.example.com"
  gateways:
    - grafana-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: grafana
            port:
              number: 80
As you mentioned in the comments that you want to use path-based routing, something like my.com/grafana, that's also possible to configure. You can use an Istio rewrite for that.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http-grafana
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
    - "*"
  gateways:
    - grafana-gateway
  http:
    - match:
        - uri:
            prefix: /grafana
      rewrite:
        uri: /
      route:
        - destination:
            host: grafana
            port:
              number: 80
But, according to this GitHub issue, you would additionally have to configure Grafana for that, as it won't work correctly without the proper Grafana configuration.
I found a way to configure Grafana with a different URL using the GF_SERVER_ROOT_URL environment variable in the Grafana deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana
    spec:
      containers:
        - image: docker.io/grafana/grafana:5.4.3
          name: grafana
          env:
            - name: GF_SERVER_ROOT_URL
              value: "%(protocol)s://%(domain)s/grafana/"
          resources: {}
Also there is a Virtual Service and Gateway for that deployment.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http-grafana
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
spec:
  hosts:
    - "*"
  gateways:
    - grafana-gateway
  http:
    - match:
        - uri:
            prefix: /grafana/
      rewrite:
        uri: /
      route:
        - destination:
            host: grafana
            port:
              number: 80
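As a side note on this design: newer Grafana releases (6.3+, an assumption worth verifying; the 5.4.3 image above predates it) can serve directly from a sub-path, which would let you keep prefix: /grafana end to end and drop the Istio rewrite. A hedged sketch of the env: section of the grafana Deployment above (both variables map to documented grafana.ini settings, server.root_url and server.serve_from_sub_path):
env:
  - name: GF_SERVER_ROOT_URL
    value: "%(protocol)s://%(domain)s/grafana/"
  - name: GF_SERVER_SERVE_FROM_SUB_PATH   # requires Grafana 6.3 or later
    value: "true"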
You need to create a Gateway to allow routing between the istio-ingressgateway and your VirtualService.
Something along the lines of:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: istio-system
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - my.domain.com
You also need a DNS entry for your domain (my.domain.com) that points to the IP address of your istio-ingressgateway.
When your browser hits my.domain.com, it is directed to the istio-ingressgateway. The istio-ingressgateway inspects the Host field of the request and routes it to grafana (according to the VirtualService rules).
You can run kubectl get svc -n istio-system | grep istio-ingressgateway to get the public IP of your ingress gateway.
If you want to enable TLS, you need to provision a TLS certificate for your domain (easiest with cert-manager). Then you can use an HTTPS redirect in your gateway, like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: whatever
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - my.domain.com
      tls:
        httpsRedirect: true
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - my.domain.com
      tls:
        mode: SIMPLE
        # Name of the secret containing the TLS certificate + keys. The secret must
        # exist in the same namespace as the istio-ingressgateway (probably the
        # istio-system namespace). This secret can be created by cert-manager,
        # or you can create a self-signed certificate and add it manually to the
        # browser's trusted certificates.
        credentialName: my-domain-tls
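For reference, a minimal sketch of how cert-manager could create the my-domain-tls secret referenced by credentialName above; the ClusterIssuer name letsencrypt-prod is a hypothetical placeholder, and the Certificate must live in the ingress gateway's namespace:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-domain-tls
  namespace: istio-system        # same namespace as the istio-ingressgateway
spec:
  secretName: my-domain-tls      # becomes the credentialName of the Gateway
  dnsNames:
    - my.domain.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod       # hypothetical issuer name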
Then your VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
    - "my.domain.com"
  gateways:
    - ingress
  http:
    - match:
        - uri:
            prefix: /grafana
      route:
        - destination:
            port:
              number: 3000
            host: grafana

Kubernetes Istio exposure not working with Virtualservice and Gateway

So we have the following use case running on Istio 1.8.2/Kubernetes 1.18:
Our cluster is exposed via an external load balancer on Azure. When we expose the app the following way, it works:
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    ...
  name: frontend
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: applicationname
  template:
    metadata:
      labels:
        app: appname
        name: frontend
        customer: customername
    spec:
      imagePullSecrets:
        - name: yadayada
      containers:
        - name: frontend
          image: yadayada
          imagePullPolicy: Always
          ports:
            - name: https
              protocol: TCP
              containerPort: 80
          resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: frontend
  labels:
    name: frontend-svc
    customer: customername
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    name: frontend
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend
  namespace: frontend
  annotations:
    kubernetes.io/ingress.class: istio
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: "customer.domain.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: frontend-svc
              servicePort: 80
  tls:
    - hosts:
        - "customer.domain.com"
      secretName: certificate
When we start using a VirtualService and Gateway, we fail to make it work for some reason. We want to use VirtualServices and Gateways because they offer more flexibility and options (like URL rewriting). Other apps running on Istio don't have this issue (they are much simpler as well), and we don't have a NetworkPolicy in place (yet). We simply cannot reach the webpage. Does anyone have an idea? The VirtualService and Gateway are below; the other two ReplicaSets are not shown because they are not the problem:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: null
  name: virtualservice-name
  namespace: frontend
spec:
  gateways:
    - frontend
  hosts:
    - customer.domain.com
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: frontend
            port:
              number: 80
          weight: 100
    - match:
        - uri:
            prefix: /api/
      route:
        - destination:
            host: backend
            port:
              number: 8080
          weight: 100
    - match:
        - uri:
            prefix: /auth/
      route:
        - destination:
            host: keycloak
            port:
              number: 8080
          weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend
  namespace: frontend
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http2
        protocol: HTTP
      tls:
        httpsRedirect: true
      hosts:
        - "customer.domain.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: PASSTHROUGH
        credentialName: customer-cert
      hosts:
        - "customer.domain.com"
Your Gateway specifies PASSTHROUGH, but your VirtualService provides an HTTP route. This means the TLS connection is not terminated by the Gateway, while the VirtualService expects terminated TLS. See also this somewhat similar question:
How do I properly HTTPS secure an application when using Istio?
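For illustration, a sketch of the corrected Gateway with TLS terminated at the gateway (SIMPLE mode), so that the HTTP routes of the VirtualService can apply; it assumes customer-cert is a TLS secret readable by the ingress gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend
  namespace: frontend
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http2
        protocol: HTTP
      tls:
        httpsRedirect: true
      hosts:
        - "customer.domain.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE                    # terminate TLS here instead of PASSTHROUGH
        credentialName: customer-cert   # TLS secret in the ingress gateway's namespace
      hosts:
        - "customer.domain.com"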
#user140547 Correct, we changed that now. But we still couldn't access the application.
We found out that one of the important services was not receiving gateway traffic, since it wasn't set up correctly. It was our first Istio deployment with multiple services, so we thought each of them needed its own Gateway. Little did we know that one Gateway was more than enough...

Istio: How to redirect to HTTPS except for /.well-known/acme-challenge

I want the traffic that comes to my cluster as HTTP to be redirected to HTTPS. However, the cluster receives requests from hundreds of domains that change dynamically (creating new certs with cert-manager). So I want the redirect to happen only when the URI doesn't have the prefix /.well-known/acme-challenge.
I am using one gateway that listens on 443 and another gateway that listens on 80 and sends the HTTP traffic to an acme-solver virtual service.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - site1.com
      port:
        name: https-site1.com
        number: 443
        protocol: HTTPS
      tls:
        credentialName: cert-site1.com
        mode: SIMPLE
    - hosts:
        - site2.com
      port:
        name: https-site2.com
        number: 443
        protocol: HTTPS
      tls:
        credentialName: cert-site2.com
        mode: SIMPLE
    ...
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: acme-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - '*'
      port:
        name: http
        number: 80
        protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: acme-solver
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - acme-gateway
  http:
    - match:
        - uri:
            prefix: /.well-known/acme-challenge
      route:
        - destination:
            host: acme-solver.istio-system.svc.cluster.local
            port:
              number: 8089
    - redirect:
        authority: # Should redirect to https://$HOST, but I don't know how to get the $HOST
How can I do that using istio?
Looking into the documentation:
The HTTP-01 challenge can only be done on port 80. Allowing clients to specify arbitrary ports would make the challenge less secure, and so it is not allowed by the ACME standard.
As a workaround, please consider using the DNS-01 challenge:
a) it only makes sense to use DNS-01 challenges if your DNS provider has an API you can use to automate updates.
b) with this approach you should consider the additional security risk stated in the docs:
Pros:
You can use this challenge to issue certificates containing wildcard domain names.
It works well even if you have multiple web servers.
Cons:
Keeping API credentials on your web server is risky.
Your DNS provider might not offer an API.
Your DNS API may not provide information on propagation times.
As mentioned here:
In order to be automatic, though, the software that requests the certificate will also need to be able to modify the DNS records for that domain. In order to modify the DNS records, that software will also need to have access to the credentials for the DNS service (e.g. the login and password, or a cryptographic token), and those credentials will have to be stored wherever the automation takes place. In many cases, this means that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies.
I would also suggest another approach: use a simple nginx pod that redirects all HTTP traffic to HTTPS.
There is a tutorial on Medium with an nginx configuration you might try to use.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
      listen 80 default_server;
      server_name _;
      return 301 https://$host$request_uri;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: redirect
  labels:
    app: redirect
spec:
  ports:
    - port: 80
      name: http
  selector:
    app: redirect
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redirect
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redirect
  template:
    metadata:
      labels:
        app: redirect
    spec:
      containers:
        - name: redirect
          image: nginx:stable
          resources:
            requests:
              cpu: "100m"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: config
      volumes:
        - name: config
          configMap:
            name: nginx-config
Additionally, you would have to change your VirtualService to send all traffic except prefix /.well-known/acme-challenge to nginx.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: acme-solver
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - acme-gateway
  http:
    - name: "acmesolver"
      match:
        - uri:
            prefix: /.well-known/acme-challenge
      route:
        - destination:
            host: acme-solver.istio-system.svc.cluster.local
            port:
              number: 8089
    - name: "nginx"
      route:
        - destination:
            host: redirect # the nginx Service defined above
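As an alternative to the nginx pod: newer Istio releases added a scheme field to HTTPRedirect (an assumption to verify against your Istio version), which preserves the original host and path and therefore sidesteps the $HOST problem entirely:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: acme-solver
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - acme-gateway
  http:
    - name: "acmesolver"
      match:
        - uri:
            prefix: /.well-known/acme-challenge
      route:
        - destination:
            host: acme-solver.istio-system.svc.cluster.local
            port:
              number: 8089
    - name: "https-redirect"
      redirect:
        scheme: https        # keeps the original host and path
        redirectCode: 301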

K8S: Routing with Istio returns 404

I'm new to the k8s world.
In my dev environment, I use nginx as a proxy (with CORS configs and header forwarding) for the different microservices I have (all made with Spring Boot). In a k8s cluster, do I have to replace it with Istio?
I'm trying to run a simple microservice (for now) and use Istio for routing to it. I've installed Istio on Google Cloud.
If I navigate to IstioIP/auth/api/v1 it returns 404.
This is my yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        name: http
        number: 80
        protocol: HTTP
      hosts:
        - '*'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
    - "*"
  gateways:
    - gateway
  http:
    - match:
        - uri:
            prefix: /auth
      route:
        - destination:
            host: auth-srv
            port:
              number: 8082
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
  labels:
    app: auth-srv
spec:
  ports:
    - name: http
      port: 8082
  selector:
    app: auth-srv
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-srv
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth-srv
        version: v1
    spec:
      containers:
        - name: auth-srv
          image: gcr.io/{{MY_PROJECT_ID}}/auth-srv:1.5
          imagePullPolicy: IfNotPresent
          env:
            - name: JAVA_OPTS
              value: '-DZIPKIN_SERVER=http://zipkin:9411'
          ports:
            - containerPort: 8082
          livenessProbe:
            httpGet:
              path: /api/v1
              port: 8082
            initialDelaySeconds: 60
            periodSeconds: 5
It looks like Istio doesn't know anything about that URL, which is why you are getting a 404 response.
If you look closer at the configuration in the VirtualService, you have configured Istio to match on the path prefix /auth, and Istio forwards the full original path to the container without stripping the prefix. So a request to ISTIOIP/auth does reach your microservice application, but ISTIOIP/auth/api/v1 arrives at the app as /auth/api/v1, while the application (judging by the liveness probe) serves /api/v1; see the sketch below. Here is an image describing the traffic flow and why you are getting a 404 response.
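If the application really serves its endpoints under /api/v1, as the liveness probe suggests, a rewrite that strips the /auth prefix may be what is missing. A sketch under that assumption:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
    - "*"
  gateways:
    - gateway
  http:
    - match:
        - uri:
            prefix: /auth/
      rewrite:
        uri: /               # strip the /auth prefix before forwarding
      route:
        - destination:
            host: auth-srv
            port:
              number: 8082
With this, a request to ISTIOIP/auth/api/v1 reaches the container as /api/v1.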

Traefik 2 http to https redirect with tls not working

I want to set up an HTTP to HTTPS redirect in one IngressRoute, but with the configuration below, when I try to access the HTTP endpoint, Traefik returns a 404 not found error. If I remove the tls section, the redirect works but TLS doesn't.
Can I have both working?
traefik version 2.1.0-rc2
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: console-web
  namespace: dev
  labels:
    app: console-web
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`console.example.com`)
      kind: Rule
      services:
        - name: console-web
          port: 8080
      middlewares:
        - name: https-redirect
  tls:
    secretName: example-com-tls
This is an old issue, however this might help someone. It might not work directly, as I have not tested it. For Kubernetes it should work the following way: first you define how the middleware works.
Untested code:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-redirectscheme
spec:
  redirectScheme:
    scheme: https
Then define the IngressRoutes:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress1
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`somehost`)
      kind: Rule
      services:
        - name: console-web
          port: 8080
  tls:
    secretName: example-com-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress2
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`somehost`)
      middlewares:
        - name: test-redirectscheme
      kind: Rule
      services:
        - name: console-web
          port: 80
Two IngressRoutes are needed, because one redirects the traffic to the other. I also suppose that if you don't have two ports, you can reuse the previous one, as it is going to be redirected to HTTPS anyway. Let me know if it does not work.
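As a design note: Traefik 2.2 and later (an assumption to verify; the question uses 2.1.0-rc2, which predates it) can perform the HTTP-to-HTTPS redirect globally at the entrypoint level in the static configuration, avoiding the second IngressRoute:
# Static configuration (e.g. traefik.yml); the entrypoint names match the question.
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"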
After spending hours on this 404 issue for the HTTP endpoint on Docker, I found this: https://stackoverflow.com/a/62093408/2442649