Istio JWT authentication passes traffic without token - kubernetes

Background:
There was a similar question here, but it didn't offer a solution to my issue.
I have deployed an application to my Istio cluster and it is working as expected. I wanted to enable JWT authentication, so I adapted the instructions here to my use case.
ingressgateway:
I first applied the following policy to the istio-ingressgateway. This worked and any traffic sent without a JWT token was blocked.
kubectl apply -n istio-system -f mypolicy.yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: core-api-policy
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
    ports:
    - number: 80
  origins:
  - jwt:
      issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
      jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
Once that worked I deleted this policy and installed a new policy for my service.
kubectl delete -n istio-system -f mypolicy.yaml
service/core-api-service:
After editing the above policy, changing the namespace and target as below, I reapplied the policy to the correct namespace.
Policy:
kubectl apply -n solarmori -f mypolicy.yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: core-api-policy
  namespace: solarmori
spec:
  targets:
  - name: core-api-service
    ports:
    - number: 80
  origins:
  - jwt:
      issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
      jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
Service:
apiVersion: v1
kind: Service
metadata:
  name: core-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: api-svc-port
    targetPort: api-app-port
  selector:
    app: core-api-app
The outcome of this change didn't appear to affect the processing of traffic at all: I was still able to reach my service even though I did not provide a JWT.
I checked the istio-proxy container of my service's deployment, and there was no local_jwks created in the logs, as described here.
[procyclinsur#P-428 istio]$ kubectl logs -n solarmori core-api-app-5dd9666777-qhf5v -c istio-proxy | grep local_jwks
[procyclinsur#P-428 istio]$
If anyone knows where I am going wrong I would greatly appreciate any help.

For a Service to be part of Istio's service mesh, you need to fulfil a few requirements described in the official docs.
In your case, the service port name needs to be updated to the form
<protocol>[-<suffix>], with <protocol> being one of:
grpc
http
http2
https
mongo
mysql
redis
tcp
tls
udp
At that point, requests forwarded to the service will go through the service mesh; currently, they are simply handled by Kubernetes networking. A corrected Service manifest is sketched below.
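For example, a minimal sketch of the Service from the question with only the port name changed (assuming the application speaks plain HTTP on that port):
apiVersion: v1
kind: Service
metadata:
  name: core-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http-api-svc-port   # renamed from "api-svc-port"; the "http-" prefix tells Istio this is HTTP traffic
    targetPort: api-app-port
  selector:
    app: core-api-app
After reapplying the Service, traffic to this port should be routed through the Envoy sidecar, and the JWT policy should then be enforced.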

Related

azure AKS internal load balancer not responding requests

I have an AKS cluster, as well as a separate VM. The AKS cluster and the VM are in the same VNet (and the same subnet).
I deployed an echo server with the following YAML. I'm able to curl the pod directly with its VNet IP from the VM, but when trying that with the load balancer, nothing returns. Really not sure what I'm missing. Any help is appreciated.
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: ealen/echo-server
        ports:
        - name: http
          containerPort: 8080
I'm expecting that, when I curl the load balancer's VNet IP, I receive the same response as when I curl the pod IP directly.
Can you check your internal load balancer's health probe?
"For Kubernetes 1.24+ the services of type LoadBalancer with appProtocol HTTP/HTTPS will switch to use HTTP/HTTPS as health probe protocol (while before v1.24.0 it uses TCP). And / will be used as the default health probe request path. If your service doesn’t respond 200 for /, please ensure you're setting the service annotation service.beta.kubernetes.io/port_{port}_health-probe_request-path or service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path (applies to all ports) with the correct request path to avoid service breakage."
(ref: https://github.com/Azure/AKS/releases/tag/2022-09-11)
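For example, a minimal sketch of the echo-server Service with that annotation added (the path value here is an assumption; set it to a path your app returns 200 for):
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # applies to all ports; must be a path the app answers 200 on
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-server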
If you are using the nginx-ingress controller, try adding the same annotation as mentioned in the doc:
(https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration)
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--reuse-values \
--namespace <NAMESPACE> \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
Have you checked whether the pod's IP is correctly mapped as an endpoint to the service? You can check it using:
k describe svc echo-server -n test | grep Endpoints
If not, please check the labels and selectors of your actual deployment (rather than the resources put in the description).
If it is correctly mapped, are you sure that the VM you are using (_#tester) is in the correct subnet, one that can also reach the iLB IP 10.240.0.226?
Found the solution; the only thing I needed to do was add the following to the Service declaration:
externalTrafficPolicy: 'Local'
Full YAML below:
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: 'Local'
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: echo-server
Previously it was set to 'Cluster'.
Just got off a call with Azure support; it seems like a specific bug (it happens with newer versions of AKS). Posting the related link here: https://github.com/kubernetes/ingress-nginx/issues/8501

GKE Ingress configuration for HTTPS-enabled Applications leads to failed_to_connect_to_backend

I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.
The Google Cloud console reports that the backend is healthy, but if I connect to the given address, it returns 502 and a failed_to_connect_to_backend error appears in the Ingress logs.
These are my configurations:
FrontendConfig.yaml:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy
BackendConfig.yaml:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also for liveness and readiness probes)
    port: 8001
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: "my-frontendconfig"
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{"service-port":"HTTPS"}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enables the Cloud Native Load Balancer
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
  - protocol: TCP
    name: service-port
    port: 443
    targetPort: app-port # this port expects TLS traffic, no plain HTTP connections
The Deployment.yaml is omitted for brevity, but it defines a liveness and readiness Probe on another port, the one defined in the BackendConfig.yaml.
The interesting thing is: if I also expose this healthcheck port through the Service.yaml (mapped to port 80), point the default backend to port 80, and simply define a rule with path /* leading to port 443, everything seems to work just fine. But I don't want to expose the healthcheck port outside my cluster, since I also have some diagnostics information there.
Question: How can I be sure that, if I connect to the Ingress endpoint with https://MY_INGRESS_IP/, the traffic is routed to the HTTPS port of the service/application without getting the 502 error? Where did I fail to configure the Ingress?
There are a few elements to your question; I'll try to answer them here.
I don't want to expose the healthcheck port outside my cluster
The health check endpoint is technically not exposed outside the cluster; it's exposed inside Google's backbone so that the Google load balancers (configured via the Ingress) can reach it. You can verify that by doing a curl against https://INGRESS_IP/healthz; it will not work.
The traffic is routed exactly as it is to the HTTPS port of the service/application
The reason why 443 in your Service definition doesn't work but 80 does is that, when you expose the Service on port 443, the load balancer will fail to connect to a backend that doesn't present a proper certificate; your backend must also be configured to present a certificate to the load balancer to encrypt traffic. The secretName configured on the Ingress is the certificate used by clients to connect to the load balancer. The Google HTTP load balancer terminates the SSL connection and initiates a new connection to the backend using whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with SSL certificates, that connection will fail.
Overall, you don't need to encrypt traffic between load balancers and backends; it's doable but not needed, as Google encrypts that traffic at the network level anyway.
Actually, I solved it by attaching a managed certificate to the Ingress. It "magically" worked without any other change, using a Service of type ClusterIP.
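For reference, a minimal sketch of that approach (the certificate name and domain are assumptions; the annotation replaces the tls/secretName block on the Ingress from the question):
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert          # assumed name
  namespace: my-namespace
spec:
  domains:
  - my-app.example.com           # assumed domain; its DNS record must point at the Ingress IP
The Ingress then references it with the annotation networking.gke.io/managed-certificates: "my-managed-cert", and Google provisions and renews the certificate at the load balancer, so no TLS secret has to be maintained inside the cluster.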

How can istio mesh-virtual-service manage traffic from ingress-virtual-service?

I am defining canary routes in a mesh virtual service and wondering whether I can make them apply to ingress traffic (via an ingress virtual service) as well. I tried something like the below, but it does not work (all traffic from the ingress goes to the non-canary version):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
  - mesh
  hosts:
  - test-deployment-app.test-ns.svc.cluster.local
  http:
  - name: canary
    match:
    - headers:
        x-canary:
          exact: "true"
    - port: 8080
    headers:
      response:
        set:
          x-canary: "true"
    route:
    - destination:
        host: test-deployment-app-canary.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
  - name: stable
    route:
    - destination:
        host: test-deployment-app.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app-internal
  namespace: test-ns
spec:
  gateways:
  - istio-system/default-gateway
  hosts:
  - myapp.dev.bla
  http:
  - name: default
    route:
    - destination:
        host: test-deployment-app.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
So I am expecting the x-canary: true response header when I call myapp.dev.bla, but I don't see it.
Well, the answer is only partially inside the link you included. I think the essential thing to realize when working with Istio is what the Istio service mesh even is. The service mesh is every pod with an Istio envoy-proxy sidecar plus all the gateways (a gateway is a standalone envoy-proxy). They all know about each other because of istiod, so they can cooperate.
Any pod without an Istio sidecar (including ingress pods or, for example, kube-system pods) in your k8s cluster doesn't know anything about Istio or the service mesh. If such a pod wants to send traffic into the service mesh (so that traffic-management rules like yours apply), it must send it through an Istio gateway. A gateway is an object that creates a standard deployment + service; the pods in that deployment run a standalone envoy-proxy container.
The Gateway object is a very similar concept to a k8s Ingress, but it doesn't necessarily have to listen on a nodePort; you can also use it as an 'internal' gateway. A gateway serves as an entry point into your service mesh, for either external or internal traffic.
If you're using, for example, Nginx as the Ingress solution, you must reconfigure the Ingress rule to send traffic to one of the gateways instead of the target service, most likely to your mesh gateway. That is nothing other than a k8s Service inside the istio-gateway or istio-system namespace.
Alternatively, you can configure an Istio Gateway as the 'new' Ingress. As I'm not sure whether some default Istio Gateway listens on a nodePort, you need to check that (again in the istio-gateway or istio-system namespace). Or you can create a new Gateway just for your application and attach the VirtualService to that new gateway as well; a sketch of that option is below.
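A minimal sketch of that last option, assuming the default istio-ingressgateway and the host from the question (the Gateway name is an assumption):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-deployment-app-gateway   # assumed name
  namespace: test-ns
spec:
  selector:
    istio: ingressgateway              # binds to the default istio-ingressgateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - myapp.dev.bla
The VirtualService holding the canary rules would then list both gateways, e.g. gateways: [mesh, test-deployment-app-gateway], and add myapp.dev.bla to its hosts, so the same match rules apply to traffic entering through the gateway as to in-mesh traffic.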

Istio egress traffic is not routed through the istio-proxy sidecar

We have a minimal example of routing egress traffic to an external service outside of the service mesh:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: nexus-test
  namespace: REDACTED
spec:
  hosts:
  - nexus.REDACTED
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
But we are not seeing the traffic pass through the istio-proxy sidecar when using the command:
kubectl logs $SOURCE_POD -c istio-proxy | tail
Also, we are not seeing the traffic on the mixer when using the command:
kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep 'nexus'
as suggested in the documentation https://istio.io/docs/tasks/traffic-management/egress/egress-control/#access-an-external-https-service
Can anyone help us figure out what could be wrong?
Best regards,
rforberger
It works now. It was maybe conflicting with other services in the service mesh.

Kubernetes EKS Ingress and TLS

I'm trying to accomplish a VERY common task for an application:
Assign a certificate and secure it with TLS/HTTPS.
I've spent nearly a day scouring through documentation and trying multiple different tactics to get this working, but nothing works for me.
Initially I set up nginx-ingress on EKS using Helm by following the docs here: https://github.com/nginxinc/kubernetes-ingress. I tried to get the sample app (cafe) working using the following config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
The ingress and all supporting services/deployments worked fine, but there's one major thing missing: the ingress doesn't have an associated address/ELB:
NAME HOSTS ADDRESS PORTS AGE
cafe-ingress cafe.example.com 80, 443 12h
Service LoadBalancers create ELB resources, i.e.:
testnodeapp LoadBalancer 172.20.4.161 a64b46f3588fe... 80:32107/TCP 13h
However, the Ingress is not creating an address. How do I get an Ingress controller exposed externally on EKS to handle TLS/HTTPS?
I've replicated every step necessary to get up and running on EKS with a secure ingress. I hope this helps anybody else that wants to get their application on EKS quickly and securely.
To get up and running on EKS:
Deploy EKS using the CloudFormation template here. Keep in mind that I've restricted access with CidrIp: 193.22.12.32/32; change this to suit your needs.
Install Client Tools. Follow the guide here.
Configure the client. Follow the guide here.
Enable the worker nodes. Follow the guide here.
You can verify that the cluster is up and running and you are pointing to it by running:
kubectl get svc
Now you launch a test application with the nginx ingress.
NOTE: Everything is placed under the ingress-nginx namespace. Ideally this would be templated to build under different namespaces, but for the purposes of this example it works.
Deploy nginx-ingress:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
Fetch rbac.yml from here. Run:
kubectl apply -f rbac.yml
Have a certificate and key ready for testing. Create the necessary secret like so:
kubectl create secret tls cafe-secret --key mycert.key --cert mycert.crt -n ingress-nginx
Copy coffee.yaml from here. Copy coffee-ingress.yaml from here. Update the domain you want to run this under. Run them like so:
kubectl apply -f coffee.yaml
kubectl apply -f coffee-ingress.yaml
Update the CNAME for your domain to point to the ADDRESS for:
kubectl get ing -n ingress-nginx -o wide
Refresh the DNS cache and test the domain. You should get a secure page with request stats. I've replicated this multiple times, so if it fails to work for you, check the steps, config, and certificate. Also, check the logs on the nginx-ingress-controller* pod.
kubectl logs pod/nginx-ingress-controller-*********** -n ingress-nginx
That should give you some indication of what's wrong.
To make an Ingress resource work, the cluster must have an Ingress controller configured.
This is unlike other types of controllers, which typically run as part of the kube-controller-manager binary,
and which are typically started automatically as part of cluster creation.
For EKS with helm, you may try:
helm registry install quay.io/coreos/alb-ingress-controller-helm
Next, configure the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
spec:
  rules:
  - host: YOUR_DOMAIN
    http:
      paths:
      - path: /
        backend:
          serviceName: ingress-example-test
          servicePort: 80
  tls:
  - secretName: custom-tls-cert
    hosts:
    - YOUR_DOMAIN
Apply the config:
kubectl create -f ingress.yaml
Next, create the secret with TLS certificates:
kubectl create secret tls custom-tls-cert --key /path/to/tls.key --cert /path/to/tls.crt
and reference to them in the Ingress definition:
  tls:
  - secretName: custom-tls-cert
    hosts:
    - YOUR_DOMAIN
The following example configuration shows how to configure the Ingress controller:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
Next, apply the above configuration; then you can check the services for an exposed external IP:
kubectl get service nginx-controller -n kube-system
An external IP is an address that terminates at one of the Kubernetes nodes, as configured by externally managed routing mechanisms.
When configured within a service definition, once a request reaches a node, traffic is redirected to a service endpoint.
The Kubernetes documentation provides more examples.
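For completeness, a rough sketch of the Service that the --publish-service flag above points at (name, namespace, and selector are assumptions based on the Deployment args; on EKS a Service of type LoadBalancer provisions an ELB):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb           # matches --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb above
  namespace: kube-system
spec:
  type: LoadBalancer               # provisions an ELB with an external address on EKS
  selector:
    k8s-app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
Once the ELB is provisioned, kubectl get service -n kube-system should show its address in the EXTERNAL-IP column; point your domain's CNAME at that address.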