authentication session not found in Keycloak-Gatekeeper configuration - kubernetes

I am trying to use Keycloak as my identity provider for accessing the k8s dashboard, with keycloak-gatekeeper handling the authentication.
My Keycloak Gatekeeper configuration, deployed as pod pod1, is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: db
namespace: kubernetes-dashboard
spec:
replicas: 1
selector:
matchLabels:
app: db
template:
metadata:
labels:
app: db
spec:
containers:
- name: gatekeeper
image: carlosedp/keycloak-gatekeeper:latest
args:
- --config=/etc/keycloak-gatekeeper.conf
ports:
- containerPort: 3000
name: service
volumeMounts:
- name: gatekeeper-config
mountPath: /etc/keycloak-gatekeeper.conf
subPath: keycloak-gatekeeper.conf
- name: gatekeeper-files
mountPath: /html
volumes:
- name : gatekeeper-config
configMap:
name: gatekeeper-config
- name : gatekeeper-files
configMap:
name: gatekeeper-files
---
apiVersion: v1
kind: ConfigMap
metadata:
name: gatekeeper-config
namespace: kubernetes-dashboard
creationTimestamp: null
data:
keycloak-gatekeeper.conf: |+
discovery-url: http://keycloak.<IP>.nip.io:8080/auth/realms/k8s-realm
skip-openid-provider-tls-verify: true
client-id: k8s-client
client-secret: <SECRET>
listen: 0.0.0.0:3000
debug: true
ingress.enabled: true
enable-refresh-tokens: true
enable-logging: true
enable-json-logging: true
redirection-url: http://k8s.dashboard.com/dashboard/
secure-cookie: false
encryption-key: vGcLt8ZUdPX5fXhtLZaPHZkGWHZrT6aa
enable-encrypted-token: false
upstream-url: http://127.0.0.0:80
forbidden-page: /html/access-forbidden.html
headers:
Bearer : <bearer token>
resources:
- uri: /*
groups:
- k8s-group
---
apiVersion: v1
kind: ConfigMap
metadata:
name: gatekeeper-files
namespace: kubernetes-dashboard
creationTimestamp: null
data:
access-forbidden.html: html file
---
apiVersion: v1
kind: Service
metadata:
labels:
app: db
name: db
namespace: kubernetes-dashboard
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: service
selector:
app: db
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: db
namespace: kubernetes-dashboard
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
spec:
rules:
- host: k8s.dashboard.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: db
port:
number: 80
When I access k8s.dashboard.com I get this URL, and it navigates me to the Keycloak page for authentication:
http://keycloak.<IP>.nip.io:8080/auth/realms/k8s-realm/protocol/openid-connect/auth?client_id=k8s-client&redirect_uri=http%3A%2F%2Fk8s.dashboard.com%2Fdashboard%2Foauth%2Fcallback&response_type=code&scope=openid+email+profile&state=23c4b0ff-259f-45c0-934a-98fc780363e6
After logging in to Keycloak, it throws a 404 page, and the URL it redirects to is:
http://k8s.dashboard.com/dashboard/oauth/callback?state=23c4b0ff-259f-45c0-934a-98fc780363e6&session_state=4c698f90-4e03-44a9-b231-01a418f0d569&code=9ab6a309-98ad-4d61-989f-116f0b151522.4c698f90-4e03-44a9-b231-01a418f0d569.520395c1-d601-4502-981a-b1c08861ab3d
As you can see, the extra /oauth/callback endpoint is appended after k8s.dashboard.com/dashboard. If I remove /oauth/callback then it redirects me to the k8s dashboard login page.
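If it helps narrow things down, a rough check (based on the manifests above) would be to hit the /oauth/callback path on the gatekeeper container directly; it is gatekeeper's own callback endpoint, so gatekeeper itself should answer it rather than the dashboard:
kubectl -n kubernetes-dashboard port-forward deploy/db 3000:3000
# in another shell; gatekeeper should answer this itself (a redirect or 4xx without a code), not a dashboard 404
curl -i http://localhost:3000/oauth/callback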
My pod log file is as follows:
{"level":"info","ts":1626074166.8771496,"msg":"client request","latency":0.000162174,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/favicon.ico"}
{"level":"info","ts":1626074166.9270697,"msg":"client request","latency":0.000054857,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
{"level":"error","ts":1626074176.2642884,"msg":"no session found in request, redirecting for authorization","error":"authentication session not found"}
{"level":"info","ts":1626074176.264481,"msg":"client request","latency":0.000197256,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/"}
{"level":"info","ts":1626074176.2680361,"msg":"client request","latency":0.000041917,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
{"level":"error","ts":1626074185.140641,"msg":"no session found in request, redirecting for authorization","error":"authentication session not found"}
{"level":"info","ts":1626074185.1407247,"msg":"client request","latency":0.000091046,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/"}
{"level":"info","ts":1626074185.1444902,"msg":"client request","latency":0.000042129,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
{"level":"error","ts":1626074202.1827211,"msg":"no session found in request, redirecting for authorization","error":"authentication session not found"}
{"level":"info","ts":1626074202.182838,"msg":"client request","latency":0.000122802,"status":307,"bytes":95,"client_ip":"172.17.0.8:43276","method":"GET","path":"/favicon.ico"}
{"level":"info","ts":1626074202.1899397,"msg":"client request","latency":0.000032541,"status":307,"bytes":330,"client_ip":"172.17.0.8:43276","method":"GET","path":"/oauth/authorize"}
What is wrong here? Any help will be appreciated!

Related

How to configure tls with traefik in kubernetes using yaml?

I am having trouble exposing a service over HTTP and HTTPS using Traefik 2.9 in Kubernetes.
The HTTP endpoint mostly works (I somehow introduced CORS errors once I tried to add HTTPS, but that is not my main concern). The HTTPS ingress is broken and I can't find any indication of why it's not working. The Traefik pod doesn't log any errors and the dotnet service isn't receiving the requests. Both routes show up in the dashboard and websecure is displayed as having TLS enabled.
I'm excluding the ClusterRole, ServiceAccount, and ClusterRoleBinding because I believe they are configured correctly; the HTTP route wouldn't work otherwise.
Traefik config:
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-deployment
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-account
containers:
- name: traefik
image: traefik:v2.9
args:
- --api.insecure
- --providers.kubernetesingress
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --entrypoints.websecure.http.tls
ports:
- name: web
containerPort: 80
- name: dashboard
containerPort: 8080
- name: websecure
containerPort: 443
Traefik services:
apiVersion: v1
kind: Service
metadata:
name: traefik-dashboard-service
spec:
type: LoadBalancer
ports:
- port: 8080
targetPort: dashboard
selector:
app: traefik
---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-service
spec:
type: LoadBalancer
loadBalancerIP: 10.10.1.38
ports:
- targetPort: web
port: 80
name: http
- targetPort: websecure
port: 443
name: https
selector:
app: traefik
Secret for tls:
apiVersion: v1
data:
comptech.pem: <contents of pem file base64 encoded>
comptech.crt: <contents of crt file base64 encoded>
comptech.key: <contents of key file base64 encoded>
kind: Secret
metadata:
name: comptech-cert
namespace: default
type: Opaque
Service for dotnet application:
apiVersion: v1
kind: Service
metadata:
name: control-api-service
spec:
ports:
- name: http
port: 80
targetPort: 5000
protocol: TCP
- name: https
port: 443
targetPort: 5000
protocol: TCP
selector:
app: control-api
Ingresses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: control-api-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: control-api-service
port:
name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: control-api-secure-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: control-api-service
port:
name: https
tls:
- secretName: comptech-cert
My hope here is that someone with much more experience with traefik/tls will be able to quickly realize what I'm doing incorrectly. Any input is greatly appreciated!
UPDATE:
The firewall was only allowing HTTP traffic; we reconfigured it to support HTTPS and it now responds with Traefik's default certificate. So I can reach the container, but TLS is still not configured with my supplied cert.
The pem file is not needed, and the crt file had been generated incorrectly with openssl. The command that worked for me was: openssl crl2pkcs7 -nocrl -certfile comptech.pem | openssl pkcs7 -print_certs -out cert.crt
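For reference, one way to end up with a Secret whose keys match the tls.crt/tls.key paths used in dyn.yaml below is to let kubectl build a TLS-typed Secret from the corrected files (file names here are from my setup; adjust as needed):
kubectl create secret tls comptech-cert --cert=cert.crt --key=comptech.key -n default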
Pointing the Ingress at the https port of control-api-service was not working and needed to be changed to the http port.
A ConfigMap needed to be created for the Traefik deployment to work correctly:
apiVersion: v1
kind: ConfigMap
metadata:
name: traefik-config
labels:
name: traefik-config
namespace: default
data:
dyn.yaml: |
# https://doc.traefik.io/traefik/https/tls/
tls:
stores:
default:
defaultCertificate:
certFile: '/certs/tls.crt'
keyFile: '/certs/tls.key'
Finally, the ConfigMap and Secret must be mounted in the Traefik deployment like below:
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-deployment
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-account
containers:
- name: traefik
image: traefik:v2.9
args:
- --api.insecure
- --providers.kubernetesingress
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --entrypoints.websecure.http.tls
- --providers.file.filename=/config/dyn.yaml
ports:
- name: web
containerPort: 80
- name: dashboard
containerPort: 8080
- name: websecure
containerPort: 443
volumeMounts:
- name: comptech-cert-volume
mountPath: /certs
- name: traefik-config-volume
mountPath: /config
volumes:
- name: comptech-cert-volume
secret:
secretName: comptech-cert
- name: traefik-config-volume
configMap:
name: traefik-config
In my setup, I use the IngressRoute CRD implementation from Traefik.
The CRDs were automatically installed when I set up the Traefik controller using Helm.
Would using this be an option in your setup? You can check whether the CRDs already exist with the command below on your k8s cluster.
kubectl get crd
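For example, to narrow the output down to Traefik's CRDs (assuming they live in the traefik.containo.us API group, as in the manifests below):
kubectl get crd | grep traefik.containo.us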
Below is a snippet from one of my projects where I also use a custom wildcard certificate from a secret using the IngressRoute manifest.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blue-api-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: "Host(`blue.domain.com`) && PathPrefix(`/swagger`)"
kind: Rule
services:
- name: blue-api-svc
port: 80
tls:
secretName: bluecert
You can also include other custom resources that Traefik provides; the complete set of available configuration can be seen here. For example, below is the same snippet with Middleware and TLSOption resources included to improve the security of the endpoint.
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
name: tlsoptions
spec:
minVersion: VersionTLS12
cipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_AES_256_GCM_SHA384
- TLS_AES_128_GCM_SHA256
- TLS_CHACHA20_POLY1305_SHA256
- TLS_FALLBACK_SCSV
curvePreferences:
- CurveP521
- CurveP384
sniStrict: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: security
spec:
headers:
frameDeny: true
sslRedirect: true
browserXssFilter: true
contentTypeNosniff: true
stsIncludeSubdomains: true
stsPreload: true
stsSeconds: 31536000
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blue-api-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: "Host(`blue.domain.com`) && PathPrefix(`/swagger`)"
kind: Rule
services:
- name: blue-api-svc
port: 80
middlewares:
- name: security
tls:
secretName: bluecert
options:
name: tlsoptions

How to implement istio authorization using oauth2 and keycloak

I have been trying to implement Istio authorization using OAuth2 and Keycloak. I have followed a few articles related to this: API Authentication: Configure Istio IngressGateway, OAuth2-Proxy and Keycloak, Authorization Policy.
Expected output: My idea is to implement Keycloak authentication with oauth2-proxy used as an external auth provider in the Istio ingress gateway.
When a user tries to access my app at <ingress host>/app, it should automatically redirect to the Keycloak login page.
How do I properly redirect the page to the Keycloak login screen for authentication?
Problem:
When I try to access <ingress host>/app, the page takes 10 seconds to load and then returns status 403 access denied.
If I remove the authorization policy (kubectl delete -f authorization-policy.yaml) within those 10 seconds, it redirects to the Keycloak login screen.
oauth2.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth-proxy
name: oauth-proxy
spec:
type: NodePort
selector:
app: oauth-proxy
ports:
- name: http-oauthproxy
port: 4180
nodePort: 31023
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: oauth-proxy
name: oauth-proxy
spec:
replicas: 1
selector:
matchLabels:
app: "oauth-proxy"
template:
metadata:
labels:
app: oauth-proxy
spec:
containers:
- name: oauth-proxy
image: "quay.io/oauth2-proxy/oauth2-proxy:v7.2.0"
ports:
- containerPort: 4180
args:
- --http-address=0.0.0.0:4180
- --upstream=http://test-web-app:3000
- --set-xauthrequest=true
- --pass-host-header=true
- --pass-access-token=true
env:
# OIDC Config
- name: "OAUTH2_PROXY_PROVIDER"
value: "keycloak-oidc"
- name: "OAUTH2_PROXY_OIDC_ISSUER_URL"
value: "http://192.168.1.2:31020/realms/my_login_realm"
- name: "OAUTH2_PROXY_CLIENT_ID"
value: "my_nodejs_client"
- name: "OAUTH2_PROXY_CLIENT_SECRET"
value: "JGEQtkrdIc6kRSkrs89BydnfsEv3VoWO"
# Cookie Config
- name: "OAUTH2_PROXY_COOKIE_SECURE"
value: "false"
- name: "OAUTH2_PROXY_COOKIE_SECRET"
value: "ZzBkN000Wm0pQkVkKUhzMk5YPntQRUw_ME1oMTZZTy0="
- name: "OAUTH2_PROXY_COOKIE_DOMAINS"
value: "*"
# Proxy config
- name: "OAUTH2_PROXY_EMAIL_DOMAINS"
value: "*"
- name: "OAUTH2_PROXY_WHITELIST_DOMAINS"
value: "*"
- name: "OAUTH2_PROXY_HTTP_ADDRESS"
value: "0.0.0.0:4180"
- name: "OAUTH2_PROXY_SET_XAUTHREQUEST"
value: "true"
- name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER
value: "true"
- name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY
value: "true"
- name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
value: "true"
- name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER
value: "true"
keycloak.yaml
apiVersion: v1
kind: Service
metadata:
name: keycloak
spec:
type: NodePort
selector:
app: keycloak
ports:
- name: http-keycloak
port: 8080
nodePort: 31020
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
spec:
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:17.0.0
ports:
- containerPort: 8080
args: ["start-dev"]
env:
- name: KEYCLOAK_ADMIN
value: "admin"
- name: KEYCLOAK_ADMIN_PASSWORD
value: "admin"
istio-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
extensionProviders:
- name: "oauth2-proxy"
envoyExtAuthzHttp:
service: "oauth-proxy.default.svc.cluster.local"
port: "4180" # The default port used by oauth2-proxy.
includeHeadersInCheck: ["authorization", "cookie","x-forwarded-access-token","x-forwarded-user","x-forwarded-email","x-forwarded-proto","proxy-authorization","user-agent","x-forwarded-host","from","x-forwarded-for","accept","x-auth-request-redirect"] # headers sent to the oauth2-proxy in the check request.
headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token","x-forwarded-access-token"] # headers sent to backend application when request is allowed.
headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when request is denied.
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: test-gateway
namespace : istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: gateway-vs
spec:
hosts:
- '*'
gateways:
- istio-system/test-gateway
http:
- match:
- uri:
prefix: /oauth2
route:
- destination:
host: oauth-proxy.default.svc.cluster.local
port:
number: 4180
- match:
- uri:
prefix: /app
route:
- destination:
host: test-web-app.default.svc.cluster.local
port:
number: 3000
authorization-policy.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: example-auth-policy
spec:
action: CUSTOM
provider:
name: "oauth2-proxy"
rules:
- to:
- operation:
paths: ["/app"]
notPaths: ["/oauth2/*"]
The redirection issue was solved by updating the authorization policy:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: example-auth-policy
namespace: istio-system
spec:
action: CUSTOM
provider:
name: "oauth2-proxy"
rules:
- to:
- operation:
paths: ["/app"]
notPaths: ["/oauth2/*"]
selector:
matchLabels:
app: istio-ingressgateway
I set the namespace to istio-system instead of the workload namespace (it was default in my case).
I had also forgotten to add the selector's matchLabels.
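To double-check that the updated policy actually landed in istio-system with the right selector, something like this works:
kubectl apply -f authorization-policy.yaml
kubectl -n istio-system get authorizationpolicy example-auth-policy -o yaml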

Ingress connection refused

I want to deploy a software application with Docker and Kubernetes and I have a big issue.
I have a master node and a worker node; inside, I have a Python application running on port 5000 with its Service.
I want to expose my app externally and I'm using Ingress. When I curl the nginx Deployment and the nginx Service I get a response, but when I curl the Ingress I get connection refused.
Thank you so much.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: nginx
name: nginx
namespace: lazy-trading
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: nginx
strategy: {}
template:
metadata:
labels:
io.kompose.service: nginx
spec:
containers:
- image: nginx:1.17-alpine
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
resources: {}
volumeMounts:
- mountPath: /etc/nginx/conf.d
readOnly: true
name: nginx-conf
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: nginx.conf
path: nginx.conf
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: nginx
name: nginx
namespace: lazy-trading
spec:
ports:
- name: "8094"
port: 8094
targetPort: 80
selector:
io.kompose.service: nginx
status:
loadBalancer: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-nginx
namespace: lazy-trading
spec:
rules:
- host: lazytrading.local
http:
paths:
- path: /
backend:
serviceName: nginx
servicePort: 8094
---
apiVersion: v1
kind: ConfigMap
metadata:
labels:
io.kompose.service: nginx
name: nginx-conf
namespace: lazy-trading
data:
nginx.conf: |
server {
# Lazy Trading configuration ---
location = /api/v1/lazytrading {
return 302 /api/v1/lazytrading/;
}
location /api/v1/lazytrading/ {
proxy_pass http://{{ .Values.deployment.name }}:{{ .Values.service.ports.port }}/;
}
}
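The kind of checks I am running look roughly like this (IPs are placeholders):
# hitting the nginx Service directly works
curl http://<nginx-cluster-ip>:8094/api/v1/lazytrading/
# going through the Ingress does not
curl -H "Host: lazytrading.local" http://<ingress-controller-ip>/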

Ingress creating health check on HTTP instead of TCP

I am trying to run 3 containers in my GKE cluster. I have them exposed via a network load balancer, and on top of that I am using Ingress so I can reach my services from different domains with SSL certs on them.
Here is the complete manifest
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
replicas: 3
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: web
image: us-east4-docker.pkg.dev/web:e856485 # docker image
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
name: cms
spec:
replicas: 3
selector:
matchLabels:
app: cms
template:
metadata:
labels:
app: cms
spec:
containers:
- name: cms
image: us-east4-docker.pkg.dev/cms:4e1fe2f # docker image
ports:
- containerPort: 8055
env:
- name : DB
value : "postgres"
- name : DB_HOST
value : 10.142.0.3
- name : DB_PORT
value : "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
name: api
spec:
replicas: 3
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
image: us-east4-docker.pkg.dev/api:4e1fe2f # docker image
ports:
- containerPort: 8080
env:
- name : HOST
value : "0.0.0.0"
- name : PORT
value : "8080"
- name : NODE_ENV
value : production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
name: web-lb
annotations:
cloud.google.com/neg: '{"ingress": true}'
labels:
app: web
spec:
ports:
- port: 3000
protocol: TCP
targetPort: 3000
selector:
app: web
type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
name: cms-lb
annotations:
cloud.google.com/neg: '{"ingress": true}'
labels:
app: cms
spec:
ports:
- port: 8055
protocol: TCP
targetPort: 8055
selector:
app: cms
type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
name: api-lb
annotations:
cloud.google.com/neg: '{"ingress": true}'
labels:
app: api
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: api
type: NodePort
---
apiVersion: v1
data:
tls.crt: abc
tls.key: abc
kind: Secret
metadata:
name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
tls.crt: abc
tls.key: abc
kind: Secret
metadata:
name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
tls.crt: abc
tls.key: abc
kind: Secret
metadata:
name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
# If the class annotation is not specified it defaults to "gce".
kubernetes.io/ingress.class: "gce"
spec:
tls:
- secretName: api-cert
- secretName: cms-cert
- secretName: web-cert
rules:
- host: web-gke.dev
http:
paths:
- pathType: ImplementationSpecific
backend:
service:
name: web-lb
port:
number: 3000
- host: cms-gke.dev
http:
paths:
- pathType: ImplementationSpecific
backend:
service:
name: cms-lb
port:
number: 8055
- host: api-gke.dev
http:
paths:
- pathType: ImplementationSpecific
backend:
service:
name: api-lb
port:
number: 8080
The containers are accessible through the (network) load balancer, but from the Ingress (L7 LB) the health check is failing.
I tried editing the health checks manually from HTTP:80 to TCP 8080/8055/3000 for the 3 services and it works.
But eventually the Ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck.
Any help?
The first thing I would like to mention is that you need to recheck your implementation: from what I see, you are creating an Ingress, which will create a LoadBalancer, and this Ingress is using three services of type LoadBalancer, each of which will also create its own LoadBalancer (assuming the default behaviour, unless you applied the well-known workaround of deleting the service's LoadBalancer manually after it is created).
I don't think this is correct unless you need that design for some reason, so my suggestion is to change your service types to NodePort.
As for answering your question, what you are missing is:
You need to implement a BackendConfig with custom HealthCheck configurations.
1- Create the BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: api-lb-backendconfig
spec:
healthCheck:
checkIntervalSec: INTERVAL
timeoutSec: TIMEOUT
healthyThreshold: HEALTH_THRESHOLD
unhealthyThreshold: UNHEALTHY_THRESHOLD
type: PROTOCOL
requestPath: PATH
port: PORT
2- Use this config in your service(s):
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/backend-config: '{"ports": {
"PORT_NAME_1":"api-lb-backendconfig"
}}'
spec:
ports:
- name: PORT_NAME_1
port: PORT_NUMBER_1
protocol: TCP
targetPort: TARGET_PORT
Once you apply these configurations, your Ingress's LoadBalancer will be created with the BackendConfig "api-lb-backendconfig".
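For example, a filled-in pair for the api-lb service could look like this (the interval, thresholds, request path and port name are illustrative values, not something taken from your manifest):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"http": "api-lb-backendconfig"}}'
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: api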
Consider this documentation page as your reference.

Kubernetes nginx ingress + oauth2 external auth timing out

I am attempting to protect a service's status page with an oauth2_proxy, using Azure AD as the external auth provider. Currently, if I browse to the public URL of the app (https://sub.domain.com/service/hangfire) I get a 504 gateway timeout, where it should be directing me to authenticate.
I had been mostly following this guide for reference: https://msazure.club/protect-kubernetes-webapps-with-azure-active-directory-aad-authentication/
If I disable the annotations that direct the authentication, I can get to the public status page without a problem. If I browse to https://sub.domain.com/oauth2, I get a prompt to authenticate with my provider, which I would expect. I am not sure where the issue lies in the ingress config but I was unable to find any similar cases to this online, stackoverflow or otherwise.
In this case, everything (oauth deployment, service, and ingress rules) lives in a 'dev' namespace except the actual ingress deployment, which lives in its own namespace. I don't suspect this makes a difference, but SSL termination is handled by a gateway outside the cluster.
oauth2 deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- name: oauth2-proxy
image: quay.io/pusher/oauth2_proxy:v3.2.0
imagePullPolicy: IfNotPresent
args:
- --provider=azure
- --email-domain=domain.com
- --upstream=http://servicename
- --http-address=0.0.0.0:4180
- --azure-tenant=id
- --client-id=id
- --client-secret=number
env:
- name: OAUTH2_PROXY_COOKIE_SECRET
value: secret
ports:
- containerPort: 4180
protocol : TCP
---
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
ports:
- name: http
port: 4180
protocol: TCP
targetPort: 4180
selector:
app: oauth2-proxy
Ingress rules:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service-ingress1
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-url: https://sub.domain.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=$https://sub.domain.com/service/hangfire"
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /service/hangfire
backend:
serviceName: service
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service-oauth2-proxy
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /oauth2
backend:
serviceName: oauth2-proxy
servicePort: 4180
I am getting 504 errors when I browse to the url but I do not see any errors in the ingress pods.
I ended up finding the resolution here: https://github.com/helm/charts/issues/5958
I had to use the internal service address for the auth-url, which I had not seen mentioned anywhere else.
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
This is what I've been doing with my oAuth proxy for Azure AD:
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
And I've been using this oAuth proxy:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oauth2-proxy
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- env:
- name: OAUTH2_PROXY_PROVIDER
value: azure
- name: OAUTH2_PROXY_AZURE_TENANT
value: xxx
- name: OAUTH2_PROXY_CLIENT_ID
value: yyy
- name: OAUTH2_PROXY_CLIENT_SECRET
value: zzz
- name: OAUTH2_PROXY_COOKIE_SECRET
value: anyrandomstring
- name: OAUTH2_PROXY_HTTP_ADDRESS
value: "0.0.0.0:4180"
- name: OAUTH2_PROXY_UPSTREAM
value: "http://where_to_redirect_to:443"
image: machinedata/oauth2_proxy:latest
imagePullPolicy: IfNotPresent
name: oauth2-proxy
ports:
- containerPort: 4180
protocol: TCP
My setup is similar to 4c74356b41's
oauth2-proxy deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- args:
- --azure-tenant=TENANT-GUID
- --email-domain=company.com
- --http-address=0.0.0.0:4180
- --provider=azure
- --upstream=file:///dev/null
env:
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: client-id
name: oauth2-proxy
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: client-secret
name: oauth2-proxy
- name: OAUTH2_PROXY_COOKIE_SECRET
valueFrom:
secretKeyRef:
key: cookie-secret
name: oauth2-proxy
image: quay.io/pusher/oauth2_proxy:v3.1.0
name: oauth2-proxy
oauth2-proxy service
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: oauth2-proxy
type: ClusterIP
oauth2-proxy ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
spec:
rules:
- host: myapp.hostname.net
http:
paths:
- backend:
serviceName: oauth2-proxy
servicePort: 80
path: /oauth2
oauth2-proxy configuration
apiVersion: v1
kind: Secret
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
data:
# Values below are fake
client-id: AAD_CLIENT_ID
client-secret: AAD_CLIENT_SECRET
cookie-secret: COOKIE_SECRET
Application using AAD Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$request_uri
nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
labels:
app: myapp
name: myapp
namespace: monitoring
spec:
rules:
- host: myapp.hostname.net
http:
paths:
- backend:
serviceName: myapp
servicePort: 80
path: /
tls:
- hosts:
- myapp.hostname.net
An additional step that needs to be done is to add the redirect URI to the AAD App registration. Navigate to your AAD App Registration in the Azure portal > Authentication > Add https://myapp.hostname.net/oauth2/callback to Redirect URIs > Save
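Once the redirect URI is saved, a quick sanity check (using the ingress above) is that the proxy's sign-in endpoint answers with a redirect to Microsoft's login page:
curl -sI https://myapp.hostname.net/oauth2/start | grep -i '^location'
# expect a Location header pointing at login.microsoftonline.com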