Kubernetes nginx ingress + oauth2 external auth timing out

I am attempting to protect a service's status page with an oauth2_proxy, using Azure AD as the external auth provider. Currently, if I browse to the public URL of the app (https://sub.domain.com/service/hangfire) I get a 504 gateway timeout, when it should be redirecting me to authenticate.
I had been mostly following this guide for reference: https://msazure.club/protect-kubernetes-webapps-with-azure-active-directory-aad-authentication/
If I disable the annotations that direct the authentication, I can get to the public status page without a problem. If I browse to https://sub.domain.com/oauth2, I get a prompt to authenticate with my provider, which I would expect. I am not sure where the issue lies in the ingress config, but I was unable to find any similar cases to this online, on Stack Overflow or otherwise.
In this case, everything (oauth deployment, service, and ingress rules) lives in a 'dev' namespace except the actual ingress deployment, which lives in its own namespace. I don't suspect this makes a difference, but SSL termination is handled by a gateway outside the cluster.
oauth2 deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - name: oauth2-proxy
        image: quay.io/pusher/oauth2_proxy:v3.2.0
        imagePullPolicy: IfNotPresent
        args:
        - --provider=azure
        - --email-domain=domain.com
        - --upstream=http://servicename
        - --http-address=0.0.0.0:4180
        - --azure-tenant=id
        - --client-id=id
        - --client-secret=number
        env:
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: secret
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
Ingress rules:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://sub.domain.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://sub.domain.com/oauth2/start?rd=$https://sub.domain.com/service/hangfire"
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /service/hangfire
        backend:
          serviceName: service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
I am getting 504 errors when I browse to the url but I do not see any errors in the ingress pods.

I ended up finding the resolution here: https://github.com/helm/charts/issues/5958
I had to use the internal service address for the auth-url, which I had not seen mentioned anywhere else.
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
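For context, a minimal sketch of what the annotations on the protected ingress end up looking like with this change (assuming, as in the line above, that the proxy Service is called oauth2-proxy and lives in the development namespace; the sign-in URL stays external so the browser can still be redirected to it):
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
  nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=https://sub.domain.com/service/hangfire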

This is what I've been doing with my OAuth2 proxy for Azure AD:
annotations:
  kubernetes.io/ingress.class: "nginx"
  ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
And I've been using this OAuth2 proxy deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - env:
        - name: OAUTH2_PROXY_PROVIDER
          value: azure
        - name: OAUTH2_PROXY_AZURE_TENANT
          value: xxx
        - name: OAUTH2_PROXY_CLIENT_ID
          value: yyy
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: zzz
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: anyrandomstring
        - name: OAUTH2_PROXY_HTTP_ADDRESS
          value: "0.0.0.0:4180"
        - name: OAUTH2_PROXY_UPSTREAM
          value: "http://where_to_redirect_to:443"
        image: machinedata/oauth2_proxy:latest
        imagePullPolicy: IfNotPresent
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP

My setup is similar to 4c74356b41's
oauth2-proxy deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --azure-tenant=TENANT-GUID
        - --email-domain=company.com
        - --http-address=0.0.0.0:4180
        - --provider=azure
        - --upstream=file:///dev/null
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              key: client-id
              name: oauth2-proxy
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              key: client-secret
              name: oauth2-proxy
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              key: cookie-secret
              name: oauth2-proxy
        image: quay.io/pusher/oauth2_proxy:v3.1.0
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          name: http # named port so the Service's targetPort: http below resolves
          protocol: TCP
oauth2-proxy service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: oauth2-proxy
  type: ClusterIP
oauth2-proxy ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  rules:
  - host: myapp.hostname.net
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
oauth2-proxy configuration
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
data:
  # Values below are fake
  client-id: AAD_CLIENT_ID
  client-secret: AAD_CLIENT_SECRET
  cookie-secret: COOKIE_SECRET
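Note that the data field of a Secret must contain base64-encoded values. As an alternative sketch (same key names assumed), stringData accepts plain strings and the API server encodes them for you:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
stringData:
  client-id: AAD_CLIENT_ID
  client-secret: AAD_CLIENT_SECRET
  cookie-secret: COOKIE_SECRET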
Application using AAD Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
  labels:
    app: myapp
  name: myapp
  namespace: monitoring
spec:
  rules:
  - host: myapp.hostname.net
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80
        path: /
  tls:
  - hosts:
    - myapp.hostname.net
An additional step that needs to be done is to add the redirect URI to the AAD app registration: navigate to your AAD App Registration in the Azure portal > Authentication > add https://myapp.hostname.net/oauth2/callback to Redirect URIs > Save.
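The same redirect URI can also be added with the Azure CLI; a rough sketch (the app ID is a placeholder, and the exact flag name varies between CLI versions, e.g. older releases used --reply-urls): az ad app update --id <APP_ID> --web-redirect-uris https://myapp.hostname.net/oauth2/callback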

Related

Redirect oauth2 proxy to custom page in case of 500 server error

I would like to either customise the existing 500 error page or, if that is not possible, redirect to my own custom page.
I am using Okta OIDC and everything is set up on a Kubernetes cluster. I could not find where this 500 error page can be configured.
Below is my POC for a somewhat similar setup, but using the GitHub provider.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      app: demo
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
        version: new
    spec:
      containers:
      - name: nginx
        image: nikk007/nginxwebsite
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: demo
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway-configuration
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: selfsigned-cert-tls-secret
    hosts:
    - "*" # Domain name of the external website
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: istio-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: istio-system
spec:
  dnsNames:
  - mymy.com
  secretName: selfsigned-cert-tls-secret
  issuerRef:
    name: test-selfsigned
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: myvs # "just" a name for this virtualservice
  namespace: default
spec:
  hosts:
  - "*" # The Service DNS (ie the regular K8S Service) name that we're applying routing rules to.
  gateways:
  - ingress-gateway-configuration
  http:
  - route:
    - destination:
        host: oauth2-proxy # The Target DNS name
        port:
          number: 4180
---
kind: DestinationRule # Defining which pods should be part of each subset
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: grouping-rules-for-our-photograph-canary-release # This can be anything you like.
  namespace: default
spec:
  host: myservice.default.svc.cluster.local # Service
  subsets:
  - labels: # SELECTOR.
      version: new # find pods with label "safe"
    name: new-group
  - labels:
      version: old
    name: old-group
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=github
        - --email-domain=*
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        # Register a new application
        # https://github.com/settings/applications/new
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          value: <myapp-client-id>
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: <my-app-secret>
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: <cookie-secret-I-generated-via-python>
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
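A note on the cookie secret referenced in the deployment above: one common way to generate a value oauth2-proxy accepts (32 random bytes, URL-safe base64 encoded) is a Python one-liner, for example: python -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'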
I am using cert-manager for self-signed certificates.
I am using the Istio ingress gateway instead of a k8s ingress controller.
Everything is working fine: I can access the Istio gateway, which directs to the oauth2-proxy page, which directs to the GitHub login.
If there is an issue with the authorization provider, we get a 500 error page from oauth2-proxy. I want to configure oauth2-proxy to redirect to my custom page (or, failing that, to customise oauth2-proxy's internal HTML error page) in case of a 500 error. I tried kubectl exec into the oauth2-proxy pod but could not locate its HTML, and I don't have root access inside the pod.
If my own app throws a 500 error, I can handle that by configuring its nginx.
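One possible direction (a sketch, not something from the question): oauth2-proxy supports a --custom-templates-dir flag, so the built-in error page can be replaced by mounting your own error.html (and optionally sign_in.html) from a ConfigMap into the pod. The ConfigMap name and mount path below are placeholders, added to the container and pod spec shown above:
        args:
        - --custom-templates-dir=/templates
        volumeMounts:
        - name: templates
          mountPath: /templates
      volumes:
      - name: templates
        configMap:
          name: oauth2-proxy-templates # must contain error.html, optionally sign_in.html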

How to implement istio authorization using oauth2 and keycloak

I have been trying to implement Istio authorization using OAuth2 and Keycloak. I have followed a few articles related to this: API Authentication: Configure Istio IngressGateway, OAuth2-Proxy and Keycloak; Authorization Policy.
Expected output: my idea is to implement Keycloak authentication with oauth2-proxy acting as an external auth provider in the Istio ingress gateway.
When a user tries to access my app at <ingress host>/app, it should automatically redirect to the Keycloak login page.
How do I properly redirect the page to the Keycloak login screen for authentication?
Problem:
When I try to access <ingress host>/app, the page takes 10 seconds to load and then returns status 403 access denied.
If I remove the authorization policy (kubectl delete -f authorization-policy.yaml) within those 10 seconds, it redirects to the Keycloak login screen.
oauth2.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth-proxy
  name: oauth-proxy
spec:
  type: NodePort
  selector:
    app: oauth-proxy
  ports:
  - name: http-oauthproxy
    port: 4180
    nodePort: 31023
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: oauth-proxy
  name: oauth-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "oauth-proxy"
  template:
    metadata:
      labels:
        app: oauth-proxy
    spec:
      containers:
      - name: oauth-proxy
        image: "quay.io/oauth2-proxy/oauth2-proxy:v7.2.0"
        ports:
        - containerPort: 4180
        args:
        - --http-address=0.0.0.0:4180
        - --upstream=http://test-web-app:3000
        - --set-xauthrequest=true
        - --pass-host-header=true
        - --pass-access-token=true
        env:
        # OIDC Config
        - name: "OAUTH2_PROXY_PROVIDER"
          value: "keycloak-oidc"
        - name: "OAUTH2_PROXY_OIDC_ISSUER_URL"
          value: "http://192.168.1.2:31020/realms/my_login_realm"
        - name: "OAUTH2_PROXY_CLIENT_ID"
          value: "my_nodejs_client"
        - name: "OAUTH2_PROXY_CLIENT_SECRET"
          value: "JGEQtkrdIc6kRSkrs89BydnfsEv3VoWO"
        # Cookie Config
        - name: "OAUTH2_PROXY_COOKIE_SECURE"
          value: "false"
        - name: "OAUTH2_PROXY_COOKIE_SECRET"
          value: "ZzBkN000Wm0pQkVkKUhzMk5YPntQRUw_ME1oMTZZTy0="
        - name: "OAUTH2_PROXY_COOKIE_DOMAINS"
          value: "*"
        # Proxy config
        - name: "OAUTH2_PROXY_EMAIL_DOMAINS"
          value: "*"
        - name: "OAUTH2_PROXY_WHITELIST_DOMAINS"
          value: "*"
        - name: "OAUTH2_PROXY_HTTP_ADDRESS"
          value: "0.0.0.0:4180"
        - name: "OAUTH2_PROXY_SET_XAUTHREQUEST"
          value: "true"
        - name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER
          value: "true"
        - name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY
          value: "true"
        - name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
          value: "true"
        - name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER
          value: "true"
keycloak.yaml
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  type: NodePort
  selector:
    app: keycloak
  ports:
  - name: http-keycloak
    port: 8080
    nodePort: 31020
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:17.0.0
        ports:
        - containerPort: 8080
        args: ["start-dev"]
        env:
        - name: KEYCLOAK_ADMIN
          value: "admin"
        - name: KEYCLOAK_ADMIN_PASSWORD
          value: "admin"
istio-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    extensionProviders:
    - name: "oauth2-proxy"
      envoyExtAuthzHttp:
        service: "oauth-proxy.default.svc.cluster.local"
        port: "4180" # The default port used by oauth2-proxy.
        includeHeadersInCheck: ["authorization", "cookie", "x-forwarded-access-token", "x-forwarded-user", "x-forwarded-email", "x-forwarded-proto", "proxy-authorization", "user-agent", "x-forwarded-host", "from", "x-forwarded-for", "accept", "x-auth-request-redirect"] # headers sent to the oauth2-proxy in the check request.
        headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token", "x-forwarded-access-token"] # headers sent to the backend application when the request is allowed.
        headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when the request is denied.
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-vs
spec:
  hosts:
  - '*'
  gateways:
  - istio-system/test-gateway
  http:
  - match:
    - uri:
        prefix: /oauth2
    route:
    - destination:
        host: oauth-proxy.default.svc.cluster.local
        port:
          number: 4180
  - match:
    - uri:
        prefix: /app
    route:
    - destination:
        host: test-web-app.default.svc.cluster.local
        port:
          number: 3000
authorization-policy.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: example-auth-policy
spec:
  action: CUSTOM
  provider:
    name: "oauth2-proxy"
  rules:
  - to:
    - operation:
        paths: ["/app"]
        notPaths: ["/oauth2/*"]
The redirection issue was solved by updating the authorization policy:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: example-auth-policy
  namespace: istio-system
spec:
  action: CUSTOM
  provider:
    name: "oauth2-proxy"
  rules:
  - to:
    - operation:
        paths: ["/app"]
        notPaths: ["/oauth2/*"]
  selector:
    matchLabels:
      app: istio-ingressgateway
Two things were changed: the policy is now created in the istio-system namespace instead of the workload namespace (default in my case), and the previously missing selector with matchLabels for the ingress gateway was added.

How To Enable SSL on Google Kubernetes Engine while using ingress-nginx?

I am using GKE with ingress-nginx (https://kubernetes.github.io/ingress-nginx/). I tried many tutorials using cert-manager but was unable to get it working.
Could you give me an example YAML file if you have been able to get SSL working with ingress-nginx on Google Kubernetes Engine?
You can use this as a starting point and expand on it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arecord-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arecord
  template:
    metadata:
      labels:
        app: arecord
    spec:
      containers:
      - name: arecord
        image: gcr.io/clear-shell-346807/arecord
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: arecord-srv
spec:
  selector:
    app: arecord
  ports:
  - name: arecord
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: ssl-ip
spec:
  tls:
  - hosts:
    - vareniyam.me
    secretName: echo-tls
  rules:
  - host: vareniyam.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: arecord-srv
            port:
              number: 8080
You have said you're using nginx ingress, but your ingress class is saying gce:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
You have not indicated which ClusterIssuer or Issuer you want to use. cert-manager only issues certificates after you tell it that you want one created.
I am unsure what tutorials you have tried, but have you tried looking at the cert-manager docs here: https://cert-manager.io/docs/
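As a starting point, a minimal sketch of a Let's Encrypt ClusterIssuer using the HTTP-01 solver with ingress-nginx could look like this (the email and names are placeholders):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
With that in place, the Ingress would use kubernetes.io/ingress.class: "nginx", add the annotation cert-manager.io/cluster-issuer: letsencrypt-prod, and keep the tls section with its secretName; cert-manager then creates and renews the certificate in that Secret.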

have a local domain rgpd.local for my local app with Kubernetes / Traefik

Right now, I can test my local apps with: http://localhost/rgpd/api/...
Here is my rgpd-ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: rgpd-ingress
  namespace: rgpd
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /rgpd/api
        backend:
          serviceName: rgpd-api-local
          servicePort: "rgpd-port"
I would like to change it to: rgpd.local
So I changed it to:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: rgpd-ingress
  namespace: rgpd
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: rgpd.local
    http:
      paths:
      - backend:
          serviceName: rgpd-api-local
          servicePort: "port-rgpd"
But now, if I enter the old URL I get a 404, and I can't connect to the new one either.
Here are my files:
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: rgpd-api-local
  name: rgpd-api-local
  namespace: rgpd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: rgpd-api-local
    spec:
      containers:
      - image: rgpd_api:local
        envFrom:
        - secretRef:
            name: rgpd-env
        name: rgpd-api-local
        ports:
        - containerPort: 10000
And service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rgpd-api-local
  name: rgpd-api-local
  namespace: rgpd
spec:
  type: NodePort
  ports:
  - name: "rgpd-port"
    port: 10000
  selector:
    app: rgpd-api-local
Why doesn't it work?
Add an entry to your hosts file (/etc/hosts):
127.0.0.1 rgpd.local
Update /etc/hosts, or just supply the Host header, i.e. curl -H "Host: rgpd.local" 127.0.0.1
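One more thing worth checking in the manifests above: the Service names its port "rgpd-port", but the second Ingress references servicePort: "port-rgpd". When referenced by name, the servicePort must match the name defined on the Service, e.g.:
        backend:
          serviceName: rgpd-api-local
          servicePort: "rgpd-port"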

Traefik Ingress bad rule

I am working with Kubernetes on Google Cloud. I am trying to set up Traefik as the Ingress for the cluster. I based the code on the official docs (https://docs.traefik.io/user-guide/kubernetes/), but I have an error with the rule for PathPrefixStrip.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-api
  labels:
    app: auth-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
        version: v0.0.1
    spec:
      containers:
      - name: auth-api
        image: gcr.io/r10c-dev/auth-api:v0.1
        ports:
        - containerPort: 3000
        env:
        - name: AMQP_SERVICE
          value: broker:5672
        - name: CACHE_SERVICE
          value: session-cache
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
  - name: http
    targetPort: 80
    port: 3000
  type: NodePort
  selector:
    app: auth-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  rules:
  - http:
      paths:
      - path: /auth
        backend:
          serviceName: auth-api
          servicePort: http
In the GKE console it seems the deployment is linked to the service and the ingress, but when I try to access the IP, the server returns an error 502.
Also, I am using a static IP:
gcloud compute addresses create web-static-ip --global
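One thing that stands out in the manifests above (an observation based only on what is shown): the container listens on port 3000, but the Service declares targetPort: 80, so requests proxied by Traefik would hit a port nothing is listening on, which is consistent with a 502. A Service matching the container port would look roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  type: NodePort
  selector:
    app: auth-api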