I have been trying to implement Istio authorization using OAuth2 and Keycloak. I have followed a few articles related to this: API Authentication: Configure Istio IngressGateway, OAuth2-Proxy and Keycloak, and Authorization Policy.
Expected output: My idea is to implement Keycloak authentication where oauth2-proxy acts as an external auth provider for the Istio ingress gateway.
When a user tries to access my app at <ingress host>/app, it should automatically redirect to the Keycloak login page.
How do I properly redirect the page to the Keycloak login screen for authentication?
Problem:
When I try to access <ingress host>/app, the page takes about 10 seconds to load and then returns status 403 (access denied).
If I remove the authorization policy (kubectl delete -f authorization-policy.yaml) within those 10 seconds, it redirects to the Keycloak login screen.
oauth2.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth-proxy
name: oauth-proxy
spec:
type: NodePort
selector:
app: oauth-proxy
ports:
- name: http-oauthproxy
port: 4180
nodePort: 31023
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: oauth-proxy
name: oauth-proxy
spec:
replicas: 1
selector:
matchLabels:
app: "oauth-proxy"
template:
metadata:
labels:
app: oauth-proxy
spec:
containers:
- name: oauth-proxy
image: "quay.io/oauth2-proxy/oauth2-proxy:v7.2.0"
ports:
- containerPort: 4180
args:
- --http-address=0.0.0.0:4180
- --upstream=http://test-web-app:3000
- --set-xauthrequest=true
- --pass-host-header=true
- --pass-access-token=true
env:
# OIDC Config
- name: "OAUTH2_PROXY_PROVIDER"
value: "keycloak-oidc"
- name: "OAUTH2_PROXY_OIDC_ISSUER_URL"
value: "http://192.168.1.2:31020/realms/my_login_realm"
- name: "OAUTH2_PROXY_CLIENT_ID"
value: "my_nodejs_client"
- name: "OAUTH2_PROXY_CLIENT_SECRET"
value: "JGEQtkrdIc6kRSkrs89BydnfsEv3VoWO"
# Cookie Config
- name: "OAUTH2_PROXY_COOKIE_SECURE"
value: "false"
- name: "OAUTH2_PROXY_COOKIE_SECRET"
value: "ZzBkN000Wm0pQkVkKUhzMk5YPntQRUw_ME1oMTZZTy0="
- name: "OAUTH2_PROXY_COOKIE_DOMAINS"
value: "*"
# Proxy config
- name: "OAUTH2_PROXY_EMAIL_DOMAINS"
value: "*"
- name: "OAUTH2_PROXY_WHITELIST_DOMAINS"
value: "*"
- name: "OAUTH2_PROXY_HTTP_ADDRESS"
value: "0.0.0.0:4180"
- name: "OAUTH2_PROXY_SET_XAUTHREQUEST"
value: "true"
- name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER
value: "true"
- name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY
value: "true"
- name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
value: "true"
- name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER
value: "true"
keycloak.yaml
apiVersion: v1
kind: Service
metadata:
name: keycloak
spec:
type: NodePort
selector:
app: keycloak
ports:
- name: http-keycloak
port: 8080
nodePort: 31020
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
spec:
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:17.0.0
ports:
- containerPort: 8080
args: ["start-dev"]
env:
- name: KEYCLOAK_ADMIN
value: "admin"
- name: KEYCLOAK_ADMIN_PASSWORD
value: "admin"
istio-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
extensionProviders:
- name: "oauth2-proxy"
envoyExtAuthzHttp:
service: "oauth-proxy.default.svc.cluster.local"
port: "4180" # The default port used by oauth2-proxy.
includeHeadersInCheck: ["authorization", "cookie","x-forwarded-access-token","x-forwarded-user","x-forwarded-email","x-forwarded-proto","proxy-authorization","user-agent","x-forwarded-host","from","x-forwarded-for","accept","x-auth-request-redirect"] # headers sent to the oauth2-proxy in the check request.
headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token","x-forwarded-access-token"] # headers sent to backend application when request is allowed.
headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when request is denied.
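Note that this extension provider definition only takes effect once the operator spec is (re)applied to the mesh. A hedged example of doing that (assuming istioctl is available and the file is saved as istio-operator.yaml), plus a quick check that the provider landed in the mesh config:
istioctl install -f istio-operator.yaml -y
kubectl -n istio-system get configmap istio -o yaml | grep -A 3 extensionProviders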
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: test-gateway
namespace : istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: gateway-vs
spec:
hosts:
- '*'
gateways:
- istio-system/test-gateway
http:
- match:
- uri:
prefix: /oauth2
route:
- destination:
host: oauth-proxy.default.svc.cluster.local
port:
number: 4180
- match:
- uri:
prefix: /app
route:
- destination:
host: test-web-app.default.svc.cluster.local
port:
number: 3000
authorization-policy.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: example-auth-policy
spec:
action: CUSTOM
provider:
name: "oauth2-proxy"
rules:
- to:
- operation:
paths: ["/app"]
notPaths: ["/oauth2/*"]
The redirection issue was solved by updating the authorization policy:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: example-auth-policy
namespace: istio-system
spec:
action: CUSTOM
provider:
name: "oauth2-proxy"
rules:
- to:
- operation:
paths: ["/app"]
notPaths: ["/oauth2/*"]
selector:
matchLabels:
app: istio-ingressgateway
The policy is now created in the istio-system namespace instead of the workload namespace (which was default in my case).
I had also forgotten to add the selector with matchLabels, so the policy now targets the ingress gateway workload.
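With the updated policy applied, a quick way to confirm the fix (a sketch; <ingress host> is the same placeholder as above) is to check that an unauthenticated request to /app now returns a 302 towards the oauth2/Keycloak sign-in instead of a 403:
curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' http://<ingress host>/app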
Related
I would like to be able to either customise the existing 500 error page or, if that is not possible, redirect to my own custom page.
I am using Okta OIDC and everything is set up on a Kubernetes cluster. I could not find where I can configure this 500 server error page.
Below is my POC for a somewhat similar setup, but using the GitHub provider.
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
spec:
selector:
matchLabels:
app: demo
replicas: 1
template:
metadata:
labels:
app: demo
version: new
spec:
containers:
- name: nginx
image: nikk007/nginxwebsite
---
apiVersion: v1
kind: Service
metadata:
name: myservice
spec:
selector:
app: demo
ports:
- name: http
port: 80
nodePort: 30080
type: NodePort
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ingress-gateway-configuration
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: selfsigned-cert-tls-secret
hosts:
- "*" # Domain name of the external website
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: test-selfsigned
namespace: istio-system
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: selfsigned-cert
namespace: istio-system
spec:
dnsNames:
- mymy.com
secretName: selfsigned-cert-tls-secret
issuerRef:
name: test-selfsigned
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
name: myvs # "just" a name for this virtualservice
namespace: default
spec:
hosts:
- "*" # The Service DNS (ie the regular K8S Service) name that we're applying routing rules to.
gateways:
- ingress-gateway-configuration
http:
- route:
- destination:
host: oauth2-proxy # The Target DNS name
port:
number: 4180
---
kind: DestinationRule # Defining which pods should be part of each subset
apiVersion: networking.istio.io/v1alpha3
metadata:
name: grouping-rules-for-our-photograph-canary-release # This can be anything you like.
namespace: default
spec:
host: myservice.default.svc.cluster.local # Service
subsets:
- labels: # SELECTOR.
version: new # find pods with label "safe"
name: new-group
- labels:
version: old
name: old-group
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- args:
- --provider=github
- --email-domain=*
- --upstream=file:///dev/null
- --http-address=0.0.0.0:4180
# Register a new application
# https://github.com/settings/applications/new
env:
- name: OAUTH2_PROXY_CLIENT_ID
value: <myapp-client-id>
- name: OAUTH2_PROXY_CLIENT_SECRET
value: <my-app-secret>
- name: OAUTH2_PROXY_COOKIE_SECRET
value: <cookie-secret-I-generated-via-python>
image: quay.io/oauth2-proxy/oauth2-proxy:latest
imagePullPolicy: Always
name: oauth2-proxy
ports:
- containerPort: 4180
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
ports:
- name: http
port: 4180
protocol: TCP
targetPort: 4180
selector:
app: oauth2-proxy
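A side note on the cookie secret placeholder above: oauth2-proxy expects a value that decodes to 16, 24 or 32 bytes. One common way to generate a suitable one (a sketch; any equivalent Python or openssl one-liner works) is:
openssl rand -base64 32 | tr -- '+/' '-_'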
I am using cert-manager for self-signed certificates.
I am using the Istio ingress gateway instead of a k8s ingress controller.
Everything is working fine: I am able to access the Istio gateway, which directs to the oauth2-proxy page, which directs to the GitHub login.
If there is an issue with the authorization provider, we get a 500 error page from oauth2-proxy. I want to configure oauth2-proxy to redirect to my custom page (or to customise oauth2-proxy's internal HTML error page) in case of a 500 error. I tried kubectl exec into the oauth2-proxy pod but could not locate the HTML, and I don't have root access inside the pod.
If my own app throws a 500 error I can handle it by configuring its nginx.
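For reference, oauth2-proxy can serve custom sign-in/error pages through its --custom-templates-dir option, so one hedged approach (the ConfigMap name and HTML here are purely illustrative) is to mount a ConfigMap with an error.html template and point the proxy at it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth2-proxy-templates   # illustrative name
data:
  error.html: |
    <html><body><h1>Sign-in failed</h1><p>Please try again or contact support.</p></body></html>
---
# In the oauth2-proxy Deployment (sketch):
#   args:
#   - --custom-templates-dir=/templates
#   volumeMounts:
#   - name: templates
#     mountPath: /templates
#   volumes:
#   - name: templates
#     configMap:
#       name: oauth2-proxy-templates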
I am having trouble exposing a service over HTTP and HTTPS using Traefik 2.9 in Kubernetes.
The HTTP endpoint mostly works (I somehow introduced CORS errors once I tried to add HTTPS, but that is not my main concern). The HTTPS ingress is broken and I can't find any indication of why it's not working. The Traefik pod doesn't log any errors and the dotnet service isn't receiving the requests. Both routes show up in the dashboard, and websecure is displayed as having TLS enabled.
I'm excluding the ClusterRole, ServiceAccount, and ClusterRoleBinding because I believe they're configured correctly; the HTTP route wouldn't work if they weren't.
Traefik config:
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-deployment
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-account
containers:
- name: traefik
image: traefik:v2.9
args:
- --api.insecure
- --providers.kubernetesingress
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --entrypoints.websecure.http.tls
ports:
- name: web
containerPort: 80
- name: dashboard
containerPort: 8080
- name: websecure
containerPort: 443
Traefik services:
apiVersion: v1
kind: Service
metadata:
name: traefik-dashboard-service
spec:
type: LoadBalancer
ports:
- port: 8080
targetPort: dashboard
selector:
app: traefik
---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-service
spec:
type: LoadBalancer
loadBalancerIP: 10.10.1.38
ports:
- targetPort: web
port: 80
name: http
- targetPort: websecure
port: 443
name: https
selector:
app: traefik
Secret for tls:
apiVersion: v1
data:
comptech.pem: <contents of pem file base64 encoded>
comptech.crt: <contents of crt file base64 encoded>
comptech.key: <contents of key file base64 encoded>
kind: Secret
metadata:
name: comptech-cert
namespace: default
type: Opaque
Service for dotnet application:
apiVersion: v1
kind: Service
metadata:
name: control-api-service
spec:
ports:
- name: http
port: 80
targetPort: 5000
protocol: TCP
- name: https
port: 443
targetPort: 5000
protocol: TCP
selector:
app: control-api
Ingresses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: control-api-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: control-api-service
port:
name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: control-api-secure-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: control-api-service
port:
name: https
tls:
- secretName: comptech-cert
My hope here is that someone with much more experience with traefik/tls will be able to quickly realize what I'm doing incorrectly. Any input is greatly appreciated!
UPDATE:
The firewall was only allowing HTTP traffic; we reconfigured it to support HTTPS and Traefik now responds, but with its default certificate. So I can hit the container, but TLS is still not configured with my supplied cert.
The pem file is not needed, and the crt file had been generated incorrectly with openssl. The command that worked for me was: openssl crl2pkcs7 -nocrl -certfile comptech.pem | openssl pkcs7 -print_certs -out cert.crt
Pointing the ingress at the https port of the control-api-service was not working and needed to be changed to the http port.
A ConfigMap needed to be created for the Traefik deployment to work correctly:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  labels:
    name: traefik-config
  namespace: default
data:
  dyn.yaml: |
    # https://doc.traefik.io/traefik/https/tls/
    tls:
      stores:
        default:
          defaultCertificate:
            certFile: '/certs/tls.crt'
            keyFile: '/certs/tls.key'
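Note that dyn.yaml above expects the mounted secret to contain keys named tls.crt and tls.key. One way to get the secret into that shape (a sketch, reusing cert.crt from the openssl step above and the original comptech.key) is to recreate it as a standard TLS secret:
kubectl delete secret comptech-cert -n default
kubectl create secret tls comptech-cert --cert=cert.crt --key=comptech.key -n default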
Finally the configmap and secret must be used in the traefik deployment like below:
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-deployment
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-account
containers:
- name: traefik
image: traefik:v2.9
args:
- --api.insecure
- --providers.kubernetesingress
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --entrypoints.websecure.http.tls
- --providers.file.filename=/config/dyn.yaml
ports:
- name: web
containerPort: 80
- name: dashboard
containerPort: 8080
- name: websecure
containerPort: 443
volumeMounts:
- name: comptech-cert-volume
mountPath: /certs
- name: traefik-config-volume
mountPath: /config
volumes:
- name: comptech-cert-volume
secret:
secretName: comptech-cert
- name: traefik-config-volume
configMap:
name: traefik-config
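Once this deployment has rolled out, the certificate actually being served can be checked from outside the cluster (a sketch; 10.10.1.38 is the loadBalancerIP from the service above and sub.domain.com is the host used in the ingresses):
openssl s_client -connect 10.10.1.38:443 -servername sub.domain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer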
In my setup, I use the IngressRoute CRD implementation from Traefik.
The CRDs were automatically installed when I set up the Traefik controller using Helm.
Would it be possible for you to use this in your setup? You can check whether the CRDs already exist with the command below on your k8s cluster.
kubectl get crd
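If the Helm chart installed them, the output should include entries such as ingressroutes.traefik.containo.us, middlewares.traefik.containo.us and tlsoptions.traefik.containo.us; filtering makes them easier to spot:
kubectl get crd | grep traefik.containo.us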
Below is a snippet from one of my projects where I also use a custom wildcard certificate from a secret using the IngressRoute manifest.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blue-api-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: Host(`blue.domain.com`) && PathPrefix(`/swagger`)
kind: Rule
services:
- name: blue-api-svc
port: 80
tls:
secretName: bluecert
You can also include other custom resources that are available from Traefik; the complete set of available configuration options is listed in the Traefik documentation. For example, below is the same snippet with Middleware and TLSOption resources included to improve the security of the endpoint.
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
name: tlsoptions
spec:
minVersion: VersionTLS12
cipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_AES_256_GCM_SHA384
- TLS_AES_128_GCM_SHA256
- TLS_CHACHA20_POLY1305_SHA256
- TLS_FALLBACK_SCSV
curvePreferences:
- CurveP521
- CurveP384
sniStrict: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: security
spec:
headers:
frameDeny: true
sslRedirect: true
browserXssFilter: true
contentTypeNosniff: true
stsIncludeSubdomains: true
stsPreload: true
stsSeconds: 31536000
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blue-api-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: Host(`blue.domain.com`) && PathPrefix(`/swagger`)
kind: Rule
services:
- name: blue-api-svc
port: 80
middlewares:
- name: security
tls:
secretName: bluecert
options:
name: tlsoptions
I'm using Kiali on Istio/Kubernetes to monitor my mesh.
I need to route to 2 different pods based on the URL content, and for this I'm following the tutorial Split large virtual services and destination rules into multiple resources. So, I created 2 VirtualServices for the same host and gateway:
Service 1:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-product-composite
spec:
hosts:
- "kubernetes.b-thinking.com"
gateways:
- gw-ingress
http:
- match:
- uri:
prefix: /product-composite
route:
- destination:
port:
number: 80
host: product-composite
Service 2:
apiVersion: apps/v1
kind: Deployment
metadata:
name: uaa
spec:
replicas: 1
selector:
matchLabels:
app: uaa
template:
metadata:
labels:
app: uaa
spec:
containers:
- name: uaa
image: bthinking/uaa
imagePullPolicy: Never
env:
- name: LOGGING_LEVEL_ROOT
value: DEBUG
ports:
- containerPort: 8090
resources:
limits:
memory: 350Mi
---
apiVersion: v1
kind: Service
metadata:
name: uaa
spec:
type: NodePort
selector:
app: uaa
ports:
- port: 8090
nodePort: 31090
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-auth
spec:
hosts:
- "kubernetes.b-thinking.com"
gateways:
- gw-ingress
http:
- match:
- uri:
prefix: /oauth
rewrite:
uri: "/uaa/oauth"
route:
- destination:
port:
number: 8090
host: uaa
But, I'm getting the warning KIA1106 - More than one Virtual Service for same host
Looking in the docs, they explain this case (it's exactly my case), but they redirect to the same guide I have followed.
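For what it's worth, KIA1106 is only a warning, and one hedged way to avoid it is to fold both routes into a single VirtualService for that host (a sketch assembled from the two manifests above; the name is illustrative):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-b-thinking   # illustrative name
spec:
  hosts:
  - "kubernetes.b-thinking.com"
  gateways:
  - gw-ingress
  http:
  - match:
    - uri:
        prefix: /product-composite
    route:
    - destination:
        port:
          number: 80
        host: product-composite
  - match:
    - uri:
        prefix: /oauth
    rewrite:
      uri: "/uaa/oauth"
    route:
    - destination:
        port:
          number: 8090
        host: uaa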
I have a cluster with istio injection enabled and cockroach db stateful set defined:
apiVersion: v1
kind: ServiceAccount
metadata:
name: cockroachdb-serviceaccount
---
apiVersion: v1
kind: Service
metadata:
# This service is meant to be used by clients of the database. It exposes a ClusterIP that will
# automatically load balance connections to the different database pods.
name: cockroachdb-public
labels:
app: cockroachdb
spec:
ports:
# The main port, served by gRPC, serves Postgres-flavor SQL, internode
# traffic and the cli.
- port: 26257
targetPort: 26257
name: tcp
# The secondary port serves the UI as well as health and debug endpoints.
- port: 8080
targetPort: 8080
name: http
selector:
app: cockroachdb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cockroachdb-statefulset
labels:
version: v20.1.2
spec:
serviceName: cockroachdb
replicas: 3
selector:
matchLabels:
app: cockroachdb
template:
metadata:
labels:
app: cockroachdb
version: v20.1.2
spec:
serviceAccountName: cockroachdb-serviceaccount
containers:
- name: cockroachdb
image: cockroachdb/cockroach:v20.1.2
ports:
- containerPort: 26257
name: tcp
- containerPort: 8080
name: http
volumeMounts:
- name: datadir
mountPath: /cockroach/cockroach-data
env:
- name: COCKROACH_CHANNEL
value: kubernetes-insecure
command:
- "/bin/bash"
- "-ecx"
# The use of qualified `hostname -f` is crucial:
# Other nodes aren't able to look up the unqualified hostname.
- "exec /cockroach/cockroach start --logtostderr --insecure --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join cockroachdb-statefulset-0.cockroachdb,cockroachdb-statefulset-1.cockroachdb,cockroachdb-statefulset-2.cockroachdb --cache 25% --max-sql-memory 25%"
# No pre-stop hook is required, a SIGTERM plus some time is all that's
# needed for graceful shutdown of a node.
terminationGracePeriodSeconds: 5
volumes:
- name: datadir
persistentVolumeClaim:
claimName: datadir
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 4Gi
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: cockroachdb-public
spec:
host: cockroachdb-public
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: cockroachdb-public
spec:
hosts:
- cockroachdb-public
http:
- match:
- port: 8080
route:
- destination:
host: cockroachdb-public
port:
number: 8080
tcp:
- match:
- port: 26257
route:
- destination:
host: cockroachdb-public
port:
number: 26257
and a service that accesses it:
apiVersion: v1
kind: ServiceAccount
metadata:
name: downstream-serviceaccount
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: downstream-deployment-v1
labels:
app: downstream
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: downstream
version: v1
template:
metadata:
labels:
app: downstream
version: v1
spec:
serviceAccountName: downstream-serviceaccount
containers:
- name: downstream
image: downstream:0.1
ports:
- containerPort: 80
env:
- name: DATABASE_URL
value: postgres://roach@cockroachdb-public:26257/roach?sslmode=disable
---
apiVersion: v1
kind: Service
metadata:
name: downstream-service
labels:
app: downstream
spec:
type: ClusterIP
selector:
app: downstream
ports:
- port: 80
targetPort: 80
name: http
protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: downstream-service
spec:
host: downstream-service
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: downstream-service
spec:
hosts:
- downstream-service
http:
- name: "downstream-service-routes"
match:
- port: 80
route:
- destination:
host: downstream-service
port:
number: 80
Now I'd like to restrict access to CockroachDB to only downstream-service and to cockroachdb itself (since the nodes need to communicate with each other).
I'm trying to restrict the traffic with something like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: default-deny-all
namespace: default
spec:
{}
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: cockroachdb-authorizationpolicy-allow-from-downstream
namespace: default
spec:
selector:
matchLabels:
app: cockroachdb
action: ALLOW
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
- to:
- operation:
ports: ["26257"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
namespace: default
spec:
selector:
matchLabels:
app: cockroachdb
action: ALLOW
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
- to:
- operation:
ports: ["26257"]
but it doesn't seem to do anything: I can still, for example, access the cockroachdb-public:8080 cluster HTTP UI from downstream-service.
Now when I add the following:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: default-deny-all-to-cockroachdb
namespace: default
spec:
selector:
matchLabels:
app: cockroachdb
action: DENY
rules:
- to:
- operation:
ports: ["26257"]
then all the traffic is blocked (including the traffic between cockroachdb nodes).
What am I doing wrong here?
You are having the same problem someone ran into a couple of days ago. Your authorization policy actually contains two separate rules (which are ORed together):
The service account downstream-serviceaccount (and cockroachdb-serviceaccount in the other authorization policy) from the default namespace can access the service with label app: cockroachdb on any port in the default namespace.
Any service account, from any namespace, can access the service with label app: cockroachdb on port 26257.
In order to make it an AND, you would do this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
namespace: default
spec:
selector:
matchLabels:
app: cockroachdb
action: ALLOW
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
to: # <- remove the dash from here
- operation:
ports: ["26257"]
Same with the other AuthorizationPolicy object. Also note that you don't need to explicitly create a DENY policy. When you create an ALLOW one, it automatically denies everything else.
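Applied to the other AuthorizationPolicy, that gives (a sketch built from the manifest above, with the dash removed in the same way):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
    to:
    - operation:
        ports: ["26257"]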
I am attempting to protect a service's status page with oauth2_proxy, using Azure AD as the external auth provider. Currently, if I browse to the public URL of the app (https://sub.domain.com/service/hangfire), I get a 504 gateway timeout, when it should be directing me to authenticate.
I had been mostly following this guide for reference: https://msazure.club/protect-kubernetes-webapps-with-azure-active-directory-aad-authentication/
If I disable the annotations that direct the authentication, I can get to the public status page without a problem. If I browse to https://sub.domain.com/oauth2, I get a prompt to authenticate with my provider, which I would expect. I am not sure where the issue lies in the ingress config, but I was unable to find any similar cases online, on Stack Overflow or otherwise.
In this case, everything (oauth deployment, service, and ingress rules) lives in a 'dev' namespace except the actual ingress deployment, which lives in its own namespace. I don't suspect this makes a difference, but SSL termination is handled by a gateway outside the cluster.
oauth2 deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- name: oauth2-proxy
image: quay.io/pusher/oauth2_proxy:v3.2.0
imagePullPolicy: IfNotPresent
args:
- --provider=azure
- --email-domain=domain.com
- --upstream=http://servicename
- --http-address=0.0.0.0:4180
- --azure-tenant=id
- --client-id=id
- --client-secret=number
env:
- name: OAUTH2_PROXY_COOKIE_SECRET
value: secret
ports:
- containerPort: 4180
protocol : TCP
---
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
ports:
- name: http
port: 4180
protocol: TCP
targetPort: 4180
selector:
app: oauth2-proxy
Ingress rules:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service-ingress1
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-url: https://sub.domain.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=$https://sub.domain.com/service/hangfire"
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /service/hangfire
backend:
serviceName: service
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service-oauth2-proxy
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: sub.domain.com
http:
paths:
- path: /oauth2
backend:
serviceName: oauth2-proxy
servicePort: 4180
I am getting 504 errors when I browse to the URL, but I do not see any errors in the ingress pods.
I ended up finding the resolution here: https://github.com/helm/charts/issues/5958
I had to use the internal service address for the auth-url, which I had not seen mentioned anywhere else.
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
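Putting it together, the annotations on the protected ingress end up looking something like this (a sketch; auth-url uses the in-cluster address while auth-signin stays on the public host, and $escaped_request_uri is the nginx-ingress variable also used in the answers below):
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
  nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=$escaped_request_uri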
This is what I've been doing with my oAuth proxy for Azure AD:
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
And I've been using this oAuth proxy:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oauth2-proxy
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- env:
- name: OAUTH2_PROXY_PROVIDER
value: azure
- name: OAUTH2_PROXY_AZURE_TENANT
value: xxx
- name: OAUTH2_PROXY_CLIENT_ID
value: yyy
- name: OAUTH2_PROXY_CLIENT_SECRET
value: zzz
- name: OAUTH2_PROXY_COOKIE_SECRET
value: anyrandomstring
- name: OAUTH2_PROXY_HTTP_ADDRESS
value: "0.0.0.0:4180"
- name: OAUTH2_PROXY_UPSTREAM
value: "http://where_to_redirect_to:443"
image: machinedata/oauth2_proxy:latest
imagePullPolicy: IfNotPresent
name: oauth2-proxy
ports:
- containerPort: 4180
protocol: TCP
My setup is similar to 4c74356b41's
oauth2-proxy deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- args:
- --azure-tenant=TENANT-GUID
- --email-domain=company.com
- --http-address=0.0.0.0:4180
- --provider=azure
- --upstream=file:///dev/null
env:
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
key: client-id
name: oauth2-proxy
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: client-secret
name: oauth2-proxy
- name: OAUTH2_PROXY_COOKIE_SECRET
valueFrom:
secretKeyRef:
key: cookie-secret
name: oauth2-proxy
image: quay.io/pusher/oauth2_proxy:v3.1.0
name: oauth2-proxy
oauth2-proxy service
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: oauth2-proxy
type: ClusterIP
oauth2-proxy ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
spec:
rules:
- host: myapp.hostname.net
http:
paths:
- backend:
serviceName: oauth2-proxy
servicePort: 80
path: /oauth2
oauth2-proxy configuration
apiVersion: v1
kind: Secret
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
namespace: monitoring
data:
# Values below are fake
client-id: AAD_CLIENT_ID
client-secret: AAD_CLIENT_SECRET
cookie-secret: COOKIE_SECRET
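One thing to note about this Secret: values under data: must be base64-encoded. To keep readable plain-text placeholders like the ones above, stringData: can be used instead (a sketch; the values are still fake):
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
stringData:
  client-id: AAD_CLIENT_ID
  client-secret: AAD_CLIENT_SECRET
  cookie-secret: COOKIE_SECRET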
Application using AAD Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$request_uri
nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
labels:
app: myapp
name: myapp
namespace: monitoring
spec:
rules:
- host: myapp.hostname.net
http:
paths:
- backend:
serviceName: myapp
servicePort: 80
path: /
tls:
- hosts:
- myapp.hostname.net
An additional step that needs to be done is to add the redirect URI to the AAD App registration. Navigate to your AAD App Registration in the Azure portal > Authentication > Add https://myapp.hostname.net/oauth2/callback to Redirect URIs > Save