Keycloak behind Kong and strange redirect - kubernetes

Setup:
minikube version: v0.27.0
Kong (helm install stable/kong) / version 1.0.2
Keycloak (helm install stable/keycloak) / version 4.8.3.Final
I have a self-signed SSL certificate for my "hello.local".
What I need to achieve: Keycloak behind Kong at "https://hello.local/".
My steps:
1) fresh minikube
2) Install Keycloak with helm, using the following values.yaml:
keycloak:
  basepath: ""
  replicas: 1
  ...
  extraEnv: |
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
(this creates the service auth-keycloak-http)
3) Install Kong with helm, using the following values.yaml:
replicaCount: 1
admin:
  ingress:
    enabled: true
    hosts: ['hello.local']
proxy:
  type: LoadBalancer
  ingress:
    enabled: true
    hosts: ['hello.local']
    tls:
      - hosts:
          - hello.local
        secretName: tls-certificate
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  tls:
    enabled: true
postgresql:
  ...
4) I set up a service and route for Kong
Service:
  Protocol: http
  Host: auth-keycloak-http
  Port: 80
Route:
  Hosts: hello.local
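For reference, the same service and route can also be declared in Kong's declarative configuration file. This is only a sketch: the names keycloak and keycloak-route are placeholders chosen for illustration.
kong.yml
_format_version: "1.1"
services:
  - name: keycloak
    protocol: http
    host: auth-keycloak-http
    port: 80
    routes:
      - name: keycloak-route
        hosts:
          - hello.local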
After that I can open "https://hello.local" and see the Keycloak welcome page. But when I click Administration Console, the browser is redirected to "https://hello.local:8443/admin/master/console/". There should be no redirect to another port at this point.
The same setup with two plain Docker images (Keycloak + Kong) works when PROXY_ADDRESS_FORWARDING is true.
How can I make Keycloak (helm chart) work behind Kong (helm chart) in a Kubernetes cluster as expected, without the port redirect?
This is being discussed in github issue 1, github issue 2 and github issue 3. There are also similar questions on Stack Overflow.

Original answer:
It seems it is necessary to set the following environment variables in the values.yaml of the Keycloak helm chart:
...
extraEnv: |
  - name: KEYCLOAK_HTTP_PORT
    value: "80"
  - name: KEYCLOAK_HTTPS_PORT
    value: "443"
  - name: KEYCLOAK_HOSTNAME
    value: example.com
...
All of them are required; after that, the redirect works correctly.
Update (Sep 2021):
There was still an issue with a weird redirect to port 8443 for some actions (e.g. opening Account management via the link at the top right of the admin console).
In fact, we do not need to set KEYCLOAK_HTTP_PORT or KEYCLOAK_HTTPS_PORT at all.
Instead, some changes are required on the proxy side: the proxy needs to set x-forwarded-port to 443 for this route.
In my case we use Kong.
On the route where Keycloak is exposed, add a serverless post-function with the following content (this one worked for me):
ngx.var.upstream_x_forwarded_port=443
More info on KONG and x_forwarded_*
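For reference, the same post-function can be attached declaratively. This is a sketch only: keycloak-route is a placeholder route name, and on Kong versions before 2.0 the phase key is functions rather than access.
plugins:
  - name: post-function
    route: keycloak-route
    config:
      access:
        - ngx.var.upstream_x_forwarded_port = 443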

Related

Auto-scrape realm metrics from Keycloak with Prometheus-Operator

I installed Keycloak using the bitnami/keycloak Helm chart (https://bitnami.com/stack/keycloak/helm).
As I'm also using Prometheus-Operator for monitoring I enabled the metrics endpoint and the service monitor:
keycloak:
  ...
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      namespace: monitoring
      additionalLabels:
        release: my-prom-operator-release
As I'm way more interested in actual realm metrics I installed the keycloak-metrics-spi provider (https://github.com/aerogear/keycloak-metrics-spi) by setting up an init container that downloads it to a shared volume.
keycloak:
  ...
  extraVolumeMounts:
    - name: providers
      mountPath: /opt/bitnami/keycloak/providers
  extraVolumes:
    - name: providers
      emptyDir: {}
  ...
  initContainers:
    - name: metrics-spi-provider
      image: SOME_IMAGE_WITH_WGET_INSTALLED
      imagePullPolicy: Always
      command:
        - sh
      args:
        - -c
        - |
          KEYCLOAK_METRICS_SPI_VERSION=2.5.2
          wget --no-check-certificate -O /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar \
            https://github.com/aerogear/keycloak-metrics-spi/releases/download/${KEYCLOAK_METRICS_SPI_VERSION}/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          chmod +x /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          touch /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar.dodeploy
      volumeMounts:
        - name: providers
          mountPath: /providers
The provider enables metrics endpoints on the regular public-facing http port instead of the http-management port, which is not great for me, but I can block external access to them in my reverse proxy.
What I'm missing is some kind of auto-scraping of those endpoints. Right now I created an additional template, that creates a new service monitor for each element of a predefined list in my chart:
values.yaml
keycloak:
  ...
  metrics:
    extraServiceMonitors:
      - realmName: master
      - realmName: my-realm
servicemonitor-metrics-spi.yaml
{{- range $serviceMonitor := .Values.keycloak.metrics.extraServiceMonitors }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ $.Release.Name }}-spi-{{ $serviceMonitor.realmName }}
  ...
spec:
  endpoints:
    - port: http
      path: /auth/realms/{{ $serviceMonitor.realmName }}/metrics
  ...
{{- end }}
Is there a better way of doing this? So that Prometheus can auto-detect all my realms and scrape their endpoints?
Thanks in advance!
As commented by @jan-garaj, there is no need to query all the endpoints: each one returns the accumulated data of all realms, so it is enough to scrape the endpoint of a single realm (e.g. the master realm).
Thanks a lot!
It might help someone: the bitnami image, and therefore the helm chart, already includes the metrics-spi provider, so no further installation is needed; the metrics just have to be enabled in the values.
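Given that, a single hand-written ServiceMonitor for the master realm is enough. A minimal sketch, assuming the Keycloak service carries the label app.kubernetes.io/name: keycloak (adjust the selector to your actual service labels):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: keycloak-metrics-spi
  namespace: monitoring
  labels:
    release: my-prom-operator-release
spec:
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      app.kubernetes.io/name: keycloak
  endpoints:
    - port: http
      path: /auth/realms/master/metrics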

Zipkin tracing not working for docker-compose and Dapr

Traces that should have been sent by the dapr runtime to the zipkin server somehow fail to reach it.
The situation is the following:
I'm using Docker Desktop on my Windows PC. I have downloaded the sample from dapr repository (https://github.com/dapr/samples/tree/master/hello-docker-compose) which runs perfectly out of the box with docker-compose up.
Then I've added Zipkin support as per dapr documentation:
added this service at the bottom of docker-compose.yml:
zipkin:
  image: "openzipkin/zipkin"
  ports:
    - "9411:9411"
  networks:
    - hello-dapr
added config.yaml in components folder
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    exporterType: zipkin
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
When the application runs, it should send traces to the server, but nothing shows up in the Zipkin UI or logs.
A strange message starts to appear in the logs of the nodeapp-dapr_1 service: error while reading spiffe id from client cert
pythonapp-dapr_1 | time="2021-03-15T19:14:17.9654602Z" level=debug msg="found mDNS IPv4 address in cache: 172.19.0.7:34549" app_id=pythonapp instance=ce32220407e2 scope=dapr.contrib type=log ver=edge
nodeapp-dapr_1 | time="2021-03-15T19:14:17.9661792Z" level=debug msg="error while reading spiffe id from client cert: unable to retrieve peer auth info. applying default global policy action" app_id=nodeapp instance=773c486b5aac scope=dapr.runtime.grpc.api type=log ver=edge
nodeapp_1 | Got a new order! Order ID: 947
nodeapp_1 | Successfully persisted state.
Additional info: the current dapr version used is 1.0.1. I made sure that security (mtls) is disabled in the config file.
The configuration file is supposed to be in a different folder than the components:
Create a new folder, e.g. dapr, next to the components folder.
Move the components folder into the newly created dapr folder.
Then create config.yaml in the dapr folder.
Update docker-compose accordingly.
docker-compose
services:
  nodeapp-dapr:
    image: "daprio/daprd:edge"
    command: ["./daprd",
      "-app-id", "nodeapp",
      "-app-port", "3000",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002",
      "-components-path", "/dapr/components",
      "-config", "/dapr/config.yaml"]
    volumes:
      - "./dapr/:/dapr"
    depends_on:
      - nodeapp
    network_mode: "service:nodeapp"
config.yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: http://host.docker.internal:9411/api/v2/spans
I had an issue with localhost and 127.0.0.1 in the URL, which I resolved by using host.docker.internal as the hostname.
PS: Don't forget to kill all *-dapr_1 containers so they pick up the new configuration.
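For clarity, the directory layout described above ends up like this (the component file names are just examples):
.
├── docker-compose.yml
└── dapr/
    ├── config.yaml
    └── components/
        ├── statestore.yaml
        └── pubsub.yaml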

How to secure Kibana dashboard using keycloak-gatekeeper?

Current flow:
incoming request (/sso-kibana) --> Envoy proxy --> /sso-kibana
Expected flow:
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper
-->
keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If Already logged in --> /sso-kibana
I deployed keycloak-gatekeeper in the k8s cluster with the following configuration:
keycloak-gatekeeper.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak-gatekeeper
  name: keycloak-gatekeeper
spec:
  selector:
    matchLabels:
      app: keycloak-gatekeeper
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak-gatekeeper
    spec:
      containers:
        - image: keycloak/keycloak-gatekeeper
          imagePullPolicy: Always
          name: keycloak-gatekeeper
          ports:
            - containerPort: 3000
          args:
            - "--config=/keycloak-proxy-poc/keycloak-gatekeeper/gatekeeper.yaml"
            - "--enable-logging=true"
            - "--enable-json-logging=true"
            - "--verbose=true"
          volumeMounts:
            - mountPath: /keycloak-proxy-poc/keycloak-gatekeeper
              name: secrets
      volumes:
        - name: secrets
          secret:
            secretName: gatekeeper
gatekeeper.yaml
discovery-url: https://keycloak/auth/realms/MyRealm
enable-default-deny: true
listen: 0.0.0.0:3000
upstream-url: https://kibana.k8s.cluster:5601
client-id: kibana
client-secret: d62e46c3-2a65-4069-b2fc-0ae5884a4952
Envoy.yaml
- name: kibana
  hosts: [{ socket_address: { address: keycloak-gatekeeper, port_value: 3000 }}]
Problem:
I am able to invoke the keycloak login on /sso-kibana, but after login the user is not taken to the /sso-kibana url, i.e. the Kibana dashboard is not loading.
Note: Kibana is also running in the k8s cluster.
References:
https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382
https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d
Update 1:
I'm able to invoke the keycloak login on /sso-kibana, but after entering credentials it gives a 404. The flow is the following:
Step 1. Clicked on http://something/sso-kibana
Step 2. Keycloak login page opens at https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth?...
Step 3. After entering credentials redirected to this URL https://something/sso-kibana/oauth/callback?state=890cd02c-f...
Step 4. 404
Update 2:
The 404 error was solved after I added a new route in Envoy.yaml:
Envoy.yaml
- match: { prefix: /sso-kibana/oauth/callback }
  route: { prefix_rewrite: "/", cluster: kibana.k8s.cluster }
Therefore, the expected flow (shown below) is now working fine:
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper
--> keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If Already logged in --> /sso-kibana
In your config you explicitly enabled enable-default-deny, which is explained in the documentation as:
enables a default denial on all requests, you have to explicitly say what is permitted (recommended)
With that enabled, you will need to specify urls, methods etc. explicitly, either via resources entries as shown in [1] or via a command-line argument [2]. In case of Kibana, you can start with:
resources:
  - uri: /app/*
[1] https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration
[2] https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing
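Beyond that starting point, a slightly fuller resources sketch might look like this (the URIs and methods are illustrative; adjust them to the paths your Kibana version actually serves):
resources:
  - uri: /app/*
  - uri: /api/*
    methods:
      - GET
      - POST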

Unable to get a Grafana helm-charts URL to work with subpath

I am setting up a Grafana server on my local kube cluster using helm-charts. I am trying to get it to work under a subpath in order to later implement it in a production env with TLS, but I am unable to access Grafana at http://localhost:3000/grafana.
I have tried almost all the recommendations out there on the internet about adding a subpath to the ingress, but nothing seems to work.
The Grafana login screen shows up on http://localhost:3000/ when I remove root_url: http://localhost:3000/grafana from values.yaml.
But when I add root_url: http://localhost:3000/grafana back into the values.yaml file, I see the error attached below (towards the end of this post).
root_url: http://localhost:3000/grafana and ingress as:
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana
  hosts:
    - localhost
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
resources: {}
I expect the http://localhost:3000/grafana url to show me the login screen; instead I see the errors below:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath
3. If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build
4. Sometimes restarting grafana-server can help
Can you please help me fix the ingress and root_url on values.yaml to get Grafana URL working at /grafana ?
As the documentation for configuring Grafana behind a proxy explains, root_url should be configured in the grafana.ini file under the [server] section. You can modify your values.yaml to achieve this:
grafana.ini:
  ...
  server:
    root_url: http://localhost:3000/grafana/
Also, your ingress in the values should look like this:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana/
  hosts:
    - ""
Hope it helps.
I followed the exact steps mentioned by @coolinuxoid; however, I still faced an issue when trying to access the UI by hitting http://localhost:3000/grafana/:
I got redirected to http://localhost:3000/grafana/login with no UI displayed.
A small modification helped me access the UI through http://localhost:3000/grafana/:
in the grafana.ini configuration I added "serve_from_sub_path: true", so my final grafana.ini looked something like this:
grafana.ini:
  server:
    root_url: http://localhost:3000/grafana/
    serve_from_sub_path: true
The ingress configuration was exactly the same. I cannot be sure whether it is a version-specific issue, but I'm using Grafana v8.2.1.
You need to tell the grafana application that it is run not under the root url / (the default), but under some subpath. The easiest way is via GF_-prefixed env vars:
grafana:
  env:
    GF_SERVER_ROOT_URL: https://myhostname.example.com/grafana
    GF_SERVER_SERVE_FROM_SUB_PATH: 'true'
  ingress:
    enabled: true
    hosts:
      - myhostname.example.com
    path: /grafana($|(/.*))
    pathType: ImplementationSpecific
The above example works for the kubernetes nginx-ingress-controller. Depending on the ingress controller you use, you may need
path: /grafana
pathType: Prefix
instead.

How can I get LetsEncrypt working with a wildcard domain on Traefik?

I'm trying to set up LetsEncrypt with a wildcard domain on my Traefik instance. Traefik has been installed from the Helm Chart stable/traefik.
We're using Google Cloud for DNS so I want to use gcloud as my Traefik acme provider.
As mentioned, it's a wildcard. I'm trying to have Traefik manage LetsEncrypt for *.domain.com with domain.com as a SAN.
I'm currently using a K8s PersistentVolumeClaim for storing the acme.json file; it has been populated with a private key but no certificates.
Traefik Helm values
# LetsEncrypt
acme:
  acmeLogging: true
  challengeType: 'dns-01'
  enabled: true
  domains:
    enabled: true
    main: '*.<domain>'
    sans:
      - <domain>
  defaultEntryPoints:
    - http
    - https
  dnsProvider:
    name: 'gcloud'
    gcloud:
      GCE_PROJECT: <redacted>
      GCE_SERVICE_ACCOUNT_FILE: /secrets/gcloud-credentials.json
  email: <redacted>
  entryPoint: 'https'
  entryPoints:
    http:
      address: ':80'
    https:
      address: ':443'
  persistence:
    enabled: true
    existingClaim: 'certificate-store'
  provider: 'gcloud'
  staging: true
# SSL configuration
ssl:
  enabled: true
  enforced: true
acme.json
{
  "Account": {
    "Email": "<redacted>",
    "Registration": {
      "body": {
        "status": "valid",
        "contact": [
          "mailto:<redacted>"
        ]
      },
      "uri": "https://acme-staging-v02.api.letsencrypt.org/acme/acct/9091953"
    },
    "PrivateKey": "<redacted>",
    "KeyType": "4096"
  },
  "Certificates": null,
  "HTTPChallenges": {},
  "TLSChallenges": {}
}
All responses from Traefik should be served with a wildcard LetsEncrypt cert for said domain, and that cert should auto-renew.
What additional steps might I need to carry out to have Traefik start generating the certificates, and how can I configure Traefik to use this certificate by default? (Rather than the built-in one)
Thank you
I figured this one out.
I set the following (in addition to, or in replacement of, the above) in my Helm chart overrides YAML:
acme:
  caServer: 'https://acme-v02.api.letsencrypt.org/directory'
  domains:
    enabled: true
    domainsList:
      - main: '*.<domain>'
      - sans:
          - <domain>
I also got rid of persistence.existingClaim and let Traefik make its own claim, but if you already have an existing one, keeping this definition shouldn't cause you any issues!
All Traefik ingresses are now serving the correct LetsEncrypt certificate without any additional configuration.
Thank you Vasily Angapov for your response; you were correct about the acme.domains.domainsList section. :-)
Are you 100% sure that the "domains" stanza should look like that? In the stable/traefik chart I see a slightly different format for domains:
domains:
  enabled: false
  # List of sets of main and (optional) SANs to generate for
  # for wildcard certificates see https://docs.traefik.io/configuration/acme/#wildcard-domains
  domainsList:
    # - main: "*.example.com"
    # - sans:
    #     - "example.com"
    # - main: "*.example2.com"
    # - sans:
    #     - "test1.example2.com"
    #     - "test2.example2.com"
But maybe it's just a matter of a newer chart version, I don't know... If you have an older chart version, you can try to upgrade.