How to secure Kibana dashboard using keycloak-gatekeeper? - kubernetes

Current flow:
incoming request (/sso-kibana) --> Envoy proxy --> /sso-kibana
Expected flow:
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If already logged in --> /sso-kibana
I deployed keycloak-gatekeeper as a Deployment in my k8s cluster with the following configuration:
keycloak-gatekeeper.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak-gatekeeper
  name: keycloak-gatekeeper
spec:
  selector:
    matchLabels:
      app: keycloak-gatekeeper
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak-gatekeeper
    spec:
      containers:
        - image: keycloak/keycloak-gatekeeper
          imagePullPolicy: Always
          name: keycloak-gatekeeper
          ports:
            - containerPort: 3000
          args:
            - "--config=/keycloak-proxy-poc/keycloak-gatekeeper/gatekeeper.yaml"
            - "--enable-logging=true"
            - "--enable-json-logging=true"
            - "--verbose=true"
          volumeMounts:
            - mountPath: /keycloak-proxy-poc/keycloak-gatekeeper
              name: secrets
      volumes:
        - name: secrets
          secret:
            secretName: gatekeeper
gatekeeper.yaml
discovery-url: https://keycloak/auth/realms/MyRealm
enable-default-deny: true
listen: 0.0.0.0:3000
upstream-url: https://kibana.k8s.cluster:5601
client-id: kibana
client-secret: d62e46c3-2a65-4069-b2fc-0ae5884a4952
Envoy.yaml
- name: kibana
  hosts: [{ socket_address: { address: keycloak-gatekeeper, port_value: 3000 }}]
Problem:
I am able to invoke the Keycloak login on /sso-kibana, but after login the user is not redirected back to /sso-kibana, i.e. the Kibana dashboard does not load.
Note: Kibana is also running in the k8s cluster.
References:
https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382
https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d
Update 1:
I'm able to invoke the Keycloak login on /sso-kibana, but after entering credentials it returns a 404. The flow is the following:
Step 1. Clicked on http://something/sso-kibana
Step 2. Keycloak login page opens at https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth?...
Step 3. After entering credentials redirected to this URL https://something/sso-kibana/oauth/callback?state=890cd02c-f...
Step 4. 404
Update 2:
The 404 error was solved after I added a new route in Envoy.yaml:
Envoy.yaml
- match: { prefix: /sso-kibana/oauth/callback }
  route: { prefix_rewrite: "/", cluster: kibana.k8s.cluster }
Therefore, the expected flow (as shown below) is now working fine.
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If already logged in --> /sso-kibana
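For completeness, a sketch of how the two Envoy routes and the gatekeeper cluster fit together now (only the relevant excerpts, names taken from the snippets above; the rest of the Envoy config is unchanged):
routes:
  - match: { prefix: /sso-kibana/oauth/callback }
    route: { prefix_rewrite: "/", cluster: kibana.k8s.cluster }  # post-login callback goes straight to Kibana
  - match: { prefix: /sso-kibana }
    route: { cluster: kibana }  # the "kibana" cluster points at keycloak-gatekeeper:3000
clusters:
  - name: kibana
    hosts: [{ socket_address: { address: keycloak-gatekeeper, port_value: 3000 }}]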

In your config you explicitly enabled enable-default-deny which is explained in the documentation as:
enables a default denial on all requests, you have to explicitly say what is permitted (recommended)
With that enabled, you will need to specify URLs, methods, etc. either via resources entries as shown in [1] or via a command-line argument [2]. In the case of Kibana, you can start with:
resources:
  - uri: /app/*
[1] https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration
[2] https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing
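To expand on the resources example above, a slightly fuller section for Kibana could look like the sketch below (the URI prefixes are assumptions about a default Kibana installation, adjust them to your setup):
resources:
  - uri: /app/*
    methods:
      - GET
  - uri: /api/*
  - uri: /bundles/*
Anything not listed stays denied because enable-default-deny is true.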

Related

Auto-scrape realm metrics from Keycloak with Prometheus-Operator

I installed Keycloak using the bitnami/keycloak Helm chart (https://bitnami.com/stack/keycloak/helm).
As I'm also using Prometheus-Operator for monitoring I enabled the metrics endpoint and the service monitor:
keycloak:
  ...
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      namespace: monitoring
      additionalLabels:
        release: my-prom-operator-release
As I'm way more interested in actual realm metrics I installed the keycloak-metrics-spi provider (https://github.com/aerogear/keycloak-metrics-spi) by setting up an init container that downloads it to a shared volume.
keycloak:
  ...
  extraVolumeMounts:
    - name: providers
      mountPath: /opt/bitnami/keycloak/providers
  extraVolumes:
    - name: providers
      emptyDir: {}
  ...
  initContainers:
    - name: metrics-spi-provider
      image: SOME_IMAGE_WITH_WGET_INSTALLED
      imagePullPolicy: Always
      command:
        - sh
      args:
        - -c
        - |
          KEYCLOAK_METRICS_SPI_VERSION=2.5.2
          wget --no-check-certificate -O /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar \
            https://github.com/aerogear/keycloak-metrics-spi/releases/download/${KEYCLOAK_METRICS_SPI_VERSION}/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          chmod +x /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          touch /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar.dodeploy
      volumeMounts:
        - name: providers
          mountPath: /providers
The provider exposes the metrics endpoints on the regular public-facing HTTP port instead of the http-management port, which is not great for me, but I can block external access to them in my reverse proxy.
What I'm missing is some kind of auto-scraping of those endpoints. Right now I created an additional template that creates a new ServiceMonitor for each element of a predefined list in my chart:
values.yaml
keycloak:
  ...
  metrics:
    extraServiceMonitors:
      - realmName: master
      - realmName: my-realm
servicemonitor-metrics-spi.yaml
{{- range $serviceMonitor := .Values.keycloak.metrics.extraServiceMonitors }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ $.Release.Name }}-spi-{{ $serviceMonitor.realmName }}
  ...
spec:
  endpoints:
    - port: http
      path: /auth/realms/{{ $serviceMonitor.realmName }}/metrics
  ...
{{- end }}
Is there a better way of doing this? So that Prometheus can auto-detect all my realms and scrape their endpoints?
Thanks in advance!
As commented by @jan-garaj, there is no need to query all the endpoints: each one returns the accumulated data of all realms, so it is enough to scrape the endpoint of a single realm (e.g. the master realm).
Thanks a lot!
It might help someone: the Bitnami image, and therefore the Helm chart, already includes the metrics-spi provider, so no further installation action is needed, but metrics must be enabled in the values.
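Based on that, a single static ServiceMonitor scraping only the master realm should be enough. A minimal sketch (the selector labels and the target namespace are assumptions, adjust them to the Service created by your release):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: keycloak-spi-metrics  # hypothetical name
  namespace: monitoring
  labels:
    release: my-prom-operator-release
spec:
  namespaceSelector:
    matchNames:
      - keycloak  # assumed namespace of the Keycloak Service
  selector:
    matchLabels:
      app.kubernetes.io/name: keycloak  # assumed label on the Keycloak Service
  endpoints:
    - port: http
      path: /auth/realms/master/metrics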

Zipkin tracing not working for docker-compose and Dapr

Traces that should have been sent by the Dapr runtime to the Zipkin server somehow fail to reach it.
The situation is the following:
I'm using Docker Desktop on my Windows PC. I have downloaded the sample from dapr repository (https://github.com/dapr/samples/tree/master/hello-docker-compose) which runs perfectly out of the box with docker-compose up.
Then I've added Zipkin support as per dapr documentation:
added this service at the bottom of docker-compose.yml
zipkin:
  image: "openzipkin/zipkin"
  ports:
    - "9411:9411"
  networks:
    - hello-dapr
added config.yaml in components folder
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    exporterType: zipkin
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
When the application runs, it should send traces to the server, but nothing is found in the Zipkin UI or logs.
A strange thing starts to appear in the logs of the nodeapp-dapr_1 service: error while reading spiffe id from client cert
pythonapp-dapr_1 | time="2021-03-15T19:14:17.9654602Z" level=debug msg="found mDNS IPv4 address in cache: 172.19.0.7:34549" app_id=pythonapp instance=ce32220407e2 scope=dapr.contrib type=log ver=edge
nodeapp-dapr_1 | time="2021-03-15T19:14:17.9661792Z" level=debug msg="error while reading spiffe id from client cert: unable to retrieve peer auth info. applying default global policy action" app_id=nodeapp instance=773c486b5aac scope=dapr.runtime.grpc.api type=log ver=edge
nodeapp_1 | Got a new order! Order ID: 947
nodeapp_1 | Successfully persisted state.
Additional info: the current Dapr version used is 1.0.1. I made sure that security (mtls) is disabled in the config file.
The configuration file is supposed to be in a different folder than components.
Create a new folder, e.g. dapr, next to the components folder.
Move the components folder into the newly created dapr folder.
Then create config.yaml in the dapr folder.
Update docker-compose accordingly.
docker-compose
services:
  nodeapp-dapr:
    image: "daprio/daprd:edge"
    command: ["./daprd",
      "-app-id", "nodeapp",
      "-app-port", "3000",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002",
      "-components-path", "/dapr/components",
      "-config", "/dapr/config.yaml"]
    volumes:
      - "./dapr/:/dapr"  # mounts ./dapr/components and ./dapr/config.yaml into the container
    depends_on:
      - nodeapp
    network_mode: "service:nodeapp"
config.yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: http://host.docker.internal:9411/api/v2/spans
I had an issue with localhost and 127.0.0.1 in the URL, which I resolved by using host.docker.internal as the hostname.
PS: Don't forget to kill all *-dapr_1 containers so they pick up the new configuration, for example as shown below.
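For example, assuming the compose project from the sample:
docker-compose down        # also removes the *-dapr_1 sidecars so the old configuration is not reused
docker-compose up --build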

Keycloak behind Kong and strange redirect

Setup:
minikube version: v0.27.0
Kong (helm install stable/kong) / version 1.0.2
Keycloak (helm install stable/keycloak) / version 4.8.3.Final
I have a self-signed SSL certificate for my "hello.local".
What I need to achieve: Keycloak behind Kong at "https://hello.local/".
My steps:
1) fresh minikube
2) Install Keycloak with helm, following values.yaml:
keycloak:
  basepath: ""
  replicas: 1
  ...
  extraEnv: |
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
(that would create service auth-keycloak-http)
3) Install Kong with helm, following values.yaml:
replicaCount: 1
admin:
  ingress:
    enabled: true
    hosts: ['hello.local']
proxy:
  type: LoadBalancer
  ingress:
    enabled: true
    hosts: ['hello.local']
    tls:
      - hosts:
          - hello.local
        secretName: tls-certificate
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  tls:
    enabled: true
postgresql:
  ...
4) I set up a Service and a Route in Kong
Service:
Protocol: http
Host: auth-keycloak-http
Port: 80
Route:
Hosts: hello.local
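For reference, this is roughly how such a Service and Route can be created through the Kong Admin API (a sketch; the admin URL and the service name are assumptions):
# create the Service pointing at the Keycloak ClusterIP service
curl -i -X POST http://localhost:8001/services \
  --data name=keycloak \
  --data protocol=http \
  --data host=auth-keycloak-http \
  --data port=80
# create the Route for hello.local on that Service
curl -i -X POST http://localhost:8001/services/keycloak/routes \
  --data "hosts[]=hello.local"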
After that I can open "https://hello.local" and see the welcome page from Keycloak. When I click Administration Console I get redirected to "https://hello.local:8443/admin/master/console/" in my browser, but there should not be a redirect to another port at this point.
A setup with 2 Docker images (Keycloak + Kong) works if PROXY_ADDRESS_FORWARDING is true.
How can I make Keycloak (helm chart) to work behind Kong (helm chart) in kubernetes cluster as expected, without redirect?
This is being discussed in GitHub issue 1, GitHub issue 2 and GitHub issue 3, and in similar questions on Stack Overflow.
Original answer:
It seems it is necessary to set up the following environment variables in values.yaml of the keycloak Helm chart:
...
extraEnv: |
  - name: KEYCLOAK_HTTP_PORT
    value: "80"
  - name: KEYCLOAK_HTTPS_PORT
    value: "443"
  - name: KEYCLOAK_HOSTNAME
    value: example.com
...
All of them are required; after that, the redirect works correctly.
Added Sep 2021:
There was an issue with weird redirect behavior to port 8443 for some actions (like going to Account Management via the link at the top right of the admin console).
In fact, we do not need to set KEYCLOAK_HTTP_PORT or KEYCLOAK_HTTPS_PORT at all.
Some changes are required on the proxy side instead: the proxy needs to set x-forwarded-port to 443 for this route.
In my case we use Kong:
On the route where Keycloak is exposed, we need to add (this one worked for me) a serverless post-function with the following content:
ngx.var.upstream_x_forwarded_port = 443
More info on KONG and x_forwarded_*
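As a sketch, in Kong's declarative configuration the same post-function looks roughly like this (the route name is hypothetical; in newer Kong versions the code goes into config.access instead of config.functions):
plugins:
  - name: post-function
    route: keycloak-route  # hypothetical route name
    config:
      functions:
        - ngx.var.upstream_x_forwarded_port = 443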

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my spring boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine; I have a Docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable in my GKE setup; these are the steps I completed, thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Setup the volume:
...
volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
        - key: application-credentials.json # default name created by the create secret from-file command
          path: application-credentials.json
3. Setup the volume mount:
spec:
  containers:
    - name: my-service
      volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
4. Setup the environment variable:
spec:
  containers:
    - name: my-service
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
That means you are trying to access a service that is not enabled or that you are not authenticated to use. Are you sure you enabled access to Google Vision?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard or navigate to APIs & Services from the menu.
Will it help if you add the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables as described in the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"

Connect to Google Cloud SQL from Container Engine with Java App

I'm having a tough time connecting to a Cloud SQL Instance from a Java App running in a Google Container Engine Instance.
I whitelisted the external instance IP in the Access Control of Cloud SQL. Connecting from my local machine works well; however, I haven't managed to establish a connection from my app yet.
I'm configuring the Container as (cloud-deployment.yaml):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APPNAME
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: APPNAME
    spec:
      imagePullSecrets:
        - name: APPNAME.com
      containers:
        - image: index.docker.io/SOMEUSER/APPNAME:latest
          name: web
          env:
            - name: MYQL_ENV_DB_HOST
              value: 111.111.111.111 # the cloud sql instance ip
            - name: MYQL_ENV_MYSQL_PASSWORD
              value: THEPASSWORD
            - name: MYQL_ENV_MYSQL_USER
              value: THEUSER
          ports:
            - containerPort: 9000
              name: APPNAME
using the connection URL jdbc:mysql://111.111.111.111:3306/databaseName, resulting in:
Error while executing: Access denied for user 'root'@'ip address of the instance' (using password: YES)
I can confirm that the Container Engine external IP is set on the SQL instance.
I don't want to use the Cloud Proxy Image for now as I'm still in development stage.
Any help is greatly appreciated.
You must use the cloud SQL proxy as described here: https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/README.md
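A minimal sketch of adding the proxy as a sidecar to the deployment above (the instance connection name, image tag and secret name are placeholders; the credentials Secret has to be created separately):
containers:
  - image: index.docker.io/SOMEUSER/APPNAME:latest
    name: web
    env:
      - name: MYQL_ENV_DB_HOST
        value: 127.0.0.1  # talk to the proxy on localhost instead of the public IP
    # ...
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.16  # placeholder tag
    command: ["/cloud_sql_proxy",
              "-instances=MY_PROJECT:MY_REGION:MY_INSTANCE=tcp:3306",  # placeholder connection name
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
      - name: cloudsql-instance-credentials
        mountPath: /secrets/cloudsql
        readOnly: true
volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials  # placeholder secret name
With the proxy in place, the app connects to jdbc:mysql://127.0.0.1:3306/databaseName and no IP whitelisting is needed.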