Keycloak login page reload issue. /auth/realms/master/protocol/openid-connect/3p-cookies/step1.html 404 error - keycloak

The Keycloak login page keeps reloading automatically. After inspecting the network tab, I found that an HTML page (/auth/realms/master/protocol/openid-connect/3p-cookies/step1.html) is returning a 404 response.
I am running the quay.io/keycloak/keycloak:18.0.0 image.
I am providing the following environment variables (along with the database variables):
- name: KC_HOSTNAME
  value: my-domain
- name: KC_HTTP_RELATIVE_PATH
  value: "/auth"
- name: PROXY_ADDRESS_FORWARDING
  value: "True"
- name: KC_HTTP_ENABLED
  value: "True"
Which configuration am I missing in environment variables?
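For comparison, note that PROXY_ADDRESS_FORWARDING comes from the legacy WildFly distribution; the Quarkus distribution (Keycloak 17+) uses KC_PROXY instead. A minimal sketch of the proxy-related variables, assuming TLS is terminated at an edge proxy in front of Keycloak:

```yaml
# Sketch only: KC_* names are the Quarkus-distribution options; the
# "edge" value assumes the proxy terminates TLS and forwards plain HTTP.
- name: KC_HOSTNAME
  value: my-domain
- name: KC_HTTP_RELATIVE_PATH
  value: "/auth"
- name: KC_HTTP_ENABLED
  value: "true"
- name: KC_PROXY
  value: "edge"
```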

Related

Sign-out fails for GitLab integrated with Keycloak OIDC

When logged into GitLab using the OAuth2 provider Keycloak and trying to log out, GitLab redirects to the sign_in page but doesn't end our session on Keycloak, so we are logged in again.
These are the environment variables used in the GitLab Kubernetes deployment:
- name: OAUTH2_GENERIC_APP_ID
  value: <client-name>
- name: OAUTH2_GENERIC_APP_SECRET
  value: "<client-secret>"
- name: OAUTH2_GENERIC_CLIENT_AUTHORIZE_URL
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/auth"
- name: OAUTH2_GENERIC_CLIENT_END_SESSION_ENDPOINT
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/logout"
- name: OAUTH2_GENERIC_CLIENT_SITE
  value: "https://<keycloak-url>/auth/realms/<realm-name>"
- name: OAUTH2_GENERIC_CLIENT_TOKEN_URL
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/token"
- name: OAUTH2_GENERIC_CLIENT_USER_INFO_URL
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/userinfo"
- name: OAUTH2_GENERIC_ID_PATH
  value: sub
- name: OAUTH2_GENERIC_NAME
  value: Keycloak
- name: OAUTH2_GENERIC_USER_EMAIL
  value: email
- name: OAUTH2_GENERIC_USER_NAME
  value: preferred_username
- name: OAUTH2_GENERIC_USER_UID
  value: sub
- name: OAUTH_ALLOW_SSO
  value: Keycloak
- name: OAUTH_AUTO_LINK_LDAP_USER
  value: "false"
- name: OAUTH_AUTO_LINK_SAML_USER
  value: "false"
- name: OAUTH_AUTO_SIGN_IN_WITH_PROVIDER
  value: Keycloak
- name: OAUTH_BLOCK_AUTO_CREATED_USERS
  value: "false"
- name: OAUTH_ENABLED
  value: "true"
- name: OAUTH_EXTERNAL_PROVIDERS
  value: Keycloak
I have tried the workaround mentioned here: https://gitlab.com/gitlab-org/gitlab/-/issues/31203, but no luck. Please help.
Note:
Gitlab version: 14.9.2
Keycloak version: 17
Kubernetes Version: 1.21.5
To be perfectly clear: the expectation is that you should be signed out of GitLab, not necessarily Keycloak altogether. This part is working correctly, since you see the sign-in page after signing out. For example, if you sign into GitLab using Google and then sign out of GitLab, you should only be signed out of GitLab, not Google.
The behavior you are observing is due to auto-login (auto_sign_in_with_provider) being enabled, which automatically redirects users from the sign-in page back to Keycloak to log in again immediately after (successfully) signing out.
To avoid this problem, in the GitLab settings (under Admin -> Settings -> General -> Sign-in Restrictions) set the After sign-out path to /users/sign_in?auto_sign_in=false, in other words https://gitlab.example.com/users/sign_in?auto_sign_in=false
Note that the query string ?auto_sign_in=false prevents the auto-redirect back into Keycloak. You can also choose a different URL entirely.
See sign-in information and sign in with provider automatically for more information.

How to create Keycloak with the operator and an external database

I followed this guide, but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and a Keycloak resource with an external database:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
but when I check the log, Keycloak cannot connect to the DB. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using the value from my secret?
UPDATE
When I check the Keycloak pod created by the operator, I can see:
env:
  - name: DB_VENDOR
    value: POSTGRES
  - name: DB_SCHEMA
    value: public
  - name: DB_ADDR
    value: keycloak-postgresql.keycloak
  - name: DB_PORT
    value: '5432'
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
so now I know why I cannot connect to the DB: it uses a different DB_ADDR. How can I use the address my-app.postgres (a DB in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; it will be:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I set up both Keycloak and Postgres in the same namespace, so it works like a charm.
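If the database lives in another namespace, one generic Kubernetes pattern (not specific to the Keycloak Operator) is an ExternalName Service that aliases the cross-namespace DNS name. A sketch, where the service and namespace names are assumptions:

```yaml
# Resolves "postgres-alias" in the keycloak namespace to the Postgres
# service living in the "postgres" namespace (all names are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: postgres-alias
  namespace: keycloak
spec:
  type: ExternalName
  externalName: my-app.postgres.svc.cluster.local
```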
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration, it also uses keycloak-postgresql.keycloak as DB_ADDR, but this points to my internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoint:
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME                  ENDPOINTS                        AGE
keycloak-postgresql   {postgresql's service ip}:5432   4m31s
However, the reason it fails is the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB pod has different labels, the selector will not match.
I reported this issue to the community. If they reply, I will try to fix this bug by submitting a patch.
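Following that selector, one workaround sketch is to give the database pods the labels the operator-created service selects on. A hypothetical StatefulSet fragment (names and image are assumptions):

```yaml
# The pod-template labels must match the service selector
# (app: keycloak, component: database) for endpoints to be populated.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
  namespace: keycloak
spec:
  serviceName: postgresql
  selector:
    matchLabels:
      app: keycloak
      component: database
  template:
    metadata:
      labels:
        app: keycloak          # matches the operator service's selector
        component: database
    spec:
      containers:
        - name: postgres
          image: postgres:13
```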
I was having the same issue, and after looking at @JiyeYu's answer, I searched the project's issue backlog and found some related issues that are still open (at the time of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading it and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (as in @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default; delete it). Apparently it is not only ignored but also somehow breaks the Keycloak pod's initialization.
After I applied these changes, the issue was solved for me.
I also had a similar problem; it turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external DB.
Since Keycloak somehow internally translates the real external DB address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the DB certificate, it worked as advertised.

Set forceBackendUrlToFrontendUrl for Keycloak using Kubernetes

Is it possible to set forceBackendUrlToFrontendUrl as an environment variable in Kubernetes?
My problem is that backend communication from pod to pod runs over unencrypted HTTP, while Keycloak (the frontend) is only reachable over HTTPS.
The JWT has the "iss" claim https://......, and the service calls Keycloak to check this token. Keycloak says the token is invalid because the issuer is invalid; and it is right, https is not http.
I think I must set the forceBackendUrlToFrontendUrl variable from the Keycloak documentation, but I have no idea how to set it in Kubernetes.
I had a similar problem; the configuration below works for me:
Keycloak:
  Outside: https://keycloak.mydomain.com
  Inside: https://keycloak.namespace.svc:8443
- Keycloak container env variables:
  env:
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
    - name: KEYCLOAK_FRONTEND_URL
      value: https://keycloak.mydomain.com/auth/
---
Frontend:
  Outside: https://myfrontend.com
  Inside: http://myfrontend.namespace.svc:8080
- keycloak.json: "url": "https://keycloak.mydomain.com/auth",
- Keycloak Admin Console:
  - frontend-client: RootURL: https://myfrontend.com
---
Backend:
  Outside: https://myfrontend.com/api
  Inside: http://mybackend.namespace.svc:8080
- Keycloak Admin Console:
  - backend-client: RootURL: http://mybackend.namespace.svc:8080
This is a Spring Boot application:
- application.yml
spring:
  security:
    oauth2:
      client:
        provider:
          keycloak:
            authorization-uri: "https://keycloak.mydomain.com/auth/realms/<realm>/protocol/openid-connect/auth"
            jwk-set-uri: "https://keycloak.namespace.svc:8443/auth/realms/<realm>/protocol/openid-connect/certs"
            token-uri: "https://keycloak.namespace.svc:8443/auth/realms/<realm>/protocol/openid-connect/token"
            user-info-uri: "https://keycloak.namespace.svc:8443/auth/realms/<realm>/protocol/openid-connect/userinfo"
            issuer-uri: "https://keycloak.mydomain.com/auth/realms/<realm>"

How to secure Kibana dashboard using keycloak-gatekeeper?

Current flow:
incoming request (/sso-kibana) --> Envoy proxy --> /sso-kibana
Expected flow:
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If already logged in --> /sso-kibana
I deployed keycloak-gatekeeper in my k8s cluster with the following configuration:
keycloak-gatekeeper.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak-gatekeeper
  name: keycloak-gatekeeper
spec:
  selector:
    matchLabels:
      app: keycloak-gatekeeper
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak-gatekeeper
    spec:
      containers:
        - image: keycloak/keycloak-gatekeeper
          imagePullPolicy: Always
          name: keycloak-gatekeeper
          ports:
            - containerPort: 3000
          args:
            - "--config=/keycloak-proxy-poc/keycloak-gatekeeper/gatekeeper.yaml"
            - "--enable-logging=true"
            - "--enable-json-logging=true"
            - "--verbose=true"
          volumeMounts:
            - mountPath: /keycloak-proxy-poc/keycloak-gatekeeper
              name: secrets
      volumes:
        - name: secrets
          secret:
            secretName: gatekeeper
gatekeeper.yaml
discovery-url: https://keycloak/auth/realms/MyRealm
enable-default-deny: true
listen: 0.0.0.0:3000
upstream-url: https://kibana.k8s.cluster:5601
client-id: kibana
client-secret: d62e46c3-2a65-4069-b2fc-0ae5884a4952
Envoy.yaml
- name: kibana
  hosts: [{ socket_address: { address: keycloak-gatekeeper, port_value: 3000 }}]
Problem:
I am able to invoke the Keycloak login on /sso-kibana, but after login the user does not land on the /sso-kibana URL, i.e. the Kibana dashboard does not load.
Note: Kibana is also running in the k8s cluster.
References:
https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382
https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d
Update 1:
I'm able to invoke the Keycloak login on /sso-kibana, but after entering credentials it gives a 404. The flow is the following:
Step 1. Clicked on http://something/sso-kibana
Step 2. Keycloak login page opens at https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth?...
Step 3. After entering credentials redirected to this URL https://something/sso-kibana/oauth/callback?state=890cd02c-f...
Step 4. 404
Update 2:
The 404 error was solved after I added a new route in Envoy.yaml:
Envoy.yaml
- match: { prefix: /sso-kibana/oauth/callback }
  route: { prefix_rewrite: "/", cluster: kibana.k8s.cluster }
Therefore, the expected flow (shown below) now works fine:
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If already logged in --> /sso-kibana
In your config you explicitly enabled enable-default-deny, which is explained in the documentation as:
enables a default denial on all requests, you have to explicitly say what is permitted (recommended)
With that enabled, you will need to specify URIs, methods, etc., either via resources entries as shown in [1] or via a command-line argument [2]. In the case of Kibana, you can start with:
resources:
  - uri: /app/*
[1] https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration
[2] https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing
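For instance, a slightly fuller resources sketch; the URIs and methods here are assumptions to adapt to your own Kibana paths, not a verified Kibana allow-list:

```yaml
resources:
  - uri: /app/*          # Kibana UI pages
    methods:
      - GET
  - uri: /api/*          # REST calls made by the Kibana UI
    methods:
      - GET
      - POST
```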

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my Spring Boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained at https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine: I have a Docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable in my GKE setup. These are the steps I completed, thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on GitLab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Set up the volume:
...
volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
        - key: application-credentials.json # default name created by the create secret from-file command
          path: application-credentials.json
3. Set up the volume mount:
spec:
  containers:
    - name: my-service
      volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
4. Set up the environment variable:
spec:
  containers:
    - name: my-service
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
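To sanity-check steps 2-4 from inside the container, you can verify that the file GOOGLE_APPLICATION_CREDENTIALS points to actually parses as a service-account key. A minimal sketch; it is demonstrated here against a dummy key file, while in the pod the real path would be /etc/gcp/application-credentials.json:

```python
import json
import os
import tempfile

def looks_like_service_account_key(path):
    """Return True if the JSON file at `path` has the fields ADC expects
    in a service-account key file."""
    with open(path) as f:
        key = json.load(f)
    return {"type", "project_id", "private_key", "client_email"} <= key.keys()

# Demo with a dummy key file; in the cluster, check the mounted secret instead.
dummy = {"type": "service_account", "project_id": "demo",
         "private_key": "-----BEGIN PRIVATE KEY-----...",
         "client_email": "svc@demo.iam.gserviceaccount.com"}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(dummy, f)

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = f.name
print(looks_like_service_account_key(os.environ["GOOGLE_APPLICATION_CREDENTIALS"]))
```

If this prints False (or raises), the mount path and the environment variable are out of sync.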
That means you are trying to access a service that is not enabled or that you are not authorized to use. Are you sure you enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard or navigate to APIs & Services from the menu.
Would it help if you added the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables, as described in the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"