Keycloak-nodejs-connect grantManager can't validateToken when configured with internal Kubernetes Keycloak service address

I have an issue when validating tokens using the keycloak-nodejs-connect library deployed to a Kubernetes cluster, specifically when using the internal Kubernetes service address for Keycloak as the auth-server-url. I am using Keycloak version 10.0.1.
Our workflow is as follows: our web app authenticates with a public Keycloak client to obtain an access token. This token is attached to requests to the db for data. The db (Hasura) uses an auth hook to validate the token before allowing access to its data. This auth hook implements the keycloak-nodejs-connect lib and, through the provided middleware, calls the grantManager's validateToken. However, when the connect lib is configured with the Kubernetes service address (http://keycloak:8080/auth/), it is guaranteed to fail on the issuer match, because the issuer property in the JWT (iss) will be the frontend URL configured in the Keycloak server (https://keycloak.public.address.uk/auth/).
Is there a way to provide both a frontend and a backend URL to the keycloak-nodejs-connect library, so that issuer validation can succeed whilst the backend URL is used to speak to Keycloak via a Kubernetes service? Or should I be configuring Keycloak a certain way so that the issuer is different? I specifically need to use a Kubernetes service address here, rather than a public address, for Keycloak communications in my cluster.
The following source-location hyperlinks try to highlight the issue in code:
- nodejs connect server url config (note only one url is available, used for both keycloak server communication and issuer validation)
- Where the config is applied
- Where the token issuer is validated against the configured keycloak auth server
- Keycloak server's frontend url
- One example of how the issuer is set to the frontend url when the token is being generated
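For illustration, the check that fails is essentially this comparison in the library's grant-manager (paraphrased from the linked source; realmUrl is derived from the configured auth-server-url, and the realm name below is a placeholder):

// realmUrl is built from auth-server-url, so with the internal service address it
// becomes http://keycloak:8080/auth/realms/myrealm, while token.content.iss carries
// the frontend URL https://keycloak.public.address.uk/auth/realms/myrealm.
if (token.content.iss !== this.realmUrl) {
  // rejected with: invalid token (wrong ISS)
}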
Many thanks for any help,
Andy.

Related

No Login Page shown with Keycloak and Quarkus

I have a Keycloak server running on my localhost on port 8081.
I'm trying to connect my Quarkus application with it to secure REST endpoints.
However, I'm not able to log in to my Keycloak server.
I annotated a /test endpoint with @RolesAllowed("user"). Since then I can't access the endpoint; I just get an empty page with a 401 Unauthorized error in the web console.
What I want is to get redirected to the default Keycloak login page so I can authenticate myself. Any ideas why that is not happening?
Here is my application.properties Keycloak configuration:
quarkus.oidc.auth-server-url=http://localhost:8081/realms/TestRealm
quarkus.oidc.client-id=testclient
quarkus.oidc.credentials.secret=MYSECRET
quarkus.oidc.tls.verification=none
quarkus.keycloak.policy-enforcer.enable=false
logging.level.org.keycloak=DEBUG
resteasy.role.based.security=true
quarkus.http.cors=true
quarkus.http.port=8080
When I set the policy enforcer to true, I can't access any endpoint.
TestRealm has a resource configured with a /test endpoint.
In the Quarkus documentation for Keycloak they say that you don't need to set up your own Keycloak server in dev mode, since Quarkus comes with one. Might that be the problem? Is my Quarkus application not connecting to my Keycloak server? And if so, how can I force Quarkus in dev mode to use my Keycloak server?
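A hedged note on that last question: in recent Quarkus versions, Dev Services for Keycloak only starts when no quarkus.oidc.auth-server-url is configured, and it can also be switched off explicitly in application.properties:

# Stop Quarkus dev mode from starting its own Keycloak (Dev Services)
quarkus.keycloak.devservices.enabled=false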
EDIT: I figured out that I can access my endpoint if I send the request with a Bearer token, so I guess Quarkus is reaching my Keycloak instance.
Still, why don't I get forwarded to the default Keycloak login page when trying to access my REST endpoint via the browser? Am I missing any configuration?
For anyone with the same issue: I fixed it by adding the following to the config:
quarkus.oidc.auth-mechanism=keycloak
quarkus.oidc.application-type=web-app
quarkus.http.auth.permission.authenticated.paths=/*
quarkus.http.auth.permission.authenticated.policy=authenticated
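Putting the question's settings and this fix together, the relevant part of application.properties would look roughly like this (a sketch assembled from the question and the fix; TestRealm, testclient and MYSECRET are the question's placeholders):

quarkus.oidc.auth-server-url=http://localhost:8081/realms/TestRealm
quarkus.oidc.client-id=testclient
quarkus.oidc.credentials.secret=MYSECRET
# web-app switches the application from expecting a Bearer token on each
# request to the authorization code flow, i.e. the redirect to the Keycloak
# login page that was missing above.
quarkus.oidc.application-type=web-app
# Require an authenticated user on all paths.
quarkus.http.auth.permission.authenticated.paths=/*
quarkus.http.auth.permission.authenticated.policy=authenticated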

Keycloak behind a Load Balancer with SSL gives a "Mixed Content" error

I have set up Keycloak (a Docker container) on a GCP Compute Engine VM. After setting sslRequired=none, I'm able to access Keycloak over a public IP (e.g. http://33.44.55.66:8080) and manage the realm.
I have configured a GCP Classic (HTTPS) Load Balancer and added two front ends as described below. The load balancer forwards requests to the Keycloak instance on the VM.
HTTP: http://55.44.33.22/keycloak
HTTPS: https://my-domain.com/keycloak
In the browser, the HTTP URL works fine and I'm able to log in to Keycloak and manage the realm. However, for the HTTPS URL I get the error below:
Mixed Content: The page at 'https://my-domain.com/auth/admin/master/console/' was loaded over HTTPS, but requested an insecure script 'http://my-domain.com/auth/js/keycloak.js?version=gyc8p'. This request has been blocked; the content must be served over HTTPS.
Note: I tried this suggestion, but it didn't work.
Can anyone help with this, please?
I would never expose Keycloak over plain HTTP. The Keycloak admin console itself is secured via the OIDC protocol, and OIDC requires https, so the default sslRequired=EXTERNAL is a safe and smart configuration default from the vendor.
SSL offloading must be configured properly (a sketch follows below):
- the Keycloak container with PROXY_ADDRESS_FORWARDING=true
- the load balancer/reverse proxy (nginx, GCP Classic Load Balancer, AWS ALB, ...) with proper X-Forwarded-* request headers, so the Keycloak container knows the correct protocol and domain the users are using
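A minimal sketch of those two points, assuming an nginx reverse proxy in front of the container (hostnames are placeholders; a GCP load balancer sets equivalent headers on its own):

# Keycloak container started with proxy forwarding enabled, e.g.:
#   docker run -e PROXY_ADDRESS_FORWARDING=true ... jboss/keycloak
# nginx location terminating TLS and forwarding to the container:
location /auth/ {
    proxy_set_header Host              $host;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;   # lets Keycloak generate https:// URLs
    proxy_set_header X-Forwarded-Host  $host;
    proxy_pass http://keycloak:8080/auth/;
}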

How to use JWT Auth0 token for Cloud Run Service to Service communication if the Metaserver Token is overriding the Auth0 Token

Prerequisites
I have two Cloud Run services, a frontend and a backend. The frontend is written in Vue.js/Nuxt.js and therefore uses a Node backend. The backend is written in Kotlin with Spring Boot.
Problem
To have authenticated internal communication between the frontend and the backend, I need to use a token that is fetched from the Google metaserver. This is documented here: https://cloud.google.com/run/docs/authenticating/service-to-service#java
I did set it all up and it works.
For my second layer of security I integrated the Auth0 authentication provider both in my frontend and my backend. In my frontend a user can log in. The frontend is calling the backend API. Since only authorized users should be able to call the backend I integrated Spring Security to secure the backend API endpoints.
Now the backend verifies that the token on the caller's request is valid before allowing it to pass on to the API logic.
However, this theory does not work, and that is simply because I delegate the API calls through the Node backend proxy. The proxy logic, however, is already applying a token to the request to the backend: the Google metaserver token. Let me illustrate that:
Client (Browser) -> API Request with Auth0 Token -> Frontend Backend Proxy -> Overriding Auth0 Token with Google Metaserver Token -> Calling Backend API
Since the backend receives the metaserver token instead of the Auth0 token, it can never successfully authorize the API call.
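For context, the overriding step in the Node proxy boils down to the documented metadata-server call, roughly like this (a sketch; backendUrl and headers are placeholder names, and fetch assumes Node 18+):

// Ask the GCP metadata server for an identity token targeted at the backend.
const metadataUrl =
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity' +
  '?audience=' + backendUrl;
const idToken = await fetch(metadataUrl, {
  headers: { 'Metadata-Flavor': 'Google' } // mandatory header for metadata requests
}).then(res => res.text());

// This is the step that loses the browser's Auth0 token:
headers['Authorization'] = `Bearer ${idToken}`;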
Question
Given that I was not able to find any articles about this problem, I wonder whether I'm simply approaching it fundamentally wrong.
What do I need to do to have a valid Cloud Run Service to Service communication (guaranteed by the metaserver token) but at the same time have a secured backend API with Auth0 authorization?
I see two workarounds to make this happen:
Authorize the API call in the Node backend proxy logic
Make the backend service public available thus the metaserver token is unnecessary
I don't like either of the above, especially the latter one. I would really like to have it working with my current setup, but I have no idea how. There is no such thing as multiple authorization tokens, right?
OK, I figured out a third way to get de facto internal service-to-service communication.
To omit the metaserver token authentication but still restrict access from the internet, I did the following for my backend Cloud Run service: I allowed unauthenticated invocations, but set the ingress to allow internal traffic only.
This makes the service available without IAM, while the ingress setting prevents any outsider from accessing it; it is reachable for internal traffic only.
So my frontend is calling the backend API now via the Node backend proxy. Even though the frontend's Node backend and the backend service are both somewhat "in the cloud", they do not share the same "internal network". In fact, the frontend Node-backend requests would be routed via egress to the internet and call the backend service just like any other internet user would.
To make it work "like it is coming from internal", you have to do something similar to a VPN, but it's called a VPC (Virtual Private Cloud). And luckily that is very simple: just create a VPC connector in GCP.
BUT be aware that you need to create a so-called Serverless VPC Access connector, explained here: https://cloud.google.com/vpc/docs/serverless-vpc-access
After the Serverless VPC Access connector has been created, you can select it in your Cloud Run service's "Connections" settings. For the backend service it can simply be selected. For the frontend service, however, it is important to select the second egress option, i.e. to route all traffic through the VPC connector. At least that is important in my case, since I am calling the backend service by its assigned service URL instead of a private IP.
After all that is done, my JWT token from the frontend is successfully delivered to the backend API without being overwritten by a metaserver token.
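For reference, the same setup can be expressed with gcloud commands roughly like this (a sketch; the service, connector, region, and IP range names are placeholders):

# Backend: invocable without IAM, but ingress limited to internal traffic
gcloud run services update backend-service --region=europe-west1 --ingress=internal
gcloud run services add-iam-policy-binding backend-service \
    --region=europe-west1 --member=allUsers --role=roles/run.invoker

# The Serverless VPC Access connector from the linked docs
gcloud compute networks vpc-access connectors create backend-connector \
    --region=europe-west1 --network=default --range=10.8.0.0/28

# Frontend: route ALL egress through the connector, so calls to the backend's
# *.run.app service URL count as internal traffic rather than internet traffic
gcloud run services update frontend-service --region=europe-west1 \
    --vpc-connector=backend-connector --vpc-egress=all-traffic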

Protect my RESTful services with a Keycloak OAuth2 provider using NGINX

We have some RESTful services on a certain URI, and we wanted to publish them on the web to use them in our mobile app (written in Java).
Our services are on a server which cannot handle too many requests at the same time, so I used Nginx on an intermediate server, with its proxy_pass functionality, to control access to our REST server.
Now we want to protect our services with OAuth2 using the Password or Client Credentials grant (as our mobile users should not log in to our servers, we cannot display any login page to them).
I set up a Keycloak server which is working, and I can get a token for my client. I'm going to give our auth/token URI to the mobile developers to get an OAuth2 token first and use it in their requests.
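For reference, the token request the mobile app will send looks roughly like this for the password grant (host, realm, client, and credentials are placeholders):

curl -X POST 'https://iam.example.com/auth/realms/myrealm/protocol/openid-connect/token' \
     -d 'grant_type=password' \
     -d 'client_id=mobile-app' \
     -d 'username=some-user' \
     -d 'password=some-password'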
The problem is that I don't know how to configure Nginx to authorize incoming REST requests using the token provided in the request header.
Should I configure Keycloak to handle requests and forward authorized ones to Nginx?
Thanks for your help
After some tries I found this solution:
1- You have to add the njs module to Nginx; this is necessary, and you have to compile it first (so on Windows it is a lot of trouble: I tried MinGW and got stopped at a dependency called expect which is not written for MinGW, and it wasted a lot of my time; in the end I moved our IAM to Ubuntu, where compiling njs and Nginx was done in a few minutes!).
2- Introspection is the key subject here, and Keycloak supports it: its URI is the same as the token URI plus introspect, using Basic authorization in the header and the token in the body.
3- Nginx also supports introspection after the njs module is added, which lets Nginx run JS code included inside the config file. A great example is the NGINX demo "OAuth2 Introspection OSS": just copy the config file and the oauth2.js file and it's done. I added the api directive at the location tag in the Nginx config file to let callers know it is protected.
4- Create one client for Nginx in Keycloak to do the introspection operation; it should be in confidential mode and Service Accounts should be enabled for it.
5- Nginx should forward (proxy_pass) the auth/token request to the IAM, so a location for this should be added to the config file.
6- [For Ubuntu] I got an error where Nginx could not resolve localhost; installing Bind 9 fixed this for me (another time-wasting effort happened here).
7- So anyone who wants to use our service should request a token first and then send their request, with the token attached, to Nginx; Nginx introspects the token and, if the token is OK and {"active": true} is received, forwards the request to the resource and passes the reply back to the requester.
All done.
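For reference, the shape of the resulting config is roughly the following (a sketch based on the NGINX OAuth2 introspection demo mentioned in step 3; hostnames, the realm, and file paths are placeholders):

load_module modules/ngx_http_js_module.so;           # njs module from step 1
events {}
http {
    js_import oauth2 from conf.d/oauth2.js;          # JS file copied from the demo
    server {
        listen 80;
        # Step 5: forward token requests to the IAM (Keycloak)
        location /auth/token {
            proxy_pass http://keycloak:8080/auth/realms/myrealm/protocol/openid-connect/token;
        }
        # The protected REST services: every request is introspected first
        location /api/ {
            auth_request /_oauth2_token_introspection;
            proxy_pass http://rest-server:8080;
        }
        location = /_oauth2_token_introspection {
            internal;                                 # not callable from outside
            js_content oauth2.introspectAccessToken;  # calls Keycloak's ...token/introspect (step 2)
        }
    }
}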

What is the meaning of the Kubernetes webhook user client-certificate config?

I need to implement a custom authentication and authorisation module for Kubernetes. This is going to have to be done via a web hook.
The documentation for the authentication and authorisation webhooks describes a config file that the API Server needs to be started with.
The config file is identical for both authentication and authorisation, and looks like this:
# clusters refers to the remote service.
clusters:
  - name: name-of-remote-authn-service
    cluster:
      certificate-authority: /path/to/ca.pem         # CA for verifying the remote service.
      server: https://authn.example.com/authenticate # URL of remote service to query. Must use 'https'.

# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert

# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
  - context:
      cluster: name-of-remote-authn-service
      user: name-of-api-server
    name: webhook
I can see that the clusters section refers to the remote service, i.e. it's defining the webhook, thereby answering the question the API Server needs to have answered: "what is the URL endpoint to hit when an authn/authz decision is required, and when I connect via HTTPS, who is the CA authority for the webhook's TLS certificate so that I know I can trust the remote webhook?"
I'm not sure of the users section. What is the purpose of the client-certificate and client-key fields? The comment in the file says "cert for the webhook plugin to use", but as this config file is given to the API Server, not the web hook, I don't understand what this means. Is this a certificate that will allow the webhook service to authenticate the connection that the API Server will initiate with it? i.e. the client certificate needs to go into the truststore of the webhook server?
Are both of these assumptions correct?
The Kubernetes webhook uses two-way (mutual) SSL authentication, so the fields in the users section configure the certificate for the client side's authentication, i.e. the certificate the API server presents to the webhook.
The clusters section on its own gives you normal one-way SSL authentication, where the client (here, Kubernetes) validates the server's (here, your webhook module's) certificate against the configured CA.
Once you configure certificates in the users section as well, the server (the webhook module) gains the ability to validate the client (Kubernetes) in turn, acting like one-way SSL authentication run in reverse.
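To make the mutual-TLS direction concrete, a webhook service that enforces the client-certificate check could be sketched like this in Node (file names are hypothetical; the ca here must be the CA that signed the API server's client-certificate from the users section):

// Minimal HTTPS webhook server that requires and verifies a client certificate.
const https = require('https');
const fs = require('fs');

const server = https.createServer({
  key: fs.readFileSync('webhook-key.pem'),        // webhook's own TLS key
  cert: fs.readFileSync('webhook-cert.pem'),      // cert matching certificate-authority in clusters
  ca: fs.readFileSync('apiserver-client-ca.pem'), // CA that signed the API server's client cert
  requestCert: true,        // ask the client (the API server) for a certificate
  rejectUnauthorized: true  // refuse connections whose client cert fails CA validation
}, (req, res) => {
  // By the time we get here, both directions of the TLS handshake have succeeded.
  res.end(JSON.stringify({ apiVersion: 'authentication.k8s.io/v1', kind: 'TokenReview' }));
});

server.listen(8443);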