Running User Interfaces and APIs behind Keycloak Gatekeeper

New to keycloak, and authentication in general, so sorry for missing something obvious, and not using accurate terminology.
I'm trying to run a simple Angular UI that talks to a Java (Dropwizard) API. I'd like both of those to require auth. I'm (almost) able to get them running fine behind Keycloak and Keycloak Gatekeeper using a single realm and a confidential client. In this case Gatekeeper has an upstream-url that points to a Traefik instance, which then routes to either the UI or the API docker container. Something like:
Gatekeeper upstream-url ----> Traefik (my.domain/*) ----> UI  (my.domain/ui/*)
                                                     \---> API (my.domain/api/*)
This works fine until the session times out: when the user on the (already loaded) UI page clicks a button that sends an ajax request to the API (e.g. https://my.domain/api/getstuff), Gatekeeper redirects (i.e. 301) that request to the Keycloak login page. This redirect is a little nonsensical for an API request...
At this point both my UI and API projects are auth agnostic (i.e. they are not running any of the adapters etc. just yet - I'm relying on the docker setup to prevent "direct" access to the UI and API for now. I'll add the adapters once I need to know something about the user). I can see in https://www.keycloak.org/docs/latest/securing_apps/index.html#configuration-options the autodetect-bearer-only option, which seems to describe my issue, i.e.
It allows you to redirect unauthenticated users of the web application to the Keycloak login page, but send an HTTP 401 status code to unauthenticated SOAP or REST clients instead as they would not understand a redirect to the login page
but that seems to apply at the adapter layer, i.e. after Gatekeeper in my scenario.
This seems similar too.
I think I want unauthenticated (e.g. never logged in, or timed out) requests to https://my.domain/ui/* to be redirected to the Keycloak login page, but requests to https://my.domain/api/* to get a 401.
And I want the ajax request from https://my.domain/ui/somepage to https://my.domain/api/getstuff to use the JWT/token/cookie that the browser already has from the login (which is working now).
How do I do this? What stupidly obvious step have I missed!?

Unfortunately, you cannot tell Gatekeeper to return 401 (403) response codes instead of a redirect. There is a similar issue: https://issues.jboss.org/browse/KEYCLOAK-11082
What you can do is remove Gatekeeper completely and implement public-client authentication on the frontend (JS adapter) and a bearer-only client on the backend (Java adapter). If your Java application serves the frontend, you can implement only confidential-client authentication and return a 401 (403) response for /api/* requests.
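A minimal sketch of that split on the frontend side, using the Keycloak JS adapter (keycloak-js). The realm, client id and URLs here are placeholder assumptions based on the question's my.domain example, not a known-good configuration:

import Keycloak from 'keycloak-js';

// All names (URL, realm, client id, API path) are placeholders - adjust to your setup.
const keycloak = new Keycloak({
  url: 'https://my.domain/auth',
  realm: 'my-realm',
  clientId: 'my-ui', // public client, no secret
});

async function bootstrap(): Promise<void> {
  // Unauthenticated browser users are redirected to the Keycloak login page.
  await keycloak.init({ onLoad: 'login-required' });

  // API calls carry the token; a bearer-only backend answers 401/403 instead of redirecting.
  const res = await fetch('https://my.domain/api/getstuff', {
    headers: { Authorization: `Bearer ${keycloak.token}` },
  });
  if (res.status === 401) {
    // e.g. the token expired: refresh it (keep at least 30s validity) and retry, or re-login.
    await keycloak.updateToken(30);
  }
}

bootstrap();

With this setup Gatekeeper is gone: the browser redirect is handled by the JS adapter, while the bearer-only Java adapter on the API simply rejects missing or invalid tokens with 401/403.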

Related

How to use JWT Auth0 token for Cloud Run Service to Service communication if the Metaserver Token is overriding the Auth0 Token

Prerequisites
I have two Cloud Run services, a frontend and a backend. The frontend is written in Vue.js/Nuxt.js and therefore uses a Node backend. The backend is written in Kotlin with Spring Boot.
Problem
To have authenticated internal communication between the frontend and the backend I need to use a token that is fetched from the Google metadata server. This is documented here: https://cloud.google.com/run/docs/authenticating/service-to-service#java
I did set it all up and it works.
For my second layer of security I integrated the Auth0 authentication provider both in my frontend and my backend. In my frontend a user can log in. The frontend is calling the backend API. Since only authorized users should be able to call the backend I integrated Spring Security to secure the backend API endpoints.
Now the backend verifies that the token on the caller's request is valid before allowing it to pass on to the API logic.
However, this theory does not work, and that is simply because I delegate the API calls through the Node backend proxy. The proxy logic, however, already applies a token to the request to the backend: the Google metaserver token. Let me illustrate that:
Client (Browser) -> API Request with Auth0 Token -> Frontend Backend Proxy -> Overriding Auth0 Token with Google Metaserver Token -> Calling Backend API
Since the backend is receiving the metaserver token instead of the Auth0 Token it can never successfully authorize the API call.
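To make that concrete, here is roughly what the proxy step looks like (a sketch only: backendUrl, the path and the dropped token parameter are hypothetical; the metadata endpoint and header come from the Cloud Run service-to-service docs linked above):

// Node 18+ (global fetch). backendUrl is a placeholder Cloud Run URL.
const backendUrl = 'https://backend-service-xyz-uc.a.run.app';

async function fetchIdToken(audience: string): Promise<string> {
  // Ask the metadata server for an identity token for the backend service.
  const res = await fetch(
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity' +
      `?audience=${encodeURIComponent(audience)}`,
    { headers: { 'Metadata-Flavor': 'Google' } },
  );
  return res.text();
}

async function proxyApiCall(path: string, auth0TokenFromBrowser: string): Promise<Response> {
  const idToken = await fetchIdToken(backendUrl);
  // The Authorization header is set to the metadata-server token here,
  // so the Auth0 token the browser sent (auth0TokenFromBrowser) never reaches the backend.
  return fetch(backendUrl + path, {
    headers: { Authorization: `Bearer ${idToken}` },
  });
}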
Question
Since I was not able to find any articles about this problem, I wonder if I am simply approaching it the wrong way.
What do I need to do to have a valid Cloud Run Service to Service communication (guaranteed by the metaserver token) but at the same time have a secured backend API with Auth0 authorization?
I see two workarounds to make this happen:
Authorize the API call in the Node backend proxy logic
Make the backend service publicly available so that the metaserver token is unnecessary
I don't like either of the above - especially the latter. I would really like to have it working with my current setup, but I have no idea how. There is no such thing as multiple authorization tokens, right?
OK, I figured out a third way to get de-facto internal service-to-service communication.
To omit the metadata-server token authentication but still restrict access from the internet, I did the following for my backend Cloud Run service: I allowed unauthenticated invocations (no IAM check) and restricted the ingress to internal traffic.
This makes the service callable without IAM, but the ingress setting prevents any outsider from reaching it - the service is available without IAM but only for internal traffic.
So my frontend is calling the backend API now via the Node backend proxy. Even though the frontend node-backend and the backend service are both somewhat "in the cloud" they do not share the same "internal network". In fact the frontend node-backend requests would be redirected via egress to the internet and call the backend service just like any other internet-user would do.
To make it look like the traffic is coming from inside, you need something similar to a VPN; here it is called a VPC (Virtual Private Cloud). Luckily that is very simple: just create a VPC connector in GCP.
BUT be aware that you need a so-called Serverless VPC Access connector. It is explained here: https://cloud.google.com/vpc/docs/serverless-vpc-access
After the Serverless VPC Access connector has been created you can select it in your Cloud Run service's "Connection" settings. For the backend service it can simply be selected. For the frontend service, however, it is important to select the second egress option, i.e. routing all traffic through the VPC connector rather than only requests to private IPs.
At least that is important in my case, since I am calling the backend service by its assigned service URL instead of a private IP.
After all that is done my JWT token from the frontend is successfully delivered to the backend API without being overwritten by a MetaServer token.

Not able to load keycloak authentication page from application, calling protected resource with ajax request

I have configured Keycloak for IAM, with Gatekeeper as a proxy. When I call a protected resource from my Angular application through an ajax request, it does not redirect me to the Keycloak login page, although in the browser's network view I can see the request going to the login page. Any help would be much appreciated.
To me, it sounds like you have set up Gatekeeper to only protect your backend resources? Otherwise, the redirect would happen when you try to access your frontend.
If you are running your frontend as a separate application you need to obtain a Bearer token from Keycloak and pass it along in your ajax request. You can use the JS adapter to do that: https://www.keycloak.org/docs/latest/securing_apps/#_javascript_adapter
In that case, you should also configure Gatekeeper with the --no-redirect option, so that it denies any unauthorized request.
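For an Angular frontend that could be an HTTP interceptor along these lines (a sketch; AuthService is a hypothetical wrapper around an already initialized keycloak-js instance):

import { Injectable } from '@angular/core';
import { HttpInterceptor, HttpRequest, HttpHandler, HttpEvent } from '@angular/common/http';
import { Observable } from 'rxjs';
import { AuthService } from './auth.service'; // hypothetical wrapper around keycloak-js

@Injectable()
export class BearerTokenInterceptor implements HttpInterceptor {
  constructor(private auth: AuthService) {}

  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    const token = this.auth.token; // keycloak.token obtained by the JS adapter
    if (!token) {
      return next.handle(req);
    }
    // Attach the bearer token so the proxy (running with --no-redirect) accepts the
    // ajax request, or answers 401 instead of redirecting it to the login page.
    const authorized = req.clone({ setHeaders: { Authorization: `Bearer ${token}` } });
    return next.handle(authorized);
  }
}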

keycloak/louketo gatekeeper -- doesn't automatically redirect to keycloak login

I am setting up gatekeeper/louketo as a reverse proxy for a browser app. I have the proxy deployed as a sidecar in a kubernetes pod, with keycloak elsewhere in the same cluster (but accessed by a public URL). Gatekeeper is behind an nginx ingress, which does tls termination.
[I have tried both the most current Louketo version and also the fork oneconcern/keycloak-gatekeeper. There are some differences, but the issue is the same, so I think it's a problem in my configuration.]
Gatekeeper, no matter how I set up the config, reads the discovery URL of my realm, but then doesn't redirect logins there. Rather, it redirects to my upstream app using the /oauth/authorize path. I can manually force my app to redirect again to Keycloak, but on return from Keycloak, Gatekeeper doesn't recognize the cookie and catches me in a redirect loop.
It would seem I am making some simple config error, but I've been working on this for two days and am at my wit's end. (I even hacked extra debugging into the Go code, but haven't studied it enough to really know what it is doing.)
My config (best guess of many different variants tried):
- --config=/var/secrets/auth-proxy-keycloak-config.yaml
- --discovery-url=https://auth.my-domain.com/auth/realms/my-realm
- --listen=:4000
- --upstream-url=http://127.0.0.1:3000
- --redirection-url=https://dev.my-domain.com/
- --enable-refresh-tokens=true
- --enable-default-deny=true
- --resources=uri=/*|roles=developer
- --resources=uri=/about|white-listed=true
- --resources=uri=/oauth/*|white-listed=true
The ingress serves https://dev.my-domain.com and routes to port 4000, which is the auth-proxy sidecar. It is set up with a Let's Encrypt certificate and terminates TLS. I don't use TLS in the proxy (should I?). The upstream app is at port 3000. Keycloak is at auth.my-domain.com. In auth-proxy-keycloak-config.yaml I have the encryption key and client_id. The Keycloak client is set up for public access and standard flow (hence no client_secret needed, I presume). I have fiddled with the various uri settings, and also put in web origins "*" for CORS for testing.
When I try a protected url in the browser, I see:
no session found in request, redirecting for authorization {"error": "authentication session not found"}
in the proxy logs, and it redirects me to /oauth/authorize, not to https://auth.my-domain.com/auth/realms/my-realm/protocol/openid-connect/auth where I think it should redirect me.
UPDATE -- as @jan-garaj noted in a comment to the answer, /oauth/* shouldn't have been whitelisted. (I got that from a possibly mistaken interpretation of someone else's answer.) I then had to make the cookie not http-only, and finally hit on this issue - Keycloak-gatekeeper: 'aud' claim and 'client_id' do not match ... after that it works!
From the Louketo-proxy doc:
/oauth/authorize is authentication endpoint which will generate the OpenID redirect to the provider
So that redirect is correct. It is a louketo-proxy endpoint; it is not a request for your app, and it will be processed by louketo-proxy, which will generate another redirect to your IdP, where the user needs to log in.
Off topic:
you really need a confidential client and client secret for the authorization code flow
web origins "*" for CORS is correct only for the http protocol; an explicit origin specification is needed for https

OAuth2.0 Auth Server and IAM

I'm building a microservice based REST API and a native SPA Web Frontend for an application.
The API should be protected using OAuth2.0 to allow for other clients in the future. It should use the Authorization Code Flow, ideally with Proof Key for Code Exchange (PKCE).
As I understand it I need to run my own OAuth Auth Server that's managing the API Clients and generating access tokens, etc.
Also I need my own authentication/IAM service with its own frontend for user login and client authorization granting. This service is the place where the users' login credentials are ultimately checked against a backend. That last part should be flexible, and the backend might be an LDAP server in some private cloud deployment.
These components (auth server and IAM service) are outside of the OAuth scope but appear, correct me if I'm wrong, to be required if I'm running my own API for my own users.
However, creating these services myself appears to be more work than I'd like, besides the obvious security risks involved.
I read about Auth0 and Okta, but I'm not sure if they are suited for my use case, with the application potentially deployed in a private cloud.
I also thought about running Hydra (OAuth server) and Kratos (IAM) by Ory, but I'm not sure if this is adding too many dependencies to my project.
Isn't there an easy way to secure an API with OAuth that handles the auth server and the IAM and that's suitable for small projects?!

Protect my RESTful services by a Keycloak Oauth2 Provider Using NGINX

We have some RESTful services on a certain URI and we wanted to publish them on the web so we can use them in our mobile app (written in Java).
Our services are on a server which cannot handle too many requests at the same time, so I used Nginx on an intermediate server, with its proxy_pass functionality, to control access to our REST server.
Now we want to protect our services with OAuth2 using the Password or Client Credentials grant (as our mobile users should not log into our servers, we cannot display any login page to them).
I set up a Keycloak server, which is working, and I could get a token for my client. I'm going to give the auth/token URI to our mobile developers so they can obtain an OAuth2 token first and use it in their requests.
The problem is I don't know how to configure Nginx to authorize incoming REST requests using the token provided in the request header.
Should I configure Keycloak to handle requests and forward authorized ones to Nginx?
Thanks for your help
After some tries I found this solution:
1- You have to add the njs module to Nginx; this is necessary, and you have to compile it first. (On Windows this is a lot of trouble: I tried MinGW and got stuck at a dependency called expect which is not written for MinGW, and it wasted a lot of my time. I eventually moved our IAM to Ubuntu, and compiling njs and Nginx there was done in a few minutes!)
2- Introspection is the key subject here, and Keycloak supports it. Its URI is the same as the token URI plus /introspect, using Basic authorization in the header and the token in the body (see the sketch after these steps).
3- Nginx also supports introspection after the njs module is added, which lets Nginx run JS code included inside the config file. A great example is NGINX Demos - OAuth2 Introspection OSS; just copy the config file and the oauth2.js file and it's done. I added an api directive at the location block in the Nginx config file to let callers know it is protected.
4- Create one client for Nginx in Keycloak to do the introspection; it should be in confidential mode and Service Accounts should be enabled for it.
5- Nginx should forward (proxy_pass) the auth/token request to the IAM, so a location for this should be added to the config file.
6- [For Ubuntu] I got an error in Nginx telling me it cannot resolve localhost; installing BIND 9 fixed this for me (another time-wasting effort here).
7- So anyone who wants to use our service should request a token first and then send their request, with the token attached, to Nginx. Nginx introspects the token and, if the token is OK and {"active": true} is received, forwards the request to the resource and passes the reply back to the requester.
All done.
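For reference, the introspection exchange described in steps 2 and 4 looks roughly like this (a sketch in TypeScript for Node 18+, not the njs config itself; the base URL, realm, client id and secret are placeholders):

// Keycloak token introspection (RFC 7662): token endpoint + "/introspect",
// Basic auth with the confidential client's id/secret, token in the form body.
// All names below are placeholders.
const keycloakBase = 'https://my-iam.example.com/auth';
const realm = 'my-realm';
const clientId = 'nginx-introspection';
const clientSecret = 'CHANGE_ME';

async function introspect(token: string): Promise<boolean> {
  const res = await fetch(
    `${keycloakBase}/realms/${realm}/protocol/openid-connect/token/introspect`,
    {
      method: 'POST',
      headers: {
        Authorization: 'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64'),
        'Content-Type': 'application/x-www-form-urlencoded',
      },
      body: new URLSearchParams({ token }).toString(),
    },
  );
  const result = await res.json();
  // Only forward the request upstream when Keycloak answers {"active": true}.
  return result.active === true;
}

In the actual setup this logic lives in the oauth2.js file that njs runs inside Nginx; the sketch only shows the HTTP exchange Keycloak expects.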