Keycloak client deployment best practice

So I want to deploy a client-app (Java, with Spring Security, if that matters) to different companies. The Keycloak server will obviously run on servers of my organization, but the client-app has to run on the servers of the client companies.
Should the keycloak-client's access type be public or confidential?
i.e. what is the client-secret used for? (Encryption)?
Is it therefore a problem if the companies admins can theoretically read the secret by decompiling the jar of the client-app I give them?
Concerning the valid redirect URIs: Ideally I would like to use grant-type: password, so the user at the company enters his credentials into the frontend of the company-deployed client-app and it logs into Keycloak. Potentially the client-app deployed at the company is only reachable from the company intranet.
What can the redirect URI be for this case?

Should the keycloak-client's access type be public or confidential?
From the RFC 6749 OAuth 2.0 specification one can read:
confidential
Clients capable of maintaining the confidentiality of their
credentials (e.g., client implemented on a secure server with
restricted access to the client credentials), or capable of secure
client authentication using other means.
public
Clients incapable of maintaining the confidentiality of their
credentials (e.g., clients executing on the device used by the
resource owner, such as an installed native application or a web
browser-based application), and incapable of secure client
authentication via any other means.
Since you are not building a pure browser-based application or a native mobile app, but rather a web application with a secure backend, you should use a confidential client.
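If it helps to see what that looks like on the Spring Security side, here is a rough sketch of registering Keycloak as a confidential client with spring-security-oauth2-client. The client id, secret, realm and URLs are placeholders, and the endpoint layout assumes a recent Keycloak (without the legacy /auth prefix) and a recent Spring Security version:

```java
import org.springframework.security.oauth2.client.registration.ClientRegistration;
import org.springframework.security.oauth2.core.AuthorizationGrantType;
import org.springframework.security.oauth2.core.ClientAuthenticationMethod;

public class KeycloakClientConfig {

    // All ids, secrets and URLs are placeholders; adjust them to your realm.
    static ClientRegistration keycloakRegistration() {
        String base = "https://keycloak.example.com/realms/myrealm/protocol/openid-connect";
        return ClientRegistration.withRegistrationId("keycloak")
                .clientId("company-app")
                .clientSecret("change-me")                    // the confidential client's credential
                .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
                .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
                .redirectUri("{baseUrl}/login/oauth2/code/{registrationId}")
                .scope("openid", "profile")
                .authorizationUri(base + "/auth")
                .tokenUri(base + "/token")
                .jwkSetUri(base + "/certs")
                .userNameAttributeName("preferred_username")
                .build();
    }
}
```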
i.e. what is the client-secret used for? (Encryption)?
From the Keycloak documentation:
Confidential clients are required to provide a client secret when they
exchange the temporary codes for tokens. Public clients are not
required to provide this client secret.
Therefore, you need the client-secret because you have chosen a confidential client. The client-secret is used so that the application requesting the access token from Keycloak can be properly authenticated. In your case, that is the companies' servers (running your app) requesting an access token from Keycloak. Consequently, Keycloak has to ensure that the server making the request is legitimate.
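To make the secret's role concrete, this is roughly what the code-for-token exchange looks like on the wire; a minimal sketch with plain java.net.http, where the token URL, client id, secret and code are placeholders (a Keycloak adapter or Spring Security would normally do this step for you):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CodeForTokenExchange {
    public static void main(String[] args) throws Exception {
        // Placeholder values; real parameter values must be URL-encoded.
        String tokenUrl = "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token";
        String form = "grant_type=authorization_code"
                + "&code=TEMPORARY_CODE_FROM_THE_REDIRECT"
                + "&redirect_uri=https%3A%2F%2Fapp.company.example%2Fcallback"
                + "&client_id=company-app"
                + "&client_secret=change-me";   // only a confidential client sends this

        HttpRequest request = HttpRequest.newBuilder(URI.create(tokenUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The response body is JSON containing access_token, refresh_token, expires_in, ...
        System.out.println(response.body());
    }
}
```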
That is the purpose of the client-secret. It is similar to going to an ATM to withdraw money: the bank knows that you are the owner of the resource (i.e., the bank account) because you entered the correct PIN (analogous to the client-secret).
Is it therefore a problem if the companies admins can theoretically
read the secret by decompiling the jar of the client-app I give them?
The client_secret has to be known by the application requesting the token (i.e., the company) and by the authorization server (i.e., Keycloak). So in theory, if the companies do not mind their admins having access to such information, it should be fine for you. At the end of the day, the client-secret has to be known by both parties anyway. A way of mitigating potential problems with leaked client secrets is to rotate them once in a while and communicate the change to the interested parties.
As long as one company cannot reverse engineer the client secret of the other company you should be fine.
What can the redirect URI be for this case?
It should be the URL of the frontend landing page of the client-app deployed at the company, i.e., the page the user is sent back to after being successfully authenticated. (Note that if you do end up using grant-type: password, no browser redirect is involved at all; the redirect URI only matters for the browser-based Authorization Code flow.)
Bear in mind, however:
You should take extra precautions when registering valid redirect URI
patterns. If you make them too general you are vulnerable to attacks.
See Threat Model Mitigation chapter for more information.
(source)
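For completeness, here is where the redirect URI actually appears: it is a parameter of the authorization request and must match one of the Valid Redirect URIs registered for the client (exactly, or via a narrow pattern). A small sketch with placeholder host names:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthorizeUrl {
    public static void main(String[] args) {
        // Hypothetical intranet callback; it must match one of the client's
        // registered Valid Redirect URIs in Keycloak.
        String redirectUri = "https://app.intranet.company.example/login/callback";
        String url = "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/auth"
                + "?response_type=code"
                + "&client_id=company-app"
                + "&scope=openid"
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, StandardCharsets.UTF_8);
        System.out.println(url);
    }
}
```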

Related

OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect. (and are also only aware of the application's URL and not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials Flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client; see the sketch below)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way the entire OpenID Connect authentication is transparent for any legacy devices.
This design could be further improved/optimized by caching the access tokens for the duration they are valid for (using the credentials as the cache key) and refreshing them when they expire.
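For concreteness, step 3 above is essentially a single form POST to the token endpoint; a rough sketch of what the proxy would do, where the realm, client id, secret and device credentials are placeholders:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class RopcTokenExchange {
    // Returns the raw token response for the device's baked-in credentials.
    // Realm, client id and secret are placeholders.
    static String fetchTokenResponse(String deviceUser, String devicePassword) throws Exception {
        String tokenUrl = "https://keycloak.example.com/realms/iot/protocol/openid-connect/token";
        String form = "grant_type=password"
                + "&client_id=legacy-proxy"
                + "&client_secret=change-me"
                + "&username=" + URLEncoder.encode(deviceUser, StandardCharsets.UTF_8)
                + "&password=" + URLEncoder.encode(devicePassword, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create(tokenUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Parse access_token/expires_in from the JSON, cache it keyed by the credentials,
        // then set "Authorization: Bearer <token>" on the request forwarded to the service.
        return response.body();
    }
}
```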
Now, this idea seems like such a no-brainer, that we were surprised that we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped as a Keycloak user. We're open to suggestions if this is not the most appropriate mapping for this use-case.
UPDATE 2: (responding to question) Agreed that the Client Credentials Flow would be semantically more correct for such a device-to-device authentication and any future devices we produce will use it. However we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak and 2) we also want to support SSL Client Authentication using a X.509 certificate and from our understanding Keycloak only supports X.509 client certificate user authentication for users, and not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It can work as described, as long as your OP (OpenID Provider) still supports the Resource Owner Password Credentials flow; that flow is deprecated and has been dropped from OAuth 2.1.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.

OAuth2.0 Auth Server and IAM

I'm building a microservice based REST API and a native SPA Web Frontend for an application.
The API should be protected using OAuth2.0 to allow for other clients in the future. It should use the Authorization Code Flow ideally with Proof Key for Code Exchange (PKCE)
As I understand it, I need to run my own OAuth Auth Server that manages the API clients, generates access tokens, etc.
Also I need my own Authentication/IAM service with its own frontend for user login and client authorization granting. This service is the place where the users' login credentials are ultimately checked against a backend. That last part should be flexible, and the backend might be an LDAP server in some private-cloud deployment.
These components (Auth Server and IAM service) are outside of the OAuth scope but appear, correct me if I'm wrong, to be required if I'm running my own API for my own users.
However creating these services myself appears to be more work than I appreciate besides the obvious security risks involved.
I read about Auth0 and Okta, but I'm not sure if they are suited for my use case, with the application potentially deployed in a private cloud.
I also thought about running Hydra (OAuth Server) and Kratos (IAM) by Ory, but I'm not sure if this is adding too many dependencies to my project.
Isn't there an easy way to secure an API with OAuth that deals with the Auth Server and the IAM that's good for small projects?!
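For what it's worth, the PKCE part is independent of whichever server is chosen; generating the code_verifier/code_challenge pair on the client side per RFC 7636 is only a few lines (a sketch, not tied to any particular product):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class Pkce {
    public static void main(String[] args) throws Exception {
        // code_verifier: random bytes, base64url-encoded without padding
        byte[] random = new byte[32];
        new SecureRandom().nextBytes(random);
        String verifier = Base64.getUrlEncoder().withoutPadding().encodeToString(random);

        // code_challenge: base64url(SHA-256(verifier)), sent with the authorization request
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(verifier.getBytes(StandardCharsets.US_ASCII));
        String challenge = Base64.getUrlEncoder().withoutPadding().encodeToString(digest);

        System.out.println("code_verifier=" + verifier);
        System.out.println("code_challenge=" + challenge + " (code_challenge_method=S256)");
    }
}
```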

Keycloak client vs user

I understand that keycloak has built-in clients and we add the users later on.
But in general, what is the difference between a client and a user in Keycloak?
According to the Keycloak documentation
User - Users are entities that are able to log into your system
Client - Clients are entities that can request Keycloak to authenticate a user. Most
often, clients are applications and services that want to use Keycloak to secure
themselves and provide a single sign-on solution. Clients can also be entities that
just want to request identity information or an access token so that they can
securely invoke other services on the network that are secured by Keycloak
In short, not only for Keycloak but for OAuth and OpenID Connect in general, a client represents an application or service that users can access. The built-in clients represent parts of Keycloak itself.
Clients and users are two completely different constructs in keycloak.
In plain English, a client is an application. An example of an application could be yelp.com or any mobile application; a client can also be a simple REST API. Keycloak's built-in clients are for Keycloak's internal use, but any user-defined application has to be registered as a client in Keycloak.
Users are the ones who authenticate via Keycloak to gain access to these applications/clients. Users are stored in the Keycloak DB or in an externally hosted LDAP that is synced with Keycloak.
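As an illustration of the two constructs, here is a hypothetical sketch using the keycloak-admin-client library to register both a client and a user in a realm; the server URL, realm names and credentials are placeholders:

```java
import org.keycloak.OAuth2Constants;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.keycloak.representations.idm.ClientRepresentation;
import org.keycloak.representations.idm.UserRepresentation;

import java.util.List;

public class ClientVsUser {
    public static void main(String[] args) {
        // Admin connection; server URL, realm names and credentials are placeholders.
        Keycloak admin = KeycloakBuilder.builder()
                .serverUrl("https://keycloak.example.com")
                .realm("master")
                .clientId("admin-cli")
                .grantType(OAuth2Constants.PASSWORD)
                .username("admin")
                .password("admin")
                .build();

        // A client = an application that asks Keycloak to authenticate users.
        ClientRepresentation client = new ClientRepresentation();
        client.setClientId("my-webapp");
        client.setRedirectUris(List.of("https://my-webapp.example.com/*"));
        admin.realm("demo").clients().create(client);

        // A user = an entity that logs in through Keycloak to reach such clients.
        UserRepresentation user = new UserRepresentation();
        user.setUsername("alice");
        user.setEnabled(true);
        admin.realm("demo").users().create(user);
    }
}
```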

Security for on-prem/cloud REST Application

I've been reading security articles for several days, but have no formal training in the field. I am developing a configuration and management application for an IoT device. It is meant to be run either on an internal network, or accessed over the web.
My application will be used by IT admins, managers, and factory-floor workers. Depending on the installation, there will be varying levels of infrastructure in place. It could run on a laptop on the floor itself, on a server, or hosted in the cloud. For this reason, we can not assume that our clients will have the kind of infrastructure you might find at a datacenter or in the cloud, for example CAS or NTP.
Our application provides a REST API for client applications to gather data. We'd like to use roles to restrict what data users can access. I've gathered that a common solution for authentication is to encode the username/pass in the REST Header. However, this is completely insecure unless sent over a secure channel.
As I understand it, SSL Certificate Authorities grant certs for a specific domain. Our application will have no set domain, and a different IP depending on the installation. Many web applications do not trust self-signed certs. It's not clear to me whether a self-signed certificate is good enough for a typical application developer who will be consuming our interface.
With this being the case:
1) What are my options to set up a secure channel, internally or via the web?
2) Am I making assumptions about how our product will be used that damage our users' security unnecessarily?
Well, you can use custom encryption to encrypt the data being sent to the applications.
You can also use JSON Web Tokens to secure your REST API: https://en.wikipedia.org/wiki/JSON_Web_Token. The tokens could be generated by a centralized authentication server and included in all requests sent by the client applications to the server.
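As a rough illustration of that pattern (using the auth0 java-jwt library as one example; the subject, role claim and shared secret are placeholders), issuing and then verifying such a token looks like this:

```java
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.interfaces.DecodedJWT;

import java.util.Date;

public class JwtRoundTrip {
    public static void main(String[] args) {
        // Shared HMAC secret is a placeholder; an RSA key pair would let the auth
        // server sign tokens while the REST services verify with only the public key.
        Algorithm alg = Algorithm.HMAC256("change-me");

        // Issued by the central authentication server after checking the user's credentials
        String token = JWT.create()
                .withSubject("factory-worker-42")
                .withClaim("role", "operator")
                .withExpiresAt(new Date(System.currentTimeMillis() + 15 * 60 * 1000))
                .sign(alg);

        // Verified by the REST API on each request (e.g. from an "Authorization: Bearer ..." header)
        DecodedJWT decoded = JWT.require(alg).build().verify(token);
        System.out.println(decoded.getSubject() + " / " + decoded.getClaim("role").asString());
    }
}
```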

Sharing Security Context between web app and RESTful service using Spring Security

We are designing security for a green field project with a UI web module (Spring MVC) - the client, and a RESTful services web module (CXF) - the server, to be deployed as separate war files in the same Websphere app server. The system should be secured with Spring Security, authenticating against LDAP and authorizing against a database. We have been looking for the best solution to share the security context between the 2 apps, so a user can authenticate in the web UI and invoke its AJAX calls to the secured RESTful services. Options found:
OAuth: seems overkill for our requirements, introduces a fairly complex authentication process, and reportedly some enterprise integration issues
CAS: would amount to setting up an enterprise SSO solution, something beyond the scope of our engagement
Container-based (Websphere) security, although not recommended by Spring Security, and we're not clear if this could provide a solution to our specific needs
We're looking for a simpler solution. How can we propagate the Security Context between the 2 apps? Should we implement authentication in the UI web app, then persist sessions in the DB, for the RESTful services to lookup? Can CXF provide a solution? We read many threads about generating a 'security token' that can be passed around, but how can this be done exactly with Spring Security, and is it safe enough?
Looking forward to any thoughts or advice.
You want to be able to invoke the REST web services on the server on behalf of the user authenticated in the UI web module.
The requirement you describe is called Single Sign-On.
The simplest way to do it is to pass an HTTP header with the user name on each REST call (assuming your REST client allows you to set headers).
To do it in a secure way, use one of the following:
Encrypt the user name in the REST client and decrypt it in the REST server
Ensure that the header is sent from the local host (since your applications are deployed in the same container)
Then protect both applications using Spring Security, authenticating against LDAP:
In the first application (REST client) use regular form authentication
In the second application (REST server) add your own PreAuthenticatedProcessingFilter:
http://static.springsource.org/spring-security/site/docs/3.1.x/reference/springsecurity-single.html#d0e6167
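For a recent Spring Security version, a rough Java-config sketch of that server-side pre-authentication setup could look like the following; the header name is an assumption, and ldapUserDetailsService stands for whatever LDAP/DB-backed UserDetailsService the server already uses to load roles (the XML namespace configuration from the linked 3.1 docs is the equivalent):

```java
import java.util.List;

import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.authentication.ProviderManager;
import org.springframework.security.core.userdetails.UserDetailsByNameServiceWrapper;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationProvider;
import org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationToken;
import org.springframework.security.web.authentication.preauth.RequestHeaderAuthenticationFilter;

public class RestServerPreAuthConfig {

    // Trust the user name passed in a custom header by the already-authenticated UI app.
    // The header name is an assumption; ldapUserDetailsService loads the user's details/roles.
    static RequestHeaderAuthenticationFilter preAuthFilter(UserDetailsService ldapUserDetailsService) {
        PreAuthenticatedAuthenticationProvider provider = new PreAuthenticatedAuthenticationProvider();
        provider.setPreAuthenticatedUserDetailsService(
                new UserDetailsByNameServiceWrapper<PreAuthenticatedAuthenticationToken>(ldapUserDetailsService));
        AuthenticationManager manager = new ProviderManager(List.of(provider));

        RequestHeaderAuthenticationFilter filter = new RequestHeaderAuthenticationFilter();
        filter.setPrincipalRequestHeader("X-Authenticated-User"); // header set by the UI web app
        filter.setAuthenticationManager(manager);
        return filter;
    }
}
```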
Edited
Authentication is the process of verifying a principal's identity.
In our case, both the REST client (the Spring MVC application) and the REST server (the CXF application) verify an identity against LDAP. LDAP "says" OK or not. LDAP is a user repository; it is stateless and does not remember previous states, so that state has to be kept in the applications.
As I understand it, a user will never access the REST server directly; the user always accesses the REST client. So when the user accesses the REST client, he or she provides a user name and a password, and the REST client authenticates against LDAP. Therefore, when the REST client calls the REST server, the user has already been authenticated and the REST client knows his name.
So, if a request comes to the REST server with the user-name header, the REST server knows for sure that the user was authenticated, and it should not authenticate him again against LDAP.
(The header should be passed in a secure way, as described above.)
The REST server should take the user name and query LDAP for the related user information, without needing the user's password (since the user has already been authenticated).