OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials (Keycloak)

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect (they only know the application's URL, not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the
Resource Owner Password Credentials would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either Basic Auth or a client certificate)
Perform the Resource Owner Password Credentials flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth resource owner and the reverse proxy acts as the OAuth client)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way the entire OpenID Connect authentication is transparent for any legacy devices.
This design could be further improved/optimized by caching the access tokens for as long as they are valid (using the credentials as the cache key) and refreshing them when they expire, as in the sketch below.
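To make the exchange step concrete, here is a minimal sketch of what the proxy would do, assuming a hypothetical realm demo and a confidential proxy client legacy-proxy with Direct Access Grants enabled in Keycloak (the host, client names and secret are all made up; error handling and the token cache are elided):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class RopcTokenExchange {

        // Hypothetical values: substitute your Keycloak host, realm and proxy client.
        private static final String TOKEN_URL =
                "https://keycloak.example.com/realms/demo/protocol/openid-connect/token";
        private static final String CLIENT_ID = "legacy-proxy";
        private static final String CLIENT_SECRET = "proxy-secret";

        private static String form(String key, String value) {
            return URLEncoder.encode(key, StandardCharsets.UTF_8) + "="
                    + URLEncoder.encode(value, StandardCharsets.UTF_8);
        }

        // Exchanges the device's extracted credentials for an access token via the
        // Resource Owner Password Credentials grant (step 3 of the proxy design).
        static String fetchTokenResponse(String username, String password) throws Exception {
            String body = String.join("&",
                    form("grant_type", "password"),
                    form("client_id", CLIENT_ID),
                    form("client_secret", CLIENT_SECRET),
                    form("username", username),
                    form("password", password));
            HttpRequest request = HttpRequest.newBuilder(URI.create(TOKEN_URL))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new IllegalStateException("Token request failed: " + response.statusCode());
            }
            // JSON containing "access_token" and "expires_in"; a real proxy would parse
            // it and cache the token (keyed by the credentials) until shortly before expiry.
            return response.body();
        }
    }

On success, the proxy would set Authorization: Bearer <access_token> on the forwarded request.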
Now, this idea seems like such a no-brainer, that we were surprised that we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to a question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped to a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to a question) Agreed that the Client Credentials flow would be semantically more correct for such device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL client authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 certificate authentication for users, not for clients.

Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OpenID Provider (OP) supports the Resource Owner Password Credentials flow, which is deprecated and has been dropped from OAuth 2.1.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.

Related

Keycloak client deployment best practice

So I want to deploy a client-app (Java, with Spring Security, if that matters) to different companies. Keycloak will obviously run on my organization's servers, but the client-app has to run on the servers of the client companies.
Should the keycloak-client's access type be public or confidential?
i.e. what is the client-secret used for? (Encryption)?
Is it therefore a problem if the companies admins can theoretically read the secret by decompiling the jar of the client-app I give them?
Concerning the valid redirect URIs: Ideally I would like to use grant-type: password, so the user of the company enters his credentials into the frontend of the company-deployed client-app and it logs into Keycloak. Potentially the client-app deployed in the company is only reachable from the company intranet.
What can the redirect URI be for this case?
Should the keycloak-client's access type be public or confidential?
From the RFC 6749 OAuth 2.0 specification one can read:
confidential
Clients capable of maintaining the confidentiality of their
credentials (e.g., client implemented on a secure server with
restricted access to the client credentials), or capable of secure
client authentication using other means.
public
Clients incapable of maintaining the confidentiality of their
credentials (e.g., clients executing on the device used by the
resource owner, such as an installed native application or a web
browser-based application), and incapable of secure client
authentication via any other means.
Since you are not using a pure web-browser application or a mobile phone, but rather a web application with a secure backend, you should use a confidential client.
i.e. what is the client-secret used for? (Encryption)?
From the Keycloak documentation:
Confidential clients are required to provide a client secret when they
exchange the temporary codes for tokens. Public clients are not
required to provide this client secret.
Therefore, you need the client-secret because you have chosen a confidential client. The client-secret is used so that the application requesting the access token from Keycloak can be properly authenticated; in your case, that is the companies' servers (running your app) requesting an access token from Keycloak. Consequently, Keycloak has to ensure that the server making the request is legitimate.
That is the purpose of the client-secret. It is similar to when you go to the ATM and request money: the bank knows you are the owner of the resource (i.e., the bank account) because you entered the correct PIN (analogous to the client-secret).
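To make this concrete, here is a hedged sketch of the step where the client-secret is actually used: exchanging the temporary code for tokens. The realm, client, secret and redirect URI below are all made up, and a real app would use a library (e.g. Spring Security's OAuth support) rather than raw HTTP:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class CodeExchange {
        public static void main(String[] args) throws Exception {
            // Hypothetical realm, client and redirect URI.
            String tokenUrl = "https://keycloak.example.com/realms/demo/protocol/openid-connect/token";
            String clientId = "company-app";
            String clientSecret = "s3cr3t"; // what makes this client "confidential"
            String code = args[0];          // the temporary code returned on the redirect URI

            // Confidential clients authenticate themselves, here with HTTP Basic auth.
            String basic = Base64.getEncoder().encodeToString(
                    (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
            String body = "grant_type=authorization_code"
                    + "&code=" + code
                    + "&redirect_uri="
                    + URLEncoder.encode("https://app.example.com/callback", StandardCharsets.UTF_8);

            HttpRequest request = HttpRequest.newBuilder(URI.create(tokenUrl))
                    .header("Authorization", "Basic " + basic)
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // JSON with access_token, refresh_token, ...
        }
    }

A public client would send the same request without the Authorization header (and without any secret), which is exactly why Keycloak cannot vouch for who is calling in that case.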
Is it therefore a problem if the companies admins can theoretically
read the secret by decompiling the jar of the client-app I give them?
The client_secret has to be known by the application requesting the token (i.e., the company) and by the authorization server (i.e., Keycloak). So in theory, if the companies do not mind their admins having access to such information, it should be fine for you; at the end of the day, the client-secret has to be known by both parties anyway. A way of mitigating potential leaks of client secrets is to rotate them once in a while and communicate the change to the interested parties.
As long as one company cannot reverse engineer the client secret of the other company you should be fine.
What can the redirect URI be for this case?
It should be the URL of the frontend landing page of the company deploying the client-app, i.e., where the user ends up after being successfully authenticated.
Bear in mind, however:
You should take extra precautions when registering valid redirect URI
patterns. If you make them too general you are vulnerable to attacks.
See Threat Model Mitigation chapter for more information.
(source)

OAuth2.0 Auth Server and IAM

I'm building a microservice-based REST API and a native SPA web frontend for an application.
The API should be protected using OAuth 2.0 to allow for other clients in the future. It should use the Authorization Code flow, ideally with Proof Key for Code Exchange (PKCE); my understanding of what PKCE adds is sketched below.
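For reference, PKCE (RFC 7636) has the client generate a random code_verifier, send its SHA-256 hash (the code_challenge) on the authorization request, and reveal the verifier only on the back-channel token request. A minimal sketch of the generation step, not tied to any particular library:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class Pkce {
        public static void main(String[] args) throws Exception {
            Base64.Encoder b64url = Base64.getUrlEncoder().withoutPadding();

            // code_verifier: 32 random octets, base64url-encoded (43 characters)
            byte[] random = new byte[32];
            new SecureRandom().nextBytes(random);
            String verifier = b64url.encodeToString(random);

            // code_challenge = BASE64URL(SHA-256(code_verifier)), method "S256"
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(verifier.getBytes(StandardCharsets.US_ASCII));
            String challenge = b64url.encodeToString(digest);

            // The challenge goes on the authorization request; the verifier is only
            // revealed on the token request, so an intercepted code is useless alone.
            System.out.println("code_verifier=" + verifier);
            System.out.println("code_challenge=" + challenge);
        }
    }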
As I understand it I need to run my own OAuth Auth Server that's managing the API Clients and generating access tokens, etc.
Also I need my own authentication/IAM service with its own frontend for user login and client authorization granting. This service is where the users' login credentials are ultimately checked against a backend. That last part should be flexible, and the backend might be an LDAP server in some private cloud deployment.
These components (Auth Server and IAM service) are outside of the OAuth scope but appear, correct me if I'm wrong, to be required if I'm running my own API for my own users.
However, creating these services myself appears to be more work than I'd like, besides the obvious security risks involved.
I read about auth0 and okta but I'm not sure if they are suited for my use case with the application potentially deployed in private cloud.
I also thought about running Hydra (OAuth server) and Kratos (IAM) by Ory, but I'm not sure if this is adding too many dependencies to my project.
Isn't there an easy way to secure an API with OAuth that deals with the Auth Server and the IAM that's good for small projects?!

Security for on-prem/cloud REST Application

I've been reading security articles for several days, but have no formal training in the field. I am developing a configuration and management application for an IoT device. It is meant to be run either on an internal network, or accessed over the web.
My application will be used by IT admins, managers, and factory-floor workers. Depending on the installation, there will be varying levels of infrastructure in place. It could run on a laptop on the floor itself, on a server, or hosted in the cloud. For this reason, we cannot assume that our clients will have the kind of infrastructure you might find at a datacenter or in the cloud, for example CAS or NTP.
Our application provides a REST API for client applications to gather data. We'd like to use roles to restrict what data users can access. I've gathered that a common solution for authentication is to encode the username/pass in the REST Header. However, this is completely insecure unless sent over a secure channel.
As I understand it, SSL certificate authorities grant certs for a specific domain. Our application will have no set domain, and a different IP depending on the installation. Many web applications do not trust self-signed certs. It's not clear to me whether a self-signed certificate is good enough for a typical application developer who will be consuming our interface.
With this being the case:
1) What are my options to set up a secure channel, internally or via the web?
2) Am I making assumptions about how our product will be used that damage our users' security unnecessarily?
Well, you can use custom encryption to encrypt the data being sent to the applications.
You can also use JSON Web Tokens (JWTs) to secure your REST API: https://en.wikipedia.org/wiki/JSON_Web_Token. The tokens could be generated by a centralized authentication server and included in all requests sent by the client applications to the server.
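To illustrate the structure of such a token, here is a minimal HMAC-signed JWT sketch. The claims and the shared key are made up, and in practice you should use a vetted JWT library rather than hand-rolling this:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class MiniJwt {
        static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

        // Builds header.payload.signature, signed with HMAC-SHA256 (alg "HS256").
        static String sign(String payloadJson, byte[] key) throws Exception {
            String header = B64.encodeToString(
                    "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
            String payload = B64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            String sig = B64.encodeToString(
                    mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
            return header + "." + payload + "." + sig;
        }

        public static void main(String[] args) throws Exception {
            byte[] key = "change-me-32-bytes-minimum-secret".getBytes(StandardCharsets.UTF_8);
            // Hypothetical claims: a subject and a role for the authorization decision.
            String token = sign("{\"sub\":\"worker42\",\"role\":\"floor\",\"exp\":1999999999}", key);
            System.out.println(token);
            // The server recomputes the signature over header.payload and compares;
            // a mismatch means the token (and its role claim) was tampered with.
        }
    }

The signature protects the role claims from tampering, but the token still has to travel over TLS, since anyone who captures it can replay it until it expires.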

Single Sign-On for Rich Clients ("Fat Client") without Windows Logon

Single sign-on (SSO) for web applications (used through a browser) is well-documented and established. Establishing SSO for rich clients is harder, and is usually suggested on the basis of Kerberos tickets, in particular using a Windows login towards an Active Directory in a domain.
However, I'm looking for a more generic solution for the following: I need to establish "real" SSO (one identity for all applications, i.e. not just password synchronization across applications), where on the client side (unmanaged computers, incl. non-Windows), the "end clients" are a Java application and a GTK+ application. Both communicate with their server counterparts using an HTTP-based protocol (say, web services over HTTPS). The clients and the servers do not necessarily sit in the same LAN/intranet, but the client can access the servers from the extranet. The server side of all the applications sits in the same network area, and the SSO component can access the identity provider via LDAP.
My question is basically "how can I do that"? More specifically,
a) is there an agreed-upon mechanism for secure, protected client-side "SSO session storage", as is the case with SSO cookies for browser-accessed applications? Possibly something like emulating Kerberos (TGT?) or even directly re-using it where no Active Directory authentication has been performed on the client side?
b) are there any protocols/APIs/frameworks for the communication between rich clients and the other participants of SSO (as it is the case for cookies)?
c) are there any APIs/frameworks for pushing kerberos-like TGTs and session tickets over the network?
d) are there any example implementations / tutorials available which demonstrate how to perform rich-client SSO?
I understand that there are "fill-out" agents which learn to enter the credentials into the application dialogues on the client side. I'd rather not use such a "helper" if possible.
Also, if possible, I would like to use CAS, Shibboleth and other open-source components where possible.
Thanks for comments, suggestions and answers!
MiKu
Going with an AD account IS the generic solution. Kerberos is ubiquitous. This is the only mechanism which will ask you for your credentials once, and just once, at logon time.
This is all feasible; you need:
A KDC
Correct DNS entries
KDC accounts
Correct SPN entries
Client computers configured to talk to the KDC
Java app using JAAS with JGSS to obtain service tickets (see the sketch after this list)
GSS-API with your GTK+ app to obtain service tickets
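For the Java side, a minimal JGSS sketch of the "obtain a service ticket" step might look like the following. It assumes a Kerberos TGT is already available (e.g. from a JAAS Krb5LoginModule login, or from the platform ticket cache when running with the native-GSSAPI flags mentioned below), and the SPN is made up:

    import org.ietf.jgss.GSSContext;
    import org.ietf.jgss.GSSManager;
    import org.ietf.jgss.GSSName;
    import org.ietf.jgss.Oid;

    public class ServiceTicketDemo {
        public static void main(String[] args) throws Exception {
            GSSManager manager = GSSManager.getInstance();
            // Hypothetical SPN of the server-side service ("service@host" form).
            GSSName server = manager.createName("HTTP@app.example.com",
                    GSSName.NT_HOSTBASED_SERVICE);
            Oid krb5 = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 mechanism OID
            GSSContext ctx = manager.createContext(server, krb5, null,
                    GSSContext.DEFAULT_LIFETIME);
            ctx.requestMutualAuth(true);
            // Produces the initial token (carrying the service ticket); send it to the
            // server, e.g. base64-encoded in an HTTP "Authorization: Negotiate" header.
            byte[] token = ctx.initSecContext(new byte[0], 0, 0);
            System.out.println("token bytes: " + token.length);
            ctx.dispose();
        }
    }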
What have you figured out yourself so far?
Agreed with Michael that GSSAPI/Kerberos is what you want to use. I'll add that there’s a snag with Java, however: by default, JGSS uses its own GSSAPI and Kerberos implementations, written in Java in the JDK, and not the platform’s libraries. Thus, it doesn’t obey your existing configuration and doesn’t work like anything else (e.g. on Unix it doesn’t respect KRB5CCNAME or other environment variables you’re used to, can’t use the DNS to locate KDCs, has a different set of supported ciphers, etc.). It is also buggy and limited; it can’t follow referrals, for example.
On Unix platforms, you can tell JGSS to bypass the JDK code and use an external GSSAPI library by starting the JVM with:
-Dsun.security.jgss.native=true -Dsun.security.jgss.lib=/path/to/libgssapi_krb5.so
There is no analogous option on Windows to use SSPI, however. This looks promising:
http://dblock.github.com/waffle/
... but I haven’t gotten to addressing this issue yet.

How can I trust that the SiteMinder HTTP headers haven't been tampered with?

I am completely new to SiteMinder and SSO in general. I poked around on SO and CA's web site all afternoon for a basic example and can't find one. I don't care about setting up or programming SM or anything like that. All of that is already done by someone else. I just want to adapt my JS web app to use SM for authentication.
I get that SM will add a HTTP header with a key such as SM_USER that will tell me who the user is. What I don't get is -- what prevents anyone from adding this header themselves and bypassing SM entirely? What do I have to put in my server-side code to verify that the SM_USER really came from SM? I suppose secure cookies are involved...
The SM Web Agent installed on the Web Server is designed to intercept all traffic and checks to see if the resource request is...
Protected by SiteMinder
If the User has a valid SMSESSION (i.e. is Authenticated)
If 1 and 2 are true, then the WA checks with the SiteMinder Policy Server to see if the user is authorized to access the requested resource.
To ensure that you don't have HTTP Header injections of user info, the SiteMinder WebAgent will rewrite all the SiteMinder specific HTTP Header information. Essentially, this means you can "trust" the SM_ info the WebAgent is presenting about the user since it is created by the Web Agent on the server and not part of the incoming request.
Because all traffic should pass through the SiteMinder Web Agent, even if the user sets this header it will be overwritten/removed.
All SiteMinder architectures do indeed make the assumption that the application just has to trust the "SM_" headers.
In practice, this may not be sufficient depending on the architecture of your application.
Basically, you have 3 cases:
The Web Agent is installed on the web server where your application runs (typical case for Apache/PHP applications): as stated above, you can trust the headers as no requests can reach your application without being filtered by the web agent.
The Web Agent is installed on a different web server than the one where your application runs, but on the same machine (typical case: SM Agent installed on an Apache front-end serving a JEE Application Server): you must ensure that no requests can directly reach your application server. Either you bind your application server to the loopback interface or you filter the ports on the server.
The Web Agent runs on a reverse proxy in front of your application. Same remark. The only solution here is to implement an IP filter on your application to only allow requests that come from your reverse proxy (see the sketch below).
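A hedged sketch of such an IP filter, written as a servlet filter (Servlet 4.0+, where init/destroy have default implementations; the proxy address is made up):

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.util.Set;

    // Rejects any request that does not come directly from the reverse proxy,
    // so the SM_ headers can only have been set by the Web Agent.
    public class ProxyOnlyFilter implements Filter {
        // Hypothetical address(es) of the SiteMinder reverse proxy.
        private static final Set<String> ALLOWED = Set.of("10.0.0.5");

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            if (!ALLOWED.contains(req.getRemoteAddr())) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, res);
        }
    }

Note that getRemoteAddr() is the address of the direct TCP peer and cannot be spoofed by a header, which is exactly the property needed here.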
SiteMinder r12.52 contains a new functionality named Enhanced Session Assurance with DeviceDNA™. DeviceDNA can be used to ensure that the SiteMinder session cookie has not been tampered with. If the session is replayed on a different machine, or from another browser instance on the same machine, DeviceDNA will catch this and block the request.
Click here to view a webcast discussing new features in CA SiteMinder r12.52
A typical enterprise architecture will be a web server (SiteMinder agent) + app server (applications).
Say IP filtering is not enabled, and web requests are allowed directly to the app server, bypassing the web server and the SSO agent.
If applications have to implement a solution to assert that the request headers/cookies are not tampered with/injected, do we have any solution similar to the following? (a sketch of the verification side follows the list)
Send the SM_USERID encrypted in a separate cookie, or encrypted (symmetrically/asymmetrically) along with the SMSESSION ID
The application will use the key to decrypt the SMSESSION or SM_USERID to retrieve the user ID, session expiry status, and any other additional details and authorization details if applicable
The application then trusts the user_id and performs authentication
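A hedged sketch of the verification half of such a scheme: assume the trusted tier adds a hypothetical SM_USER_SIG header containing an HMAC of SM_USER, computed with a key it shares with the application (the header names and key are invented, not part of SiteMinder):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    public class SmHeaderCheck {
        // Key shared between the header-producing tier and the application (hypothetical).
        private static final byte[] KEY =
                "shared-secret-at-least-32-bytes!!".getBytes(StandardCharsets.UTF_8);

        // Returns true if smUserSigB64 (the SM_USER_SIG header) is a valid HMAC of
        // smUser (the SM_USER header); a forged or injected header fails this check.
        static boolean verify(String smUser, String smUserSigB64) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
            byte[] expected = mac.doFinal(smUser.getBytes(StandardCharsets.UTF_8));
            byte[] presented = Base64.getDecoder().decode(smUserSigB64);
            // Constant-time comparison to avoid timing side channels.
            return MessageDigest.isEqual(expected, presented);
        }
    }

On its own this does not prevent replay of a captured header pair, so a real scheme would also bind in the session ID and an expiry timestamp before signing.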