Our authentication stack is principally based on Kerberos, and we now want to enlarge the offering to modern, JWT-based authentication (e.g. OpenID Connect). To achieve that and guarantee backward compatibility, we have to provide protocol transition:
Kerberos => JWT-based authentication (OIDC)
This is the easy direction, and there are already many products that support it, such as
ADFS
Keycloak
...
JWT-based authentication (OIDC) => Kerberos
This direction is not trivial. There are small projects (like this one), but they seem to be old and not maintained anymore. It seems that there is no real option for this transition.
Any input / suggestions on this topic?
This transition should be supported on Windows, Linux (e.g. Red Hat / CentOS) and containers (this should be trivial given the preceding requirements); Java (Spring Boot), .NET and .NET Core.
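For context, the mechanism usually labelled "protocol transition" on the Kerberos side is constrained delegation (S4U2Self / S4U2Proxy): a gateway validates the incoming JWT and then requests a Kerberos service ticket on behalf of the user named in the token. Below is a minimal Java sketch of what such a gateway would do, assuming an Active Directory KDC, a gateway service account trusted for delegation with protocol transition, and placeholder principal/SPN names; it relies on the JDK's ExtendedGSSCredential.impersonate extension, so treat it as a starting point rather than a drop-in solution.

// Hedged sketch: JWT (already validated elsewhere) => Kerberos via S4U2Self/S4U2Proxy.
// The gateway must run with its own Kerberos credentials (keytab/JAAS) and be
// configured in AD for constrained delegation with protocol transition.
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;
import com.sun.security.jgss.ExtendedGSSCredential;

public class JwtToKerberosGateway {

    public static byte[] kerberosTokenFor(String userPrincipal, String targetSpn)
            throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        Oid krb5 = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 mechanism OID

        // The gateway's own credential (obtained from its keytab / ticket cache).
        GSSCredential serviceCred = manager.createCredential(
                null, GSSCredential.DEFAULT_LIFETIME, krb5, GSSCredential.INITIATE_ONLY);

        // S4U2Self: impersonate the user identified by the validated JWT
        // (e.g. the "preferred_username" claim mapped to a Kerberos principal).
        GSSCredential userCred = ((ExtendedGSSCredential) serviceCred).impersonate(
                manager.createName(userPrincipal, GSSName.NT_USER_NAME));

        // S4U2Proxy: obtain a ticket to the backend service as that user.
        GSSContext context = manager.createContext(
                manager.createName(targetSpn, GSSName.NT_HOSTBASED_SERVICE),
                krb5, userCred, GSSContext.DEFAULT_LIFETIME);
        return context.initSecContext(new byte[0], 0, 0); // Kerberos AP-REQ for the backend
    }
}

On the .NET side, constructing a WindowsIdentity from a user principal name performs a comparable S4U logon, but the same Active Directory delegation configuration is a prerequisite in either stack.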
Related
We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect. (and are also only aware of the application's URL and not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client), as sketched below
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way the entire OpenID Connect authentication is transparent for any legacy devices.
This design could be further improved/optimized by caching the access tokens for the duration they are valid for (using the credentials as the cache key) and refreshing them when they expire.
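For reference, here is a minimal sketch of the token-exchange step (step 3 above), assuming a recent Keycloak URL layout (older versions prefix the path with /auth) and placeholder realm, client id and hostnames:

// Hedged sketch: exchange a legacy device's basic-auth credentials for an access
// token via the Resource Owner Password Credentials grant at Keycloak's token endpoint.
import java.io.IOException;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class RopcTokenClient {

    // Placeholder endpoint: https://<keycloak-host>/realms/<realm>/protocol/openid-connect/token
    private static final String TOKEN_ENDPOINT =
            "https://keycloak.example.com/realms/iot/protocol/openid-connect/token";

    public static String fetchTokenResponse(String deviceUser, String devicePassword)
            throws IOException, InterruptedException {
        String form = "grant_type=password"
                + "&client_id=legacy-iot-proxy" // the reverse proxy's OAuth client (placeholder)
                + "&username=" + URLEncoder.encode(deviceUser, StandardCharsets.UTF_8)
                + "&password=" + URLEncoder.encode(devicePassword, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create(TOKEN_ENDPOINT))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON containing access_token and expires_in on success
    }
}

The proxy would then attach the returned access_token as a Bearer header on the forwarded request, and cache it per credential pair until expires_in elapses, as described above.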
Now, this idea seems like such a no-brainer that we were surprised we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped to a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials flow would be semantically more correct for such a device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL Client Authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 client certificate authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OpenID Provider (OP) supports the Resource Owner Password Credentials flow, which is deprecated and has been removed from modern OAuth 2.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.
I know that both Central Authentication Service (CAS) and Kerberos can be used to authenticate a user and establish a session. Both protocols involve at least three parties and create a Ticket Granting Ticket during authentication, so what are the differences between CAS and Kerberos?
Can anyone help? Thank you!
[UPDATE]
#Fred said (please see reply below)
it (CAS) is a way to proxy authentication services like Kerberos or LDAP on the Web.
However, JASIG states that "CAS then generates a ticket and a transient cookie transmitted over SSL to be stored in Browser memory" (https://wiki.jasig.org/display/CAS/Extended+Authentication+Walkthroughs), so I guess CAS isn't just a proxy, because it can generate a ticket itself. Am I right?
Please shed some light on this, thanks!
CAS is not an authentication service in and of itself, but it is a way to proxy authentication services like Kerberos or LDAP on the Web.
At the time CAS was invented there was little support for Kerberos in either the browser or the server. So CAS (and Stanford WebAuth, and the one Duke wrote, and ...) all came up with various ways to emulate the kind of authentication service Kerberos provides using what was available in the browser (i.e. stuffing things that look a lot like Kerberos service tickets into browser cookies...).
Even now, Kerberos support is not uniformly available in all browsers and all servers. Configuring your browser to do Kerberos authentication via SPNEGO can vary from completely automatic to next to impossible. If you have a web-based application, your best bet is to use something like CAS to do cookie-based authentication. A proxy service like CAS will work with any browser that supports cookies.
Kerberos does not support session keys and only uses algorithm verification.
As I understand it,
SPN is an authentication tool for Windows services.
Kerberos is a user authentication service.
SPNEGO-GSSAPI is the third-party API used to access those services.
SSPI is the neutral layer that sends requests from SPNEGO to the SPN service.
Am I completely lost?
I'm trying to figure out how it works, but the information I find is either too detailed or not detailed enough.
OK, a more verbose answer:
SPN - Service Principal Name. It is an identifier associated with each account in a KDC implementation (AD, OpenLDAP, etc.). Basically, if your account acts as a service to which a client authenticates, the client has to specify "who" it wants to communicate with. This "who" identifier is the SPN. That is the strict definition. Many people often call the client name (UPN - User Principal Name) of a service its SPN. This happens when the service itself may act as a client (google the delegation scenario). This is not strictly correct but widely assumed to be true.
Kerberos is a protocol for authentication. It is a name for a framework. It involves a third-party server (called the KDC or Key Distribution Centre) and a series of steps for acquiring tickets (tokens of authentication). It is really complicated, so see http://en.wikipedia.org/wiki/Kerberos_(protocol)
To some extent you got this right. GSSAPI is an API, but SPNEGO is not. GSSAPI is technically agnostic to the auth mechanism you use, but most folks use it for Kerberos authentication. SPNEGO is a pseudo-mechanism, in the sense that it declares an RFC for authentication-based communication in the HTTP domain. Strictly speaking SPNEGO is a specification, but most folks also consider it an implementation. For instance, the Sun and IBM JDKs provide "mechanism providers" for SPNEGO token generation, but GSSAPI is used to actually call them. This is done in many projects (Tomcat as a server is an example that comes to the top of my head, and one of the folks who answered this question developed it).
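To make the "GSSAPI calls SPNEGO" point concrete, here is a minimal Java (JGSS) sketch that produces the initial SPNEGO token for an HTTP "Negotiate" header; the SPN is a placeholder and the process is assumed to already hold Kerberos credentials (e.g. a ticket cache):

// Hedged sketch: client-side SPNEGO token generation through GSS-API (JGSS).
import java.util.Base64;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class SpnegoTokenExample {

    public static String negotiateHeaderFor(String serviceSpn) throws GSSException {
        GSSManager manager = GSSManager.getInstance();
        Oid spnego = new Oid("1.3.6.1.5.5.2"); // SPNEGO pseudo-mechanism OID

        GSSName serverName = manager.createName(serviceSpn, GSSName.NT_HOSTBASED_SERVICE);
        GSSContext context = manager.createContext(
                serverName, spnego, null /* default initiator credentials */,
                GSSContext.DEFAULT_LIFETIME);
        context.requestMutualAuth(true);

        byte[] token = context.initSecContext(new byte[0], 0, 0);
        return "Negotiate " + Base64.getEncoder().encodeToString(token);
    }
}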
SSPI is an analogue to GSSAPI on Windows. It's a different API which ends up doing something very similar to GSSAPI.
Not quite.
SPN simply means 'Service Principal Name' and is the AD or Kerberos slang for the service you try to authenticate against.
Kerberos is a user authentication service, more or less yes. It also provides security for network messages and calls between services.
SPNEGO-GSSAPI is a kind of strange beast. GSSAPI (Generic Security Service Application Program Interface) is an API to (in principle) different authentication services; it provides negotiation of the mechanisms used. Often the only mechanism available will be Kerberos, though. It is the usual API for attaching third-party programs to Kerberos when you are on Unix (defined in various RFCs, for example RFC 2743).
On the Windows platform SSPI is the generic layer, so it compares to GSSAPI.
SPNEGO is kind of a strange hybrid. It is a mechanism to be used in SSPI, HTTP Auth or GSSAPI which negotiates another auth protocol (for example Kerberos or NTLM if you are on Windows), so it basically does the same thing GSSAPI does again in a different way.
Typical uses of SPNEGO are HTTP authentication to a windows domain, for example IIS uses it if you use 'Integrated windows authentication'. It is also used when you select the 'Negotiate' options for SSPI. See for example RFC 4559
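For completeness, the server side of that 'Negotiate' exchange can be sketched with the same API; this assumes the service already runs with an acceptor credential (keytab/JAAS), and the header value and principal names are placeholders:

// Hedged sketch: accepting a SPNEGO/Negotiate token server-side and extracting the principal.
import java.util.Base64;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;

public class NegotiateAcceptor {

    public static String authenticate(String authorizationHeader) throws GSSException {
        byte[] inToken = Base64.getDecoder().decode(
                authorizationHeader.substring("Negotiate ".length()));

        GSSManager manager = GSSManager.getInstance();
        // null credential: use the process's default acceptor credential
        GSSContext context = manager.createContext((GSSCredential) null);
        context.acceptSecContext(inToken, 0, inToken.length);

        if (!context.isEstablished()) {
            return null; // a multi-round-trip negotiation would continue here
        }
        return context.getSrcName().toString(); // e.g. user@EXAMPLE.COM
    }
}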
Almost all of your understandings are wrong.
Here it goes:
1. SPN: A specific service class is bound to a specific account, e.g. HTTP to www.stackoverflow.com => HTTP/www.stackoverflow.com@STACKOVERFLOW.COM
2. Yes
3./4. GSS-API (Unix) / SSPI (Windows): a mechanism-neutral API to interact with, e.g. Kerberos 5, NTLM, SPNEGO, etc.
SPNEGO: It is one of many mechanisms supported by GSS-API/SSPI. It is actually a pseudo-mechanism.
Single sign-on (SSO) for web applications (used through a browser) is well documented and established. Establishing SSO for rich clients is harder, and is usually suggested on the basis of Kerberos tickets, in particular using a Windows login against an Active Directory domain.
However, I'm looking for a more generic solution for the following: I need to establish "real" SSO (one identity for all applications, i.e. not just password synchronization across applications), where on the client side (unmanaged computers, incl. non-Windows) the "end clients" are a Java application and a GTK+ application. Both communicate with their server counterparts using an HTTP-based protocol (say, web services over HTTPS). The clients and the servers do not necessarily sit in the same LAN/intranet, but the clients can access the servers from the extranet. The server side of all the applications sits in the same network area, and the SSO component can access the identity provider via LDAP.
My question is basically "how can I do that"? More specifically,
a) is there an agreed-upon mechanism for secure, protected client-side "SSO session storage", as is the case with SSO cookies for browser-accessed applications? Possibly something like emulating Kerberos (TGT?), or directly re-using it where no Active Directory authentication has been performed on the client side?
b) are there any protocols/APIs/frameworks for the communication between rich clients and the other participants of SSO (as it is the case for cookies)?
c) are there any APIs/frameworks for pushing kerberos-like TGTs and session tickets over the network?
d) are there any example implementations / tutorials available which demonstrate how to perform rich-client SSO?
I understand that there are "fill-out" agents which learn to enter the credentials into the application dialogues on the client side. I'd rather not use such a "helper" if possible.
Also, I would like to use CAS, Shibboleth and other open-source components where possible.
Thanks for comments, suggestions and answers!
MiKu
Going with an AD account IS the generic solution. Kerberos is ubiquitous. It is the only mechanism which will ask you for your credentials once, and just once, at logon time.
This is all feasible; you need:
A KDC
Correct DNS entries
KDC accounts
Correct SPN entries
Client computers configured to talk to the KDC
Java app using JAAS with JGSS to obtain service tickets (see the sketch after this list)
GSS-API with your GTK+ app to obtain service tickets
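A minimal sketch of the JAAS + JGSS step, assuming a placeholder JAAS entry name and SPN, and an existing Kerberos ticket cache or keytab:

// Hedged sketch: log in through JAAS (Krb5LoginModule), then obtain a service ticket
// for a target SPN via JGSS inside Subject.doAs.
//
// Example JAAS configuration (passed via -Djava.security.auth.login.config=...):
//   KrbClient { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true; };
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class KerberosServiceTicketExample {

    public static byte[] ticketFor(String serviceSpn) throws Exception {
        LoginContext login = new LoginContext("KrbClient"); // uses the ticket cache per the config above
        login.login();

        return Subject.doAs(login.getSubject(), (PrivilegedExceptionAction<byte[]>) () -> {
            GSSManager manager = GSSManager.getInstance();
            Oid krb5 = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 mechanism OID
            GSSContext context = manager.createContext(
                    manager.createName(serviceSpn, GSSName.NT_HOSTBASED_SERVICE),
                    krb5, null, GSSContext.DEFAULT_LIFETIME);
            return context.initSecContext(new byte[0], 0, 0); // AP-REQ for the target service
        });
    }
}

The GTK+ app would do the equivalent through the C GSS-API (gss_init_sec_context against the same SPN).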
What have you figured out yourself so far?
Agreed with Michael that GSSAPI/Kerberos is what you want to use. I'll add that there’s a snag with Java, however: by default, JGSS uses its own GSSAPI and Kerberos implementations, written in Java in the JDK, and not the platform’s libraries. Thus, it doesn’t obey your existing configuration and doesn’t work like anything else (e.g. on Unix it doesn’t respect KRB5CCNAME or other environment variables you’re used to, can’t use the DNS to locate KDCs, has a different set of supported ciphers, etc.). It is also buggy and limited; it can’t follow referrals, for example.
On Unix platforms, you can tell JGSS to bypass the JDK code and use an external GSSAPI library by starting the JVM with:
-Dsun.security.jgss.native=true -Dsun.security.jgss.lib=/path/to/libgssapi_krb5.so
There is no analogous option on Windows to use SSPI, however. This looks promising:
http://dblock.github.com/waffle/
... but I haven’t gotten to addressing this issue yet.
I am trying to implement Web SSO with claims-based identity using WIF and AD FS 2.0. Right now I have an existing ASP.NET application which delegates authentication to the AD FS 2.0 server and trusts the issued security tokens. That works just fine.
However, in the organization there is an existing JA-SIG Central Authentication Service (CAS) server which supports the SAML 2 protocol. I would like to replace AD FS 2.0 with the existing CAS service.
In my understanding WIF uses WS-Federation, which is like a container around a SAML token. Is it possible to use the plain SAML 2 protocol and its bindings (redirect or POST)? If that is not possible (as I guess), a second alternative might be to use federated identity and federate AD FS 2.0 with CAS. Is that possible? There is little to no information about that on the web.
Thanks :-)
After some research I came up with the following issues. CAS 3.x supports SAML 1.1 tokens and the SAML 1.1 protocol, including Web SSO. ADFS 2.0 supports SAML 1.1/2.0 tokens; however, only the SAML 2.0 protocol is supported. That means no out-of-the-box federation between CAS and ADFS 2.0 is possible.
We are researching OpenSSO as an alternative now, which provides support for all necessary protocols including WS-Federation for attaching WIF clients.
Access Control Service v2 (ACS v2) may be an option. It supports both SAML 1.1 and 2.0, in addition to other token formats like Simple Web Token (SWT). It can then translate tokens from the source system into the relying party's format.
https://portal.appfabriclabs.com/Default.aspx