Single Sign-On for Rich Clients ("Fat Client") without Windows Logon

Single sign-on (SSO) for web applications (used through a browser) is well documented and established. Establishing SSO for rich clients is harder and is usually suggested on the basis of Kerberos tickets, in particular using a Windows login against an Active Directory domain.
However, I'm looking for a more generic solution for the following: I need to establish "real" SSO (one identity for all applications, i.e. not just a password synchronization across applications), where, on the client side (unmanaged computers, including non-Windows machines), the "end clients" are a Java application and a GTK+ application. Both communicate with their server counterparts using an HTTP-based protocol (say, web services over HTTPS). The clients and the servers do not necessarily sit in the same LAN/intranet, but the clients can reach the servers from the extranet. The server sides of all the applications sit in the same network area, and the SSO component can access the identity provider via LDAP.
My question is basically "how can I do that"? More specifically,
a) is there an agreed-upon mechanism for secure, protected client-side "SSO session storage", as is the case with SSO cookies for browser-accessed applications? Possibly something like emulating Kerberos (a TGT?), or even reusing Kerberos directly where no Active Directory authentication has been performed on the client side?
b) are there any protocols/APIs/frameworks for the communication between rich clients and the other participants of SSO (as is the case for cookies)?
c) are there any APIs/frameworks for pushing Kerberos-like TGTs and session tickets over the network?
d) are there any example implementations / tutorials available which demonstrate how to perform rich-client SSO?
I understand that there are "fill-out" agents which learn to enter the credentials into the application dialogues on the client side. I'd rather not use such a "helper" if possible.
Also, I would like to use CAS, Shibboleth and other open-source components where possible.
Thanks for comments, suggestions and answers!
MiKu

Going with an AD account IS the generic solution. Kerberos is ubiquitous. It is the only mechanism that will ask you for your credentials once, and just once, at logon time.
This is all feasible; you need:
A KDC
Correct DNS entries
KDC accounts
Correct SPN entries
Client computers configured to talk to the KDC
Java app using JAAS with JGSS to obtain service tickets (see the sketch below)
GSS-API with your GTK+ app to obtain service tickets
What have you figured out yourself so far?
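To make the Java/JGSS item above concrete, here is a minimal sketch (not a definitive implementation) for obtaining a service ticket. It assumes the client already holds a TGT, either from a JAAS Krb5LoginModule login or, when running with -Djavax.security.auth.useSubjectCredsOnly=false, from the local credential cache. The SPN and realm are hypothetical placeholders.

    import org.ietf.jgss.GSSContext;
    import org.ietf.jgss.GSSManager;
    import org.ietf.jgss.GSSName;
    import org.ietf.jgss.Oid;

    public class ServiceTicketDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical SPN of the server-side application; adjust to your environment.
            String spn = "HTTP/app.example.com@EXAMPLE.COM";

            GSSManager manager = GSSManager.getInstance();
            Oid krb5Mech = new Oid("1.2.840.113554.1.2.2");            // Kerberos v5 mechanism
            Oid krb5PrincipalName = new Oid("1.2.840.113554.1.2.2.1"); // KRB5 principal name type

            GSSName serverName = manager.createName(spn, krb5PrincipalName);

            // A null credential lets JGSS pick up the client's TGT from the
            // ticket cache or the JAAS subject, depending on configuration.
            GSSContext context = manager.createContext(
                    serverName, krb5Mech, null, GSSContext.DEFAULT_LIFETIME);
            context.requestMutualAuth(true);

            // The first initSecContext call performs the TGS exchange and returns
            // the initial token containing the service ticket; the client then
            // sends this token to the server (e.g. Base64-encoded in an HTTP header).
            byte[] token = context.initSecContext(new byte[0], 0, 0);
            System.out.println("Obtained initial context token: " + token.length + " bytes");

            context.dispose();
        }
    }

The GTK+ client would follow the same pattern with the C GSS-API calls (gss_import_name, gss_init_sec_context).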

Agreed with Michael that GSSAPI/Kerberos is what you want to use. I'll add that there’s a snag with Java, however: by default, JGSS uses its own GSSAPI and Kerberos implementations, written in Java in the JDK, and not the platform’s libraries. Thus, it doesn’t obey your existing configuration and doesn’t work like anything else (e.g. on Unix it doesn’t respect KRB5CCNAME or other environment variables you’re used to, can’t use the DNS to locate KDCs, has a different set of supported ciphers, etc.). It is also buggy and limited; it can’t follow referrals, for example.
On Unix platforms, you can tell JGSS to bypass the JDK code and use an external GSSAPI library by starting the JVM with:
-Dsun.security.jgss.native=true -Dsun.security.jgss.lib=/path/to/libgssapi_krb5.so
There is no analogous option on Windows to use SSPI, however. This looks promising:
http://dblock.github.com/waffle/
... but I haven’t gotten to addressing this issue yet.

Related

OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect (they also only know the application's URL, not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client; see the sketch below)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way the entire OpenID Connect authentication is transparent for any legacy devices.
This design could be further improved/optimized by caching the access tokens for the duration they are valid for (using the credentials as the cache key) and refreshing them when they expire.
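A minimal sketch of step 3 above, using only the JDK's HttpClient. The realm, client ID and secret are hypothetical, and the token endpoint path follows Keycloak's standard layout (older versions prefix it with /auth).

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class RopcTokenExchange {

        // Hypothetical values; adjust realm, client and base URL to your Keycloak setup.
        private static final String TOKEN_ENDPOINT =
                "https://keycloak.example.com/realms/iot/protocol/openid-connect/token";
        private static final String CLIENT_ID = "legacy-proxy";
        private static final String CLIENT_SECRET = "change-me";

        /** Exchanges the device's credentials for tokens; returns the raw JSON response. */
        static String exchangePassword(HttpClient http, String username, String password)
                throws Exception {
            String form = "grant_type=password"
                    + "&client_id=" + enc(CLIENT_ID)
                    + "&client_secret=" + enc(CLIENT_SECRET)
                    + "&username=" + enc(username)
                    + "&password=" + enc(password);

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(TOKEN_ENDPOINT))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();

            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new IllegalStateException("Token request failed: " + response.statusCode());
            }
            // The JSON body contains access_token, expires_in, refresh_token, ...
            return response.body();
        }

        private static String enc(String s) {
            return URLEncoder.encode(s, StandardCharsets.UTF_8);
        }
    }

The proxy would parse access_token out of the response, attach it to the forwarded request as an Authorization: Bearer header, and cache it (keyed by the device credentials) until shortly before expires_in runs out.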
Now, this idea seems like such a no-brainer, that we were surprised that we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped as a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials flow would be semantically more correct for such device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL Client Authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 client-certificate authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OP supports the Resource Owner Password Credentials flow, which is deprecated and has been removed from the OAuth 2.1 draft.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.

Security for on-prem/cloud REST Application

I've been reading security articles for several days, but have no formal training in the field. I am developing a configuration and management application for an IoT device. It is meant to be run either on an internal network, or accessed over the web.
My application will be used by IT admins, managers, and factory-floor workers. Depending on the installation, there will be varying levels of infrastructure in place. It could run on a laptop on the floor itself, on a server, or hosted in the cloud. For this reason, we cannot assume that our clients will have the kind of infrastructure you might find at a datacenter or in the cloud, for example CAS or NTP.
Our application provides a REST API for client applications to gather data. We'd like to use roles to restrict what data users can access. I've gathered that a common solution for authentication is to encode the username/password in the request header (HTTP Basic authentication). However, this is completely insecure unless sent over a secure channel.
As I understand it, SSL certificate authorities grant certs for a specific domain. Our application will have no set domain and a different IP depending on the installation. Many web applications do not trust self-signed certs. It's not clear to me whether a self-signed certificate is good enough for a typical application developer who will be consuming our interface.
With this being the case:
1) What are my options to set up a secure channel, internally or via the web?
2) Am I making assumptions about how our product will be used that damage our users' security unnecessarily?
Well you can use custom encryption to encrypt the data being sent to the applications.
You can also use JSON Web Tokens (JWT) to secure your REST API: https://en.wikipedia.org/wiki/JSON_Web_Token. The tokens could be generated by a centralized authentication server and included in all requests sent by the client applications to the server.
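To illustrate the idea, here is a minimal sketch of an HS256-signed JWT built with nothing but the JDK. In practice you would use an established JWT library and proper key management; the claim beyond the standard sub and exp (here, role) is just an illustration.

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class MinimalJwt {

        /** Builds a compact JWT: base64url(header).base64url(payload).base64url(signature). */
        static String issueToken(String subject, String role, long expiresAtEpochSeconds,
                                 byte[] secret) throws Exception {
            String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
            String payload = "{\"sub\":\"" + subject + "\",\"role\":\"" + role
                    + "\",\"exp\":" + expiresAtEpochSeconds + "}";

            Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
            String signingInput = b64.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                    + "." + b64.encodeToString(payload.getBytes(StandardCharsets.UTF_8));

            // Only a party holding the shared secret can produce a valid signature,
            // so the REST API can verify the token without a database lookup.
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] signature = mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));

            return signingInput + "." + b64.encodeToString(signature);
        }
    }

Note that tokens still need to travel over TLS: anyone who observes one can replay it until it expires.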

How does SPN work with Kerberos

As I understand it:
SPN is an authenticating tool for Windows services.
Kerberos is a user authentication service.
SPNEGO-GSSAPI is the third-party API to be able to use those services.
SSPI is the neutral layer to send requests from SPNEGO to the SPN service.
Am I completely lost?
Trying to figure out how it works, but the information I find is either too detailed or not detailed enough.
OK, a more verbose answer:
SPN - Service Principal Name. It is an identifier associated with each account in a KDC implementation (AD, OpenLDAP, etc.). Basically, if your account acts as a service to which a client authenticates, the client has to specify "who" it wants to communicate with. This "who" identifier is the SPN. This is the strict definition. Many people often call the client name (UPN - User Principal Name) of a service its SPN. This happens when the service itself may act as a client (google the delegation scenario). This is not strictly correct, but widely assumed true.
Kerberos is a protocol for authentication. It is a name for a framework. It involves a third-party server (called the KDC, or Key Distribution Centre) and a series of steps for acquiring tickets (tokens of authentication). It is really complicated, so see http://en.wikipedia.org/wiki/Kerberos_(protocol)
To some extent you got this right. GSSAPI is an API, but SPNEGO is not. GSSAPI is technically agnostic to the auth mechanism you use, but most folks use it for Kerberos authentication. SPNEGO is a pseudo-mechanism, in the sense that it declares an RFC for authentication-based communication in the HTTP domain. Strictly speaking, SPNEGO is a specification, but most folks also consider it an implementation. For instance, the Sun and IBM JDKs provide "mechanism providers" for SPNEGO token generation, but GSSAPI is used to actually call it. This is done in many projects (Tomcat as a server is an example that comes to the top of my head, and one of the folks who answered this question developed it).
SSPI is an analogue to GSSAPI on Windows. It's a different API which ends up doing something very similar to GSSAPI.
Not quite.
SPN simply means 'Service Principal Name' and is the AD or Kerberos slang for the service you try to authenticate against.
Kerberos is a user authentication service, more or less yes. It also provides security for network messages and calls between services.
SPNEGO-GSSAPI is a kind of strange beast. GSSAPI (Generic Security Service Application Program Interface) is an API to (in principle) different authentication services; it provides negotiation of the mechanisms used. Often the only mechanism available will be Kerberos, though. It is the usual API to attach third-party programs to Kerberos when you are on Unix (defined in various RFCs, for example RFC 2743).
On the windows platform SSPI is the generic layer, so it compares to GSSAPI.
SPNEGO is kind of a strange hybrid. It is a mechanism to be used in SSPI, HTTP Auth or GSSAPI which negotiates another auth protocol (for example Kerberos or NTLM if you are on Windows), so it basically does the same thing GSSAPI does again in a different way.
Typical uses of SPNEGO are HTTP authentication to a windows domain, for example IIS uses it if you use 'Integrated windows authentication'. It is also used when you select the 'Negotiate' options for SSPI. See for example RFC 4559
Almost all of your understandings are wrong.
Here it goes:
1. SPN: A specific service class is bound to a specific account, e.g. HTTP on www.stackoverflow.com => HTTP/www.stackoverflow.com@STACKOVERFLOW.COM
2. Kerberos: Yes.
3./4. GSS-API (Unix)/SSPI (Windows): mechanism-neutral APIs to interact with, e.g. Kerberos 5, NTLM, SPNEGO, etc.
SPNEGO: It is one of many mechanisms supported by GSS-API/SSPI. It is actually a pseudo-mechanism (see the sketch below).
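A small sketch of that last point: with JGSS, switching between the Kerberos mechanism and the SPNEGO pseudo-mechanism is just a matter of the OID passed to createContext. The host name here is a hypothetical placeholder.

    import org.ietf.jgss.GSSContext;
    import org.ietf.jgss.GSSManager;
    import org.ietf.jgss.GSSName;
    import org.ietf.jgss.Oid;

    public class MechanismSelection {
        public static void main(String[] args) throws Exception {
            GSSManager manager = GSSManager.getInstance();

            Oid kerberos5 = new Oid("1.2.840.113554.1.2.2"); // concrete mechanism
            Oid spnego    = new Oid("1.3.6.1.5.5.2");        // pseudo-mechanism that negotiates another one

            // NT_HOSTBASED_SERVICE maps "HTTP@host" to the SPN HTTP/host.
            GSSName target = manager.createName("HTTP@app.example.com",
                    GSSName.NT_HOSTBASED_SERVICE);

            // Same API either way; only the mechanism OID differs. The tokens produced
            // by initSecContext would go into an "Authorization: Negotiate <base64>"
            // header for HTTP authentication.
            GSSContext spnegoCtx = manager.createContext(target, spnego, null,
                    GSSContext.DEFAULT_LIFETIME);
            GSSContext krb5Ctx = manager.createContext(target, kerberos5, null,
                    GSSContext.DEFAULT_LIFETIME);

            System.out.println("Contexts created; the subsequent initSecContext() calls are identical.");
        }
    }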

SiteMinder Single-Sign-On without installing agent on web application server

Our IT staff refuses to install the SiteMinder agent on our application's IIS 6.0 web server, citing security concerns because it is third-party software, as well as the possibility of high resource utilization impacting application performance.
They suggest that we set up an independent, segregated web server containing only a bare-bones IIS, the SiteMinder Agent, and a "shim" to authenticate login attempts.
This shim would be a single ASPX page marked to be protected by the agent. It would use the SiteMinder agent to authenticate the user ID, look up the user ID in the application's database, and return the user ID and password to the user's browser. A JavaScript function would then POST the user ID and password to the application's existing login page as if they typed it in themselves.
Are their concerns warranted? Why or why not?
Have you ever heard of anyone implementing a similar architecture?
Is their proposed solution good, bad, or ugly?
It does not look like it would work, because the agent manages not only the initial login, but also subsequent calls to the application, i.e. the authenticated session. The agent examines the cookie, validates it, etc. Your scenario does not describe how that would happen.
In our environment, all internet traffic goes through an Apache reverse proxy before hitting IIS. IIS is behind the firewall. The Apache reverse proxy has the SM agent; all it does is redirect the traffic to IIS. I suppose it would be feasible to do a similar setup with IIS acting as a reverse proxy.
BTW, tell your IT guy that his proposed shoestring and bubblegum login solution is a much bigger security concern than installing SiteMinder on IIS.
The Apache reverse proxy solution will definitely work, but with SiteMinder r12.51, Secure Proxy Server is included, which is basically SiteMinder's version of a reverse proxy (plus a lot more).
SPS will let you configure a single server as a "gateway" for all of your applications that can't or won't install a SiteMinder agent. The web agent is embedded in SPS and a proprietary Java app does the heavy lifting. SPS also has a GUI which follows the look and feel of the r12 WAMUI, which makes configuring it very simple.
Secure Proxy Server also has a Federation Gateway feature, so you don't need to install the web agent option pack if you are doing SAML Federation. All of your FCC pages can also be served by the SPS, so you can reduce the number of web servers needed to support your SSO environment.

LDAP to SAML/REST proxy

We are doing a Cloud POC; we will have applications hosted in the cloud that can only talk LDAP. Is there any system/appliance/virtual directory in the cloud that can appear to be an LDAP server from the application side, and on the output side talk SAML/REST over the Internet to our SSO product, which can authenticate users against our corporate LDAP directory tucked inside our internal firewall?
You need to deploy an identity provider connected to the LDAP directory. You can adopt CAS or SAML technology.
In this Wikipedia entry you can check the different products (commercial and free software):
http://en.wikipedia.org/wiki/SAML-based_products_and_services
Most of them support LDAP as the authentication source backend.
Also take a look at this thread:
Way to single sign on between PHP, Python, Ruby applications
The emerging SCIM (System for Cross-domain Identity Management) protocol might make more sense for the use case you're illustrating. It's intended to provide a simple REST API around an identity store so you can perform Create/Read/Update/Delete operations. What is available could theoretically be controlled via some policy within a SCIM server to allow your clients to essentially interact with the backend LDAP directory.
Many products are adopting the SCIM standard now, such as ones from Ping Identity, Salesforce and UnboundID.
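For a sense of what the SCIM side of such a setup looks like, here is a minimal sketch of a SCIM 2.0 user lookup using the RFC 7644 filter syntax; the base URL and bearer token are hypothetical placeholders.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class ScimUserLookup {
        public static void main(String[] args) throws Exception {
            // Hypothetical SCIM 2.0 endpoint and token; adjust to your provider.
            String baseUrl = "https://scim.example.com/scim/v2";
            String bearerToken = "REPLACE_ME";

            // RFC 7644 filter syntax: look up a single user by userName.
            String filter = URLEncoder.encode("userName eq \"jdoe\"", StandardCharsets.UTF_8);

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/Users?filter=" + filter))
                    .header("Authorization", "Bearer " + bearerToken)
                    .header("Accept", "application/scim+json")
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // a JSON ListResponse of matching users
        }
    }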