How can I trust that the SiteMinder HTTP headers haven't been tampered with?

I am completely new to SiteMinder and SSO in general. I poked around on SO and CA's web site all afternoon for a basic example and couldn't find one. I don't care about setting up or programming SM or anything like that; all of that has already been done by someone else. I just want to adapt my JS web app to use SM for authentication.
I get that SM will add an HTTP header with a key such as SM_USER that will tell me who the user is. What I don't get is this: what prevents anyone from adding this header themselves and bypassing SM entirely? What do I have to put in my server-side code to verify that the SM_USER really came from SM? I suppose secure cookies are involved...

The SM Web Agent installed on the web server is designed to intercept all traffic and check whether:
1. the requested resource is protected by SiteMinder, and
2. the user has a valid SMSESSION (i.e. is authenticated).
If both 1 and 2 are true, the Web Agent checks with the SiteMinder Policy Server to see if the user is authorized to access the requested resource.
To ensure that you don't have HTTP header injection of user info, the SiteMinder Web Agent will rewrite all the SiteMinder-specific HTTP header information. Essentially, this means you can "trust" the SM_ info the Web Agent is presenting about the user, since it is created by the Web Agent on the server and is not part of the incoming request.
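For illustration, here is a minimal sketch of server-side code consuming that header, assuming a Java servlet environment (the SM_USER header name comes from SiteMinder; the servlet itself is hypothetical). It is only safe because the Web Agent in front of it strips or rewrites any client-supplied SM_ headers:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: read the identity asserted by the SiteMinder Web Agent.
// This is only trustworthy when every request provably passes through
// the agent first, which strips or rewrites client-supplied SM_ headers.
public class WhoAmIServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String user = req.getHeader("SM_USER");
        if (user == null || user.isEmpty()) {
            // No agent in front of us (or an unprotected resource): refuse.
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        resp.getWriter().println("Authenticated as: " + user);
    }
}
```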

Because all traffic should pass through the SiteMinder Web Agent, even if the user sets this header themselves it will be overwritten or removed.

All SiteMinder architectures do indeed make the assumption that the application just has to trust the "SM_" headers.
In practice, this may not be sufficient depending on the architecture of your application.
Basically, you have 3 cases:
The Web Agent is installed on the web server where your application runs (typical case for Apache/PHP applications): as stated above, you can trust the headers as no requests can reach your application without being filtered by the web agent.
The Web Agent is installed on a different web server than the one where your application runs, but on the same machine (typical case: SM Agent installed on an Apache front-end serving a JEE Application Server): you must ensure that no requests can directly reach your application server. Either you bind your application server to the loopback interface or you filter the ports on the server.
The Web Agent runs on a reverse proxy in front of your application. The same remark applies, except that here the only solution is to implement an IP filter in your application that allows only requests coming from your reverse proxy (see the sketch below).
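A minimal sketch of such an IP filter, assuming a Java servlet container and a placeholder proxy address:

```java
import java.io.IOException;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Sketch: reject any request that does not originate from the reverse
// proxy running the SiteMinder Web Agent. The address below is a
// placeholder for your environment.
public class ProxyOnlyFilter implements Filter {
    private static final Set<String> TRUSTED_PROXIES = Set.of("10.0.0.5");

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        if (!TRUSTED_PROXIES.contains(req.getRemoteAddr())) {
            ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, resp);
    }

    @Override
    public void destroy() { }
}
```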

SiteMinder r12.52 contains a new feature named Enhanced Session Assurance with DeviceDNA™. DeviceDNA can be used to ensure that the SiteMinder session cookie has not been tampered with. If the session is replayed on a different machine, or from another browser instance on the same machine, DeviceDNA will catch this and block the request.

A typical enterprise architecture will be a web server (SiteMinder agent) + an app server (applications).
Say IP filtering is not enabled, and web requests are allowed directly to the app server, bypassing the web server and the SSO agent.
If applications have to implement a solution to assert that request headers / cookies have not been tampered with or injected, do we have any solution similar to the following?
1. Send the SM_USERID encrypted in a separate cookie, or encrypted (symmetrically/asymmetrically) along with the SMSESSION id.
2. The application uses the key to decrypt the SMSESSION or SM_USERID to retrieve the user id, session expiry status, and any other additional details and authorization details if applicable (see the sketch after this list).
3. The application now trusts the user_id and performs authentication.
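For what it's worth, a minimal sketch of the kind of integrity check proposed in step 2, assuming the front-end adds an HMAC signature over the user id with a key shared out of band (all names here are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: verify that a user id forwarded by the front-end was signed
// with a key shared between the front-end and the app server.
public final class SignedHeaderVerifier {
    private final byte[] key; // shared secret distributed out of band

    public SignedHeaderVerifier(byte[] key) {
        this.key = key.clone();
    }

    public boolean isValid(String userId, String signatureBase64) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] expected = mac.doFinal(userId.getBytes(StandardCharsets.UTF_8));
            byte[] actual = Base64.getDecoder().decode(signatureBase64);
            // Constant-time comparison to avoid timing side channels.
            return MessageDigest.isEqual(expected, actual);
        } catch (Exception e) {
            return false;
        }
    }
}
```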

Related

OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect. (and are also only aware of the application's URL and not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client; see the sketch below)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way the entire OpenID Connect authentication is transparent for any legacy devices.
This design could be further improved/optimized by caching the access tokens for the duration they are valid for (using the credentials as the cache key) and refreshing them when they expire.
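For reference, a minimal sketch of the token exchange in step 3, assuming Java 11's built-in HTTP client and placeholder Keycloak realm, client id and endpoint values:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sketch: exchange a device's basic-auth credentials for an access
// token via Keycloak's Resource Owner Password Credentials flow.
public class RopcTokenExchange {
    public static String fetchToken(String user, String password) throws Exception {
        String form = "grant_type=password"
                + "&client_id=" + URLEncoder.encode("legacy-proxy", StandardCharsets.UTF_8)
                + "&username=" + URLEncoder.encode(user, StandardCharsets.UTF_8)
                + "&password=" + URLEncoder.encode(password, StandardCharsets.UTF_8);
        // Keycloak token endpoint (older versions add an /auth prefix).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://keycloak.example.com/realms/iot/protocol/openid-connect/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Response body is JSON containing access_token, expires_in, etc.
        return response.body();
    }
}
```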
Now, this idea seems like such a no-brainer that we were surprised we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped as a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials Flow would be semantically more correct for such a device-to-device authentication and any future devices we produce will use it. However we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak and 2) we also want to support SSL Client Authentication using a X.509 certificate and from our understanding Keycloak only supports X.509 client certificate user authentication for users, and not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OP (OpenID Provider) supports the Resource Owner Password Credentials flow, which is deprecated and removed in OAuth 2.1.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.

SAML SSO request with redirect response to local private IP

I have a situation where I need to authenticate my application users with SAML 2.0. The SAML server is hosted on a public domain, while our application runs in the local intranet.
I wonder if SAML supports redirecting my users back to the local application (local IP) after authentication.
Is this the default behaviour, or does any special configuration need to be done on the SAML server side?
If the application is being accessed from the local intranet, it will definitely work, and no additional configuration is required. The reason is simple: SAML uses HTTP redirects and form POSTs to pass SAML tokens through the user's browser. So as long as both applications (IdP and SP) are reachable from that browser, it will work.
I have an ADFS server on Azure (available publicly) and the SP is in my local development environment (accessible only on my laptop). It works.
The SAML protocol provides the 'RelayState' request parameter for certain bindings; e.g. have a look at http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf
This allows you to specify a target URL for your application acting as the SAML SP.
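As an illustration, a minimal sketch of how an SP might attach RelayState to an HTTP-Redirect binding URL (the SAMLRequest and RelayState parameter names come from the SAML bindings spec; the URLs and helper are hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: append RelayState to an HTTP-Redirect binding URL so the IdP
// echoes it back and the SP can return the user to an intranet URL.
public class RelayStateExample {
    public static String buildRedirect(String idpSsoUrl,
                                       String deflatedBase64AuthnRequest,
                                       String targetUrl) {
        return idpSsoUrl
                + "?SAMLRequest=" + URLEncoder.encode(deflatedBase64AuthnRequest, StandardCharsets.UTF_8)
                + "&RelayState=" + URLEncoder.encode(targetUrl, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Placeholder values; the AuthnRequest must be DEFLATE-compressed
        // and base64-encoded per the HTTP-Redirect binding.
        System.out.println(buildRedirect(
                "https://idp.example.com/sso",
                "...",
                "https://10.0.0.12/myapp/home"));
    }
}
```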

Behaviour of SAML when HTTP Server used for high availability

I have implemented support for SAML SSO so that my application acts as the Service Provider, using the Spring Security SAML Extension. I was able to integrate my SP with different IdPs. So, for example, I have HostA, HostB, and HostC, and all of these have different instances of my application. I had an SP metadata file specified for each host and set the AssertionConsumerServiceURL to the URL of that host (e.g. https://HostA.com/myapp/saml/sso). I added each metadata file to the IdP, tested all of them, and everything works fine.
However, my project also supports high availability by having an IBM HTTP Server configured for load balancing. In this case the HTTP Server is configured with the hosts (A, B, C) used for load balancing, and the user accesses my application using the URL of the HTTP Server: https://httpserver.com/myapp/
If I defined one SP metadata file, specified the URL of the HTTP Server in the AssertionConsumerServiceURL (https://httpserver.com/saml/sso), and changed my implementation to accept assertions targeted at my HTTP Server, what would be the outcome of this scenario:
1. The user accesses the HTTP Server, which dispatches the user to HostA (behind the scenes).
2. My SP application on HostA sends a request to the IdP for authentication.
3. The IdP sends back the response to my HTTP Server at https://httpserver.com/saml/sso
4. Will the HTTP Server forward the response to HostA, so that it effectively arrives at https://HostA.com/saml/sso?
Thanks.
When deploying the same application in clustered mode behind a load balancer, you need to instruct the back-end applications about the public URL of the HTTP Server (https://httpserver.com/myapp/) that they are deployed behind. You can do this using the SAMLContextProviderLB (see more in the manual). But you seem to have already successfully performed this step.
Once your HTTP Server receives a request, it will forward it to one of your hosts to URL e.g. https://HostA.com/saml/sso and usually will also provide the original URL as an HTTP header. The SAMLContextProviderLB will make the SP application think that the real URL was https://httpserver.com/saml/sso which will make it pass all the SAML security checks related to destination URL.
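For reference, a minimal sketch of such a context provider, assuming the Spring Security SAML extension's SAMLContextProviderLB with placeholder values matching this scenario:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.saml.context.SAMLContextProviderLB;

// Sketch: tell the SP its public-facing URL so SAML destination checks
// pass even though requests arrive via the load balancer.
@Configuration
public class SamlLbConfig {
    @Bean
    public SAMLContextProviderLB contextProvider() {
        SAMLContextProviderLB provider = new SAMLContextProviderLB();
        provider.setScheme("https");
        provider.setServerName("httpserver.com");
        provider.setServerPort(443);
        provider.setIncludeServerPortInRequestURL(false);
        provider.setContextPath("/myapp");
        return provider;
    }
}
```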
As the back-end applications store state in their HttpSessions, make sure to do one of the following:
enable sticky sessions on the HTTP Server (so that related requests are always directed to the same server)
make sure to replicate HTTP session across your cluster
disable checking of response ID by including bean EmptyStorageFactory in your Spring configuration (this option also makes Single Logout unavailable)
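If you choose the third option, a minimal sketch of registering that bean, assuming Java-based Spring configuration (wiring details vary between versions):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.saml.storage.EmptyStorageFactory;
import org.springframework.security.saml.storage.SAMLMessageStorageFactory;

// Sketch: with EmptyStorageFactory registered, response IDs are not
// matched against a per-session message store, so any cluster node can
// accept a response; the trade-off is that Single Logout stops working.
@Configuration
public class SamlStorageConfig {
    @Bean
    public SAMLMessageStorageFactory storageFactory() {
        return new EmptyStorageFactory();
    }
}
```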

SiteMinder Single-Sign-On without installing agent on web application server

Our IT staff refuses to install the SiteMinder agent on our application's IIS 6.0 web server, citing security concerns as it is third-party software, as well as the possibility of high resource utilization impacting application performance.
They suggest that we set up an independent, segregated web server containing only a bare-bones IIS, the SiteMinder Agent, and a "shim" to authenticate login attempts.
This shim would be a single ASPX page marked to be protected by the agent. It would use the SiteMinder agent to authenticate the user ID, look up the user ID in the application's database, and return the user ID and password to the user's browser. A JavaScript function would then POST the user ID and password to the application's existing login page as if they typed it in themselves.
Are their concerns warranted? Why or why not?
Have you ever heard of anyone implementing a similar architecture?
Is their proposed solution good, bad, or ugly?
It does not look like it would work, because the agent manages not only the initial login, but also subsequent calls to the application, i.e. the authenticated session. The agent examines the cookie, validates it, etc. Your scenario does not describe how that would happen.
In our environment, all internet traffic goes through an Apache reverse proxy before hitting IIS. IIS is behind the firewall. The Apache reverse proxy has the SM agent, and all it does is forward the traffic to IIS. I suppose it would be feasible to do a similar setup with IIS acting as a reverse proxy.
BTW, tell your IT guy that his proposed shoestring and bubblegum login solution is a much bigger security concern than installing SiteMinder on IIS.
The Apache reverse proxy solution will definitely work, but with SiteMinder r12.51, Secure Proxy Server (SPS) is included, which is basically SiteMinder's version of a reverse proxy (plus a lot more).
SPS will let you configure a single server as a "gateway" for all of your applications that can't or won't install a SiteMinder agent. The web agent is embedded in SPS, and a proprietary Java app does the heavy lifting. SPS also has a GUI that follows the look and feel of the r12 WAMUI, which makes configuring it very simple.
Secure Proxy Server also has a Federation Gateway feature, so you don't need to install the web agent option pack if you are doing SAML Federation. All of your fcc pages can also be served by the SPS, so you can reduce the number of webservers needed to support your SSO environment.

Sharing Security Context between web app and RESTful service using Spring Security

We are designing security for a greenfield project with a UI web module (Spring MVC), the client, and a RESTful services web module (CXF), the server, to be deployed as separate WAR files on the same WebSphere app server. The system should be secured with Spring Security, authenticating against LDAP and authorizing against a database. We have been looking for the best solution to share the security context between the two apps, so a user can authenticate in the web UI and invoke its AJAX calls to the secured RESTful services. Options found:
OAuth: seems overkill for our requirements, introduces a fairly complex authentication process, and reportedly some enterprise integration issues
CAS: would amount to setting up an enterprise SSO solution, something beyond the scope of our engagement
Container-based (WebSphere) security, although not recommended by Spring Security, and we're not clear whether this could provide a solution to our specific needs
We're looking for a simpler solution. How can we propagate the security context between the two apps? Should we implement authentication in the UI web app, then persist sessions in the DB for the RESTful services to look up? Can CXF provide a solution? We read many threads about generating a 'security token' that can be passed around, but how exactly can this be done with Spring Security, and is it safe enough?
Looking forward to any thoughts or advice.
You want to be able to invoke the REST web services on the server on behalf of the user authenticated in the UI web module.
The requirement you have described is called Single Sign-On.
The simplest way to do it is to pass an HTTP header with the user name during the REST WS calls.
(I hope your REST client allows you to do that.)
To do it in a secure way, use one of the following:
Encrypt the user name in the REST client and decrypt it in the REST server
Ensure that the header is sent from the local host (since your applications are deployed in the same container)
Therefore, protect both applications using Spring Security, authenticating against LDAP.
In the first application (the REST client) use regular form authentication.
In the second application (the REST server) add your own PreAuthenticatedProcessingFilter:
http://static.springsource.org/spring-security/site/docs/3.1.x/reference/springsecurity-single.html#d0e6167
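For illustration, a minimal sketch using Spring Security's RequestHeaderAuthenticationFilter, which implements this pre-authenticated pattern (the header name and Java-based configuration are assumptions for this setup):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.web.authentication.preauth.RequestHeaderAuthenticationFilter;

// Sketch: accept the user name asserted by the trusted REST client via
// an HTTP header, without re-authenticating against LDAP.
@Configuration
public class PreAuthConfig {
    @Bean
    public RequestHeaderAuthenticationFilter headerAuthenticationFilter(
            AuthenticationManager authenticationManager) {
        RequestHeaderAuthenticationFilter filter = new RequestHeaderAuthenticationFilter();
        filter.setPrincipalRequestHeader("X-Auth-User"); // hypothetical header name
        filter.setAuthenticationManager(authenticationManager);
        return filter;
    }
}
```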
Edited
Authentication is the process of verifying a principal's identity.
In our case, both the REST client (the Spring MVC application) and the REST server (the CXF application) verify an identity against LDAP. LDAP "says" OK or not. LDAP is a user repository; it is stateless and does not remember previous states, so state should be kept in the applications.
According to my understanding, a user will not access the REST server directly; the user always accesses the REST client. Therefore, when the user accesses the REST client, he/she provides a user name and a password, and the REST client authenticates against LDAP. So, by the time the REST client accesses the REST server, the user is already authenticated and the REST client knows his name.
So, if a request comes to the REST server with a user name header, the REST server knows for sure that the user was authenticated, and it should not authenticate the user again against LDAP.
(The header should be passed in a secure way, as described above.)
The REST server should take the user name and access LDAP to collect related user information, without providing the user's password (since the user is already authenticated).