Is there any way to disable the replay cache when authenticating with SSPI?

I use SSPI Kerberos to do user authentication on Windows, but AcceptSecurityContext() always fails when it receives the same token from a client a second time. This seems to be caused by the replay cache. Is there any way to disable it on Windows with C?

OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect. (and are also only aware of the application's URL and not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials Flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
This way, the entire OpenID Connect authentication is transparent to the legacy devices.
This design could be further improved/optimized by caching the access tokens for the duration they are valid for (using the credentials as the cache key) and refreshing them when they expire.
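For step 3, the core of the proxy is a single form-encoded POST to Keycloak's token endpoint. Below is a minimal Java sketch of that exchange; the host, realm (iot), and client (legacy-proxy) are hypothetical placeholders, and the endpoint path varies by Keycloak version (releases before 17 prefix it with /auth):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class RopcTokenExchange {

        // Hypothetical endpoint; Keycloak versions before 17 use /auth/realms/... instead.
        private static final String TOKEN_ENDPOINT =
            "https://keycloak.example.com/realms/iot/protocol/openid-connect/token";

        // Percent-encodes alternating key/value pairs into a form body.
        static String form(String... kv) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < kv.length; i += 2) {
                if (sb.length() > 0) sb.append('&');
                sb.append(URLEncoder.encode(kv[i], StandardCharsets.UTF_8))
                  .append('=')
                  .append(URLEncoder.encode(kv[i + 1], StandardCharsets.UTF_8));
            }
            return sb.toString();
        }

        // Step 3: exchange the device's extracted credentials for an access token.
        static String exchange(String username, String password) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(TOKEN_ENDPOINT))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form(
                    "grant_type", "password",     // the Resource Owner Password Credentials grant
                    "client_id", "legacy-proxy",  // the proxy's own Keycloak client
                    "client_secret", "...",
                    "username", username,         // the device's baked-in credentials
                    "password", password)))
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            // The JSON body carries access_token and expires_in; parse it with any JSON
            // library, cache the token keyed by the credentials, and refresh on expiry (step 5).
            return response.body();
        }
    }

On success, the proxy would copy the access_token into an Authorization: Bearer header on the original request before forwarding it (step 4).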
Now, this idea seems like such a no-brainer, that we were surprised that we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped to a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials Flow would be semantically more correct for such a device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL Client Authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 client-certificate authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OP (OpenID Provider) supports the Resource Owner Password Credentials flow, which is deprecated and has been removed from modern OAuth 2.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.

Caching TGT from browser/other krb5 client

I'm playing around with Kerberos SSO. Here is what I have observed so far:
When I open a web app that is configured with Kerberos from the browser, it prompts me for a username and password; once I enter them, I'm logged into the web app.
When I do a kinit from the terminal and give my credentials, I'm signed in to the KDC as the given user. After kinit, when I open the web app I'm signed in without entering any credentials.
One possible explanation is that when I do a kinit, the TGT is stored in the OS, where it is available to other clients on the host machine, so my browser was able to use that TGT without prompting me for a password.
Now my questions are:
Will I be able to cache the TGT without using kinit?
If yes, how can I do it using a Java client?
If the answer to the first question is yes, will I be able to do it from my web app opened in the browser?
Whenever kinit is executed, a TGT is requested and stored in the OS ticket cache.
This TGT can be used to get a TGS (service ticket) for multiple services.
If you haven't added your app URL as a 'trusted intranet site' in the browser, the browser will show you a credentials pop-up the first time in every new session.
The browser accepts the credentials, gets a TGT from your KDC, and puts it in the cache. Then, using this TGT, it asks the KDC for a TGS for your app URL (usually identified as "HTTP(S)/APP_SERVER_HOSTNAME").
You can verify this:
Perform klist purge to clean all the tickets from the cache.
Open a browser and hit your app URL.
Provide credentials in the pop-up and submit.
Execute klist and observe that there are two tickets in the cache.
One of the tickets is the TGT, with an SPN like krbtgt/XXX.domain.
The other is the TGS for your service, usually "HTTP(S)/APP_SERVER_HOSTNAME".
Please note:
A TGT is created by default when you log in to the OS, so you can see a TGT for your user in the OS cache.
OS ticket cache behavior can be platform specific (not verified by me).
You can obtain a TGT/TGS, or even delegate credentials, from (Java) code (see the sketch below).
The cache mentioned in your krb5 conf is not necessarily the OS ticket cache.
For credential delegation, check out Java SPNEGO Authentication & Kerberos Constrained Delegation (KCD) to backend service
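On the first two questions: yes, a Java client can request and cache a TGT itself, without kinit, via JAAS and the JDK's Krb5LoginModule. A minimal sketch, with the login entry name (KrbClient) and the principal as placeholders; note that by default the ticket ends up in the in-memory Subject rather than the OS ticket cache, so the browser won't see it:

    import javax.security.auth.Subject;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.login.LoginContext;

    // Run with -Djava.security.auth.login.config=jaas.conf, where jaas.conf
    // contains an entry like (the name "KrbClient" is arbitrary):
    //   KrbClient { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=false; };
    public class ProgrammaticKinit {
        public static void main(String[] args) throws Exception {
            LoginContext lc = new LoginContext("KrbClient", callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("alice@EXAMPLE.COM"); // placeholder principal
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("secret".toCharArray());
                    }
                }
            });
            lc.login(); // the AS exchange with the KDC, i.e. what kinit does
            Subject subject = lc.getSubject();
            // The TGT is now a KerberosTicket in the Subject's private credentials;
            // wrap GSS-API calls in Subject.doAs(subject, ...) to get a TGS with it.
            System.out.println(subject.getPrivateCredentials());
        }
    }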

SailsJS in production mode, API routes give forbidden error

I've been working for some time on a Sails web application.
So far I've overcome all issues by careful reading, trial and error.
Recently I had to install the app for a closed beta test on the client's EC2 free-trial instance, where it works just fine in development mode.
The app is behind an nginx proxy which listens on port 80 and proxies to http://server_IP:1337.
CORS and CSRF are enabled; allowOrigins and onlyAllowOrigins are set to the server IP, the web domain, and localhost in production.js, security.js, and sockets.js.
But when switching to production mode, all API requests except GET give 403 Forbidden.
I've tried everything I could find on Google; it simply doesn't work in production but works completely in development.
If anyone could shed some light on this, it would be greatly appreciated.
EDIT:
Running the app with the log level set to silly showed this:
A socket is being allowed to connect, but the session could not be loaded. Creating an empty, one-time session to use for the life of this socket connection.
This log often shows up because a client socket from a previous lift or another Sails app is trying to reconnect (e.g. from an open browser tab), but the session indicated by its cookie no longer exists-- because either this app is not currently using a persistent session store like Redis, or the session entry has been removed from the session store (e.g. by a scheduled job or because it expired naturally).
Details:
Error: Session could not be loaded.
at Immediate._onImmediate (/var/www/allscubashops.com/node_modules/sails/lib/hooks/session/index.js:543:42)
at processImmediate (internal/timers.js:445:19)
Then I deleted the old browser cookie and got this:
Could not fetch session, since connecting socket has no cookie in its handshake.
Generated a one-time-use cookie:
sails.sid=s%3APlHbdXvOZRo5yNlKPdFKkaPgVTNaNN8i.DwZzwHPhb1%2Fs9Am49lRxRTFjRqUzGO8UN90uC7rlLHs
and saved it on the socket handshake.
This means the socket started off with an empty session, i.e. (req.session === {})
That "anonymous" session will only last until the socket is disconnected. To work around this,
make sure the socket sends a cookie header or query param when it initially connects.
(This usually arises due to using a non-browser client such as a native iOS/Android app,
React Native, a Node.js script, or some other connected device. It can also arise when
attempting to connect a cross-origin socket in the browser, particularly for Safari users.
To work around this, either supply a cookie manually, or ignore this message and use an
approach other than sessions-- e.g. an auth token.)
Also no new cookie was set.
The apparent conclusion is that in production mode something goes wrong with setting the session.
EDIT 2:
The latest finding is that if I run the app without the nginx proxy, I do not have the forbidden API requests issue, but I still have the one related to the session not being created.
I am sure the nginx proxy settings are OK, but now I am thinking of implementing Redis session storage instead of the default in-memory store to see what happens.
EDIT 3:
I have implemented Redis sessions, which work in both dev and prod modes.
Still the same situation: the EC2 instance without the nginx proxy works in production mode, while the same files (git replicated) on the EC2 instance with the nginx proxy don't work in production mode (API requests 403 Forbidden) but work great in development mode.
The X-CSRF-Token header is sent (screenshot).
The sails error message I get in production (besides the network 403 forbidden error for all requests except GET) is:
A socket is being allowed to connect, but the session could not be loaded. Creating an empty, one-time session to use for the life of this socket connection.
This log often shows up because a client socket from a previous lift or another Sails app is trying to reconnect (e.g. from an open browser tab), but the session indicated by its cookie no longer exists-- because either this app is not currently using a persistent session store like Redis, or the session entry has been removed from the session store (e.g. by a scheduled job or because it expired naturally).
Details:
Error: Session could not be loaded.
at /var/www/example.com/node_modules/sails/lib/hooks/session/index.js:543:42
at Command.callback (/var/www/example.com/node_modules/@sailshq/connect-redis/lib/connect-redis.js:148:25)
at normal_reply (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:714:21)
at RedisClient.return_reply (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:816:9)
at JavascriptRedisParser.returnReply (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:188:18)
at JavascriptRedisParser.execute (/var/www/example.com/node_modules/redis-parser/lib/parser.js:574:12)
at Socket.<anonymous> (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:267:27)
at Socket.emit (events.js:193:13)
at addChunk (_stream_readable.js:296:12)
at readableAddChunk (_stream_readable.js:277:11)
at Socket.Readable.push (_stream_readable.js:232:10)
at TCP.onStreamRead (internal/stream_base_commons.js:150:17)
Therefore I assume that the sockets connect but the session is not created.
Redis works OK; I can see sessions in it when running in development.
Have you exposed the CSRF endpoint, and are you making a call to that endpoint first to get a token before making further requests? This tripped me up once.
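To illustrate the suggestion: the client must first GET the CSRF endpoint while holding on to its session cookie, then echo the token back in an X-CSRF-Token header on every non-GET request. A hedged sketch of that client-side handshake, assuming the app exposes Sails' conventional /csrftoken route (the security/grant-csrf-token action in Sails 1.x) and using example.com as a stand-in for the app URL:

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Demonstrates the two-step CSRF handshake; example.com and the routes are placeholders.
    public class CsrfHandshake {
        public static void main(String[] args) throws Exception {
            // The cookie manager keeps the sails.sid session cookie between calls,
            // which is essential: the CSRF token is only valid for that session.
            HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new CookieManager())
                .build();

            // Step 1: fetch a token; the response body looks like {"_csrf":"..."}.
            HttpResponse<String> tokenResponse = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/csrftoken")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            // Crude parse for the sketch; use a real JSON library in practice.
            String csrf = tokenResponse.body().replaceAll(".*\"_csrf\"\\s*:\\s*\"([^\"]+).*", "$1");

            // Step 2: send the token back on every non-GET request.
            HttpResponse<String> postResponse = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/api/things"))
                    .header("X-CSRF-Token", csrf)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"test\"}"))
                    .build(),
                HttpResponse.BodyHandlers.ofString());
            System.out.println(postResponse.statusCode()); // 403 here means the token/session didn't stick
        }
    }

If this handshake succeeds when hitting port 1337 directly but returns 403 behind nginx, that points at the proxy: make sure it forwards the Host header and cookies unchanged, so the sails.sid session that issued the token is the same one the POST arrives on.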

Reuse the authentication of the user against LDAP in ActiveMQ Artemis

LDAP authentication is configured in an ActiveMQ Artemis (2.6.1, Red Hat AMQ 7.2) environment, but I am noticing that authentication for a user happens against the LDAP server very frequently. Even when no messages are received, authentication seems to be happening.
I attempted increasing security-invalidation-interval, but it doesn't seem to be taken into account.
I noticed this behavior by turning on logs.
The security-invalidation-interval applies to authorization but not to authentication. Authentication always hits whatever is providing the account information (in this case LDAP).
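If the repeated binds come from clients reconnecting, one client-side mitigation is to keep connections and sessions long-lived, since (as far as I understand Artemis) authentication runs when a session is created, not per message. A hedged JMS sketch under that assumption, with the broker URL, queue, and credentials as placeholders:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    // Reuses one connection and one session for many sends, so the broker
    // (and therefore LDAP) is hit once up front rather than once per operation.
    public class LongLivedProducer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker:61616");
            try (Connection connection = factory.createConnection("user", "password")) {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("exampleQueue"));
                for (int i = 0; i < 10; i++) {
                    producer.send(session.createTextMessage("message " + i)); // no re-authentication per send
                }
            }
        }
    }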

Can't Authenticate with Local .NET back-end

I've followed the guide:
Getting Started with Authentication with Mobile Services .NET for Windows Store
I'm able to run the service locally as long as I don't need to authenticate the user. I can also authenticate the user if I publish the service to Azure. But I want to be able to test and authenticate the user locally. How can this be done?
I'm using Live ID and I have the correct ClientID and ClientSecret set in Web.config. When I attempt to call LoginAsync from the client, the call fails with "The request could not be completed. (Method Not Allowed)".
** Update 2014-03-20 **
Based on the comments from Carlos and Henrik, I've updated my local service to look exactly like my server instance. I followed Scott Hanselman's excellent guide, and now I have my service running locally on ports 80 and 443 with a completely valid SSL certificate. It's even running on the exact same https://xxxx.azure-mobile.net hostname.
With these changes, there is now no configuration difference whatsoever between running the app against my local machine and running it against Azure. I can go to https://xxxx.azure-mobile.net in the browser, get redirected to the Live login, sign in, and get redirected back to the service successfully. In the browser it all works. However, it still doesn't work in the app.
I attached the debugger, set CLR errors to "break when thrown", and managed to trap the exception in the service. Here's what I see in the Immediate window:
The Response property is not helpful; it does not provide any additional information about the problem.
The only thing that stands out to me is that the app is trying to do a POST to /login/microsoftaccount while the browser would normally be doing a GET at this address (then getting redirected).
** Update #2 2014-03-20 **
After following Henrik's guide for remote debugging, I was able to load symbols and get a tiny bit more information:
"An existing connection was forcibly closed by the remote host"
The error code is 10054 (WSAECONNRESET): connection reset by peer.
It appears the Live authentication server may be forcibly terminating the connection, but only when I'm authenticating from the app. Again, authentication within the browser is fine. This, combined with the fact that /login/microsoftaccount is a POST from the app, seems to suggest there is a problem with the authentication token I'm getting back from LiveClient.LoginAsync. I'll do some more digging...
At the moment, it is set up so that you don't need authentication when running locally and accessing the service from localhost. In this case, anonymous access is let through (this is of course disabled while running in the cloud).
We don't really have a way for you to authenticate locally, as redirect URIs won't work (they can't point to localhost, since there is no way that Facebook, say, can resolve "localhost").
One option is that we somehow can mock the authentication locally and give you a token without connecting with the various identity providers. I am not sure exactly what that would look like but it is something we can consider.
Henrik
Did you perhaps set Mobile client app: Yes in your Live Connect project? I think that setting is meant to be used with the Live Connect SDK (client) flow, not the browser-based (server) flow. The client flow isn't supported yet with a .NET backend.
You also want to make sure you are using LoginAsync(MobileServiceAuthenticationProvider.MicrosoftAccount) on the client to trigger the server flow.