Reuse a user's LDAP authentication in ActiveMQ Artemis

LDAP authentication is configured in an ActiveMQ Artemis (2.6.1, Red Hat AMQ 7.2) environment, but I am noticing that authentication for a user happens against the LDAP server very frequently. Authentication seems to happen even when no messages are being received.
I attempted increasing security-invalidation-interval, but it doesn't seem to be taken into account.
I noticed this behavior by turning on the logs.

The security-invalidation-interval applies to authorization but not to authentication. Authentication always hits whatever is providing the account information (in this case LDAP).
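A minimal sketch of one way this shows up, assuming a plain JMS client (broker URL, user, and password are placeholders): every createConnection() triggers a fresh authentication against the broker's security manager, which in this setup means an LDAP bind each time.

import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class AuthDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; the broker delegates the
        // credential check to LDAP (e.g. via the LDAPLoginModule).
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://broker.example.com:61616");

        // Each createConnection() authenticates against the broker, and the
        // broker in turn authenticates against LDAP; per the answer above,
        // authentication results are not cached, so two connections mean
        // two LDAP binds even if no messages are ever sent.
        Connection c1 = cf.createConnection("someUser", "somePassword");
        Connection c2 = cf.createConnection("someUser", "somePassword");
        c1.close();
        c2.close();
    }
}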

Related

OpenID Connect: transparent authentication for legacy clients using Resource Owner Password Credentials

We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect. (and are also only aware of the application's URL and not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the
Resource Owner Password Credentials would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials Flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way the entire OpenID Connect authentication is transparent for any legacy devices.
This design could be further improved by caching the access tokens for as long as they are valid (using the credentials as the cache key) and refreshing them when they expire.
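For concreteness, here is a minimal sketch of step 3, the token exchange the proxy would perform against Keycloak's token endpoint at /realms/{realm}/protocol/openid-connect/token (the realm name, client ID, and client secret below are placeholder assumptions):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class RopcTokenClient {
    // Placeholder realm and client registered in Keycloak for the proxy.
    private static final String TOKEN_URL =
            "https://keycloak.example.com/realms/iot/protocol/openid-connect/token";

    public static String fetchTokenResponse(String deviceUser, String devicePassword)
            throws Exception {
        String form = "grant_type=password"
                + "&client_id=" + enc("legacy-proxy")
                + "&client_secret=" + enc("proxy-secret")
                + "&username=" + enc(deviceUser)
                + "&password=" + enc(devicePassword);

        HttpRequest request = HttpRequest.newBuilder(URI.create(TOKEN_URL))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Token request failed: " + response.statusCode());
        }
        // The JSON body contains access_token and expires_in; the proxy would
        // cache the token (keyed by the device credentials) until it expires,
        // then forward requests with "Authorization: Bearer <access_token>".
        return response.body();
    }

    private static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }
}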
Now, this idea seems like such a no-brainer that we were surprised we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to a question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped to a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to a question) Agreed that the Client Credentials Flow would be semantically more correct for such device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL client authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 certificate authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OP (OpenID Provider) supports the Resource Owner Password Credentials flow, which is deprecated and has been removed from OAuth 2.1.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.

How does one use Kafka with OpenID-Connect?

I'm starting out with Kafka.
I see that I'm able to pass headers when producing messages.
Traditionally one would have a web client (single-page app) where the user logs in via some remote OIDC IdP and receives a token. That token is then sent via an Authorization: Bearer <token> header to some RESTful backend, where the token is checked for validity, the payload is processed (saved to a database or similar), and something is returned or not.
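A rough sketch of that backend check, assuming a servlet filter; verify() is a hypothetical helper standing in for real JWT validation against the IdP's published keys:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BearerTokenFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String header = ((HttpServletRequest) req).getHeader("Authorization");
        if (header == null || !header.startsWith("Bearer ")
                || !verify(header.substring("Bearer ".length()))) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return; // invalid requests never reach the application
        }
        chain.doFilter(req, res);
    }

    private boolean verify(String token) {
        // Hypothetical helper: a real implementation checks the JWT signature
        // against the IdP's JWKS keys plus the exp/iss/aud claims.
        return false;
    }

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}
}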
Now there's Apache Kafka. It has a REST proxy. I can pass headers to the REST proxy and produce messages, or consume them, but I'm interested in the "secure my RESTful JSON API" part.
Currently, without Kafka, I have either an OIDC proxy (using Keycloak, that's keycloak-gatekeeper) that filters which requests make it to the backend, or an OIDC client that does token validation in a middleware function inside the backend. In either case invalid requests don't get "logged" as they would in Kafka, I assume.
Where does oidc token validation and request filtering fit in the Kafka/Confluent ecosystem?
Assume we have a SPA that talks to the Confluent REST Proxy. Some logged in user wants to post messages and some non-logged in user should not be able to.
How do Kafka and/or its tools deal with that scenario?
Kafka commonly uses SASL for authentication, plus authorizer plugins, to control access.
Certificates would be distributed amongst clients (here, that is the REST Proxy). You would need other proxies or plugins around that to prevent further access or audit the requests, as with any other web server.
HTTPS certificates would be used to secure traffic to the REST proxy, but it seems you're asking about something more specific.
There is no reference to OpenID in the documentation; only LDAP RBAC is mentioned, as a commercial offering.
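To make the SASL point concrete, here is a minimal sketch of a producer configured for SASL/PLAIN over TLS (broker address, credentials, and topic are placeholders). The broker authenticates the connection; separately configured ACLs then decide what the principal may do:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"svc-rest-proxy\" password=\"changeme\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Authentication happens at connection time via SASL; authorization
        // (which topics this principal may write to) is enforced by the
        // broker's authorizer/ACLs, not by the client.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "value"));
        }
    }
}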

Is there a way to disable user registration in Keycloak realm's local database as part of first broker login?

As per the Keycloak documentation, when a user logs in through identity brokering, Keycloak checks for and creates the user in the realm's local database as part of the First Broker Login flow.
Is there a way to disable this user onboarding into Keycloak's local database and always authenticate against the IdP instead of the local database?
And what is the rationale behind this default Keycloak implementation, given that it introduces some basic issues, such as user data synchronization between Keycloak and the IdP?

jboss cluster session replication not working (multiple jsessionid cookies)

I'm trying to authenticate on my web application, deployed on a JBoss instance working in cluster mode with 2 nodes.
After a successful authentication I get redirected to an admin page where a Filter checks whether I am logged in.
In standalone mode it works just fine, but when I deploy to production, which uses cluster mode, the filter rejects my request because it can't access the session parameters I set during authentication.
Using the developer tools I see there are 3 JSESSIONID cookies set: one for /, one for the /myapplication path, and another one called JSESSIONID-34234, also for the /myapplication path (I cleared them all before starting the process).
Browsing the JBoss docs I can't find any explanation for this, although it seems to be the source of my problem.
How can I get authentication (I am using Spring Security HTTP form-based authentication) working in my JBoss cluster?
Solved by enabling sticky sessions, adding the following to the VirtualHost configuration file:
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/myapplication" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://jboss6-hc-001-8109>
BalancerMember ajp://jboss2.imatiasl.lan:8109 route=jboss2-hc-001-server-02
BalancerMember ajp://jboss3.imatiasl.lan:8109 route=jboss3-hc-001-server-02
ProxySet lbmethod=byrequests stickysession=ROUTEID
</Proxy>
Web session clustering should work if:
You enabled <distributed/> in web.xml.
Your app's server group is using the ha or full-ha profile.
If you want your clustered app to perform better, consider implementing a good load-balancing policy. For most webapps, load-balancing with sticky sessions is OK.
In some webapps it is enough not to ask for re-authentication in case of failover, or the session is very easy to rebuild once authentication info is available. In such cases you don't even need web session clustering; clustered SSO is enough. The caveat is that you'll have to use container-level security for authentication (most probably supported by spring-security). This way only authentication info is replicated, so you'll have to design session data management to be resilient to situations where the session suddenly becomes empty.
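One practical caveat with <distributed/> sessions: attributes are replicated by serialization, so anything you put in the session (e.g. the authentication info the filter reads back) must be Serializable. A minimal sketch with placeholder names:

import java.io.Serializable;
import javax.servlet.http.HttpServletRequest;

public class LoginSessionHelper {
    // In a <distributed/> webapp, session attributes travel to the other
    // cluster node by serialization, so they must implement Serializable.
    public static class AuthInfo implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String username;
        public AuthInfo(String username) { this.username = username; }
    }

    // Placeholder attribute name; the admin-page filter would read it back
    // on whichever node the balancer routes the next request to.
    public static void markLoggedIn(HttpServletRequest request, String username) {
        request.getSession(true).setAttribute("authInfo", new AuthInfo(username));
    }
}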

Kerberos/negotiate S4U2proxy authenticate on behalf of user

I would like to do authentication at a proxy on behalf of a user via the Kerberos/Negotiate protocol.
The user authenticates to the server with a form login, so the server knows who the user is; the server then has to authenticate to a backend server on behalf of that user using Kerberos.
Please help me with sample code or point me to some good references.
thank you in advance
-csr
Michael: the OP is asking about what MS calls "constrained delegation," the S4U Kerberos extension they invented, rather than the standard delegation (TGT forwarding) to which you're referring.
CSR: first off: is the user providing their Kerberos password in your "form login?" If so, you don't need to use S4U; you can just kinit with the password and get credentials directly.
If not, then this is indeed one use case for which S4U is intended. You didn't say, but I'll assume you're in a Windows environment, although S4U has been added to MIT Kerberos as well.
Rather than have the client forward a TGT, S4U allows the domain administrator to authorize a service principal to independently impersonate any user to a limited set of other services. To enable a service for constrained delegation:
MMC "Users and Computers" snap-in
select the properties for the service account
"Delegation" tab
"trust... for delegation to specified services"
In your case, you'll also need to set "use any authentication protocol"; this enables "protocol transition." If your service were authenticating the user with Kerberos, you could have the DC require the service to present a recent ticket from the client, proving it has some business doing this. Since you're using a different authentication method, though, you have to forgo that check; that's what protocol transition does.
I'd start here for understanding S4U: http://msdn.microsoft.com/en-us/library/cc246071(PROT.13).aspx.
I have written a patch for Apache mod_auth_kerb implementing constrained delegation for Unix web services, so it does in fact work. :)
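For a Java starting point, here is a minimal JGSS sketch of S4U2self plus S4U2proxy. It assumes an Oracle/OpenJDK runtime (ExtendedGSSCredential is a com.sun API), that the service already holds its own Kerberos credentials (e.g. from a JAAS keytab login, not shown), and placeholder principal names:

import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;
import com.sun.security.jgss.ExtendedGSSCredential;

public class ConstrainedDelegationDemo {
    public static byte[] tokenForBackend(String userPrincipal) throws Exception {
        GSSManager manager = GSSManager.getInstance();
        Oid krb5 = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 mechanism OID

        // The service's own credentials, picked up from the current
        // JAAS Subject / credential cache.
        GSSCredential serviceCred =
                manager.createCredential(GSSCredential.INITIATE_ONLY);

        // S4U2self (protocol transition): obtain a credential for the user,
        // e.g. "alice@EXAMPLE.COM", without their password or TGT.
        GSSCredential userCred =
                ((ExtendedGSSCredential) serviceCred).impersonate(userPrincipal);

        // S4U2proxy: initiate a context with the backend service (placeholder
        // SPN) using the impersonated credential; the KDC enforces the
        // "delegation to specified services" list configured in AD.
        GSSName backend = manager.createName(
                "HTTP/backend.example.com@EXAMPLE.COM", GSSName.NT_USER_NAME);
        GSSContext ctx = manager.createContext(
                backend, krb5, userCred, GSSContext.DEFAULT_LIFETIME);
        return ctx.initSecContext(new byte[0], 0, 0); // token to send onward
    }
}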
Have the client send a forwardable (OK-AS-DELEGATE) ticket to you. You can extract the TGT from it and impersonate the user to perform your task. This works in my case: when I receive a service ticket from IE or Firefox, I bind against AD as the user.