WildFly Elytron: Principal not available in SimpleSecurityManager

I implemented an authentication mechanism similar to the CustomHeaderHttpAuthenticationMechanism in https://github.com/wildfly-security-incubator/elytron-examples/tree/master/simple-http-mechanism, using PasswordGuessEvidence and also the other callbacks mentioned in the example. The reason for the custom mechanism is that, besides a simple credential check, we also need to validate additional constraints to decide whether a user is valid.
Stepping through this authentication mechanism looks quite good: the authenticationComplete method is called and the authorizeCallback is successful. However, when accessing an EJB via a RESTEasy endpoint (the EJB is annotated with @SecurityDomain and @RolesAllowed...), the SimpleSecurityManager.authorize method fails because securityContext.getUtil provides neither a principal nor anything else. Accessing a method annotated with @PermitAll succeeds.
I guess the principal should be created by the ServerAuthenticationContext when working through the different callbacks, right?
How do I get the SimpleSecurityManager to recognize the principal? Would I need to create it in my authentication mechanism, and if so, how?

In this case it sounds like your EJB deployment has not been mapped to the WildFly Elytron security domain, so it is still making use of PicketBox security in the EJB tier, which is why you are not seeing the identity that was already established.
Within the EJB subsystem you can also add an application-security-domain mapping to map from the security domain specified in the deployment to the WildFly Elytron security domain.
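As a minimal sketch (the names are placeholders: "MyDomain" stands for the security domain name referenced by the deployment, e.g. via @SecurityDomain, and "MyElytronDomain" for your Elytron security domain), the mapping can be added with a CLI command along these lines:
/subsystem=ejb3/application-security-domain=MyDomain:add(security-domain=MyElytronDomain)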
FYI at some point in the future when we are ready to remove PicketBox from the server these additional mappings will no longer be required, they are just unfortunately needed at the moment whilst we have both solutions in parallel.

Related

spring data geode pool is not resolvable as a Pool in the application context

I've come back to a @SpringBootApplication project that uses spring-geode-starter version 1.2.4, although the same error happens after upgrading to version 1.5.6.
It sets up a Geode client using:
@Component
@EnableClusterDefinedRegions(clientRegionShortcut=ClientRegionShortcut.PROXY)
and in order to register interest subscriptions over HTTP, also
@Configuration
@EnableGemFireHttpSession
with a bean
@Bean
public ReactiveSessionRepository<?> reactiveSessionRepository() {
    return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
}
On starting the application, the Spring Data Geode client connects to the server (Geode version 1.14) and automatically copies the regions back to the client, which is great.
However, after all the region handles are copied over, there's an error with the @EnableGemFireHttpSession configuration, which is:
Error creating bean with name 'ClusteredSpringSessions' defined in class path resource [org/springframework/session/data/gemfire/config/annotation/web/http/GemFireHttpSessionConfiguration.class] and [gemfirePool] is not resolvable as a Pool in the application context
The first info message in the logs is:
org.springframework.session.data.gemfire.config.annotation.web.http.GemFireHttpSessionConfiguration 1109 sessionRegionAttributes: Expiration is not allowed on Regions with a data management policy of PROXY
org.springframework.data.gemfire.support.AbstractFactoryBeanSupport 277 lambda$logInfo$3: Falling back to creating Region [ClusteredSpringSessions] in Cache [Web]
So the client is trying to create a region ClusteredSpringSessions but it can't. The problem appears to resolve itself if I define a connection pool for HTTP, with a pool bean like this:
@Configuration
@EnableGemFireHttpSession(poolName="devPool")
public class SessionConfig {

    @Bean
    public ReactiveSessionRepository<?> reactiveSessionRepository() {
        return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
    }

    @Bean("devPool")
    PoolFactoryBean sessionPool() {
        PoolFactoryBean pool = new PoolFactoryBean();
        ConnectionEndpoint ce = new ConnectionEndpoint("1.2.3.4", 10334);
        pool.setSubscriptionEnabled(true);
        pool.addLocators(ce);
        return pool;
    }
}
There is still the "Expiration is not allowed on Regions with a data management policy of PROXY" info message in the log, but this time the "Falling back to creating Region [ClusteredSpringSessions] in Cache [Web]" step appears to work.
I don't understand why a default pool can't connect.
If a pool is defined, then in version 1.2.4 that can cause this issue.
Since you are using Spring Boot for Apache Geode (SBDG), which is an excellent choice (thank you), you can simply include the spring-geode-starter-session dependency on your @SpringBootApplication classpath, which removes the need to explicitly annotate your Spring Boot application with SSDG's @EnableGemFireHttpSession annotation.
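For instance, assuming Maven and that the version is managed by the same BOM that already manages your spring-geode-starter, the additional dependency is simply:
<dependency>
    <groupId>org.springframework.geode</groupId>
    <artifactId>spring-geode-starter-session</artifactId>
</dependency>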
See here for more details. I also have a Sample application demonstrating the use of SSDG here. The guide and source code for this example, along with other examples, can be found here.
Also, I would generally advise that users drive the GemFire/Geode cluster configuration from the application and not let the cluster dictate the Regions (and/or other components/configuration) that the client gets. However, SDG's @EnableClusterDefinedRegions annotation is provided and generally useful when you do not have control over the GemFire/Geode cluster your application is using. Still, in the (HTTP) Session use case, the GemFire/Geode cluster would need a Session Region (which defaults to "ClusteredSpringSessions", as determined by Spring Session for Apache Geode (SSDG) itself) anyway.
OK, now to the problem at hand...
I think what is happening here is that, for backwards compatibility and legacy reasons, Spring Data for Apache Geode (SDG), on which both SSDG and SBDG are based (SBDG additionally pulls in SSDG as well), defined a GemFire/Geode Pool with the name "gemfirePool", specifically when using the SDG XML namespace and defining a DataSource configuration.
So, it is somewhat naively assumed users would be explicitly defining a Pool and calling it "gemfirePool", and not simply relying on a "default" Pool connection to the GemFire/Geode cache server (namely "localhost", 40404, or if using Locators (recommended), "localhost" and 10334).
However, for development purposes, and in SBDG specifically, I rely on the fact that GemFire/Geode creates a "DEFAULT" Pool anyway (when no explicit Pool is defined), and forgo the strict requirement that a "gemfirePool" should exist. But, SBDG builds on SSDG and SDG and they still rely on the legacy arrangement (for example).
I have filed an Issue ticket in SSDG to change this and better align SSDG with what SBDG prefers going forward, but I simply have not gotten around to it yet. My apologies for your inconvenience.
Anyway, it is a simple change you can make externally to your Spring Boot application, in application.properties, like so (see here, from the HTTP Session Sample I referenced from SBDG above). This will allow you to configure the Session Region Pool name.
Also note that it is possible to change the name of the Session Region used by the client, using this property, if you are using SDG's @EnableClusterDefinedRegions and the Region definition pulled down from the cluster is named differently on the server side.
Additionally, you can configure the client Session Region data policy using properties as well (for example).
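As a sketch, the relevant application.properties entries look roughly like the following (the property names are the Spring Session for Apache Geode configuration properties as I recall them; please verify them against the SSDG documentation for your version, and the values shown are just examples):
# Use the DEFAULT Pool instead of requiring a Pool named "gemfirePool"
spring.session.data.gemfire.cache.client.pool.name=DEFAULT
# Name of the (client) Session Region, e.g. if the cluster-defined Region is named differently
spring.session.data.gemfire.session.region.name=ClusteredSpringSessions
# Client Session Region data management policy (e.g. PROXY or CACHING_PROXY)
spring.session.data.gemfire.cache.client.region.shortcut=PROXY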
Regarding the Expiration "info" message you are seeing in the logs...
Since the client Session Region is a PROXY by default, Expiration, Eviction and other Region data management policies (e.g. Compression) do not actually make much sense.
In fact, SSDG is smart about whether to apply additional Region data management policies locally or not (see here, and specifically, this logic).
The message you are seeing in your application logs is in fact coming from SSDG specifically. It really serves as a reminder that your Session state management is actually "managed" on the server side (when the application client is using a PROXY, or even a CACHING_PROXY Region for that matter) and that the corresponding server-side, or cluster, Sessions Region should be configured manually and appropriately, with Expiration policies as well as other things if necessary. Otherwise, no Session expiration would actually happen!
I hope all this makes sense.
If you continue to have problems, feel free to file an Issue ticket and provide an example test or small application replicating your problem.

camunda-webapp and JAAS-authentication

In a WildFly 8.1.0.Final we deploy:
our own CRM-webapp (Seam2/JSF1.2)
camunda-webapp 7.3.0
camunda-engine 7.3.0 as a module (shared engine)
custom engine-plugin to enable camunda-engine to use the user/group-store of our CRM
We display camunda tasklist in an iframe inside our CRM.
This setup runs fine so far, but we have to log in twice.
So we need SSO, but we cannot set up AD/LDAP as in the camunda-sso-jboss example.
I thought of WildFly's JAAS and SSO capabilities, but I am not sure whether camunda-webapp supports JAAS authentication.
I think the security-domain configuration in jboss-web.xml is just generated by a Maven archetype and has no effect on the camunda-webapp, is that right? I changed that configuration and it had no effect at all.
Can someone give me a hint where I should hook into camunda-webapp, or whether this is possible at all?
OK, I have a first success.
I changed org.camunda.bpm.webapp.impl.security.auth.Authentications.getFromSession to accept an HttpServletRequest as parameter instead of an HttpSession (it is called from AuthenticationFilter.doFilter). If the session contains no Authentications, I try to pull the Principal from the request and, if one exists, I log the user in silently (copied mostly from UserAuthenticationResource.doLogin).
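Roughly, the idea is the following sketch (only the plain Servlet API is shown; the actual Camunda-side login call is indicated as a comment because it relies on Camunda-internal code, essentially what UserAuthenticationResource.doLogin does):
// Sketch of the check performed when the session holds no Authentications yet
void silentLoginIfPossible(javax.servlet.http.HttpServletRequest httpRequest) {
    java.security.Principal containerPrincipal = httpRequest.getUserPrincipal();
    if (containerPrincipal != null) {
        String userId = containerPrincipal.getName();
        // Perform the silent Camunda login for userId here, essentially what
        // UserAuthenticationResource.doLogin does (look the user up via the shared
        // engine's IdentityService, resolve groups and register the Authentication).
    }
}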
Then I have a very simple webapp ("testA") with only one JSP and BASIC authentication. Both camunda-webapp and testA have the same security-domain configured, and the host in the undertow subsystem has the "single-sign-on" setting.
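For reference, a sketch of that configuration (the domain name is a placeholder): both WEB-INF/jboss-web.xml files contain the shared security domain,
<jboss-web>
    <security-domain>my-crm-domain</security-domain>
</jboss-web>
and the single-sign-on setting is added to the Undertow host, e.g. via the CLI (leaving the attribute defaults as-is):
/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:add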
Now I can log in to /testA, then call /camunda in another tab without further authentication.
The code has to be improved a lot. If everything works fine, I'll post the details.
If someone thinks this is the wrong approach, please let me know ;-)

JBoss EJB and Shiro

I have a CDI -> EJB App.
In the past I did the security with JBoss j_security.
My security with Shiro works.
But my only problem is: how can I get the SessionContext in my EJB?
With JBoss security I got the username of the logged-in user with:
@Resource
SessionContext sessionContext;
String email = sessionContext.getCallerPrincipal().getName();
Now I want to get the username in my EJB.
How can I set the username so that the SessionContext returns it?
Thank you for your help.
This is a bit tricky, and it also depends on the version of JBoss that you are using. In the AS 7.x and EAP 6.x range this can't really be done by using public APIs because of several bugs.
In order to make the SessionContext aware of the user name and roles, you can use JBoss-specific code like I used here: https://github.com/javaeekickoff/jboss-as-jaspic-patch/commit/d691fd4532d9aeae6136e3adc2537ff81c525673
It should be something like:
SecurityContext context = SecurityActions.getSecurityContext();
context.getUtil().createSubjectInfo(
    new SimplePrincipal(userName),
    null,
    someSubject
);
Take a look at the rest of the file to see how someSubject should be created and populated.
Unfortunately, for the mentioned JBoss versions @RolesAllowed will never work, since JBoss doesn't take over the already authenticated identity from the local caller, but will always consult a JBoss-specific "security domain" just prior to calling the actual bean. Of course it knows nothing about Shiro.

Client identifier in JBoss httpinvoker (auditing)

I am using the httpinvoker in JBoss 4.0.4 (a little old) for EJB invocations.
Since there are many clients that make calls to my server, I want to identify the client for each call on the server.
Is there a way to do this with JBoss httpinvoker?
I could imagine adding a header to identify my client in each HTTP request, but cannot find a way to add a header in httpinvoker.
Auditing builds on a name, and thus on an authentication scheme somehow.
Therefore I suggest using the standard client authentication infrastructure to solve your problem. This works for RMI as well (it's not bound to HTTP), and the user ID is even passed down into your EJBs.
Server
Put the EJB in a security-domain (ejb.jar: META-INF/jboss.xml)
You could use the application-policy other, which uses just the UsersRolesLoginModule (conf/login-config.xml); this is the default policy and it is already configured.
Add users.properties and roles.properties to your ejb.jar file (top level package): These are used by the UsersRolesLoginModule
For each user, add their name and a (dummy) password to users.properties (a sketch of these files follows this list)
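A minimal sketch of those server-side artifacts (the domain, user and role names are placeholders):
META-INF/jboss.xml (referencing the default "other" domain):
<jboss>
    <security-domain>java:/jaas/other</security-domain>
</jboss>
users.properties (user=password):
client42=dummy
roles.properties (user=role1,role2, see Addendum 2):
client42=connect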
Client
Create a callback class which implements javax.security.auth.callback.CallbackHandler: this callback is used when the authentication needs the user and the password.
Create a javax.security.auth.login.LoginContext; pass the callback handler as the 2nd argument; call login() on the instance of the LoginContext
Connect normally to the EJB server using an InitialContext
Add -Djava.security.auth.login.config=.../jboss-4/client/auth.conf when you start the client (a client-side sketch follows this list)
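A hedged client-side sketch (the entry name "other" matches the default entry in jboss-4/client/auth.conf as far as I recall; the user ID "client42" and the dummy password are placeholders):
import javax.naming.InitialContext;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;

public class AuditedClient {

    // Supplies the client identifier as the user name, plus a dummy password (see Addendum 1)
    static class ClientIdCallbackHandler implements CallbackHandler {
        public void handle(Callback[] callbacks) {
            for (int i = 0; i < callbacks.length; i++) {
                if (callbacks[i] instanceof NameCallback) {
                    ((NameCallback) callbacks[i]).setName("client42");
                } else if (callbacks[i] instanceof PasswordCallback) {
                    ((PasswordCallback) callbacks[i]).setPassword("dummy".toCharArray());
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // "other" is the entry name in the default jboss-4/client/auth.conf
        LoginContext loginContext = new LoginContext("other", new ClientIdCallbackHandler());
        loginContext.login();

        // Connect normally to the EJB server; the identity is propagated with the calls
        InitialContext ctx = new InitialContext();
        // ... look up and call the EJB as usual
    }
}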
This way a user ID is passed from the client to the EJB (as part of the standard authentication process). Now, in the EJB methods, you can get the user ID by calling getCallerPrincipal() on the SessionContext instance. I have tested this against JBoss 4.2.3.
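On the bean side this is just a sketch like the following (with EJB 3 the SessionContext can be injected; with EJB 2.x you get it via setSessionContext):
@Resource
SessionContext sessionContext;
// ...
String clientId = sessionContext.getCallerPrincipal().getName(); // the user ID the client logged in with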
Additional information: JBoss client authentication
Addendum 1:
Using RMI or HTTP, the password is not transported in a secure way. In this case just use a dummy password; this is OK for auditing.
On the other hand, if you use RMI over SSL or HttpInvoker over HTTPS, you could change to a real and secure authentication quickly.
Addendum 2:
I am not sure if it works without defining roles. Possibly you have to:
Add a line in roles.properties for each user: add a connect role, for example
Add role definitions in ejb-jar.xml as well: security-role-ref for each EJB, and security-role and method-permission in the assembly-descriptor
Update
As there is already a login module, there might be another possibility:
If you have the source code of the login module, you could possibly use an additional TextInputCallback to get extra information from the client (in your case a user ID). That information could be used to create a custom Principal. Within the EJB, the result of getCallerPrincipal() could be cast to the custom Principal.
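A hedged sketch of that idea (CustomPrincipal is a hypothetical class implementing java.security.Principal, and the prompt strings are arbitrary; the callbacks themselves are the standard JAAS classes):
// Inside the custom LoginModule's login() method:
try {
    NameCallback nameCallback = new NameCallback("user");
    TextInputCallback clientIdCallback = new TextInputCallback("client id");
    callbackHandler.handle(new Callback[] { nameCallback, clientIdCallback });
    // Carry the extra information alongside the user name in a custom Principal,
    // and add it to the Subject in commit() so getCallerPrincipal() can return it
    principal = new CustomPrincipal(nameCallback.getName(), clientIdCallback.getText());
} catch (IOException e) {
    throw new LoginException(e.getMessage());
} catch (UnsupportedCallbackException e) {
    throw new LoginException(e.getMessage());
}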

How to programmatically un/register POJOs as services in JBoss 4.2.3.GA

I need to be able to circumvent the whole deployer malarkey and programmatically register/unregister (dependency-less) POJOs as services in JBoss.
Currently I'm dynamically creating an MBean interface and registering this with the JBoss MBeanServer, and then registering local/remote with Jndi.
This works OK (I can have a standard service from a vanilla SAR reference one of these service POJOs with the @EJB annotation); however, the container seems to leave stale references behind after calling unbind() and unregisterMBean().
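For context, a minimal sketch of this kind of registration/unregistration (the class, ObjectName domain and JNDI names are placeholders; MBeanServerLocator is the JBoss-specific helper for locating the server's MBeanServer):
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.naming.InitialContext;
import org.jboss.mx.util.MBeanServerLocator;

public class PojoServiceRegistrar {

    public void register(Object service, String name) throws Exception {
        MBeanServer server = MBeanServerLocator.locateJBoss();
        // the service object exposes a dynamically created MBean interface
        server.registerMBean(service, new ObjectName("myapp:service=" + name));
        new InitialContext().bind("myapp/" + name, service);
    }

    public void unregister(String name) throws Exception {
        new InitialContext().unbind("myapp/" + name);
        MBeanServerLocator.locateJBoss().unregisterMBean(new ObjectName("myapp:service=" + name));
    }
}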
Obviously I'm missing something by not dealing with the container in a way it expects, but what am I missing? Or is there an easier way (can't see much in the way of an API)?
thanks.