How to get cluster node information on WildFly 21?

I am trying to get the cluster information (especially the node list) with the application code below:

MBeanServer server = ManagementFactory.getPlatformMBeanServer();
String clusterMembers = (String) server.getAttribute(new ObjectName("jboss.infinispan:type=CacheManager,name=\"ejb\",component=CacheManager"), "clusterMembers");

or:

Object obj = server.getAttribute(ObjectName.getInstance("jgroups:type=channel,cluster=\"web\""), "View");

Both throw an InstanceNotFoundException:
javax.management.InstanceNotFoundException: jgroups:type=channel,cluster="web"
2021-02-08 15:39:59,046 ERROR [stderr:71] (default task-1) javax.management.InstanceNotFoundException: jgroups:type=channel,cluster="web"
2021-02-08 15:39:59,047 ERROR [stderr:71] (default task-1) at org.jboss.as.jmx.PluggableMBeanServerImpl.findDelegate(PluggableMBeanServerImpl.java:1113)
2021-02-08 15:39:59,047 ERROR [stderr:71] (default task-1) at org.jboss.as.jmx.PluggableMBeanServerImpl.getAttribute(PluggableMBeanServerImpl.java:389)

There are a number of ways to do this. Programmatically, perhaps the simplest way is to use WildFly's clustering API. For details, see: https://github.com/wildfly/wildfly/blob/master/docs/src/main/asciidoc/_high-availability/Clustering_API.adoc#group-membership
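For instance, a minimal sketch of that approach (hedged: the JNDI name assumes the default "ee" channel, and the bean itself is illustrative):

import java.util.stream.Collectors;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import org.wildfly.clustering.group.Group;
import org.wildfly.clustering.group.Node;

@Stateless
public class ClusterTopologyBean {

    // Assumption: the default channel is named "ee"; adjust the JNDI name to your configuration
    @Resource(lookup = "java:jboss/clustering/group/ee")
    private Group group;

    public String listMembers() {
        return this.group.getMembership().getMembers().stream()
                .map(Node::getName)
                .collect(Collectors.joining(", "));
    }
}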
Alternatively, you can use WildFly's CLI to obtain cluster information, e.g.:

[standalone@embedded /] /subsystem=jgroups/channel=ee:read-attribute(name=view)
{
"outcome" => "success",
"result" => "[localhost|0] (1) [localhost]"
}
Alternatively, you can obtain cluster information using the Infinispan or JGroups APIs directly, e.g.:

@Resource(lookup = "java:jboss/infinispan/cache-container/web")
private EmbeddedCacheManager manager;

@Resource(lookup = "java:jboss/jgroups/channel/default")
private JChannel channel;

public void foo() {
    System.out.println(this.manager.getMembers());
    System.out.println(this.channel.getView());
}
Lastly, if you prefer to go the JMX route, ensure that your WildFly configuration defines a JMX subsystem, otherwise no mbeans will be registered.
Infinispan mbeans are registered under the domain "org.wildfly.clustering.infinispan". Make sure you are using WildFly 22.0.1.Final, which includes a fix for https://issues.redhat.com/browse/WFLY-14286
Your JGroups JMX ObjectName is almost correct: with the default configuration, each Infinispan cache manager uses a distinct ForkChannel based on a common JChannel named "ee", so the ObjectName should use cluster="ee" rather than cluster="web".
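Putting that together, a hedged sketch of the corrected JMX lookups (the domain and channel name come from the notes above; the exact key structure and attribute casing are assumptions worth verifying against your server's mbean tree):

MBeanServer server = ManagementFactory.getPlatformMBeanServer();

// The shared JChannel is named "ee" under the default configuration, not "web";
// the attribute may be "view" or "View" depending on the JGroups version
Object view = server.getAttribute(ObjectName.getInstance("jgroups:type=channel,cluster=\"ee\""), "view");

// Assumption: the Infinispan cache manager mbean keeps its key structure, but under the WildFly-specific domain
Object members = server.getAttribute(new ObjectName("org.wildfly.clustering.infinispan:type=CacheManager,name=\"ejb\",component=CacheManager"), "clusterMembers");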

Liberty : Intermediate context does not exist : jms/xyz

I am working on migrating an EAR application to Liberty. It is a web application that uses JMS with the MQ messaging provider.
For example, in my stage.config.xml, we have the following properties:
MQQueue(0).CCSID
MQQueue(0).baseQueueName
MQQueue(0).jndiName
MQQueue(0).name
MQQueueConnectionFactory(0).CCSID
MQQueueConnectionFactory(0).channel
MQQueueConnectionFactory(0).connectionPool.ConnectionPool(0).maxConnections
MQQueueConnectionFactory(0).description
MQQueueConnectionFactory(0).host
MQQueueConnectionFactory(0).jndiName
MQQueueConnectionFactory(0).name
MQQueueConnectionFactory(0).port
MQQueueConnectionFactory(0).provider
MQQueueConnectionFactory(0).queueManager
MQQueueConnectionFactory(0).sessionPool.ConnectionPool(0).maxConnections
MQQueueConnectionFactory(0).transportType
<featureManager>
<feature>jsp-2.3</feature>
<feature>localConnector-1.0</feature>
<feature>jndi-1.0</feature>
<feature>jdbc-4.1</feature>
<feature>samlWeb-2.0</feature>
<feature>wasJmsClient-2.0</feature>
<feature>wasJmsClient-1.1</feature>
<feature>wmqJmsClient-1.1</feature>
<feature>jndi-1.0</feature>
<feature>jmsMdb-3.1</feature>
</featureManager>
<featureManager>
<exclude>jsf-2.2</exclude>
</featureManager>
<variable name="wmqJmsClient.rar.location"
value="${server.config.dir}/wmq/wmq.jmsra.rar"/>
<jmsQueue id="1533A.TRANSPORT.ASSIGNMENT.RESP" jndiName="jms/xyz/queue/transportAssignment/response"></jmsQueue>
<jmsQueue id="1533A.TRANSPORT.ASSIGNMENT.RQST" jndiName="jms/xyz/queue/transportAssignment/request"></jmsQueue>
<jmsQueueConnectionFactory jndiName="jms/xyz" id="xyz_qa_QCF">
<connectionManager maxPoolSize="10"/>
<properties.wmqJms providerVersion="unspecified" transportType="CLIENT" applicationName="xyz" channel="CLIENTS.xyz" hostName="host123.GOT.hst.NET" queueManager="xyz141Q" CCSID="1208"/>
</jmsQueueConnectionFactory>
Exception I get : NameNotFoundException: Intermediate context does not exist: jms/xyz
Can anyone please guide me on which parameters/configurations I have to use in server.xml for this to work? Kindly help.
There are several issues with your server.xml:
duplicated jndi-1.0 feature
mixed wasJmsClient and wmqJmsClient - if you only use MQ, then remove the wasJmsClient features
mixed versions of wasJmsClient - use only one version if you also need to connect to internal queues
<exclude> in features - where did you find such a construct? I do not believe it is supported
finally, you are using jms/xyz once as the QCF JNDI name and once as a context name. That is incorrect. Change your QCF JNDI name to something different, e.g. jms/xyz/qcf, so that jms/xyz can act as an intermediate context (see the sketch below)
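To illustrate the fix: once the factory is renamed, jms/xyz becomes a pure intermediate context and both lookups can resolve. A hedged sketch (JNDI names taken from your config plus the suggested jms/xyz/qcf rename):

InitialContext ctx = new InitialContext();
// The factory no longer collides with the jms/xyz context it used to occupy
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/xyz/qcf");
Queue request = (Queue) ctx.lookup("jms/xyz/queue/transportAssignment/request");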
UPDATE based on comments
Check how you are using JMS classes.
Here is config and code I used for connecting to MQ:
server.xml fragment:
<feature>jms-2.0</feature>
Java code to send message:
@ApplicationScoped
public class JMSHelper {

    private static Logger logger = Logger.getLogger(JMSHelper.class.getName());

    @Inject
    @JMSConnectionFactory("jms/myapp/NotificationQueueConnectionFactory")
    private JMSContext jmsContext;

    @Resource(lookup = "jms/myapp/NotificationQueue")
    private Queue queue;

    @Transactional
    void invokeJMS(Object json) throws JMSException, NamingException {
        String contents = json.toString();
        logger.info("Sending " + contents + " to " + queue.getQueueName());
        jmsContext.createProducer().send(queue, contents);
        logger.info("JMS Message sent successfully!");
    }
}
I am assuming you will use the resource adapter, so please start with reading Liberty and the IBM MQ resource adapter in the IBM documentation.
When you have configured things as documented by IBM and it still does not work, please post the Liberty config and the full exception you get, so we can help you again.

started geode spring boot and save to remote region but failed to start bean gemfireClusterSchemaObjectInitializer

With a simple client app, make an object and object repository, connect to a Geode cluster, then run a @Bean ApplicationRunner to put some data to a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner StartedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
The question Problem creating region and persist region to disk Geode Gemfire Spring Boot helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the remote cluster appear to be at localhost. If I remove it altogether, then the put works.
If I remove just useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction
In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. the Spring [Boot] Data GemFire/Geode application) to be pushed from the client-side to the cluster (of GemFire/Geode servers).
I say "enable" because it depends on the client-side configuration metadata (i.e. the Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you defined with a bean definition (in XML, or JavaConfig with @Bean, etc). Implicit configuration is auto-configuration or using SDG annotations like @EnableEntityDefinedRegions or @EnableCachingDefinedRegions, etc.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls by default (the HTTP-based mechanism described below being the alternative).
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging as opposed to using Gfsh. You can do all sorts of interesting combinations: Gfsh Locator with Spring servers, Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, Spring is generally not on the servers' classpath, and the "provided, canned" Functions I referred to above are not present and automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or two about ;-) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
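Note that the client-side HTTP endpoint defaults to localhost:7070, which is why your POST went to https://localhost:7070/gemfire/v1/regions. A hedged sketch of pointing it at the remote cluster instead (the host value is a placeholder for a server running the embedded Management REST service):

@ClientCacheApplication(name = "Web", locators = @Locator, subscriptionEnabled = true)
@EnableClusterDefinedRegions
// Assumption: "geode-server.example.com" stands in for your actual server host
@EnableClusterConfiguration(useHttp = true, host = "geode-server.example.com", port = 7070)
@EnablePdx
public class MyCache { ... }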
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have a full installation of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode (and especially) Pivotal GemFire distributions are not. The JARs are, but the full distro isn't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
    logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.

java connection refused error with spring boot data geode remote locator

Per my question Apache Geode Web framework, I've checked through various Spring guides from here and Spring Data Geode samples from here and written a short Spring Data Geode application, but it cannot connect to the remote Gfsh-started Geode locator. The Application class is:
package cm;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication.Locator;
import org.springframework.data.gemfire.config.annotation.EnablePdx;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
and in the resources directory application.properties I've set up the remote locator:
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
When I build and run the application, it discovers the locator (which it reports by the server name):
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : AutoConnectionSource discovered new locators [UAT:10334]
A couple of seconds later it throws the error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : locator UAT:10334 is not running.
and
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_232]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_232]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[na:1.8.0_232]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_232]
at java.net.Socket.connect(Socket.java:607) ~[na:1.8.0_232]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:958) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:899) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:888) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.getServerVersion(TcpClient.java:290) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:184) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocatorUsingConnection(AutoConnectionSourceImpl.java:209) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocator(AutoConnectionSourceImpl.java:199) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryLocators(AutoConnectionSourceImpl.java:287) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl$UpdateLocatorListTask.run2(AutoConnectionSourceImpl.java:500) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1371) [geode-core-1.9.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_232]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_232]
at org.apache.geode.internal.ScheduledThreadPoolExecutorWithKeepAlive$DelegatingScheduledFuture.run(ScheduledThreadPoolExecutorWithKeepAlive.java:276) [geode-core-1.9.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_232]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
After a lot of investigation I thought the Spring Data Geode client expects a Spring Boot Geode server, per Connecting GemFire using Spring Boot and Spring Data GemFire, so I downloaded the ListRegionsOnServerFunction jar and deployed it on the Gfsh-started server (have not yet restarted the server...), but that causes the same error condition.
Following Spring-Data-Gemfire - Unable to contact a Locator service. Operation either timed out or Locator does not exist, if I try to change the application.properties from
spring.data.gemfire.pool.locators=1.2.3.4[10334]
to
spring.gemfire.locators=1.2.3.4[10334]
or other variations, then the app can't find the remote locator and throws:
[Timer-DEFAULT-3] o.a.g.c.c.i.AutoConnectionSourceImpl : locator localhost/127.0.0.1:10334 is not running.
While writing this question I finally found How to connect a remote-locator in Geode, and I also can't PING the GFSH server from the Spring app. However, the server bind address is set up properly for remote locator clients, and various other services and UIs using a locally built Geode Native Client for Geode v1.10 can connect. I suspect PING may be disabled across this (semi-internal) network by default. I also disabled the firewall rules for ports 10334, 1099, 40404 to allow all traffic, but still get the same error condition.
It turns out that from repeated INFO messages in the Spring Boot app after the connection refused:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList changing locator list: loc form: LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false] ,loc to: UAT:10334
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList locator list from:[UAT:10334, /1.2.3.4:10334] to: [LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false], LocatorAddress [socketInetAddress=/1.2.3.4:10334, hostname=1.2.3.4, isIpString=true]]
and then running list clients on the server, the connection from the Spring Boot app to the Geode server v 1.10 is in fact established. Arrrgh!
It means the locator logic is working but this doesn't explain why after the first connection there's a java.net.ConnectException: Connection refused: connect error. Any ideas?
One quick note about your Spring Boot application class...
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
The following statements are true iff you are using Spring Boot for Apache Geode (or Pivotal GemFire), which is highly recommended.
When using SBDG (by declaring the correct org.springframework.geode:spring-geode-starter dependency on your application classpath), you do not need to explicitly declare the @ClientCacheApplication, @EnableGemfireRepositories or @EnablePdx annotations, since SBDG auto-configures a ClientCache instance by default, auto-configures SD Repositories (particularly when all entity classes are in the same package or sub-package as the Spring Boot app), and auto-configures PDX by default, as well.
The locators = @Locator attribute just specifies that the "DEFAULT" GemFire/Geode Pool, when configured via the ClientCacheFactory, should connect to the cluster via Locators on localhost using the default Locator port, 10334. Therefore, this attribute is mostly useless here, and I would recommend the new @EnableClusterAware annotation from SBDG (see here).
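Applied to your class, a hedged sketch of the SBDG-based equivalent (assuming the org.springframework.geode:spring-geode-starter dependency is on your classpath):

@SpringBootApplication
@EnableClusterAware
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}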
The other attributes can be configured via Spring Boot application.properties, like so:
spring.application.name=CmWeb
spring.data.gemfire.pool.subscription-enabled=true
TIP: You can configure subscription on individually "named" Pools, even via properties, if you are using more than 1 Pool (of connections) in your application, perhaps to route different payloads based on workflows to different "grouped" servers in your cluster, etc.
You started to configure the "DEFAULT" Pool in application.properties already with...
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Regarding...
After a lot of investigation I thought it was that the spring data geode client expects a spring boot geode server
No, SDG does not expect the cluster (of servers) to be configured or bootstrapped with Spring at all. Using Gfsh is perfectly valid. For instance, if the ListRegionsOnServerFunction is not available, SDG falls back to other means (provided by GemFire/Geode itself, which Gfsh knows about and uses).
All the messages you are seeing in the Spring Boot app logs are coming from Geode itself, i.e. nothing to do with Spring. In a nutshell, and FWIW, SDG/SBDG is a facade around the Apache Geode (Pivotal GemFire) API and Java client driver. SDG/SBDG is at the mercy of this client doing the right thing, which of course, is partially dependent on proper configuration. Still... I am really just thinking out loud now since I suspect you are already well aware of (or have discovered) all of this.
I would also say the Java client and Native Client are not exactly an apples-to-apples comparison either. Meaning, if you developed a client using purely the Apache Geode (Pivotal GemFire) API without Spring, you'd have the exact same problem.
I have never seen a case where the first connection is established but subsequent connections get a "Connection refused", o.O #argh
Have you tried this same configuration/arrangement with older Geode versions, e.g. 1.9?
Sorry for your troubles. I will think on this more.

JBOSS EAP 7 - EJB Client user data

I have migrated my EJB application from JBoss 5.0.1 to JBoss EAP 7.
I want to pass user data from the EJB client to my EJB.
I'm using this code to pass a custom attribute to the EJB server, but it does not work anymore.
Client:
public class CustomData extends SimplePrincipal {
    String userData1;

    public CustomData(String userData1) {
        this.userData1 = userData1;
    }
}

SecurityClient client = SecurityClientFactory.getSecurityClient();
client.setSimple(new CustomData("MyData"), credentials.getPass());
client.login();

Server:

@Resource
SessionContext ejbCtx;

Principal data = ejbCtx.getCallerPrincipal();
data.getName(); // returns "anonymous"
How can I fix this on the new JBoss?
1. Create the client-side interceptor
This interceptor must implement org.jboss.ejb.client.EJBClientInterceptor. The interceptor is expected to pass the additional security token through the context data map, which can be obtained via a call to EJBClientInvocationContext.getContextData().
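For example, a minimal sketch of such a client-side interceptor (the context-data key "SecurityToken" and the token value are illustrative, not a fixed API contract):

import org.jboss.ejb.client.EJBClientInterceptor;
import org.jboss.ejb.client.EJBClientInvocationContext;

public class ClientSecurityInterceptor implements EJBClientInterceptor {

    @Override
    public void handleInvocation(EJBClientInvocationContext context) throws Exception {
        // Pass the additional security token to the server via the context data map
        context.getContextData().put("SecurityToken", "MyData");
        context.sendRequest();
    }

    @Override
    public Object handleInvocationResult(EJBClientInvocationContext context) throws Exception {
        return context.getResultObject();
    }
}

Such an interceptor is typically registered on the client, e.g. through a META-INF/services/org.jboss.ejb.client.EJBClientInterceptor file on the classpath.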
2. Create and configure the server-side container interceptor
Container interceptor classes are simple Plain Old Java Objects (POJOs). They use the @javax.annotation.AroundInvoke annotation to mark the method that is invoked during the invocation on the bean.
a. Create the container interceptor
This interceptor retrieves the security authentication token from the context and passes it to the JAAS (Java Authentication and Authorization Service) domain for verification, as sketched below.
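As a rough sketch, the container interceptor might pull the token back out of the invocation context before delegating (the key matches the client-side sketch above and is illustrative; the JAAS handoff itself is elided here, see the linked documentation):

public class ServerSecurityInterceptor {

    @AroundInvoke
    public Object aroundInvoke(InvocationContext ctx) throws Exception {
        // Read the token the client-side interceptor stored in the context data
        Object token = ctx.getContextData().get("SecurityToken");
        if (token != null) {
            // ... pass the token to the JAAS login module for verification ...
        }
        return ctx.proceed();
    }
}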
b. Configure the container interceptor
3. Create the JAAS LoginModule
This custom module performs the authentication using the existing authenticated connection information plus any additional security token.
4. Add the custom LoginModule to the chain
You must add the new custom LoginModule to the correct location in the chain so that it is invoked in the correct order. In this example, the SaslPlusLoginModule must be chained before the LoginModule that loads the roles with the password-stacking option set.
a. Configure the LoginModule order using the Management CLI
Management CLI commands can be used to chain the custom SaslPlusLoginModule before the RealmDirect LoginModule that sets the password-stacking option; see the linked documentation for the exact commands.
b. Configure the LoginModule order manually
Alternatively, the LoginModule order can be configured via XML in the security subsystem of the server configuration file. The custom SaslPlusLoginModule must precede the RealmDirect LoginModule so that it can verify the remote user before the user roles are loaded and the password-stacking option is set.
5. Create the remote client
In the code example in the linked documentation, the additional-secret.properties file is assumed to be accessible by the JAAS LoginModule.
See the link:
https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.2/html/Development_Guide/Pass_Additional_Security_For_EJB_Authentication.html
I have done it this way:
Client:
Properties properties = new Properties();
properties.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
properties.put("org.jboss.ejb.client.scoped.context", "true");
properties.put("remote.connection.default.username", "MyData");
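For completeness, a hedged sketch of how such a scoped context is typically finished and used (host, port, bean, and JNDI names are illustrative; only the username property above carries the custom data):

properties.put("remote.connections", "default");
properties.put("remote.connection.default.host", "localhost");
properties.put("remote.connection.default.port", "8080");

Context context = new InitialContext(properties);
MyBeanRemote bean = (MyBeanRemote) context.lookup("ejb:myapp/myejb/MyBean!com.example.MyBeanRemote");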
Server:
public class MyContainerInterceptor {

    @AroundInvoke
    public Object intercept(InvocationContext ctx) throws Exception {
        Connection connection = RemotingContext.getConnection();
        if (connection != null) {
            for (Principal p : connection.getPrincipals()) {
                if (p instanceof UserPrincipal) {
                    if (p.getName() != null && !p.getName().startsWith("$")) {
                        System.out.println(p.getName()); // "MyData" will be printed
                    }
                }
            }
        }
        return ctx.proceed();
    }
}
Don't forget to configure the container interceptor in jboss-ejb3.xml (not in ejb-jar.xml):

<?xml version="1.0" encoding="UTF-8"?>
<jee:assembly-descriptor>
    <ci:container-interceptors>
        <jee:interceptor-binding>
            <ejb-name>*</ejb-name>
            <interceptor-class>package...MyContainerInterceptor</interceptor-class>
        </jee:interceptor-binding>
    </ci:container-interceptors>
</jee:assembly-descriptor>

How can I implement Picketlink Authenticator in the war layer

As the title says, I created a class in the war layer that is annotated with @PicketLink. Note that I have an ear deployment structure (ejb, war).
The custom authenticator:
@PicketLink
public class PicketlinkAuthenticator extends BaseAuthenticator { }
If I put that class in the ejb layer, the authentication is OK, but when I put it in the war layer it seems like it's not found by the project, as it's throwing:
20:49:46,027 INFO [org.picketlink.common] (default task-10) Using logger implementation: org.picketlink.common.DefaultPicketLinkLogger
20:49:46,043 INFO [org.picketlink.idm] (default task-10) PLIDM001000: Bootstrapping PicketLink Identity Manager
20:49:46,068 WARN [org.picketlink.idm] (default task-10) PLIDM001101: Working directory [\tmp\pl-idm] is marked to be always created. All your existing data will be lost.
20:49:46,111 INFO [org.picketlink.idm] (default task-10) PLIDM001100: Using working directory [\tmp\pl-idm].
20:49:46,127 DEBUG [org.picketlink.idm] (default task-10) No partitions to load from \tmp\pl-idm\pl-idm-partitions.db
20:49:46,152 DEBUG [org.picketlink.idm] (default task-10) Initializing Partition [6a373282-0173-4b7d-bd6a-ff0e5dc43436] with id [6a373282-0173-4b7d-bd6a-ff0e5dc43436].
20:49:46,153 DEBUG [org.picketlink.idm] (default task-10) Loaded Agents for Partition [6a373282-0173-4b7d-bd6a-ff0e5dc43436].
20:49:46,154 DEBUG [org.picketlink.idm] (default task-10) Loaded Credentials for Partition [6a373282-0173-4b7d-bd6a-ff0e5dc43436].
Why not just move the authenticator to the ejb side?
-> Because I'm throwing custom errors like "user expired", etc. I need JSF to post these error messages.
Why not move the picketlink dependency to the web layer?
-> Because my account class that extends the PicketLink account is bound to my services.
As suggested here I already added the picketlink module in the war project:
https://docs.jboss.org/author/display/PLINK/JBoss+Modules
<jboss-deployment-structure>
<ear-subdeployments-isolated>false</ear-subdeployments-isolated>
<sub-deployment name="THE-WAR-MODULE-THAT-REQUIRES-PICKETLINK.war">
<dependencies>
<module name="org.picketlink" />
</dependencies>
</sub-deployment>
</jboss-deployment-structure>
Any way around this? I just want to show some custom errors :-(
I was not able to solve this problem, but I have a workaround solution: move the PicketLink dependency to the web layer and just pass the Identity instance to the services that need it.
I have been messing around with the same problem as well for a while now (it's 2016 now ...). What seems to make it work is to add the following CDI annotations:
@PicketLink
@Named
@RequestScoped
public class PicketlinkAuthenticator extends BaseAuthenticator { }
I would have expected the core Authentication Manager to pick this up just based on the @PicketLink annotation, but without the CDI annotations, the custom Authenticator class is never even loaded. Maybe there is another way that would require us to bootstrap PicketLink, but I could not find any references.