MicroProfile LRA on WildFly - How to set up the LRA coordinator host and port on a client application running on WildFly

I have introduced LRA on a MicroProfile application already running on WildFly AS.
To get LRA working I have added the following dependency to my application's pom.xml:
<dependency>
    <groupId>org.jboss.narayana.rts</groupId>
    <artifactId>narayana-lra</artifactId>
    <version>5.10.6.Final</version>
</dependency>
and I have created an LRA coordinator running on the same host and listening on port 8080.
The application works as expected.
Now I want to move the LRA coordinator to a remote host, but I'm not able to configure my application to point to it (at the new host and port).
I have tried putting the following parameters in my microprofile-config.properties:
mp.lra.http.host=<new_host>
mp.lra.http.port=<new_port>
but without effect.
Can anyone suggest how to configure the LRA coordinator host and port on the client application?
Thanks in advance

Narayana doesn't support MicroProfile Config yet, even though it probably should. The properties you want to set are defined only as system properties (i.e., read with System.getProperty(String, String)).
Another issue is that the properties you are looking for are named lra.http.host and lra.http.port respectively. MP LRA made a deliberate decision to remove all coordinator references from the specification so as not to prescribe a particular implementation architecture (a saga can also be implemented with an orchestration pattern).
So you need to set these system properties, for instance, when starting the WildFly server:
bin/standalone.sh -Dlra.http.host=lra-coordinator.com -Dlra.http.port=7777
Finally, if you ever move to the latest Narayana releases, note that these properties were merged into a single property, lra.coordinator.url, which is, however, still read only from system properties.
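Under the newer releases the start command would presumably look something like this (the exact coordinator context path may differ between versions, so treat this as a sketch):
bin/standalone.sh -Dlra.coordinator.url=http://lra-coordinator.com:7777/lra-coordinator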

Related

spring data geode pool is not resolvable as a Pool in the application context

I've come back to a @SpringBootApplication project that uses spring-geode-starter with version 1.2.4, although the same error happens after upgrading to version 1.5.6.
It sets up a Geode client using
@Component
@EnableClusterDefinedRegions(clientRegionShortcut = ClientRegionShortcut.PROXY)
and in order to register interest subscriptions over HTTP, also
@Configuration
@EnableGemFireHttpSession
with a bean
@Bean
public ReactiveSessionRepository<?> reactiveSessionRepository() {
    return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
}
On starting the application the spring data geode client connects to the server (Geode version 1.14) and auto copies regions back to the client, which is great.
However, after all the region handles are copied over, there's an error with @EnableGemFireHttpSession:
Error creating bean with name 'ClusteredSpringSessions' defined in class path resource [org/springframework/session/data/gemfire/config/annotation/web/http/GemFireHttpSessionConfiguration.class] and [gemfirePool] is not resolvable as a Pool in the application context
The first info message in the logs is:
org.springframework.session.data.gemfire.config.annotation.web.http.GemFireHttpSessionConfiguration 1109 sessionRegionAttributes: Expiration is not allowed on Regions with a data management policy of PROXY
org.springframework.data.gemfire.support.AbstractFactoryBeanSupport 277 lambda$logInfo$3: Falling back to creating Region [ClusteredSpringSessions] in Cache [Web]
So the client is trying to create a region ClusteredSpringSessions but it can't. The problem appears to resolve itself if I define a connection pool for HTTP, with a pool connection bean like this:
@Configuration
@EnableGemFireHttpSession(poolName = "devPool")
public class SessionConfig {

    @Bean
    public ReactiveSessionRepository<?> reactiveSessionRepository() {
        return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
    }

    @Bean("devPool")
    PoolFactoryBean sessionPool() {
        PoolFactoryBean pool = new PoolFactoryBean();
        ConnectionEndpoint ce = new ConnectionEndpoint("1.2.3.4", 10334);
        pool.setSubscriptionEnabled(true);
        pool.addLocators(ce);
        return pool;
    }
}
There is still the Expiration is not allowed on Regions with a data management policy of PROXY info message in the log, but this time the Falling back to creating Region [ClusteredSpringSessions] in Cache [Web] appears to work.
I don't understand why a default pool can't connect. If a pool must be explicitly defined, then in version 1.2.4 that requirement can cause this issue.
Since you are using Spring Boot for Apache Geode (SBDG), which is an excellent choice (thank you), you can simply include the spring-geode-starter-session dependency on your @SpringBootApplication classpath, which removes the need to explicitly annotate your Spring Boot application with SSDG's @EnableGemFireHttpSession annotation.
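With Maven that would be roughly the following; the version shown is an assumption, so align it with the spring-geode-starter version you already use:
<dependency>
    <groupId>org.springframework.geode</groupId>
    <artifactId>spring-geode-starter-session</artifactId>
    <version>1.2.4.RELEASE</version>
</dependency>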
See here for more details. I also have a sample application demonstrating the use of SSDG, here. The guide and source code for this example, along with other examples, can be found here.
Also, I would generally advise that users drive the GemFire/Geode cluster configuration from the application and not let the cluster dictate the Regions (and/or other components/configuration) that the client gets. However, SDG's #EnableClusterDefinedRegions annotation is provided and generally useful in the case you do not have control over the GemFire/Geode cluster your application is using. Still, in the (HTTP) Session UC, the GemFire/Geode cluster would need a Session Region (which defaults to "ClusteredSpringSessions" as determined by Spring Session for Apache Geode (SSDG) itself) anyway.
OK, now to the problem at hand...
I think what is happening here is this: for backwards compatibility and legacy reasons, Spring Data for Apache Geode (SDG), on which both SSDG and SBDG are based (SBDG additionally pulls in SSDG as well), defined a GemFire/Geode Pool with the name "gemfirePool", specifically when using the SDG XML namespace and defining a DataSource configuration.
So, it is somewhat naively assumed that users would be explicitly defining a Pool and calling it "gemfirePool", rather than simply relying on a "default" Pool connection to the GemFire/Geode cache server (namely "localhost" and 40404, or, if using Locators (recommended), "localhost" and 10334).
However, for development purposes, and in SBDG specifically, I rely on the fact that GemFire/Geode creates a "DEFAULT" Pool anyway (when no explicit Pool is defined), and forgo the strict requirement that a "gemfirePool" should exist. But SBDG builds on SSDG and SDG, and they still rely on the legacy arrangement (for example).
I have filed an Issue ticket in SSDG to change this and better align SSDG with what SBDG prefers going forward, but I simply have not gotten around to it yet. My apologies for your inconvenience.
Anyway, it is a simple change you can make externally from your Spring Boot application, in application.properties, like so (see here, from the HTTP Session sample I referenced from SBDG above). This lets you configure the Session Region Pool "name".
Also note, it is possible to change the name of the Session Region used by the client, which is useful when you are using SDG's @EnableClusterDefinedRegions and the Region definition pulled down from the cluster is named differently on the server side; use this property.
Additionally, you can also configure the client Session Region data policy using properties as well (for example). A sketch of all three settings follows this paragraph.
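Putting those together in application.properties would look roughly like this; the property names below are quoted from memory from SSDG's well-known properties, so verify them against the SSDG version you are on:
# Pool used by the Session Region (instead of the legacy "gemfirePool")
spring.session.data.gemfire.cache.client.pool.name=DEFAULT
# Client-side Session Region name (must match the server-side Region name)
spring.session.data.gemfire.session.region.name=ClusteredSpringSessions
# Client Session Region data management policy
spring.session.data.gemfire.cache.client.region.shortcut=PROXY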
Regarding the Expiration "info" message you are seeing in the logs...
Since the client Session Region is a PROXY by default, Expiration, Eviction and other Region data management policies (e.g. Compression, etc.) do not actually make much sense.
In fact, SSDG is smart about whether to apply additional Region data management policies locally or not (see here, and specifically, this logic).
The message you are seeing in your application logs is in fact coming from SSDG, specifically. It really serves as a reminder that your Session state management is actually "managed" on the server side (when the application client is using a PROXY, or even a CACHING_PROXY, Region for that matter) and that the corresponding server-side, or cluster, Sessions Region should be configured manually and appropriately, with Expiration policies as well as other things if necessary. Otherwise, no Session expiration would actually happen!
I hope all this makes sense.
If you continue to have problems, feel free to file an Issue ticket and provide an example test or small application replicating your problem.

JBoss connectivity issue

I am getting the following error when trying to connect my application to JBoss:
WARN | ISPN004022: Unable to invalidate transport for server:
/127.0.0.1:12222 ERROR | ISPN004017: Could not fetch transport
org.infinispan.client.hotrod.exceptions.TransportException:: Could not
connect to server: /127.0.0.1:12222
I tried searching a lot for a solution. It would be great if someone could help me out with this. Thanks
Check the following points:
Make sure that your webapp is using the same port as defined in the hotrod socket-binding in the standalone.xml of your JDG configuration folder (see the client sketch at the end of this answer);
Make sure that your webapp is using the proper injection annotations for your RemoteCacheManager class (remember to use the @ApplicationScoped annotation on the class definition and on any additional methods used to get the cache instance);
If you are using JBoss and JDG on the same host, check the declaration of the JBOSS_HOME environment variable. It must point to the JDG installation home directory, not the JBoss EAP home (also check the port-offset settings at startup if you're using a custom shell script);
If you are not using both products on the same host, check firewall and network settings;
Remember to re-deploy the application after every modification and check both EAP and JDG console output for warnings and/or errors.
The following errors are related (for example):
14:38:42,610 WARN [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (http-127.0.0.1:8080-1) ISPN004022:
Unable to invalidate transport for server: /127.0.0.1:11322
14:38:42,610 ERROR [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (http-127.0.0.1:8080-1) ISPN004017:
Could not fetch transport: java.lang.IllegalStateException: Pool not open
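As a concrete reference for the first point, here is a minimal HotRod client sketch using Infinispan's Java client API; the host and port are assumptions and must match your hotrod socket-binding (11222 is the usual default, shifted by any port-offset):
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Host and port must match the hotrod socket-binding in the JDG
        // standalone.xml (adjust for any port-offset you configured).
        builder.addServer().host("127.0.0.1").port(11222);
        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
        // Obtain the default cache and do a round-trip to verify connectivity.
        RemoteCache<String, String> cache = cacheManager.getCache();
        cache.put("ping", "pong");
        System.out.println(cache.get("ping"));
        cacheManager.stop();
    }
}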

How can I set distinct-name on Wildfly?

I am trying EJB invocations from a client to a server; they are actually identical copies of the same EAR file, running on the same machine.
I think that I must set "distinct-name" somewhere, but I cannot find where.
WildFly Developer Guide - EJB invocations from a remote client using JNDI:
distinct-name : This is a WildFly-specific name which can be optionally assigned to the deployments that are deployed on the server. More about the purpose and usage of this will be explained in a separate chapter. If a deployment doesn't use distinct-name then, use an empty string in the JNDI name, for distinct-name
Where is "a separate chapter"?
Set the <distinct-name> element in jboss-app.xml (EAR) or jboss-ejb3.xml (WAR/EJB JAR).
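For an EAR, a minimal jboss-app.xml (placed in META-INF) would look roughly like this; the name appOne is just a placeholder:
<?xml version="1.0" encoding="UTF-8"?>
<jboss-app xmlns="http://www.jboss.com/xml/ns/javaee">
    <distinct-name>appOne</distinct-name>
</jboss-app>
The client then includes that distinct name in the EJB JNDI lookup, e.g. ejb:myapp/myejbs/appOne/MyBean!com.example.MyRemote (app, module, bean, and interface names here are placeholders too).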

How to dynamically configure security for Artemis MQ addresses

I am trying to dynamically create and provide security metadata for Artemis MQ topics (as opposed to defining them statically in broker.xml).
For that purpose I've implemented (as described here) the SecuritySettingPlugin interface.
Now, the issue is that the getSecurityRoles/populateSecurityRoles methods of the implementation are called only at server startup.
So, at some point in time after the MQ server has been started, a topic will be created:
org.apache.activemq.artemis.api.jms.management.JMSServerControl.createTopic("newTopic")
Now I would like Artemis to call my SecuritySettingPlugin implementation again to get the updated security roles (which would include configuration for the newly created newTopic).
Is that possible?
P.S. security-invalidation-interval does not invalidate the roles configuration cache.
It seems there is a way to customize an address's security via the management API:
ActiveMQServerControl.addSecuritySettings()
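A minimal sketch of calling it over JMX; the JMX service URL, address match, and role names here are illustrative assumptions, not values from your broker:
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class AddSecuritySettingsExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint; adjust host/port for your broker's JMX setup.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName name = ObjectNameBuilder.DEFAULT.getActiveMQServerObjectName();
            ActiveMQServerControl control = MBeanServerInvocationHandler
                    .newProxyInstance(connection, name, ActiveMQServerControl.class, false);
            // Arguments: addressMatch, sendRoles, consumeRoles, createDurableQueueRoles,
            // deleteDurableQueueRoles, createNonDurableQueueRoles,
            // deleteNonDurableQueueRoles, manageRoles
            control.addSecuritySettings("jms.topic.newTopic",
                    "publishers", "subscribers",
                    "admins", "admins", "admins", "admins", "admins");
        }
    }
}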

Connect HermesJMS to Wildfly 8.2

We recently changed our application server from Glassfish to WildFly. With Glassfish we used QBrowser to monitor our JMS queues; sadly that tool does not work with WildFly.
After a quick search I found the tool HermesJMS. Although there are lots of guides on how to set up a connection to a JMS queue with it, I couldn't find anything specifically for the JBoss WildFly application server. After lots of reading through different guides I think I can now connect to the WildFly server, but I just can't connect to my JMS queues.
First I tried to connect via JNDI InitialContext. Here are my settings for it:
initialContextFactory: org.jboss.naming.remote.client.InitialContextFactory
providerURL: http-remoting://localhost:
urlPkgPrefixes: org.jboss.naming.remote.client
securityPrincipal: admin
securityCredentials: admin
It does connect, but all I see are my deployed web applications and a "jms" folder. They all contain the same web applications again, plus the jms folder, and appear as a red circle with a white X in it.
So next I tried to set up a session manually via "Create new JMS Session" with the following preferences:
Session: HornetQ
Plugin: HornetQ
Properties:
binding: jms/RemoteConnectionFactory
initialContextFactory: org.jboss.naming.remote.client.InitialContextFactory
providerURL: http-remoting://localhost:
urlPkgPrefixes: org.jboss.naming.remote.client
User: guest Password: pass
The guest user is a user I created in WildFly as an application user.
When I then double-click on one of the queues, it says that there is no such queue.
javax.jms.JMSException: There is no queue with name java:jboss/jms/queue/ngsEmailProvRequestQueue
at org.hornetq.jms.client.HornetQSession.createQueue(HornetQSession.java:397)
at hermes.impl.jms.SimpleDestinationManager.createDesintaion(SimpleDestinationManager.java:60)
at hermes.impl.JNDIDestinationManager.createDesintaion(JNDIDestinationManager.java:105)
at hermes.impl.jms.SimpleDestinationManager.getDestination(SimpleDestinationManager.java:137)
at hermes.impl.jms.AbstractSessionManager.getDestination(AbstractSessionManager.java:387)
at hermes.impl.DefaultHermesImpl.getDestination(DefaultHermesImpl.java:323)
at hermes.browser.tasks.BrowseDestinationTask.invoke(BrowseDestinationTask.java:122)
at hermes.browser.tasks.TaskSupport.run(TaskSupport.java:175)
at hermes.browser.tasks.ThreadPool.run(ThreadPool.java:170)
at java.lang.Thread.run(Thread.java:745)
Does anybody know what I'm missing? Is it even possible to get HermesJMS to work with WildFly? Or, if not, is there an alternative monitoring tool for JMS queues?
Thank you for your help.
To work with WildFly, follow this doc: https://developer.jboss.org/wiki/UsingHermesJMSWithHornetQ
Second part: Configuring HermesJMS for JBoss7 / EAP6 with HornetQ
And change those values:
binding=jms/RemoteConnectionFactory
initialContextFactory=org.jboss.naming.remote.client.InitialContextFactory
providerURL=http-remoting://localhost:8080
urlPkgPrefixes=org.jboss.naming.remote.client
In the destinations, also change:
Name: sample
Domain: QUEUE
Maybe you could have a look at JMSToolbox on sourceforge: https://sourceforge.net/projects/jmstoolbox/?source=directory
I recently revisited this as the team is moving from Glassfish (yaye...) to WildFly. I tried with WildFly 9 and it works.
I think it is a matter of exporting your queue name; see below:
java:/jms/queue/test does not work
java:jboss/exported/jms/queue/test works
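For reference, exporting a queue in the WildFly messaging subsystem (standalone-full.xml) would look roughly like this; the queue name test matches the example above:
<jms-queue name="test">
    <entry name="java:/jms/queue/test"/>
    <entry name="java:jboss/exported/jms/queue/test"/>
</jms-queue>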
Note: WildFly 9.2 is the last version that ships HornetQ; WildFly 10+ uses Artemis instead.