Spring Data Geode register interest is not receiving events - geode

Related to this question, I've set up a Spring Data Geode client application with
@EnableClusterDefinedRegions(clientRegionShortcut=ClientRegionShortcut.CACHING_PROXY)
and, by ensuring all classes are autowired and using @Resource, the Geode server regions are set up and instantiated on the client.
@Resource(name = "request")
private Region<String, Request> request;
I can put and get on the regions like this. However, when I register interest in a key on the server, updates from other clients to the server are not received by the Spring Boot client. The register interest code:
request.registerInterestForAllKeys();
request.getAttributesMutator().addCacheListener(new myListener());
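For reference, myListener here is just a plain CacheListenerAdapter along these lines (a minimal sketch; the class name and generic types are assumed from the region declaration above):

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

// Minimal sketch of the listener wired in above, with the "usual"
// afterCreate/afterUpdate overrides mentioned further down.
public class myListener extends CacheListenerAdapter<String, Request> {

    @Override
    public void afterCreate(EntryEvent<String, Request> event) {
        System.out.println("afterCreate: " + event.getKey() + " -> " + event.getNewValue());
    }

    @Override
    public void afterUpdate(EntryEvent<String, Request> event) {
        System.out.println("afterUpdate: " + event.getKey() + " -> " + event.getNewValue());
    }
}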
The logs show the interest is added to the region:
DEBUG [main] org.apach.geode.inter.cache.GemFireCacheImpl 4388 registerInterestStarted: registerInterestsStarted: new count = 1
TRACE [main] org.apach.geode.inter.InternalDataSerializer 2194 basicWriteObject: basicWriteObject: KEYS_VALUES
TRACE [main] org.apach.geode.inter.InternalDataSerializer 1535 writeDSFID: writeDSFID 37 class=class org.apache.geode.internal.cache.tier.sockets.InterestResultPolicyImpl
TRACE [main] org.apach.geode.cache.clien.inter.OpExecutorImpl 568 executeOnQueuesAndReturnPrimaryResult: sending org.apache.geode.cache.client.internal.RegisterInterestOp$RegisterInterestOpImpl@5e1a7d3 to backups: []
TRACE [main] org.apach.geode.cache.clien.inter.OpExecutorImpl 584 executeOnQueuesAndReturnPrimaryResult: sending org.apache.geode.cache.client.internal.RegisterInterestOp$RegisterInterestOpImpl@5e1a7d3 to primary: Connection[1.2.3.4:40404]@613231852
TRACE [main] org.apach.geode.cache.clien.inter.AbstractOp 85 attemptSend: Sending op=RegisterInterestOp$RegisterInterestOpImpl using Connection[1.2.3.4:40404]@613231852
When another client adds or changes a value on the server region that event is not getting to the spring boot client. The CacheListenerAdapter with usual afterCreate and afterUpdate overrides is not called.
The use case is to register and unregister to lots of different keys on the fly.
If I use the Spring Boot app itself to put to the local region, then the event handler is called, so it's not an issue with the Spring Boot code; this looks like a connection pool and server registration issue of some kind.

The problem was my application.properties had
spring.data.gemfire.cache.client.durable-client-id=myListener
For some reason the CacheListenerAdapter is never invoked when the client is configured as a durable client.
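If durable subscriptions are actually what you want, the (hedged) explanation seems to be that a durable client queues subscription events on the server and only starts receiving them once the client signals readiness, and the interest itself must be registered as durable. A minimal sketch of that, assuming the request region from above and the standard Geode client API (verify the overloads against your Geode version):

import org.apache.geode.cache.InterestResultPolicy;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

// Register durable interest (third argument) and then tell the server this
// durable client is ready to receive its queued subscription events.
ClientCache clientCache = ClientCacheFactory.getAnyInstance();
request.registerInterest("ALL_KEYS", InterestResultPolicy.KEYS_VALUES, true);
clientCache.readyForEvents();

SDG can also, if I recall correctly, signal readiness declaratively (e.g. @ClientCacheApplication(readyForEvents = true)); the simpler fix here, since durable subscriptions were not needed, was just to drop the durable-client-id property.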

Related

started geode spring boot and save to remote region but failed to start bean gemfireClusterSchemaObjectInitializer

With a simple client app, I make an object and an object repository, connect to a Geode cluster, then run a @Bean ApplicationRunner to put some data to a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner StartedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
The question Problem creating region and persist region to disk Geode Gemfire Spring Boot helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the remote cluster appear to be on localhost. If I remove it altogether, then the put works.
If I remove just useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction
In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. the Spring [Boot] Data, GemFire/Geode application) to be pushed from the client-side to the cluster (of GemFire/Geode servers).
I say "enables" because it depends on the client-side configuration metadata (i.e. Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you defined with a bean definition (in XML, or JavaConfig with @Bean, etc). Implicit configuration is auto-configuration or using SDG annotations like @EnableEntityDefinedRegions or @EnableCachingDefinedRegions, etc.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls (by default and...) as a fallback.
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging as opposed to using Gfsh. You can do all sorts of interesting combinations: Gfsh Locator with Spring servers, Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
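For illustration, a Spring-bootstrapped Geode server (the case where those canned Functions are registered for you) can be as small as the following sketch; the class name and locator address are hypothetical, not taken from the question:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;

// Hypothetical Spring-bootstrapped Geode server joining a Locator on localhost[10334].
@SpringBootApplication
@CacheServerApplication(name = "SpringGeodeServer", locators = "localhost[10334]")
public class ServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServerApplication.class, args);
    }
}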
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, Spring is generally not on the servers' classpath, and the "provided, canned" Functions I referred to above are not present and automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or two about ;-) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have a full installation of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode (and especially) Pivotal GemFire distributions are not; the JARs are, but the full distro isn't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
    logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.

java connection refused error with spring boot data geode remote locator

Per my question Apache Geode Web framework, I've checked through various Spring guides from here and Spring Data Geode samples from here, and written a short Spring Data Geode application, but it cannot connect to the remote, Gfsh-started Geode locator. The Application class is:
package cm;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication.Locator;
import org.springframework.data.gemfire.config.annotation.EnablePdx;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
and in the resources directory application.properties I've set up the remote locator:
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Build and run the application, and it discovers the locator (which it reports by hostname):
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : AutoConnectionSource discovered new locators [UAT:10334]
A couple of seconds later it throws the error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : locator UAT:10334 is not running.
and
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_232]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_232]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[na:1.8.0_232]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_232]
at java.net.Socket.connect(Socket.java:607) ~[na:1.8.0_232]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:958) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:899) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:888) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.getServerVersion(TcpClient.java:290) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:184) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocatorUsingConnection(AutoConnectionSourceImpl.java:209) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocator(AutoConnectionSourceImpl.java:199) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryLocators(AutoConnectionSourceImpl.java:287) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl$UpdateLocatorListTask.run2(AutoConnectionSourceImpl.java:500) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1371) [geode-core-1.9.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_232]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_232]
at org.apache.geode.internal.ScheduledThreadPoolExecutorWithKeepAlive$DelegatingScheduledFuture.run(ScheduledThreadPoolExecutorWithKeepAlive.java:276) [geode-core-1.9.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_232]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
After a lot of investigation I thought the Spring Data Geode client might expect a Spring Boot Geode server, per Connecting GemFire using Spring Boot and Spring Data GemFire, so I downloaded the ListRegionsOnServerFunction jar and deployed it on the Gfsh-started server (I have not yet restarted the server), but that produces the same error condition.
Following Spring-Data-Gemfire - Unable to contact a Locator service. Operation either timed out or Locator does not exist, if I change the application.properties from
spring.data.gemfire.pool.locators=1.2.3.4[10334]
to
spring.gemfire.locators=1.2.3.4[10334]
or other variations then the app can't find the remote locator and throws:
[Timer-DEFAULT-3] o.a.g.c.c.i.AutoConnectionSourceImpl : locator localhost/127.0.0.1:10334 is not running.
Writing this question I finally found How to connect a remote-locator in Geode, and I also can't PING the Gfsh-started server from the Spring app. However, the server bind address is set up properly for remote locator clients, and various other services and a UI using a locally built Geode Native Client for Geode v1.10 can connect, so I suspect PING may simply be disabled across this (semi-internal) network by default. I also disabled the firewall rules for ports 10334, 1099 and 40404 to allow all traffic, but I still get the same error condition.
It turns out, from repeated INFO messages in the Spring Boot app after the connection refused error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList changing locator list: loc form: LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false] ,loc to: UAT:10334
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList locator list from:[UAT:10334, /1.2.3.4:10334] to: [LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false], LocatorAddress [socketInetAddress=/1.2.3.4:10334, hostname=1.2.3.4, isIpString=true]]
and then running list clients on the server, the connection from the Spring Boot app to the Geode server v 1.10 is in fact established. Arrrgh!
It means the locator logic is working but this doesn't explain why after the first connection there's a java.net.ConnectException: Connection refused: connect error. Any ideas?
One quick note about your Spring Boot application class...
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
The following statements are true iff you are using Spring Boot for Apache Geode (or Pivotal GemFire), which is highly recommended.
When using SBDG (by declaring the correct org.springframework.geode:spring-geode-starter dependency on your application classpath), you do not need to explicitly declare the @ClientCacheApplication, @EnableGemfireRepositories or @EnablePdx annotations, since SBDG auto-configures a ClientCache instance by default, auto-configures Spring Data Repositories (particularly when all entity classes are in the same package or a sub-package of the Spring Boot app), and auto-configures PDX by default as well.
The locators = @Locator attribute just specifies that the "DEFAULT" GemFire/Geode Pool, when configured via the ClientCacheFactory, should connect to the cluster via Locators on localhost using the default Locator port, 10334. Therefore, this attribute is mostly useless here, and I would recommend the new @EnableClusterAware annotation from SBDG (see here).
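Concretely, the SBDG-based application class could be trimmed down to something like this sketch (assuming your entity and repository types live in the application's package or a sub-package, per the auto-configuration note above):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.geode.config.annotation.EnableClusterAware;

// SBDG auto-configures the ClientCache, Spring Data Repositories and PDX;
// @EnableClusterAware detects whether a cluster is available and adapts the configuration.
@SpringBootApplication
@EnableClusterAware
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}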
The other attributes can be configured via Spring Boot application.properties, like so:
spring.application.name=CmWeb
spring.data.gemfire.pool.subscription-enabled=true
TIP: You can configure subscription on individually "named" Pools, even via properties, if you are using more than 1 Pool (of connections) in your application, perhaps to route different payloads based on workflows to different "grouped" servers in your cluster, etc.
You started to configure the "DEFAULT" Pool in application.properties already with...
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Regarding...
After a lot of investigation I thought it was that the spring data geode client expects a spring boot geode server
No, SDG does not expect the cluster (of servers) to be configured or bootstrapped with Spring at all. Using Gfsh is perfectly valid. For instance, if the ListRegionsOnServerFunction is not available, SDG falls back to other means (provided by GemFire/Geode itself, which Gfsh knows about and uses).
All the messages you are seeing in the Spring Boot app logs are coming from Geode itself, i.e. nothing to do with Spring. In a nutshell, and FWIW, SDG/SBDG is a facade around the Apache Geode (Pivotal GemFire) API and Java client driver. SDG/SBDG is at the mercy of this client doing the right thing, which of course, is partially dependent on proper configuration. Still... I am really just thinking out loud now since I suspect you are already well aware of (or have discovered) all of this.
I would also say the Java client and the Native Client are not exactly an apples-to-apples comparison either. Meaning, if you developed a client using purely the Apache Geode (Pivotal GemFire) API without Spring, you'd have the exact same problem.
I have never seen a case where the first connection is established but subsequent connections get a "Connection refused", o.O #argh
Have you tried this same configuration/arrangement with older Geode versions, e.g. 1.9?
Sorry for your troubles. I will think on this more.

Spring MVC on WebSphere Liberty, 404s on all Spring URLs

Eclipse Oxygen with WebSphere Development Tools (WDT), Spring MVC 4.3.14, WebSphere Liberty Core 18.0.0.1 on Java 8. Liberty Features enabled (deliberately not latest) are:
<featureManager>
<feature>adminCenter-1.0</feature>
<feature>localConnector-1.0</feature>
<feature>jaxrs-1.1</feature>
<feature>concurrent-1.0</feature>
<feature>webProfile-6.0</feature>
<feature>jaxb-2.2</feature>
</featureManager>
JSPs on the context root are working fine, so that's correct. Also, ibm-web-ext.xml has <context-root uri="/webapp/gatewaymvm/" />
The Spring startup logging indicates that my #Controller classes are binding to the paths I expect:
10:31:24,102 DEBUG org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping:Looking for request mappings in application context: WebApplicationContext for namespace 'Spring MVC Dispatcher-servlet': startup date [Thu Apr 05 10:31:22 CDT 2018]; parent: Root WebApplicationContext
....
10:31:24,125 DEBUG org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping:1 request handler methods found on class mypackage.QueryTransactionController: {public mypackage.QueryTransResponse mypackage.QueryTransactionController.processRequest(mypackage.QueryTransRequest,javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) throws java.io.IOException={[/QueryTransaction],methods=[POST],consumes=[application/json || application/xml],produces=[application/json || application/xml]}}
10:31:24,125 INFO org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping:Mapped "{[/QueryTransaction],methods=[POST],consumes=[application/json || application/xml],produces=[application/json || application/xml]}" onto public mypackage.QueryTransResponse mypackage.QueryTransactionController.processRequest(mypackage.QueryTransRequest,javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) throws java.io.IOException
...
10:31:24,130 DEBUG org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping:1 request handler methods found on class mypackage.TestPostJSONDocumentController: {public java.lang.String mypackage.TestPostDocumentController.execute(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) throws java.io.IOException={[/testPostJSONDoc],methods=[POST]}}
10:31:24,130 INFO org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping:Mapped "{[/testPostJSONDoc],methods=[POST]}" onto public java.lang.String mypackage.TestPostDocumentController.execute(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse) throws java.io.IOException
Yet, when I hit any of those URLs, they produce 404 responses and log entries like the following:
10:32:40,067 DEBUG org.springframework.web.servlet.DispatcherServlet:DispatcherServlet with name 'Spring MVC Dispatcher' processing POST request for [/webapp/gatewaymvm/testPostJSONDoc]
10:32:40,067 DEBUG org.springframework.webflow.mvc.servlet.FlowHandlerMapping:No flow mapping found for request with URI '/webapp/gatewaymvm/testPostJSONDoc'
10:32:40,068 DEBUG org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping:Looking up handler method for path testPostJSONDoc
10:32:40,075 DEBUG org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping:Did not find handler method for [testPostJSONDoc]
10:32:40,076 WARN org.springframework.web.servlet.PageNotFound:No mapping found for HTTP request with URI [/webapp/gatewaymvm/testPostJSONDoc] in DispatcherServlet with name 'Spring MVC Dispatcher'
This same application, when deployed to "Traditional" WAS, works as expected. I imagine there's something obvious I'm missing about Liberty, Liberty under Eclipse, or Spring MVC under Liberty.
At some point during my testing, where I had code that constructed a URL from the current request, I had seen /webapp/gatewaymvm//resource, with two slashes together.
So I tried removing the trailing slash from the places where I had declared a context root of /webapp/gatewaymvm/, and that resolved the issue: either in server.xml if I have the WAR installed there directly, or in application.xml if I have the WAR installed in an EAR/Enterprise project.
Interesting that the JSPs worked with the trailing slash there, but the Spring paths did not.
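For anyone hitting the same thing, the change amounts to dropping the trailing slash from the context root declaration, wherever it is declared for your deployment (shown here against the ibm-web-ext.xml entry quoted above; adjust for server.xml or application.xml as appropriate):

<!-- ibm-web-ext.xml (or the equivalent context-root in server.xml / application.xml): no trailing slash -->
<context-root uri="/webapp/gatewaymvm" />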

Wildfly10 (EAP 7) call jboss 5.0.1 EJB without legacy jars

Dears,
I'm trying to call an EJB3 bean on JBoss 5.0.1 from WildFly 10 or EAP 7.
My code:
final Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
env.put("java.naming.factory.url.pkgs", "org.jboss.ejb.client.naming");
env.put(Context.PROVIDER_URL, "remoting://localhost:1099");
env.put("org.jboss.ejb.client.scoped.context", "true");
InitialContext initialContext = new InitialContext(env);
TestBeanRemote remote = (TestBeanRemote) initialContext.lookup(
"ejb:TestEar/TestBean/TestBean!com.test.TestBeanRemote");
but it says:
Exception in thread "main" java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:BilllingFacadeCallbackEAR, moduleName:BilllingFacadeCallback, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext#3b088d51
at org.jboss.ejb.client.EJBClientContext.requireEJBReceiver(EJBClientContext.java:798)
at org.jboss.ejb.client.ReceiverInterceptor.handleInvocation(ReceiverInterceptor.java:128)
at org.jboss.ejb.client.EJBClientInvocationContext.sendRequest(EJBClientInvocationContext.java:186)
at org.jboss.ejb.client.EJBInvocationHandler.sendRequestWithPossibleRetries(EJBInvocationHandler.java:255)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:200)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:183)
at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:146)
at com.sun.proxy.$Proxy2.getActions(Unknown Source)
at TestStandalone.main(TestStandalone.java:28)
Is there any solution to call legacy jboss without old jars?
There is a legacy subsystem for this but I don't know its current status.
https://github.com/jboss-set/jboss-as-legacy
The CORBA standard defines an "across the wire" standard for making remote method calls called IIOP or "Internet Inter-ORB Protocol".
You need to set up to use CORBA IIOP in order to make platform independent remote EJB calls.
Therefore, you need to:
configure JBoss 5 so that it can handle incoming IIOP calls;
configure WildFly 10/EAP 7 to make outgoing EJB invocations using IIOP.
There is some information on this in the WildFly 10 EJB3 Reference Guide although I'm not sure how up to date that is.
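As a rough, hypothetical sketch of what the client-side lookup could look like over IIOP once both sides are configured, using the JDK's CosNaming JNDI provider instead of the JBoss client jars (the naming-service port 3528 and the name the bean is bound under are assumptions; check your JBoss 5 IIOP configuration for the actual values):

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical RMI-IIOP lookup; names and port are assumptions, not taken from the question.
Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.cosnaming.CNCtxFactory");
env.put(Context.PROVIDER_URL, "iiop://localhost:3528");

InitialContext initialContext = new InitialContext(env);
Object ref = initialContext.lookup("TestBean"); // CORBA name the bean is bound under (assumption)
TestBeanRemote remote = (TestBeanRemote) PortableRemoteObject.narrow(ref, TestBeanRemote.class);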
The issue is normally caused by a transaction reaching its timeout value.
So it may be that the application logic is correctly handling the scenario in this case and is not attempting to retry the activity.
This error can have several causes:
Connection: the connection is broken
Security: invalid user/password
EJB missing: connected, but the EJB is not deployed there
SSL
Ports
IP address
JBoss maintains a persistent connection to the other server, so when the client sees this message it means there is no connection to a server that has the EJB you are trying to call; a message will be logged when the connection to the other server fails.
Caused by: java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling
Can you clarify the below:
1. Are your EJBs deployed on JBoss 5.0.1?
2. Are you invoking the EJBs from WildFly 10 or EAP 7, i.e. is your client deployed on WildFly 10 or EAP 7?

Instantiating EntityManagerFactory with GWT, JPA and Tomcat

I am using GWT with JPA and Hibernate in an Apache Tomcat container. When I test my DAO and database connection from a standalone Java application, it works fine. However, when I use it in the server environment, it SOMETIMES works and sometimes doesn't. Here is an abridged sequence of log events:
org.hibernate.type.BasicTypeRegistry - Adding type registration boolean -> org.hibernate.type.BooleanType@82b436
INFO org.hibernate.cfg.Environment - Hibernate 3.6.0.Final
42937 [btpool0-0] INFO org.hibernate.cfg.Environment - hibernate.properties not found
42940 [btpool0-0] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist
[btpool0-0] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling
43038 [btpool0-0] DEBUG org.hibernate.id.factory.DefaultIdentifierGeneratorFactory - Registering IdentifierGenerator strategy [uuid2] -> [class org.hibernate.id.UUIDGenerator]
43069 [btpool0-0] INFO org.hibernate.ejb.Version - Hibernate EntityManager 3.6.0.Final
43090 [btpool0-0] DEBUG org.hibernate.type.BasicTypeRegistry - Adding type registration text -> org.hibernate.type.TextType@1cf00aa
43106 [btpool0-0] DEBUG org.hibernate.ejb.Ejb3Configuration - Look up for persistence unit: transactions-optional
43269 [btpool0-0] DEBUG org.hibernate.ejb.Ejb3Configuration - Detect class: true; detect hbm: true
43285 [btpool0-0] DEBUG org.hibernate.ejb.packaging.AbstractJarVisitor - Searching mapped entities in jar/par: file://xxxxx
43378 [btpool0-0] DEBUG org.hibernate.ejb.packaging.AbstractJarVisitor - Filtering: com.demo.server.hello
43492 [btpool0-0] DEBUG org.hibernate.ejb.packaging.AbstractJarVisitor - Java element filter matched for com.demo.server.hello
43505 [btpool0-0] DEBUG org.hibernate.ejb.Ejb3Configuration - Detect class: true; detect hbm: true
43505 [btpool0-0] DEBUG org.hibernate.ejb.Ejb3Configuration - Creating Factory: transactions-optional
After this I get no log messages and my client layer cannot talk to the database layer. When my client layer is able to talk to the database layer, the entry following the above log entries is as follows:
1063 [main] DEBUG org.hibernate.cfg.Configuration - Processing hbm.xml files
If you could point out what might be going wrong, I would really appreciate it. I can't figure out whether it's an Eclipse compilation fault, a problem in the GWT plugin, or (most likely) my own programming bug.
Are you sure you are using JPA? I'm not familiar with GWT, but I assume it deploys your application as a WAR file. If so, check whether your WAR file contains a META-INF/persistence.xml file, and verify the connection details in it.
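As a quick sanity check (a sketch, assuming the transactions-optional persistence unit name visible in your log output and a standard META-INF/persistence.xml), you could build the EntityManagerFactory once, eagerly, so a misconfiguration fails at startup rather than intermittently per request:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Build the EMF once (e.g. triggered from a ServletContextListener) so configuration
// or connection problems surface immediately at deployment time.
public final class JpaBootstrap {

    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("transactions-optional");

    public static EntityManagerFactory get() {
        return EMF;
    }
}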
1063 [main] DEBUG org.hibernate.cfg.Configuration - Processing hbm.xml files
It seems that Hibernate is creating a session every time your client is able to talk to the database. The fact that it's sometimes called (and works) indicates that it's not a problem with Hibernate; otherwise, you'd see consistent behavior. So, I would double-check whether the requests are failing before reaching Hibernate. For instance, I would try to add some debug log entries before and after Hibernate is called.