started geode spring boot and save to remote region but failed to start bean gemfireClusterSchemaObjectInitializer - spring-data-gemfire

With a simple client app, I make an object and object repository, connect to a Geode cluster, then run a @Bean ApplicationRunner to put some data into a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner startedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect

The answer to Problem creating region and persist region to disk Geode Gemfire Spring Boot helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the client treat the remote cluster as if it were on localhost. If I remove it altogether, the put works.
If I remove just useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction

In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. a Spring [Boot] Data GemFire/Geode application) to be pushed from the client-side to the cluster (of GemFire/Geode servers).
I say "enable" because it depends on the client-side configuration metadata (i.e. Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you defined with a bean definition (in XML, or JavaConfig with #Bean, etc). Implicit configuration is auto-configuration or using SDG annotations like #EnableEntityDefinedRegions or #EnableCachingDefinedRegions, etc.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, and specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, then SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls (by default, and as a fallback).
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging as opposed to using Gfsh. You can do all sorts of interesting combinations: Gfsh Locator with Spring servers, Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, Spring is generally not on the servers' classpath, and the "provided, canned" Functions I referred to above are not present and automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or 2 about ;-)) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
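If the Management REST API is not running on the client's localhost, you also have to tell the annotation where to find it. A minimal sketch, assuming the host and port attributes of @EnableClusterConfiguration (verify against your SDG version's Javadoc); "remotehost" is a placeholder, and 7070 is Geode's default http-service-port:
// point the configuration push at the remote Management REST API
@EnableClusterConfiguration(useHttp = true, host = "remotehost", port = 7070)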
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have a full installation of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode (and especially) Pivotal GemFire distributions are not. The JARs are, but the full distro isn't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
    logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard-coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
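Switching to SBDG means declaring the starter instead of SDG directly, something like this (a sketch; pick the version matching your Spring Boot line):
<dependency>
    <groupId>org.springframework.geode</groupId>
    <artifactId>spring-geode-starter</artifactId>
    <version><!-- version matching your Spring Boot line --></version>
</dependency>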
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.

Related

java connection refused error with spring boot data geode remote locator

Per my question Apache Geode Web framework, I've checked through various Spring guides from here and Spring Data Geode samples from here, and written a short Spring Data Geode application, but it cannot connect to the remote Gfsh-started Geode locator. The Application class is:
package cm;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication.Locator;
import org.springframework.data.gemfire.config.annotation.EnablePdx;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
and in application.properties in the resources directory I've set up the remote locator:
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Build and run the application and it discovers the locator (which it returns as the server name)
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : AutoConnectionSource discovered new locators [UAT:10334]
A couple of seconds later it throws the error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : locator UAT:10334 is not running.
and
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_232]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_232]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[na:1.8.0_232]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_232]
at java.net.Socket.connect(Socket.java:607) ~[na:1.8.0_232]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:958) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:899) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:888) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.getServerVersion(TcpClient.java:290) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:184) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocatorUsingConnection(AutoConnectionSourceImpl.java:209) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocator(AutoConnectionSourceImpl.java:199) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryLocators(AutoConnectionSourceImpl.java:287) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl$UpdateLocatorListTask.run2(AutoConnectionSourceImpl.java:500) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1371) [geode-core-1.9.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_232]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_232]
at org.apache.geode.internal.ScheduledThreadPoolExecutorWithKeepAlive$DelegatingScheduledFuture.run(ScheduledThreadPoolExecutorWithKeepAlive.java:276) [geode-core-1.9.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_232]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
After a lot of investigation I thought it was that the Spring Data Geode client expects a Spring Boot Geode server, per Connecting GemFire using Spring Boot and Spring Data GemFire, so I downloaded the ListRegionsOnServerFunction jar and deployed it on the Gfsh server (have not yet restarted the server...), but that causes the same error condition.
Following Spring-Data-Gemfire - Unable to contact a Locator service. Operation either timed out or Locator does not exist, I try to change the application.properties from
spring.data.gemfire.pool.locators=1.2.3.4[10334]
to
spring.gemfire.locators=1.2.3.4[10334]
or other variations, then the app can't find the remote locator and throws:
[Timer-DEFAULT-3] o.a.g.c.c.i.AutoConnectionSourceImpl : locator localhost/127.0.0.1:10334 is not running.
While writing this question I finally found How to connect a remote-locator in Geode, and I also can't PING the Gfsh server from the Spring app. However, the server bind address is set up properly for remote locator clients, and various other services and a UI using a locally built Geode Native Client for Geode v1.10 can connect. I suspect PING may be disabled across this (semi-internal) network by default. I also disabled the firewall rules for ports 10334, 1099 and 40404 to allow all traffic, but I still get the same error condition.
It turns out, judging from repeated INFO messages in the Spring Boot app after the connection was refused:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList changing locator list: loc form: LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false] ,loc to: UAT:10334
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList locator list from:[UAT:10334, /1.2.3.4:10334] to: [LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false], LocatorAddress [socketInetAddress=/1.2.3.4:10334, hostname=1.2.3.4, isIpString=true]]
and from then running list clients on the server, that the connection from the Spring Boot app to the Geode server v1.10 is in fact established. Arrrgh!
It means the locator logic is working but this doesn't explain why after the first connection there's a java.net.ConnectException: Connection refused: connect error. Any ideas?
1 quick note about your Spring Boot application class...
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
The following statements are true iff you are using Spring Boot for Apache Geode (or Pivotal GemFire), which is highly recommended.
When using SBDG (by declaring the correct org.springframework.geode:spring-geode-starter dependency on your application classpath), you do not need to explicitly declare the @ClientCacheApplication, @EnableGemfireRepositories or @EnablePdx annotations. SBDG auto-configures a ClientCache instance by default, auto-configures Spring Data Repositories (particularly when all entity classes are in the same package or a sub-package of the Spring Boot app), and auto-configures PDX by default, as well.
The locators = @Locator attribute just specifies that the "DEFAULT" GemFire/Geode Pool, when configured via the ClientCacheFactory, should connect to the cluster via Locators, on localhost, using the default Locator port, 10334. Therefore, this attribute is mostly useless, and I would recommend the new @EnableClusterAware annotation from SBDG (see here).
The other attributes can be configured via Spring Boot application.properties, like so:
spring.application.name=CmWeb
spring.data.gemfire.pool.subscription-enabled=true
TIP: You can configure subscription on individually "named" Pools, even via properties, if you are using more than 1 Pool (of connections) in your application, perhaps to route different payloads based on workflows to different "grouped" servers in your cluster, etc.
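For example, something like the following sketch; "analytics" is a hypothetical Pool name, and the exact named-Pool property form should be verified against the SDG annotation-properties appendix for your version:
# subscription enabled only on the (hypothetical) "analytics" Pool
spring.data.gemfire.pool.analytics.locators=1.2.3.4[10334]
spring.data.gemfire.pool.analytics.subscription-enabled=true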
You started to configure the "DEFAULT" Pool in application.properties already with...
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Regarding...
After a lot of investigation I thought it was that the spring data geode client expects a spring boot geode server
No, SDG does not expect the cluster (of servers) to be configured or bootstrapped with Spring at all. Using Gfsh is perfectly valid. For instance, if the ListRegionsOnServerFunction is not available, SDG falls back to other means (provided by GemFire/Geode itself, which Gfsh knows and uses).
All the messages you are seeing in the Spring Boot app logs are coming from Geode itself, i.e. nothing to do with Spring. In a nutshell, and FWIW, SDG/SBDG is a facade around the Apache Geode (Pivotal GemFire) API and Java client driver. SDG/SBDG is at the mercy of this client doing the right thing, which, of course, is partially dependent on proper configuration. Still... I am really just thinking out loud now, since I suspect you are already well aware of (or have discovered) all of this.
I would also say the Java client and the Native Client are not exactly an apples-to-apples comparison either. Meaning, if you developed a client using purely the Apache Geode (Pivotal GemFire) API without Spring, you'd have the exact same problem.
I have never seen a case where the first connection is established but subsequent connections get a "Connection refused", o.O #argh
Have you tried this same configuration/arrangement with older Geode versions, e.g. 1.9?
Sorry for your troubles. I will think on this more.

spring-cloud-sleuth with opentracing

I'm using the latest milestone of spring-cloud-sleuth and I can't seem to get traces emitted through OpenTracing. I have a Tracer bean defined, and Spring Boot seems to acknowledge that, but no traces are being emitted.
Is there a way to check if spring-cloud-sleuth is aware of the Tracer bean?
update
I did see the merged documentation, and I do have a Tracer bean defined, as shown below:
#Bean(name = "tracer")
#Primary
public Tracer lightstepTracer() throws MalformedURLException {
Options opt = lightstepOptionsBuilder.build();
log.info("Instantiating LightStep JRETracer.");
return new JRETracer(opt);
}
I'm not explicitly importing the OpenTracing APIs, because the LightStep tracer pulls them in transitively, but I can try doing that.
I've also explicitly enabled OpenTracing support in my application.yml file.
sleuth:
  opentracing:
    enabled: true
If you go to the latest snapshot (or milestone) documentation and search for the word OpenTracing, you will find your answer. It's here: https://cloud.spring.io/spring-cloud-sleuth/single/spring-cloud-sleuth.html#_opentracing
Spring Cloud Sleuth is compatible with OpenTracing. If you have OpenTracing on the classpath, we automatically register the OpenTracing Tracer bean. If you wish to disable this, set spring.sleuth.opentracing.enabled to false.
So it's enough to just have OpenTracing on the classpath, and Sleuth will work out of the box.
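"Having OpenTracing on the classpath" just means declaring a dependency along these lines (a sketch; version deliberately omitted):
<dependency>
    <groupId>io.opentracing</groupId>
    <artifactId>opentracing-api</artifactId>
</dependency>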

Spring Boot with application managed persistence context

I am trying to migrate an application from EJB3 + JTA + JPA (EclipseLink). Currently, this application makes use of an application-managed persistence context due to an unknown number of databases at design time.
The application-managed persistence context allows us to control how EntityManagers are created (e.g. supply different datasource JNDI names to create the proper EntityManager for a specific DB at runtime).
E.g.
Map<String, Object> properties = new HashMap<>();
properties.put(PersistenceUnitProperties.TRANSACTION_TYPE, "JTA");
// the datasource JNDI name comes from configuration, without prior knowledge of the number of databases
// currently, the DB JNDI names are stored in an externalized file
// the datasource is set up by the operations team
properties.put(PersistenceUnitProperties.JTA_DATASOURCE, "datasource-jndi");
properties.put(PersistenceUnitProperties.CACHE_SHARED_DEFAULT, "false");
properties.put(PersistenceUnitProperties.SESSION_NAME, "xxx");
// create the proper EntityManager to connect to a database decided at runtime
EntityManager em = Persistence.createEntityManagerFactory("PU1", properties).createEntityManager();
// query or update the DB
em.persist(entity);
em.createQuery(...).executeUpdate();
When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container will take care of the transaction start/end/rollback.
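For context, the EJB-era arrangement looked roughly like this (a sketch; LegacyService and MyEntity are illustrative names, and createEm(..) stands in for the EntityManager creation shown above):
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;

@Stateless
public class LegacyService {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void save(MyEntity entity) {
        // the JTA EntityManager is created inside the active container-managed
        // transaction, so it joins that transaction automatically
        EntityManager em = createEm("datasource-jndi");
        em.persist(entity);
    }
}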
Now, I am trying to migrate this application to Spring Boot.
The problem I encounter is that no transaction is started, even after I annotate the method with @Transactional(propagation = Propagation.REQUIRED).
The Spring application is packed as an executable JAR file and runs with embedded Tomcat.
When I try to execute those update APIs, e.g. EntityManager.persist(..), EclipseLink always complains about:
javax.persistence.TransactionRequiredException: 'No transaction is currently active'
Sample code below:
// for data persistence
@Service
class DynamicServiceImpl implements DynamicService {

    // attempt to start a transaction
    @Transactional(propagation = Propagation.REQUIRED)
    public void saveData(String dbJndi, EntityA entity) {
        // this returns false: no transaction started
        TransactionSynchronizationManager.isActualTransactionActive();
        // create an EntityManager based on the input JNDI name to dynamically
        // determine which DB to save the data to
        EntityManager em = createEm(dbJndi);
        // save the data
        em.persist(entity);
    }
}
// restful service
@RestController
class RestController {

    @Autowired
    DynamicService service;

    @RequestMapping(value = "/saveRecord", method = RequestMethod.POST)
    public @ResponseBody String saveRecord() {
        // save data
        service.saveData(...)
    }
}
// startup application
@SpringBootApplication
class TestApp {

    public static void main(String[] args) {
        SpringApplication.run(TestApp.class, args);
    }
}
persistence.xml
-------------------------------------------
<persistence-unit name="PU1" transaction-type="JTA">
    <properties>
        <!-- commented out for Spring to handle the transaction??? -->
        <!-- <property name="eclipselink.target-server" value="WebLogic_10"/> -->
    </properties>
</persistence-unit>
-------------------------------------------
application.properties (just 3 lines of config)
-------------------------------------------
spring.jta.enabled=true
spring.jta.log-dir=spring-test # Transaction logs directory.
spring.jta.transaction-manager-id=spring-test
-------------------------------------------
My usage pattern does not follow the most typical use cases (e.g. with a known number of DBs, as in Spring + JPA + multiple persistence units: Injecting EntityManager).
Can anybody give me advice on how to solve this issue?
Is there anybody who has ever hit this situation that the DBs are not known in design time?
Thank you.
I finally got it to work with:
Enable Tomcat JNDI and create the datasource JNDI for each DS programmatically
Add the transaction dependencies (see the Maven snippet below):
com.atomikos:transactions-eclipselink:3.9.3 (my project uses EclipseLink instead of Hibernate)
org.springframework.boot:spring-boot-starter-jta-atomikos
org.springframework:spring-tx
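In pom.xml form, the coordinates above look like this (a sketch; only the Atomikos/EclipseLink artifact had an explicit version):
<dependency>
    <groupId>com.atomikos</groupId>
    <artifactId>transactions-eclipselink</artifactId>
    <version>3.9.3</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jta-atomikos</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
</dependency>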
You have pretty much answered the question yourself: "When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container will take care of the transaction start/end/rollback".
WebLogic is compliant with the Java Enterprise Edition specification, which is probably why it worked before, but now you are using Tomcat (in embedded mode), which is NOT.
So you simply cannot do what you are trying to do.
This statement in your persistence.xml file:
<persistence-unit name="PU1" transaction-type="JTA">
requires an Enterprise Server (WebLogic, Glassfish, JBoss etc.)
With Tomcat you can only do this:
<persistence-unit name="PU1" transaction-type="RESOURCE_LOCAL">
And you need to handle transactions yourself:
myEntityManager.getTransaction().begin();
... // Do your transaction stuff
myEntityManager.getTransaction().commit();
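In practice you would also guard that with a rollback path, for example (a minimal sketch; em and entity stand for the EntityManager and entity from the surrounding discussion):
EntityTransaction tx = em.getTransaction();
tx.begin();
try {
    em.persist(entity); // do your transaction stuff
    tx.commit();
} catch (RuntimeException e) {
    if (tx.isActive()) {
        tx.rollback(); // undo partial work when something fails
    }
    throw e;
}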

Resteasy Bean Validation Not Working on Remote Server

I have a problem similar to the one described here.
I am using RESTEasy within a standalone Jetty application. When I start the application locally and call a service (e.g. localhost:16880/rest/user/login) bean validation works fine, i.e. I get validation errors like this:
[PARAMETER]
[UserService#login(arg0).appKey]
[app_key may not be null or empty]
[]
However, when I deploy my application to a remote host and call the same service (e.g. remotehost:16880/rest/user/login) bean validation is not invoked at all.
I am using the @ValidateRequest annotation for the service and the @Valid annotation for the bean parameter.
My Resteasy version is 3.0.13.Final, though I have tried earlier versions as well. I have tried to write my custom validator, but that didn't work either.
I am puzzled why the validation works locally, but not on remote server. Any suggestions would be highly appreciated.
Since you are using Jetty as a standalone server, you have to define the RESTEasy validation providers where you define the ServletContextHandler. Note that in a standalone server there is no container to scan for @Provider classes and activate them, so you must do it manually.
I expect that you create and start your server app something like:
// create a server listening at some port
Server server = new Server(port);
// add server handlers
HandlerList handlers = new HandlerList();
initHandlers(handlers);
server.setHandler(handlers);
// start the server
server.start();
In initHandlers you must have defined your RESTEasy support:
public void initHandlers(HandlerList handlers) {
    // define root context handler
    ServletContextHandler servletContextHandler = new ServletContextHandler(ServletContextHandler.SESSIONS);
    servletContextHandler.setContextPath("/");
    handlers.addHandler(servletContextHandler);
    // define RESTEasy handler
    ServletHolder restServlet = new ServletHolder(new HttpServlet30Dispatcher());
    // since this is a standalone server, somewhere you have to define RESTful services and Singletons
    restServlet.setInitParameter("javax.ws.rs.Application", "com.exampleapp.MyRestApplication");
    restServlet.setInitParameter("resteasy.servlet.mapping.prefix", "/rest");
    servletContextHandler.addServlet(restServlet, "/rest/*");
}
So what is left to do now is to add Validation provider as init parameter:
restServlet.setInitParameter("resteasy.providers", "org.jboss.resteasy.plugins.validation.ValidatorContextResolver,org.jboss.resteasy.api.validation.ResteasyViolationExceptionMapper");
On this link I tried to find the name of the validator providers: https://docs.jboss.org/resteasy/docs/3.0.4.Final/userguide/html/Validation.html
RESTEasy obtains a bean validation implementation by looking in the available META-INF/services/javax.ws.rs.ext.Providers files for an implementation of ContextResolver
So it does not say what, but it says where. Now open the resteasy-hibernatevalidator-provider-3.*.jar (from Eclipse -> Maven dependencies, or unzip it manually) and look into META-INF/services/javax.ws.rs.ext.Providers. It says:
org.jboss.resteasy.plugins.validation.hibernate.ValidatorContextResolver
org.jboss.resteasy.api.validation.ResteasyViolationExceptionMapper
If you don't have this dependency, then add it to your pom file:
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-hibernatevalidator-provider</artifactId>
<version>${resteasy.version}</version>
</dependency>
One more note: in the same place where you defined the validation providers, you can also add other providers if you happen to need them (such as JacksonJaxbJson, etc.).

Injecting EJB within JAX-RS resource in JBoss 5

Although there are already quite a few StackOverflow questions, blog entries, etc. on the web, I still cannot figure out a solution to the problem stated below.
Similar to this question (Injecting EJB within JAX-RS resource on JBoss7), I'd like to inject an EJB instance into a JAX-RS class. I tried with JBoss 5, JBoss 7, and WildFly 8. I either get no injection at all (the field is null), or the server does not deploy (as soon as I try to combine all sorts of annotations).
Adding @Stateless to the JAX-RS resource makes the application server know both classes as beans. However, no injection takes place.
Is there a way to inject EJBs into a REST application? What kind of information (in addition to that contained in the question linked to above) could I provide to help?
EDIT: I created a GitHub project showing code that works (with Glassfish 4.0) and does not work (with JBoss 5):
https://github.com/C-Otto/beantest
Commit 4bf2f3d23f49d106a435f068ed9b30701bbedc9d works using Glassfish 4.0.
Commit 50d137674e55e1ceb512fe0029b9555ff7c2ec21 uses Jersey 1.8, which does not work.
Commit 86004b7fb6263d66bda7dd302f2d2a714ff3b939 uses Jersey 2.6, which also does not work.
EDIT 2:
Running the code which I tried on JBoss 5 on Glassfish 4.0 gives:
Exception while loading the app : CDI deployment failure:WELD-001408 Unsatisfied dependencies for type [Ref<ContainerRequest>] with qualifiers [@Default] at injection point [[BackedAnnotatedParameter] Parameter 1 of [BackedAnnotatedConstructor] @Inject org.glassfish.jersey.server.internal.routing.UriRoutingContext(Ref<ContainerRequest>, ProcessingProviders)]
org.jboss.weld.exceptions.DeploymentException: WELD-001408 Unsatisfied dependencies for type [Ref<ContainerRequest>] with qualifiers [@Default] at injection point [[BackedAnnotatedParameter] Parameter 1 of [BackedAnnotatedConstructor] @Inject org.glassfish.jersey.server.internal.routing.UriRoutingContext(Ref<ContainerRequest>, ProcessingProviders)]
at org.jboss.weld.bootstrap.Validator.validateInjectionPointForDeploymentProblems(Validator.java:403)
EDIT 3: The crucial information might be that I'd like a solution that works on JBoss 5.
If you don't want to make your JAX-RS resource an EJB too (@Stateless) and then use @EJB or @Resource to inject it, you can always go with a JNDI lookup (I tend to write a "ServiceLocator" class that gets a service via its class).
A nice resource to read about the topic:
https://docs.jboss.org/author/display/AS71/Remote+EJB+invocations+via+JNDI+-+EJB+client+API+or+remote-naming+project
Sample code:
try {
    // 1. Retrieve the home interface using a JNDI lookup.
    //    Retrieve the initial context for JNDI; no properties are needed when local.
    Context context = new InitialContext();
    //    The java:comp/env/ejb/HelloBean environment entry is specified in web.xml.
    helloHome = (HelloLocalHome) context.lookup("java:comp/env/ejb/HelloBean");
    // 2. Narrow the returned object to a HelloHome object.
    //    Since the client is local, cast it to the correct object type.
    // 3. Create the local Hello bean instance and return the reference.
    hello = (HelloLocal) helloHome.create();
} catch (NamingException e) {
} catch (CreateException e) {
}
This is not "injecting" per-se, but you don't use "new" as-well, and you let the application server give you an instance which is managed.
I hope this was useful and I'm not telling you something you already know!
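For completeness, such a "ServiceLocator" could be as simple as the following sketch (the class and method names are my own):
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class ServiceLocator {

    private ServiceLocator() {
    }

    // Looks up a container-managed instance by JNDI name and casts it to the requested type
    public static <T> T lookup(Class<T> type, String jndiName) {
        try {
            Context context = new InitialContext();
            return type.cast(context.lookup(jndiName));
        } catch (NamingException e) {
            throw new IllegalStateException("JNDI lookup failed for " + jndiName, e);
        }
    }
}
Usage would then be, for example: HelloLocalHome helloHome = ServiceLocator.lookup(HelloLocalHome.class, "java:comp/env/ejb/HelloBean");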
EDIT:
This is an excellent example: https://docs.jboss.org/author/display/AS72/EJB+invocations+from+a+remote+client+using+JNDI
EDIT 2:
As you stated in your comment, you'd like to inject it via annotations. If the JNDI lookup is currently working for you without problems, and if you're using Java EE 6+ (which I'm guessing you are), you can do the following:
@EJB(lookup = "jndi-lookup-string-here")
private RemoteInterface bean;
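The lookup string itself can follow the portable JNDI naming scheme from Java EE 6, for example (a sketch; the app, module, bean and interface names are illustrative):
// java:global/<app-name>/<module-name>/<bean-name>!<fully-qualified-interface>
@EJB(lookup = "java:global/myapp/myejbmodule/HelloBean!com.example.HelloRemote")
private HelloRemote hello;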