JBoss 6 Clustered Singleton - jboss

I am trying to set up JBoss 6 in a clustered environment and use it to host clustered stateful singleton EJBs.
So far we successfully installed a singleton EJB within the cluster, where different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of static variables). We achieved this using the following configuration:
Bean interface:
@Remote
public interface IUniverse {
...
}
Bean implementation:
@Clustered @Stateful
public class Universe implements IUniverse {
private static Vector<String> messages = new Vector<String>();
...
}
jboss-beans.xml configuration:
<deployment xmlns="urn:jboss:bean-deployer:2.0">

    <!-- This bean is an example of a clustered singleton -->
    <bean name="Universe" class="Universe">
    </bean>

    <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController">
        <property name="HAPartition"><inject bean="HAPartition"/></property>
        <property name="target"><inject bean="Universe"/></property>
        <property name="targetStartMethod">startSingleton</property>
        <property name="targetStopMethod">stopSingleton</property>
    </bean>
</deployment>
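For reference, the HASingletonController invokes the configured targetStartMethod/targetStopMethod on the target bean when HA singleton mastership starts or stops on a node. A minimal hypothetical shape of that target (the method bodies are illustrative, not taken from the original code) could be:
import java.util.Vector;

// Hypothetical target matching the jboss-beans.xml above: the controller calls
// startSingleton() on the node elected master and stopSingleton() when mastership
// moves away. Any state held only in static fields on the old master is not
// transferred automatically.
public class Universe implements IUniverse {

    private static Vector<String> messages = new Vector<String>();

    public void startSingleton() {
        // this node just became the singleton master
    }

    public void stopSingleton() {
        // this node is no longer the master
    }
}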
The main problem with this implementation is that, after the master node (the one holding the state of the singleton EJB) shuts down gracefully, the singleton's state is lost and reset to its default. Please note that everything was built following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem, or on where to find JBoss 6 clustering documentation, is appreciated.

I don't think you actually need to make stateful session beans singletons: the way to invoke a stateful session bean is to keep the reference you obtained and keep invoking that same instance through it. The application server cluster maintains the statefulness of the bean by replicating the EJB session across the cluster.
In the end, though, you may want to reconsider whether you really need a stateful session bean at all.
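For illustration only, the usual client-side pattern looks roughly like the sketch below; the JNDI name and the addMessage method are placeholders, since the real interface and bindings are not shown in the question:
import javax.naming.InitialContext;

public class UniverseClient {

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();

        // Look up the clustered stateful bean once (the JNDI name is a placeholder
        // and depends on how the bean is bound in JBoss 6).
        IUniverse universe = (IUniverse) ctx.lookup("Universe/remote");

        // Keep invoking the SAME reference: the cluster replicates this EJB session,
        // so the conversational state survives across calls and node failover.
        universe.addMessage("hello");
        universe.addMessage("world");
    }
}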

Related

started geode spring boot and save to remote region but failed to start bean gemfireClusterSchemaObjectInitializer

With a simple client app, I make an object and an object repository, connect to a Geode cluster, then run a @Bean ApplicationRunner to put some data into a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner StartedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
The question Problem creating region and persist region to disk Geode Gemfire Spring Boot helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the remote cluster appear to be localhost. If I remove it altogether, then the put works.
If I remove just the useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction
In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. a Spring [Boot] Data GemFire/Geode application) to be pushed from the client-side to the cluster (of GemFire/Geode servers).
I say "enable" because it depends on the client-side configuration metadata (i.e. Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you defined with a bean definition (in XML, or JavaConfig with @Bean, etc.). Implicit configuration is auto-configuration or using SDG annotations like @EnableEntityDefinedRegions or @EnableCachingDefinedRegions, etc.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls by default.
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging as opposed to using Gfsh. You can do all sorts of interesting combinations: Gfsh Locator with Spring servers, Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
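For context, a minimal sketch of a Spring-bootstrapped Geode server (the class name, server name and locator endpoint below are assumptions, not taken from the question):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;

// When servers are configured and bootstrapped with Spring like this, SDG can
// register its provided Functions (e.g. CreateRegionFunction) on them.
@SpringBootApplication
@CacheServerApplication(name = "SpringGeodeServer", locators = "localhost[10334]")
public class SpringGeodeServer {

    public static void main(String[] args) {
        SpringApplication.run(SpringGeodeServer.class, args);
    }
}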
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, Spring is generally not on the servers' classpath, and the "provided, canned" Functions I referred to above are not present and automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or 2 about ;-) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have full installations of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode and (especially) Pivotal GemFire distributions are not; the JARs are, but the full distros aren't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
    logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.

Spring Boot with application managed persistence context

I am trying to migrate an application from EJB3 + JTA + JPA (EclipseLink). Currently, this application makes use of an application-managed persistence context because the number of databases is unknown at design time.
The application-managed persistence context lets us control how the EntityManager is created (e.g. supplying different datasource JNDI names to create the proper EntityManager for a specific DB at runtime).
E.g.
Map<String, String> properties = new HashMap<>();
properties.put(PersistenceUnitProperties.TRANSACTION_TYPE, "JTA");
//the datasource JNDI name comes from configuration, without prior knowledge of the number of databases
//currently, the DB JNDI names are stored in an externalized file
//the datasources are set up by the operations team
properties.put(PersistenceUnitProperties.JTA_DATASOURCE, "datasource-jndi");
properties.put(PersistenceUnitProperties.CACHE_SHARED_DEFAULT, "false");
properties.put(PersistenceUnitProperties.SESSION_NAME, "xxx");

//create the proper EntityManager to connect to the database decided at runtime
EntityManager em = Persistence.createEntityManagerFactory("PU1", properties).createEntityManager();

//query or update the DB
em.persist(entity);
em.createQuery(...).executeUpdate();
When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container will take care of the transaction start/end/rollback.
Now, I am trying to migrate this application to Spring Boot.
The problem I encounter is that no transaction is started even after I annotate the method with @Transactional(propagation = Propagation.REQUIRED).
The Spring application is packaged as an executable JAR file and runs with embedded Tomcat.
When I try to execute those update APIs, e.g. EntityManager.persist(..), EclipseLink always complains about:
javax.persistence.TransactionRequiredException: 'No transaction is currently active'
Sample code below:
//for data persistence
@Service
class DynamicServiceImpl implements DynamicService {

    //attempt to start a transaction
    @Transactional(propagation = Propagation.REQUIRED)
    public void saveData(String dbJndi, EntityA entity) {
        //this returns false - no transaction has been started
        TransactionSynchronizationManager.isActualTransactionActive();

        //create an EntityManager based on the input dbJndi to dynamically
        //determine which DB to save the data to
        EntityManager em = createEm(dbJndi);

        //save the data
        em.persist(entity);
    }
}

//restful service
@RestController
class DynamicRestController {

    @Autowired
    DynamicService service;

    @RequestMapping(value = "/saveRecord", method = RequestMethod.POST)
    public @ResponseBody String saveRecord() {
        //save data
        service.saveData(...);
        return "saved";
    }
}

//startup application
@SpringBootApplication
class TestApp {

    public static void main(String[] args) {
        SpringApplication.run(TestApp.class, args);
    }
}
persistence.xml
-------------------------------------------
<persistence-unit name="PU1" transaction-type="JTA">
    <properties>
        <!-- comment for spring to handle transaction??? -->
        <!-- <property name="eclipselink.target-server" value="WebLogic_10"/> -->
    </properties>
</persistence-unit>
-------------------------------------------
application.properties (just 3 lines of config)
-------------------------------------------
spring.jta.enabled=true
spring.jta.log-dir=spring-test # Transaction logs directory.
spring.jta.transaction-manager-id=spring-test
-------------------------------------------
My usage pattern does not follow the typical use cases (e.g. a known number of DBs, as in Spring + JPA + multiple persistence units: injecting the EntityManager).
Can anybody give me advice on how to solve this issue?
Has anybody hit this situation before, where the DBs are not known at design time?
Thank you.
I finally got it working with:
Enabling Tomcat JNDI and creating the JNDI datasource entries for each DS programmatically (see the sketch after this list)
Adding the transaction dependencies:
com.atomikos:transactions-eclipselink:3.9.3 (my project uses eclipselink instead of hibernate)
org.springframework.boot:spring-boot-starter-jta-atomikos
org.springframework:spring-tx
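For the first item, a minimal sketch (my own, assuming Spring Boot 2.x with embedded Tomcat; the resource name, driver and URL are placeholders) of enabling JNDI and registering a datasource programmatically:
import javax.sql.DataSource;

import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;
import org.apache.tomcat.util.descriptor.web.ContextResource;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.embedded.tomcat.TomcatWebServer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EmbeddedTomcatJndiConfig {

    @Bean
    public TomcatServletWebServerFactory tomcatFactory() {
        return new TomcatServletWebServerFactory() {

            @Override
            protected TomcatWebServer getTomcatWebServer(Tomcat tomcat) {
                // switch on the embedded container's JNDI naming context
                tomcat.enableNaming();
                return super.getTomcatWebServer(tomcat);
            }

            @Override
            protected void postProcessContext(Context context) {
                // register one JNDI DataSource per target DB (placeholder values)
                ContextResource resource = new ContextResource();
                resource.setName("jdbc/db1");
                resource.setType(DataSource.class.getName());
                resource.setProperty("driverClassName", "org.h2.Driver");
                resource.setProperty("url", "jdbc:h2:mem:db1");
                context.getNamingResources().addResource(resource);
            }
        };
    }
}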
You have pretty much answered the question yourself: "When deployed in an EJB container (e.g. WebLogic), with a proper TransactionAttribute (e.g. TransactionAttributeType.REQUIRED), the container will take care of the transaction start/end/rollback".
WebLogic is compliant with the Java Enterprise Edition specification, which is probably why it worked before, but now you are using Tomcat (in embedded mode), which is NOT.
So you simply cannot do what you are trying to do.
This statement in your persistence.xml file:
<persistence-unit name="PU1" transaction-type="JTA">
requires an Enterprise Server (WebLogic, Glassfish, JBoss etc.)
With Tomcat you can only do this:
<persistence-unit name="PU1" transaction-type="RESOURCE_LOCAL">
And you need to handle transactions yourself:
myEntityManager.getTransaction().begin();
... //Do your transaction stuff
myEntityManager.getTransaction().commit();
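A slightly fuller sketch of that resource-local pattern, with rollback and cleanup (variable names are illustrative):
EntityManager em = entityManagerFactory.createEntityManager();
try {
    em.getTransaction().begin();
    em.persist(entity); // do the transactional work here
    em.getTransaction().commit();
} catch (RuntimeException e) {
    // undo any partial changes if the transaction is still open
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
    throw e;
} finally {
    em.close();
}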

How to configure Bean class in root-context.xml file using STS in Spring?

I am new to Spring and am going to develop a Spring MVC application. What is the best way to write root-context.xml for bean class properties?
For the database connection I want to use Spring JDBC (JdbcTemplate). Can you please suggest the best way to do so?
You're going to need to set up a DataSource and then create a JdbcTemplate bean that utilizes it. The Spring JDBC Reference Documentation provides examples and really good explanations on how to accomplish this.
Here's a basic bean definition for a datasource. The properties specified will depend on the database that you are using.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="${jdbc.driverClassName}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
Then you can create a jdbcTemplate bean that uses the dataSource or you can instantiate the jdbcTemplate within your code.
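For example, a minimal jdbcTemplate bean definition wired to that dataSource could look like this (a sketch, not part of the original answer):
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <property name="dataSource" ref="dataSource"/>
</bean>
Services can then receive the jdbcTemplate as a bean reference, which is the first of the two options described in the API note below.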
The JdbcTemplate API mentions this:
Can be used within a service implementation via direct instantiation with a DataSource reference, or get prepared in an application context and given to services as bean reference. Note: The DataSource should always be configured as a bean in the application context, in the first case given to the service directly, in the second case to the prepared template.

Turning off JBoss hot deploy service?

What is the correct way to turn off the JBoss hot deploy service?
This is a production environment.
Edit: JBoss version 5.1.0 GA
I think deleting the "deploy/hdscanner-jboss-beans.xml" file is the correct way to do this.
From JBoss in Action, ch. 3.1.5:
The deployer is configured via the deployers.xml and profile.xml descriptor files, both found in the server/xxx/conf directory. This file defines several POJOs that manage various deployment responsibilities. Table 3.3 identifies each of these POJOs and highlights some of the more interesting configuration properties provided by each one. [...]
And the relevant bits from the table:
Bean: HDScanner

Property: scanEnabled - Set this to true (default) to enable the hot deployer and to false to disable it. When set to false, applications are deployed only when the server is started or when the deploy method on the MainDeployer MBean is called.

Property: scanPeriod - The number of milliseconds the hot deployer waits between performing scans. The default is 5000 milliseconds (5 seconds). This value is ignored if scanEnabled is set to false.

Property: scanThreadName - You can use this to change the name of the thread from its default of HDScanner. The thread name enables you to identify the hot deployer thread if you should take a thread dump.
You can disable and expose it with JMX:
<bean name="HDScanner" class="org.jboss.system.server.profileservice.hotdeploy.HDScanner">
<annotation>#org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.deployment:service=HDScanner", exposedInterface=org.jboss.system.server.profileservice.hotdeploy.Scanner, registerDirectly=false)</annotation>
<start method="start" ignored="true" />
<property name="deployer"><inject bean="ProfileServiceDeployer"/></property>
<property name="profileService"><inject bean="ProfileService"/></property>
<property name="scanPeriod">5000</property>
<property name="scanThreadName">HDScanner</property>
<property name="scanEnabled">false</property>
</bean>
The scanEnabled property doesn't exist on JBoss 5.x, only on JBoss 4.x for the Deployment Scanner.
On JBoss 5.x just delete the hdscanner-jboss-beans.xml from the deploy directory and use the MainDeployer MBean to deploy your applications.

Correct way to make datasources/resources a deploy-time setting

I have a web-app that requires two settings:
A JDBC datasource
A string token
I desperately want to be able to deploy one .war to various different containers (jetty,tomcat,gf3 minimum) and configure these settings at application level within the container.
My code does this:
InitialContext ctx = new InitialContext();
Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
token = (String) envCtx.lookup("token");
ds = (DataSource) envCtx.lookup("jdbc/datasource");
Let's assume I've used the GlassFish management interface to create two JDBC resources, jdbc/test-datasource and jdbc/live-datasource, which connect to different copies of the same schema on different servers, with different credentials, etc. Say I want to deploy this to GlassFish and point it at the test datasource; I might have this in my sun-web.xml:
...
<resource-ref>
<res-ref-name>jdbc/datasource</res-ref-name>
<jndi-name>jdbc/test-datasource</jndi-name>
</resource-ref>
...
but
sun-web.xml goes inside my war, right?
surely there must be a way to do this through the management interface
Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how jetty 7 handles this since I use it for development.
EDIT Tomcat has a reasonable way to do this:
Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with:
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true">

    <!-- String resource -->
    <Environment name="token" value="value of token" type="java.lang.String" override="false" />

    <!-- Linking to a global resource -->
    <ResourceLink name="jdbc/datasource1" global="jdbc/test" type="javax.sql.DataSource" />

    <!-- Derby -->
    <Resource name="jdbc/datasource2"
              type="javax.sql.DataSource"
              auth="Container"
              driverClassName="org.apache.derby.jdbc.EmbeddedDataSource"
              url="jdbc:derby:test;create=true"
              />

    <!-- H2 -->
    <Resource name="jdbc/datasource3"
              type="javax.sql.DataSource"
              auth="Container"
              driverClassName="org.h2.jdbcx.JdbcDataSource"
              url="jdbc:h2:~/test"
              username="sa"
              password=""
              />
</Context>
Note that override="false" means the opposite of what you might expect: it means that this setting can't be overridden by web.xml.
I like this approach because the file is part of the container configuration not the war, but it's not part of the global configuration; it's webapp specific.
I guess I expect a bit more from glassfish since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
For GF v3, you may want to try leveraging the --deploymentplan option of the deploy subcommand of asadmin. It is discussed on the man page for the deploy subcommand.
We had just this issue when migrating from Tomcat to Glassfish 3. Here is what works for us.
In the Glassfish admin console, configure datasources (JDBC connection pools and resources) for DEV/TEST/PROD/etc.
Record your deployment-time parameters (in our case, database connection info) in a properties file. For example:
# Database connection properties
dev=jdbc/dbdev
test=jdbc/dbtest
prod=jdbc/dbprod
Each web app can load the same database properties file.
Look up the JDBC resource as follows.
import java.sql.Connection;
import java.sql.SQLException;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

/**
 * @param resourceName the resource name of the connection pool (eg jdbc/dbdev)
 * @return Connection a pooled connection from the data source
 *         associated with resourceName
 * @throws NamingException will be thrown if resource name is not found
 */
public Connection getDatabaseConnection(String resourceName)
        throws NamingException, SQLException {
    Context initContext = new InitialContext();
    DataSource pooledDataSource = (DataSource) initContext.lookup(resourceName);
    return pooledDataSource.getConnection();
}
Note that this is not the usual two-step process involving a lookup via the naming context "java:comp/env". I have no idea if this works in application containers other than GF3, but in GF3 there is no need to add resource descriptors to web.xml when using the above approach.
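To tie it together, a hypothetical caller that reads the environment key from the properties file shown earlier (the file name and the "dev" key are illustrative):
import java.io.FileInputStream;
import java.sql.Connection;
import java.util.Properties;

// Hypothetical usage of getDatabaseConnection(...) above.
public Connection getDevConnection() throws Exception {
    Properties dbProps = new Properties();
    try (FileInputStream in = new FileInputStream("database.properties")) {
        dbProps.load(in); // contains dev=jdbc/dbdev, test=jdbc/dbtest, prod=jdbc/dbprod
    }
    String resourceName = dbProps.getProperty("dev"); // e.g. "jdbc/dbdev"
    return getDatabaseConnection(resourceName);       // pooled connection from the container
}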
I'm not sure I really understand the question/problem.
As an Application Component Provider, you declare the resource(s) required by your application in a standard way (container agnostic) in the web.xml.
At deployment time, the Application Deployer and Administrator is supposed to follow the instructions provided by the Application Component Provider to resolve external dependencies (amongst other things) for example by creating a datasource at the application server level and mapping its real JNDI name to the resource name used by the application through the use of an application server specific deployment descriptor (e.g. the sun-web.xml for GlassFish). Obviously, this is a container specific step and thus not covered by the Java EE specification.
Now, if you want to change the database an application is using, you'll have to either:
change the mapping in the application server deployment descriptor - or -
modify the configuration of the existing datasource to make it point at another database.
Having an admin interface doesn't really change anything. If I missed something, don't hesitate to let me know. And just in case, maybe have a look at this previous answer.