How to use Apache-Commons DBCP with EclipseLink JPA and Tomcat 7.x

I've been working on a web application, deployed on Tomcat 7, which uses EclipseLink JPA for the persistence layer.
Everything works fine in the test environment, but we're having serious issues in production due to a firewall killing inactive connections. Basically, if a connection is idle for a while, the firewall that sits between the Tomcat server and the DB server kills it, leaving "stale" connections in the pool.
The next time such a connection is used, the call never returns until it fails with a "Connection timed out" SQLException (full ex.getMessage() below):
[EL Fine]: 2012-07-13 18:24:39.479--ServerSession(309463268)--Connection(69352859)--Thread(Thread[http-bio-8080-exec-5,5,main])--MY QUERY REPLACED TO POST IT TO SO
[EL Config]: 2012-07-13 18:40:10.229--ServerSession(309463268)--Connection(69352859)--Thread(Thread[http-bio-8080-exec-5,5,main])--disconnect
[EL Info]: 2012-07-13 18:40:10.23--UnitOfWork(1062365884)--Thread(Thread[http-bio-8080-exec-5,5,main])--Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.0.v20110604-r9504): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Eccezione IO: Connection timed out
I already tried several configurations in persistence.xml, but since I have no access to the firewall configuration I had no luck with those methods. I also tried to use setCheckConnections():
ConnectionPool cp = ((JpaEntityManager)em).getServerSession().getDefaultConnectionPool();
cp.setCheckConnections();
cp.releaseConnection(cp.acquireConnection());
I managed to solve the issue in a test script using testOnBorrow, testWhileIdle and other features that are available in Apache Commons DBCP. I'd like to know how to override EclipseLink's internal connection pool with a custom one, so that I can provide an already-configured, DBCP-based pool rather than just tuning the internal pool through persistence.xml.
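For reference, this is roughly what the test script does; the driver, URL and credentials are placeholders, and the validation query assumes Oracle:
import java.sql.Connection;

import org.apache.commons.dbcp.BasicDataSource;

public class DbcpSmokeTest {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");   // placeholder driver
        ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL");      // placeholder URL
        ds.setUsername("user");
        ds.setPassword("pass");

        // Validate connections before handing them out and while they sit idle,
        // so connections silently dropped by the firewall are discarded.
        ds.setValidationQuery("SELECT 1 FROM DUAL");
        ds.setTestOnBorrow(true);
        ds.setTestWhileIdle(true);
        ds.setTimeBetweenEvictionRunsMillis(60000);
        ds.setMinEvictableIdleTimeMillis(300000);

        Connection c = ds.getConnection();
        // ... run a query, wait longer than the firewall timeout, run it again ...
        c.close();
    }
}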
I know I should provide a SessionCustomizer; I'm just uncertain which pattern is the correct one to use. Basically, I would like to keep the benefits of DBCP while still working through JPA.
I'm deploying on Tomcat 7. I know that if I switched to GF I wouldn't have this problem, but for consistency with the other webapps on the same server I'd prefer to stay on Tomcat.

What you want is definitely possible, but you might be hitting the limits of the "do it yourself" approach.
This is one of the more difficult things to explain, but there are effectively two ways to configure your EntityManagerFactory: the "do it yourself" approach and the "container" approach.
When you call Persistence.createEntityManagerFactory it eventually delegates to this method of the PersistenceProvider interface implemented by EclipseLink:
EntityManagerFactory createEntityManagerFactory(String emName, Map map)
The deal here is that EclipseLink will then take it upon itself to do all the work, including its own connection creation and handling. This is the "do it yourself" approach. I don't know EclipseLink well enough to say whether there is a way to feed it connections in this mode; after two days on Stack Overflow, it doesn't seem like anyone else has that info either.
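In other words, with this kind of bootstrap (the unit name is a placeholder) EclipseLink owns connection creation and pooling itself:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// "Do it yourself" bootstrap: EclipseLink reads persistence.xml and manages
// its own internal connection pool.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit");
EntityManager em = emf.createEntityManager();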
So here is why this "works in GF". When you let the container create the EntityManagerFactory for you by having it injected or looking it up, the container uses a different method on the PersistenceProvider interface implemented by EclipseLink:
EntityManagerFactory createContainerEntityManagerFactory(PersistenceUnitInfo info, Map map)
The long and short of it is that this PersistenceUnitInfo is an interface that the container implements and has these two very key methods on it:
public DataSource getJtaDataSource();
public DataSource getNonJtaDataSource();
With this mode EclipseLink will not try to do its own connection handling and will simply call these methods to get the DataSource from the container. This is really what you need.
There are two possible approaches you could take to solving this:
You could instantiate the EclipseLink PersistenceProvider implementation yourself and call its createContainerEntityManagerFactory method, passing in your own implementation of the PersistenceUnitInfo interface, and feed the DBCP-configured DataSource instances into EclipseLink that way (there's a rough sketch after these two options). You would need to parse the persistence.xml file yourself and feed that data in through the PersistenceUnitInfo. EclipseLink might also expect a TransactionManager, in which case you'll be stuck unless you hunt down a TransactionManager you can add to Tomcat.
You could use TomEE, the Java EE 6 certified version of Tomcat. DataSources are configured in tomee.xml, created using DBCP with full support for all the options you need, and passed to the PersistenceProvider via the createContainerEntityManagerFactory call described above. You then get the EntityManagerFactory injected via @PersistenceUnit or look it up.
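To give you an idea of what the first option involves, here is a rough, untested sketch against the JPA 2.0 SPI; the unit name, entity class and the way the DataSource is obtained are placeholders, and in a real implementation you'd populate the info from your parsed persistence.xml and your DBCP pool:
import java.net.URL;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

import javax.persistence.EntityManagerFactory;
import javax.persistence.SharedCacheMode;
import javax.persistence.ValidationMode;
import javax.persistence.spi.ClassTransformer;
import javax.persistence.spi.PersistenceUnitInfo;
import javax.persistence.spi.PersistenceUnitTransactionType;
import javax.sql.DataSource;

public class DbcpPersistenceUnitInfo implements PersistenceUnitInfo {

    private final DataSource dataSource;

    public DbcpPersistenceUnitInfo(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // The two key methods: hand EclipseLink the DBCP-backed DataSource.
    public DataSource getJtaDataSource() { return null; }
    public DataSource getNonJtaDataSource() { return dataSource; }

    public String getPersistenceUnitName() { return "my-unit"; }          // placeholder
    public String getPersistenceProviderClassName() {
        return "org.eclipse.persistence.jpa.PersistenceProvider";
    }
    public PersistenceUnitTransactionType getTransactionType() {
        return PersistenceUnitTransactionType.RESOURCE_LOCAL;
    }
    public List<String> getManagedClassNames() {
        return Collections.singletonList("com.example.MyEntity");         // placeholder entity
    }
    public boolean excludeUnlistedClasses() { return true; }
    public List<String> getMappingFileNames() { return Collections.emptyList(); }
    public List<URL> getJarFileUrls() { return Collections.emptyList(); }
    public URL getPersistenceUnitRootUrl() { return null; }
    public SharedCacheMode getSharedCacheMode() { return SharedCacheMode.UNSPECIFIED; }
    public ValidationMode getValidationMode() { return ValidationMode.NONE; }
    public Properties getProperties() { return new Properties(); }
    public String getPersistenceXMLSchemaVersion() { return "2.0"; }
    public ClassLoader getClassLoader() { return Thread.currentThread().getContextClassLoader(); }
    public ClassLoader getNewTempClassLoader() { return getClassLoader(); }
    public void addTransformer(ClassTransformer transformer) { /* not needed for this sketch */ }

    // Bootstrap: call the container-style method yourself, feeding in the DBCP pool.
    public static EntityManagerFactory createFactory(DataSource dbcpDataSource) {
        return new org.eclipse.persistence.jpa.PersistenceProvider()
                .createContainerEntityManagerFactory(
                        new DbcpPersistenceUnitInfo(dbcpDataSource), null); // null = no extra integration properties
    }
}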
If you do attempt to use TomEE, make sure your persistence.xml is updated to explicitly set transaction-type="RESOURCE_LOCAL", because the default is JTA. Even though it's non-compliant to use JTA with the Persistence.createEntityManagerFactory approach, no persistence provider will complain and let you know you're doing something wrong; they just treat it as RESOURCE_LOCAL, ignoring the schema. So when you port your app to an actual certified server, it blows up.
Another note on TomEE is that in the current release, you'll have to put your EclipseLink libs in the <tomcat>/lib/ directory. This is fixed in trunk, just not released yet.
I'm not sure how useful these slides will be without the talk that goes along with them, but the second part of that presentation is a deep dive into how container-managed EntityManagers work, specifically with regard to connection handling and transactions. You can ignore the transaction part, as you aren't using transactions and already have an app in production that you're not likely to change dramatically, but it might be interesting for future development.
Best of luck!

Related

No EJB receiver available for handling after some time

I am using JBoss 7.1 Final. I have set up a remote EJB using jboss-ejb-client.properties and standalone.xml accordingly, but after the server has been running for some time it throws this exception when trying to look up the remote EJB. Is there anything I need to set in jboss-ejb-client.properties for it to work? Note that I have already defined HEARTBEAT_INTERVAL; is that not enough?
Here is the properties file:
endpoint.name=client-endpoint
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connections=default
remote.connection.default.host=222.222.23.222
remote.connection.default.port=4447
remote.connection.default.username=us
remote.connection.default.password=ps
remote.connection.default.connect.options.org.jboss.remoting3.RemotingOptions.HEARTBEAT_INTERVAL=60000
Since there were no takers for this question, I found some possible solutions by googling. It might be that I have been opening too many connections by calling new InitialContext() -- I might be calling it every few minutes! See this link:
https://developer.jboss.org/thread/222883
In there someone mentions GC, connection closing, etc. That might be helpful.
How do you look up your EJB from your EJB client? If you are using the java:/ namespace, this problem will happen.
Use the ejb:/ namespace instead to eliminate the problem.
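Something along these lines, where the app, module, bean and interface names are placeholders for your own:
import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteEjbClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");

        // Create the InitialContext once and reuse it instead of re-creating it every few minutes.
        Context ctx = new InitialContext(props);

        // ejb:<app-name>/<module-name>/<distinct-name>/<bean-name>!<fully-qualified-remote-interface>
        Object bean = ctx.lookup("ejb:myapp/myejbmodule//MyBean!com.example.MyRemote");
        // cast 'bean' to your remote business interface before calling it
    }
}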

WebSphere 7.0 remote client rollback global UserTransaction

I'm observing weird behavior for WebSphere 7.0.0.21:
Architecture:
Simple EJB bean with @Local and @Remote interfaces and a transactional method marked as @Required.
A standalone command-line client that looks up the remote "jta/usertransaction" and the transactional EJB method. The client code starts a user transaction, executes the method and then tries to roll it back.
Expected behavior (as seen on JBoss): rollback of the DB transaction.
Observed behavior (on WAS 7.0.0.21): commit of the DB transaction.
I see that the client transaction changes from STATUS_NO_TRANSACTION(6) to STATUS_ACTIVE(0) and then back to STATUS_NO_TRANSACTION(6) after the rollback.
I tried to Google it but didn't find any results.
Any ideas on this scenario? I'm pretty much ready to file an issue with IBM.
thanks,
UPDATE:
Finally, after a long wait and interactions with IBM support, I got it resolved:
There are no problems with the IBM JRE.
The Sun/Oracle JRE requires extra configuration for the ORB, e.g.
jndiProperties.put("java.naming.corba.orb", com.ibm.CORBA.iiop.ORB.init((String[])null, orbProperties));
and the orb.properties from the WAS or AppClient JRE needs to be provided as "orbProperties".
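For anyone hitting the same thing, a stripped-down sketch of the kind of client setup this implies; the provider URL, host/port and the path to orb.properties are placeholders:
import java.io.FileInputStream;
import java.util.Hashtable;
import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class RemoteTxClient {
    public static void main(String[] args) throws Exception {
        // orb.properties copied from the WAS or AppClient JRE
        Properties orbProperties = new Properties();
        orbProperties.load(new FileInputStream("orb.properties"));

        Hashtable<String, Object> jndiProperties = new Hashtable<String, Object>();
        jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");
        jndiProperties.put(Context.PROVIDER_URL, "corbaloc:iiop:washost:2809"); // placeholder host/port
        jndiProperties.put("java.naming.corba.orb",
                com.ibm.CORBA.iiop.ORB.init((String[]) null, orbProperties));

        Context ctx = new InitialContext(jndiProperties);
        UserTransaction ut = (UserTransaction) ctx.lookup("jta/usertransaction");
        ut.begin();
        // ... call the transactional EJB method ...
        ut.rollback();
    }
}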

Toplink and CMT message driven bean

I am trying to integrate TopLink with a CMT message-driven bean. My MDB is CMT. When I try to commit the unit of work, it errors out saying that a global transaction is present, so a local commit cannot be done. After researching TopLink, the suggestion was to use an external connection pool and getActiveUnitOfWork() to commit. We are using the Oracle 10.1.3 container for connection pooling and an external transaction controller (the OC4J transaction controller). When I changed to getActiveUnitOfWork().commit(), I get a NullPointerException because the active unit of work is null. My understanding is that the container starts a transaction when the MDB's onMessage() is executed, so TopLink's getActiveUnitOfWork() should associate a unit of work with the external transaction and should return null only when no external transaction is present. I am not sure how to solve this issue or what is wrong. I appreciate any help on this.
Thanks.
TZ
Ensure you have set your ExternalTransactionController on your session correctly, and that there is a JTA transaction active.
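For illustration, inside onMessage() the pattern looks roughly like this; how you obtain the session and the entity is up to you, the key point being that you never call commit() on the unit of work yourself:
import oracle.toplink.sessions.Session;
import oracle.toplink.sessions.UnitOfWork;

public class MdbPersistenceHelper {

    // Called from onMessage() while the container-started JTA transaction is active.
    public static void persist(Session session, Object entity) {
        UnitOfWork uow = session.getActiveUnitOfWork();
        if (uow == null) {
            // No external transaction was detected: check that the session login uses an
            // external transaction controller and external connection pooling.
            throw new IllegalStateException(
                    "No active unit of work - is the ExternalTransactionController set on the session?");
        }
        uow.registerObject(entity);
        // No uow.commit() here: TopLink commits when the container commits the JTA transaction.
    }
}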

How to programmatically un/register POJOs as services in JBoss 4.2.3.GA

I need to be able to circumvent the whole deployer malarkey and programmatically register/unregister (dependency-less) POJOs as services in JBoss.
Currently I'm dynamically creating an MBean interface, registering it with the JBoss MBeanServer, and then binding local/remote references in JNDI.
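Simplified, what I'm doing looks roughly like this (the service interface, object name and JNDI name are placeholders):
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;
import javax.naming.InitialContext;

import org.jboss.mx.util.MBeanServerLocator;

public class PojoServiceRegistrar {

    // Minimal example management interface and POJO implementation
    public interface GreeterMBean { String greet(String name); }
    public static class Greeter implements GreeterMBean {
        public String greet(String name) { return "Hello " + name; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = MBeanServerLocator.locateJBoss();        // the JBoss 4.x MBeanServer
        ObjectName name = new ObjectName("example:service=Greeter");  // placeholder object name

        Greeter pojo = new Greeter();
        server.registerMBean(new StandardMBean(pojo, GreeterMBean.class), name);
        new InitialContext().bind("example/Greeter", pojo);           // local JNDI binding

        // ... later, when the service should go away ...
        new InitialContext().unbind("example/Greeter");
        server.unregisterMBean(name);
    }
}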
This works OK (I can have a standard service from a vanilla SAR reference one of these service POJOs with the @EJB annotation); however, the container seems to leave stale references behind even after calling unbind() and unregisterMBean().
Obviously I'm missing something by not dealing with the container in the way it expects, but what am I missing? Or is there an easier way (I can't see much in the way of an API)?
thanks.

injected @EJB reference is null after redeployment

I have two EAR applications (EJB 3.0) deployed on JBoss 5.1. An SLSB from application A calls a remote SLSB from application B via the @EJB annotation.
Everything works fine until I redeploy application B. Then the bean from application A tries to call the one from B and its reference turns out to be null.
I suppose that SLSBs are pooled and references are injected at creation time, and after redeployment those proxies are somehow not refreshed.
How can I cope with that? Is it OK to put an interceptor on that bean and check whether all annotated references are non-null?
If the application is redeployed/undeployed or there is a network failure, the proxy objects are invalidated.
You can use the ServiceLocator pattern to cache the references to the remote objects, removing and re-creating them with a JNDI lookup in case of failure.
Otherwise, instead of using @EJB to inject the remote bean, you would have to do a manual lookup on every call, which is resource-consuming; the former is the much better approach.
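A minimal sketch of such a ServiceLocator (JNDI names and the exact failure exceptions you trap are up to you):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ServiceLocator {

    private static final ConcurrentMap<String, Object> CACHE =
            new ConcurrentHashMap<String, Object>();

    // Look the remote proxy up once and cache it for subsequent calls.
    public static <T> T lookup(String jndiName, Class<T> type) throws NamingException {
        Object ref = CACHE.get(jndiName);
        if (ref == null) {
            ref = new InitialContext().lookup(jndiName);
            CACHE.put(jndiName, ref);
        }
        return type.cast(ref);
    }

    // Call this when an invocation fails (e.g. after application B is redeployed)
    // so the next lookup fetches a fresh proxy instead of the stale one.
    public static void invalidate(String jndiName) {
        CACHE.remove(jndiName);
    }
}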