How to initialize EclipseLink connection - jpa

Sorry if my question is quite simple, but I really couldn't find an answer by googling. I have a project using JPA 2.0 (EclipseLink); it's working fine, but I want to ask if there's a way to initialize the database connection up front?
Currently it happens whenever the user tries to access any module that runs a query, which is quite annoying because the connection can take a few seconds and the app freezes for a moment while it's connecting.
I could run a dummy query in the main method to "turn it on", but that's an unnecessary query and not the solution I want to use.
Thanks beforehand!

The problem will be that the deployment process is lazy. This avoids the cost of initializing and connecting to unused/unneeded persistence units, but it means the whole persistence unit is processed the very first time it is accessed.
This can be configured on a persistence unit by using the "eclipselink.deploy-on-startup" property:
http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/p_deploy_on_startup.htm
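For example (a minimal sketch; "myPU" is a placeholder for your persistence unit name), the property can be passed when the EntityManagerFactory is created, or set equivalently as a <property> element in persistence.xml:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Bootstrap {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Deploy (and connect) the persistence unit when the factory is
        // created, instead of on the first query.
        props.put("eclipselink.deploy-on-startup", "true");

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("myPU", props);
        // The application now starts with an already-connected persistence unit.
    }
}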

Not sure if this is what you are looking for, but I found the property eclipselink.jdbc.exclusive-connection.is-lazy, which defaults to true.
According to the Javadoc, the "property specifies when write connection is acquired lazily".

Related

"PSQLException: FATAL: sorry, too many clients already" error in integration tests with jOOQ & Spring Boot

There are already similar questions about this error and suggested solutions, e.g. increasing max_connections in postgresql.conf and/or adapting the maximum number of connections your app requests. However, my question is more specific to using jOOQ in a Spring Boot application.
I integrated jOOQ into my application as in the example on GitHub. Namely, I am using DataSourceConnectionProvider with TransactionAwareDataSourceProxy to handle database connections, and I inject the DSLContext in the classes that need it.
My application provides various web services to front-ends, and I've never encountered that PSQLException in the dev or test environments so far. I only started getting the error when running all integration tests (around 1000) locally. I don't expect a leak in connection handling, since Spring and jOOQ manage the resources; nevertheless, the error got me worried that it might also happen in production.
Long story short, is there a better alternative to using DataSourceConnectionProvider to manage connections? Note that I already tried using DefaultConnectionProvider as well, and tried to make spring.datasource.max-active less than max_connections allowed by Postgres. Neither fixed my problem so far.
Since your question seems not to be about the generally best way to work with PostgreSQL connections / data sources, I'll answer the part about jOOQ and using its DataSourceConnectionProvider:
Using DataSourceConnectionProvider
There is no better alternative in general. In order to understand DataSourceConnectionProvider (the implementation), you have to understand ConnectionProvider (its specification). It is an SPI that jOOQ uses for two things:
to acquire() a connection prior to running a statement or a transaction
to release() a connection after running a statement (and possibly, fetching results) or a transaction
The DataSourceConnectionProvider does so by acquiring a connection from your DataSource through DataSource.getConnection() and by releasing it through Connection.close(). This is the most common way to interact with data sources, in order to let the DataSource implementation handle transaction and/or pooling semantics.
Whether this is a good idea in your case may depend on individual configurations that you have made. It generally is a good idea because you usually don't want to manually manage connection lifecycles.
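For reference, the typical wiring (roughly as in the jOOQ Spring example referenced in the question) looks like the sketch below; the bean layout and the POSTGRES dialect are assumptions about your setup. Each acquire() then calls getConnection() on the proxy, and each release() hands the connection back to the pool:

import javax.sql.DataSource;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DataSourceConnectionProvider;
import org.jooq.impl.DefaultConfiguration;
import org.jooq.impl.DefaultDSLContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy;

@Configuration
public class JooqConfig {

    // The DataSource bean (e.g. the pool auto-configured by Spring Boot)
    // is assumed to be defined elsewhere.
    @Bean
    public DataSourceConnectionProvider connectionProvider(DataSource dataSource) {
        // The proxy lets jOOQ participate in Spring-managed transactions.
        return new DataSourceConnectionProvider(
                new TransactionAwareDataSourceProxy(dataSource));
    }

    @Bean
    public DSLContext dsl(DataSourceConnectionProvider connectionProvider) {
        DefaultConfiguration configuration = new DefaultConfiguration();
        configuration.set(connectionProvider);
        configuration.set(SQLDialect.POSTGRES);
        return new DefaultDSLContext(configuration);
    }
}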
Using DefaultConnectionProvider
This can certainly be done instead, in which case jOOQ does not close() your connection for you; you'll do that yourself. I expect this to have no effect in your particular case, as you would just be implementing the DataSourceConnectionProvider semantics manually, e.g.
try (Connection c = ds.getConnection()) {
    // Implicitly using a DefaultConnectionProvider
    DSL.using(c).select(...).fetch();
    // Implicit call to c.close()
}
In other words: this is likely not a problem related to jOOQ, but to your data source.
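If the data source does turn out to be the culprit, one common mitigation is to cap the pool size explicitly so that the total number of connections (across all pools, including the contexts created by your integration tests) stays below PostgreSQL's max_connections. A rough sketch with HikariCP, assuming that is the pool in use; the connection details are placeholders:

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolFactory {

    public static DataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        config.setUsername("myuser");                               // placeholder
        config.setPassword("secret");                               // placeholder

        // Keep the pool well below PostgreSQL's max_connections.
        config.setMaximumPoolSize(10);

        // Fail fast if no connection can be obtained, instead of queueing forever.
        config.setConnectionTimeout(5000);

        return new HikariDataSource(config);
    }
}

In a Spring Boot application the same limit is usually set through the data source properties rather than in code.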

How do you use a Connection Pool with ActiveJDBC instead of just Base.open & close every time?

Right now I'm just writing methods that do Base.open(), perform some operation, and then Base.close(). However, this is extremely inefficient, especially when lots of these method calls are made, so I'd like to use some kind of connection pool with ActiveJDBC. Is there a way to use something like a connection pool with ActiveJDBC, or some other way to approach this problem instead of doing Base.open() and Base.close() each time I access the DB?
Thanks in advance! :)
Here is an example of using ActiveJDBC with a pool: https://github.com/javalite/activejdbc/blob/master/activejdbc/src/test/java/org/javalite/activejdbc/C3P0PoolTest.java
However, you still need to open and close a connection, except that you are getting the connection from the pool and returning it back to the pool. If you provide more information on what type of application you are developing, I can potentially give better advice.
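As a rough sketch, assuming a pooled DataSource (C3P0, HikariCP, ...) configured elsewhere in your application, the open/close calls stay the same; they just borrow from and return to the pool:

import javax.sql.DataSource;
import org.javalite.activejdbc.Base;

public class PooledExample {

    public static void doWork(DataSource pool) {
        Base.open(pool);      // borrows a connection from the pool
        try {
            // ... run your ActiveJDBC model operations here ...
        } finally {
            Base.close();     // returns the connection to the pool
        }
    }
}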
--
igor

[Drools] Fact Objects Mistakenly Updated During Multi-Stage Rule Firing or with a Long Fact Object List

If the subject is confusing, that is because the problem itself is very confusing to us. Here is the thing.
We have an application that leverages Drools' rule engine to help us evaluate Java beans (Fact Objects in Drools' terms) on their field values and update a particular flag field within each bean to "true" or "false" according to the evaluation result. All the evaluations and update operations are defined in the template.
The way it invokes Drools is like this: first it creates a stateful session before the first use. When we have a list of beans, we insert them one by one into the session and call fireAllRules. After firing the rules, we keep the session for later use. And once we have another batch of beans, we do the same again, and again, and again...
This seems to make sense. But later during testing, we found that although the rule engine worked fine for the first batch, the following batches didn't. Some beans were mistakenly updated, that is, even though no fields matched any rules, the flag was updated to true.
Then we thought maybe we should not reuse the session. So we put all beans from all batches into one big list. But soon we found that the problematic beans still got the wrong update. What's even weirder, if we run the test on different machines, the problematic beans can be different. But if we test any of the problematic beans by itself in a unit test, everything works fine.
Now I hope I have explained the problem. We are new to Drools. Maybe we did something wrong somewhere that we don't know about. Could anyone here point us in the direction of the problem? That would be a very big favor!
It sounds to me as though you're not clearing out working memory after each 'fireAllRules'.
If you use a stateful session, then every fact which you insert will remain in working memory until you explicitly retract it. Therefore every time you fire rules, you are re-evaluating the original set of facts, plus the new ones.
It might be useful to add a little debugging to your code. Using session.getObjects(), you will be able to see what facts are in working memory before and after execution of the rules. This should indicate what is not being retracted between evaluations.
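As a rough sketch, assuming the newer KieSession API (older Drools versions use StatefulKnowledgeSession and retract() instead of delete()), you could keep the FactHandles returned by insert() and remove each batch once the rules have fired:

import java.util.ArrayList;
import java.util.List;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.FactHandle;

public class BatchEvaluator {

    // The session is kept between batches, but each batch's facts are removed
    // after firing so the next batch is not evaluated against stale facts.
    public static void evaluateBatch(KieSession session, List<Object> beans) {
        List<FactHandle> handles = new ArrayList<>();
        for (Object bean : beans) {
            handles.add(session.insert(bean));
        }

        session.fireAllRules();

        // Debugging aid: inspect what is still in working memory.
        System.out.println("Facts in working memory: " + session.getObjects().size());

        for (FactHandle handle : handles) {
            session.delete(handle);
        }
    }
}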

NullReferenceException in Entity Framework ObjectStateManager.DetectConflicts

I've written a WCF web service that takes XML files and stores them in the database. Everything worked fine under 'low load', but under high load I'm getting some unexpected behavior, and thus far I haven't been able to pinpoint what exactly the problem is. Does anybody have a suggestion?
This is the exception I'm seeing in the logs 'sometimes' - like 25 times out of 10 000:
Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at System.Data.Objects.ObjectStateManager.DetectConflicts(IList`1 entries)
at System.Data.Objects.ObjectStateManager.DetectChanges()
at System.Data.Entity.Internal.InternalContext.GetStateEntry(Object entity)
at System.Data.Entity.DbContext.Entry(Object entity)
... rest of my stacktrace
I see this happen every once in a while, and I'm currently looking into whether it has to do with concurrency (some other thread maybe working on the same entity). Can someone give me some pointers as to where to look?
A NullReferenceException occurs when you try to use a reference variable whose value is Nothing/null.
When the value of the reference variable is Nothing/null, it is not actually holding a reference to an instance of any object that exists on the heap.
I can't tell exactly what the problem is, but I believe it is related to threading, since everything works fine for a small number of users. The service may be handling requests on multiple threads as the load increases, and when threads execute concurrently against the same context there is a greater chance of hitting this issue. Note that an Entity Framework DbContext instance is not thread-safe, so two threads touching the same context at the same time can fail in exactly this way inside DetectChanges.
The solution I can offer is to control the threading explicitly and synchronize access to the shared objects. That will probably solve the issue.

How can I clear Class::DBI's internal cache?

I'm currently working on a large implementation of Class::DBI for an existing database structure, and am running into a problem with clearing the cache from Class::DBI. This is a mod_perl implementation, so an instance of a class can be quite old between times that it is accessed.
From the man pages I found two options:
Music::DBI->clear_object_index();
And:
Music::Artist->purge_object_index_every(2000);
Now, when I add clear_object_index() to the DESTROY method, it seems to run, but doesn't actually empty the cache. I am able to manually change the database, re-run the request, and it is still the old version.
purge_object_index_every says that it clears the index every n requests. Setting this to "1" or "0", seems to clear the index... sometimes. I'd expect one of those two to work, but for some reason it doesn't do it every time. More like 1 in 5 times.
Any suggestions for clearing this out?
The "common problems" page on the Class::DBI wiki has a section on this subject. The simplest solution is to disable the live object index entirely using:
$Class::DBI::Weaken_Is_Available = 0;
$obj->dbi_commit(); may be what you are looking for if you have uncompleted transactions. However, this is not very likely to be the case, as Class::DBI tends to complete any lingering transactions automatically on destruction.
When you do this:
Music::Artist->purge_object_index_every(2000);
You are telling it to examine the object cache every 2000 object loads and remove any dead references to conserve memory use. I don't think that is what you want at all.
Furthermore,
Music::DBI->clear_object_index();
Removes all objects from the live object index. I don't know how this would help at all; it's not flushing them to disk, really.
It sounds like what you are trying to do should work just fine the way you have it, but there may be a problem with your SQL or elsewhere that is preventing the INSERT or UPDATE from working. Are you doing error checking for each database query as the perldoc suggests? Perhaps you can begin there or in your database error logs, watching the queries to see why they aren't being completed or if they ever arrive.
Hope this helps!
I've used remove_from_object_index successfully in the past: when a page that modifies the database is called, it always explicitly resets that object in the cache as part of the confirmation page.
I should note that Class::DBI is deprecated and you should port your code to DBIx::Class instead.