Grails afterInsert hook issue with Hibernate 5.1 and PostgreSQL - postgresql

In an existing Grails 3.1.15 application, recently upgraded to Hibernate 5.1, an odd issue with afterInsert (and other) hooks started to appear. After some hours of testing, I could track this down to the Hibernate 5.1 + PostgreSQL combination - the issue is not reproducible with H2. To reproduce, create a simple application consisting of two domain classes - an audit trail and a user - as shown here:
class Audit {
    Date dateCreated
    String auditData
}

class User {
    String name
    String email

    def afterInsert() {
        new Audit(auditData: "User created: $this").save()
    }
}
The code above works fine with Hibernate 4. However, if the application is upgraded to the hibernate5 plugin + Hibernate 5.1.x (tested with 5.1.0.Final and 5.1.5.Final), the above scenario will always lead to a ConcurrentModificationException when you attempt to save a new User instance. A scaffolded controller is enough to reproduce it. Note this only happens with PostgreSQL as the data source - with H2 it still works fine.
Now, according to the GORM docs (see chapter 8.1.3), one should use a new session when saving other objects in beforeUpdate or afterInsert hooks anyway:
def afterInsert() {
    Audit.withNewSession() {
        new Audit(auditData: "User created: $this").save()
        /* no exception logged, but Audit instance not preserved */
    }
}
But this doesn't really resolve the issue with PostgreSQL. The exception is gone and the User instance is persisted, but the Audit instance is not saved. Again, this code works fine with H2. To really avoid the issue with PostgreSQL, you have to flush the session manually in afterInsert:
def afterInsert() {
    Audit.withNewSession() { session ->
        new Audit(auditData: "User created: $this").save()
        session.flush()
        /* finally no exceptions, both User and Audit saved */
    }
}
Question: is this a bug, or is this expected behavior? I find it a bit suspicious that a manual flush is required for the Audit instance to be persisted - and even more so when I see that it works without a flush on H2 and only seems to affect PostgreSQL. But I couldn't find any reports of this - any pointers are appreciated!
For the sake of completeness, I tested with the following JDBC driver versions for PostgreSQL:
runtime 'org.postgresql:postgresql:9.3-1101-jdbc41'
runtime 'org.postgresql:postgresql:9.4.1208.jre7'
runtime 'org.postgresql:postgresql:42.0.0'
And for the upgrade to Hibernate 5.1, the following dependencies were used:
classpath "org.grails.plugins:hibernate5:5.0.13"
...
compile "org.grails.plugins:hibernate5:5.0.13"
compile "org.hibernate:hibernate-core:5.1.5.Final"
compile "org.hibernate:hibernate-ehcache:5.1.5.Final"

Related

TFS Server plugin fails after upgrade from TFS2012 to TFS2015 RC

We have just upgraded one of our servers from TFS2012.2 to TFS2015RC. Everything went "smoothly", but we are encountering an issue:
A while ago we wrote a server side plugin for TFS, which listens to the WorkitemChangedEvent. It implements the ISubscriber interface. The following piece of code was working fine before the update:
void ITfsService.UpdateState(int workItemId, string newState)
{
    var wi = store.GetWorkItem(workItemId);
    wi.State = newState;
    wi.Save();
}
After the update, and after recompiling against the TFS2015 DLLs, the following error occurred:
Failed to process notification: TF237124: Work Item is not ready to save.
Note that none of the workitemtypes has changed, it is the same data.
I tried getting more information out of the error by calling Validate() before saving; this is the output:
Status: InvalidListValue
State: "Resolved, To Be Reviewed"
WIT: Task
Id: 5842
Field: State
However, the state "Resolved, To Be Reviewed" does exist in the list of available states, and in the GUI it is perfectly possible to change the state of the item to "Resolved, To Be Reviewed".
What is causing the Save() to fail?
Finally found the cause of this:
Apparently, the Validate() call validates against the whole ProcessConfiguration.xml. Because TFS2012 had both an AgileProcessConfiguration and a CommonProcessConfiguration, there were underlying problems with the work item states.
After resolving those issues in the correct ProcessConfiguration file, the plugin was working again (and TFS could also upgrade the Backlog\IterationPlanning features).
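For illustration, the fix boils down to making sure the state is mapped under the States element of the relevant backlog section in ProcessConfiguration.xml. A hypothetical fragment (element names follow the TFS 2015 process configuration schema; the state values are the ones from the question):

<TaskBacklog category="Microsoft.TaskCategory" pluralName="Tasks" singularName="Task">
    <States>
        <State value="New" type="Proposed" />
        <State value="Resolved, To Be Reviewed" type="InProgress" />
        <State value="Done" type="Complete" />
    </States>
</TaskBacklog>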

getSchema in PostgreSQL JDBC driver throws java.lang.AbstractMethodError or java.sql.SQLFeatureNotSupportedException

I'm using PostgreSQL 8.4 and my application is trying to connect to the database.
I've registered the driver:
DriverManager.registerDriver(new org.postgresql.Driver());
and am then trying the connection:
db = DriverManager.getConnection(database_url);
(btw, my jdbc string is something like: jdbc:postgresql://localhost:5432/myschema?user=myuser&password=mypassword)
I've tried various versions of the JDBC driver and I'm getting two types of errors:
with jdbc3:
Exception in thread "main" java.lang.AbstractMethodError: org.postgresql.jdbc3.Jdbc3Connection.getSchema()Ljava/lang/String;
with jdbc4:
java.sql.SQLFeatureNotSupportedException: Il metodo «org.postgresql.jdbc4.Jdbc4Connection.getSchema()» non è stato ancora implementato.
which means: the method org.postgresql.jdbc4.Jdbc4Connection.getSchema() has not been implemented yet.
I'm missing something, but I don't know what...
------ SOLVED ---------
The problem was not in the connection string or the driver version; the problem was in the code directly after the getConnection() call:
db = DriverManager.getConnection(database_url);
LOGGER.info("Connected to : " + db.getCatalog() + " - " + db.getSchema());
It seems the PostgreSQL driver doesn't have a getSchema method, as the Java console had been trying to tell me all along...
Connection.getSchema() was added in Java 7 / JDBC 4.1. This means it is not necessarily available in a JDBC 3 or JDBC 4 driver (although if an implementation exists, it will get called).
If you use a JDBC 3 (Java 4/5) driver or a JDBC 4 (Java 6) driver in Java 7 or higher it is entirely possible that you receive a java.lang.AbstractMethodError when calling getSchema if it does not exist in the implementation. Java provides a form of forward compatibility for classes implementing an interface.
If new methods are added to an interface, classes that do not have these methods and were - for example - compiled against an older version of the interface, can still be loaded and used provided the new methods are not called. Missing methods will be stubbed by code that simply throws an AbstractMethodError. On the other hand: if a method getSchema had been implemented and the signature was compatible that method would now be accessible through the interface, even though the method did not exist in the interface at compile time.
In March 2011, the driver was updated so it could be compiled on Java 7 (JDBC 4.1); this was done by stubbing the new JDBC 4.1 methods with an implementation that throws a java.sql.SQLFeatureNotSupportedException, including the implementation of Connection.getSchema. This code is still in the current PostgreSQL JDBC driver version 9.3-1102. Technically, a JDBC-compliant driver is not allowed to throw SQLFeatureNotSupportedException unless the API documentation or JDBC specification explicitly allows it (which it doesn't for getSchema).
However, the current code on GitHub has provided an implementation since April of this year. You might want to consider compiling your own version, or asking on the pgsql-jdbc mailing list whether recent snapshots are available (the snapshots link on http://jdbc.postgresql.org/ shows rather old versions).
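In the meantime, you can guard the call so it degrades gracefully with either driver generation - a minimal sketch (the connection URL is the one from the question; SchemaProbe is just an illustrative class name):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;

public class SchemaProbe {
    public static void main(String[] args) throws SQLException {
        Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/myschema?user=myuser&password=mypassword");
        String schema;
        try {
            schema = db.getSchema(); // JDBC 4.1 (Java 7) method
        } catch (AbstractMethodError e) {
            schema = "(unavailable)"; // driver compiled against a pre-4.1 Connection interface
        } catch (SQLFeatureNotSupportedException e) {
            schema = "(unavailable)"; // driver stubs the method without implementing it
        }
        System.out.println("Connected to: " + db.getCatalog() + " - " + schema);
    }
}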

Spring-Boot Configuration for suppressing BatchDatabaseInitializer

I am using Spring Boot 0.5.0.M6 with Spring Batch. Configuration is done via @EnableBatchProcessing, with the datasource etc. configured in application.properties.
During the first run of the application everything works fine, but after I stop and restart the application, the following error is seen:
org.springframework.dao.DuplicateKeyException: PreparedStatementCallback; SQL [INSERT into BATCH_JOB_INSTANCE(JOB_INSTANCE_ID, JOB_NAME, JOB_KEY, VERSION) values (?, ?, ?, ?)]; Duplicate entry '1' for key 'PRIMARY'; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:239)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:659)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:908)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:969)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:974)
When digging down, I observed the following lines in the logs:
2013-12-06 12:12:37 INFO ResourceDatabasePopulator:162 - Executing SQL script from class path resource [org/springframework/batch/core/schema-mysql.sql]
2013-12-06 12:12:37 INFO ResourceDatabasePopulator:217 - Done executing SQL script from class path resource [org/springframework/batch/core/schema-mysql.sql] in 13 ms.
The root problem here was that schema-drop-mysql.sql was not executed but schema-mysql.sql was, thereby creating two entries in BATCH_JOB_SEQ.
To resolve this, I added:
@EnableAutoConfiguration(exclude={BatchAutoConfiguration.class})
However, because of this I need to execute schema-mysql.sql explicitly, which is OK for now, but will become a problem when the Spring Batch version is updated along with changes to the schema.
Hence I have a couple of questions:
1. How can the batch autoconfiguration be made to execute schema-drop-mysql.sql before schema-mysql.sql?
2. Is there a way to configure this BatchDatabaseInitializer to run in a kind of "update" mode?
Regards
With the current version of the Spring Batch autoconfiguration that isn't possible. With the upcoming version it is possible to disable the automatic creation of the database tables by setting the spring.batch.initializer.enabled property to false.
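With such a version it would be a one-liner in application.properties (a sketch based on the property named above):

spring.batch.initializer.enabled=false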
IMHO you shouldn't use the automatic creation/update features to create schemas; either do it yourself or use tools like Liquibase or Flyway to do it in a more controlled way.
Also see https://stackoverflow.com/questions/8418814/db-migration-tool-liquibase-or-flyway
You can always execute schema-drop-mysql.sql yourself; as a workaround you could add a @PreDestroy method to your @Configuration class which executes this script. (Maybe you could even add this to a @Configuration class which is enabled only in a dev mode/profile.)
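A sketch of what that could look like (the class name BatchSchemaCleanup is hypothetical; ResourceDatabasePopulator and DatabasePopulatorUtils are standard Spring JDBC helpers, and the script path is the one from the logs above):

import javax.annotation.PreDestroy;
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

@Configuration
public class BatchSchemaCleanup {

    @Autowired
    private DataSource dataSource;

    // Executed when the application context shuts down, so the next startup
    // can re-create the Spring Batch tables from scratch.
    @PreDestroy
    public void dropBatchTables() {
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
        populator.addScript(new ClassPathResource(
                "org/springframework/batch/core/schema-drop-mysql.sql"));
        DatabasePopulatorUtils.execute(populator, dataSource);
    }
}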

Memory Leak in Grails with MongoDB

I've found a strange issue when saving or updating several objects in Grails with MongoDB. Currently I'm using Grails 2.2.3 and MongoDB plugin 1.3.0.
The problem seems to be that the instances of MiUsuario are never garbage-collected, not even when I manually trigger GC. In our main application we don't do batch updates, but when doing load tests (with JMeter, monitoring the JVM with Java VisualVM) this problem fills up memory and Tomcat stops responding.
I've created a small new application to show the problem.
A simple domain object:
import org.bson.types.ObjectId

class MiUsuario {
    ObjectId id
    String nickName
}
My controller:
import pruebasrendimiento.Prueba

class MiUsuarioController {
    def doLogin(String privateKey, String id) {
        MiUsuario user = MiUsuario.get(id)
        user.nickName = new Random().nextInt().toString()
        user.save(failOnError: true)
        render 'ok'
    }
}
My BuildConfig (just the dependencies and plugins parts):
dependencies {
}

plugins {
    // runtime ":hibernate:$grailsVersion"
    runtime ":jquery:1.8.3"
    runtime ":resources:1.2"
    build ":tomcat:$grailsVersion"
    // runtime ":database-migration:1.3.2"
    // compile ':cache:1.0.1'
    runtime ":mongodb:1.3.0"
}
I've also tried something that Burt suggested a long time ago (http://burtbeckwith.com/blog/?p=73), but DomainClassGrailsPlugin.PROPERTY_INSTANCE_MAP.get().clear() doesn't make any difference. And the other option mentioned on that page, RequestContextHolder.resetRequestAttributes(), gives me an exception.
I had a similar problem and it was solved by upgrading to Grails 2.3.1. Try it.

Squeryl 0.9.5 (with Lift 2.4) not releasing database connections/pools

Following the recommended transaction setup for Squeryl, in my Boot.scala:
import java.sql.DriverManager
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import org.squeryl.adapters.H2Adapter

SquerylRecord.initWithSquerylSession(Session.create(
    DriverManager.getConnection("jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1", "sa", ""),
    new H2Adapter
))
The first startup works fine: I can connect via H2's web interface and, if I use my app, it updates the database appropriately. However, if I restart Jetty without restarting the JVM, I get:
java.sql.SQLException: No suitable driver found for jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1
I get the same result if I replace "DB_CLOSE_DELAY=-1" with "AUTO_SERVER=TRUE", or remove it entirely.
Following the recommendations on the Squeryl list, I tried C3P0:
import com.mchange.v2.c3p0.ComboPooledDataSource

val cpds = new ComboPooledDataSource
cpds.setDriverClass("org.h2.Driver")
cpds.setJdbcUrl("jdbc:h2:lift_proto")
cpds.setUser("sa")
cpds.setPassword("")

org.squeryl.SessionFactory.concreteFactory =
    Some(() => Session.create(cpds.getConnection, new H2Adapter()))
This produces similar behavior:
WARNING: A C3P0Registry mbean is already registered. This probably means that an application using c3p0 was undeployed, but not all PooledDataSources were closed prior to undeployment. This may lead to resource leaks over time. Please take care to close all PooledDataSources.
To be sure it wasn't anything I was doing which was causing this, I started and stopped the server without calling a transaction { } block. No exceptions were thrown. I then added to my Boot.scala:
transaction { /* Do nothing */ }
And the exception was once again thrown (I'm assuming because connections are lazy). So I moved the db initialization code to its own file away from Lift:
SessionFactory.concreteFactory = Some(() =>
    Session.create(
        java.sql.DriverManager.getConnection("jdbc:h2:mem:test", "sa", ""),
        new H2Adapter
    ))

transaction {}
Results were unchanged. What am I doing wrong? I cannot find any mention of needing to explicitly close connections or sessions in the Squeryl documentation, and this is my first time using JDBC.
I found mention of the same issue here on the Lift google group, but no resolution.
Thanks for any help.
When you say you are restarting Jetty, I think what you're actually doing is reloading your webapp within Jetty. Neither the h2 database nor C3P0 will automatically shut down when your app reloads, which explains the errors you are receiving when Lift tries to initialize them a second time. You don't see the error when you don't create a transaction block because both h2 and C3P0 are initialized when the first DB connection is retrieved.
I tend to use BoneCP as a connection pool myself. You can configure the minimum number of pooled connections to be > 1, which will stop h2 from shutting down without the need for DB_CLOSE_DELAY=-1. Then you can use:
LiftRules.unloadHooks append { () =>
    yourPool.close() // should destroy the pool and its associated threads
}
That will close all of the connections when Lift is shut down, which should properly shut down the h2 database as well.
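For completeness, the pool setup referred to above might look like this - a sketch in Java with assumed names (the Scala version in Boot.scala is a direct transliteration; the JDBC URL and credentials are the ones from the question, and setMinConnectionsPerPartition is BoneCP's setting for the minimum pool size):

import com.jolbox.bonecp.BoneCPDataSource;

public class PoolSetup {
    // Hypothetical setup: keeping at least one pooled connection open stops h2
    // from shutting down between requests, without needing DB_CLOSE_DELAY=-1.
    public static BoneCPDataSource createPool() {
        BoneCPDataSource pool = new BoneCPDataSource();
        pool.setDriverClass("org.h2.Driver");
        pool.setJdbcUrl("jdbc:h2:lift_proto");
        pool.setUsername("sa");
        pool.setPassword("");
        pool.setMinConnectionsPerPartition(2);
        pool.setMaxConnectionsPerPartition(10);
        return pool;
    }
}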