I'm working on a project where we recently changed our persistence provider from OpenJPA to EclipseLink. This is a big, old application where we also do SQL inserts from other processes that are not feasible to migrate to JPA for the time being.
We use @TableGenerator to reference a table where we keep track of the ids to use on inserts.
When we used OpenJPA, we noticed that it first selects the next id from the table, then updates the table to preallocate the next ids. This is exactly how the old SQL process gets and preallocates ids.
When we switched to EclipseLink, we noticed the opposite behavior: it updates the table to preallocate the next ids, and then begins doing inserts. This causes a java.sql.SQLIntegrityConstraintViolationException, because the last preallocated id has already been used by a non-JPA process to insert a new record; when the JPA process reaches that id, the database rejects the insert, complaining that the id is already in use.
Is there a way to tell EclipseLink to handle pre-allocation the way OpenJPA does?
Here are samples of the pre-allocation strategy from OpenJPA vs. EclipseLink; for these examples I have set the allocationSize to 5.
OpenJPA:
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> executing prepstmnt 9137209 SELECT NEXT_ID FROM ABC.table_ids WHERE TABLE_ID = ? FOR UPDATE [params=(String) 1034] [reused=0]
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> [94 ms] spent
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> executing prepstmnt 23999306 UPDATE ABC.table_ids SET NEXT_ID = ? WHERE TABLE_ID = ? AND NEXT_ID = ? [params=(long) 55, (String) 10, (long) 50] [reused=0]
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> [93 ms] spent
EclipseLink:
[EL Fine]: 2013-01-23 14:08:35.875--ClientSession(6215763)--Connection(10098848)--Thread(Thread[main,5,main])--UPDATE table_ids SET next_id = next_id + ? WHERE table_id = ?
bind => [5, 10]
[EL Fine]: 2013-01-23 14:08:36.0--ClientSession(6215763)--Connection(10098848)--Thread(Thread[main,5,main])--SELECT next_id FROM table_ids WHERE table_id = ?
bind => [10]
Thanks in advance!
The issue isn't the order of the update versus the select, but the interpretation of whether the value read back is the current value or not. EclipseLink assumes it gets the current value, whereas OpenJPA appears not to.
Ideally you would be able to change the non-JPA usages to make the same assumption. If you cannot, you could write your own custom Sequence object in EclipseLink.
To do this, create a subclass of TableSequence and override the buildSelectQuery() method to add a "+ 1" (or "- 1") to the SQL, to account for the difference in assumption.
You can then add your custom sequence using a SessionCustomizer.
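For illustration, here is a minimal sketch of what that could look like (untested; the sequence name, the "- 1" adjustment, and the registration details are assumptions to be verified against your mapping):

import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.queries.ValueReadQuery;
import org.eclipse.persistence.sequencing.TableSequence;
import org.eclipse.persistence.sessions.Session;

// Shifts the value read back so EclipseLink interprets the stored NEXT_ID
// the way OpenJPA does ("next free id" rather than "last allocated id").
public class OpenJpaCompatibleSequence extends TableSequence {

    public OpenJpaCompatibleSequence(String name, int allocationSize) {
        super(name, allocationSize);
    }

    @Override
    protected ValueReadQuery buildSelectQuery() {
        ValueReadQuery query = new ValueReadQuery();
        query.addArgument(getNameFieldName());
        // The "- 1" compensates for the difference in assumption described above.
        query.setSQLString("SELECT " + getCounterFieldName() + " - 1 FROM "
                + getTableName() + " WHERE " + getNameFieldName()
                + " = #" + getNameFieldName());
        return query;
    }
}

// In its own source file: register the sequence, e.g. by setting the
// eclipselink.session.customizer property to this class in persistence.xml.
public class OpenJpaSequenceCustomizer implements SessionCustomizer {

    @Override
    public void customize(Session session) throws Exception {
        // The name must match how the @TableGenerator row is looked up
        // (the pkColumnValue, "1034" in the first OpenJPA trace line above).
        session.getLogin().addSequence(new OpenJpaCompatibleSequence("1034", 5));
    }
}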
Please also log a bug in EclipseLink to have a compatibility option added for OpenJPA sequencing.
Related
I am migrating an application from JPA 2.0 with OpenJPA on WebSphere 8.5 to JPA 2.1 with EclipseLink on WebSphere 9.0, using DB2 12 on z/OS. Generally it is working, but one rather complex query is failing. I was able to localize the problem to the use of a DB2 function call within a criteria query. The call looks something like this:
criteriaBuilder.function("REPLACE", String.class, fromMyEntity.get("myField"), criteriaBuilder.literal("a"), criteriaBuilder.literal("b"));
This produces the following error (I had to translate some error texts, since WebSphere localizes them, and to anonymize my field/table names, so labels/names might not be 100% exact):
Error: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.3.WAS-v20160414-bd51c70): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.ibm.websphere.ce.cm.StaleConnectionException: DB2 SQL Error: SQLCODE=-171, SQLSTATE=42815, SQLERRMC=2;REPLACE, DRIVER=3.72.44
Errorcode: -171
Call: SELECT COUNT(ID) FROM MY_TABLE WHERE REPLACE(MYFIELD, ?, ?) LIKE ?
bind => [abc, def, %g%]
Query: ReportQuery(referenceClass=MyEntity sql="SELECT COUNT(ID) FROM MY_TABLE WHERE REPLACE(MYFIELD, ?, ?) LIKE ?").
What really confuses me: if I take the generated query, replace the placeholders with the given bound parameters, and execute it in a database client myself, it works without error.
The documentation states that the first parameter must not be empty (https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_replace.html), and indeed if I use an empty string, either as a literal in the query or in my database client, it produces the above error. But none of the rows in the database contain an empty value. There are checks in place to prevent this in the old environment; they don't appear to work in the new environment, so I disabled them while searching for the problem and made sure myself that no empty values exist. I can even use the primary key as the first parameter, and it still fails, and that can't even contain an empty/null value.
Using other functions (like TRANSLATE) works. I also tried using "SYSIBM.REPLACE" as the name, and different combinations of parameters, but as soon as I use a real column as the string to replace in, it fails. Does anybody have any ideas what I am doing wrong here?
This is my table definition:
CREATE TABLE "MY_TABLE" (
"ID" INTEGER NOT NULL GENERATED BY DEFAULT AS IDENTITY (NO MINVALUE NO MAXVALUE NO CYCLE CACHE 20 NO ORDER ),
"MYFIELD" VARCHAR(160) FOR MIXED DATA WITH DEFAULT NULL,
[....]
) IN "<Database>"."<Tablespace>" PARTITION BY SIZE EVERY 4 G AUDIT NONE DATA CAPTURE NONE CCSID UNICODE;
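Since the statement succeeds when the parameter markers are replaced by literals, one quick experiment (a sketch, not a confirmed fix) is to disable parameter binding for this one query with EclipseLink's eclipselink.jdbc.bind-parameters hint, so that REPLACE receives literals instead of untyped markers:

import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

public class ReplaceQuerySketch {

    // Re-runs the failing count query with binding disabled; the entity and
    // field names are the anonymized ones from the question.
    static long countWithReplace(EntityManager em) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Long> cq = cb.createQuery(Long.class);
        Root<MyEntity> from = cq.from(MyEntity.class);
        cq.select(cb.count(from.get("id")))
          .where(cb.like(
              cb.function("REPLACE", String.class,
                  from.<String>get("myField"), cb.literal("a"), cb.literal("b")),
              "%g%"));
        return em.createQuery(cq)
                 .setHint("eclipselink.jdbc.bind-parameters", "false")
                 .getSingleResult();
    }
}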
<insert id="insert" parameterType="com.youneverwalkalone.cent.web.model.Category" useGeneratedKeys="true" keyProperty="id" keyColumn="id">
LOCK TABLE t_category WRITE;
UPDATE t_category SET rgt = rgt + 2 WHERE rgt &gt; #{parentNode.lft,jdbcType=BIGINT};
UPDATE t_category SET lft = lft + 2 WHERE lft &gt; #{parentNode.lft,jdbcType=BIGINT};
insert into t_category (
name, lft, rgt,
time_created, people_created,
state, type, project)
values (
#{record.name,jdbcType=VARCHAR}, #{parentNode.lft,jdbcType=BIGINT}+1, #{parentNode.lft,jdbcType=BIGINT}+2,
#{record.timeCreated,jdbcType=TIMESTAMP}, #{record.peopleCreated,jdbcType=BIGINT},
#{record.state,jdbcType=SMALLINT},#{record.type,jdbcType=VARCHAR},#{record.project,jdbcType=VARCHAR});
UNLOCK TABLES;
</insert>
Above is my code snippet. Calling this insert method produces errors.
My questions:
1) Does MyBatis support this syntax (multiple SQL statements in one method)?
2) If not, how should I handle this case?
1/ It is actually not related to MyBatis: if JDBC supports it and the database in use does (as well as the driver), then yes, you can do that with MyBatis.
As noted by Gabriele Coletta, the question "MyBatis executing multiple sql statements in one go, is that possible?" contains the answer.
As you will see, the syntax differs across database types (MySQL, MS SQL, Oracle); a sketch for MySQL follows below.
2/ Moot, since the answer to 1/ is yes.
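For example, with MySQL the JDBC driver must be told to allow multi-statement SQL. A minimal sketch (connection details are placeholders; with MyBatis you would put the same flag on the datasource URL):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MultiStatementSketch {

    public static void main(String[] args) throws Exception {
        // allowMultiQueries=true lets Connector/J send several ;-separated
        // statements in a single execute() call.
        String url = "jdbc:mysql://localhost:3306/mydb?allowMultiQueries=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             Statement stmt = conn.createStatement()) {
            stmt.execute(
                "UPDATE t_category SET rgt = rgt + 2 WHERE rgt > 10;"
              + "UPDATE t_category SET lft = lft + 2 WHERE lft > 10");
        }
    }
}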
I am using length(ze.string) > 2 in an OpenJPA query, but I am getting
SQLCODE=-440, SQLSTATE=42884, SQLERRMC=CHAR_LENGTH;FUNCTION, DRIVER=3.53.95 {prepstmnt 1776269692 SELECT t0.f1, t0.f2, t0.f3, t0.f4, t0.f5, t0.f6, t0.f7, t0.f8, t0.f9, t0.f10, t0.f11, t0.f12, t0.f13, t0.f14, t0.f15, t0.f16, t0.f17 FROM table t0 WHERE (t0.f1 = ? AND CHAR_LENGTH(?) > ? AND .....
In a plain query, when I do the length operation I get records, but through JPA it's not working. I looked here and tried using size, but it doesn't work. The field is a varchar, and the database is DB2. I've been trying for the past hour.
DB2 requires use of the SQL function LENGTH, yet OpenJPA seems to be incorrectly converting your JPQL to use the SQL function CHAR_LENGTH (hence the error message; not that DB2 gives out clear messages saying what is wrong, and who knows what SQLCODE=-440 means without having to search!).
Raise a bug on your JPA provider.
See https://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000818.html
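Until the provider is fixed, a native query sidesteps the JPQL translation entirely. A sketch (table and column names are the anonymized ones from the trace above):

import java.util.List;
import javax.persistence.EntityManager;

public class LengthWorkaroundSketch {

    // Calls DB2's LENGTH directly instead of letting OpenJPA translate
    // the JPQL length() function.
    static List<?> rowsWithLongStrings(EntityManager em, String f1Value) {
        return em.createNativeQuery(
                "SELECT t0.f1 FROM table t0 "
              + "WHERE t0.f1 = ? AND LENGTH(t0.f2) > 2")
            .setParameter(1, f1Value)
            .getResultList();
    }
}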
You would need to give more details about your entity, persistence.xml, and query to get to the bottom of this. However, I do not see how OpenJPA would use CHAR_LENGTH instead of LENGTH for DB2. Let me explain. If you look at DBDictionary here:
https://svn.apache.org/viewvc/openjpa/branches/2.2.x/openjpa-jdbc/src/main/java/org/apache/openjpa/jdbc/sql/DBDictionary.java?view=markup
You can see it defines something called "stringLengthFunction" as follows:
public String stringLengthFunction = "CHAR_LENGTH({0})";
This is the string length function used for each individual dictionary (i.e. database config) unless overridden. However, for DB2, AbstractDB2Dictionary (see here):
https://svn.apache.org/viewvc/openjpa/branches/2.2.x/openjpa-jdbc/src/main/java/org/apache/openjpa/jdbc/sql/AbstractDB2Dictionary.java?view=markup
overrides this as follows:
stringLengthFunction = "LENGTH({0})";
Given this, for DB2, LENGTH should be used. I took the following simple query:
"select me.id from MyEntity me where length(me.name)>2"
And executed it on OpenJPA using DB2, and I got this:
SELECT t0.ID FROM MYENTITY t0 WHERE (CAST(LENGTH(t0.ID) AS BIGINT) > CAST(? AS BIGINT)) [params=(long) 2]
Thanks,
Heath Thomann
I am calling update statements one after the other from a servlet to DB2. I am getting an error with SQLSTATE 40001, reason code 68, which I found is due to a deadlock timeout.
How can I resolve this issue?
Can it be resolved by setting query timeout?
If yes, how do I use it with update statements in a servlet, and where?
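For reference, a JDBC statement timeout looks like the sketch below (table and column names are made up). Note that a timeout only aborts the waiting statement; it does not remove the underlying lock contention, which the answer below addresses:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UpdateWithTimeoutSketch {

    // Aborts the update if it waits longer than 10 seconds.
    static int updateStatus(Connection conn, long orderId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE orders SET status = 'SHIPPED' WHERE order_id = ?")) {
            ps.setQueryTimeout(10); // seconds
            ps.setLong(1, orderId);
            return ps.executeUpdate();
        }
    }
}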
Reason code 68 already tells you this is due to a lock timeout (deadlock is reason code 2). It could be due to other users running queries at the same time that use the same data you are accessing, or to your own multiple updates.
Begin by running db2pd -db locktest -locks show detail (substituting your database name for locktest) from a DB2 command line to see where the locks are. You'll then need to run something like:
select tabschema, tabname, tableid, tbspaceid
from syscat.tables where tbspaceid = # and tableid = #
filling in the # symbols with the tbspaceid and tableid numbers you get from the db2pd command output.
Once you see where the locks are, here are some tips:
Deadlock frequency can sometimes be reduced by ensuring that all applications access their common data in the same order – meaning, for example, that they access (and therefore lock) rows in Table A, followed by Table B, followed by Table C, and so on.
taken from: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.trb.doc/doc/t0055074.html
recommended reading: http://www.ibm.com/developerworks/data/library/techarticle/dm-0511bond/index.html
Addendum: if your servlet or another offending application issues select statements found to be involved in the deadlock, you can try appending WITH UR to those select statements, provided accuracy of the newly updated (or inserted) data isn't important.
For me, the solution was adding FOR READ ONLY WITH UR at the end of all my SELECT statements. (Apparently my select statements were returning so much data that they locked the tables long enough to interfere with other SQL statements.)
See https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html
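From JDBC the clause is simply appended to the statement text; a sketch (table and column names are placeholders, and the clause is DB2-specific):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UncommittedReadSketch {

    // Reads with uncommitted-read isolation, so the SELECT does not hold
    // locks that could block concurrent updates.
    static void printNames(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT name FROM customer FOR READ ONLY WITH UR");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}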
Using a SQL Server 2008 R2 November release database and a .NET 4.0 Beta 2 Azure worker role application. The worker role collects data and inserts it into a single SQL table with one identity column. Because there will likely be multiple instances of this worker role running, I created an INSTEAD OF INSERT trigger on the SQL table. The trigger performs upsert functionality using the SQL MERGE statement. Using T-SQL I was able to verify that the trigger functions correctly: new rows were inserted while existing rows were updated.
This is the code for my trigger:
Create Trigger [dbo].[trgInsteadOfInsert] on [dbo].[Cars] Instead of Insert
as
begin
set nocount On
merge into Cars as Target
using inserted as Source
on Target.id=Source.id AND target.Manufactureid=source.Manufactureid
when matched then
update set Target.Model=Source.Model,
Target.NumDoors = Source.NumDoors,
Target.Description = Source.Description,
Target.LastUpdateTime = Source.LastUpdateTime,
Target.EngineSize = Source.EngineSize
when not matched then
INSERT ([Manufactureid]
,[Model]
,[NumDoors]
,[Description]
,[ID]
,[LastUpdateTime]
,[EngineSize])
VALUES
(Source.Manufactureid,
Source.Model,
Source.NumDoors,
Source.Description,
Source.ID,
Source.LastUpdateTime,
Source.EngineSize);
End
Within the worker role I am using Entity Framework for an object model. When I call the SaveChanges method, I receive the following exception:
OptimisticConcurrencyException
Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries.
I understand this is likely due to SQL Server not reporting back a scope identity for each newly inserted/updated row. EF then thinks the rows were not inserted, and the transaction is ultimately not committed.
What is the best way to handle this exception? Maybe using OUTPUT from the SQL MERGE statement?
Thanks!
-Paul
As you suspected, the problem is that any insertions into a table with an Identity column are immediately followed by a select of the scope_identity() to populate the associated value in the Entity Framework. The instead of trigger causes this second step to be missed, which leads to the 0 rows inserted error.
I found an answer in this StackOverflow thread that suggested adding the following line at the end of your trigger (in the case where the item is not matched and the Insert is performed).
select [Id] from [dbo].[TableXXX] where @@ROWCOUNT > 0 and [Id] = scope_identity()
I tested this with Entity Framework 4.1, and it solved the problem for me. I have copied my entire trigger creation here for completeness. With this trigger definition I was able to add rows to the table by adding Address entities to the context and saving them using context.SaveChanges().
ALTER TRIGGER [dbo].[CalcGeoLoc]
ON [dbo].[Address]
INSTEAD OF INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT OFF;
-- Insert statements for trigger here
INSERT INTO Address (Street, Street2, City, StateProvince, PostalCode, Latitude, Longitude, GeoLoc, Name)
SELECT Street, Street2, City, StateProvince, PostalCode, Latitude, Longitude, geography::Point(Latitude, Longitude, 4326), Name
FROM Inserted;
select AddressId from [dbo].Address where @@ROWCOUNT > 0 and AddressId = scope_identity();
END
I had almost exactly the same scenario: Entity Framework-driven inserts to a view with an INSTEAD OF INSERT trigger on it were resulting in the "...unexpected number of rows (0)..." exception. Thanks to Ryan Gross's answer I fixed it by adding
SELECT SCOPE_IDENTITY() AS CentrePersonID;
at the end of my trigger, where CentrePersonID is the name of the key field of the underlying table that has an auto-incrementing identity. This way EF can discover the ID of the newly inserted record.