Ebean Annotations - Using sequences to generate IDs in DB2

I'm trying to use sequences to generate incremented IDs for my tables in DB2. It works when I send SQL statements directly to the database, but when using ebean the statement fails. Here's the field in Java:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "TABLENAME_IDNAME_TRIG")
@SequenceGenerator(name = "TABLENAME_IDNAME_TRIG", sequenceName = "TABLENAME_IDNAME_SEQ")
@Column(name = "IDNAME")
private Long id;
Here's the column in SQL (From TOAD):
Name: IDNAME, Data type: INTEGER, Not Null: Yes, Generated: No, Identity: No
And here's the sequence definition in SQL:
CREATE OR REPLACE SEQUENCE SCHEMA.TABLENAME_IDNAME_SEQ
AS INTEGER CACHE 50 ORDER;
And the trigger:
CREATE OR REPLACE TRIGGER SCHEMA.TABLENAME_IDNAME_TRIG
NO CASCADE BEFORE INSERT
ON TABLENAME
REFERENCING
NEW AS OBJ
FOR EACH ROW
BEGIN
SET obj.IDNAME=NEXT VALUE FOR SCHEMA.TABLENAME_IDNAME_SEQ;
END;
What is the issue with my annotations here? As an important side note: when I set the GenerationType to AUTO, TABLE, or IDENTITY, it works, even though it shouldn't, because I'm also using this object to map a parallel Oracle table that likewise uses sequences for ID generation.
Edited to include error message:
javax.persistence.PersistenceException: Error getting sequence nextval
...
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-348, SQLSTATE=428F9, SQLERRMC=NEXTVAL FOR SCHEMA.TABLENAME_IDNAME_SEQ, DRIVER=4.19.49
EDIT 2: The specific Sql statement that is failing is:
values nextval for QA_CONNECTION_ICONNECTIONI_SEQ union values nextval for QA_CONNECTION_ICONNECTIONI_SEQ union values nextval for QA_CONNECTION_ICONNECTIONI_SEQ
This is SQL generated by Ebean. It's a smaller version of the real statement, in which the nextval clause is repeated 20 times, so I'm guessing something goes wrong when Ebean generates the sequence-caching query.
EDIT 3: I believe this might be a bug in Ebean's handling of DB2 sequences. This constructor builds SQL that returns an error for me when used with DB2:
public DB2SequenceIdGenerator(BackgroundExecutor be, DataSource ds, String seqName, int batchSize) {
super(be, ds, seqName, batchSize);
this.baseSql = "values nextval for " + seqName;
this.unionBaseSql = " union " + baseSql;
}
EDIT 4: Based on this SO link I think it is a bug.
Can't insert multiple values into DB2 by using UNION ALL and generate IDs from sequence
The correct class probably looks like this? Though I haven't ever tried building the library, so I couldn't test it. Time to learn how to open a defect I guess.
public class DB2SequenceIdGenerator extends SequenceIdGenerator {
private final String baseSql;
private final String unionBaseSql;
private final String startSql;
public DB2SequenceIdGenerator(BackgroundExecutor be, DataSource ds, String seqName, int batchSize) {
super(be, ds, seqName, batchSize);
this.startSql = "values ";
this.baseSql = "(nextval for " + seqName + ")";
this.unionBaseSql = ", " + baseSql;
}
public String getSql(int batchSize) {
StringBuilder sb = new StringBuilder();
sb.append(startSql);
sb.append(baseSql);
for (int i = 1; i < batchSize; i++) {
sb.append(unionBaseSql);
}
return sb.toString();
}
}
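
For anyone who wants to check the two statement shapes against their own DB2 instance before patching or filing the defect, here is a minimal JDBC sketch; the connection URL, credentials, and sequence name are placeholders, and the two strings mirror what the current and the proposed generator would build for a batch of 3:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceSqlCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - adjust for your environment
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/MYDB", "user", "password");
             Statement st = con.createStatement()) {

            // Shape the current generator builds (the one that failed with SQLCODE=-348 for me)
            String unionSql = "values nextval for SCHEMA.TABLENAME_IDNAME_SEQ"
                    + " union values nextval for SCHEMA.TABLENAME_IDNAME_SEQ"
                    + " union values nextval for SCHEMA.TABLENAME_IDNAME_SEQ";

            // Row-list shape the proposed generator would build
            String rowListSql = "values (nextval for SCHEMA.TABLENAME_IDNAME_SEQ),"
                    + " (nextval for SCHEMA.TABLENAME_IDNAME_SEQ),"
                    + " (nextval for SCHEMA.TABLENAME_IDNAME_SEQ)";

            for (String sql : new String[] { unionSql, rowListSql }) {
                try (ResultSet rs = st.executeQuery(sql)) {
                    // Each returned row is one sequence value
                    while (rs.next()) {
                        System.out.println(rs.getLong(1));
                    }
                } catch (Exception e) {
                    System.out.println("Failed: " + sql + " -> " + e.getMessage());
                }
            }
        }
    }
}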

Temporary workaround for those interested: in ebean.properties, set
ebean.databaseSequenceBatchSize=1

Related

@Enumerated Mapping with PostgreSQL Enum

I created a simple entity called Agent that has an enumerated category. I already know that JPA will not map this enum to a PostgreSQL enum type on its own, so I tried to force the mapping.
What I Have:
Java parts: on the Java side we've defined the Agent.java entity and the category enum class.
Agent.java
@Entity
public class Agent implements Serializable {
    private static final long serialVersionUID = 1L;
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(length = 50, nullable = false)
    private String code;
    @Column(length = 50, nullable = false)
    private String first_name;
    @Column(length = 50, nullable = false)
    private String family_name;
    @Enumerated(EnumType.STRING)
    @Column(nullable = false)
    private CategoryEn category;
}
CategoryEn.java
public enum CategoryEn {
CUSTOMER,
PROVIDER,
DRIVER
}
Sql Forcing:
CREATE TYPE category_enum AS ENUM ('CUSTOMER','PROVIDER','DRIVER');
CREATE FUNCTION dummy_cast(varchar) RETURNS category_enum AS $$
SELECT CASE $1
WHEN 'CUSTOMER' THEN 'CUSTOMER'::category_enum
WHEN 'PROVIDER' THEN 'PROVIDER'::category_enum
WHEN 'DRIVER' THEN 'DRIVER'::category_enum
END;
$$ LANGUAGE SQL;
CREATE CAST (varchar AS category_enum) WITH FUNCTION dummy_cast(varchar) AS ASSIGNMENT;
ALTER TABLE public.agent
ALTER COLUMN category
SET DATA TYPE category_enum
USING category::text::category_enum;
Up to here, everything works fine, but when I try to execute this query in the AgentFacade:
String jpql ="SELECT a FROM Agent a"
+ " WHERE a.category = :cat";
Query query = em.createQuery(jpql);
query.setParameter("cat", CategoryEn.DRIVER);
I get the following error:
Caused by: javax.persistence.PersistenceException: Exception [EclipseLink-4002]
(Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd):
org.eclipse.persistence.exceptions.DatabaseException Internal
Exception: org.postgresql.util.PSQLException: ERROR: operator does not exist: category_enum = character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts
My questions are:
Why am I getting this error?
Can I solve it? How?
Why doesn't JPA have a built-in way to automatically map a Java enum to an SQL enum type?
PS: I've already seen almost all of the Stack Overflow questions/answers that are similar to this topic.
You are getting this error because your driver/ORM is most likely binding that parameter as varchar.
You could create an operator for that comparison:
CREATE OR REPLACE FUNCTION texteq(
category_enum,
text)
RETURNS boolean AS $q$ SELECT texteq($1::text, $2) $q$
LANGUAGE SQL IMMUTABLE STRICT
COST 1;
CREATE OPERATOR =(
PROCEDURE = texteq,
LEFTARG = category_enum,
RIGHTARG = text,
COMMUTATOR = =,
NEGATOR = <>,
RESTRICT = eqsel,
JOIN = eqjoinsel,
HASHES,
MERGES);
I didn't test whether it actually works for JOIN merges/hashes, but a simple comparison looks fine.
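
If changing the database isn't an option, a possible alternative on the Java side is to drop down to a native query and cast the bound value explicitly. This is only a sketch, assuming the entity is named Agent and the column is category, and I haven't verified it against EclipseLink:

// Bind the enum's name as a plain string and let Postgres cast it to category_enum itself
Query q = em.createNativeQuery(
        "SELECT * FROM agent WHERE category = CAST(? AS category_enum)", Agent.class);
q.setParameter(1, CategoryEn.DRIVER.name());
List<Agent> drivers = q.getResultList();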

Inserting columns in Sqlite database using android studio

How do I insert columns into an SQLite database using Android Studio, and how do I set some of them as autoincrement and primary keys?
If you define the column type as INTEGER PRIMARY KEY and you do not specify a value when inserting a row, then the column will be assigned an unused 64-bit signed integer, usually one greater than the largest existing value.
You very likely do not need the AUTOINCREMENT keyword (it only guarantees that row IDs are never reused, which is tracked via the extra sqlite_sequence table and therefore adds overhead when determining the new integer).
Frequently _id is used as the column name, so you will often see
db.execSQL("CREATE TABLE tablename (_id INTEGER PRIMARY KEY, column_name column_type, ...more column_name / column_types as required...);");
As an example, the following code uses a subclass of SQLiteOpenHelper (not required, but frequently used), which requires an onCreate method (called when the database is created, e.g. the very first time the helper is used) and an onUpgrade method (required, and invoked when the version number is increased).
This code will create, if need be, a database called mydb.
Note that "if need be" means just once (with some rare exceptions) unless the database is deleted, i.e. onCreate is not called every time an instance of the helper is constructed; it is only called when the database file itself doesn't exist (again, rare exceptions apply).
When creating the database, it will also create a table named testfloat. The table will consist of 2 columns, namely _id and myfloat.
The _id column type is INTEGER PRIMARY KEY; if no value is specified for the column when inserting a row, it will be given a unique incrementing integer (1 at first, then 2, and so on).
The myfloat column is of type FLOAT (perhaps check out Datatypes In SQLite Version 3 for SQLite's flexibility regarding datatypes); if a value isn't given when inserting a row, a default of 0.0 will be used.
public class MyDBHelper extends SQLiteOpenHelper {
    public static final String DBName = "mydb";
    public static final int DBVersion = 1;
    public static final String TESTFLOATTABLE = "testfloat";
    public static final String STDIDCOL = "_id INTEGER PRIMARY KEY";
    public static final String MYFLOATCOL = "myfloat";
    public static final String MYFLOATTYPE = " FLOAT DEFAULT 0.0";

    public MyDBHelper(Context context) {
        super(context, DBName, null, DBVersion);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // Called only when the database file is first created
        db.execSQL("CREATE TABLE " +
                TESTFLOATTABLE +
                "(" +
                STDIDCOL +
                "," +
                MYFLOATCOL +
                MYFLOATTYPE +
                ")");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    }

    // Inserts a row and returns the new row's id (or -1 on failure)
    public long insertRow(double myfloatvalue) {
        long rv;
        SQLiteDatabase db = getWritableDatabase();
        ContentValues cv = new ContentValues();
        cv.put(MYFLOATCOL, myfloatvalue);
        rv = db.insert(TESTFLOATTABLE, null, cv);
        return rv;
    }

    // Returns a Cursor containing all rows and all columns of the table
    public Cursor getAllMyFloats() {
        Cursor rv;
        SQLiteDatabase db = getReadableDatabase();
        rv = db.query(TESTFLOATTABLE, null, null, null, null, null, null);
        return rv;
    }
}
The code above has an empty onUpgrade method. Additionally there are two methods: insertRow (to insert a row) and getAllMyFloats (to return a Cursor containing, in this case, all rows and all columns).
In your invoking activity you could do something along the lines of :-
MyDBHelper mydbhelper = new MyDBHelper(this);
mydbhelper.insertRow(1.3);
mydbhelper.insertRow(1);
mydbhelper.insertRow(5.674389123459834);
The first line gets a MyDBHelper instance, which, when first run, will create the database and in doing so call the onCreate method, thus creating the table.
The next three lines invoke the insertRow method, which will cause 3 rows to be added; the first will have 1 in the _id column, the next 2, and so on. (Note that if the app is rerun, 3 additional rows will be added each time - the code is for demonstration only.)
The following code (which follows on from the code above) demonstrates obtaining and interrogating a Cursor:-
Cursor getfloats = mydbhelper.getAllMyFloats();
Log.d("TESTFLOAT","Rows returned from getALlFloats = " + getfloats.getCount());
while (getfloats.moveToNext()) {
Log.d("TESTFLOAT","Via getString = " + getfloats.getString(getfloats.getColumnIndex(mydbhelper.MYFLOATCOL)));
Log.d("TESTFLOAT","Via getFloat = " + Float.toHexString(
getfloats.getFloat(
getfloats.getColumnIndex(
mydbhelper.MYFLOATCOL
)
)
));
Log.d("TESTFLOAT","Via getDouble = " +Double.toString(
getfloats.getDouble(
getfloats.getColumnIndex(
mydbhelper.MYFLOATCOL
)
)));
}
getfloats.close();
The first line invokes getAllMyFloats, which returns a Cursor.
The next line writes a log message that details how many rows are in the resultant Cursor.
The while loop traverses the Cursor if it contains any rows. For each row it gets the value of the myfloat column using several of the Cursor get????? methods (to demonstrate how getting the value is affected by each). Note that instead of getfloats.get????(getfloats.getColumnIndex(column)), getfloats.get????(1) would also work.

Postgres insert record with Sequence generates error - org.postgresql.util.PSQLException: ERROR: relation "dual" does not exist

I am new to Postgres database.
I have a Java Entity class with the below column for ID:
@Entity
@Table(name = "THE_RULES")
public class TheRulesEntity {
    /** The id. */
    @Column(name = "TEST_NO", precision = 8)
    @SequenceGenerator(name = "test_no_seq", sequenceName = "TEST_NO_SEQ")
    @GeneratedValue(generator = "test_no_seq", strategy = GenerationType.AUTO)
    @Id
    private Long id;
    /** The test val. */
    @Column(name = "TEST_VAL", nullable = false, length = 3)
    private String testVal;
Code:
rulesRepository.saveAndFlush(theRulesEntity);
Table:
CREATE TABLE THE_RULES
(
TEST_NO INT NOT NULL,
TEST_VAL VARCHAR(3) NOT NULL
)
CREATE SEQUENCE "TEST_NO_SEQ" START WITH 1000 INCREMENT BY 1;
When I try to insert a new record into the Postgres database from my application (the ID value is null in the Java code in debug mode), I get the error below:
Caused by: org.postgresql.util.PSQLException: ERROR: relation "dual" does not exist
But if I insert the record manually into the database table and then update that record from my application, the update succeeds (probably because the application reuses the same ID value, so it no longer needs to ask the sequence TEST_NO_SEQ for a new one).
It looks like the sequence is being read via a dual table, which Postgres doesn't have.
Could anyone help me fix this?
Thanks.
Thanks to Joop and a_horse_with_no_name, the issue is resolved:
I had been using the Oracle driver, which was wrong; I updated my configuration to use the Postgres driver.
I recreated the sequence in the database with the same name but without the quotes.
I used all upper-case letters in my Java entity class so that the sequence is referenced correctly.
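
For anyone landing here later, a hedged sketch of what the ID mapping might look like after those fixes. Switching the strategy from AUTO to SEQUENCE and pinning allocationSize are my additions, not part of the original resolution (the sequence increments by 1, while the JPA default allocationSize is 50, so leaving them mismatched can cause gaps or collisions depending on the provider):

@Id
@Column(name = "TEST_NO", precision = 8)
// The sequence was recreated without quotes, so Postgres folds the name to lower case
// and an unquoted reference from the generated SQL resolves to it either way.
@SequenceGenerator(name = "test_no_seq", sequenceName = "TEST_NO_SEQ", allocationSize = 1)
@GeneratedValue(generator = "test_no_seq", strategy = GenerationType.SEQUENCE)
private Long id;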

Cannot drop a constraint in MS ACCESS

When using the SQL command :
ALTER TABLE [Sessions] DROP CONSTRAINT [SessionAttendance]
I get the exception error message "Could not find reference."
The constraint exists, and shows in the system table of constraints for this user table. How can I get this constraint to drop?
The database is in MS Access 2003 format and the application uses JET 4.0. I have several hundred instances that will need schema updates. I have a utility program to generate the SQL, but it falls over when attempting the DROP CONSTRAINT action.
Answered by following the implications of Gord Thompson's comment suggestions.
The ALTER statement was being applied to the wrong table in the relation: the constraint was originally added to the Attendance table, so the DROP CONSTRAINT must be issued against Attendance rather than Sessions.
However, the constraint shows up as an attribute of the Sessions table when using the "GetOleDbSchemaTable" method to list relations.
Per the following code excerpt:
Structure Relation
Public Name As String
Public PrimaryTableName As String
Public PrimaryField As String
Public PrimaryIndex As String
Public ForeignTable As String
Public ForeignField As String
Public OnUpdate As String
Public OnDelete As String
Public Overrides Function ToString() As String
Dim msg As String = String.Format("Name:{0} PT:{1} PF:{2} PI:{3} FT:{4} FF:{5}", _
Name, PrimaryTableName, PrimaryField, PrimaryIndex, ForeignTable, ForeignField)
Return msg
End Function
End Structure
Private Function ListRelations(tableName As String) As List(Of Relation)
Dim relations As New List(Of Relation)
Dim MySchemaTable As DataTable
Dim dbConn As New OleDbConnection(connectionString)
dbConn.Open()
MySchemaTable = dbConn.GetOleDbSchemaTable(OleDbSchemaGuid.Foreign_Keys, _
New Object() {Nothing, Nothing, tableName})
Dim result As Boolean = False
'List the table name from each row in the schema table.
For Each row As DataRow In MySchemaTable.Rows
Dim r As New Relation
r.Name = row("FK_NAME")
r.PrimaryTableName = row("PK_TABLE_NAME")
r.PrimaryField = row("PK_COLUMN_NAME")
r.PrimaryIndex = row("PK_NAME")
r.ForeignTable = row("FK_TABLE_NAME")
r.ForeignField = row("FK_COLUMN_NAME")
r.OnUpdate = row("UPDATE_RULE")
r.OnDelete = row("DELETE_RULE")
Console.WriteLine(r.ToString)
relations.Add(r)
Next
MySchemaTable.Dispose()
dbConn.Close()
dbConn.Dispose()
Return relations
End Function

JPA native select followed by native update .. fires an additional update

I am trying the following, which results in an additional UPDATE being executed and fails my tests.
I have an entity like this.
@Entity
@SqlResultSetMapping(name = "tempfilenameRSMapping",
    entities = { @EntityResult(entityClass = MyEntity.class) },
    columns = { @ColumnResult(name = "TEMPFILENAME") })
// The reason for this mapping is to fetch an additional field through a join.
@Table(name = "MY_TABLE")
public class MyEntity {
    @Id
    @Column(name = "ID")
    private String id;
    @Column(name = "NAME")
    private String name;
    @Column(name = "DESC")
    private String description;
    @Column(name = "STATUS")
    private String status;
    // followed by getters and setters
}
I retrieve the entity with a native query, and for the retrieved entity I then execute a native update (the reason for the native update is that I want to update just one field). Note that I am not updating the retrieved entity directly.
What I observe is that my update does not take effect. When I turn TRACE on, I notice that on flush OpenJPA executes an additional UPDATE and therefore overwrites my original update.
e.g.
SELECT M.ID, M.NAME, M.DESC, O.TEMPFILENAME FROM MY_TABLE M, OTHER_TABLE O WHERE M.ID = ?
UPDATE MY_TABLE SET STATUS = ? WHERE ID = ?
UPDATE MY_TABLE SET ID=?, NAME=?, DESC=?, STATUS=? WHERE ID = ?
What can I do to prevent the automatic update?
Edit:
Here are the routines we use for executing the queries.
The following routine returns the SQL of a named native query.
public String getNamedNativeQuerySql(EntityManagerFactory emf, String qryName) {
MetamodelImpl metamodel = (MetamodelImpl) emf.getMetamodel();
QueryMetaData queryMetaData =
metamodel.getConfiguration().getMetaDataRepositoryInstance().getQueryMetaData(null, qryName, null, true);
String queryString = queryMetaData.getQueryString();
return queryString;
}
The code for retrieval:
Query query = entityManager.createNamedQuery("retrieveQry");
query.setParameter(1, id);
Object[] result = (Object[]) query.getSingleResult();
MyEntity entity = (MyEntity) result[0];
String tempFileName = (String) result[1];
The code for update that follows retrieval:
Query qry = entityManager.createNamedQuery("updateQry");
qry.setParameter(1, status);
qry.setParameter(2, entity.getId() );
qry.executeUpdate();
Edit:
I see the problem even without the update statement; OpenJPA executes an additional UPDATE even if I do a simple find.
The problem was with runtime enhancement: OpenJPA was unable to properly detect the dirty state of runtime-enhanced entities.
It was resolved by switching to build-time enhancement.
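
As a side note for anyone who cannot move to build-time enhancement right away: since the extra UPDATE comes from the flush of the (spuriously) dirty managed entity, detaching the entity before issuing the native update should stop the provider from writing its snapshot back. This is only a workaround sketch using standard JPA calls, and I haven't verified it against OpenJPA's runtime enhancement:

// Retrieve as before via the named native query
Query query = entityManager.createNamedQuery("retrieveQry");
query.setParameter(1, id);
Object[] result = (Object[]) query.getSingleResult();
MyEntity entity = (MyEntity) result[0];

// Detach the entity so a later flush cannot re-issue a full-row UPDATE from its state
entityManager.detach(entity);

// Now run the single-column native update
Query qry = entityManager.createNamedQuery("updateQry");
qry.setParameter(1, status);
qry.setParameter(2, entity.getId());
qry.executeUpdate();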