ERROR: relation "domain_event_entry" does not exist [Axon] - PostgreSQL

This problem occurred while learning Axon 3.x, using a PostgreSQL database (driver 42.5.0).
Hibernate: select nextval ('hibernate_sequence')
Hibernate: insert into domain_event_entry (event_identifier, meta_data, payload, payload_revision, payload_type, time_stamp, aggregate_identifier, sequence_number, type, global_index) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
2022-11-02 11:05:28.189 WARN 15368 --- [nio-8080-exec-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 42P01
2022-11-02 11:05:28.189 ERROR 15368 --- [nio-8080-exec-1] o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: relation "domain_event_entry" does not exist
Position: 13
2022-11-02 11:05:28.191 INFO 15368 --- [nio-8080-exec-1] o.h.e.j.b.internal.AbstractBatchImpl : HHH000010: On release of batch it still contained JDBC statements
I think the relevant tables should be created automatically. What is wrong with the configuration?

Given your current explanation, there's not that much to go on, #zzz.
I do have two pointers for you, though:
Please move to the latest Axon Framework release. Axon Framework is currently on 4.6.2, which is about four years ahead of the latest minor release of Axon Framework 3.
The main thing missing from your question is how you're configuring your application. Something is clearly off, but pinpointing it is rather tough without knowing what you're doing. That said, if you are in a Spring Boot environment, adding the axon-spring-boot-starter dependency should be sufficient.
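For instance, a bare application class plus that starter on the classpath is typically all the Axon-specific setup you need; Axon then registers its JPA event-store entities (DomainEventEntry among them) with your persistence unit. Here is a minimal sketch, not your actual setup; note that the table itself still has to exist, either created manually or generated by Hibernate (e.g. spring.jpa.hibernate.ddl-auto=update, which is an assumption about what's missing on your end):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Minimal sketch: axon-spring-boot-starter on the classpath auto-configures the
// JPA event store; creating the schema (or letting Hibernate generate it) is
// still your responsibility.
@SpringBootApplication
public class AxonDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(AxonDemoApplication.class, args);
    }
}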
What might be an easy stepping stone for you, #zzz, is using AxonIQ's Initializr. Through the Initializr, you can construct a basic Axon Framework application and add several dependencies you may need along the way.
You can find the Initializr here, by the way.

Related

REPLACE function via EclipseLink on DB2 z/OS

I am migrating an application from JPA 2.0 with OpenJPA on WebSphere 8.5 to JPA 2.1 with EclipseLink on WebSphere 9.0, using DB2 12 on z/OS. Generally it is working, but one rather complex query is failing. I could localize the problem to the use of a custom DB2 function call within a criteria query. The call looks something like this:
criteriaBuilder.function("REPLACE", String.class, fromMyEntity.get("myField"), criteriaBuilder.literal("a"), criteriaBuilder.literal("b"));
This produces the following error (I had to translate some error texts, since WebSphere localizes them, and to anonymize my field/table names, so labels/names might not be 100% exact):
Error: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.3.WAS-v20160414-bd51c70): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.ibm.websphere.ce.cm.StaleConnectionException: DB2 SQL Error: SQLCODE=-171, SQLSTATE=42815, SQLERRMC=2;REPLACE, DRIVER=3.72.44
Errorcode: -171
Call: SELECT COUNT(ID) FROM MY_TABLE WHERE REPLACE(MYFIELD, ?, ?) LIKE ?
bind => [abc, def, %g%]
Query: ReportQuery(referenceClass=MyEntity sql="SELECT COUNT(ID) FROM MY_TABLE WHERE REPLACE(MYFIELD, ?, ?) LIKE ?").
What really confuses me is that if I take the generated query, replace the placeholders with the given bound parameters, and execute it in a database client myself, it works without error.
The documentation states that the first parameter must not be empty (https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_replace.html), and indeed, if I use an empty string either as a literal in the query or in my database client, it produces the above error. But none of the rows in the database contain an empty value. There are checks in place to prevent this in the old environment; they don't appear to work in the new environment, so I disabled them while searching for the problem and made sure myself that no empty values exist. I can even use the primary key as the first parameter, and it still fails, even though that column can't contain an empty/null value.
Using other functions (like TRANSLATE) works. I also tried "SYSIBM.REPLACE" as the function name and different combinations of parameters, but as soon as I use a real column as the string to replace in, it fails. Does anybody have an idea what I am doing wrong here?
This is my table definition:
CREATE TABLE "MY_TABLE" (
"ID" INTEGER NOT NULL GENERATED BY DEFAULT AS IDENTITY (NO MINVALUE NO MAXVALUE NO CYCLE CACHE 20 NO ORDER ),
"MYFIELD" VARCHAR(160) FOR MIXED DATA WITH DEFAULT NULL,
[....]
) IN "<Database>"."<Tablespace>" PARTITION BY SIZE EVERY 4 G AUDIT NONE DATA CAPTURE NONE CCSID UNICODE;
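For completeness, the criteria query is built roughly like this (a reduced sketch with placeholder entity and field names, since the real ones are anonymized; the bind-parameters hint at the end is not in my real code, it is only an experiment to rule out DB2 rejecting untyped parameter markers, which would fit the fact that the query works once the literals are substituted in a database client):
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Expression;
import javax.persistence.criteria.Root;

// Placeholder entity mirroring the anonymized table above.
@Entity
@Table(name = "MY_TABLE")
class MyEntity {
    @Id
    Integer id;
    String myField;
}

public class ReplaceCriteriaExample {

    // Rebuilds the failing statement:
    // SELECT COUNT(ID) FROM MY_TABLE WHERE REPLACE(MYFIELD, ?, ?) LIKE ?
    public long countReplaced(EntityManager em) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Long> cq = cb.createQuery(Long.class);
        Root<MyEntity> root = cq.from(MyEntity.class);

        Expression<String> replaced = cb.function("REPLACE", String.class,
                root.get("myField"), cb.literal("a"), cb.literal("b"));

        cq.select(cb.count(root.get("id"))).where(cb.like(replaced, "%g%"));

        TypedQuery<Long> query = em.createQuery(cq);
        // Experiment only: ask EclipseLink to inline the literals instead of
        // binding them, to see whether untyped parameter markers are the problem.
        query.setHint("eclipselink.jdbc.bind-parameters", "false");
        return query.getSingleResult();
    }
}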

Capturing DBIC exception to prevent script from dying

I'm running a Perl script to update a currently working database with new datasets. Basically, the script receives molecule information as plain text (MDL, if you are interested) and performs several chemical property calculations through a bunch of third-party programs called via system.
The script has never had any problems processing data, but the last dataset seems to contain some molecules (or at least one, for what it's worth) for which some of the properties make no sense, and I end up with truncated data insertions, which ultimately causes a DBIC exception. Namely:
DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::mysql::st execute failed: Column 'inchi' cannot be null [for Statement "INSERT INTO moldata ( formula, hba, hbd, inchi, inchikey, mol_id, pm, psa, ro3pass, ro5vio, rtb, smiles, xlogp3aa) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )" with ParamValues: 0='I', 1='0', 2='0', 3=undef, 4='XMBWDFGMSWQBCA-YPZZEJLDSA-M', 5=936934, 6='125.00', 7='0', 8=6, 9=0, 10='0', 11='[125I-]', 12='0.86']
At this point, the program just dies. I can surely do some workarounds to avoid getting "undefs" into the inserts, but I'd like DBIC to be able to handle them and continue with the inserts, ignoring the bad ones (and maybe warning me about them later).
What would be the right way to implement a try/catch scenario in Perl for DBIC transactions?
Thanks!

How to adjust EclipseLink Preallocation strategy

I'm working on a project where we recently changed our persistence provider from OpenJPA to EclipseLink. This is a big, old application where we also do SQL inserts from other processes that are not feasible to migrate to JPA for the time being.
We use @TableGenerator to reference a table where we keep track of the ids to use on inserts.
When we used OpenJPA, we noticed that it first selects the next id from the table, then updates the table to preallocate the next ids. This is exactly the same way the old SQL process gets and preallocates its next ids.
When we switched to EclipseLink, we noticed the opposite behavior: it updates the table to preallocate the next ids, and then begins doing inserts. This is causing a java.sql.SQLIntegrityConstraintViolationException, because the last preallocated id has already been used by the non-JPA processes to insert a new record, so when the JPA process reaches that id, the database raises an error, claiming that we are trying to do an insert with an id that is already in use.
Is there a way to tell EclipseLink to handle preallocation the way OpenJPA does?
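For reference, the id mapping looks roughly like this (a reconstruction; the entity and generator names are placeholders, while the table and column names, pkColumnValue and allocationSize are taken from the logs below):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.TableGenerator;

@Entity
public class MyRecord {

    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "myTableGen")
    @TableGenerator(name = "myTableGen",
            table = "table_ids",
            pkColumnName = "table_id",
            valueColumnName = "next_id",
            pkColumnValue = "10",
            allocationSize = 5)
    private Long id;
}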
Here are some samples of the preallocation strategy from OpenJPA vs. EclipseLink; for these examples I have set the allocationSize to 5.
OpenJPA
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> executing prepstmnt 9137209 SELECT NEXT_ID FROM ABC.table_ids WHERE TABLE_ID = ? FOR UPDATE [params=(String) 1034] [reused=0]
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> [94 ms] spent
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> executing prepstmnt 23999306 UPDATE ABC.table_ids SET NEXT_ID = ? WHERE TABLE_ID = ? AND NEXT_ID = ? [params=(long) 55, (String) 10, (long) 50] [reused=0]
TEST TRACE [main] openjpa.jdbc.SQL - <t 22760146, conn 3658896> [93 ms] spent
EclipseLink:
[EL Fine]: 2013-01-23 14:08:35.875--ClientSession(6215763)--Connection(10098848)--Thread(Thread[main,5,main])--UPDATE table_ids SET next_id = next_id + ? WHERE table_id = ?
bind => [5, 10]
[EL Fine]: 2013-01-23 14:08:36.0--ClientSession(6215763)--Connection(10098848)--Thread(Thread[main,5,main])--SELECT next_id FROM table_ids WHERE table_id = ?
bind => [10]
Thanks in advance!
The issue isn't the order of the update versus the select, but how the selected value is interpreted. EclipseLink assumes it gets the current value, whereas OpenJPA appears not to.
Ideally you would be able to change the non-JPA usages to make the same assumption. If you cannot, you could write your own custom Sequence object in EclipseLink.
To do this, create a subclass of TableSequence and override the buildSelectQuery() method to add a "+ 1" (or "- 1") to the SQL, to account for the difference in assumption.
You can then register your custom sequence using a SessionCustomizer; a rough sketch follows below.
Please also log a bug in EclipseLink to have a compatibility option added for OpenJPA sequencing.
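Roughly, the customization could look as follows. This is an untested sketch: TableSequence, buildSelectQuery() and SessionCustomizer are existing EclipseLink APIs, but the exact SQL, the SEQ_NAME argument and the generator name are assumptions you will need to align with your own mapping.
import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.queries.ValueReadQuery;
import org.eclipse.persistence.sequencing.TableSequence;
import org.eclipse.persistence.sessions.Session;

public class OpenJpaStyleTableSequence extends TableSequence {

    public OpenJpaStyleTableSequence(String name, int allocationSize) {
        super(name, allocationSize);
    }

    @Override
    protected ValueReadQuery buildSelectQuery() {
        // By default the value selected after the UPDATE is treated as the last
        // id of the reserved block; selecting "next_id - 1" shifts the block so
        // the value left in the table stays free for the non-JPA writers.
        ValueReadQuery query = new ValueReadQuery();
        query.addArgument("SEQ_NAME");
        query.setSQLString("SELECT " + getCounterFieldName() + " - 1 FROM "
                + getTableName() + " WHERE " + getNameFieldName() + " = #SEQ_NAME");
        return query;
    }

    // Register the custom sequence through the eclipselink.session.customizer
    // persistence-unit property.
    public static class Customizer implements SessionCustomizer {
        @Override
        public void customize(Session session) throws Exception {
            // "MY_GENERATOR" is a placeholder; it must match your generator name.
            session.getLogin().addSequence(new OpenJpaStyleTableSequence("MY_GENERATOR", 5));
        }
    }
}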

Scala query generating invalid SQL

I'm using ScalaQuery to connect to both Oracle and Postgres servers.
This behaviour is occurring for both Oracle and Postgres, but the generated SQL is only valid (and still incorrect) in Postgres.
At some point, I'm running a query in ScalaQuery of the form:
row.foo.bind == parameter.foo || row.foo inSetBind parameter.foo.children
Parameter is a trait, which is known to have a foo in it.
The problem here is that out of the ~100 queries run, ScalaQuery only generates the correct SQL once, of the form:
...
WHERE row.foo = ? or row.foo in (?, ?, ?, ?, ?)
...
Most of the time it instead generates
...
WHERE row.foo = ? or false
...
Why is this happening inconsistently, is it a bug (I assume it is), and how do I work around it?
It turns out that the query was looking at an empty set, because parameter.foo had no children in most cases.
Given that WHERE row.foo IN () is not valid SQL, it was instead written out as false.
This still leaves the issue of false being generated even though the code targets Oracle (where it isn't valid SQL), but the root cause has now been cleared up.

How to enable SQL logging in OpenJPA?

I have a program written using Java and JPA. Right now I can see logs on the console as follows:
INSERT INTO XYZ (a,b,c) VALUES (?, ?, ?) [org.apache.openjpa.jdbc.kernel.JDBCStoreManager$CancelPreparedStatement]
I also want to see the values passed to the insert query in the log. How can I see them? I am using OpenJPA as the JPA provider.
Set openjpa.ConnectionFactoryProperties=PrintParameters=True.
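If it helps, the same thing can be set programmatically when bootstrapping the EntityManagerFactory. A minimal sketch (the persistence-unit name is a placeholder, and openjpa.Log is added here only to make sure SQL statements are logged at all):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaSqlLogging {

    public static EntityManagerFactory createFactory() {
        Map<String, String> props = new HashMap<>();
        // Log the SQL statements issued by OpenJPA
        props.put("openjpa.Log", "SQL=TRACE");
        // Include the bound parameter values in the logged statements
        props.put("openjpa.ConnectionFactoryProperties", "PrintParameters=true");
        return Persistence.createEntityManagerFactory("my-unit", props);
    }
}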