Warning from DataNucleus initialization: Database Adapter (JDBC driver) doesnt support specification of schema name in table definitions - datanucleus

I am getting the following warning during DataNucleus initialization. Can I ignore it?
[main] WARN DataNucleus.Datastore - Default Schema name "databasename" has been specified yet the Database Adapter (JDBC driver) doesnt support specification of schema name in table definitions !
Environment:
Datanucleus version: datanucleus-rdbms-5.1.0-release
JDK 8
mysql-connector-java-8.0.11
Relevant part of jdoconfig.xml:
<property name="javax.jdo.option.ConnectionDriverName" value="com.mysql.cj.jdbc.Driver"/>
<property name="javax.jdo.option.ConnectionURL" value="jdbc:mysql://ip:3306/databasename?autoReconnect=true&useSSL=false&nullCatalogMeansCurrent=true"/>

Classes inside persistence.xml will be ignored

I have a WildFly 24.0.1 here (on OpenJDK 11) and an EAR with the following structure:
MyPrj.ear
|- MyPrj_EJB.jar
`- MyPrj_Web.war
My persistence.xml is placed in MyPrj.ear/MyPrj_EJB.jar/META-INF/persistence.xml. I'm sure the persistence.xml isn't anywhere else.
The persistence.xml has the following contents:
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="myPrj">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>java:/myPrj</jta-data-source>
<class>my.prj.cfg.Config</class>
<properties>
<property name="eclipselink.query-results-cache" value="true"/>
<property name="eclipselink.cache.shared.default" value="true"/>
<property name="eclipselink.target-server" value="JBoss"/>
<property name="eclipselink.weaving" value="false"/>
<property name="jboss.as.jpa.providerModule" value="org.eclipse.persistence"/>
<!-- property name="eclipselink.logging.level" value="FINE" / -->
</properties>
</persistence-unit>
</persistence>
I'm sure that the persistence.xml can be read, because if I put an invalid character into the persistence-unit name (like a "/": <persistence-unit name="/myPrj">), I get the following error while deploying the application:
Caused by: java.lang.IllegalArgumentException: WFLYJPA0043: Persistence unit name (/myPrj) contains illegal '/' character
So I'm assuming that my persistence.xml is put in the right place and can be read successfully.
But it seems that my <class> definitions are ignored. Every time I deploy the EAR, I see this in the WildFly log file:
...
2022-07-04 18:51:15,694 INFO [org.jboss.as.jpa] (MSC service thread 1-4) WFLYJPA0002: Read persistence.xml for myPrj
2022-07-04 18:52:48,400 INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 141) WFLYJPA0003: Starting Persistence Unit Service 'MyPrj.ear/MyPrj_EJB.jar#myPrj'
...
2022-07-04 18:53:16,182 WARN [org.eclipse.persistence] (ServerService Thread Pool -- 184) session_manager_no_partition
...
If I start the application (open the web site) I get:
...
2022-07-04 18:57:45,366 INFO [org.eclipse.persistence] (default task-1) EclipseLink, version: Eclipse Persistence Services - 2.6.0.v20150309-bf26070
2022-07-04 18:57:45,444 INFO [org.eclipse.persistence.connection] (default task-1)
connecting(DatabaseLogin(
platform=>PostgreSQLPlatform
user name=> ""
connector=>JNDIConnector datasource name=>null
))
2022-07-04 18:57:45,444 INFO [org.eclipse.persistence.connection] (default task-1)
Connected: jdbc:postgresql://localhost:5432/myDB
User: myUser
Database: PostgreSQL Version: 9.2.24
Driver: PostgreSQL JDBC Driver Version: 42.2.23
2022-07-04 18:57:45,444 INFO [org.eclipse.persistence.connection] (default task-1)
connecting(DatabaseLogin(
platform=>PostgreSQLPlatform
user name=> ""
connector=>JNDIConnector datasource name=>null
))
2022-07-04 18:57:45,444 INFO [org.eclipse.persistence.connection] (default task-1)
Connected: jdbc:postgresql://localhost:5432/myDB
User: myUser
Database: PostgreSQL Version: 9.2.24
Driver: PostgreSQL JDBC Driver Version: 42.2.23
2022-07-04 18:57:45,446 INFO [org.eclipse.persistence.connection] (default task-1)
/vfs:/content/MyPrj.ear/MyPrj_EJB.jar/_myPrj login successful
2022-07-04 18:57:45,509 WARN [org.eclipse.persistence.metamodel] (default task-1) The collection of metamodel types is empty. Model classes may not have been found during entity search for Java SE and some Java EE container managed persistence units. Please verify that your entity classes are referenced in persistence.xml using either <class> elements or a global <exclude-unlisted-classes>false</exclude-unlisted-classes> element
2022-07-04 18:57:45,606 ERROR [org.jboss.as.ejb3.invocation] (default task-1)
WFLYEJB0034: Jakarta Enterprise Beans Invocation failed on component ConfigDAO for
method public my.prj.cfg.Config
my.prj.cfg.ConfigDAO.loadByKey(java.lang.String):
javax.ejb.EJBTransactionRolledbackException: An exception occurred while creating
a query in EntityManager:
Exception Description: Problem compiling [SELECT obj FROM Config obj WHERE
obj.key=:parKey].
[16, 22] The abstract schema type 'Config' is unknown.
[33, 40] The state field path 'obj.key' cannot be resolved to a valid type.
at org.jboss.as.ejb3#24.0.1.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:219)
Note also the WARN at 18:57:45,509 above: the collection of metamodel types is empty, which means the model classes were apparently not found during the entity search.
Using <exclude-unlisted-classes>false</exclude-unlisted-classes> instead of the <class> element makes no difference either (see the sketch below).
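For reference, that variant drops the <class> element in favour of a flag inside the persistence-unit; a minimal sketch based on the persistence.xml above:
<persistence-unit name="myPrj">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>java:/myPrj</jta-data-source>
    <!-- scan the archive for annotated entities instead of listing them -->
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
    <properties>
        ...
    </properties>
</persistence-unit>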
Here is the @PersistenceContext annotation of the EntityManager:
@PersistenceContext(unitName="myPrj")
private EntityManager em = null;
And the annotations of the Config entity:
@Entity(name="Config")
@Table(name = "Cfg")
public class Config implements Serializable {
    ...
    @Column(name="key")
    private String key = null;
    ...
}
On the same WildFly I have another EAR running (in a separate standalone instance) which also has JPA access to a database (also EclipseLink via persistence.xml) and works without any problems.
What can be wrong here?
Found it! After upgrading EclipseLink to V2.7.10 (store eclipselink.jar into .../wildfly-24.0.1.Final/modules/system/layers/base/org/eclipse/persistence/main/; see the module descriptor sketch below) I can deploy and start the application. Not sure which version of EclipseLink was used before that (the jar file name doesn't contain the version information ...)
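For reference, a sketch of the module descriptor that sits next to the jar in that directory; the resource-root must match the file name of the jar you drop in, and the exact module namespace version and dependency list vary by WildFly version, so treat this purely as an illustration:
<module xmlns="urn:jboss:module:1.9" name="org.eclipse.persistence">
    <resources>
        <!-- must match the name of the upgraded EclipseLink jar -->
        <resource-root path="eclipselink.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.persistence.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>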
Switching to EclipseLink 3.0.2 doesn't work because of this problem: Wildfly 21 with eclipselink 3.0 getting error as Persistence Provider not found

Configure a db2 datasource with Thorntail / Wildfly Swarm

Has anyone managed to configure a DB2 datasource with Thorntail / WildFly Swarm?
As far as I understand, as soon as I pull in the datasources fraction, the DB2 driver should be autodetected according to the documentation (https://docs.thorntail.io/2.3.0.Final/#auto-detecting-jdbc-drivers_thorntail).
So the only thing I should have to do is reference "ibmdb2" as the driver-name in my datasource, right?
pom.xml (using Thorntail 2.3.0.Final)
<dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>datasources</artifactId>
</dependency>
<dependency>
    <groupId>com.ibm.db2</groupId>
    <artifactId>db2jcc_license_cu</artifactId>
    <version>10.1</version>
</dependency>
<dependency>
    <groupId>com.ibm.db2</groupId>
    <artifactId>db2jcc4</artifactId>
    <version>4.22.29</version>
</dependency>
project-defaults.yml
swarm:
  context:
    path: /
  datasources:
    data-sources:
      MYDS:
        driver-name: ibmdb2
        connection-url: jdbc:db2://host:port/schema
        user-name: user
        password: password
Currently I get the following error on startup:
2019-05-02 09:07:52,747 INFO [org.wildfly.swarm.datasources] (main) THORN1003: Auto-detected JDBC driver for ibmdb2
2019-05-02 09:07:57,660 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 16) WFLYCTL0013: Operation ("add") failed - address: ([
("subsystem" => "datasources"),
("jdbc-driver" => "ibmdb2")
]) - failure description: "WFLYJCA0114: Failed to load datasource class: com.ibm.db2.jdbc.DB2XADataSource"
You found a bug in the JDBC driver autodetection code. The driver was (probably) autodetected, but it was wrongly configured. Specifically, this line of code sets the XA datasource class name to com.ibm.db2.jdbc.DB2XADataSource, which doesn't exist. (That's actually what your error message says, but I also confirmed it by looking into the JDBC driver JAR.) The correct class name is com.ibm.db2.jcc.DB2XADataSource. I filed THORN-2398 and submitted a PR with a fix.
I'm not sure if there's a simple workaround, because JDBC driver autodetection is performed after all configuration is applied. Perhaps the following hack might work. Define a new JDBC driver in project-defaults.yml like this:
thorntail:
  datasources:
    jdbc-drivers:
      mydb2:
        driver-module-name: com.ibm.db2jcc
        driver-xa-datasource-class-name: com.ibm.db2.jcc.DB2XADataSource
But keep everything else intact. That means there will be two JDBC drivers for DB2: the autodetected one (which will create the com.ibm.db2jcc module) and the second one you define, which piggybacks on the infrastructure created by the first. If that works, just change driver-name: ibmdb2 in your data source to driver-name: mydb2 (see the combined sketch below).
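Put together, the relevant part of project-defaults.yml would then look roughly like this (a sketch; the autodetected ibmdb2 driver and its module stay in place, only the data source switches to the new driver name):
thorntail:
  datasources:
    jdbc-drivers:
      mydb2:
        driver-module-name: com.ibm.db2jcc
        driver-xa-datasource-class-name: com.ibm.db2.jcc.DB2XADataSource
    data-sources:
      MYDS:
        driver-name: mydb2
        connection-url: jdbc:db2://host:port/schema
        user-name: user
        password: password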
If this doesn't work, you'll have to move off of JDBC driver autodetection for now, until the issue is fixed.

hibernate.hbm2ddl.auto value="update" issue with Hibernate 3.4 to 5.1 migration

I have an application presently running (without issue) on JBoss 6 that I am attempting to upgrade to run on WildFly 10.1. Much of this upgrade is going well. However, the upgrade from Hibernate 3.4 (on JBoss 6) to Hibernate 5.1 (on WildFly 10.1) is causing a few issues.
Specifically, in my persistence.xml, I include the following property.
<property name="hibernate.hbm2ddl.auto" value="update"/>
Please NOTE: I am NOT making any schema or other DB changes as part of the upgrade. Furthermore, I am pointing at the same database instance that has been successfully running under the JBoss 6/Hibernate 3.4 instance. Therefore, I am confident that inclusion of this property should have no actual work/update to do upon first run with the WildFly 10.1/Hibernate 5.1 version.
However, with this property included, Hibernate appears to 1) erroneously determine that it needs to make updates and 2) fail to apply them successfully. It results in the following stack trace:
Failed to start service jboss.persistenceunit."app.ear#PU": org.jboss.msc.service.StartException in service jboss.persistenceunit."PU": javax.persistence.PersistenceException: [PersistenceUnit: PU] Unable to build Hibernate SessionFactory
...
Caused by: javax.persistence.PersistenceException: [PersistenceUnit: PU] Unable to build Hibernate SessionFactory
...
Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Unable to execute schema management to JDBC target [create index company_id_index on APPROVER (COMPANY_ID)]
...
Caused by: org.postgresql.util.PSQLException: ERROR: relation "company_id_index" already exists
Again, the table and index in question already exist (as confirmed by the final error).
Is Hibernate now no longer case sensitive (COMPANY_ID_INDEX being different from company_id_index)?
If so, how can I configure it to behave as it used to (Postgres defaults all of these identifiers to lowercase...)?
TIA!
Doh! Face palm! I recently discovered that similar errors related to hbm2ddl index creation were also occurring with Hibernate 3.4/JBoss 6, just as I am now experiencing with Hibernate 5.1/WildFly 10.1; however, back then they were NOT preventing successful startup of the persistence module. Essentially, they were only subtly suppressed. I'm not sure whether it is an expected change between the Hibernate versions that these errors now do prevent startup in Hibernate 5.1/WildFly 10.1.
The underlying issue here turned out to be that index names must be unique across the entire schema in Postgres. So multiple entities that each have an FK to a COMPANY_ID column must each use a unique name for that index. Indexes are relations in Postgres, which is what drives the unique-across-the-schema requirement.
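A sketch of what that looks like at the mapping level with the JPA 2.1 @Index annotation (javax.persistence imports assumed; the Reviewer entity and the index names are illustrative, the point is simply that each index gets a name that is unique across the schema):
// Approver.java
@Entity
@Table(name = "APPROVER",
       indexes = @Index(name = "approver_company_id_idx", columnList = "COMPANY_ID"))
public class Approver implements Serializable { /* ... */ }
// Reviewer.java - another entity with its own COMPANY_ID column uses a different index name
@Entity
@Table(name = "REVIEWER",
       indexes = @Index(name = "reviewer_company_id_idx", columnList = "COMPANY_ID"))
public class Reviewer implements Serializable { /* ... */ }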
Thank you for the suggestions and apologies for the confusion.

OpenEJB + EclipseLink are not able to create tables on HSQL database

I have a vanilla Maven WAR project, using the Java EE web profile, that executes its unit/integration tests using OpenEJB. During OpenEJB start-up, instead of using the data source defined in jndi.properties, OpenEJB creates its own:
INFO - Auto-creating a Resource with id 'Default JDBC Database' of type 'DataSource for 'scmaccess-unit'.
INFO - Creating Resource(id=Default JDBC Database)
INFO - Configuring Service(id=Default Unmanaged JDBC Database, type=Resource, provider-id=Default Unmanaged JDBC Database)
INFO - Auto-creating a Resource with id 'Default Unmanaged JDBC Database' of type 'DataSource for 'scmaccess-unit'.
INFO - Creating Resource(id=Default Unmanaged JDBC Database)
INFO - Adjusting PersistenceUnit scmaccess-unit <jta-data-source> to Resource ID 'Default JDBC Database' from 'jdbc/scmaccess'
INFO - Adjusting PersistenceUnit scmaccess-unit <non-jta-data-source> to Resource ID 'Default Unmanaged JDBC Database' from 'null'
And then, further below, when it's time to create the tables - as per the create-drop strategy defined in the app's persistence.xml file - I see several errors like this:
(...) Internal Exception: java.sql.SQLSyntaxErrorException: type not found or user lacks privilege: NUMBER
Error Code: -5509
The jndi.properties file:
##
# Context factory to use during tests
##
java.naming.factory.initial=org.apache.openejb.client.LocalInitialContextFactory
##
# The DataSource to use for testing
##
scmDatabase=new://Resource?type=DataSource
scmDatabase.JdbcDriver=org.hsqldb.jdbcDriver
scmDatabase.JdbcUrl=jdbc:hsqldb:mem:scmaccess
##
# Override persistence unit properties
##
scmaccess-unit.eclipselink.jdbc.batch-writing=JDBC
scmaccess-unit.eclipselink.target-database=Auto
scmaccess-unit.eclipselink.ddl-generation=drop-and-create-tables
scmaccess-unit.eclipselink.ddl-generation.output-mode=database
And, the test case:
public class PersistenceTest extends TestCase {

    @EJB
    private GroupManager ejb;

    @Resource
    private UserTransaction transaction;

    @PersistenceContext
    private EntityManager emanager;

    public void setUp() throws Exception {
        EJBContainer.createEJBContainer().getContext().bind("inject", this);
    }

    public void test() throws Exception {
        transaction.begin();
        try {
            Group g = new Group("Saas Automation");
            emanager.persist(g);
        } finally {
            transaction.commit();
        }
    }
}
It looks like EclipseLink is trying to create a column with the type NUMBER, and that type does not exist in HSQLDB. Did you specify that type in your mappings? If so, fix it there.
Otherwise it might help to add
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
<property name="eclipselink.create-ddl-jdbc-file-name" value="createDDL_ddlGeneration.jdbc"/>
<property name="eclipselink.drop-ddl-jdbc-file-name" value="dropDDL_ddlGeneration.jdbc"/>
<property name="eclipselink.ddl-generation.output-mode" value="both"/>
to your persistence.xml so you can see exactly which CREATE TABLE statements are generated. If EclipseLink is using NUMBER on its own for certain columns, you can tell it to use something else by putting the following annotation on the corresponding fields.
@Column(columnDefinition="NUMERIC")
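In context, on a field that would otherwise be emitted as NUMBER (the field and column names here are illustrative):
// Sketch: force an HSQLDB-compatible SQL type instead of the one EclipseLink would generate.
@Column(name = "AMOUNT", columnDefinition = "NUMERIC(19,2)")
private java.math.BigDecimal amount;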

WebSphere Application Server V8.0.0.5 JPA Unable to persist

I have code that works perfectly on WAS 7 but fails when I run it on WAS 8.0.0.5. I am using JPA 2.0 with OpenJPA as my provider. Calling persist on my EntityManager throws a nested exception. Has anyone ever managed to write a JPA program on WAS 8.0.0.5?
Here is the exception:
WTRN0074E: Exception caught from before_completion synchronization operation: org.apache.openjpa.persistence.PersistenceException: DB2 SQL Error: SQLCODE=-204, SQLSTATE=42704, SQLERRMC=.OPENJPA_SEQUENCE_TABLE, DRIVER=3.58.81 {prepstmnt -1559269434 SELECT SEQUENCE_VALUE FROM .OPENJPA_SEQUENCE_TABLE WHERE ID = ? FOR READ ONLY WITH RS USE AND KEEP UPDATE LOCKS [params=?]}
SQLCODE=-204 indicates that something is missing. The log keeps printing THAKHANI.OPENJPA_SEQUENCE_TABLE, which makes me think that the table itself is missing. You could also check that the DB2 user JPA is using has permissions to create tables and run SELECT statements on them.
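If the sequence table really is missing and the user is allowed to create tables, one commonly used OpenJPA property (suggested here as an option, not something from the original configuration) lets OpenJPA build the missing schema objects itself:
<!-- Lets OpenJPA create missing tables such as OPENJPA_SEQUENCE_TABLE at startup
     (requires CREATE TABLE privileges for the connecting user). -->
<property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)"/>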
I managed to resolve the problem by selecting Identity as my primary key generation mechanism when generating entities from tables. I also added the following to my persistence.xml:
<properties>
    <!-- OpenJPA specific properties -->
    <property name="openjpa.TransactionMode" value="managed"/>
    <property name="openjpa.ConnectionFactoryMode" value="managed"/>
    <property name="openjpa.jdbc.DBDictionary" value="db2"/>
    <property name="openjpa.jdbc.Schema" value="<SchemaName>"/>
</properties>
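For reference, identity-based key generation on an entity's primary key looks like this (a sketch; field and column names are illustrative):
// Let DB2 assign the key via an identity column instead of the
// OPENJPA_SEQUENCE_TABLE-backed value generator.
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "ID")
private long id;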