Fuse 7.0 JPA persistence.xml schema is waiting for namespace handlers - jbossfuse

I've tried all the namespace handlers in the JPA registry (persistence_1_0.xsd, persistence_2_0.xsd, persistence_2_1.xsd). None of them worked; each threw the error below.
For 2.0 and 2.1:
is waiting for namespace handlers [http://xmlns.jcp.org/xml/ns/persistence]
For 1.0:
is waiting for namespace handlers [http://java.sun.com/xml/ns/persistence]
Can anyone tell me the cause of this issue?
Many thanks in advance.
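For reference, a minimal JPA 2.1 persistence.xml root element declaring the namespace from the error above looks like this (a sketch, with unit contents omitted):
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
             version="2.1">
  <!-- persistence-unit definitions go here -->
</persistence>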

If you check:
karaf#root()> feature:info jpa
Feature jpa 2.7.2
Description:
OSGi Persistence Container
Details:
JPA implementation provided by Apache Aries JPA 2.x. NB: this feature doesn't provide the JPA engine, you have to install one by yourself (OpenJPA for instance)
Feature has no configuration
Feature has no configuration files
Feature has no dependencies.
Feature contains followed bundles:
mvn:org.apache.aries.jpa.javax.persistence/javax.persistence_2.1/2.7.2
mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1.redhat-2 (overriden from mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1)
mvn:org.osgi/org.osgi.service.jdbc/1.0.0
mvn:org.apache.felix/org.apache.felix.coordinator/1.0.2 start-level=30
mvn:org.apache.aries.jpa/org.apache.aries.jpa.api/2.7.2 start-level=30
mvn:org.apache.aries.jpa/org.apache.aries.jpa.container/2.7.2 start-level=30
mvn:org.apache.aries.jpa/org.apache.aries.jpa.support/2.7.2 start-level=30
Feature contains followed conditionals:
Conditional(aries-blueprint) has no configuration
Conditional(aries-blueprint) has no configuration files
Conditional(aries-blueprint) has no dependencies.
Conditional(aries-blueprint) contains followed bundles:
mvn:org.apache.aries.jpa/org.apache.aries.jpa.blueprint/2.7.2 start-level=30
You'll see the note: "NB: this feature doesn't provide the JPA engine, you have to install one by yourself (OpenJPA for instance)". The description seems old, but the point stands: you need an actual JPA provider, for example:
karaf#root()> feature:info hibernate
Feature hibernate 5.3.10.Final-redhat-00001
Description:
Hibernate JPA engine support
Feature has no configuration
Feature has no configuration files
Feature depends on:
wrap 0.0.0
hibernate-orm 5.3.10.Final-redhat-00001
Feature contains followed bundles:
mvn:net.bytebuddy/byte-buddy/1.9.5.redhat-00001 (overriden from mvn:net.bytebuddy/byte-buddy/1.9.5.redhat-00001)
Feature has no conditionals.
(The bundle versions shown are from a Fuse release newer than 7.0.)
So please additionally install the hibernate feature:
karaf#root()> feature:install hibernate
karaf#root()> la -l|grep hibernate
249 │ Active │ 80 │ 5.0.4.Final-redhat-00001 │ mvn:org.hibernate.common/hibernate-commons-annotations/5.0.4.Final-redhat-00001
250 │ Active │ 80 │ 5.3.10.Final-redhat-00001 │ mvn:org.hibernate/hibernate-core/5.3.10.Final-redhat-00001
251 │ Active │ 80 │ 5.3.10.Final-redhat-00001 │ mvn:org.hibernate/hibernate-osgi/5.3.10.Final-redhat-00001
EDIT 2019-11-07:
I checked (on the upcoming Fuse 7.5, but this should be valid for 7.0 as well) and found the problem you have.
If you check:
karaf#root()> ls PersistenceProvider
[javax.persistence.spi.PersistenceProvider]
-------------------------------------------
javax.persistence.provider = org.hibernate.jpa.HibernatePersistenceProvider
service.bundleid = 250
service.id = 468
service.scope = bundle
Provided by :
hibernate-osgi (250)
Used by:
Apache Aries JPA Specification 2.1 API (244)
Camel Content-Based Router Example [EXAM-PREP] (256)
you'll see there's an org.hibernate.jpa.HibernatePersistenceProvider JPA provider registered by Hibernate.
However, you've added (in META-INF/persistence.xml):
<provider>org.hibernate.ejb.HibernatePersistence</provider>
You should either remove this provider element or change it to org.hibernate.jpa.HibernatePersistenceProvider, because its value affects the OSGi filter created by org.apache.aries.jpa.container.impl.PersistenceProviderTracker#createFilter for your bundle. That filter never matches the registered provider, which is why you didn't have an EntityManagerFactory (EMF) registered.
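As a sketch, the corrected persistence-unit might look like this (the unit name "camel" and the properties are taken from the service output shown below; the transaction-type is an assumption, so adjust to your setup):
<persistence-unit name="camel" transaction-type="RESOURCE_LOCAL">
  <!-- Must match the provider actually registered in the OSGi service registry -->
  <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
  <properties>
    <property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver"/>
    <property name="javax.persistence.jdbc.url" value="jdbc:derby:memory:order;create=true"/>
    <property name="hibernate.dialect" value="org.hibernate.dialect.DerbyDialect"/>
    <property name="hibernate.hbm2ddl.auto" value="create"/>
  </properties>
</persistence-unit>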
With this change, I found it works:
karaf#root()> ls EntityManagerFactory
[javax.persistence.EntityManagerFactory]
----------------------------------------
hibernate.connection.pool_size = 25
hibernate.dialect = org.hibernate.dialect.DerbyDialect
hibernate.hbm2ddl.auto = create
hibernate.show_sql = true
javax.persistence.jdbc.driver = org.apache.derby.jdbc.EmbeddedDriver
javax.persistence.jdbc.url = jdbc:derby:memory:order;create=true
javax.persistence.jdbc.user = sa
osgi.unit.name = camel
osgi.unit.provider = org.hibernate.jpa.HibernatePersistenceProvider
osgi.unit.version = 4.1.4
service.bundleid = 256
service.id = 501
service.scope = singleton
Provided by :
Camel Content-Based Router Example [EXAM-PREP] (256)
Used by:
Camel Content-Based Router Example [EXAM-PREP] (256)

Related

started geode spring boot and save to remote region but failed to start bean gemfireClusterSchemaObjectInitializer

With a simple client app, I make an object and an object repository, connect to a Geode cluster, and then run a @Bean ApplicationRunner to put some data into a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner StartedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
The answer to "Problem creating region and persist region to disk Geode Gemfire Spring Boot" helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the remote cluster appear to be on localhost. If I remove it altogether, then the put works.
If I remove just useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction
In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. a Spring [Boot] Data GemFire/Geode application) to be pushed from the client-side to the cluster (of GemFire/Geode servers).
I say "enables" because it depends on the client-side configuration metadata (i.e. the Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you define with a bean definition (in XML, or in JavaConfig with @Bean, etc.). Implicit configuration is auto-configuration, or using SDG annotations like @EnableEntityDefinedRegions or @EnableCachingDefinedRegions, etc., as sketched below.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls (by default) as a fallback.
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging, as opposed to using Gfsh. You can do all sorts of interesting combinations: a Gfsh Locator with Spring servers, a Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, Spring is generally not on the servers' classpath, and the "provided, canned" Functions I referred to above are not present and automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or 2 about ;-)) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have full installations of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode (and especially) Pivotal GemFire distributions are not. The JARs are, but the full distro isn't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.

hibernate.hbm2ddl.auto value="update" issue with Hibernate 3.4 to 5.1 migration

I have an application presently running (without issue) on JBoss 6 that I am attempting to upgrade to run on WildFly 10.1. Much of this upgrade is going well. However, the upgrade from Hibernate 3.4 (on JBoss 6) to Hibernate 5.1 (on WildFly 10.1) is causing a few issues.
Specifically, in my persistence.xml, I include the following property.
<property name="hibernate.hbm2ddl.auto" value="update"/>
Please NOTE: I am NOT making any schema or other DB changes as part of the upgrade. Furthermore, I am pointing at the same database instance that has been successfully running under the JBoss 6/Hibernate 3.4 instance. Therefore, I am confident that inclusion of this property should have no actual work/update to do upon first run with the WildFly 10.1/Hibernate 5.1 version.
However, including this property appears to 1) erroneously determine that updates are needed and 2) fail to apply them successfully. It results in the following stack trace:
Failed to start service jboss.persistenceunit."app.ear#PU": org.jboss.msc.service.StartException in service jboss.persistenceunit."PU": javax.persistence.PersistenceException: [PersistenceUnit: PU] Unable to build Hibernate SessionFactory
...
Caused by: javax.persistence.PersistenceException: [PersistenceUnit: PU] Unable to build Hibernate SessionFactory
...
Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Unable to execute schema management to JDBC target [create index company_id_index on APPROVER (COMPANY_ID)]
...
Caused by: org.postgresql.util.PSQLException: ERROR: relation "company_id_index" already exists
Again, the table and index in question already exist (as confirmed by the final error).
Is Hibernate's index-name comparison now case-sensitive (treating COMPANY_ID_INDEX as different from company_id_index)?
If so, how can I configure it to be case-insensitive as it used to be (Postgres folds all of these identifiers to lowercase...)?
TIA!
Doh! Face palm! I recently discovered that similar hbm2ddl index-creation errors were also occurring with Hibernate 3.4/JBoss 6, just as I am now experiencing with Hibernate 5.1/WildFly 10.1; however, there they were NOT preventing successful startup of the persistence module. Essentially, they were being quietly suppressed. I'm not sure whether it is an expected change between these Hibernate versions that such errors now prevent startup in Hibernate 5.1/WildFly 10.1.
The underlying issue here turned out to be that index names must be unique across the entire schema in Postgres. So multiple entities that each have an FK to a COMPANY_ID column must each use a unique name for that index. Indices are relations in Postgres (which is what drives the unique-across-schema requirement).
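As a minimal sketch (the entity, table, and index names here are hypothetical), giving each table's COMPANY_ID index its own schema-unique name avoids the collision:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Table;

@Entity
@Table(name = "APPROVER",
       indexes = @Index(name = "approver_company_id_idx", columnList = "COMPANY_ID"))
public class Approver {
    @Id @GeneratedValue
    private Long id;

    @Column(name = "COMPANY_ID")
    private Long companyId;
}

// In its own file: a second entity indexing its own COMPANY_ID column needs a
// different index name, since Postgres index names share the schema-wide
// relation namespace.
@Entity
@Table(name = "REVIEWER",
       indexes = @Index(name = "reviewer_company_id_idx", columnList = "COMPANY_ID"))
public class Reviewer {
    @Id @GeneratedValue
    private Long id;

    @Column(name = "COMPANY_ID")
    private Long companyId;
}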
Thank you for the suggestions and apologies for the confusion.

Configure Rexster to use Titan

I'm trying to configure Rexster to use Titan by modifying the rexster.xml file in Rexster.
But when I open
http://localhost:8182/graphs/mygraph
in my browser I get a message saying:
{"message":"Graph [mygraph] could not be found"}.
This is the part of the rexster.xml file I've modified:
<graph>
<graph-name>mygraph</graph-name>
<graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
<graph-location>C:/titan-server-jre6-0.4.4/bin/mygraph</graph-location>
<graph-read-only>false</graph-read-only>
<properties>
<storage.backend>local</storage.backend>
<buffer-size>100</buffer-size>
</properties>
<extensions>
<allows>
<allow>tp:gremlin</allow>
</allows>
</extensions>
</graph>
I've added all the jar files from Titan's lib folder into Rexster under config/ext/titan, and I've created a graph by using the Gremlin shell in Titan:
g = TitanFactory.open('mygraph');
g.createKeyIndex('name', Vertex.class);
v = g.addVertex(null);
v.setProperty('name','x');
v.setProperty('type','person');
v.setProperty('age',20);
v1 = g.addVertex(null);
v1.setProperty('name','y');
v1.setProperty('type','person');
v1.setProperty('age',22);
e = g.addEdge(null, v, v1, 'knows');
e1 = g.addEdge(null, v1, v, 'knows');
g.shutdown();
What am I missing?
[UPDATE]:
I had placed the jar files from Titan in the wrong directory in Rexster; they are now in the right place. But when I now run the Rexster server I get the following output:
[INFO] Application - .:Welcome to Rexster:.
[INFO] RexsterProperties - Using [C:\rexster-server-2.5.0\config\rexster.xml] as configuration source.
[INFO] Application - Rexster is watching [C:\rexster-server-2.5.0\config\rexster.xml] for change.
Exception in thread "main" java.lang.AbstractMethodError: com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(Lcom/tinkerpop/rexster/config/GraphConfigurationContext;)Lcom/tinkerpop/blueprints/Graph;
at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:124)
at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
at com.tinkerpop.rexster.Application.<init>(Application.java:97)
at com.tinkerpop.rexster.Application.main(Application.java:189)
How can I resolve this?
Problem solved by changing the version of the Rexster server. Before, I was using Rexster 2.5.0; I am now using Rexster 2.4.0 with Titan 0.4.4. The AbstractMethodError above indicates a binary incompatibility: Titan 0.4.4's TitanGraphConfiguration was compiled against an older Rexster configuration API than the one shipped in 2.5.0, so matching the versions resolves it.

Karaf 3.0.x config:update command does not create .cfg file in /etc

I am using Karaf 3.0.1 with my bundle (https://github.com/johanlelan/camel-cxfrs-blueprint-example). I want to manage properties at runtime, but I see that config:update does not create a file in /etc. Why?
<cm:property-placeholder persistent-id="org.apache.camel.examples.cxfrs.blueprint"
update-strategy="reload">
<!-- list some properties for this test -->
<cm:default-properties>
<cm:property name="cxf.application.in"
value="cxfrs:bean:rest.endpoint?throwExceptionOnFailure=false&bindingStyle=SimpleConsumer&loggingFeatureEnabled=true"/>
<cm:property name="common.tenant.in" value="direct-vm:common.tenant.in"/>
<cm:property name="common.authentication.in" value="direct-vm:common.authentication.in"/>
<cm:property name="application.put.in" value="direct-vm:application.putById"/>
<cm:property name="application.post.in"
value="direct-vm:application.postApplications"/>
<cm:property name="log.trace.level" value="INFO"/>
</cm:default-properties>
</cm:property-placeholder>
In Karaf I try to modify an endpoint URL:
karaf#root()> config:edit org.apache.camel.examples.cxfrs.blueprint
karaf#root()> config:property-set common.tenant.in direct-vm:test
karaf#root()> config:property-list
service.pid = org.apache.camel.examples.cxfrs.blueprint
common.tenant.in = direct-vm:test
felix.fileinstall.filename = file:/F:/travail/servers/karaf-lan/etc/org.apache.camel.examples.cxfrs.blueprint.cfg
karaf#root()> config:update
karaf#root()>
To be precise, my bundle is updated after config:update, but no file exists in /etc... I think this worked in Karaf 2.3.5.
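For reference, this is roughly what I would expect config:update to write to etc/org.apache.camel.examples.cxfrs.blueprint.cfg (a sketch of the expected outcome, not actual output):
common.tenant.in = direct-vm:test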
Configurations are persisted by the ConfigurationAdmin service. If you are using Karaf, it uses the implementation from Felix ConfigAdmin [1]. By default Karaf configures ConfigAdmin to store files in its local bundle storage area under /data, but that can be changed by editing the felix.cm.dir property.
Also, the support for the .cfg files comes from Felix FileInstall [2].
[1] http://felix.apache.org/documentation/subprojects/apache-felix-config-admin.html
[2] http://felix.apache.org/site/apache-felix-file-install.html
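For example, a minimal sketch of relocating ConfigAdmin's storage (the file and path are assumptions for a default Karaf 3.x layout, where framework properties live in etc/config.properties):
# etc/config.properties -- assumption: default Karaf 3.x layout
# Persist ConfigurationAdmin configurations under etc/ instead of the
# default location in the bundle storage area under data/
felix.cm.dir=${karaf.base}/etc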
This is a known issue in Karaf 3.0.1.
You can use Apache Karaf 3.0.2, where this bug is fixed.

JBoss6 AS EJB3StandaloneBootstrap & EJB3StandaloneDeployer

Currently I'm migrating a JBoss 4 project to JBoss 6. I miss substitutes for the EJB3StandaloneDeployer and EJB3StandaloneBootstrap classes.
Are there any new sources which deliver the functionality of these two classes?
THX
My guess is that EJB3StandaloneDeployer and EJB3StandaloneBootstrap are replaced by the standard embeddable EJBContainer API (javax.ejb.embeddable.EJBContainer, introduced in EJB 3.1). Here is an example:
import javax.ejb.embeddable.EJBContainer;
import javax.naming.Context;

// Instantiate an embeddable EJB container and search the
// JVM class path for eligible EJB modules or directories
EJBContainer ejbC = EJBContainer.createEJBContainer();

// Get a naming context for session bean lookups
// (the embeddable API method is getContext(), not getNamingContext())
Context ctx = ejbC.getContext();

// Retrieve a reference to the AccountBean using a
// portable global JNDI name
AccountBean ab = (AccountBean) ctx.lookup("java:global/account/AccountBean");

// Test the account
Account a1 = ab.createAccount(1234);
...

// Shutdown the embeddable container
ejbC.close();
JBoss also started the Arquillian project, which you might find interesting.
See also
TOTD #128: EJBContainer.createEJBContainer: Embedded EJB using GlassFish v3
The Arquillian project
The Arquillian Community Space