I am building a new application using EclipseLink for the first time.
Everything was going okay until I added an entity that uses JSR310 Instant for a timestamp column.
So I created a converter class and mapped it to the associated field like so:
@Convert(converter = JSR310InstantTypeConverter.class)
private Instant pwdChangeCodeExpiresOn = null;
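The converter itself is just a standard JPA 2.1 AttributeConverter; a simplified sketch along these lines (the exact body shouldn't matter for the error):

import java.sql.Timestamp;
import java.time.Instant;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Maps java.time.Instant to a java.sql.Timestamp database column.
@Converter
public class JSR310InstantTypeConverter implements AttributeConverter<Instant, Timestamp> {

    @Override
    public Timestamp convertToDatabaseColumn(Instant attribute) {
        return attribute == null ? null : Timestamp.from(attribute);
    }

    @Override
    public Instant convertToEntityAttribute(Timestamp dbData) {
        return dbData == null ? null : dbData.toInstant();
    }
}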
However, since I added that converter, the application has started throwing the following exception:
SEVERE: Servlet.service() for servlet [APIJerseyServlet] in context with path [/Sclera] threw exception [org.glassfish.jersey.server.ContainerException: java.lang.ExceptionInInitializerError] with root cause
Local Exception Stack:
Exception [EclipseLink-7351] (Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.ValidationException
Exception Description: The converter class [com.sclera.utils.JSR310InstantTypeConverter] specified on the mapping attribute [pwdChangeCodeExpiresOn] from the class [com.sclera.entity.Admin] was not found. Please ensure the converter class name is correct and exists with the persistence unit definition.
at org.eclipse.persistence.exceptions.ValidationException.converterClassNotFound(ValidationException.java:2317)
at org.eclipse.persistence.internal.jpa.metadata.converters.ConvertMetadata.process(ConvertMetadata.java:248)
This starts happening after a code change (when Eclipse restarts the server). I have to stop and start (and/or restart) the server manually a few times until it finally starts working again. Then it works fine until a code change or two later, when it starts throwing the exception again.
This is an enormous pain. Does anyone know the cause and how to fix it?
Okay, solution found. Adding the converter class to the persistence.xml file - as suggested by the error message - seems to have resolved the problem.
<persistence-unit name="example" transaction-type="RESOURCE_LOCAL">
....
<class>com.example.utils.JSR310InstantTypeConverter</class>
...
</persistence-unit>
I should have tried that earlier. The fact that it was working some of the time without this made me think it wouldn't make a difference.
I am doing a native build of my Quarkus app and am hitting the UnsupportedFeatureException: Detected an instance of Random/SplittableRandom error on a few Vert.x Redis Client classes.
I am building using the docker container method:
./mvnw package -Dnative -Dquarkus.native.container-build=true
I have fixed some of the exceptions by including the following in the pom.xml:
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient
</quarkus.native.additional-build-args>
but am stuck on this one:
Fatal error: com.oracle.graal.pointsto.util.AnalysisError$ParsingError: Error encountered while parsing
io.vertx.redis.client.impl.RedisClusterConnection.send(io.vertx.redis.client.Request)
Parsing context:
at io.vertx.redis.client.impl.RedisClusterConnection.send(RedisClusterConnection.java:117)
at io.vertx.redis.client.impl.BaseRedisClient.lambda$send$1(BaseRedisClient.java:45)
at io.vertx.redis.client.impl.BaseRedisClient$$Lambda$1711/0x00000007c1ea57e8.apply(Unknown Source)
I have tried adding
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisClusterConnection
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.BaseRedisClient
and even
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection.send(io.vertx.redis.client.Request)
but the error persists.
I am fairly new to Java and very new to native building / GraalVM etc.
Can anyone shed any light on what class I should add, please?
Thanks,
Murray
I believe we can propose a change to the Vert.x Redis client to avoid the SplittableRandom use. The randomness is there mostly to share the load across nodes; it is not used for any security-related features. For this reason, a proposal to use round-robin selection instead would probably make more sense as a solution to this issue.
Ok, this seems to fix it:
EDIT: No it doesn't. See below.
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
The full profiles section in the pom.xml looks like this, for anyone else new to all this.
<profiles>
<profile>
<id>native</id>
<activation>
<property>
<name>native</name>
</property>
</activation>
<properties>
<skipITs>false</skipITs>
<quarkus.package.type>native</quarkus.package.type>
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
</properties>
</profile>
</profiles>
I have different errors now, so still not building, but at least this seems to fix that specific issue.
EDIT: The problem persists...
If I build with the pom as described above, ie:
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
I now get:
Error: Classes that should be initialized at run time got initialized during image building:
io.vertx.redis.client.impl.RedisReplicationConnection
the class was requested to be initialized at run time
(from command line with 'io.vertx.redis.client.impl.RedisReplicationConnection').
So, I guess that is not the right class after all.
If I remove that class from the list I revert to the Random exception.
(Showing more detail)
[1/7] Initializing... (3.7s @ 0.10GB)
Version info: 'GraalVM 22.3.1.0-Final Java 17 Mandrel Distribution'
Java version info: '17.0.6+10'
C compiler: gcc (linux, x86_64, 11.3.0)
Garbage collector: Serial GC
4 user-specific feature(s)
- io.quarkus.runner.Feature: Auto-generated class by Quarkus from the existing extensions
- io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [org.jboss.threads] categories
- io.quarkus.runtime.graal.ResourcesFeature: Register each line in META-INF/quarkus-native-resources.txt as a resource on Substrate VM
- io.quarkus.websockets.client.runtime.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [io.undertow.websockets] categories
[2/7] Performing analysis... [*] (14.1s @ 3.37GB)
12,032 (89.58%) of 13,432 classes reachable
17,778 (59.65%) of 29,803 fields reachable
61,621 (57.16%) of 107,809 methods reachable
541 classes, 150 fields, and 2,655 methods registered for reflection
Fatal error: com.oracle.graal.pointsto.util.AnalysisError$ParsingError: Error encountered while parsing io.vertx.redis.client.impl.RedisReplicationConnection.send(io.vertx.redis.client.Request)
Parsing context:
at io.vertx.redis.client.impl.RedisReplicationConnection.send(RedisReplicationConnection.java:111)
at io.vertx.redis.client.RedisConnection.send(RedisConnection.java:83)
at io.vertx.redis.client.impl.RedisReplicationClient.getNodes(RedisReplicationClient.java:183)
at io.vertx.redis.client.impl.RedisReplicationClient.lambda$connect$4(RedisReplicationClient.java:126)
at io.vertx.redis.client.impl.RedisReplicationClient$$Lambda$2203/0x00000007c1630f60.handle(Unknown Source)
at io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91)
{etc}
I am beginning to think I have an error / issue in my code where I am using the Vert.x Redis Client. I am trying to narrow it down by trial and error.
Any other suggestions are most welcome.
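One thing I have not tried yet, which might help narrow it down, is GraalVM's --trace-object-instantiation option; as I understand it, it reports where an instance of the named class gets created during the image build. Passing it through the same property should look something like:

./mvnw package -Dnative -Dquarkus.native.container-build=true \
    "-Dquarkus.native.additional-build-args=--trace-object-instantiation=java.util.Random"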
I work on quite a few services with similar architectures, with small differences. They all use Java 8, Maven, Spring Boot, and Jersey.
I normally debug them in Eclipse (currently on 2021-06) using "Run As->SpringBoot". This works perfectly fine. I can also run them from a command line using "mvn spring-boot:run", but that's just an academic exercise, because I prefer to run them from Eclipse.
When I run it from mvn, it successfully starts up, and I can hit listener endpoints (test case is actuator/info right now) with no problem.
When I run it from Eclipse, I get the following mystifying error:
BeanCreationException: Error creating bean with name 'mbeanExporter' defined in class path resource [org/springframework/boot/autoconfigure/jmx/JmxAutoConfiguration.class]: Bean instantiation via factory method failed;
  nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.jmx.export.annotation.AnnotationMBeanExporter]: Factory method 'mbeanExporter' threw exception;
  nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mbeanServer' defined in class path resource [org/springframework/boot/autoconfigure/jmx/JmxAutoConfiguration.class]: Bean instantiation via factory method failed;
  nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [javax.management.MBeanServer]: Factory method 'mbeanServer' threw exception;
  nested exception is org.springframework.jmx.MBeanServerNotFoundException: Could not access WebSphere's AdminServiceFactory.getMBeanFactory/getMBeanServer method;
  nested exception is java.lang.NullPointerException
Notice the last couple of phrases in that.
This service uses an EJB client class that is configured to connect to WebSphere EJBs. One side effect of this is that JMX is not available, so I have to set "spring.jmx.enabled=false".
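(For what it's worth, the most launcher-independent way I know of to pin that property is to set it as a default in the main class, rather than relying on the run configuration; a sketch, assuming a standard Spring Boot entry point - the class name here is made up:)

import java.util.Collections;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyServiceApplication {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(MyServiceApplication.class);
        // Keep JMX disabled no matter how the app is launched (Eclipse, mvn, executable jar).
        app.setDefaultProperties(Collections.singletonMap("spring.jmx.enabled", "false"));
        app.run(args);
    }
}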
Note that I went to the trouble of storing the log file for both runs, and I painstakingly compared them, verifying that they were logging the same information (varying by timestamps). That stacktrace above is the first place where they truly vary.
Curiously, although the Eclipse run shows this error, and listeners do not respond, it doesn't terminate the startup attempt. The service just sits there, sort of brain-dead.
I'm sure what I've provided isn't enough information, but I'm not sure what else would be useful information.
Believe me, I know this question has been asked many times and has gotten an answer many times, and these answers seem to have worked for some users. I've spent many hours trying the various proposed solutions and, while they work on Linux (Ubuntu), they seem to have no effect on Windows (Windows 10 Home with jdk1.8.0_161). The web application uses EclipseLink 2.5.0 for persistence.
I've tried including the mysql-connector-java-5.1.46-bin.jar file in the WAR archive (WEB-INF/lib; using the Deployment Assembly screen in Eclipse), copying it to the payara5/glassfish/lib folder, as well as the payara5/glassfish/domains/domain1/lib/ and payara5/glassfish/domains/domain1/lib/applibs folders. I also tried specifying the library when deploying the web application, i.e., putting mysql-connector-java-5.1.46-bin.jar as the value in the library field. I updated the CLASSPATH environment variable with the path to the JAR file. Every time, the server was restarted. None of these actions have any effect. Note that they did work on Linux Ubuntu.
See below for the well-known exception trace:
Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.7.0.v20170811-d680af5): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/rom
Error Code: 0
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:326)
at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:138)
at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:170)
at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.setOrDetectDatasource(DatabaseSessionImpl.java:228)
at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:804)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:254)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:757)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:216)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:324)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:348)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:311)
...
Any thoughts would be greatly appreciated.
UPDATE: As a sanity check (got the idea thanks to @Abhi) I added the following lines:
try {
    System.out.println("JDBC driver: " + Class.forName("com.mysql.jdbc.Driver"));
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}
This correctly prints the following line (without throwing an exception):
JDBC driver: class com.mysql.jdbc.Driver
But it does nothing to solve the problem. In other words, the driver seems to be loadable, but somehow EclipseLink is not able to find it (?)
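Another check that might be worth adding (I haven't pursued this further yet) is listing what DriverManager itself can see from the web application's class loader, since the "No suitable driver" message comes from DriverManager rather than from class loading:

// Lists the drivers currently registered with DriverManager, as visible to this class loader.
java.util.Enumeration<java.sql.Driver> drivers = java.sql.DriverManager.getDrivers();
while (drivers.hasMoreElements()) {
    System.out.println("Registered JDBC driver: " + drivers.nextElement().getClass().getName());
}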
Looks like I'm able to answer my own question. I asked the exact same question on the Payara Forum and was recommended to define a data source instead of using the driver directly (@Chris pointed in this direction as well). A data source is likely the best way to go anyway, but I wanted to avoid the complexity and use the simplest setup... which clearly didn't work.
For reference, you can find the working setup below:
In Payara 5, go to JDBC > JDBC Connection Pools > New: enter a pool name, select javax.sql.DataSource as the resource type, and MySQL as the vendor. On step 2, com.mysql.jdbc.jdbc2.optional.MysqlDataSource should be preselected as the Datasource Classname. Fill out the Username and Password properties (e.g., root, changeit) under the Additional Properties header. Select Finish. On the page for the newly created connection pool, select Ping to make sure it was set up correctly.
In your persistence.xml file, make sure the persistence-unit element starts as follows:
<persistence-unit name="ROM" transaction-type="JTA">
<jta-data-source>java:global/<connection pool name></jta-data-source>
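(With JTA, the EntityManager is then container-managed; in code it is injected rather than created by hand, along these lines:)

// Injected by the container; "ROM" matches the persistence-unit name above.
@PersistenceContext(unitName = "ROM")
private EntityManager entityManager;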
Create a web.xml file (this may also be done using Java Annotations):
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
version="3.1">
<data-source>
<name>java:global/<connection pool name></name>
<class-name>com.mysql.jdbc.jdbc2.optional.MysqlDataSource</class-name>
<server-name>[host name, e.g., localhost]</server-name>
<port-number>3306</port-number>
<database-name>[db name]</database-name>
<user>[username, e.g., root]</user>
<password>[password]</password>
</data-source>
</web-app>
This configuration worked for me at least. Hoping this will help someone else down the road. Note that there are various useful settings for a connection pool - see e.g., here for more options.
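As an aside, the "Java Annotations" route mentioned above would look roughly like this (a sketch; the placeholder values mirror the web.xml version):

import javax.annotation.sql.DataSourceDefinition;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// Equivalent of the <data-source> element in web.xml, declared via annotation.
@DataSourceDefinition(
        name = "java:global/<connection pool name>",
        className = "com.mysql.jdbc.jdbc2.optional.MysqlDataSource",
        serverName = "[host name, e.g., localhost]",
        portNumber = 3306,
        databaseName = "[db name]",
        user = "[username, e.g., root]",
        password = "[password]")
@Singleton
@Startup
public class DataSourceConfiguration {
}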
To the line of code that connects:
con = DriverManager.getConnection(urlBaseDatos, usuario, clave);
Add the following:
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
con = DriverManager.getConnection(urlBaseDatos, usuario, clave);
Naturally I concur with the answer here, which is "in an application server you should use a DataSource".
Now just my two cents and to answer the original question:
From JDBC 4, you aren't required to register the driver anymore, and this line shouldn't be necessary:
DriverManager.registerDriver(new com.mysql.cj.jdbc.Driver());
See: https://docs.oracle.com/javase/8/docs/api/java/sql/DriverManager.html
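In plain JDBC terms that means something like the following is enough on its own, as long as a JDBC 4+ driver JAR is on the classpath (connection details here are just placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Jdbc4Demo {
    public static void main(String[] args) throws SQLException {
        // No Class.forName or registerDriver call: a JDBC 4+ driver self-registers
        // via its META-INF/services/java.sql.Driver entry.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password")) {
            System.out.println("Connected to: " + con.getMetaData().getDatabaseProductName());
        }
    }
}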
So when using a JDK8+/EE8/JDBC4.2 compliant application server, you shouldn't be mandated to register the driver. Or so I thought...
Though, like you @William, I noticed GlassFish/Payara requires it. It's very strange. Maybe it has to do with the way it handles class loading?
Wildfly, in turn, does the right thing and automatically loads the driver without actually having to manually register it.
I already have an integration-test phase, in which I run the Selenium tests. I also want to run some unit tests in this phase, because the app is very complex and has a lot of dependencies between its modules (a mess), so, after a week fighting with OpenEJB and Arquillian, I believe this would be easier.
The thing is: how do I make it work?
I already have the instance running; if I instantiate an InitialContext and try to look up some bean, I get an exception telling me that I have not set java.naming.factory.initial, and I don't know what to put there.
I'm also wondering about the annotated beans.
Suppose a Bean like this:
@Stateless
public class ABeanImpl implements ABean {
    @EJB
    private BBean bBean;
}
Will the container automatically inject the right BBean?
Thanks in advance
How to connect to JBoss 7.1 remote JNDI:
Here is the code snippet that I use for JBoss 7.1:
Properties props = new Properties();
String JBOSS_CONTEXT = "org.jboss.naming.remote.client.InitialContextFactory";
props.put("jboss.naming.client.ejb.context", true);
props.put(Context.INITIAL_CONTEXT_FACTORY, JBOSS_CONTEXT);
props.put(Context.PROVIDER_URL, "remote://localhost:4447");
props.put(Context.SECURITY_PRINCIPAL, "jboss");
props.put(Context.SECURITY_CREDENTIALS, "jboss123");
InitialContext ctx = new InitialContext(props);
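A lookup through that context then uses the application/module/bean naming scheme; for the ABean example from the question it would look something like this (application, module, and package names are placeholders):

// "myapp" = EAR/application name, "myejbs" = EJB module name - adjust to your deployment.
ABean aBean = (ABean) ctx.lookup("myapp/myejbs/ABeanImpl!com.example.ABean");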
Resolution of ambiguous ejb references:
According to the JBoss EJB 3 reference, if at any level of your EJB environment (EJB module/EAR/server) there are duplicate implementations of the interfaces in use, an exception will be thrown during resolution of the injected beans.
Based on the above, if you have a reference to an EJB bean whose interface:
has two implementations in your EJB module (JAR/WAR) - an exception will be thrown
has two implementations in your application (other EJB JARs in the same EAR) - an exception will be thrown
has two implementations, one in the module with bean ABeanImpl and the second somewhere else - the implementation from the current module is used.
I am just about to go crazy. I just spent many hours trying to work with Spring Data for Neo4j; working with Spring Data for MongoDB was a walk in the park compared to this.
My goals: 1) Work with Spring Data to manage two data stores, MongoDB and Neo4j.
(Correct me if I am wrong, but there is no Spring Data cross-store support for these two, which means I will use different domain entities for each store.)
2) Work with an embedded Neo4j graph.
3) Have the ability to monitor the graph with some client, like the web admin.
So I started with the Good Relationships Spring Data example, which uses:
POM
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-neo4j</artifactId>
<version>2.0.0.RELEASE</version>
</dependency>
XML
<neo4j:config storeDirectory="data/graph.db"/>
So my first question is: how can I monitor the graph in that configuration, and with which client?
So I read more and got to the Neo4j web admin for embedded graph configuration.
I followed every step, tried it, and boom!
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.haze.server.repository.mongo.ProfileRepository com.haze.server.services.ProfileServices.profileRepository; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'profileRepository': FactoryBean threw exception on object creation; nested exception is java.lang.NoSuchMethodError: org.springframework.data.repository.core.RepositoryMetadata.getDomainClass()Ljava/lang/Class;
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:506)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:284)
... 39 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'profileRepository': FactoryBean threw exception on object creation; nested exception is java.lang.NoSuchMethodError: org.springframework.data.repository.core.RepositoryMetadata.getDomainClass()Ljava/lang/Class;
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:149)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:102)
at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1442)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:305)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:876)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:818)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:735)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:478)
... 41 more
Caused by: java.lang.NoSuchMethodError: org.springframework.data.repository.core.RepositoryMetadata.getDomainClass()Ljava/lang/Class;
at org.springframework.data.mongodb.repository.support.MongoRepositoryFactory.getTargetRepository(MongoRepositoryFactory.java:84)
at org.springframework.data.repository.core.support.RepositoryFactorySupport.getRepository(RepositoryFactorySupport.java:137)
at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.getObject(RepositoryFactoryBeanSupport.java:125)
at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.getObject(RepositoryFactoryBeanSupport.java:41)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:142)
... 49 more
To make a long story short, the only version configuration I found that gets the Spring context to load is:
<spring.data.mongo.version>1.0.4.RELEASE</spring.data.mongo.version>
<neo4j.version>1.6</neo4j.version>
<spring-data-neo4j.version>2.0.1.RELEASE</spring-data-neo4j.version>
If I add the dependency below, as specified in the article, it crashes:
<spring-data-commons-core.version>1.3.0.RELEASE</spring-data-commons-core.version>
OK, so after many hours I got it working with the embedded Neo4j graph and the server wrapper (in order to monitor the graph from the web admin), with MongoDB as my primary data store.
Kind of happy, but sad because I'm stuck using an old version of the Neo4j server wrapper (1.6, since that is the only thing that worked). Still, I was motivated to start working with the graph via Spring Data.
So I got the most basic node entity:
@NodeEntity
public class ProfileNode {
    @GraphId
    private Long id;

    @Indexed(unique = true)
    private String pid = null;
}
Tried some basic operations:
// save node - OK
ProfileNode node = new ProfileNode();
node.setPid("44ed79b3ea8a99117aa601b16e916ddr");
ProfileNode profile = graphRepo.save(node);
// returns null
node = graphRepo.findByPropertyValue("pid",
"44ed79b3ea8a99117aa601b16e916ddr");
// throws exception - java.lang.UnsupportedOperationException: read only index
graphRepo.delete(profile);
Basically, almost every basic operation I tried didn't work for me.
I don't know whether the problems occur because of my mishmash of configurations or because I am doing something wrong in my code. Can someone please help me configure my application, or let me know why the most basic operations via Spring Data don't work for me?
Thanks.
Please update to 2.1.RC4 as Lasse said.
Regarding using the embedded server with SDN, it is described in the docs.
What does your repository look like?
You really should upgrade to SDN 2.1.RC4; it will be out as GA in a matter of weeks.
Secondly, here is some code to get you started: https://github.com/SpringSource/spring-data-neo4j/blob/master/spring-data-neo4j/src/test/java/org/springframework/data/neo4j/repository/DerivedFinderTests.java - you can add a test for findByPropertyValue if you are not keen on derived finders, but at least this works out of the box using just that single file, i.e. you can eliminate Spring config as a source of errors.
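For example, with the ProfileNode entity from the question, a derived finder would look roughly like this (a sketch against the SDN 2.1 API):

import org.springframework.data.neo4j.repository.GraphRepository;

public interface ProfileRepository extends GraphRepository<ProfileNode> {
    // Derived finder, resolved against the indexed 'pid' property.
    ProfileNode findByPid(String pid);
}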
For cross-store: I see little point in cross-store with MongoDB, to me cross-store is all about transactions across multiple data sources. With MongoDB + Neo, I'd just build different repositories and on the application level do just enough to use them concurrently.
You have to have spring-data-mongodb-1.1.0.RC1 and spring-data-neo4j-2.1.0.RC4. Both of those have the same spring-data-commons-core dependency.
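In pom.xml terms (a sketch), that combination is:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-mongodb</artifactId>
    <version>1.1.0.RC1</version>
</dependency>
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-neo4j</artifactId>
    <version>2.1.0.RC4</version>
</dependency>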