I'm trying to use the Ignite JDBC connection, with the goal of being able to call the cache from any client over JDBC.
I've got a number of scenarios working, so I have data loaded correctly and can run SQL queries 'directly' against the cache.
When I try to call from a separate client with
ResultSet rs = conn.createStatement().executeQuery("select * from my_table");
I hit an error:
class org.apache.ignite.IgniteCheckedException: Failed to find class with given class loader for unmarshalling (make sure same versions of all classes are available on all nodes or enable peer-class-loading) [clsLdr=URLClassLoader with NativeCopyLoader with RawResources(
Is there a way to prevent the Ignite JDBC connection from trying to do any unmarshalling?
I would like my client to be as agnostic as possible to the Ignite classes. For example, I would like to swap out a MariaDB connection for Ignite with as little code change as possible on the client side.
If I'm thinking about things the wrong way, then answers along the lines of "No, that will never work because ..." are more than welcome too.
Thanks
Brent
If you don't want to copy the libs to the client's lib folder, you can turn on Peer Class Loading:
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <!-- Explicitly enable peer class loading. -->
    <property name="peerClassLoadingEnabled" value="true"/>
    ...
</bean>
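If you configure the node programmatically instead of via Spring XML, the equivalent setting would be something like this (a minimal sketch; the class name is just for illustration):
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartNodeWithPeerClassLoading {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Same effect as the peerClassLoadingEnabled property in the XML above.
        cfg.setPeerClassLoadingEnabled(true);
        Ignition.start(cfg);
    }
}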
Also, you can work with BinaryObject instead of your classes. Here is an example of using SQL with BinaryObjects; more information on the binary format is provided here.
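A minimal sketch of that approach (the config file, cache name, table, and columns below are placeholders): the client keeps values in binary form and runs a fields query, so rows come back as plain column values and nothing has to be unmarshalled into your classes.
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class BinarySqlClient {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("client-config.xml")) {
            // withKeepBinary() keeps values as BinaryObject, so the value
            // classes never need to be on this node's classpath.
            IgniteCache<Object, BinaryObject> cache =
                    ignite.cache("my_table_cache").withKeepBinary();

            // SqlFieldsQuery returns rows as lists of column values.
            SqlFieldsQuery qry = new SqlFieldsQuery("select id, name from my_table");
            for (List<?> row : cache.query(qry).getAll()) {
                System.out.println(row);
            }
        }
    }
}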
I am doing a native build of my Quarkus app and am hitting the UnsupportedFeatureException: Detected an instance of Random/SplittableRandom on a few Vert.x Redis Client classes.
I am building using the docker container method:
./mvnw package -Dnative -Dquarkus.native.container-build=true
I have fixed some of the exceptions by including in the pom.xml:
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient
</quarkus.native.additional-build-args>
but am stuck on this one:
Fatal error: com.oracle.graal.pointsto.util.AnalysisError$ParsingError: Error encountered while parsing
io.vertx.redis.client.impl.RedisClusterConnection.send(io.vertx.redis.client.Request)
Parsing context:
at io.vertx.redis.client.impl.RedisClusterConnection.send(RedisClusterConnection.java:117)
at io.vertx.redis.client.impl.BaseRedisClient.lambda$send$1(BaseRedisClient.java:45)
at io.vertx.redis.client.impl.BaseRedisClient$$Lambda$1711/0x00000007c1ea57e8.apply(Unknown Source)
I have tried adding
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisClusterConnection
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.BaseRedisClient
and even
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection.send(io.vertx.redis.client.Request)
but the error persists.
I am fairly new to Java and very new to native building / GraalVM etc.
Can anyone shed any light on what class I should add, please?
Thanks,
Murray
I believe we can propose a change to the Vert.x Redis client to avoid the SplittableRandom use. The randomness is there mostly to share the load across nodes; it is not used for any security-related features. For this reason, a proposal to switch to round-robin selection would probably make more sense as a solution to this issue.
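For illustration only (this is not the actual Vert.x code), the idea is roughly to pick the next endpoint with a counter instead of a random generator, which also removes the Random/SplittableRandom instance that trips up the native image build:
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

final class RoundRobinSelector {
    private final AtomicInteger next = new AtomicInteger();

    // Spreads load across nodes without any use of Random/SplittableRandom.
    // floorMod keeps the index non-negative even after integer overflow.
    String pick(List<String> endpoints) {
        int idx = Math.floorMod(next.getAndIncrement(), endpoints.size());
        return endpoints.get(idx);
    }
}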
Ok, this seems to fix it:
EDIT: No it doesn't. See below.
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
The full profiles section in the pom.xml looks like this, for anyone else new to all this.
<profiles>
    <profile>
        <id>native</id>
        <activation>
            <property>
                <name>native</name>
            </property>
        </activation>
        <properties>
            <skipITs>false</skipITs>
            <quarkus.package.type>native</quarkus.package.type>
            <quarkus.native.additional-build-args>
                --initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
            </quarkus.native.additional-build-args>
        </properties>
    </profile>
</profiles>
I have different errors now, so it is still not building, but at least this seems to fix that specific issue.
EDIT: The problem persists...
If I build with the pom as described above, i.e.:
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
I now get:
Error: Classes that should be initialized at run time got initialized during image building:
io.vertx.redis.client.impl.RedisReplicationConnection
the class was requested to be initialized at run time
(from command line with 'io.vertx.redis.client.impl.RedisReplicationConnection').
So, I guess that is not the right class after all.
If I remove that class from the list I revert to the Random exception.
(Showing more detail)
[1/7] Initializing... (3.7s @ 0.10GB)
Version info: 'GraalVM 22.3.1.0-Final Java 17 Mandrel Distribution'
Java version info: '17.0.6+10'
C compiler: gcc (linux, x86_64, 11.3.0)
Garbage collector: Serial GC
4 user-specific feature(s)
- io.quarkus.runner.Feature: Auto-generated class by Quarkus from the existing extensions
- io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [org.jboss.threads] categories
- io.quarkus.runtime.graal.ResourcesFeature: Register each line in META-INF/quarkus-native-resources.txt as a resource on Substrate VM
- io.quarkus.websockets.client.runtime.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [io.undertow.websockets] categories
[2/7] Performing analysis... [*] (14.1s @ 3.37GB)
12,032 (89.58%) of 13,432 classes reachable
17,778 (59.65%) of 29,803 fields reachable
61,621 (57.16%) of 107,809 methods reachable
541 classes, 150 fields, and 2,655 methods registered for reflection
Fatal error: com.oracle.graal.pointsto.util.AnalysisError$ParsingError: Error encountered while parsing io.vertx.redis.client.impl.RedisReplicationConnection.send(io.vertx.redis.client.Request)
Parsing context:
at io.vertx.redis.client.impl.RedisReplicationConnection.send(RedisReplicationConnection.java:111)
at io.vertx.redis.client.RedisConnection.send(RedisConnection.java:83)
at io.vertx.redis.client.impl.RedisReplicationClient.getNodes(RedisReplicationClient.java:183)
at io.vertx.redis.client.impl.RedisReplicationClient.lambda$connect$4(RedisReplicationClient.java:126)
at io.vertx.redis.client.impl.RedisReplicationClient$$Lambda$2203/0x00000007c1630f60.handle(Unknown Source)
at io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91)
{etc}
I am beginning to think I have an error / issue in my code where I am using the Vert.x Redis Client. I am trying to narrow it down by trial and error.
Any other suggestions are most welcome.
Believe me, I know this question has been asked many times and has gotten an answer many times, and these answers seemed to have worked for some users. I've spent many hours trying the various proposed solutions and, while they work on Linux (Ubuntu), they seem to have no effect on Windows (Windows 10 Home with jdk1.8.0_161). The web application is using EclipseLink 2.5.0 for persistence.
I've tried including the mysql-connector-java-5.1.46-bin.jar file in the WAR archive (WEB-INF/lib; using the Deployment Assembly screen in Eclipse), copying it to the payara5/glassfish/lib folder, as well as the payara5/glassfish/domains/domain1/lib/ and payara5/glassfish/domains/domain1/lib/applibs folders. I also tried specifying the library when deploying the web application, i.e., putting mysql-connector-java-5.1.46-bin.jar as the value in the library field. I updated the CLASSPATH environment variable with the path to the JAR file. Every time, the server was restarted. None of these actions have any effect. Note that they did work on Linux Ubuntu.
See below for the well-known exception trace:
Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.7.0.v20170811-d680af5): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/rom
Error Code: 0
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:326)
at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:138)
at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:170)
at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.setOrDetectDatasource(DatabaseSessionImpl.java:228)
at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:804)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:254)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:757)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:216)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:324)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:348)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:311)
...
Any thoughts would be greatly appreciated.
UPDATE: as a sanity check (got the idea thanks to @Abhi) I added the line
try {
    System.out.println("JDBC driver: " +
            Class.forName("com.mysql.jdbc.Driver"));
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}
This correctly prints the following line (without throwing an exception):
JDBC driver: class com.mysql.jdbc.Driver
But it does nothing to solve the problem. In other words, the driver seems to be loadable, but somehow EclipseLink is not able to find it (?)
Looks like I'm able to answer my own question. I asked the exact same question on the Payara Forum and was recommended to define a data source instead of using the driver directly (@Chris pointed in this direction as well). A data source is likely the best way to go anyway, but I wanted to avoid the complexity and use the simplest setup, which clearly didn't work.
For reference, you can find the working setup below:
In Payara 5, go to JDBC > JDBC Connection Pools > New: enter a pool name, select javax.sql.DataSource as the resource type, and MySQL as the vendor. On step 2, com.mysql.jdbc.jdbc2.optional.MysqlDataSource should be preselected as the Datasource Classname. Fill out the Username and Password (e.g., root, changeit) properties under the Additional Properties header. Select Finish. On the page for the newly created connection pool, select Ping to make sure it was set up correctly.
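If you prefer the command line over the admin console, the same pool and resource can, as far as I know, be created with asadmin; the pool name, credentials, and database details below are just examples:
asadmin create-jdbc-connection-pool \
  --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
  --restype javax.sql.DataSource \
  --property user=root:password=changeit:serverName=localhost:portNumber=3306:databaseName=rom \
  MySqlPool

asadmin create-jdbc-resource --connectionpoolid MySqlPool jdbc/rom

asadmin ping-connection-pool MySqlPool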
In your persistence.xml file, make sure the persistence-unit element starts as follows:
<persistence-unit name="ROM" transaction-type="JTA">
<jta-data-source>java:global/<connection pool name></jta-data-source>
Create a web.xml file (this may also be done using Java Annotations):
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <data-source>
        <name>java:global/<connection pool name></name>
        <class-name>com.mysql.jdbc.jdbc2.optional.MysqlDataSource</class-name>
        <server-name>[host name, e.g., localhost]</server-name>
        <port-number>3306</port-number>
        <database-name>[db name]</database-name>
        <user>[username, e.g., root]</user>
        <password>[password]</password>
    </data-source>
</web-app>
This configuration worked for me at least. Hoping this will help someone else down the road. Note that there are various useful settings for a connection pool - see e.g., here for more options.
Before the line of code that makes the connection:
con = DriverManager.getConnection(urlBaseDatos, usuario, clave);
Add the following:
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
con = DriverManager.getConnection(urlBaseDatos, usuario, clave);
Naturally I concur with the answer here, which is "in an application server you should use a DataSource".
Now just my two cents and to answer the original question:
Since JDBC 4, you aren't required to register the driver anymore, and this line shouldn't be necessary:
DriverManager.registerDriver(new com.mysql.cj.jdbc.Driver());
See: https://docs.oracle.com/javase/8/docs/api/java/sql/DriverManager.html
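A minimal standalone sketch of what should work with any JDBC 4+ driver jar on the classpath (URL and credentials are placeholders): the driver is discovered through META-INF/services/java.sql.Driver, so no Class.forName or registerDriver call is needed.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Jdbc4AutoLoad {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/rom";
        // The MySQL driver is auto-registered by the service loader mechanism.
        try (Connection con = DriverManager.getConnection(url, "root", "changeit")) {
            System.out.println("Connected via " + con.getMetaData().getDriverName());
        }
    }
}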
So when using a JDK 8+/EE 8/JDBC 4.2 compliant application server, you shouldn't be required to register the driver manually. Or so I thought...
Though, like you @William, I noticed GlassFish/Payara requires it. It's very strange. Maybe it has to do with the way it handles class loading?
WildFly, in turn, does the right thing and automatically loads the driver without requiring you to register it manually.
I have an applicationContext XML file that imports multiple resources (Camel context files).
<import resource="AddRequest.xml" />
<import resource="AdviseRequest.xml" />
I am caching the definitions of this XML beforehand using new FileSystemXmlApplicationContext().
Say AddRequest.xml uses some method to connect to some host, while AdviseRequest.xml uses a CXF endpoint for SOAP.
When I try to load the applicationContext XML, it tries to cache both files first before actually starting the camelContext. At this stage it is trying to check the CXF endpoint availability. Is there any way to handle this if the SOAP WSDL is actually down?
The reason is that if there are connection issues in the second XML, my first XML also fails, since it tries to cache both at the same time.
Note: I cannot use two separate applicationContext files.
I have used the code below in the Camel route.
<onException id="Request_onException1">
    <exception>java.net.ConnectException</exception>
    <handled>
        <constant>true</constant>
    </handled>
</onException>
In my application we are using TopLink with JPA.
The problem is that we are using stored procedures in this application: we take a connection via JNDI for the stored procedure calls, and we use the EntityManager for the remaining queries. However, when we launch the application it takes two connections from the connection pool. After the application launches I call the stored procedure (SP);
for one SP I take one connection, yet in the WebSphere connection pool it creates two connections.
Can you please help me understand how to overcome this problem?
I'm not using JTA. To get the JDBC connection I am using:
EntityManager em = getJpaTemplate().getEntityManagerFactory().createEntityManager();
This is how I am getting the JDBC connection, and I configured the persistence.xml file with the following:
<properties>
    <property name="toplink.logging.level" value="OFF"/>
    <property name="toplink.cache.type.default" value="NONE"/>
    <property name="com.thoughtinc.runtime.persistence.sql.syntax" value="db2" />
</properties>
Please look into this and tell me if I am doing anything wrong here.
Are you using JTA or non-JTA? Are you releasing the connection back to the pool after using it?
Depending on your configuration (please include your persistence.xml), if you have a non-JTA login configured, TopLink may use it for non-transactional read queries. This is configurable in your persistence.xml.
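For example, a persistence unit along these lines (the JNDI names are placeholders) gives TopLink a separate non-JTA source to use for non-transactional reads:
<persistence-unit name="myUnit" transaction-type="JTA">
    <jta-data-source>jdbc/myJtaDataSource</jta-data-source>
    <non-jta-data-source>jdbc/myNonJtaDataSource</non-jta-data-source>
    ...
</persistence-unit>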
To get the JDBC Connection from a TopLink (EclipseLink) EntityManager, use em.unwrap(Connection.class).
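A minimal sketch (the persistence unit name is a placeholder, and the unwrapped Connection is typically only valid while a transaction is active, so obtain it after begin()):
import java.sql.Connection;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class UnwrapConnectionExample {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit");
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // The provider hands back the underlying pooled JDBC connection.
            Connection connection = em.unwrap(Connection.class);
            // ... call the stored procedure on 'connection' here ...
            em.getTransaction().commit();
        } finally {
            em.close();
            emf.close();
        }
    }
}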
I have a web-app that requires two settings:
A JDBC datasource
A string token
I desperately want to be able to deploy one .war to various different containers (Jetty, Tomcat, GF3 minimum) and configure these settings at the application level within the container.
My code does this:
InitialContext ctx = new InitialContext();
Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
token = (String)envCtx.lookup("token");
ds = (DataSource) envCtx.lookup("jdbc/datasource");
Let's assume I've used the GlassFish management interface to create two JDBC resources: jdbc/test-datasource and jdbc/live-datasource, which connect to different copies of the same schema on different servers, with different credentials, etc. Say I want to deploy this to GlassFish and point it at the test datasource; I might have this in my sun-web.xml:
...
<resource-ref>
    <res-ref-name>jdbc/datasource</res-ref-name>
    <jndi-name>jdbc/test-datasource</jndi-name>
</resource-ref>
...
but
sun-web.xml goes inside my war, right?
surely there must be a way to do this through the management interface
Am I even trying to do the right thing? Do other containers make this any easier? I'd be particularly interested in how Jetty 7 handles this, since I use it for development.
EDIT Tomcat has a reasonable way to do this:
Create $TOMCAT_HOME/conf/Catalina/localhost/webapp.xml with:
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true">
    <!-- String resource -->
    <Environment name="token" value="value of token" type="java.lang.String" override="false" />

    <!-- Linking to a global resource -->
    <ResourceLink name="jdbc/datasource1" global="jdbc/test" type="javax.sql.DataSource" />

    <!-- Derby -->
    <Resource name="jdbc/datasource2"
              type="javax.sql.DataSource"
              auth="Container"
              driverClassName="org.apache.derby.jdbc.EmbeddedDataSource"
              url="jdbc:derby:test;create=true"
    />

    <!-- H2 -->
    <Resource name="jdbc/datasource3"
              type="javax.sql.DataSource"
              auth="Container"
              driverClassName="org.h2.jdbcx.JdbcDataSource"
              url="jdbc:h2:~/test"
              username="sa"
              password=""
    />
</Context>
Note that override="false" means the opposite of what you might expect: it means that this setting can't be overridden by web.xml.
I like this approach because the file is part of the container configuration not the war, but it's not part of the global configuration; it's webapp specific.
I guess I expected a bit more from GlassFish since it is supposed to have a full web admin interface, but I would be happy enough with something equivalent to the above.
For GF v3, you may want to try leveraging the --deploymentplan option of the deploy subcommand of asadmin. It is discussed on the man page for the deploy subcommand.
We had just this issue when migrating from Tomcat to Glassfish 3. Here is what works for us.
In the Glassfish admin console, configure datasources (JDBC connection pools and resources) for DEV/TEST/PROD/etc.
Record your deployment-time parameters (in our case, database connection info) in a properties file. For example:
# Database connection properties
dev=jdbc/dbdev
test=jdbc/dbtest
prod=jdbc/dbprod
Each web app can load the same database properties file.
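A small sketch of loading that file from the classpath (the file name database.properties is an assumption for illustration):
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class DbProperties {
    // e.g. resourceNameFor("dev") -> "jdbc/dbdev"
    public static String resourceNameFor(String environment) throws IOException {
        Properties props = new Properties();
        try (InputStream in = DbProperties.class.getClassLoader()
                .getResourceAsStream("database.properties")) {
            if (in == null) {
                throw new IOException("database.properties not found on the classpath");
            }
            props.load(in);
        }
        return props.getProperty(environment);
    }
}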
Look up the JDBC resource as follows.
import java.sql.Connection;
import javax.sql.DataSource;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/**
 * @param resourceName the resource name of the connection pool (eg jdbc/dbdev)
 * @return Connection a pooled connection from the data source
 *         associated with resourceName
 * @throws NamingException will be thrown if resource name is not found
 */
public Connection getDatabaseConnection(String resourceName)
        throws NamingException, SQLException {
    Context initContext = new InitialContext();
    DataSource pooledDataSource = (DataSource) initContext.lookup(resourceName);
    return pooledDataSource.getConnection();
}
Note that this is not the usual two-step process involving a lookup using the naming context "java:comp/env". I have no idea if this works in application containers other than GF3, but in GF3 there is no need to add resource descriptors to web.xml when using the above approach.
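For contrast, the usual two-step lookup looks roughly like this, and it relies on a matching resource-ref entry in web.xml (the resource name here is just an example):
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CompEnvLookup {
    public DataSource lookup() throws NamingException {
        Context initContext = new InitialContext();
        // Step 1: enter the component environment namespace.
        Context envContext = (Context) initContext.lookup("java:comp/env");
        // Step 2: "jdbc/datasource" must match a <res-ref-name> declared in web.xml.
        return (DataSource) envContext.lookup("jdbc/datasource");
    }
}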
I'm not sure to really understand the question/problem.
As an Application Component Provider, you declare the resource(s) required by your application in a standard way (container agnostic) in the web.xml.
At deployment time, the Application Deployer and Administrator is supposed to follow the instructions provided by the Application Component Provider to resolve external dependencies (amongst other things), for example by creating a datasource at the application server level and mapping its real JNDI name to the resource name used by the application, through an application-server-specific deployment descriptor (e.g. the sun-web.xml for GlassFish). Obviously, this is a container-specific step and thus not covered by the Java EE specification.
Now, if you want to change the database an application is using, you'll have to either:
change the mapping in the application server deployment descriptor - or -
modify the configuration of the existing datasource to make it point to another database.
Having an admin interface doesn't really change anything. If I missed something, don't hesitate to let me know. And just in case, maybe have a look at this previous answer.