Configure Rexster to use Titan

I'm trying to configure Rexster to use Titan by modifying the rexster.xml file in Rexster.
But when I run
http://localhost:8182/graphs/mygraph
in my browser I get a message saying:
{"message":"Graph [mygraph] could not be found"}.
This is the part of the rexster.xml file I've modified:
<graph>
<graph-name>mygraph</graph-name>
<graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
<graph-location>C:/titan-server-jre6-0.4.4/bin/mygraph</graph-location>
<graph-read-only>false</graph-read-only>
<properties>
<storage.backend>local</storage.backend>
<buffer-size>100</buffer-size>
</properties>
<extensions>
<allows>
<allow>tp:gremlin</allow>
</allows>
</extensions>
</graph>
I've added all the jar files from Titan's lib folder into Rexster under config/ext/titan, and I've created a graph in Titan using the Gremlin shell that ships with Titan:
g = TitanFactory.open('mygraph');
g.createKeyIndex('name', Vertex.class);
v = g.addVertex(null);
v.setProperty('name','x');
v.setProperty('type','person');
v.setProperty('age',20);
v1 = g.addVertex(null);
v1.setProperty('name','y');
v1.setProperty('type','person');
v1.setProperty('age',22);
e = g.addEdge(null, v, v1, 'knows');
e1 = g.addEdge(null, v1, v, 'knows');
g.shutdown();
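(As a sanity check, the same on-disk graph can also be opened programmatically with the settings rexster.xml should end up using; a minimal sketch, assuming Titan 0.4.x's commons-configuration based TitanFactory.open(Configuration) and the directory above:)
import org.apache.commons.configuration.BaseConfiguration;
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

// Open the graph directory directly; "local" is Titan 0.4.x's embedded BerkeleyDB backend.
BaseConfiguration conf = new BaseConfiguration();
conf.setProperty("storage.backend", "local");
conf.setProperty("storage.directory", "C:/titan-server-jre6-0.4.4/bin/mygraph");
TitanGraph graph = TitanFactory.open(conf);
System.out.println(graph.getVertices("name", "x"));  // should contain the vertex created above
graph.shutdown();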
What am I missing?
[UPDATE]:
I had placed the jar files from Titan in the wrong directory in Rexster; they are now in the right place. But when I now run the Rexster server I get the following output:
[INFO] Application - .:Welcome to Rexster:.
[INFO] RexsterProperties - Using [C:\rexster-server-2.5.0\config\rexster.xml] as configuration source.
[INFO] Application - Rexster is watching [C:\rexster-server-2.5.0\config\rexster.xml] for change.
Exception in thread "main" java.lang.AbstractMethodError: com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(Lcom/tinkerpop/rexster/config/GraphConfigurationContext;)Lcom/tinkerpop/blueprints/Graph;
    at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:124)
    at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54)
    at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99)
    at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47)
    at com.tinkerpop.rexster.Application.<init>(Application.java:97)
    at com.tinkerpop.rexster.Application.main(Application.java:189)
How can I resolve this?

Problem solved by changing the version of the Rexster server: before, I was using version 2.5.0; I am now using Rexster 2.4.0 and Titan 0.4.4.
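For anyone hitting the same thing: the AbstractMethodError means Rexster 2.5.0 is calling configureGraphInstance(GraphConfigurationContext), a method signature that Titan 0.4.4's TitanGraphConfiguration (compiled against the older Rexster API) never implemented, so matching the Rexster and Titan versions is exactly the right fix. A quick way to see which overloads a configuration class actually provides is plain reflection; a small sketch using the class name from the stack trace above:
import java.lang.reflect.Method;

public class CheckTitanRexsterSignatures {
    public static void main(String[] args) throws Exception {
        // Class name taken from the stack trace; the Titan jars must be on the classpath.
        Class<?> c = Class.forName(
                "com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration");
        for (Method m : c.getMethods()) {
            // Compare these signatures with the one the Rexster version on the
            // classpath expects (configureGraphInstance(GraphConfigurationContext) in 2.5.0).
            if (m.getName().equals("configureGraphInstance")) {
                System.out.println(m);
            }
        }
    }
}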

Related

Quarkus native build Random/SplittableRandom exception with Vert.x Redis Client

I am doing a native build of my Quarkus app and am hitting the UnsupportedFeatureException: Detected an instance of Random/SplittableRandom on a few Vert.x Redis client classes.
I am building using the docker container method:
./mvnw package -Dnative -Dquarkus.native.container-build=true
I have fixed some of the exceptions by including in the pom.xml:
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient
</quarkus.native.additional-build-args>
but am stuck on this one:
Fatal error: com.oracle.graal.pointsto.util.AnalysisError$ParsingError: Error encountered while parsing
io.vertx.redis.client.impl.RedisClusterConnection.send(io.vertx.redis.client.Request)
Parsing context:
at io.vertx.redis.client.impl.RedisClusterConnection.send(RedisClusterConnection.java:117)
at io.vertx.redis.client.impl.BaseRedisClient.lambda$send$1(BaseRedisClient.java:45)
at io.vertx.redis.client.impl.BaseRedisClient$$Lambda$1711/0x00000007c1ea57e8.apply(Unknown Source)
I have tried adding
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisClusterConnection
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.BaseRedisClient
and even
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection.send(io.vertx.redis.client.Request)
but the error persists.
I am fairly new to Java and very new to native building / GraalVM, etc.
Can anyone shed any light on what class I should add, please?
Thanks,
Murray
I believe we can propose a change to the Vert.x Redis client to avoid the Random/SplittableRandom use. The randomness is there mostly to share the load across nodes; it is not used for any security-related features. For this reason, a proposal to use round-robin selection instead would probably make more sense as a solution to this issue.
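To illustrate the round-robin idea (a sketch of the proposal only, not the actual Vert.x Redis client code): an atomic counter spreads requests across nodes without touching Random or SplittableRandom:
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical selector: cycle through the known endpoints with an atomic counter
// instead of picking one at random.
final class RoundRobinSelector {
    private final AtomicInteger next = new AtomicInteger();

    String pick(List<String> endpoints) {
        // Math.floorMod keeps the index non-negative even after the counter overflows.
        int idx = Math.floorMod(next.getAndIncrement(), endpoints.size());
        return endpoints.get(idx);
    }
}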
Ok, this seems to fix it:
EDIT: No it doesn't. See below.
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
The full profiles section in the pom.xml looks like this, for anyone else who is new to all this:
<profiles>
<profile>
<id>native</id>
<activation>
<property>
<name>native</name>
</property>
</activation>
<properties>
<skipITs>false</skipITs>
<quarkus.package.type>native</quarkus.package.type>
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
</properties>
</profile>
</profiles>
I have different errors now, so still not building, but at least this seems to fix that specific issue.
EDIT: The problem persists...
If I build with the pom as described above, ie:
<quarkus.native.additional-build-args>
--initialize-at-run-time=io.vertx.redis.client.impl.RedisSentinelClient\,io.vertx.redis.client.impl.RedisReplicationConnection
</quarkus.native.additional-build-args>
I now get:
Error: Classes that should be initialized at run time got initialized during image building:
io.vertx.redis.client.impl.RedisReplicationConnection
the class was requested to be initialized at run time
(from command line with 'io.vertx.redis.client.impl.RedisReplicationConnection').
So, I guess that is not the right class after all.
If I remove that class from the list I revert to the Random exception.
(Showing more detail)
[1/7] Initializing... (3.7s @ 0.10GB)
Version info: 'GraalVM 22.3.1.0-Final Java 17 Mandrel Distribution'
Java version info: '17.0.6+10'
C compiler: gcc (linux, x86_64, 11.3.0)
Garbage collector: Serial GC
4 user-specific feature(s)
- io.quarkus.runner.Feature: Auto-generated class by Quarkus from the existing extensions
- io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [org.jboss.threads] categories
- io.quarkus.runtime.graal.ResourcesFeature: Register each line in META-INF/quarkus-native-resources.txt as a resource on Substrate VM
- io.quarkus.websockets.client.runtime.DisableLoggingFeature: Disables INFO logging during the analysis phase for the [io.undertow.websockets] categories
[2/7] Performing analysis... [*] (14.1s @ 3.37GB)
12,032 (89.58%) of 13,432 classes reachable
17,778 (59.65%) of 29,803 fields reachable
61,621 (57.16%) of 107,809 methods reachable
541 classes, 150 fields, and 2,655 methods registered for reflection
Fatal error: com.oracle.graal.pointsto.util.AnalysisError$ParsingError: Error encountered while parsing io.vertx.redis.client.impl.RedisReplicationConnection.send(io.vertx.redis.client.Request)
Parsing context:
at io.vertx.redis.client.impl.RedisReplicationConnection.send(RedisReplicationConnection.java:111)
at io.vertx.redis.client.RedisConnection.send(RedisConnection.java:83)
at io.vertx.redis.client.impl.RedisReplicationClient.getNodes(RedisReplicationClient.java:183)
at io.vertx.redis.client.impl.RedisReplicationClient.lambda$connect$4(RedisReplicationClient.java:126)
at io.vertx.redis.client.impl.RedisReplicationClient$$Lambda$2203/0x00000007c1630f60.handle(Unknown Source)
at io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:91)
{etc}
I am beginning to think I have an error / issue in my code where I am using the Vert.x Redis Client. I am trying to narrow it down by trial and error.
Any other suggestions are most welcome.
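One way to narrow it down is to native-build a stripped-down reproducer that only exercises the Redis client; if that still fails, the problem lies in the client/initialization flags rather than in your own code. A minimal sketch of such a reproducer, assuming Vert.x 4's Redis API and a Redis instance on the default local port:
import io.vertx.core.Vertx;
import io.vertx.redis.client.Command;
import io.vertx.redis.client.Redis;
import io.vertx.redis.client.Request;

public class RedisRepro {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Plain standalone client; switch the connection string / client options to the
        // cluster or replication setup that is failing to see if the native build still breaks.
        Redis client = Redis.createClient(vertx, "redis://127.0.0.1:6379");
        client.send(Request.cmd(Command.PING))
                .onSuccess(resp -> System.out.println("PING -> " + resp))
                .onFailure(Throwable::printStackTrace)
                .onComplete(ignored -> vertx.close());
    }
}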

WebLogic 12c not obeying "prefer-web-inf-classes" or "prefer-application-packages" during deployment

I am having a very strange problem with WebLogic 12.1.3.0.0. I have an application that causes a NullPointerException inside the AdminServer during deployment, yet the application runs fine after the error.
The NullPointerException during deployment happens because the deployment process is not respecting the following settings in my deployment descriptors:
AppName.war/WEB-INF/weblogic.xml:
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app>
<context-root>AppName</context-root>
<container-descriptor>
<prefer-web-inf-classes>true</prefer-web-inf-classes>
</container-descriptor>
</weblogic-web-app>
And AppName.ear/META-INF/weblogic-application.xml:
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-application>
<prefer-application-packages>
<package-name>javax.faces.*</package-name>
</prefer-application-packages>
</weblogic-application>
Since these settings are ignored, the javax.faces.webapp.FacesServlet class is loaded by the system classloader (sun.misc.Launcher$AppClassLoader) from $WL_HOME/wlserver/modules/glassfish.jsf_2.0.0.0_2-1-20.jar, not by the WAR classloader. This eventually leads to the NPE because of some poorly written code in weblogic.servlet.internal.WebAnnotationProcessor, which assumes that javax.faces.webapp.FacesServlet is annotated with @MultipartConfig, which it is not in glassfish.jsf_2.0.0.0_2-1-20.jar. The correct version of this class is located in AppName.ear/AppName.war/WEB-INF/lib/jsf-api-2.2.16.jar.
I have verified all of this by remote debugging the AdminServer, and by turning on Classloader Logging, which shows this:
GCL[61ba0535][AppName.ear#AppName.war] GCL.loadClass(javax.faces.webapp.FacesServlet)>
FCL[10d372cb][] FCL.loadClass(javax.faces.webapp.FacesServlet)>
FCL[10d372cb][] FCL.findClass(javax.faces.webapp.FacesServlet)>
GCL[75bc8d74][DomainLib] GCL.loadClass(javax.faces.webapp.FacesServlet)>
This issue makes it impossible to open the AppName deployment in the Admin Console, because it always throws the error (with the aforementioned NPE in the logs):
The configuration for AppName is still being loaded from your last request, please wait a moment and retry.
In contrast, when I run the EAR on the server where it was deployed, exactly the same EAR works just fine. To show that classloading is working properly there, here's the log:
CACL[6eb4b4ad][AppName#AppName] CACL.loadClass(javax.faces.webapp.FacesServlet)>
CACL[6eb4b4ad][AppName#AppName] GCL.loadClass(javax.faces.webapp.FacesServlet)>
FCL[37ee3b94][AppName#AppName] FCL.loadClass(javax.faces.webapp.FacesServlet)>
FCL[37ee3b94][AppName#AppName] FCL.findClass(javax.faces.webapp.FacesServlet)>
GCL[3cc63f62][AppName#] GCL.loadClass(javax.faces.webapp.FacesServlet)>
FCL[770e8d03][] FCL.loadClass(javax.faces.webapp.FacesServlet)>
FCL[770e8d03][] FCL.findClass(javax.faces.webapp.FacesServlet)>
FCL[770e8d03][] FCL.matchesClassFilterList(javax.faces.webapp.FacesServlet): javax.faces. index : 0 end : 12>
FCL[770e8d03][] FCL.findClass(javax.faces.webapp.FacesServlet): Found match>
GCL[3cc63f62][AppName#] GCL.findClass(javax.faces.webapp.FacesServlet)>
GCL[3cc63f62][AppName#] GCL.findLocalClass(javax.faces.webapp.FacesServlet): Classpath in use: <LIST OF ALL JARS IN MY EAR>
GCL[3cc63f62][AppName#] GCL.findLocalClass(javax.faces.webapp.FacesServlet): not found>
CACL[6eb4b4ad][AppName#AppName] CACL.findClass(javax.faces.webapp.FacesServlet)>
CACL[6eb4b4ad][AppName#AppName] CACL.findClass(javax.faces.webapp.FacesServlet): About to loadClass>
CACL[6eb4b4ad][AppName#AppName] GCL.findClass(javax.faces.webapp.FacesServlet)>
CACL[6eb4b4ad][AppName#AppName] GCL.findLocalClass(javax.faces.webapp.FacesServlet): Classpath in use: <LIST OF ALL JARS IN MY WAR>
CACL[6eb4b4ad][AppName#AppName] GCL.findLocalClass(javax.faces.webapp.FacesServlet): Found class>
CACL[6eb4b4ad][AppName#AppName] GCL.defineClass(javax.faces.webapp.FacesServlet)>
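(Aside: a lighter-weight way to confirm which jar FacesServlet comes from in each environment, without remote debugging or classloader logging, is to print its code source, e.g. from a test servlet or startup listener:)
// Shows the jar javax.faces.webapp.FacesServlet was actually loaded from, and by which loader.
Class<?> fs = javax.faces.webapp.FacesServlet.class;
System.out.println("Code source: " + fs.getProtectionDomain().getCodeSource());
System.out.println("Classloader: " + fs.getClassLoader());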
I don't know how to fix this... Why is the AdminServer completely ignoring the deployment descriptors in my EAR, while the server that runs the EAR sets everything up properly?
Any ideas would be appreciated.

Kafka OSGI bundle - Producer initialization issue could not initialize class org.apache.kafka.clients.producer.ProducerConfig

I am trying to create a Kafka producer in Karaf 4.0.3:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

// Temporarily clear the thread context classloader so the Kafka client resolves its
// classes via its own bundle classloader inside OSGi, then restore it afterwards.
ClassLoader currentLoader = Thread.currentThread().getContextClassLoader();
try {
    Thread.currentThread().setContextClassLoader(null);
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    producer.close();
} finally {
    Thread.currentThread().setContextClassLoader(currentLoader);
}
My pom.xml contains:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.9.0.0</version>
</dependency>
While deploying the code, I am getting the error below:
2016-05-09 14:18:12,127 | ERROR | nsole user karaf | ShellUtil | 44 - org.apache.karaf.shell.core - 4.0.3 | Exception caught while executing command
org.apache.karaf.shell.support.MultiException: Error executing command on bundles:
Error starting bundle143: Activator start error in bundle KafkaArtifact [143].
at org.apache.karaf.shell.support.MultiException.throwIf(MultiException.java:61)
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:69)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.bundle.command.BundlesCommand.execute(BundlesCommand.java:54)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)[44:org.apache.karaf.shell.core:4.0.3]
at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)[44:org.apache.karaf.shell.core:4.0.3]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]
Caused by: java.lang.Exception: Error starting bundle143: Activator start error in bundle KafkaArtifact [143].
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)[24:org.apache.karaf.bundle.core:4.0.3]
... 12 more
Caused by: org.osgi.framework.BundleException: Activator start error in bundle KafkaArtifact [143].
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2144)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)[org.apache.felix.framework-5.4.0.jar:]
at org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)[24:org.apache.karaf.bundle.core:4.0.3]
at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)[24:org.apache.karaf.bundle.core:4.0.3]
... 12 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.kafka.clients.producer.ProducerConfig
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:181)
at com.KafkaSample.KafkaArtifact.Producer.<init>(Producer.java:24)
at com.KafkaSample.KafkaArtifact.KafkaConsumerProducerDemo.start(KafkaConsumerProducerDemo.java:9)
at org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
... 16 more
I have tried the workaround in the link below as well:
Karaf - Kafka OSGI bundle - Producer issue
But I am still getting the same error.
I know it's a three-month-old question. I got stuck with this producer issue too. After digging for two days I stumbled upon a JIRA issue where someone had already reported this problem; the resolution has been patched, but it will only be available in version 2.18.0 of the Camel library. To resolve this in the meantime, what I did was clone the Camel repository and build the components I wanted for my project. These components are at version 2.18.0-SNAPSHOT at this time. Under the components folder, select the components you want and run mvn clean package to get the jar generated in the target directory.
One additional thing: when you compile the camel-core sub-project, it executes a lot of test cases, which you might not need, so just replace "" with "org/apache/camel/**/*.java" in the last line of the properties.
Hope this helps.

Karaf 3.0.x config:update command does not create .cfg file in /etc

I am using Karaf 3.0.1 with my bundle (https://github.com/johanlelan/camel-cxfrs-blueprint-example). I want to manage properties at runtime, but I see that config:update does not create a file in /etc. Why?
<cm:property-placeholder persistent-id="org.apache.camel.examples.cxfrs.blueprint"
update-strategy="reload">
<!-- list some properties for this test -->
<cm:default-properties>
<cm:property name="cxf.application.in"
value="cxfrs:bean:rest.endpoint?throwExceptionOnFailure=false&bindingStyle=SimpleConsumer&loggingFeatureEnabled=true"/>
<cm:property name="common.tenant.in" value="direct-vm:common.tenant.in"/>
<cm:property name="common.authentication.in" value="direct-vm:common.authentication.in"/>
<cm:property name="application.put.in" value="direct-vm:application.putById"/>
<cm:property name="application.post.in"
value="direct-vm:application.postApplications"/>
<cm:property name="log.trace.level" value="INFO"/>
</cm:default-properties>
</cm:property-placeholder>
In karaf I try to modify an endpoint url:
karaf#root()> config:edit org.apache.camel.examples.cxfrs.blueprint
karaf#root()> config:property-set common.tenant.in direct-vm:test
karaf#root()> config:property-list
service.pid = org.apache.camel.examples.cxfrs.blueprint
common.tenant.in = direct-vm:test
felix.fileinstall.filename = file:/F:/travail/servers/karaf-lan/etc/org.apache.camel.examples.cxfrs.blueprint.cfg
karaf#root()> config:update
karaf#root()>
I should point out that my bundle is updated after config:update, but no file exists in /etc... I think this works in Karaf 2.3.5.
Configurations are persisted by the ConfigurationAdmin service. If you are using Karaf, it uses the implementation from Felix ConfigAdmin [1]. By default Karaf configures ConfigAdmin to store files in its local bundle storage area under /data, but that can be changed by editing the felix.cm.dir property.
Also, the support for the .cfg files comes from Felix FileInstall [2].
[1] http://felix.apache.org/documentation/subprojects/apache-felix-config-admin.html
[2] http://felix.apache.org/site/apache-felix-file-install.html
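For completeness, config:update is essentially doing the following through the standard OSGi Config Admin API; a minimal sketch of the programmatic equivalent (the PID is the one from the question, and whether a .cfg file then appears under etc/ depends on FileInstall, as described above):
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class UpdateTenantEndpoint {
    // ConfigurationAdmin would normally be injected, e.g. via a Blueprint <reference>.
    public void update(ConfigurationAdmin configAdmin) throws Exception {
        Configuration config = configAdmin.getConfiguration(
                "org.apache.camel.examples.cxfrs.blueprint", null);
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("common.tenant.in", "direct-vm:test");
        // Persists through ConfigAdmin and triggers the reload strategy.
        config.update(props);
    }
}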
It is a known issue in Karaf 3.0.1.
You may use Apache Karaf 3.0.2, where this bug is fixed.

OpenEJB tries to bind a remote EJB twice

I am having a strange issue regarding OpenEJB and integration tests.
I am using OpenEJB 4.5.1, JUnit 4.10, and Maven 3 with the Surefire plugin 2.13.
I start OpenEJB from my test doing:
InitialContext ic = new InitialContext();
It works (apparently) fine and deploys the EJB (CentrosBeanFacade) correctly, but after that it tries to bind it again, which causes the server to fail.
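For reference, a typical embedded-OpenEJB bootstrap from a JUnit test looks roughly like the following; the no-argument new InitialContext() above behaves the same way when these properties come from a jndi.properties file on the test classpath (the lookup name here is illustrative):
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties props = new Properties();
// OpenEJB's embedded container boots when its local InitialContext factory is requested.
props.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.openejb.core.LocalInitialContextFactory");
InitialContext ic = new InitialContext(props);
// Illustrative lookup of the remote business interface deployed above:
Object centros = ic.lookup("CentrosBeanFacadeRemote");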
These are the interesting log parts:
First time --> [OK] :
DEBUG - bound ejb at name: openejb/Deployment/CentrosBeanFacade/com.csc.health.xhis.arqtest.api.CentrosLogica, ref: org.apache.openejb.core.ivm.naming.BusinessRemoteReference@6506fe2b
DEBUG - bound ejb at name: openejb/Deployment/CentrosBeanFacade/com.csc.health.xhis.arqtest.api.CentrosLogica!Remote, ref: org.apache.openejb.core.ivm.naming.BusinessRemoteReference@6506fe2b
INFO - Jndi(name=CentrosBeanFacade) --> Ejb(deployment-id=CentrosBeanFacade)
INFO - Jndi(name=global/classpath.ear/test-ejb/CentrosBeanFacade!com.csc.health.xhis.arqtest.api.CentrosLogica) --> Ejb(deployment-id=CentrosBeanFacade)
INFO - Jndi(name=global/classpath.ear/test-ejb/CentrosBeanFacade) --> Ejb(deployment-id=CentrosBeanFacade)
But then it tries to deploy it a second time.
DEBUG - failed to bind ejb at name: openejb/Deployment/CentrosBeanFacade/com.csc.health.xhis.arqtest.api.CentrosLogica, ref: org.apache.openejb.core.ivm.naming.BusinessRemoteReference@32f00d9a
ERROR - Jndi name could not be bound; it may be taken by another ejb. Jndi(name=openejb/Deployment/CentrosBeanFacade/com.csc.health.xhis.arqtest.api.CentrosLogica!Remote)
The internal BusinessRemoteReference for the EJB is a different object than the first time, so it really is trying to deploy it a second time. But I have checked the classpath, and the EJB is not included twice. If I execute a similar procedure from a plain Java class, it also fails.
Any clues?