ArangoDB and Gephi: Import data from ArangoDB into Gephi - visualization

Is there a way to use ArangoDB as a data source for Gephi? I tried https://github.com/datablend/gephi-blueprints-plugin/wiki, but it only works with the indirection over Rexster with the included blueprints-arangodb-graph plugin.
I think this is a very inelegant option with a lot of overhead.
I would like to be able to add a Blueprints ArangoDB plugin to Gephi and then choose ArangoDB as a data source, maybe in combination with the gephi-blueprints-plugin.
I think a combination of the Blueprints plugin for Gephi and the ArangoDB Blueprints API would be the nicest solution, avoiding the additional step over a CSV (or other) file to work with data from ArangoDB in Gephi.
The description of the Blueprints plugin for Gephi says: "The Gephi Blueprints plugin allows a user to import graph-data from any graph database that implements the Tinkerpop Blueprints generic graph API". But I don't know how - out of the box it only supports TinkerGraph, Neo4j, OrientDB, Dex, RexsterGraph and FluxGraph.
I tried to create an arangodb.xml in /etc/graph and to add the Blueprints implementation of ArangoDB as a jar in the ".gephi/dev/modules" folder. But Gephi doesn't load the jar, so the menu entry "Import/Graph database ..." and the selection of "arangodb" lead to a null pointer error because of the missing class files of the ArangoDB Blueprints API.
Has someone worked with the gephi-blueprints-plugin and/or blueprints-arangodb-graph and has some ideas?

This was discussed on this ticket of the Blueprints adapter of ArangoDB. A plugin for Gephi has to be built specifically for ArangoDB. Axel Hoffmann is thinking about developing this plugin.
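Until such a plugin exists, one workaround is to export the graph to GraphML with the Blueprints utilities and open that file in Gephi, which avoids Rexster entirely. A rough sketch in Java; the ArangoDBGraph constructor arguments (host, port, graph and collection names) are assumptions, so check the blueprints-arangodb-graph documentation for the exact signature:

import java.io.FileOutputStream;
import java.io.OutputStream;

import com.arangodb.blueprints.ArangoDBGraph;
import com.tinkerpop.blueprints.util.io.graphml.GraphMLWriter;

public class ArangoToGraphML {
    public static void main(String[] args) throws Exception {
        // Open the ArangoDB graph through the Blueprints adapter
        // (host/port, graph name and collection names are placeholders).
        ArangoDBGraph graph = new ArangoDBGraph("localhost", 8529, "mygraph", "vertices", "edges");
        try (OutputStream out = new FileOutputStream("mygraph.graphml")) {
            // Serialize the whole graph as GraphML; Gephi can open the file directly.
            GraphMLWriter.outputGraph(graph, out);
        } finally {
            graph.shutdown();
        }
    }
}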

Related

How to set up Apache Sling to use a relational DB

I am on Sling 11, which uses Jackrabbit Oak as content repository. I was wondering how to set up Sling to store the JCR repo on an RDBMS (DB2 to be specific).
I found this link on Jackrabbit Persistence, but it looks like it does not apply to Oak, and the Oak documentation is mostly about MongoDB.
I also found an implementation of a Cassandra Resource Provider, although that seems designed to access specific paths mapped to Cassandra without using Oak.
Answering here, but credit goes to the Sling users' mailing list.
Package the DB driver in an OSGi bundle
Download Sling's starter project
In boot.txt, add a new run mode (in my case oak_db2):
[settings]
sling.run.mode.install.options=oak_tar,oak_mongo,oak_db2
Download Sling's datasource project and compile it.
In oak.txt, configure the run mode (this will load the bundles for you in Felix):
[artifacts startLevel=15 runModes=oak_db2]
com.h2database/h2-mvstore/1.4.196
com.ibm.db2/jcc4/11.1
org.apache.sling/org.apache.sling.datasource/1.0.3-SNAPSHOT
And set up the services that will manage persistence:
[configurations runModes=oak_db2]
org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService
documentStoreType="RDB"
org.apache.sling.datasource.DataSourceFactory
url="jdbc:db2://10.1.2.3:50000/sling"
driverClassName="com.ibm.db2.jcc.DB2Driver"
username="****"
password="****"
datasource.name="oak"
Create a database named 'sling'.
Run with: java -jar -Dsling.run.modes=oak_db2 sling-starter.jar
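As a quick sanity check before wiring everything into Oak, it can help to verify that the packaged DB2 driver and the JDBC URL actually work with a few lines of plain JDBC. A minimal sketch, reusing the (placeholder) host, database name and driver class from the configuration above:

import java.sql.Connection;
import java.sql.DriverManager;

public class Db2ConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Same driver class and URL style as in the DataSourceFactory configuration above.
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://10.1.2.3:50000/sling", "user", "password")) {
            System.out.println("Connected to: " + con.getMetaData().getDatabaseProductName());
        }
    }
}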

Using Hawkular for REST services built using CXF

I am new to distributed tracing / Hawkular and would like to experiment with tracing for my distributed CXF REST services using Hawkular.
Will it be possible to trace CXF services using Hawkular? If anyone has docs or a reference sample app, that would be great.
Also, is there any other tracing tool which can solve this requirement (tracing Java CXF REST services)? Zipkin-Brave has a feature for this which I am also looking at.
I'd recommend instrumenting your application using the OpenTracing API and choosing a concrete implementation later. Under the Hawkular project, there's the Hawkular APM module, which provides a solution for capturing, visualizing and making sense of the data. However, we (Hawkular APM) recently decided to join the Jaeger project to have better support for the OpenTracing case. We expect to have similar features from Hawkular APM ported to Jaeger "soon".
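To give an idea of what such instrumentation looks like, here is a minimal OpenTracing sketch around a CXF/JAX-RS resource method. The tracer construction is deliberately left to GlobalTracer, because it depends on the implementation you pick (Hawkular APM, Jaeger, ...); the operation and tag names are just examples, and depending on the OpenTracing Java API version the span builder method may be start() or startManual():

import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class OrderResource {

    // Resolves whatever tracer implementation was registered at startup.
    private final Tracer tracer = GlobalTracer.get();

    public String getOrder(String orderId) {
        // Wrap the business operation in a span and tag it with useful metadata.
        Span span = tracer.buildSpan("getOrder").start();
        span.setTag("order.id", orderId);
        try {
            return loadOrder(orderId); // the actual work
        } finally {
            span.finish();
        }
    }

    private String loadOrder(String orderId) {
        return "order-" + orderId;
    }
}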
For OpenTracing, there are quite a few "framework integrations" under the OpenTracing Contrib organization, including JAX-RS, which might serve as a base or reference for a CXF-specific implementation. If nothing suits you, I'm certain we'd welcome a contribution.
If you are just looking to learn OpenTracing, I'd suggest taking a look at the Hawkular APM examples directory, including a vertx-opentracing example.

Using OrientDB in production with JDBC

I am planning to use OrientDB in production using the JDBC driver, so I need to confirm some points:
Can the JDBC driver provide all the OrientDB features (transactions, links, etc.), or is using the Java API the better choice?
I noticed that there is a Spring Data implementation in the OrientDB GitHub - is it ready to use in production?
At this link there is a discussion of the issue you raised.
In general, the JDBC driver supports only a subset of OrientDB - only the part you can use with commands.
If you're a Java developer, I suggest using the Java Graph API: http://orientdb.com/docs/last/Graph-Database-Tinkerpop.html
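For comparison, a minimal sketch of what the Graph API looks like (database URL and credentials are placeholders):

import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.orient.OrientGraph;

public class OrientGraphExample {
    public static void main(String[] args) {
        // Connect to a remote OrientDB database (URL and credentials are placeholders).
        OrientGraph graph = new OrientGraph("remote:localhost/mydb", "admin", "admin");
        try {
            // Create two vertices and an edge; OrientGraph is transactional,
            // so the changes are committed explicitly.
            Vertex alice = graph.addVertex(null);
            alice.setProperty("name", "Alice");
            Vertex bob = graph.addVertex(null);
            bob.setProperty("name", "Bob");
            graph.addEdge(null, alice, bob, "knows");
            graph.commit();
        } finally {
            graph.shutdown();
        }
    }
}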

Where are the ElasticSearch APIs exposed when running Crate?

I've successfully installed the elasticsearch-head plugin on Crate and can access its web UI, but it fails to connect. I'd like to be able to use it to visualize the data in the underlying Elasticsearch store. Is there a way to access the Elasticsearch API directly so that head can work?
You will need to enable the API, which is done in the crate.yml file. The setting to change is:
es.api.enabled: true
However, Elasticsearch plugins may not work out of the box because Crate and Elasticsearch aren't binary compatible (you will probably have to modify the namespaces and imports). Elasticsearch has a shading step in its Maven configuration, so the Elasticsearch jar contains different namespaces than Crate does (because Crate doesn't use shading).
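Once es.api.enabled is set and the node restarted, a quick way to check whether the Elasticsearch endpoints respond is to call one of them over HTTP. This sketch assumes the ES API is served on Crate's default HTTP port 4200; adjust host and port to your setup:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsApiCheck {
    public static void main(String[] args) throws Exception {
        // _cluster/health is a standard Elasticsearch endpoint; port 4200 is
        // assumed to be Crate's HTTP port here.
        URL url = new URL("http://localhost:4200/_cluster/health");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        System.out.println("HTTP status: " + con.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}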

HDFS web interface alternative

Alright, this is annoying! I am new to Hadoop, and I am trying to find a decent alternative to the basic HDFS web interface. I tried the Hadoop Eclipse plugin, but it seems it's already outdated and it's a pain to set up correctly. I have Cloudera's distribution installed and I heard about Cloudera Desktop, but it's no longer available. Can anybody tell me a decent alternative to the HDFS web interface where I can easily upload and download files to HDFS via a GUI? P.S. I am running everything on my local machine, no cluster involved. I have tried a lot to find one, but nothing seems to be pointing in the right direction.
You can use WebHDFS, whose REST API supports the complete FileSystem interface for HDFS: http://hadoop.apache.org/docs/r1.0.4/webhdfs.html
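If scripting uploads and downloads is acceptable instead of a GUI, the same WebHDFS endpoint can also be used through Hadoop's FileSystem API with a webhdfs:// URI. A minimal sketch; the NameNode host, HTTP port (50070 is the usual default) and the paths are placeholders:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsExample {
    public static void main(String[] args) throws Exception {
        // Talk to HDFS over WebHDFS (plain HTTP) instead of the native RPC protocol.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI("webhdfs://localhost:50070"), conf);

        // Upload a local file and then list the target directory.
        fs.copyFromLocalFile(new Path("data.csv"), new Path("/user/me/data.csv"));
        for (FileStatus status : fs.listStatus(new Path("/user/me"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}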
OR
You can integrate Hadoop with Hoop (HDFS over HTTP), which is used to access HDFS via the HTTP protocol. Hoop provides access to all Hadoop Distributed File System (HDFS) operations (read and write) over HTTP/S.
For more details, please refer to http://bigobject.blogspot.in/2013/03/hoop-https-over-hdfs.html
You can also use HttpFS as an alternative to Hoop: http://bigobject.blogspot.in/2013/03/apache-hadoop-httpfs-service-that.html