Where are the Elasticsearch APIs exposed when running Crate?

I've successfully installed the elasticsearch-head plugin on Crate and can access its web UI, but it fails to connect. I'd like to be able to use it to visualize the data in the underlying Elasticsearch store. Is there a way to access the Elasticsearch API directly so that head can work?

You will need to enable the Elasticsearch API, which is done in the crate.yml file. The setting to change is:
es.api.enabled: true
However, Elasticsearch plugins may not work out of the box because Crate and Elasticsearch aren't binary compatible (you will probably have to modify the namespaces and imports). Elasticsearch has a shading step in its Maven configuration, so the Elasticsearch jar contains different namespaces than Crate does (Crate doesn't use shading).
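A minimal sketch of the whole change, assuming a default installation where Crate serves HTTP on port 4200 (adjust host and port to your cluster):

# crate.yml
es.api.enabled: true

# after restarting the node, the Elasticsearch REST endpoints should answer on
# Crate's HTTP port, and elasticsearch-head can be pointed at the same address:
curl http://localhost:4200/_cluster/health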

Related

Helm: Datadog Agent with JDBC driver

I would like to use the Datadog Oracle integration via the Datadog Helm chart. The Oracle integration docs state: "To use the Oracle integration, either install the Oracle Instant Client libraries, or download the Oracle JDBC Driver."
I do not want to use a custom image to package the JDBC driver; I want to use a standard image such as the 7-jmx tag. Other options that come to mind (e.g. an EFS volume with the driver inside) also seem like overkill.
The best option to me seems to be an init container that downloads the JDBC driver, but the Datadog Helm chart does not support custom init containers for the agents.
What's the best way to do this, i.e. to get a Datadog Agent with a JDBC driver via Helm?
Answer from Datadog Support:
Thanks again for reaching out to Datadog!
From looking further into this, there does not seem to be a way we can package the JDBC driver with the Datadog Agent. I understand that this is not desirable as you would prefer to use a standard image but I believe the best way to have these bundled together would be to have a custom image for your deployment.
Apologies for any inconveniences that this may cause.
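For illustration, a minimal sketch of the custom-image route the support answer suggests. The driver jar, its path, and the registry name are placeholders, and the Helm values keys may differ between chart versions, so check them against the chart you actually deploy:

# Dockerfile: extend the standard agent image with the Oracle JDBC driver
FROM datadog/agent:7-jmx
# placeholder jar and path; point jdbc_driver_path in oracle.d/conf.yaml at it
COPY ojdbc8.jar /opt/oracle/ojdbc8.jar

# values.yaml for the Datadog Helm chart: run the custom image instead
agents:
  image:
    repository: my-registry/datadog-agent-oracle
    tag: 7-jmx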

How to create a backend Grafana plugin for annotations?

I understand how to use Grafana's HashiCorp go-plugin system to create a generic time-series datasource plugin.
How can I make a backend plugin that can be used for annotations? The only official example provided uses the fake-json-datasource for annotations, which is a separate service and not a plugin.
I've found an example of a built-in datasource that provides annotations: https://github.com/grafana/grafana/blob/master/pkg/tsdb/stackdriver/stackdriver.go#L78-L79. However, I'm not sure how to make Grafana send annotation queries to a backend plugin that is not built in.

How to set up Apache Sling to use a relational DB

I am on Sling 11, which uses Jackrabbit Oak as its content repository. I was wondering how to set up Sling to store the JCR repo in an RDBMS (DB2, to be specific).
I found this link on Jackrabbit persistence, but it looks like it does not apply to Oak, and the Oak documentation is mostly about MongoDB.
I also found an implementation of a Cassandra resource provider, although that seems designed to access specific paths mapped to Cassandra without using Oak.
Thanks,
Answering here, but credit goes to the Sling users mailing list.
Package the DB driver in an OSGi bundle (a sketch follows these steps).
Download Sling's starter project
In boot.txt, add a new run mode (in my case, oak_db2):
[settings]
sling.run.mode.install.options=oak_tar,oak_mongo,oak_db2
Download Sling's datasource project and compile it.
In oak.txt, configure the run mode (this will load the bundles for you in Felix):
[artifacts startLevel=15 runModes=oak_db2]
  com.h2database/h2-mvstore/1.4.196
  com.ibm.db2/jcc4/11.1
  org.apache.sling/org.apache.sling.datasource/1.0.3-SNAPSHOT
Then set up the services that will manage persistence:
[configurations runModes=oak_db2]
  org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService
    documentStoreType="RDB"
  org.apache.sling.datasource.DataSourceFactory
    url="jdbc:db2://10.1.2.3:50000/sling"
    driverClassName="com.ibm.db2.jcc.DB2Driver"
    username="****"
    password="****"
    datasource.name="oak"
Create a database named 'sling'.
Run with: java -Dsling.run.modes=oak_db2 -jar sling-starter.jar
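For the first step, a minimal sketch of wrapping the plain DB2 JDBC jar as an OSGi bundle with the bnd command-line tool; the jar and output names are placeholders, and the maven-bundle-plugin is an equally valid route (check the bnd CLI help of your version for the exact flags):

# wrap the plain JDBC jar so Felix can install it as a bundle
java -jar biz.aQute.bnd.jar wrap --output db2-jcc4-osgi.jar jcc4.jar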

Should Elasticsearch be deployed embedded or in client/server mode?

Which is the preferred deployment mode for Elasticsearch: embedded mode (embedded into the product/application) or client/server mode?
Apache Solr and most SQL and NoSQL databases are usually deployed in client/server mode, where the server runs standalone and the client is a driver library used in the application.
In the case of Elasticsearch, the client and server binaries are the same. It would be difficult to package two separate Elasticsearch binaries, one for the client to use in the application and another for the standalone server, so I am planning to go with the REST API.
What is the general practice for Elasticsearch deployment: keep Elasticsearch standalone and use the REST API, or embed Elasticsearch within the application?
For production usage it is better to decouple your application from the Elasticsearch server.
Let's say you want to upgrade to Elasticsearch 2.x: with an embedded server, that means you need to recompile and redeploy your application. Wouldn't that be overhead?
If you run unit/data-integration tests, you can use embedded Elasticsearch to serve your testing needs.
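As a minimal illustration of the decoupled setup (host, index name, and query are placeholders), the application only speaks HTTP to a standalone cluster, so the server can be upgraded without recompiling anything:

# no Elasticsearch jars in the application; plain REST calls instead
curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s -H 'Content-Type: application/json' -X POST 'http://localhost:9200/products/_search' -d '{"query": {"match_all": {}}}'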

ArangoDB and Gephi: Import data from ArangoDB into Gephi

Is there a way to use ArangoDB as a datasource for Gephi? I tried https://github.com/datablend/gephi-blueprints-plugin/wiki, but it only works with the indirection over Rexster with the included blueprints-arangodb-graph plugin.
I think this is a very inelegant option with a lot of overhead.
I would like to be able to add a Blueprints ArangoDB plugin to Gephi and then choose ArangoDB as a datasource, maybe in combination with the gephi-blueprints-plugin. A combination of the Blueprints plugin for Gephi and the ArangoDB Blueprints API seems like the nicest solution, as it avoids the extra step through a CSV (or other) file when working with ArangoDB data in Gephi.
The description of the Blueprints plugin for Gephi says: "The Gephi Blueprints plugin allows a user to import graph-data from any graph database that implements the Tinkerpop Blueprints generic graph API". But I don't know how; out of the box it only supports TinkerGraph, Neo4j, OrientDB, Dex, RexsterGraph and FluxGraph.
I tried to create an arangodb.xml in /etc/graph and add the Blueprints implementation of ArangoDB as a jar in the ".gephi/dev/modules" folder. But Gephi doesn't load the jar, so the menu entry "Import/Graph database ..." and the selection of "arangodb" lead to a NullPointerException because of the missing class files of the ArangoDB Blueprints API.
Has anyone worked with the gephi-blueprints-plugin and/or blueprints-arangodb-graph and has some ideas?
This was discussed on this ticket of the Blueprints adapter for ArangoDB. A plugin for Gephi has to be built specifically for ArangoDB. Axel Hoffmann is thinking about developing this plugin.