Aggregate multiple embedded Neo4j graphs into one Neo4j instance

I'm using a Neo4j Java embedded graph database to store simulation results on each node of a computer grid. Now, to ask useful questions of this data, I need to aggregate this huge number of embedded graph objects into one central Neo4j instance (or server).
Do you know of any online resources that explain this? Do you have any experience with this use case to share?
Thanks!

Hmm,
that depends on the layout of your merge. Is each graph going into a different subgraph, time-ordered or otherwise? There is no utility for this per se, since the merge semantics are not generic. Ask on http://groups.google.com/group/neo4j/ for others who might have good algorithms for this.
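For what it's worth, one generic pattern is to export each embedded graph to a plain node/relationship dump on its grid node, then run an import job against the central server that namespaces every node by its source, so the subgraphs stay distinct. A minimal sketch in Python against a central Bolt endpoint (the dump format, label, and property names here are made up, and the official neo4j Python driver is assumed; your merge semantics will likely be more specific):

import json
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://central-host:7687",
                              auth=("neo4j", "password"))

def import_dump(path, source):
    # Hypothetical dump format: {"nodes": [...], "rels": [...]}
    with open(path) as f:
        dump = json.load(f)
    with driver.session() as session:
        for n in dump["nodes"]:
            # MERGE on (source, localId) keeps the job idempotent and
            # prevents id collisions between different grid nodes.
            session.run(
                "MERGE (x:SimNode {source: $src, localId: $id}) "
                "SET x += $props",
                src=source, id=n["id"], props=n["properties"])
        for r in dump["rels"]:
            # Cypher cannot parameterize relationship types, so the
            # original type is kept as a property in this sketch.
            session.run(
                "MATCH (a:SimNode {source: $src, localId: $start}), "
                "(b:SimNode {source: $src, localId: $end}) "
                "MERGE (a)-[:REL {type: $type}]->(b)",
                src=source, start=r["start"], end=r["end"],
                type=r["type"])

for host in ["grid-01", "grid-02"]:
    import_dump("/exports/%s.json" % host, source=host)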

Related

Neptune-Gremlin-Python | Best practices for scaling network analysis and serving use cases like recommendations in real time

I have a generic question around best practices for using Neptune DB as a network database and its ability to scale up for complex computing. I want to develop a user recommendation system where incoming users on the platform are shown other users they are likely to follow, in order to grow the network.
For implementing a simple technique like triadic closure, should I use Gremlin queries on the network DB (AWS Neptune in my case) for generating the recommendations? I believe in this case I would have to create Python scripts that parallelise queries for multiple nodes and generate recommendations for each node at scale.
Or is it more common practice to store the network data, in the form of nodes, edges, and their properties, in a relational database, and then perform computations on it by running SQL queries to load the network data into Python and using packages like NetworkX on top of that? In this case I won't have to worry about batch computations, since a relational database like Redshift would take care of that. However, I would be writing the Python logic to implement techniques such as triadic closure myself.
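For illustration, here is roughly how I picture the NetworkX route (a sketch only; the edge list below is a stand-in for rows loaded from the relational store via SQL):

import networkx as nx

# Stand-in for (follower, followee) rows loaded from Redshift.
edge_list = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]

G = nx.DiGraph()
G.add_edges_from(edge_list)

def recommend(user, k=10):
    # Triadic closure: score friends-of-friends by how many of the
    # user's follows also follow them, excluding existing follows.
    followed = set(G.successors(user))
    scores = {}
    for middle in followed:
        for candidate in G.successors(middle):
            if candidate != user and candidate not in followed:
                scores[candidate] = scores.get(candidate, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("a"))  # -> ['d']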
Additionally, in the future I may want to use more complex graph-computation techniques like graph clustering, partitioning, and the calculation of different kinds of centralities. Are all or any of these possible within the framework of Neptune + Gremlin?
With the above context, these are the questions I am seeking answers to:
What is the tech stack commonly used by a data science team working with graph data to build solutions such as user recommendations? By data science tech stack I mean technologies that help query, analyse, visualise, compute, and serve.
Can Neptune + Gremlin replace Python packages such as NetworkX for network analysis and centrality measurement?
Is Neptune DB ideal only as a data store, or can it also support complex network analysis and recommendation serving?
Any insight/resources on this would be really helpful!
It is definitely possible to do triadic closure in Gremlin. I have also seen data scientists use both NetworkX and Gremlin together by running the gremlin-python client in a Jupyter Notebook. As this question is quite specific to Amazon Neptune, you may want to post to the Neptune support forum at [1]. There are also some useful Gremlin recipes at [2]; a small gremlin-python sketch follows the links below.
If you post to the support forum I am sure someone will respond.
[1] https://forums.aws.amazon.com/forum.jspa?forumID=253&start=0
[2] http://tinkerpop.apache.org/docs/current/recipes/
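For what it's worth, a minimal sketch of triadic closure with gremlin-python might look like the following (the endpoint, vertex id, and 'follows' edge label are assumptions; the pattern follows the recommendation recipe in [2]):

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import P, Order, Scope, Column

conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

# People followed by the people 'user-123' follows, excluding the user
# and anyone already followed, ranked by how many paths reach them.
recommendations = (
    g.V("user-123").as_("me")
     .out("follows").aggregate("already")
     .out("follows")
     .where(P.neq("me"))
     .where(P.without("already"))
     .groupCount()
     .order(Scope.local).by(Column.values, Order.desc)
     .next()
)
conn.close()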

Combining MongoDB and a GraphDB like Neo4j

As part of a CMS I'm developing, I've got MongoDB as the primary datastore, which feeds into ElasticSearch and Redis. All this is configured declaratively.
I'm currently trying to develop a declarative API in JSON (a DSL of sorts) which, when implemented, will enable me to write uniform queries in JSON while, at the backend, these datastores work in tandem to produce the result. Federated search, if you will.
Now, while fleshing out the supported types of queries for this JSON API, I've come across a class of queries not (efficiently) supported by my current setup: graph-based queries, like friend-of-friend, RDF queries, etc., which I'd like to support as well.
So I'm looking for the best-fitting way to introduce a GraphDB into this ecosystem. I should probably say the app layer sits in Node.js.
I've come across lots of articles comparing Neo4j (a popular GraphDB) with MongoDB, but not many actual use cases or real-world scenarios in which the two complement each other.
Any pointers highly appreciated.
You might want to take a look at structr[1], which has a RESTful graph database backend that you can configure using Java beans. In future versions, there will be a configuration option using REST calls only, so that you can fire up a structr server and configure and use it as a standalone graph database backend.
Just contact us on twitter or via email.
(disclaimer: I'm one of the developers of structr, so this comment may not be 100% impartial :))
[1] http://structr.org
The databases are very much complementary.
Use MongoDB to store your raw data/system of record and load the raw data into Neo4j for additional insights/analysis. When you are dealing with unstructured data, you want to store the information in a datastore that is conducive to unstructured data; MongoDB fits the bill (as do other similar NoSQL databases). While Neo4j is considered a NoSQL database, it doesn't fit the bill for unstructured data. Because you have to determine what is a relationship, what is a node, and what properties are stored for each, it's better suited to semi-structured data where you have some understanding of the type of analysis you want to do.
A great architecture is to store your unstructured data in MongoDB and use jobs to load it into Neo4j. This allows you to re-load your graph if you discover new pieces of information you'd like to store in the graph for additional analysis.
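A minimal sketch of such a load job in Python, using pymongo and the official neo4j driver (the collection, labels, and field names here are hypothetical):

from pymongo import MongoClient
from neo4j import GraphDatabase

mongo = MongoClient("mongodb://localhost:27017")
neo = GraphDatabase.driver("bolt://localhost:7687",
                           auth=("neo4j", "password"))

def load_articles():
    with neo.session() as session:
        for doc in mongo.cms.articles.find():
            # MERGE keeps the job idempotent, so the graph can be
            # dropped and rebuilt from the system of record at any time.
            session.run(
                "MERGE (a:Article {mongoId: $id}) SET a.title = $title "
                "MERGE (u:Author {name: $author}) "
                "MERGE (u)-[:WROTE]->(a)",
                id=str(doc["_id"]),
                title=doc.get("title", ""),
                author=doc.get("author", "unknown"))

load_articles()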
They are definitely NOT replacements for each other. They fit very different use cases.

NoSQL for time series/logged instrument reading data that is also versioned

My Data
It's primarily monitoring data, passed in the form of Timestamp: Value, for each monitored value, on each monitored appliance. It's regularly collected over many appliances and many monitored values.
Additionally, it has the quirky feature of many of these data values being derived at the source, with the calculation changing from time to time. This means that my data is effectively versioned, and I need to be able to simply call up only data from the most recent version of the calculation. Note: This is not versioning where the old values are overwritten. I simply have timestamp cutoffs, beyond which the data changes its meaning.
My Usage
Downstream, I'm going to have various undefined data mining/machine learning uses for the data. It's not really clear yet what those uses are, but it is clear that I will be writing all of the downstream code in Python. Also, we are a very small shop, so I can really only deal with so much complexity in setup, maintenance, and interfacing to downstream applications. We just don't have that many people.
The Choice
I am not allowed to use a SQL RDBMS to store this data, so I have to find the right NoSQL solution. Here's what I've found so far:
Cassandra
Looks totally fine to me, but it seems like some of the major users have moved on. It makes me wonder if it's just not going to be that much of a vibrant ecosystem. This SE post seems to have good things to say: Cassandra time series data
Accumulo
Again, this seems fine, but I'm concerned that this is not a major, actively developed platform. It seems like this would leave me a bit starved for tools and documentation.
MongoDB
I have a, perhaps irrational, intense dislike for the Mongo crowd, and I'm looking for any reason to discard this as a solution. It seems to me like the data model of Mongo is all wrong for things with such a static, regular structure. My data even comes in (and has to stay in) order. That said, everybody and their mother seems to love this thing, so I'm really trying to evaluate its applicability. See this and many other SE posts: What NoSQL DB to use for sparse Time Series like data?
HBase
This is where I'm currently leaning. It seems like the successor to Cassandra with a totally usable approach for my problem. That said, it is a big piece of technology, and I'm concerned about really knowing what it is I'm signing up for, if I choose it.
OpenTSDB
This is basically a time-series specific database, built on top of HBase. Perfect, right? I don't know. I'm trying to figure out what another layer of abstraction buys me.
My Criteria
Open source
Works well with Python
Appropriate for a small team
Very well documented
Has specific features to take advantage of ordered time series data
Helps me solve some of my versioned data problems
So, which NoSQL database can actually help me address my needs? It can be anything, from my list or not. I'm just trying to understand which platform actually has code, not just usage patterns, that supports my super-specific, well-understood needs. I'm not asking which one is best or which one is cooler. I'm trying to understand which technology can most natively store and manipulate this type of data.
Any thoughts?
It sounds like you are describing one of the most common use cases for Cassandra. Time series data in general is often a very good fit for the Cassandra data model. More specifically, many people store metric/sensor data like you are describing. See:
http://rubyscale.com/blog/2011/03/06/basic-time-series-with-cassandra/
http://www.datastax.com/dev/blog/advanced-time-series-with-cassandra
http://engineering.rockmelt.com/post/17229017779/modeling-time-series-data-on-top-of-cassandra
As far as your concerns about the community, I'm not sure what is giving you that impression, but there is quite a large community (see IRC, mailing lists) as well as a growing number of Cassandra users.
http://www.datastax.com/cassandrausers
Regarding your criteria:
Open source
Yes
Works well with Python
http://pycassa.github.com/pycassa/
Appropriate for a small team
Yes
Very well documented
http://www.datastax.com/docs/1.1/index
Has specific features to take advantage of ordered time series data
See above links
Helps me solve some of my versioned data problems
If I understand your description correctly, you could solve this in multiple ways. You could start writing a new row when the version changes. Alternatively, you could use composite columns to store the version along with the timestamp/value pair.
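For example, with the DataStax Python driver the new-row-per-version approach might look like this (the keyspace, table, and names are made up; the pycassa-era API differs, but the composite-key idea is the same):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("monitoring")  # hypothetical keyspace

# One partition per (metric, calculation version), rows ordered by time.
session.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        metric_id text,
        version   int,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((metric_id, version), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

# Reading only the most recent calculation version is then a single
# partition scan:
rows = session.execute(
    "SELECT ts, value FROM readings WHERE metric_id = %s AND version = %s",
    ("appliance42.cpu_temp", 3))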
I'll also note that Accumulo, HBase, and Cassandra all have essentially the same data model. You will still find small differences in the data model with regard to specific features that each database offers, but the basics will be the same.
The bigger difference between the three will be the architecture of the system. Cassandra takes its architecture from Amazon's Dynamo: every server in the cluster is the same, and it is quite simple to set up. HBase and Accumulo are more direct clones of BigTable. These have more moving parts and will require more setup and more types of servers, for example HDFS, ZooKeeper, and the HBase/Accumulo-specific server types.
Disclaimer: I work for DataStax (we work with Cassandra)
I only have experience with Cassandra and MongoDB, but my experience might add something.
So you're basically doing time-based metrics?
OK, if I understand right, you use the timestamp as a versioning mechanism, so that to get the latest calculation used you query by the metric ID (or whatever), sort by ts DESC, and take the first row?
It sounds like a versioned key value store at times.
With this in mind I probably would not recommend either of the two I have used.
Cassandra is too rigid and too hierarchical, too based around how you query, to the point where you can only make one pivot of graph data from the column family (I presume you would want to graph these metrics), which is crazy; that is why I dropped it. As for searching (which Facebook used it for, and only that), it's not that impressive either.
MongoDB, well, I love MongoDB and I am a devotee of the user group, and it could work here if you didn't use a key-value storage policy. But at the end of the day, if your mind is not set and you don't like the tech, then let me be the very first to say: don't use it! You will be no good with a tech that you don't like, so stay away from it.
Though I would picture this happening in Mongo much like:
{
    _id: ObjectId(),
    metricId: 'AvailableMessagesInQueue',
    formula: '4+5/10.01',
    result: NaN,
    ts: ISODate()
}
And you query for the latest version of your calculation with:
var latest = db.metrics.find({ metricId: 'AvailableMessagesInQueue' })
                       .sort({ ts: -1 })
                       .limit(1)
                       .next();
This returns a document with the structure you see above. Without knowing more about exactly how you wish to query, and the general server and app scenario, that's the best I can come up with.
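And since all of your downstream code is going to be in Python, the same lookup with pymongo would be roughly (connection details made up):

from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").monitoring
latest = db.metrics.find_one(
    {"metricId": "AvailableMessagesInQueue"},
    sort=[("ts", -1)])  # newest calculation first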
I found this thread on HBase, though: http://mail-archives.apache.org/mod_mbox/hbase-user/201011.mbox/%3C5A76F6CE309AD049AAF9A039A39242820F0C20E5#sc-mbx04.TheFacebook.com%3E
It might be of interest; it seems to support the argument that HBase is a good time-based key-value store.
I have not personally used HBase so do not take anything I say about it seriously....
I hope I have added something; if not, you could try narrowing your criteria so we can answer more specific questions.
Hope it helps a little,
Not a plug for any particular technology, but this article on time series storage using MongoDB might provide another way of thinking about the storage of large amounts of "sensor" data.
http://www.10gen.com/presentations/mongodc-2011/time-series-data-storage-mongodb
Axibase Time-Series Database
Open source
There is a free Community Edition
Works well with Python
https://github.com/axibase/atsd-api-python. There are also other language wrappers, for example ATSD R client.
Appropriate for a small team
Built-in graphics and rule engine make it productive for building an in-house reporting, dashboarding, or monitoring solution with less coding.
Very well documented
It's hard to beat IBM Redbooks, but we're trying. The API, configuration, and administration are documented in detail and with examples.
Has specific features to take advantage of ordered time series data
It's a time-series database from the ground up, so aggregation, filtering, and non-parametric ARIMA and HW forecasts are available.
Helps me solve some of my versioned data problems
ATSD supports versioned time-series data natively in the SE and EE editions. Versions keep track of status, change time, and source changes for the same timestamp, for audit trails and reconciliations. It's a useful feature to have if you need clean, verified data with tracing; think energy metering or PHMR records. The ATSD schema also supports series tags, which you could use to store versioning columns manually if you're on the CE edition or if you need to extend the default versioning columns: status, source, change-time.
Disclosure - I work for the company that develops ATSD.

Implement Lucene on Existing .NET / SQL Server stack with multiple webservers - store indexes in the database?

This article offered me a huge amount of information:
Implement Lucene on Existing .NET / SQL Server stack with multiple webservers
I'd like to follow on from this by asking about the notion of implementing a Lucene Directory that would persist the indexes to the database (in my case SQL Server); if anyone has a SWAG at the effort involved, that would be helpful.
I can see that the Java realm has this (e.g. Compass), and I'm really hoping the Stack Overflow folks might have considered this too. Any feedback would be appreciated.
My rookie thinking is that persisting indexes to the DB would be a way to solve for the 'distribution' problem. So instead of implementing messaging (not possible for my software because of deployment restrictions), or scheduling (would be ok'ish - product folks always get jumpy in making decisions about how 'current' indexed data has to be), the IndexReader reopen() would efficiently update the index snapshot on whichever server node.
Does this work if DB concurrency/load is not the heart of the problem being solved? Our use is focused on facilitating different kinds of data analysis on fields, which in turn facilitates different forms of matching.
Our deployment architecture/restrictions do not really allow us to insist on dedicated servers à la Solr, so this notion of distribution has been discounted by us.
How many index changes do you expect? When do you want to read in the index? (On application startup?) Putting the index into the database and "downloading" it on index creation might consume too many resources.
Not sure about your deployment restrictions, but can you have a shared file space for your machines (e.g. SMB/NFS share or similar, or even a SAN-based solution)?
I would be a bit afraid of performance issues with the indexes in the DB. Have a look at Elasticsearch; it's the successor of Compass. It requires Java but has a very neat REST interface for your .NET solution. Elasticsearch supports distribution and replication between several nodes. You can run it on the webserver nodes.
This solution will kill performance of the index, since it has to retrieve it from the DB.
I would highly recommend moving to a newer/better alternative, that is Solr (using Solr.NET for example) or ElasticSearch (using NEST)
Solr is a high-level interface/manager for Lucene indexes, with simplified configuration, clustering, replication, etc. solved for you. The nice thing is that if you have some experience with Lucene, this will not be such a big step.
ElasticSearch is a different approach but it's not hard to learn.

Share Backbone code on client and server side with MongoDB storage

I'm looking for a solution that lets me code the models only once for a Backbone, MongoDB, Node.js based app.
The storage can be only server side, but I still need proper model definitions both on the server and the client. On the server side I've decided to go with mongodb.
After all the only thing I've found is https://github.com/donedotcom/backbone-mongodb.
I think I've understood Backbone quite well, but I have never used MongoDB before, and I can't figure out how to really use backbone-mongodb. Could someone tell me how it complements Backbone, what Document and EmbeddedDocument are meant for, and how they relate to Backbone.Model? Does this have anything to do with code sharing between client and server?
Of course, my idea would be to share the model definitions and validation (done mostly with backbone-validation) between the server and the client.
thanks, Viktor
I've just finished rewriting backbone-mongodb.
There is an example todo application available as well (stay with commit eb935ae7480c18c9d6fcf2f5a2187cdff3d17a13).
TL;DR
Document <-> Backbone.Model
Read and write data on Node.js by overriding Backbone.sync.
EmbeddedDocument: no exact match; probably possible to implement via Backbone-relational, some assembly required.
Long read
Since MongoDB is a document-centric database, Backbone.Models will fit Mongo's Documents quite nicely. You can think of MongoDB's Documents as searchable JSON blobs (an oversimplification for the sake of getting started, but still); they will more or less be an exact match for Backbone's Models. EmbeddedDocuments correspond somewhat (an oversimplification again, for the same reason) to related tables in traditional relational systems. They don't have an exact match in the Backbone world, but you could possibly use Backbone-relational to handle them in your Node application. I haven't tried it, but I'm making a qualified guess that it will need a certain amount of hand-holding.
On the Node side, you'll want to override Backbone.sync, probably globally, to read and write Model objects to MongoDB Documents.
Also, embedded documents are just that - they are the actual data stored inside another object, not a link to that data stored independently (docs). It's also possible to do links, which are more like traditional relations (see same link).
To be able to correctly program something with this combination, I think you should read at least a bit more on MongoDB, here's some pointers:
Getting started with MongoDB and Python, Python-centric but still a very good introduction to MongoDB.
Have you checked out this MongoDB port of the typical Backbone Todo?
Here's another example of someone describing a webapp using Node & MongoDB. It's not Backbone-driven but it'll still show you a lot about how to work with MongoDB from Node.js.