Gremlin: What's an efficient way of finding an edge between two vertices? - titan

So obviously, a straightforward way to find an edge between two vertices is:
graph.traversal().V(outVertex).bothE(edgeLabel).filter(__.otherV().is(inVertex))
I feel that the filter step will have to iterate through all the edges, making it really slow for applications with a lot of edges.
Another way could be:
traversal = graph.traversal()
.V(outVertex)
.bothE(edgeLabel)
.as("x")
.otherV()
.is(inVertex) // uses index?
.select("x");
I'm assuming the second approach could be much faster, since it will use the ID index, which should make it faster than the first approach.
Which one is faster and more efficient (in terms of IO)?
I'm using Titan, so you could also make your answer Titan-specific.
Edit
In terms of time, it seems like the first approach is faster (vertex b had 20k edges):
gremlin> clock(100000){g.V(b).bothE().filter(otherV().is(a))}
==>0.0016451789999999999
gremlin> clock(100000){g.V(b).bothE().as("x").otherV().is(a).select("x")}
==>0.0018231140399999999
How about IO?

I would expect the first query to be faster. However, a few things:
Neither query is optimal, since both of them enable path calculations. If you need to find a connection in both directions, use 2 queries (I will give an example below).
When you use clock(), be sure to iterate() your traversals; otherwise you'll only measure how long it takes to do nothing.
These are the queries I would use to find an edge in both directions:
g.V(a).outE(edgeLabel).filter(inV().is(b))
g.V(b).outE(edgeLabel).filter(inV().is(a))
If you expect to get at most one edge:
edge = g.V(a).outE(edgeLabel).filter(inV().is(b)).tryNext().orElseGet {
g.V(b).outE(edgeLabel).filter(inV().is(a)).tryNext()
}
This way you get rid of path calculations. How those queries perform will pretty much depend on the underlying graph database. Titan's query optimizer recognizes that query pattern and should return a result in almost no time.
Now if you want to measure the runtime, do this:
clock(100) {
g.V(a).outE(edgeLabel).filter(inV().is(b)).iterate()
g.V(b).outE(edgeLabel).filter(inV().is(a)).iterate()
}

In case one does not know the vertex IDs, another solution might be:
g.V().has('propertykey','value1').outE('thatlabel').as('e').inV().has('propertykey','value2').select('e')
This is also only unidirectional, so one needs to reformulate the query for the opposite direction, as shown below.
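For example, with the same placeholder property keys and label, the opposite direction would be:
g.V().has('propertykey','value2').outE('thatlabel').as('e').inV().has('propertykey','value1').select('e')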

Related

Database schema for a Tinder-like app

I have a database of millions of objects (simply put, a lot of objects). Every day I will present to my users 3 selected objects, and like with Tinder they can swipe left to say they don't like them or swipe right to say they like them.
I select each object based on its location (those closest to the user are selected first) and also based on a few user settings.
I'm using MongoDB.
Now the problem: how do I design the database so that it can quickly provide a daily selection of objects to show to the end user (skipping all the objects he has already swiped)?
Well, considering you have made your choice of using MongoDB, you will have to maintain multiple collections. One is your main collection, and you will have to maintain user-specific collections which hold user data, say the document ids the user has swiped. Then, when you want to fetch data, you might want to do a $setDifference aggregation. $setDifference does this:
Takes two sets and returns an array containing the elements that only
exist in the first set; i.e. performs a relative complement of the
second set relative to the first.
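For illustration, here is a minimal sketch of such an aggregation with the MongoDB Java driver from Groovy; the collection layout (candidateIds and swipedIds arrays on the user document) is an assumption, not a prescribed schema:
@Grab('org.mongodb:mongodb-driver-sync:4.9.1')
import com.mongodb.client.MongoClients
import org.bson.Document

def users = MongoClients.create('mongodb://localhost:27017')
        .getDatabase('app').getCollection('users')

def pipeline = [
    Document.parse('{ "$match": { "_id": "user123" } }'),
    // relative complement: candidateIds minus swipedIds
    Document.parse('''{ "$project": { "unseen":
        { "$setDifference": [ "$candidateIds", "$swipedIds" ] } } }''')
]
users.aggregate(pipeline).each { println it.get('unseen') }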
Now how performant this is would depend on the size of your sets and the overall scale.
EDIT
I agree with your comment that this is not a scalable solution.
Solution 2:
One solution I could think of is to use a graph based solution, like Neo4j. You could represent all your 1M objects and all your user objects as nodes and have relationships between users and objects that he has swiped. Your query would be to return a list of all objects the user is not connected to.
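As a rough sketch of that query with the Neo4j Java driver from Groovy (the labels, relationship type and credentials are hypothetical):
@Grab('org.neo4j.driver:neo4j-java-driver:4.4.12')
import org.neo4j.driver.AuthTokens
import org.neo4j.driver.GraphDatabase

def driver = GraphDatabase.driver('bolt://localhost:7687',
        AuthTokens.basic('neo4j', 'secret'))
def session = driver.session()
try {
    // all objects the user has no SWIPED relationship to
    session.run('''
        MATCH (o:Object)
        WHERE NOT (:User {id: $uid})-[:SWIPED]->(o)
        RETURN o.id AS id LIMIT 3
    ''', [uid: 'user123']).each { println it.get('id').asString() }
} finally {
    session.close()
    driver.close()
}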
You cannot shard a graph, which brings up scaling challenges. Graph-based solutions require that the entire graph be in memory. So the feasibility of this solution depends on you.
Solution 3:
Use MySQL. Have 2 tables, one being the objects table and the other being a (uid, viewed_object) mapping. A join would solve your problem. Joins work well for the longest time, until you hit scale. So I don't think it's a bad starting point.
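As a minimal sketch with groovy.sql (table and column names are hypothetical):
@Grab('mysql:mysql-connector-java:8.0.33')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:mysql://localhost/app', 'user', 'secret',
        'com.mysql.cj.jdbc.Driver')

// objects the user has not viewed yet: LEFT JOIN plus a NULL check
sql.eachRow('''
    SELECT o.id
    FROM objects o
    LEFT JOIN viewed v ON v.object_id = o.id AND v.uid = ?
    WHERE v.object_id IS NULL
    LIMIT 3
''', ['user123']) { row -> println row.id }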
Solution 4:
Use Bloom filters. Your problem eventually boils down to a set-membership problem: given a set of ids, check whether an id is part of another set. A Bloom filter is a probabilistic data structure which answers set membership. They are super small and super efficient. But yes, it's probabilistic: false negatives will never happen, but false positives can, so that's a trade-off. Check out this post for how it's used: http://blog.vawter.com/2016/03/17/Using-Bloomfilters-to-Avoid-Repetition/
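A minimal sketch of that membership check with Guava's BloomFilter (the sizing numbers and ids are made up):
@Grab('com.google.guava:guava:31.1-jre')
import com.google.common.hash.BloomFilter
import com.google.common.hash.Funnels
import java.nio.charset.StandardCharsets

def seen = BloomFilter.create(
        Funnels.stringFunnel(StandardCharsets.UTF_8),
        1_000_000, // expected insertions
        0.01)      // tolerated false-positive rate

seen.put('object42')

assert seen.mightContain('object42')  // no false negatives: always true
println seen.mightContain('object43') // usually false; true would be a rare false positive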
I'll update the answer if I can think of something else.

Bipartite graph distributed processing with dynamic programming

I am trying to figure out an efficient algorithm for processing Documents in a distributed (FaaS, to be more precise) environment.
A brute-force approach would be O(D * F * R), where:
D is the number of Documents to process
F is the number of Filters
R is the highest number of Rules in a single Filter
I can assume that:
a single Filter has no more than 10 Rules
some Filters may share Rules (so it's an N-to-N relation)
Rules are boolean functions (predicates), so I can try to take advantage of early cutting, meaning that if I have f() && g() && h() with f() evaluating to false, then I do not have to process g() and h() at all and can return false immediately
in a single Document the number of Fields is always the same (about 5-10)
Filters, Rules and Documents are already in a database
every Filter has at least one Rule
Using sharing (the second assumption), I had an idea to first process a Document against every Rule, and then (after finishing), for every Filter, compute the result from the already-computed Rules. This way, if a Rule is shared, I compute it only once. However, it doesn't take advantage of early cutting (the third assumption).
The second idea is to use early cutting as a slightly optimized brute force, but then it won't use Rule sharing.
Rule sharing looks like subproblem sharing, so memoization and dynamic programming will probably be helpful.
I have noticed that the Filter-Rule relation is a bipartite graph. I'm not quite sure if that can help me, though. I have also noticed that I could use reverse sets and store the corresponding Set in every Rule. This would, however, create a circular dependency and may cause desynchronization problems in the database.
The default idea is that Documents are streamed, and every single one of them is an event that will create a FaaS instance to process it. However, this would probably force every FaaS instance to query for all Filters, which leaves me with O(F * D) queries because of the shared-nothing architecture.
Sample Filter:
{
  'normalForm': 'CONJUNCTIVE',
  'rules': [
    {
      'isNegated': true,
      'field': 'X',
      'relation': 'STARTS_WITH',
      'value': 'G',
    },
    {
      'isNegated': false,
      'field': 'Y',
      'relation': 'CONTAINS',
      'value': 'KEY',
    },
  ],
}
or in a more condensed form:
document -> !document.x.startsWith("G") && document.y.contains("KEY")
for Document:
{
  'x': 'CAR',
  'y': 'KEYBOARD',
  'z': 'PAPER',
}
evaluates to true.
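To make the early-cutting and Rule-sharing ideas concrete, here is a minimal Groovy sketch (the relation names and the memo shape are my assumptions, not a fixed API). A shared memo computes each Rule at most once per Document, and every() short-circuits the conjunction at the first false Rule:
// evaluate one Rule against a Document (field names are lower-cased keys)
def applyRule = { Map rule, Map doc ->
    def value = doc[rule.field.toLowerCase()]
    def hit
    switch (rule.relation) {
        case 'STARTS_WITH': hit = value.startsWith(rule.value); break
        case 'CONTAINS':    hit = value.contains(rule.value);   break
        default: throw new IllegalArgumentException("unknown relation: $rule.relation")
    }
    rule.isNegated ? !hit : hit
}

// CONJUNCTIVE form: every() stops at the first false Rule (early cutting);
// the memo map can be shared across Filters, so a shared Rule runs only once
def evalFilter = { Map filter, Map doc, Map memo ->
    filter.rules.every { rule -> memo.computeIfAbsent(rule) { r -> applyRule(r, doc) } }
}

def filter = [normalForm: 'CONJUNCTIVE', rules: [
    [isNegated: true,  field: 'X', relation: 'STARTS_WITH', value: 'G'],
    [isNegated: false, field: 'Y', relation: 'CONTAINS',    value: 'KEY'],
]]
def document = [x: 'CAR', y: 'KEYBOARD', z: 'PAPER']

assert evalFilter(filter, document, [:]) // true, as in the example above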
I can slightly change the data model, stream something else instead of Documents (e.g. Filters), and use any NoSQL database and tools to help. Apache Flink (event processing) and MongoDB (a single query to retrieve a Filter with its Rules), or maybe Neo4j (as the model looks like a bipartite graph), look like they could help, but I'm not sure.
Can this be processed efficiently (probably with regard to database queries)? What tools would be appropriate?
I have also been wondering whether I am perhaps trying to solve a special case of some more general (math) problem that may have useful theorems and algorithms.
EDIT: My newest idea: gather all Documents in a cache like Redis. Then a single event starts up and publishes N functions (as in Function as a Service), and every function selects F/N Filters (the number of Filters divided by the number of instances, i.e. just evenly distributing Filters across instances). This way every Filter is fetched from the database only once.
Now, every instance streams all Documents from the cache (one Document should be less than 1 MB, and at the same time I should have 1-10k of them, so they should fit in the cache). This way every Document is selected from the database only once (into the cache).
I have reduced database read operations (though some Rules are still selected multiple times), but I am still not taking advantage of Rule sharing across Filters. I could intentionally ignore it by using a document database: by selecting a Filter I would also get its Rules. Still, I would have to recalculate their values.
I guess that's what I get for using a shared-nothing scalable architecture?
I realized that although my graph is indeed (in theory) bipartite, in practice it's going to be a set of disjoint bipartite graphs (as not all Rules are going to be shared). This means that I can process those disjoint parts independently on different FaaS instances without recalculating the same Rules.
This reduces my problem to processing a single connected bipartite graph. Now, I can only use the benefits of dynamic programming and share the result of a Rule computation if memory is shared, so I cannot divide (and distribute) this problem further without sacrificing this benefit. So I thought this way: if I have already decided that I will have to recompute some Rules, then let their number be low compared to the disjoint parts that I will get.
This is actually the minimum cut problem, which (fortunately) has a known algorithm of polynomial complexity.
However, this may not be ideal in my case, because I don't want to cut off just any part of the graph - I would ideally like to cut the graph in half (a divide-and-conquer strategy that could be reapplied recursively until the graph is so small that it can be processed within seconds in a FaaS instance, which is time-bounded).
This means that I am looking for a cut that would create two disjoint bipartite graphs, each with possibly the same (or at least a similar) number of vertices.
This is the sparsest cut problem, which is NP-hard, but which has an O(sqrt(log N))-approximation algorithm that also favors fewer cut edges.
Currently, this does look like a solution for my problem; however, I would love to hear any suggestions, improvements and other answers.
Maybe it can be done better with another data model or algorithm? Maybe I can reduce it further with some theorem? Maybe I could transform it into another (simpler) problem, or at least one that is easier to divide and distribute across nodes?
This idea and analysis strongly suggest using a graph database.

What is the best way to retrieve information in a graph through the has step?

I'm using Titan graph DB with the TinkerPop plugin. What is the best way to retrieve a vertex using the has step?
Assuming employeeId is a unique attribute which has a unique graph index defined.
Is it through the label, i.e.
g.V().has(label,'employee').has('employeeId','emp123')
g.V().has('employee','employeeId','emp123')
(or)
Is it better to retrieve a vertex based on a unique property directly, i.e.
g.V().has('employeeId','emp123')
Which of the two is the quicker and better way?
First, you have 2 options to create the index:
mgmt.buildIndex('byEmployeeId', Vertex.class).addKey(employeeId).buildCompositeIndex()
mgmt.buildIndex('byEmployeeId', Vertex.class).addKey(employeeId).indexOnly(employee).buildCompositeIndex()
For option 1 it doesn't really matter which query you're going to use. For option 2 it's mandatory to use g.V().has('employee','employeeId','emp123').
Note that g.V().hasLabel('employee').has('employeeId','emp123') will NOT select all employees first. Titan is smart enough to first apply those filter conditions that can leverage an index.
One more thing I want to point out: the whole point of indexOnly() is to allow sharing properties between different types of vertices. So instead of calling the property employeeId, you could call it uuid and also use it for employers, companies, etc.:
mgmt.buildIndex('employeeById', Vertex.class).addKey(uuid).indexOnly(employee).buildCompositeIndex()
mgmt.buildIndex('employerById', Vertex.class).addKey(uuid).indexOnly(employer).buildCompositeIndex()
mgmt.buildIndex('companyById', Vertex.class).addKey(uuid).indexOnly(company).buildCompositeIndex()
Your queries will then always have this pattern: g.V().has('<label>','<prop-key>','<prop-value>'). This is in fact the only way to go in DSE Graph, since we completely got rid of global indexes that span across all types of vertices. At first I really didn't like this decision, but by now I have to agree that it is so much cleaner.
The second option, g.V().has('employeeId','emp123'), is better, as long as the property employeeId has been indexed for performance.
This is because each step in a Gremlin traversal acts as a filter. So when you say:
g.V().has(label,'employee').has('employeeId','emp123')
You first go to all the vertices with the label employee and then, from the employee vertices, you find emp123.
With g.V().has('employeeId','emp123') a composite index allows you to go directly to the correct vertex.
Edit:
As Daniel has pointed out in his answer, Titan is actually smart enough to not visit all employees and leverages the index immediately. So in this case it appears there is little difference between the traversals. I personally favour using direct global indices without labels (i.e. the first traversal) but that is just a preference when using Titan, I like to keep steps and filters to a minimum.
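Either way, you can check what Titan actually does by profiling the traversal and looking at whether the first step is backed by the composite index (a sketch; depending on your TinkerPop version, profile() may need to be followed by a cap() step):
gremlin> g.V().has('employee','employeeId','emp123').profile()
gremlin> g.V().has('employeeId','emp123').profile()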

Possible to do TSP with MongoDB?

I'm working on a map app where I can add waypoints along specific routes. I obviously need to pull my waypoints out in order so I can get directions from A to D in the correct sequence.
I've read a bit about GeoJSON in MongoDB, but I'm curious if there's a way to query my data so that my points come out ordered by how close they are to each other rather than by the order I put them in.
Basically what I'm asking: is there a way to do a "Traveling Salesman query" so that my waypoints are ordered in the smartest order?
I think the short answer is no. You're going to need to add an order key to your waypoints. This is a pretty standard pattern for any nav system with waypoints. At the very least you need to know which point is first and which is last so that you can then solve the TSP.
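A minimal sketch of that pattern with the MongoDB Java driver from Groovy (collection and field names are hypothetical):
@Grab('org.mongodb:mongodb-driver-sync:4.9.1')
import com.mongodb.client.MongoClients
import org.bson.Document

def waypoints = MongoClients.create('mongodb://localhost:27017')
        .getDatabase('mapapp').getCollection('waypoints')

// store an explicit order on every waypoint...
waypoints.insertOne(new Document([route: 'r1', order: 0, loc: [10.52, 52.27]]))
waypoints.insertOne(new Document([route: 'r1', order: 1, loc: [10.53, 52.28]]))

// ...and read them back sorted, so directions run A to D in the right order
waypoints.find(new Document('route', 'r1'))
        .sort(new Document('order', 1))
        .each { println it }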

How to search for multiple tags around one location?

I'm trying to figure out the best solution for finding all nodes of certain types around a given GPS location.
Let's say I want to get all cafes, pubs, restaurants and parks around a given point X.xx,Y.yy.
[out:json];(node[amenity][leisure](around:500,52.2740711,10.5222147););out;
This returns nothing, because I think it searches for nodes that are both amenity and leisure, which is not possible.
[out:json];(node[amenity or leisure](around:500,52.2740711,10.5222147););out;
[out:json];(node[amenity,leisure](around:500,52.2740711,10.5222147););out;
[out:json];(node[amenity;leisure](around:500,52.2740711,10.5222147););out;
[out:json];(node[amenity|leisure](around:500,52.2740711,10.5222147););out;
[out:json];(node[amenity]|[leisure](around:500,52.2740711,10.5222147););out;
[out:json];(node[amenity],[leisure](around:500,52.2740711,10.5222147););out;
[out:json];(node[amenity];[leisure](around:500,52.2740711,10.5222147););out;
These solutions result in an error (400: Bad Request).
The only working solution I found is the following one, which results in really long queries:
[out:json];(node[amenity=cafe](around:500,52.2740711,10.5222147);node[leisure=park](around:500,52.2740711,10.5222147);node[amenity=pub](around:500,52.2740711,10.5222147);node[amenity=restaurant](around:500,52.2740711,10.5222147););out;
Isn't there an easier solution without multiple "around" statements?
EDIT:
Found this one, which is a little bit shorter, but it still uses multiple "around" statements.
[out:json];(node["leisure"~"park"](around:400,52.2784715,10.5249662);node["amenity"~"cafe|pub|restaurant"](around:400,52.2784715,10.5249662););out;
What you're probably looking for is regular expression support for keys (not only values).
Here's an example based on your query above:
[out:json];
node[~"^(amenity|leisure)$"~"."](around:500,52.2740711,10.5222147);
out;
NB: Since version 0.7.54 (released in Q1/2017), Overpass API also supports filter criteria with 'or' conditions. See this example of how to use the new (if:) filter.
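For example (an untested sketch based on the documented is_tag() evaluator), the original amenity-or-leisure intent could then be written with a single around clause:
[out:json];
node(around:500,52.2740711,10.5222147)(if: is_tag("amenity") || is_tag("leisure"));
out;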