My setup:
I'm using OrientDB with a large graph of People vertices, and I access it through the Gremlin Java driver, since I would like to be able to switch to a different graph database down the line.
My database:
Each person has certain preference vertices, connected via a labeled edge that describes the person's relation to that preference. All preferences are in turn connected to a core concept vertex.
Problem I'm trying to solve:
I'm trying to find a way (kudos if it's as simple as a Gremlin query) to start at a Person vertex and traverse down to all people with identical preferences via a core concept.
Here is a made-up example of a matching case. Person B will be returned in the list of perfect matches when starting at Person A. I forgot to draw the directions on those edges in this picture :/ take a look at the non-matching case to see the directions.
Here is an example of a non-matching case. Person B will not be returned in the list of perfect matches. Why? Because the outgoing edges on Person B do not all resolve to identically matching edges on Person A; in this case, Person A refuses to eat apples, but Person B doesn't list a similar preference for anything they refuse to eat.
Another non-matching case from the above example: if Person A refuses to eat apples and Person B refuses to eat bananas, they will not match.
If Person B likes Fries the most and likes Cheeseburgers the least, that would be a non-matching case as well.
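To put the rule another way, here's a rough plain-Python sketch of the matching semantics I'm after (the concept names are made up for illustration): two people are a perfect match exactly when every labeled preference edge resolves, via the core concept, to the same thing on both sides.

```python
# Rough sketch of the matching rule in plain Python (not Gremlin).
# Each person is modeled as a dict: edge label -> set of core concepts
# reached through the "similar" edge of each preference vertex.
def is_perfect_match(a, b):
    """Perfect match: identical labels AND identical concepts per label."""
    return a == b

person_a = {"most": {"Burgers"}, "least": {"Appetizers"}}
person_b = {"most": {"Burgers"}, "least": {"Appetizers"}}
print(is_perfect_match(person_a, person_b))  # True: the matching case

# Non-matching case: Person A refuses something, Person B lists nothing.
person_a["refuses"] = {"Fruits"}
print(is_perfect_match(person_a, person_b))  # False
```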
My initial idea for a query (which I'm not sure how to implement):
Start at Person A.
Find all outgoing edges to preference vertices and store some sort of "marker" or map to each preference vertex along with the edge label.
Traverse out of those vertices down all SimilarTo-labeled edges. Copy those markers from the preference vertex into the concept vertex.
Reverse the line: concept vertex -> preference vertex (copy markers from the concept back to the preference vertex).
... then somehow match ALL edges to those markers ...
Exclude Person A from the results.
Any ideas?
Let's start with the creation of your sample graph:
gremlin> g = TinkerGraph.open().traversal()
==>graphtraversalsource[tinkergraph[vertices:0 edges:0], standard]
gremlin> g.addV("person").property("name", "Person A").as("pa").
......1> addV("person").property("name", "Person B").as("pb").
......2> addV("food").property("name", "Hamburgers").as("hb").
......3> addV("food").property("name", "Chips").as("c").
......4> addV("food").property("name", "Cheeseburgers").as("cb").
......5> addV("food").property("name", "Fries").as("f").
......6> addV("category").property("name", "Burgers").as("b").
......7> addV("category").property("name", "Appetizers").as("a").
......8> addE("most").from("pa").to("hb").
......9> addE("most").from("pb").to("cb").
.....10> addE("least").from("pa").to("c").
.....11> addE("least").from("pb").to("f").
.....12> addE("similar").from("hb").to("b").
.....13> addE("similar").from("cb").to("b").
.....14> addE("similar").from("c").to("a").
.....15> addE("similar").from("f").to("a").iterate()
The query you're looking for is the following (I will explain each step later):
gremlin> g.V().has("person", "name", "Person A").as("p").
......1> outE("most","least","refuses").as("e").inV().out("similar").
......2> store("x").by(constant(1)).
......3> in("similar").inE().where(eq("e")).by(label).outV().where(neq("p")).
......4> groupCount().as("m").
......5> select("x").by(count(local)).as("c").
......6> select("m").unfold().
......7> where(select(values).as("c")).select(keys).values("name")
==>Person B
Now, when we add the "refuses to eat Apples" relation:
gremlin> g.V().has("person", "name", "Person A").as("p").
......1> addV("food").property("name", "Apples").as("a").
......2> addV("category").property("name", "Fruits").as("f").
......3> addE("refuses").from("p").to("a").
......4> addE("similar").from("a").to("f").iterate()
...Person B is no longer a match:
gremlin> g.V().has("person", "name", "Person A").as("p").
......1> outE("most","least","refuses").as("e").inV().out("similar").
......2> store("x").by(constant(1)).
......3> in("similar").inE().where(eq("e")).by(label).outV().where(neq("p")).
......4> groupCount().as("m").
......5> select("x").by(count(local)).as("c").
......6> select("m").unfold().
......7> where(select(values).as("c")).select(keys).values("name")
gremlin>
Let's go through the query step by step / line by line:
g.V().has("person", "name", "Person A").as("p").
This should be pretty clear: start at Person A.
outE("most","least","refuses").as("e").inV().out("similar").
Traverse the out edges and set a marker, so that we can reference the edges later. Then traverse to what I called category vertices.
store("x").by(constant(1)).
For every category vertex add a 1 to an internal collection. You could also store the vertex itself, but this would be a waste of memory, since we won't need any information from the vertices.
in("similar").inE().where(eq("e")).by(label).outV().where(neq("p")).
Traverse the other direction along the similar edges to the food and then along those edges that have the same label as the marked edge from the beginning. In the end ignore the person where the traversal started (Person A).
groupCount().as("m").
Count the number of traversers that made it to each person vertex.
select("x").by(count(local)).as("c").
Count the number of Category vertices (the 1s).
select("m").unfold().
Unfold the person counters, so the keys will be the person vertices and the values will be the number of traversers that made it to this vertex.
where(select(values).as("c")).select(keys).values("name")
Ultimately the number of crossed category vertices must match the number of traversers on a person vertex. If that's the case, we have a match.
Note that it's necessary to have a similar edge incident to the Apples vertex.
I am a newbie in the graph database world. I wrote a query to get the leaves of a tree, and I also have a list of IDs. I want to merge both lists of vertices, remove duplicates, and then sum a property of each. However, I cannot merge the first two sets of vertices:
g.V().hasLabel('Group').has('GroupId','G001').
  repeat(outE().inV()).emit().hasLabel('User').as('UsersList1').
  V().has('UserId', within('001','002')).as('UsersList2').
  select('UsersList1','UsersList2').dedup().values('petitions').sum().unfold()
Regards
There are several things wrong in your query:
you call V().has('UserId', within('001','002')) for every user that was found by the first part of the traversal
the traversal could emit more than just the leaves
select('UsersList1','UsersList2') creates pairs of users
values('petitions') tries to access the property petitions of each pair, this will always fail
The correct approach would be:
g.V().has('User', 'UserId', within('001','002')).fold().
  union(unfold(),
        V().has('Group', 'GroupId', 'G001').
          repeat(out()).until(hasLabel('User'))).
  dedup().
  values('petitions').sum()
I didn't test it, but I think the following will do:
g.V().union(
    hasLabel('Group').has('GroupId','G001').
      repeat(outE().inV()).until(hasLabel('User')),
    has('UserId', within('001','002'))).
  dedup().values('petitions').sum()
In order to get only the tree leaves, it is better to use until. Using emit will output all inner tree nodes as well.
union merges the two inner traversals.
How do I write a statement for cosine similarity using ga.nlp.ml.similarity.cosine for the node News:
CREATE (n:News)
SET n.text = "Scores of people were already lying dead or injured inside a crowded Orlando nightclub,
and the police had spent hours trying to connect with the gunman and end the situation without further violence.
But when Omar Mateen threatened to set off explosives, the police decided to act, and pushed their way through a
wall to end the bloody standoff.";
What is the proper syntax?
This is the call structure:
CALL ga.nlp.ml.similarity.cosine([<nodes>],depth,Query,Relationship type)
//nodes->The list of annotated nodes for which it will compute the distances
//depth->Integer. If 0, it will not use ConceptNet 5 imported data when computing the distance. If greater than 0, concepts are considered during the computation, and the value defines how deep it should go.
//Query->String. The query that will be used to compute the tags vector; some are already defined, so this could be null.
//Relationship Type->String. The name to assign to the Relationship created between AnnotatedText nodes.
This is an example:
MATCH (a:AnnotatedText)
with collect(a) as list
CALL ga.nlp.ml.similarity.cosine(list, 0, null, "SIMILARITY") YIELD result
return result
I'm new to Titan and looking for the best way to iterate over the entire set of vertices with a given label without running out of memory. I come from a strong SQL background so I am still working on switching my way of thinking away from SQL-type thinking. Let's say I have 1 million profile vertices. I would like to iterate over each one and perform some type of statistical analysis of the information linked to each profile. I don't really care how long the entire analysis process takes, but I need to iterate over all of the profiles. In SQL I would do SELECT * FROM MY_TABLE, using a scroll-sensitive result, fetch the next result, grab and process the info linked to that row, then fetch the next result. I also don't care if the result is real-time accurate as it is just for gathering general stats, so if a new profile is added during iteration and I miss it, that's ok.
Even if there is a way to grab all the values for a given property, that would probably work too because then I could go through that list and grab each vertex by its ID for example.
I believe Titan does lazy loading, so you should be able to just iterate over the whole graph:
GraphTraversal<Vertex, Vertex> it = graph.traversal().V();
while (it.hasNext()) {
    Vertex v = it.next();
    // Do what you want here
}
Another option would be to use the range step so that you explicitly choose the range of vertices you need. For example:
List<Vertex> vertices = graph.traversal().V().range(0, 3).toList();
//Do what you want with your batch of vertices.
With regards to getting vertices of a specific type, you can query vertices based on their internal properties. For example, suppose you have an internal property "TYPE" which defines the type you are interested in. You can query for those vertices like so:
graph.traversal().V().has("TYPE", "A"); //Gets vertices of type A
graph.traversal().V().has("TYPE", "B"); //Gets vertices of type B
We are using Titan with Persistit as the backend, for a graph with about 100,000 vertices. Our use case is quite complex, but the current problem can be illustrated with a simple example. Let's assume that we are storing Books and Authors in the graph. Each Book vertex has an ISBN number, which is unique across the whole graph.
I need to answer the following query:
Give me the set of ISBN numbers of all Books in the Graph.
Currently, we are doing it like this:
// retrieve graph instance
TitanGraph graph = getGraph();
// Start a Gremlin query (I omit the generics for brevity here)
GremlinPipeline gremlin = new GremlinPipeline().start(graph);
// get all vertices in the graph which represent books (we have author vertices, too!)
gremlin.V("type", "BOOK");
// the ISBN numbers are unique, so we use a Set here
Set<String> isbnNumbers = new HashSet<String>();
// iterate over the gremlin result and retrieve the vertex property
while(gremlin.hasNext()){
Vertex v = gremlin.next();
isbnNumbers.add(v.getProperty("ISBN"));
}
return isbnNumbers;
My question is: is there a smarter way to do this faster? I am new to Gremlin, so it might very well be that I do something horribly stupid here. The query currently takes 2.5 seconds, which is not too bad, but I would like to speed it up, if possible. Please consider the backend as fixed.
I doubt that there is a much faster way (you will always need to iterate over all book vertices); however, a less verbose solution to your task is possible with Groovy/Gremlin.
On the sample graph you can run e.g. the following query:
gremlin> namesOfJavaProjs = []; g.V('lang','java').name.store(namesOfJavaProjs)
gremlin> namesOfJavaProjs
==>lop
==>ripple
Or for your book graph:
isbnNumbers = []; g.V('type','BOOK').ISBN.store(isbnNumbers)
I have a Graph object (this is in Perl) for which I compute its transitive closure (i.e. for solving the all-pairs shortest paths problem).
From this object, I am interested in computing:
Shortest path from any vertices u -> v.
Distance matrix for all vertices.
General reachability questions.
General graph features (density, etc).
The graph has about 2000 vertices, so computing the transitive closure (using Floyd-Warshall's algorithm) takes a couple hours. Currently I am simply caching the serialized object (using Storable, so it's pretty efficient already).
My problem is, deserializing this object still takes a fair amount of time (a minute or so), and consumes about 4GB of RAM. This is unacceptable for my application.
Therefore I've been thinking about how to design a database schema to hold this object in 'unfolded' form. In other words, precompute the all-pairs shortest paths, and store those in an appropriate manner. Then, perhaps use stored procedures to retrieve the necessary information.
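Concretely, the precomputation I have in mind looks something like this (a rough Python sketch rather than my actual Perl; the table and column names are invented for illustration):

```python
# Rough sketch: run Floyd-Warshall once, then flatten the distance
# matrix into (source, target, distance) rows that map directly onto a
# three-column table such as shortest_path(src, dst, dist).
INF = float("inf")

def all_pairs_rows(vertices, edges):
    # initialize: 0 on the diagonal, infinity everywhere else
    dist = {(u, v): (0 if u == v else INF) for u in vertices for v in vertices}
    for u, v, w in edges:  # directed edges (u, v) with weight w
        dist[(u, v)] = min(dist[(u, v)], w)
    # standard Floyd-Warshall relaxation
    for k in vertices:
        for i in vertices:
            for j in vertices:
                if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                    dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
    # only reachable pairs become rows for the bulk INSERT
    return [(u, v, d) for (u, v), d in dist.items() if d < INF]

rows = all_pairs_rows(["a", "b", "c"], [("a", "b", 1), ("b", "c", 2)])
```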
My other problem is, I have no experience with database design, and have no clue about implementing the above, hence my post. I'd also like to hear about other solutions that I may be disregarding. Thanks!
To start with, it sounds like you need two entities, vertex and edge, plus perhaps a couple of tables for results. I would suggest a table that stores node-to-node information: if A is reachable from Y, the relationship gets the reachable attribute. So here goes:
Vertex:
any coordinates (x,y,...)
name: string
any attributes of a vertex*
Association:
association_id: ID
association_type: string
VertexInAssociation:
vertex: (constrained to Vertex)
association: (constrained to association)
AssociationAttributes:
association_id: ID (constrained to association)
attribute_name: string
attribute_value: variable -- possibly string
* You might also want to store vertex attributes in a table as well, depending on how complex they are.
The reason that I'm adding the complexity of Association is that an edge is not treated as directional here, and it simplifies queries to consider both vertexes to simply be members of a set of vertexes "connected-by-edge-x".
Thus an edge is simply an association of edge type, which would have an attribute of distance. A path is an association of path type, and it might have an attribute of hops.
There might be other, more optimized schemas, but this one is conceptually pure, even if it doesn't make "edge" a first-class entity.
To create a minimal edge you would need to do this:
begin transaction
select associd = max(association_id) + 1 from Association
insert into Association ( association_id, association_type )
values( associd, 'edge' )
insert
into VertexInAssociation( association_id, vertex_id )
select associd, ? -- $vertex->[0]->{id}
UNION select associd, ? -- $vertex->[1]->{id}
insert into AssociationAttributes ( association_id, attribute_name, attribute_value )
select associd, 'length', 1
UNION select associd, 'distance', ? -- $edge->{distance}
commit
You might also want to make association types classes of sorts, so that the "edge" association automatically gets counted as a "reachable" association. Otherwise, you might want to insert UNION select associd, 'reachable', 'true' in there as well.
You could then query the union of reachable associations of both vertexes, insert them as reachable associations to the other node wherever they do not already exist, and write the existing length attribute value + 1 into the length attribute.
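A toy sketch of that propagation, using in-memory Python dicts instead of the SQL tables (and treating pairs as directed for simplicity): keep adding (a, c) whenever (a, b) and (b, c) are already reachable, until nothing changes.

```python
# Toy model of the reachability propagation described above, with plain
# Python dicts standing in for the association tables. Pairs are treated
# as directed here for simplicity; lengths count hops along the path
# that was found first (not necessarily the minimal one).
def reachable_closure(edges):
    reach = {(u, v): 1 for u, v in edges}  # every edge is reachable, length 1
    changed = True
    while changed:
        changed = False
        for (a, b), d1 in list(reach.items()):
            for (c, e), d2 in list(reach.items()):
                if b == c and (a, e) not in reach:
                    reach[(a, e)] = d1 + d2  # extend the known path
                    changed = True
    return reach

closure = reachable_closure([("A", "B"), ("B", "C"), ("C", "D")])
```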
However, you'd probably want an ORM for all of that, and just manipulate it from inside Perl:
my $v1 = Vertex->new( 'V', x => 23, y => 89, red => 'hike!' );
my $e = Edge->new( $v1, $v2 ); # perhaps Edge knows how to calculate distance.