Technology Stack:
TinkerPop Stack 2.4 (Rexster HTTP REST Server)
Titan 0.5.4
DynamoDB (AWS)
NodeJS
Goal:
I would like to use the Rexster REST API for querying and traversing my graph database. I am trying to understand the _properties query parameter for filtering results based on the Vertex Query syntax.
Result of Vertices Query:
http://localhost:8182/graphs/mygraph/vertices
{
"version": "2.5.0",
"results": [
{
"name": "Frank Stein",
"_id": 25600768,
"_type": "vertex"
},
{
"name": "John Doe",
"_id": 25600512,
"_type": "vertex"
}
],
"totalSize": 2,
"queryTime": 219.86688
}
Result of Edge Query:
http://localhost:8182/graphs/mygraph/edges
{
"version": "2.5.0",
"results": [
{
"_id": "f8q68-f8phc-4is5-f8pog",
"_type": "edge",
"_outV": 25600512,
"_inV": 25600768,
"_label": "friends"
}
],
"totalSize": 1,
"queryTime": 164.384768
}
Problem:
These URIs do not return what I expected; they always return an empty set:
Requests:
http://localhost:8182/graphs/privvy/vertices/25600768/both?properties=[[name,=,"John Doe"]]
http://localhost:8182/graphs/privvy/vertices/25600768/both?properties=[[name,=,John Doe]]
http://localhost:8182/graphs/privvy/vertices/25600768/both?properties=[[name,=,(s,"John Doe")]]
http://localhost:8182/graphs/privvy/vertices/25600768/both?properties=[[name,=,(s,John Doe)]]
Response:
{
"version": "2.5.0",
"results": [],
"totalSize": 0,
"queryTime": 22.641152
}
Additional Information:
The following URI does return a result set of adjacent vertices if I just switch the = (equals) operator to the <> (not equals) operator:
Request:
http://localhost:8182/graphs/privvy/vertices/25600768/both?properties=[[name,<>,"John Doe"]]
Response:
{
"version": "2.5.0",
"results": [
{
"name": "John Doe",
"_id": 25600512,
"_type": "vertex"
}
],
"totalSize": 1,
"queryTime": 17.451008
}
Anyone have any clue where I may be going wrong?
References:
https://github.com/tinkerpop/rexster/wiki/Basic-REST-API
https://github.com/tinkerpop/rexster/wiki/Property-Data-Types (shows no examples of string use for data types in vertex queries)
https://github.com/tinkerpop/blueprints/wiki/Vertex-Query
Thanks Friends!
Tom
In the link you provided, note this section explicitly:
https://github.com/tinkerpop/blueprints/wiki/Vertex-Query#query-use-cases
Note that all of the use cases involve "edges". You are trying to do a vertex query over property values on the adjacent vertex of an edge. If you want your query to work that way, you will have to denormalize your data to include the "name" property on the edges.
Note that in my curl request against the default graph below, things work as expected when I build my vertex query against "weight" (an edge property):
$ curl -g "http://localhost:8182/graphs/tinkergraph/vertices/1/out?_properties=[[weight,=,(f,0.4)]]"
{"version":"2.5.0","results":[{"name":"lop","lang":"java","_id":"3","_type":"vertex"}],"totalSize":1,"queryTime":1.070072}
On my local Overpass API server, which contains only French data but has hourly planet-wide diffs applied to it, some of the query responses are wrong.
It doesn't happen for every query: more like once every 200 requests, sometimes more often.
For example:
[timeout:360][out:json];way(48.359900103518235,5.708088852670471,48.360439696481784,5.708900947329539)[highway];out ;
returns 3 ways:
{
"version": 0.6,
"generator": "Overpass API 0.7.54.13 ff15392f",
"osm3s": {
"timestamp_osm_base": "2019-09-23T15:00:00Z",
},
"elements": [
{
"type": "way",
"id": 53290349,
"nodes": [...],
"tags": {
"highway": "secondary",
"maxspeed": "100",
"ref": "L 385"
}
},
{
"type": "way",
"id": 238493649,
"nodes": [...],
"tags": {
"highway": "residential",
"name": "Rue du Stand",
"ref": "C 3",
"source": "..."
}
},
{
"type": "way",
"id": 597978369,
"nodes": [...],
"tags": {
"highway": "service"
}
}
]
}
The first one is in Germany, far to the east...
My questions:
On an Overpass API server, is there a way to apply diffs only for a defined area? It is not documented (neither here: https://wiki.openstreetmap.org/wiki/Overpass_API/Installation
nor here: https://wiki.openstreetmap.org/wiki/User:Breki/Overpass_API_Installation#Configuring_Diffs).
If not, how can I get rid of those wrong results?
Thanks,
Two questions, so two answers:
I found that France-only diff files exist (http://download.openstreetmap.fr/replication/europe/france/minute/), so I will restart my server with those diffs.
The best way to get rid of those wrong results is to keep the server consistent: no world diffs for France-only data. A rough sketch of re-pointing the update loop follows.
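For reference, a sketch of re-pointing the update loop at the France-only replication feed, assuming a standard osm-3s install as described in the wiki pages linked above; the paths and the start sequence id are placeholders to adapt to your setup:

# stop the currently running fetch/apply loop first, then:
cd /opt/osm-3s/bin
DIFF_URL="http://download.openstreetmap.fr/replication/europe/france/minute/"
START=123456   # placeholder: the replication sequence id matching your France extract

nohup ./fetch_osc.sh $START "$DIFF_URL" "diffs/" &
nohup ./apply_osc_to_db.sh "diffs/" $START --meta=no &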
I am attempting to store a connected Node's data in a property of a selected Node in OrientDB via the OUT() projection. e.g.:
SELECT *, OUT("Has_Friend") AS Friends FROM Person
Given that a "Person" Node is connected to several "Friend" Nodes via the "Has_Friend" Edge, I would like the actual Friend Node properties to be stored in the "Friends" property on each Person Node returned by this query. e.g.:
{
"result": [
{
"Name": "Joe",
"Friends": [
{
"Name": "Ben",
"Title": "Mr."
},
{
"Name": "Stan",
"Title": "Dr."
}
]
},
{
"Name": "Tim",
"Friends": [
{
"Name": "Terrance",
"Title": "Esq."
},
{
"Name": "Sarah",
"Title": "Dr."
}
]
}
]
}
However, the query only stores the RID of each "Friend" Node in the "Friends" property rather than the actual data of that "Friend" Node. e.g.:
{
"result": [
{
"Name": "Joe",
"Friends": [
"#228:1",
"#227:1"
]
},
{
"Name": "Tim",
"Friends": [
"#225:1",
"#226:1"
]
}
]
}
I've searched the OrientDB documentation but am unsure as to how I might accomplish this. I suspect there's a way to nest queries for those Friend nodes inside of the primary query, but I'm not entirely sure how to do that. Any insight is greatly appreciated!
Try using the expand() function. It expands the document pointed to by the link and gives you all the properties of that document. Since your "Has_Friend" edge points out from Person, your query should look like this:
SELECT expand(out("Has_Friend")) FROM Person
I am trying to figure out a specific MongoDB query, so far unsuccessfully.
Documents in my collection look something like this (they contain more attributes, which are irrelevant for this query):
[{
"_id": ObjectId("596e01b6f4f7cf137cb3d096"),
"code": "A",
"name": "name1",
"sys": {
"cts": ISODate("2017-07-18T12:40:22.772Z"),
}
},
{
"_id": ObjectId("596e01b6f4f7cf137cb3d097"),
"code": "A",
"name": "name2",
"sys": {
"cts": ISODate("2017-07-19T12:40:22.772Z"),
}
},
{
"_id": ObjectId("596e01b6f4f7cf137cb3d098"),
"code": "B",
"name": "name3",
"sys": {
"cts": ISODate("2017-07-16T12:40:22.772Z"),
}
},
{
"_id": ObjectId("596e01b6f4f7cf137cb3d099"),
"code": "B",
"name": "name3",
"sys": {
"cts": ISODate("2017-07-10T12:40:22.772Z"),
}
}]
What I need is to get the current version of each document, filtered by code or name, or both. Current version means that out of two (or more) documents with the same code, I want to pick the one with the latest sys.cts date value.
So the result of this query executed with the filter name="name3" would be the 3rd document in the list above. The result of the query without any filter would be the 2nd and 3rd documents.
I have an idea of how to construct this query with a changed data model, but I was hoping someone could point me the right way without doing so.
Thank you
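For reference, a minimal aggregation sketch that should cover this without changing the data model; the collection name items is hypothetical, and the $replaceRoot stage assumes MongoDB 3.4+:

mongo mydatabase --eval '
db.items.aggregate([
  { "$match": { "name": "name3" } },                 // optional filter: code, name, or both
  { "$sort":  { "sys.cts": -1 } },                   // newest first
  { "$group": { "_id": "$code",
                "doc": { "$first": "$$ROOT" } } },   // latest document per code
  { "$replaceRoot": { "newRoot": "$doc" } }          // restore the original shape
]).forEach(printjson)
'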
Is there a way to specify in an OData query that, instead of certain name/value pairs being returned, a raw array should be returned? For example, if I have an OData query that results in the following:
{
"#odata.context": "http://blah.org/MyService/$metadata#People",
"value": [
{
"Name": "Joe Smith",
"Age": 55,
"Employers": [
{
"Name": "Acme",
"StartDate": "1/1/1990"
},
{
"Name": "Enron",
"StartDate": "1/1/1995"
},
{
"Name": "Amazon",
"StartDate": "1/1/1999"
}
]
},
{
"Name": "Jane Doe",
"Age": 30,
"Employers": [
{
"Name": "Joe's Crab Shack",
"StartDate": "1/1/2007"
},
{
"Name": "TGI Fridays",
"StartDate": "1/1/2010"
}
]
}
]
}
Is there anything I can add to the query to instead get back:
{
"#odata.context": "http://blah.org/MyService/$metadata#People",
"value": [
{
"Name": "Joe Smith",
"Age": 55,
"Employers": [
[ "Acme", "1/1/1990" ],
[ "Enron", "1/1/1995" ],
[ "Amazon", "1/1/1999" ]
]
},
{
"Name": "Jane Doe",
"Age": 30,
"Employers": [
[ "Joe's Crab Shack", "1/1/2007" ],
[ "TGI Fridays", "1/1/2010" ]
]
}
]
}
While I could obviously do the transformation client side, in my use case the field names are very large compared to the data, and I would rather not transmit all those names over the wire nor spend the CPU cycles on the client doing the transformation. Before I come up with my own custom parameters to indicate that the format should be as I desire, I wanted to check if there wasn't already a standardized way to do so.
OData provides several options to control the amount of data and metadata included in the response.
In OData v4, you can add odata.metadata=minimal to the Accept header parameters (see the OData JSON format documentation). This is the default behaviour, but even with it, the field names are still included in the response, and for good reason.
I can see why you would want to send only the values without the field names, but keep in mind that this changes the semantic meaning of the response structure. It makes the response less intuitive to deal with as a JSON record on the client side.
So, to answer your question: no, there is no standardized way to do this.
Other options to minimize the response size:
You can use the $value OData option to get the raw value of a single property.
Check this example:
services.odata.org/OData/OData.svc/Categories(1)/Products(1)/Supplier/Address/City/$value
You can also use the $select option to cherry-pick only the fields you need by selecting a subset of properties to include in the response.
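For example, a top-level $select combined with a nested $select inside $expand (OData v4 syntax; this assumes Employers is a navigation property and reuses the service URL from the question) trims the payload considerably, even though the remaining field names are still transmitted:

# request only the fields needed, with minimal metadata
curl "http://blah.org/MyService/People?\$select=Name,Age&\$expand=Employers(\$select=Name,StartDate)" \
     -H "Accept: application/json;odata.metadata=none"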
I created an ES index (using the MongoDB river plugin) with the following configuration:
{
"type": "mongodb",
"mongodb": {
"db": "mydatabase",
"collection": "Users"
},
"index": {
"name": "users",
"type": "user"
}
}
When I insert a simple object like:
{
"name": "Joe",
"surname": "Black"
}
Everything works without a problem (I can see the data using the ES Head web interface).
But when I insert a bigger object, it doesn't get indexed:
{
"object": {
"text": "Let's do it again!",
"boolTest": false
},
"type": "coolType",
"tags": [
""
],
"subObject1": {
"count": 0,
"last3": [],
"array": []
},
"subObject2": {
"count": 0,
"last3": [],
"array": []
},
"subObject3": {
"count": 0,
"last3": [],
"array": []
},
"usrID": "5141a5a4d8f3a79c09000001",
"created": Date(1363527664000),
"lastUpdate": Date(1363527664000)
}
Where could the problem be?
Thank you for your help!
EDIT: This is the error from the ES console:
org.elasticsearch.index.mapper.MapperParsingException: object mapping for [stream] tried to parse as object, but got EOF, has a concrete value been provided to it?
    at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:457)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:486)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:430)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:318)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:157)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:533)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:431)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-03-20 10:35:05,697][WARN ][org.elasticsearch.river.mongodb.MongoDBRiver$Indexer] failed to execute failure in bulk execution: [0]: index [stream], type [stream], id [514982c9b7f3bfbdb488ca81], message [MapperParsingException[object mapping for [stream] tried to parse as object, but got EOF, has a concrete value been provided to it?]]
[2013-03-20 10:35:05,698][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver$Indexer] Indexed 1 documents, 1 insertions, 0 updates, 0 deletions, 0 documents per second
Which version of the MongoDB river are you using?
Please look at issue #26 [1]. It contains examples of indexing large JSON documents with no issue.
If you can still reproduce the issue, please provide more details: river settings, MongoDB (version, specific settings), Elasticsearch (version, specific settings).
https://github.com/richardwilly98/elasticsearch-river-mongodb/issues/26
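If the problem is still reproducible, it may also help to inspect the mapping Elasticsearch inferred for the index named in the error. A quick sketch, assuming the default host and port:

# show the mapping for the [stream] index mentioned in the stack trace
curl -XGET "http://localhost:9200/stream/_mapping?pretty=true"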