Neo4j: Converting REST call output to JSON

I have a requirement to convert the output of Cypher into JSON.
Here is my code snippet.
RestCypherQueryEngine rcqer = new RestCypherQueryEngine(restapi);
String nodeN = "MATCH n=(Company) WITH COLLECT(n) AS paths RETURN EXTRACT(k IN paths | LAST(nodes(k))) as lastNode";
final QueryResult<Map<String,Object>> queryResult = rcqer.query(nodeN, null);
for (Map<String,Object> row : queryResult) {
    System.out.println((ArrayList) row.get("lastNode"));
}
Output:
[http://XXX.YY6.192.103:7474/db/data/node/445, http://XXX.YY6.192.103:7474/db/data/node/446, http://XXX.YY6.192.103:7474/db/data/node/447, http://XXX.YY6.192.103:7474/db/data/node/448, http://XXX.YY6.192.103:7474/db/data/node/449, http://XXX.YY6.192.103:7474/db/data/node/450, http://XXX.YY6.192.103:7474/db/data/node/451, http://XXX.YY6.192.103:7474/db/data/node/452, http://XXX.YY6.192.103:7474/db/data/node/453]
I am not able to see the actual data (I am getting URLs). I am pretty sure I am missing something here.
I would also like to convert the output to JSON.
The Cypher works in my browser interface.
I looked at various articles around this:
Java neo4j, REST and memory
Neo4j Cypher: How to iterate over ExecutionResult result
Converting ExecutionResult object to json
The last two make use of an EmbeddedDatabase, which may not be possible in my scenario (the Neo4j instance is hosted in another cloud, hence the use of REST).
Thanks.

Try to understand what you're doing: your query does not make sense at all.
Perhaps you should re-visit the online course for Cypher: http://neo4j.com/online-course
Instead of:
MATCH n=(Company) WITH COLLECT(n) AS paths RETURN EXTRACT(k IN paths | LAST(nodes(k))) as lastNode
you can just do:
MATCH (c:Company) RETURN c
RestCypherQueryEngine rcqer = new RestCypherQueryEngine(restapi);
final QueryResult<Map<String,Object>> queryResult = rcqer.query(query, null);
for (Node node : queryResult.to(Node.class)) {
    for (String prop : node.getPropertyKeys()) {
        System.out.println(prop + " " + node.getProperty(prop));
    }
}
I think it's better to use the JDBC driver for what you're trying to do, and also to actually return the properties you're trying to convert to JSON.
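If all you ultimately need is JSON, one alternative worth knowing about: the transactional HTTP endpoint in Neo4j 2.x already returns JSON, so you can skip the object mapping entirely. A minimal sketch, assuming a 2.x server and only Node.js core modules (the host stays elided, as in the question):
var http = require('http');
var payload = JSON.stringify({
    statements: [{ statement: "MATCH (c:Company) RETURN c" }]
});
var req = http.request({
    host: "XXX.YY6.192.103", // elided host, as in the question
    port: 7474,
    path: "/db/data/transaction/commit",
    method: "POST",
    headers: { "Content-Type": "application/json", "Accept": "application/json" }
}, function (res) {
    var body = "";
    res.on("data", function (chunk) { body += chunk; });
    res.on("end", function () {
        // body is already JSON:
        // {"results":[{"columns":[...],"data":[{"row":[...]}]}],"errors":[]}
        console.log(JSON.parse(body).results[0].data);
    });
});
req.end(payload);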

How to set the proper data type when writing to an OPC-UA node in Milo?

I am an OPC-UA newbie integrating a non-OPC-UA system with an OPC-UA server using the Milo stack. Part of this includes writing values to Nodes in the OPC-UA server. One of my problems is that values from the other system come in the form of a Java String and thus need to be converted to the Node's proper data type. My first brute-force proof of concept uses the code below to create a Variant that I can use to write to the Node (as in WriteExample.java). The variable value is the Java String containing the data to write, e.g. "123" for an Integer or "32.3" for a Double. The solution currently hard-codes the "types" from the Identifiers class (org.eclipse.milo.opcua.stack.core, see the switch statement), which is not pretty, and I am sure there is a better way to do this.
Also, how do I proceed if I want to convert and write "123" to a node that is, for example, UInt64?
try {
    VariableNode node = client.getAddressSpace().createVariableNode(nodeId);
    Object val = new Object();
    Object identifier = node.getDataType().get().getIdentifier();
    UInteger id = UInteger.valueOf(0);
    if (identifier instanceof UInteger) {
        id = (UInteger) identifier;
    }
    System.out.println("getIdentifier: " + node.getDataType().get().getIdentifier());
    switch (id.intValue()) {
        // Based on the Identifiers class in org.eclipse.milo.opcua.stack.core
        case 11: // Double
            val = Double.valueOf(value);
            break;
        case 6: // Int32
            val = Integer.valueOf(value);
            break;
    }
    DataValue data = new DataValue(new Variant(val), StatusCode.GOOD, null);
    StatusCode status = client.writeValue(nodeId, data).get();
    System.out.println("Wrote DataValue: " + data + " status: " + status);
    returnString = status.toString();
} catch (Exception e) {
    System.out.println("ERROR: " + e.toString());
}
I've looked at Kevin's response to this thread: How do I reliably write to an OPC UA server? But I'm still a bit lost... a small code example would really be helpful.
You're not that far off. Every sizable codebase eventually has one or more "TypeUtilities" classes, and you're going to need one here.
There's no getting around the fact that you need to be able to map types in your system to OPC UA types, and vice versa.
For unsigned types you'll use the UShort, UInteger, and ULong classes from the org.eclipse.milo.opcua.stack.core.types.builtin.unsigned package. There are convenient static factory methods that make their use a little less verbose:
UShort us = ushort(foo);
UInteger ui = uint(foo);
ULong ul = ulong(foo);
I'll explore the idea of including some kind of type conversion utility in an upcoming release, but even with that, the way OPC UA works means you have to know the DataType of a Node to write to it, and in most cases you want to know the ValueRank and ArrayDimensions as well.
You either know these attribute values a priori, obtain them via some other out of band mechanism, or you read them from the server.

jayData complex filter evaluation

I am new to jayData and am trying to filter on an entity set. The filter needs to perform a complex evaluation beyond what I saw in the samples.
Here is a working sample of what I am trying to accomplish (the listView line isn't part of the filtering and is just there to show what I plan to do with the data):
function () {
    var weekday = moment().isoWeekday() - 1;
    console.log(weekday);
    var de = leagueDB.DailyEvents.toArray(function (events) {
        console.log(events);
        var filtered = [];
        for (var e = 0; e < events.length; e++) {
            console.log(events[e]);
            console.log(events[e].RecurrenceRule);
            var rule = RRule.fromString(events[e].RecurrenceRule);
            var ruleOptions = rule.options.byweekday;
            var isDay = ruleOptions.indexOf(weekday);
            console.log(ruleOptions, isDay);
            if (isDay !== -1) {
                filtered.push(events[e]);
            }
        }
        $("#listView").kendoListView({ dataSource: filtered });
    });
}
Basically it is just evaluating a recurring-rule string to see if the current day meets that criterion; if so, the event is added to the list for viewing.
But it blows up when I try to do this:
eventListLocal: leagueDB.DailyEvents.filter(function (e) {
    console.log("The Weekday is:" + viewModel.weekday);
    console.log(e);
    console.log("The recurrence rule is:" + e.RecurrenceRule);
    var rruleOptions = viewModel.rruleOptions(e.RecurrenceRule);
    if (rruleOptions !== -1) {
        return true;
    }
}).asKendoDataSource()
The error being generated is:
Exception: Unable to resolve type:undefined
The thing is, it seems to be occurring on e, and the console logs suggest the event is not being passed in. However, I am not seeing a list either. In short, I am really confused as to what is going on.
Any help would be appreciated.
Thanks,
You can't write filter expressions like this.
When you write .filter(...), JayData parses your expression and generates a filter for the underlying provider: for example, WHERE for the WebSQL provider and $filter for the OData provider.
Both the JayData expression parser and the data provider itself have to understand your filter.
Your filter is not suitable for this approach, because most of your code (the console.log calls, the RRule parsing, and so on) means nothing to the JayData expression parser or to the underlying data provider.
You can either simplify your filter, or load all your data into an array and then use the array's own filter method; there you can write any filter you like, and it will work. Of course, this has performance issues in scenarios where your data set is large.
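For the array approach, a minimal sketch, reusing the toArray pattern and the RRule logic from the question (the kendoListView wiring is kept as in the original):
leagueDB.DailyEvents.toArray(function (events) {
    var weekday = moment().isoWeekday() - 1;
    // plain Array.prototype.filter runs in the browser, so any code is allowed here
    var filtered = events.filter(function (event) {
        var rule = RRule.fromString(event.RecurrenceRule);
        return rule.options.byweekday.indexOf(weekday) !== -1;
    });
    $("#listView").kendoListView({ dataSource: filtered });
});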
Read more on http://jaydata.org/tutorials/entityexpressions-the-heart-of-jaydata

What's wrong with my Meteor publication?

I have a publication, essentially what's below:
Meteor.publish('entity-filings', function publishFunction(cik, filingsArray, limit) {
    if (!cik || !filingsArray)
        console.error('PUBLICATION PROBLEM');
    var limit = 40;
    var entityFilingsSelector = {};
    if (filingsArray.indexOf('all-entity-filings') > -1)
        entityFilingsSelector = { ct: 'filing', cik: cik };
    else
        entityFilingsSelector = { ct: 'filing', cik: cik, formNumber: { $in: filingsArray } };
    return SB.Content.find(entityFilingsSelector, {
        limit: limit
    });
});
I'm having trouble with the filingsArray part. filingsArray is an array of regexes for the Mongo $in query. I can hardcode filingsArray in the publication as [/8-K/], and that returns the correct results. But I can't get the query to work properly when I pass the array from the router. See the debugged contents of the array in the image below. The second and third images are the client/server debug contents, indicating the same content on both client and server, identical to when I hardcode the array in the query.
My question is: what am I missing? Why won't my query work, or what are some likely reasons it isn't working?
In that first screenshot, that's a string that looks like a regex literal, not an actual RegExp object. So {$in: ["/8-K/"]} will only match the literal string "/8-K/", which is not the same as {$in: [/8-K/]}.
Regexes are not EJSON-able objects, so you won't be able to send them over the wire as publish function arguments or method arguments or method return values. I'd recommend sending a string, then inside the publish function, use new RegExp(...) to construct a regex object.
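A minimal sketch of that recommendation, adapted from the publication in the question (filingStrings and filingRegexes are hypothetical names, used here only for illustration):
Meteor.publish('entity-filings', function (cik, filingStrings, limit) {
    // filingStrings arrives as plain strings, e.g. ['8-K']
    var filingRegexes = filingStrings.map(function (s) {
        return new RegExp(s); // construct real RegExp objects server-side
    });
    return SB.Content.find(
        { ct: 'filing', cik: cik, formNumber: { $in: filingRegexes } },
        { limit: limit }
    );
});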
If you're comfortable adding new methods on the RegExp prototype, you could try making RegExp an EJSON-able type, by putting this in your server and client code:
RegExp.prototype.toJSONValue = function () {
    return this.source;
};
RegExp.prototype.typeName = function () {
    return "regex";
};
EJSON.addType("regex", function (str) {
    return new RegExp(str);
});
After doing this, you should be able to use regexes as publish function arguments, method arguments and method return values. See this meteorpad.
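Once that registration has run on both client and server, a subscription like the one below should be able to pass the regexes directly (argument order as in the question's publication; this is a sketch, not tested here):
Meteor.subscribe('entity-filings', cik, [/8-K/], 40);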
/8-K/... that's a weird regex. Try /8\-K/.
A minus (-) sign is a range indicator when it appears inside square brackets, and a range between 8 and K would not make sense. Outside of brackets most engines treat it as a literal character, but if the engine misreads it your query won't match anything. Escaping it costs nothing, so better safe than sorry.
/8\-K/ matches the string "8-K" anywhere in the value, which I assume is what you are trying to do.
Also, it would help to ensure your publication always returns something. Here's a good area where you could fail:
if (!cik || !filingsArray)
    console.error('PUBLICATION PROBLEM');
If those parameters aren't filled, console.error is probably not the best way to handle it. A better way:
if (!cik || !filingsArray) {
    throw "entity-filings: Publication problem.";
} else {
    // .. the rest of your publication
}
This makes sure the client does not wait unnecessarily long for the publication's status: for any input you either throw immediately or return a Cursor, and nothing in between (like surprise undefineds, unfilled Cursors, or other garbage data).

What would be the opposite to hasFields?

I'm using logical deletes by adding a deletedAt field. If I want to get only the deleted documents, it would be something like r.table('clients').hasFields('deletedAt'). My method has a withDeleted parameter which determines whether deleted documents are included or not.
Finally, people in the #rethinkdb IRC channel suggested that I use the filter method, and that did the trick:
query = adapter.table(table).filter(filters)
if withDeleted
  query = query.filter (doc) ->
    return doc.hasFields 'deletedAt'
else
  query = query.filter (doc) ->
    return doc.hasFields('deletedAt').not()
query.run connection, (err, results) ->
  ...
My question is why do I have to use filter and not something like:
query = adapter.table(table).filter(filters)
query = if withDeleted then query.hasFields 'deletedAt' else query.hasFields('deletedAt').not()
...
or something like that.
Thanks in advance.
The hasFields command can be called on both objects and sequences, but not can't be applied to a sequence.
This query:
query.hasFields('deletedAt')
Behaves the same as this one (on sequences of objects):
query.filter((doc) -> return doc.hasFields('deletedAt'))
However, this query:
query.hasFields('deletedAt').not()
Behaves like this:
query.filter((doc) -> return doc.hasFields('deletedAt')).not()
But that doesn't make sense: you want the not inside the filter, not after it. Like this:
query.filter((doc) -> return doc.hasFields('deletedAt').not())
One nice thing about RethinkDB is that, because queries are built up in the host language, it's very easy to define new fluent syntax just by defining functions in your language. For example, if you wanted a lacksFields function, you could define it in Python (sorry, I don't really know CoffeeScript) like so:
def lacks_fields(stream, *args):
    res = stream
    for arg in args:
        res = res.filter(lambda x: ~x.has_fields(arg))
    return res
Then you can use a nice fluent syntax like:
lacks_fields(stream, "foo", "bar", "buzz")
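Since the question is in CoffeeScript, here is the same helper sketched in JavaScript against the official driver (lacksFields is our own name, not a built-in command):
function lacksFields(stream) {
    var fields = Array.prototype.slice.call(arguments, 1);
    return fields.reduce(function (seq, field) {
        // the not() goes inside the filter, as explained above
        return seq.filter(function (doc) {
            return doc.hasFields(field).not();
        });
    }, stream);
}
// usage, mirroring the Python example:
lacksFields(r.table('clients'), 'foo', 'bar', 'buzz');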

Protovis - dealing with a text source

Let's say I have a text file with lines such as these:
[4/20/11 17:07:12:875 CEST] 00000059 FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on D:/Prgs/testing/WebSphere/AppServer/profiles/ProcCtr01/logs/ffdc/server1_3d203d20_11.04.20_17.07.12.8755227341908890183253.txt com.test.testserver.management.cmdframework.CmdNotificationListener 134
[4/20/11 17:07:27:609 CEST] 0000005d wle E CWLLG2229E: An exception occurred in an EJB call. Error: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
com.lombardisoftware.core.TeamWorksException: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
at com.lombardisoftware.server.ejb.persistence.CommonDAO.assertNotNull(CommonDAO.java:70)
Is there any way to easily import a data source such as this into Protovis? If not, what would be the easiest way to parse it into JSON format? For example, the first entry might be parsed like so:
[
  {
    "Date": "4/20/11 17:07:12:875 CEST",
    "Status": "00000059",
    "Msg": "FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I"
  }
]
Thanks, David
Protovis itself doesn't offer any utilities for parsing text files, so your options are:
Use Javascript to parse the text into an object, most likely using regex.
Pre-process the text using the text-parsing language or utility of your choice, exporting a JSON file.
Which you choose depends on several factors:
Is the data somewhat static, or are you going to be running this on a new or dynamic file each time you look at it? With static data, it might be easiest to pre-process; with dynamic data, this may add an annoying extra step.
How much data do you have? Parsing a 20K text file in Javascript is totally fine; parsing a 2MB file will be really slow, and will cause the browser to hang while it's working (unless you use Workers).
If there's a lot of processing involved, would you rather put that load on the server (by using a server-side script for pre-processing) or on the client (by doing it in the browser)?
If you wanted to do this in Javascript, based on the sample you provided, you might do something like this:
// Assumes var text = 'your text';
// use the utility of your choice to load your text file into the
// variable (e.g. jQuery.get()), or just paste it in.
var lines = text.split(/[\r\n\f]+/),
    // regex to match your log entry beginning
    patt = /^\[(\d\d?\/\d\d?\/\d\d? \d\d:\d\d:\d\d:\d{3} [A-Z]+)\] (\d{8})/,
    items = [],
    currentItem;
// loop through the lines in the file
lines.forEach(function(line) {
    // look for the beginning of a log entry
    var initialData = line.match(patt);
    if (initialData) {
        // start a new item, using the captured matches
        currentItem = {
            Date: initialData[1],
            Status: initialData[2],
            Msg: line.substr(initialData[0].length + 1)
        };
        items.push(currentItem);
    } else {
        // this is a continuation of the last item
        currentItem.Msg += "\n" + line;
    }
});
// items now contains an array of objects with your data
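From there, getting the JSON text the question asked about is one call, and the same array can be handed straight to a Protovis mark as its data:
// serialize the parsed objects to JSON text
var json = JSON.stringify(items, null, 2);
console.log(json);
// or use the array directly, e.g. new pv.Panel().add(pv.Bar).data(items)...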