How to retrieve and inject sections from and to JSON in Scala

Suppose I have incoming JSON with the following structure:
[
  {
    "personId": 12,
    "name": "John Doe",
    "age": 48,
    "birthdate": "12/1/1954",
    "relationships": [
      {
        "relationType": "parentOf",
        "value": "Johnny Walker"
      },
      {
        "relationType": "sonOf",
        "value": "Charles Smith"
      }
    ]
  },
  {
    "personId": 13,
    "name": "Merry Christmas",
    "age": 28,
    "birthdate": "12/1/1985",
    "relationships": [
      {
        "relationType": "sisterOf",
        "value": "Will Smith"
      },
      {
        "relationType": "cousinOf",
        "value": "Brad Pitt"
      }
    ]
  }
]
The requirement is that, for each Person record, the controller has to carve out the relationships array and store each record from it in a separate relationship table (with a personId association) while persisting this incoming JSON.
Subsequently, when these person records are queried, the system has to look up the relationships for each person in the relationships table and inject them back, reproducing the JSON above to return to the UI for rendering.
What is the most efficient way to perform these "carve out" and "inject" operations using the Play Framework in Scala (with Slick in the persistence layer)? I have looked at this JSON transformation link and json.pickBranch in there, but I'm not sure whether that fully applies to the "carve out" and "inject" use cases for preparing the JSON shown in the example. Are there any elegant ideas?

One way, which is pretty straightforward, is to use case classes along with Play's JSON inception (the Json.format macro):
import play.api.libs.json.Json

case class Relationship(relationType: String, value: String)

object Relationship {
  implicit val RelationshipFormatter = Json.format[Relationship]
}

// Note: personId is a number in the incoming JSON, so it is Int here, not String
case class Person(personId: Int, name: String, age: Int, birthdate: String, relationships: Seq[Relationship]) {
  def withRelationships(relationship: Seq[Relationship]) = copy(relationships = relationships ++ relationship)
}

object Person {
  implicit val PersonFormatter = Json.format[Person]
}
Now you can convert a JSON value to a Person with the following code, provided that jsValue is of type JsValue (which, in Play controllers, you can get via request.body.asJson):
Json.fromJson[Person](jsValue)
For converting a Person back to JSON, use the following, provided that person is a value of type Person in your context:
Json.toJson(person)
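Putting both directions together, a minimal controller action might look like the sketch below. This is only an illustration: the controller and action names are made up, and it assumes the Person/Relationship case classes and formats above are in scope.

```scala
import play.api.libs.json._
import play.api.mvc._

// Sketch only: PersonController and its routes are illustrative names.
class PersonController(cc: ControllerComponents) extends AbstractController(cc) {

  // e.g. POST /persons — parse the body into a Person, or report validation errors
  def create = Action(parse.json) { request =>
    request.body.validate[Person] match {
      case JsSuccess(person, _) =>
        // person.relationships is now available to persist separately ("carve out")
        Created(Json.toJson(person))
      case JsError(errors) =>
        BadRequest(JsError.toJson(errors))
    }
  }
}
```

Using validate instead of Json.fromJson directly gives you the accumulated parse errors to return to the client.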
The only remaining piece is your Slick schema, which is straightforward.
One option is a simple table for Person, without relations, and one table for Relationship with a foreign key to the Person table. Then you find all relationships associated with a specific person and append them to that person by calling the withRelationships method, which gives you a new Person that you can serve as JSON:
val person = .... // find person
val relations = .... // find relationships associated with this person
val json = Json.toJson(person.withRelationships(relations))
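For reference, the two tables could be sketched in Slick roughly as follows. This is a sketch under assumptions: the table and column names are invented, and the profile import depends on your actual database.

```scala
import slick.jdbc.H2Profile.api._

// Person table, without the relationships ("carve out" stores them elsewhere)
class People(tag: Tag) extends Table[(Int, String, Int, String)](tag, "person") {
  def personId  = column[Int]("person_id", O.PrimaryKey)
  def name      = column[String]("name")
  def age       = column[Int]("age")
  def birthdate = column[String]("birthdate")
  def * = (personId, name, age, birthdate)
}

// Relationship table with a foreign key back to person
class Relationships(tag: Tag) extends Table[(Int, String, String)](tag, "relationship") {
  def personId     = column[Int]("person_id")
  def relationType = column[String]("relation_type")
  def value        = column[String]("value")
  def person = foreignKey("person_fk", personId, TableQuery[People])(_.personId)
  def * = (personId, relationType, value)
}

val relationships = TableQuery[Relationships]

// "inject": the query for one person's relationships
def relationsFor(id: Int) =
  relationships.filter(_.personId === id).map(r => (r.relationType, r.value)).result
```

Running relationsFor(id) through db.run gives you the rows to map into Relationship values and pass to withRelationships.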

Related

Compound queries

I have a RESTful service that accepts a custom query, like this:
/entities/User?actions=
{
  "$link": {
    "entityType": "Manager",
    "entity": {
      "name": "John Smith"
    },
    "linkName": "managers",
    "backLinkName": "account",
    "$set": {
      "propertyName": "aclWrite",
      "propertyValue": {
        "$ref": {
          "propertyName": "entityId"
        }
      }
    }
  }
}
Which simply means:
Create a new entity of type User.
Create a new entity of type Manager with the field name, linking the User to be created to this Manager through the link name "managers".
Back-link the Manager entity to the User with the link name "account", and set the Manager entity's write ACL (Access Control List) to the ID of the User to be created.
I created this query structure because I couldn't find any suitable query language that supports this kind of action/procedure.
The question is: is there any query language that supports such compound actions/procedures, or can GraphQL handle this?
As a specification, GraphQL doesn't care what fields your schema defines, what arguments those fields take or what your field resolvers do with those arguments. So it's perfectly feasible to design a schema that would let the client compose an equivalent mutation:
mutation {
  link(
    entityType: "Manager"
    entity: {
      name: "John Smith"
    }
    linkName: "managers"
    backLinkName: "account"
    set: {
      propertyName: "aclWrite"
      propertyValue: {
        ref: {
          propertyName: "entityId"
        }
      }
    }
  ) {
    # some fields here to return in the response
  }
}
GraphQL does not support references to other nodes inside the same query, so you would still probably want a single mutation whose input mirrored your existing API. That said, using GraphQL for this may still be preferable because of request validation, which is all the more important with complex requests like this. Having an IDE like GraphiQL or GraphQL Playground that lets you write your queries using autocomplete is a big plus too.

What is a good ExtJS component to manage a Map data structure with dynamic key/value pairs

I'd like to ask for your advice with the following problem.
I have an entity representation in JSON that looks like this:
{
  id: 1,
  prop1: "someValue",
  prop2: "someValue",
  dynamicProperties: {
    name1: "value1",
    name2: "value2",
    name3: "value3"
  }
}
As you can see, my entity has properties "prop1" and "prop2", which can take any value, and it also has a property "dynamicProperties", which can hold a variable number of entries (e.g. "name1", "name2", "name3", and so on).
I want my users to be able to create/update/delete the entries of dynamicProperties. That is, it should be possible to add a new property "name4" with value "value4", change the value of the property "name2", and/or delete the pair "name1"/"value1".
I initially thought about using Ext.grid.PropertyGrid to show the dynamicProperties. It lets me edit the values of my properties, but it doesn't let me add new ones: by default, the name column of a PropertyGrid is not editable, and I haven't been able to change this.
Is there a way to achieve what I am looking for?
The only alternative that I have thought of is to change my JSON representation to something like the following JSON and use a regular Ext.grid.Panel to manage it:
{
  id: 1,
  prop1: "someValue",
  prop2: "someValue",
  dynamicProperties: [
    {
      name: "name1",
      value: "value1"
    },
    {
      name: "name2",
      value: "value2"
    },
    {
      name: "name3",
      value: "value3"
    }
  ]
}
I don't like this approach because I would need to validate that each name is unique, and maybe add an id property.
On the backend I use a Java entity where dynamicProperties is a HashMap, like this:
@ElementCollection
@MapKeyColumn(name="name")
@Column(name="value")
@CollectionTable(name="entity_dynamic_properties", joinColumns=@JoinColumn(name="entity_id"))
private Map<String, String> dynamicProperties;
Thanks for your advice.
You can include additional components inside the row editor.
In your case I would suggest putting a cell-editing grid beneath the non-dynamic properties, with two columns (name/value) that users can type into freely.
I ran into a similar problem and solved it with this question and answer, which I think covers the basics of how to do this.

How do you model case classes to reflect database queries results in a reusable manner

I will go with an example.
Say I have three tables defined like this:
(pseudocode)
Realm
  id: number, pk
  name: text, not null

Family
  id: number, pk
  realm_id: number, fk to Realm, pk
  name: text, not null

Species
  id: number, pk
  realm_id: number, fk to Family (and therefore to Realm), pk
  family_id: number, fk to Family, pk
  name: text, not null
A tentative case class definition would be:
case class Realm(
  id: Int,
  name: String
)

case class Family(
  id: Int,
  realm: Realm,
  name: String
)

case class Species(
  id: Int,
  family: Family,
  name: String
)
If I build JSON from the result of querying the database like this:
SELECT *
FROM realm
JOIN family
ON family.realm_id = realm.id
JOIN species
ON species.family_id = family.id
AND species.realm_id = family.realm_id
Example data:
[{
  "id": 1,
  "family": {
    "id": 1,
    "name": "Mammal",
    "realm": {
      "id": 1,
      "name": "Animal"
    }
  },
  "name": "Human"
},
{
  "id": 2,
  "family": {
    "id": 1,
    "name": "Mammal",
    "realm": {
      "id": 1,
      "name": "Animal"
    }
  },
  "name": "Cat"
}]
OK, so far this is usable: if I need to show every species grouped by realm, I can transform the JsValue, or filter in JavaScript, etc. However, when posting data back to the server, these classes seem a little awkward. If I want to add a new species, I would have to post something like this:
{
  "id": ???,
  "family": {
    "id": 1,
    "name": "Mammal", // Awkward
    "realm": {
      "id": 1,
      "name": "Animal" // Awkward
    }
  },
  "name": "Cat"
}
Should my classes then be:
case class Realm(
  id: Int,
  name: Option[String]
)

case class Family(
  id: Int,
  realm: Realm,
  name: Option[String]
)

case class Species(
  id: Option[Int],
  family: Family,
  name: String
)
Defined like this, I can omit posting what seems to be unnecessary data, but then the class definitions no longer reflect the database, where those fields are not nullable.
Queries are projections of data, more or less like Table.map(function) => Table2. When data is extracted from the database and the name field isn't selected, that doesn't mean it is null. How do you deal with these things?
One way to deal with it is to represent the interconnection using other data structures instead of letting each level know about the next.
For example, in the places where you need to represent the entire tree, you could represent it with:
Map[Realm, Map[Family, Seq[Species]]]
And then just Realm in some places (for example as a REST/JSON resource), and maybe a (Species, Family, Realm) tuple where you only want to work with one species but need to know about the other two levels of the hierarchy.
I would also advise you to think two or three times about letting your model structure define your JSON structure: what happens to the code that consumes your JSON when you change anything in your model classes? (And if you really want that, do you actually need to go via a model structure at all? Why not build your JSON directly from the database results and skip one level of data transformation?)
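To illustrate the Map-based representation, here is one way to fold flat join rows into that nested structure. It is only a sketch, using flattened variants of the case classes (without the nested references) purely for the example:

```scala
// Flattened variants of the case classes, purely for this sketch
case class Realm(id: Int, name: String)
case class Family(id: Int, name: String)
case class Species(id: Int, name: String)

// Fold flat (realm, family, species) rows from the join into the nested map
def nest(rows: Seq[(Realm, Family, Species)]): Map[Realm, Map[Family, Seq[Species]]] =
  rows.groupBy(_._1).map { case (realm, inRealm) =>
    realm -> inRealm.groupBy(_._2).map { case (family, inFamily) =>
      family -> inFamily.map(_._3)
    }
  }
```

Each row of the SQL join becomes one tuple, and the two groupBy passes rebuild the realm/family/species hierarchy without any class needing a reference to its parent.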

neo4j cypher update existing node or create new node

I have a graph with approximately nine million nodes and twelve million relationships. For each node in the graph there is a subset of properties that, per label, forms a unique identity for the node. The graph is updated by various data sources, which augment existing nodes or create new nodes if they don't exist. I don't want the updates to create duplicates with respect to these unique property sets.
For example, I have People in the graph, and their uniqueness is determined by their first name and last name. The following code is to create two distinct people:
CREATE (p:Person{first:"barry",last:"smith",height:187});
CREATE (p:Person{first:"fred",last:"jones",language:"welsh"});
Later, from one of the data sources I receive the following data records (one per line):
first: "fred", last: "lake", height: 201
first: "barry", last: "smith", language: "english"
first: "fred", last: "jones", language: "welsh", height: 188
first: "fred", last: "jones", eyes: "brown"
first: "barry", last: "smith"
After updating the graph I want to have the following nodes:
(:Person{first:"fred",last:"jones",language:"welsh",height:188,eyes:"brown"})
(:Person{first:"barry",last:"smith",language:"english",height:187})
(:Person{first:"fred",last:"lake",height:201})
I'm trying to formulate a MERGE query which can do this kind of update. I have come up with the following approach:
Start with a MERGE that uses the uniqueness properties (first and last from the example) to find or create the initial node;
Then do a SET containing each property defined in the incoming record.
So, for the example records given above:
MERGE (p:Person{first:"fred",last:"lake"}) SET p.height = 201;
MERGE (p:Person{first:"barry",last:"smith"}) SET p.language = "english";
MERGE (p:Person{first:"fred",last:"jones"}) SET p.language = "welsh", p.height = 188;
MERGE (p:Person{first:"fred",last:"jones"}) SET p.eyes = "brown";
MERGE (p:Person{first:"barry",last:"smith"});
I've tried this out and it works, but I'm curious to know whether this is the best way (most efficient...) to ensure uniqueness in the nodes based on a set of properties, and allow additional information to be added (or not) as updates come in over time?
A naive approach: what if you run a single MERGE that either creates or updates the node?
Given your list of records, consider each record as a map:
{ first: "fred", last: "lake", height: 201 }
{ first: "barry", last: "smith", language: "english" }
{ first: "fred", last: "jones", language: "welsh", height: 188 }
{ first: "fred", last: "jones", eyes: "brown" }
{ first: "barry", last: "smith" }
Then write your query in a parametric way:
MERGE (p:Person { first: { map }.first, last: { map }.last })
ON CREATE SET p = { map }
ON MATCH SET p += { map }
Description of the query:
On create, it sets all the properties passed in the { map } on the new node.
On match, it adds the new properties to the node without deleting any existing ones.
I've run some queries in console of the page linked above with a MERGE ON MATCH and it seems to update existing properties to new values.
The queries I've run are the following:
MATCH (peter { name: 'Peter' }) RETURN peter
MERGE (peter { name: 'Peter' }) ON MATCH SET peter += { hungry: TRUE , position: 'Entrepreneur' }
MATCH (peter { name: 'Peter' }) RETURN peter
// added two new properties here
MERGE (peter { name: 'Peter' }) ON MATCH SET peter += { hungry: FALSE , position: 'Entrepreneur' }
MATCH (peter { name: 'Peter' }) RETURN peter
// hungry is now false in here
I'd say that this is the best way. Depending on the Neo4j interface you are using, you could write a single query that would handle everything without custom SET commands, but I'm guessing that you were just simplifying the question and have that covered.

GORM get/find resource by ID using MongoDB in Grails

Grails makes it easy to get a domain object by ID (handy for building a REST API).
A controller to retrieve a resource can be as simple as:
MetricController.groovy
import grails.converters.JSON

class MetricController {
    def index() {
        def resource = Metric.get(params.id)
        render resource as JSON
    }
}
When using the Grails plugin for MongoDB GORM (compile ":mongodb:1.2.0"), the default id type of Long needs to be changed to type String or ObjectId.
Metric.groovy
import org.bson.types.ObjectId

class Metric {
    static mapWith = "mongo"
    ObjectId id
    String title
}
However, doing a .get(1) will now result in:
Error 500: Internal Server Error
URI
/bow/rest/metric/1
Class
java.lang.IllegalArgumentException
Message
invalid ObjectId [1]
I took a guess and changed the controller to use findById:
def resource = Metric.findById(new ObjectId(new Date(), params.id.toInteger()))
That fixed the error, but it fails to find the object (always returns null).
For example, using the id of "-1387348672" does not find this test object:
{
    "class": "Metric",
    "id": {
        "class": "org.bson.types.ObjectId",
        "inc": -1387348672,
        "machine": 805582250,
        "new": false,
        "time": 1371329632000,
        "timeSecond": 1371329632
    },
    "title": "Test"
}
The ObjectId.inc field may not even be the correct field to use for the resource ID.
So, what is the simplest way to retrieve a domain object by ID when using MongoDB?
When a domain object is persisted in MongoDB, it is stored as a document whose primary key is an ObjectId, a unique 12-byte BSON value. For example, if you have a domain class Product like
import org.bson.types.ObjectId

class Product {
    ObjectId id
    String name
    static mapWith = "mongo"
}
then the persisted entity in MongoDB would look like the following if you save it with the name "TestProduct":
// db.product.find() in mongodb
{
    "_id" : ObjectId("51bd047c892c8bf0b3a58b21"),
    "name" : "TestProduct",
    "version" : 0
}
The _id becomes your primary key for that document. To get this document you need its ObjectId. Speaking in a RESTful context, you at least need the hex representation 51bd047c892c8bf0b3a58b21 of the 12-byte id as part of the request.
So in the above case, you can fetch that particular document by doing something like
Product.get(new ObjectId("51bd047c892c8bf0b3a58b21"))
Product.findById(new ObjectId("51bd047c892c8bf0b3a58b21"))
Have a look at the API for ObjectId which would make clear how to retrieve a document.
When you retrieve the document as JSON, then it just shows the ObjectId class with its elements.
{
    "class": "com.example.Product",
    "id": {
        "class": "org.bson.types.ObjectId",
        "inc": -1280996575,
        "machine": -1993569296,
        "new": false,
        "time": 1371341948000,
        "timeSecond": 1371341948
    },
    "name": "TestProduct"
}
For completeness, here's the domain with a controller that gets a resource by its ID string (instead of an ObjectId).
Example:
Metric.get("51bf8ccc30040460f5a05579")
Domain
import org.bson.types.ObjectId

class Metric {
    ObjectId id
    String title
    static mapWith = "mongo"

    def out() {
        return [
            id: id as String, // render the Mongo ObjectId as its hex string
            title: title
        ]
    }
}
The out() method is used to render the resource showing its ID string (not its ObjectID).
Controller
import grails.converters.JSON

class MetricController {
    def index() {
        def resource = Metric.read(params.id)
        render resource.out() as JSON
    }
}
Example JSON response for /rest/metric/51bf8ccc30040460f5a05579
{ "id": "51bf8ccc30040460f5a05579", "title": "Accounts" }