TinkerPop Frames @Adjacency to OrientDB LINKLIST

Is there a way to map a TinkerPop Frames @Adjacency-annotated property to an OrientDB LINKLIST? Right now I have something like this:
interface Person {
    @Adjacency(label = "personCars", direction = Direction.OUT)
    Iterable<Car> getCars();

    @Adjacency(label = "personCars", direction = Direction.OUT)
    void addCar(Car car);
}
I want this to be mapped to a LINKLIST in the OrientDB database to keep the order of the added vertices, but by default it is mapped to the LINKBAG type. Is there a clean way to make OrientDB map adjacencies to LINKLISTs?

By default, OrientDB uses a set to handle the edge collection. Sometimes it's better to have an ordered list so you can access an edge by offset. Example:
person.createEdgeProperty(Direction.OUT, "Photos").setOrdered(true);
Every time you access the edge collection, the edges are ordered. Below is an example that prints all the photos in order.
for (Edge e : loadedPerson.getEdges(Direction.OUT, "Photos")) {
    System.out.println("Photo name: " + e.getVertex(Direction.IN).getProperty("name"));
}
To access the underlying edge list you have to use the Document Database API. Here's an example that moves the 10th photo to the end of the list.
// REPLACE EDGE Photos
List<ODocument> photos = loadedPerson.getRecord().field("out_Photos");
photos.add(photos.remove(9));
From the official documentation.
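Applied to the question's Person/personCars mapping, the same idea would be to mark that edge property as ordered on the Person vertex type once, before Frames starts adding edges. A minimal sketch, assuming the Blueprints/OrientDB graph API from the answer above, a Person vertex class, and a placeholder database URL:
import com.tinkerpop.blueprints.Direction;
import com.tinkerpop.blueprints.impls.orient.OrientGraphNoTx;
import com.tinkerpop.blueprints.impls.orient.OrientVertexType;

public class OrderedPersonCarsSetup {
    public static void main(String[] args) {
        // One-time schema setup; run before wrapping the graph with FramedGraph.
        OrientGraphNoTx graph = new OrientGraphNoTx("plocal:/tmp/persondb"); // placeholder URL
        try {
            OrientVertexType person = graph.getVertexType("Person");
            if (person == null) {
                person = graph.createVertexType("Person");
            }
            // Mark out-edges with label "personCars" as ordered, so the edge
            // collection is persisted as a LINKLIST instead of the default LINKBAG.
            person.createEdgeProperty(Direction.OUT, "personCars").setOrdered(true);
        } finally {
            graph.shutdown();
        }
    }
}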

Related

Strapi - How to GET data sorted based on relation property?

I have an Articles table and it has a relation with the Authors table. Each author has a name property. I want to make a GET request for the articles and receive the articles sorted based on their author's name.
When I use _sort=author in the URL parameters, I get the articles sorted based on the author object's ID.
When I use _sort=author.name in the URL parameters, I get the articles in a seemingly random order, but definitely not based on author.name.
How can I get the data sorted based on author.name?
I use MongoDB as my database.
That is the default behavior of _sort. However, you can accomplish this by overriding the find method in api/article/services/article.js to sort by the author's name, like so:
module.exports = {
  async find(params, populate) {
    let result = await strapi.query('article').find(params, populate);
    // Sort in memory by the related author's name when ?_sort=author is passed
    if (params._sort == 'author')
      return result.sort((a, b) => (a.author.name > b.author.name) ? 1 : -1);
    return result;
  },
};
Check the documentation for customizing services to get more info: https://strapi.io/documentation/v3.x/concepts/services.html#concept

Getting ElasticSearch document fields inside of loaded records in searchkick

Is it possible to get ElasticSearch document fields inside of loaded AR records?
Here is a gist that illustrates what I mean: https://gist.github.com/allomov/39c30905e94c646fb11637b45f43445d
In this case I want to avoid the additional computation of total_price after getting the response from ES. The solution I currently see is to include the relationship and run the total_price computation for each record, which, as I see it, is not an optimal way to perform this operation.
result = Product.search("test", includes: :product_components).response
products_with_total_prices = result.map do |product|
  {
    product: product,
    total_price: product.product_components.map(&:price).compact.sum
  }
end
Could you please tell me if it is possible to mix ES document fields into a loaded AR record?
As far as I'm aware it isn't possible to get a response that merges the document fields into the loaded record.
Usually I prefer to completely rely on the data in the indexed document where possible (using load: false as a search option), and only load the AR record(s) as a second step if necessary. For example:
result = Product.search("test", load: false).response

# If you also need AR records, could do something like:
product_ids = result.map(&:id)
products_by_id = {}
Product.where(id: product_ids).find_each do |ar_product|
  products_by_id[ar_product.id] = ar_product
end

merged_result = result.map do |es_product|
  es_product[:ar_product] = products_by_id[es_product.id]
  es_product
end
Additionally, it may be helpful to retrieve the document stored in the ES index for a specific record, which I would normally do by defining the following method in your Product class:
def es_document
  return nil unless doc = Product.search_index.retrieve(self).presence
  Hashie::Mash.new doc
end
You can use select: true and the with_hit method to get the record and the search document together. For your example:
result = Product.search("test", select: true)
products_with_total_prices =
  result.with_hit.map do |product, hit|
    {
      product: product,
      total_price: hit["_source"]["total_price"]
    }
  end

ArangoDB: create an edge via the REST API without knowing the vertex IDs

Is there a way with ArangoDB to create an edge via the REST API without knowing the vertex IDs, using a query to find the vertices and link them?
Like this with OrientDB: create edge Uses from (select from Module where name = 'm2') to (select from Project where name = 'p1')
I don't want to first query the two vertices via REST and then create the edge. I also don't want to use Foxx.
Perhaps with AQL?
Thanks.
Yes, it is doable with a single AQL query:
LET from = (FOR doc IN Module FILTER doc.name == 'm2' RETURN doc._id)
LET to = (FOR doc IN Project FILTER doc.name == 'p1' RETURN doc._id)
INSERT {
  _from: from[0],
  _to: to[0],
  /* insert other edge attributes here as needed */
  someOtherAttribute: "someValue"
}
INTO nameOfEdgeCollection
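Since the question asks specifically for the REST API, the same AQL can be sent in a single request to ArangoDB's HTTP cursor endpoint (POST /_api/cursor). A minimal sketch using Java 11's built-in HttpClient; the host, database (_system), edge collection name, and credentials are placeholders for your setup:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CreateEdgeViaRest {
    public static void main(String[] args) throws Exception {
        // The AQL from the answer above, collapsed to one line for the request body.
        String aql = "LET from = (FOR doc IN Module FILTER doc.name == 'm2' RETURN doc._id) "
                   + "LET to = (FOR doc IN Project FILTER doc.name == 'p1' RETURN doc._id) "
                   + "INSERT { _from: from[0], _to: to[0] } INTO nameOfEdgeCollection";
        String body = "{\"query\": \"" + aql.replace("\"", "\\\"") + "\"}";

        String auth = Base64.getEncoder().encodeToString("root:password".getBytes()); // placeholder credentials
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8529/_db/_system/_api/cursor"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The cursor API executes the query and returns the result (if any) as JSON.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}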

How to implement Polymorphic objects in CouchDB / NoSQL?

I'd like to implement polymorphic objects in a NoSQL / document DB.
What is the best practice?
Example:
Master Class
Item Object (All should have Item.Title, Item.Subtitle, Item.IconURL)
SubClasses: ItemPhoto, ItemPDF, ItemURL, ItemHTML
(Each subclass would have different properties)
I'd like to list all Items generically, then get specific data when I drill down.
Possible Options:
Save two different documents, with master/child type & ID
Save everything as subclass documents with an internal Item object
Other options??
Thanks
CouchDB stores documents (data), not classes (data with code). There is code in map, validation, list, and show functions, which handle documents, but those documents are plain objects that carry data only.
In your example, you can define a library function to check that a given document contains the data of an item, and then use this function to decide what to do. For example:
// in an "appTypes" library:
exports.isItem = function(doc) {
  return doc.Title && doc.Subtitle && doc.IconURL;
}

// in a map function
function(doc) {
  var appTypes = require('appTypes');
  if (appTypes.isItem(doc)) {
    // doc is an Item...
  }
}
Obviously you can put all code belonging to an Item in an Item class and create instances of that class initialized with the data in the doc. But that's your choice, and does not change how CouchDB will handle the document.

How to set a 2D index in Play Morphia?

How do I set a 2D index in Play Morphia?
Example:
db.places.ensureIndex( { loc : "2d" } )
http://www.mongodb.org/display/DOCS/Geospatial+Indexing
I assume you mean Play 1.2.x.
You can't do this from the @Indexed annotation yet, it seems: http://code.google.com/p/morphia/issues/detail?id=290
You can do it with this somewhat hacky [untested] code:
MorphiaPlugin.ds()
    .getMongo()
    .getDB("dbname")
    .getCollection("places")
    .ensureIndex(new BasicDBObject("loc", "2d"));
But you might just want to do it from the shell, as you show. It's a one time thing.
Just to add to this, years later:
@Indexes(
    @Index(fields = @Field(value = "location", type = IndexType.GEO2DSPHERE))
)
on the @Entity class (if the member holding the GeoPoint is called location) generates the correct indices for spherical geospatial queries.
Don't forget to set the 4th param of the .near() method to true (spherical).
Also, make sure the indices have actually been generated by calling datastore.ensureIndexes() before querying.
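Putting those pieces together, here is a rough sketch; the entity, field, and variable names are placeholders, and the exact near() overload (longitude, latitude, radius, spherical) depends on your Morphia version, so treat this as an outline rather than copy-paste code:
import java.util.List;

import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.annotations.*;
import org.mongodb.morphia.utils.IndexType;

@Entity("places")
@Indexes(
    @Index(fields = @Field(value = "location", type = IndexType.GEO2DSPHERE))
)
public class Place {
    @Id
    private ObjectId id;
    private String name;
    private double[] location;   // [longitude, latitude]; a GeoPoint-style member works the same way

    // With a configured Datastore, ensure the 2dsphere index exists and run a spherical query.
    public static List<Place> findNearby(Datastore datastore, double lng, double lat) {
        datastore.ensureIndexes();
        return datastore.createQuery(Place.class)
                .field("location")
                .near(lng, lat, 0.05, true)   // 4th parameter = spherical, as noted above
                .asList();
    }
}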