Elasticsearch: how to join publications and keywords (PostgreSQL)

I have defined two indexes in Elasticsearch that are populated by two different queries against a PostgreSQL database. I have many hundreds of documents with thousands of keywords, and I have used Logstash to populate the two indexes.
The first index is called publication and is defined as follows:
"mappings" : {
  "doc" : {
    "properties" : {
      "external_id" : { "type" : "text" },
      "title" : { "type" : "text", "analyzer" : "english" },
      "description" : { "type" : "text", "analyzer" : "english" }
    }
  }
}
The second index is called keyword and is defined as follows:
"mappings" : {
  "doc" : {
    "properties" : {
      "publication_id" : { "type" : "keyword" },
      "keyword" : { "type" : "keyword" }
    }
  }
}
The relationship between the two indexes is based on the external_id <-> publication_id.
I am trying to define further indexes in such a way that I can locate all the publications that have a specific keyword, or all the keywords that are defined for a specific publication.
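Elasticsearch has no cross-index join, so this is usually handled with an application-side, two-step lookup. A minimal sketch (index and field names are taken from the mappings above; the query values and the client-side glue between the two requests are assumptions):

# Step 1: collect the publication ids that carry a given keyword
GET /keyword/_search
{
  "query" : { "term" : { "keyword" : "some-keyword" } },
  "_source" : ["publication_id"]
}

# Step 2: fetch those publications, feeding in the ids returned by step 1
GET /publication/_search
{
  "query" : { "terms" : { "external_id" : ["id1", "id2"] } }
}

The reverse direction (all keywords of one publication) is a single term query on publication_id against the keyword index. Note that external_id is mapped as text, so it is analyzed; exact-id lookups are more reliable if that field is re-mapped as keyword.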

Related

Perform a search on main collection field and array of objects simultaneously

I have my document structure as below:
{
  "codeId" : 8.7628945723895E13, // long numeric value stored in scientific notation by MongoDB
  "problemName" : "Hardware Problem",
  "problemErrorCode" : "97695686856",
  "status" : "active",
  "problemDescription" : "ghdsojgnhsdjgh sdojghsdjoghdghd i0dhgjodshgddsgsdsdfghsdfg",
  "subProblems" : [
    {
      "codeId" : 8.76289457238896E14,
      "problemName" : "Some problem",
      "problemErrorCode" : "57790389503490249640",
      "problemDescription" : "This is edited",
      "status" : "active",
      "_id" : ObjectId("589476eeae39b20b1c15535b")
    },
    ...
  ]
}
I have a search field which searches by codeId, which basically serves as the parent code id. Along with the parent codeId I also want to search by the sub-problems' codeId, problemErrorCode, problemName and problemDescription.
How do I query the subdocuments with a regex search and at the same time match some parent field via an $or clause to achieve this?
You can try something like this; the remaining sub-problem fields follow the same pattern:
query = {
  "$or" : [
    { "codeId" : somevalue }, // exact match on the parent field
    { "subProblems.codeId" : { "$regex" : searchValue, "$options" : "i" } },
    { "subProblems.problemName" : { "$regex" : searchValue, "$options" : "i" } },
    { "subProblems.problemErrorCode" : { "$regex" : searchValue, "$options" : "i" } },
    { "subProblems.problemDescription" : { "$regex" : searchValue, "$options" : "i" } }
  ]
};
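One caveat: $regex only ever matches string values, and codeId is stored as a number, so a regex clause on it will never match. On MongoDB 4.2+ one possible workaround (a sketch, shown for the top-level codeId only; db.collection is a placeholder, and whether the string form matches depends on how the number is rendered) is to convert the number to a string inside an $expr:

db.collection.find({
  "$expr" : {
    "$regexMatch" : {
      "input" : { "$toString" : "$codeId" }, // number -> string ($toString needs 4.0+)
      "regex" : searchValue,
      "options" : "i"
    }
  }
})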

MongoDB - Query for specific subdocument

I have a set of mongodb documents with the following structure:
{
  "_id" : NUUID("58fbb893-dfe9-4f08-a761-5629d889647d"),
  "Identifiers" : {
    "IdentificationLevel" : 2,
    "Identifier" : "extranet\\test#test.com"
  },
  "Personal" : {
    "FirstName" : "Test",
    "Surname" : "Test"
  },
  "Tags" : {
    "Entries" : {
      "ContactLists" : {
        "Values" : {
          "0" : {
            "Value" : "{292D8695-4936-4865-A413-800960626E6D}",
            "DateTime" : ISODate("2015-04-30T09:14:45.549Z")
          }
        }
      }
    }
  }
}
How can I make a query in the mongo shell which finds all documents with a specific "Value" (e.g. {292D8695-4936-4865-A413-800960626E6D}) in the Tags.Entries.ContactLists.Values path?
The structure is unfortunately locked by Sitecore, so using another structure is not an option.
As your sample collection structure shows, Values is an object and contains only a single entry, keyed "0". Also note that the stored Value includes the surrounding braces, so they must be part of the match. To get the document for the given structure, try the following query:
db.collection.find({
  "Tags.Entries.ContactLists.Values.0.Value" : "{292D8695-4936-4865-A413-800960626E6D}"
})
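If Values can hold more than one entry (keys "0", "1", ...), hard-coding the "0" no longer works. A sketch of an alternative on MongoDB 3.4.4+ (the valuesArray field name is made up), converting the object into an array so any entry can be matched:

db.collection.aggregate([
  { "$addFields" : {
      "valuesArray" : { "$objectToArray" : "$Tags.Entries.ContactLists.Values" }
  }},
  { "$match" : {
      "valuesArray.v.Value" : "{292D8695-4936-4865-A413-800960626E6D}"
  }}
])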

Resolving MongoDB DBRef array using Mongo Native Query and working on the resolved documents

My MongoDB database is made up of 2 main collections:
1) Maps
{
  "_id" : ObjectId("542489232436657966204394"),
  "fileName" : "importFile1.json",
  "territories" : [
    {
      "$ref" : "territories",
      "$id" : ObjectId("5424892224366579662042e9")
    },
    {
      "$ref" : "territories",
      "$id" : ObjectId("5424892224366579662042ea")
    }
  ]
},
{
  "_id" : ObjectId("542489262436657966204398"),
  "fileName" : "importFile2.json",
  "territories" : [
    {
      "$ref" : "territories",
      "$id" : ObjectId("542489232436657966204395")
    }
  ],
  "uploadDate" : ISODate("2012-08-22T09:06:40.000Z")
}
2) Territories, which are referenced in "Map" objects:
{
  "_id" : ObjectId("5424892224366579662042e9"),
  "name" : "Afghanistan",
  "area" : 653958
},
{
  "_id" : ObjectId("5424892224366579662042ea"),
  "name" : "Angola",
  "area" : 1252651
},
{
  "_id" : ObjectId("542489232436657966204395"),
  "name" : "Unknown",
  "area" : 0
}
My objective is to list every map with its cumulative area and number of territories. I am trying the following query:
db.maps.aggregate(
  { '$unwind' : '$territories' },
  { '$group' : {
      '_id' : '$fileName',
      'numberOf' : { '$sum' : '$territories.name' },
      'locatedArea' : { '$sum' : '$territories.area' }
  }}
)
However the results show 0 for each of these values:
{
  "result" : [
    {
      "_id" : "importFile2.json",
      "numberOf" : 0,
      "locatedArea" : 0
    },
    {
      "_id" : "importFile1.json",
      "numberOf" : 0,
      "locatedArea" : 0
    }
  ],
  "ok" : 1
}
I probably did something wrong when trying to access the member variables of Territory (name and area), but I couldn't find an example of such a case in the Mongo docs. area is stored as an integer, and name as a string.
Yes indeed, the field "territories" has an array of database references and not the actual documents. DBRefs are objects that contain information with which we can locate the actual documents.
In the above example you can see this clearly by firing the mongo query below:
db.maps.find({ "_id" : ObjectId("542489232436657966204394") }).forEach(function(doc) {
  print(doc.territories[0]);
})
It will print the DBRef object rather than the document itself:
output: DBRef("territories", ObjectId("5424892224366579662042e9"))
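As an aside, the legacy mongo shell ships a fetch() helper on DBRef objects, so a single reference can be resolved directly:

db.maps.findOne({ "_id" : ObjectId("542489232436657966204394") }).territories[0].fetch()

This runs a findOne against the referenced collection behind the scenes.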
So '$sum': '$territories.name' and '$sum': '$territories.area' show you 0, since the DBRef has no fields called name or area.
You therefore need to resolve the reference to the actual document before doing something like $territories.name.
To achieve what you want, you can make use of the map() function, since neither aggregation nor map-reduce supports sub-queries, and you already have a self-contained map document with references to its territories.
Steps to achieve:
a) get each map
b) resolve the `DBRef`.
c) calculate the total area, and the number of territories.
d) make and return the desired structure.
Mongo shell script:
db.maps.find().map(function(doc) {
  var refName;
  // collect the $id of every territory DBRef (they all share the same $ref)
  var territory_refs = doc.territories.map(function(terr_ref) {
    refName = terr_ref.$ref;
    return terr_ref.$id;
  });
  // resolve the references: look all the ids up in the referenced collection
  var areaSum = 0;
  db[refName].find({
    "_id" : { $in : territory_refs }
  }).forEach(function(i) {
    areaSum += i.area;
  });
  return {
    "id" : doc.fileName,
    "noOfTerritories" : territory_refs.length,
    "areaSum" : areaSum
  };
})
output:
[
  {
    "id" : "importFile1.json",
    "noOfTerritories" : 2,
    "areaSum" : 1906609
  },
  {
    "id" : "importFile2.json",
    "noOfTerritories" : 1,
    "areaSum" : 0
  }
]
Map-reduce functions should not be, and cannot be, used to resolve DBRefs on the server side.
See what the documentation has to say:
The map function should not access the database for any reason.
The map function should be pure, or have no impact outside of the
function (i.e. side effects.)
The reduce function should not access the database, even to perform
read operations. The reduce function should not affect the outside
system.
Moreover, a reduce function, even if used (which could never work anyway), would never be called for your problem, since a group with respect to "fileName" or "ObjectId" would always contain only a single document in your dataset.
MongoDB will not call the reduce function for a key that has only a
single value

Filtering Mongo items by multiple fields and subfields

I have the following items in my collection:
> db.test.find().pretty()
{ "_id" : ObjectId("532c471a90bc7707609a3d4f"), "name" : "Alice" }
{
  "_id" : ObjectId("532c472490bc7707609a3d50"),
  "name" : "Bob",
  "partner_type1" : {
    "status" : "rejected"
  }
}
{
  "_id" : ObjectId("532c473e90bc7707609a3d51"),
  "name" : "Carol",
  "partner_type2" : {
    "status" : "accepted"
  }
}
{
  "_id" : ObjectId("532c475790bc7707609a3d52"),
  "name" : "Dave",
  "partner_type1" : {
    "status" : "pending"
  }
}
There are two partner types: partner_type1 and partner_type2. A user cannot be an accepted partner in both types, but he can, for example, be a rejected partner in partner_type1 and an accepted one in the other.
How can I build a Mongo query that fetches the users who can still become partners?
Since a user can only be accepted in one partner type, you should turn it around: have a field accepted_as: "partner_type1" or accepted_as: "partner_type2". For people who aren't accepted yet, either omit the field or set it to null.
In both cases, your query to get every non-accepted user is then:
{
  "accepted_as" : null
}
(null matches both non-existing fields and fields explicitly set to null)
For me the logical schema would be this:
"partner" : {
  "type" : 1,
  "status" : "rejected"
}
At least that keeps the paths consistent between documents.
So if you want to stay away from mapReduce-style methods for working out which field the status lives on, and instead use plain queries and the aggregation pipeline, then don't vary the field paths between documents. Restructuring the data as above is the most consistent form.
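With that consistent shape, fetching the users who can still become partners is a plain query. A sketch, assuming the partner field is simply absent for users with no partnership yet:

db.test.find({ "partner.status" : { "$ne" : "accepted" } })

$ne matches both documents where the status holds another value ("rejected", "pending") and documents that lack the field entirely.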

Can I utilize indexes when querying by MongoDB subdocument without known field names?

I have a document structure as follows:
{
  "_id": ...,
  "name": "Document name",
  "properties": {
    "prop1": "something",
    "2ndprop": "other_prop",
    "other3": ["tag1", "tag2"]
  }
}
I can't know the actual field names in the properties subdocument (they are given by the application user), so I can't create indexes like properties.prop1. Neither can I know the structure of the field values; they can be a single value, an embedded document, or an array.
Is there any practical way to run performant queries against a collection with this kind of schema design?
One option that came to mind is to add a new field to the document, index it, and store the field names used in each document in it:
{
  "_id": ...,
  "name": "Document name",
  "properties": {
    "prop1": "something",
    "2ndprop": "other_prop",
    "other3": ["tag1", "tag2"]
  },
  "property_fields": ["prop1", "2ndprop", "other3"]
}
Now I could first run a query against the property_fields field and then let MongoDB scan through the matching documents to see whether properties.prop1 contains the required value. This is definitely slower, but could be viable.
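Both steps fit in a single find. A sketch (the collection name and values are made up; assumes a multikey index on property_fields):

db.docs.createIndex({ "property_fields" : 1 })
db.docs.find({
  "property_fields" : "prop1",       // narrowed via the index
  "properties.prop1" : "something"   // residual filter on the candidates
})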
One way of dealing with this is to use a schema like the one below.
{
  "name" : "Document name",
  "properties" : [
    { "k" : "prop1", "v" : "something" },
    { "k" : "2ndprop", "v" : "other_prop" },
    { "k" : "other3", "v" : "tag1" },
    { "k" : "other3", "v" : "tag2" }
  ]
}
Then you can index "properties.k" and "properties.v" for example like this:
db.foo.ensureIndex({"properties.k": 1, "properties.v": 1})
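Queries then match key and value inside the same array element with $elemMatch, which can make use of that compound index. An illustrative lookup (values made up):

db.foo.find({
  "properties" : { "$elemMatch" : { "k" : "prop1", "v" : "something" } }
})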