Atlas Search: How to index a UUID field and query by UUID

I have mapped a UUID field (_id is a UUID in MongoDB) to string and set the analyzer to keyword, like this, in the Atlas Search index mapping:
"_id": {
"analyzer": "lucene.keyword",
"type": "string"
},
but when I query _id, it returns nothing:
"text": {
"path": "_id",
"query": "568ae8be-168d-4675-a442-fe93836e1b50"
}
So I am not sure how a UUID mapping should be set up and searched in Atlas Search.

This is not supported yet, according to:
https://www.mongodb.com/community/forums/t/atlas-search-compound-index-uuid-text/142813
https://feedback.mongodb.com/forums/924868-atlas-search/suggestions/41287666-add-support-for-uuid-datatypes-in-atlas-search
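Until that lands, one possible workaround (a sketch, not an official recommendation; the collection name items and the field name _idString are assumptions) is to store a plain-string copy of the UUID next to the binary _id:

db.items.insertOne({
  _id: UUID("568ae8be-168d-4675-a442-fe93836e1b50"),
  _idString: "568ae8be-168d-4675-a442-fe93836e1b50" // plain-string duplicate for Atlas Search
})

then map the copy exactly like the mapping above, but on _idString:

"_idString": {
  "analyzer": "lucene.keyword",
  "type": "string"
}

and point the text query's path at _idString instead of _id.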

Related

Sorting by nested object attributes in Mongoose when using populate

I'm trying to sort parent documents by an attribute from a populated child document.
So if a person schema has an attribute business, and the business schema has an attribute name, I want to sort the list of person documents alphabetically by the name of their business.
This seems very possible, since the relationship between person and business is always 1 to 1, but it seems Mongoose doesn't allow such a sorting mechanism: whenever I pass business.name as a sorting argument, it falls back to the default sort (the same sorting as passing unknown arguments).
I'm trying to use aggregate, but the docs on that are very bad and not all arguments are clear.
I would like to know if there is a way of doing this.
This is my aggregate code:
let populatedArray = [
  {
    "path": "business",
    "schema": require("../models/business").collection.name
  },
  {
    "path": "createdBy",
    "schema": require("../models/User").collection.name
  },
  {
    "path": "schema2",
    "schema": require("../models/schema2").collection.name
  },
  {
    "path": "schema3",
    "schema": require("../models/schema3").collection.name
  },
  {
    "path": "schema4",
    "schema": require("../models/schema4").collection.name
  },
  {
    "path": "schema5",
    "schema": require("../models/schema5").collection.name
  },
  {
    "path": "schema6",
    "schema": require("../models/schema6").collection.name,
    "populate": [{
      "path": "schema7",
      "schema": require("../models/schema7").collection.name
    }]
  }
];
populatedArray.forEach((elem) => {
  docsPromise.lookup({ from: elem.schema, localField: elem.path, foreignField: '_id', as: elem.path })
  docsPromise.unwind("$" + elem.path)
})
With the unwind command I get no documents; without the unwind command I get 500 documents while I only have 140 in the database. I know that $lookup is close to a LEFT JOIN on a SQL DB, which can give results like this, but I don't know how to stop it from doing so.
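For what it's worth, here is a minimal sketch of the usual shape of this pipeline (the collection names people and businesses are assumptions; preserveNullAndEmptyArrays keeps parents whose lookup matched nothing, which is what silently drops documents when unwinding a plain array):

db.people.aggregate([
  { $lookup: {
      from: "businesses",      // assumed collection name
      localField: "business",
      foreignField: "_id",
      as: "business"
  } },
  // Keep persons even when the lookup found no matching business,
  // instead of dropping them during the unwind.
  { $unwind: { path: "$business", preserveNullAndEmptyArrays: true } },
  // The relationship is 1-to-1, so sorting on the joined field is now safe.
  { $sort: { "business.name": 1 } }
])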

How to create a MongoDB 2dsphere index in Strapi?

My Strapi project uses a Mongo database.
Strapi version: 3.0.0-beta.19.5
I have tried:
creating the 2dsphere index manually with a command in the mongo console, but when the application starts, the index gets deleted. I think the database synchronizes with the Strapi model configuration.
I checked the Strapi documentation and I see there is an option to create an index by adding a configuration to model.settings.json, but there is only a single-field index option.
Is there any way to create a 2dsphere index?
I just found a solution. I had to look at the index section of the Mongoose documentation.
The Strapi documentation only says that the value of 'index' is a boolean, which differs from the Mongoose docs. The model.settings.json structure actually follows the Mongoose documentation.
So, to create the 2dsphere index, we just need to specify "2dsphere" in the key "index" on that field.
E.g.
{
  "kind": "collectionType",
  "connection": "default",
  "collectionName": "phone_stores",
  "info": {
    "name": "phoneStore"
  },
  "options": {
    "increments": true,
    "timestamps": true
  },
  "attributes": {
    "car": {
      "type": "integer",
      "required": true
    },
    "userStoreId": {
      "type": "objectId"
    },
    "location": {
      "type": "json",
      "index": "2dsphere" // <------ <1>
    }
  }
}
<1> If true is specified, a single-field index will be created on this field. But you can also specify other types of index; in my case I use '2dsphere'.
UPDATE
What I said about "you can also specify other types of index" is not correct. The index types are limited by the framework. So far I have tested 2dsphere, which works. I also tested a text index, but it didn't work.
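Once the index survives a restart, a quick way to confirm it is working is a geospatial query from the mongo console (a sketch; the stored GeoJSON point and the coordinates are example values):

// Assumes location holds GeoJSON such as { "type": "Point", "coordinates": [100.50, 13.75] }.
db.phone_stores.find({
  location: {
    $near: {
      $geometry: { type: "Point", coordinates: [100.50, 13.75] },
      $maxDistance: 5000 // meters
    }
  }
})

$near with $geometry requires exactly the kind of 2dsphere index created above.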

Getting an error while loading data with DMS from MongoDB to Elasticsearch, any ideas?

I am trying to use AWS DMS to transfer data from MongoDB to Amazon Elasticsearch.
I am encountering the following log in CloudWatch:
{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "Field [_id] is a metadata field and cannot be added inside a document. Use the index API request parameters."
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "Field [_id] is a metadata field and cannot be added inside a document. Use the index API request parameters."
  },
  "status": 400
}
This is my configuration for the MongoDB source;
it has the "_id as a separate column" checkbox enabled.
I tried disabling it, and it says that there is no primary key.
Is there anything you know of that can fix this?
Quick note:
I have added a mapping of the _id field to old_id, and now it doesn't import any of the other fields, even when I add them to the mapping.
As Elasticsearch does not support the LOB data type, the other fields are not migrated.
Add an additional transformation rule to change the data type to string:
{
  "rule-type": "transformation",
  "rule-id": "3",
  "rule-name": "3",
  "rule-action": "change-data-type",
  "rule-target": "column",
  "object-locator": {
    "schema-name": "test",
    "table-name": "%",
    "column-name": "%"
  },
  "data-type": {
    "type": "string",
    "length": "30"
  }
}
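For reference, the _id-to-old_id rename mentioned in the question would be expressed as another transformation rule, roughly like this sketch (rule-id and rule-name are arbitrary; the schema-name test mirrors the rule above):

{
  "rule-type": "transformation",
  "rule-id": "2",
  "rule-name": "2",
  "rule-action": "rename",
  "rule-target": "column",
  "object-locator": {
    "schema-name": "test",
    "table-name": "%",
    "column-name": "_id"
  },
  "value": "old_id"
}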

Update an internal document and a field in MongoDB

I'm wondering whether it's possible to update two elements in the same query. I have this...
db.types.update({"user_id": "aaaawa", "type": "type d"}, {"$set": {"days.7/21/18": {"check": .955}}, {"last_date": new ISODate("2018-07-22")}})
which returns an "invalid property id" error. But if I do them as separate updates, they work... updating the internal document...
db.types.update({"user_id": "aaaawa", "type": "type d"}, {"$set": {"days.7/21/18": {"check": .955}}})
updating the field...
db.types.update({"user_id": "aaaawa", "type": "type d"}, {"$set": {"last_date": new ISODate("2018-07-22")}})
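Presumably the combined form wanted both assignments inside a single $set document, along these lines (a sketch; same filter and values as above):

db.types.update(
  { "user_id": "aaaawa", "type": "type d" },
  { "$set": {
      "days.7/21/18": { "check": 0.955 }, // nested path and top-level field both live in one $set
      "last_date": new ISODate("2018-07-22")
  } }
)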

Elasticsearch with a river to MongoDB

I have installed and configured MongoDB and ES with the MongoDB river, but I'm not sure I really understand rivers in ES. For example, I want to index the collection "users" from MongoDB.
I will send a curl PUT/POST request to the URL /_river/mongodb_users/_meta:
{
  "type": "mongodb",
  "mongodb": {
    "db": "somedb",
    "collection": "users"
  },
  "index": {
    "name": "users",
    "type": "user"
  }
}
But now I want to index a second collection, for example "users2". Do I really need to create a new river with a curl POST/PUT to a URL like /_river/mongodb_users2/_meta with this JSON?
{
  "type": "mongodb",
  "mongodb": {
    "db": "somedb",
    "collection": "users2"
  },
  "index": {
    "name": "users2",
    "type": "user"
  }
}
Can I not use the already created river "mongodb_users"? Will I need to create one river per collection?
Thank you for the explanation!
Yes. The way the MongoDB river works does not allow fetching content from more than one collection in a single river.
But you can create as many rivers as you need.
That said, if you want to index users1 to the users type in Elasticsearch and users2 to the same users type, you can (as long as they don't use the same IDs).
Just modify index.type to "users".
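Concretely, the second river's _meta document would then point at the same index and type, along these lines (a sketch; only the index block differs from the question's version):

{
  "type": "mongodb",
  "mongodb": {
    "db": "somedb",
    "collection": "users2"
  },
  "index": {
    "name": "users",
    "type": "users"
  }
}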
Does it help?