MongoException: An equivalent index already exists with a different name and options - mongodb

Mongodb version: 6.0.3
Yii2 version: 2.0.34
Debug_log:
MongoDB\Driver\Exception\CommandException: An equivalent index already exists with a different name and options. Requested index: { v: 2, key: { _fts: "text", _ftsx: 1 }, name: "full.eng_text", weights: { full.eng: 1 }, default_language: "english", language_override: "language", textIndexVersion: 3 }, existing index: { v: 2, key: { _fts: "text", _ftsx: 1 }, name: "full.uk_text", weights: { full.uk: 1 }, default_language: "english", language_override: "language", textIndexVersion: 3 } in /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2-mongodb/src/Command.php:186
Stack trace:
#0 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2-mongodb/src/Command.php(186): MongoDB\Driver\Manager->executeCommand()
#1 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2-mongodb/src/Command.php(357): yii\mongodb\Command->execute()
#2 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2-mongodb/src/Collection.php(153): yii\mongodb\Command->createIndexes()
#3 /www/wwwroot/jumbotwo.space/public_shtml/YII/components/GoogleMapsApi.php(62): yii\mongodb\Collection->createIndex()
#4 /www/wwwroot/jumbotwo.space/public_shtml/YII/components/GoogleMapsApi.php(46): app\components\GoogleMapsApi->createDBs()
#5 /www/wwwroot/jumbotwo.space/public_shtml/YII/components/GoogleMapsApiGeocode.php(23): app\components\GoogleMapsApi->__construct()
#6 /www/wwwroot/jumbotwo.space/public_shtml/YII/controllers/AjaxController.php(36): app\components\GoogleMapsApiGeocode->__construct()
#7 [internal function]: app\controllers\AjaxController->actionSearch()
#8 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2/base/InlineAction.php(57): call_user_func_array()
#9 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2/base/Controller.php(157): yii\base\InlineAction->runWithParams()
#10 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2/base/Module.php(528): yii\base\Controller->runAction()
#11 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2/web/Application.php(103): yii\base\Module->runAction()
#12 /www/wwwroot/jumbotwo.space/public_shtml/YII/vendor/yiisoft/yii2/base/Application.php(386): yii\web\Application->handleRequest()
#13 /www/wwwroot/jumbotwo.space/public_shtml/index.php(14): yii\base\Application->run()
#14 {main}
The function works when the site language is the main one (uk), returning response code 200, but it fails with a 500 error when the language is English (en).
I think the problem is with the $collection->createIndex() call on an already-existing index, but I'm not sure.

You are trying to create a text index on a collection that already has one.
From the docs:
A collection can have at most one text index.
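One way around this, as a minimal mongosh sketch (the collection name places is hypothetical), is to drop the existing single-language text index and create one text index covering both language fields, since a single text index may span multiple fields:
// Sketch only: "places" stands in for the real collection name.
db.places.dropIndex("full.uk_text")  // the existing index named in the error
db.places.createIndex(
  { "full.uk": "text", "full.eng": "text" },  // one text index over both fields
  { name: "full_text", default_language: "english" }
)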

Related

Creation of Unique Index not working using Mongo shell

I have created a small MongoDB database and wanted to make the username field unique, so I used the createIndex() command to create an index on that field with the UNIQUE property.
I tried creating the unique index using the command below in mongosh.
db.users.createIndex({'username':'text'},{unqiue:true,dropDups: true})
To check the current indexes, I used the getIndexes() command; below is the output.
newdb> db.users.getIndexes()
[
{ v: 2, key: { _id: 1 }, name: '_id_' },
{
v: 2,
key: { _fts: 'text', _ftsx: 1 },
name: 'username_text',
weights: { username: 1 },
default_language: 'english',
language_override: 'language',
textIndexVersion: 3
}
]
Now the index is created, so for confirmation I checked the same in MongoDB Compass, but I cannot see the UNIQUE property assigned to my newly created index. Please refer to the screenshot below.
MongoDB Screenshot
I tried deleting the old index, as it was not showing the UNIQUE property, and created it again using the MongoDB Compass GUI; now I can see the UNIQUE property assigned to the index.
MongoDB Screenshot 2
And below is the output of the getIndexes() command in mongosh.
newdb> db.users.getIndexes()
[
{ v: 2, key: { _id: 1 }, name: '_id_' },
{
v: 2,
key: { _fts: 'text', _ftsx: 1 },
name: 'username_text',
unique: true,
sparse: false,
weights: { username: 1 },
default_language: 'english',
language_override: 'language',
textIndexVersion: 3
}
]
I tried searching for similar topics, but didn't find anything related. Is there anything I am missing or doing wrong here?
I misspelled the property unique as unqiue, which led to this issue.
I tried again with the correct spelling, and it is working now.
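For reference, the corrected command would look roughly like this (a sketch; dropDups was removed in modern MongoDB versions, so it is left out):
db.users.dropIndex("username_text")  // remove the index created with the typo
db.users.createIndex({ username: "text" }, { unique: true })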
Sorry for a dumb question

Mongodb get error message "MongoError: Path collision at activity"

I am using the mongoose library for MongoDB in my Node.js project. One of my log files shows the following MongoDB error message:
{
message: 'Path collision at activity',
stack: 'MongoError: Path collision at activity\n' +
' at Connection.<anonymous> (/project/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:443:61)\n' +
' at Connection.emit (events.js:315:20)\n' +
' at Connection.EventEmitter.emit (domain.js:483:12)\n' +
' at processMessage (/project/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:364:10)\n' +
' at Socket.<anonymous> (/project/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:533:15)\n' +
' at Socket.emit (events.js:315:20)\n' +
' at Socket.EventEmitter.emit (domain.js:483:12)\n' +
' at addChunk (_stream_readable.js:295:12)\n' +
' at readableAddChunk (_stream_readable.js:271:9)\n' +
' at Socket.Readable.push (_stream_readable.js:212:10)\n' +
' at TCP.onStreamRead (internal/stream_base_commons.js:186:23)',
operationTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1600849377 },
ok: 0,
errmsg: 'Path collision at activity',
code: 31250,
codeName: 'Location31250',
'$clusterTime': {
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1600849377 },
signature: {
hash: Binary {
_bsontype: 'Binary',
sub_type: 0,
position: 20,
buffer: <Buffer d2 34 b7 ac bc a7 3f ea 38 d1 5c e3 26 58 39 43 d8 11 6c 83>
},
keyId: Long { _bsontype: 'Long', low_: 4, high_: 1596659428 }
}
},
name: 'MongoError',
level: 'info',
timestamp: '2020-09-23 08:22:57',
[Symbol(mongoErrorContextSymbol)]: {}
}
This error does not point to the location of the problem, i.e. which function returned it.
If anyone has any clue, kindly let me know. Thanks in advance.
The problem most likely lies in the projection. This error was introduced as part of a breaking change in v4.4.
From MongoDB 4.4 release notes here:
Path Collision: Embedded Documents and Its Fields
Starting in MongoDB 4.4, it is illegal to project an embedded document with any of the embedded document’s fields.
For example, consider a collection inventory with documents that contain a size field:
{ ..., size: { h: 10, w: 15.25, uom: "cm" }, ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both size document and the size.uom field:
db.inventory.find( {}, { size: 1, "size.uom": 1 } ) // Invalid starting in 4.4
In previous versions, lattermost projection between the embedded documents and its fields determines the projection:
If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.
If the projection of the embedded document comes before the projection any of its fields, MongoDB projects the specified field or fields. For example, the projection document { "size.uom": 1, size: 1, "size.h": 1 } produces the same result as the projection document { "size.uom": 1, "size.h": 1 }
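As a quick sketch of what remains valid in 4.4+, project either the whole embedded document or only specific nested fields, never both:
db.inventory.find( {}, { size: 1 } )                     // whole embedded document
db.inventory.find( {}, { "size.uom": 1, "size.h": 1 } )  // specific nested fields only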
For the record, I got this error when I wanted to select a set of fields of a populated collection while also paginating them. Take this author-book example:
author: {
_id: ObjectId
name: String,
birthday: String,
...,
books: [
_id: ObjectId,
_id: ObjectId,
]
}
book: {
_id: ObjectId,
title: String,
publisher: String,
year: string,
...
}
I was trying to paginate sets of 5 books of the author and select the title and year fields using:
Author.findById(author_id)
  .populate('books')
  .select({ books: { $slice: [offset, limit + offset] } })
  .select('books books.title books.year');
This was throwing the error. Instead, I solved it by using the select property of the populate() call:
Author.findById(author_id)
  .populate({
    path: 'books',
    select: 'title year'
  })
  .select({ books: { $slice: [offset, limit + offset] } });
In my case, this projection was failing with the path collision error:
await FFF.findOne({rid: req.body.w.toString()}, {projection: {'c.u': 1, 'c': {$elemMatch: {k: 12}}}})
Just remove the duplicated paths: projecting 'c' already returns the whole c array, so also projecting 'c.u' collides with it.
await FFF.findOne({rid: req.body.w.toString()}, {projection: {'c': {$elemMatch: {k: 12}}}})

MongoDB Optional Partial Unique Index in Nested Arrays

I am trying to create a unique partial index on the token field in the nested array apps.tokens, such that the nested tokens array is optional or can be empty.
I create the index as:
collection.createIndex(
Indexes.ascending("apps.tokens.token"),
new IndexOptions()
.unique(true)
.partialFilterExpression(
Filters.type("apps.tokens.token", BsonType.STRING)
)
);
The value of the field apps.tokens.token is never explicitly null and will always be some unique string. I am currently not worried about duplicates within the same document.
However, I can't get the partial index to behave the way I would expect. It is mostly working as intended, except for situations when there is an item in the apps array with an empty or missing tokens array.
Creating the following structure fails with error E11000 duplicate key error collection: db1.testCollection index: apps.tokens.token_1 dup key: { apps.tokens.token: null } :
[
{
"apps": [
{
"client_id": "capp1",
"tokens": [
{
"token": "t1",
"expiration": "2020-09-10T23:31:17.119+01:00"
}
]
},
{
"client_id": "capp2"
}
],
"uuid": "89337f58-a491-4e17-b8dd-726c9319dcaa"
},
{
"apps": [
{
"client_id": "capp3",
"tokens": [
{
"token": "t2",
"expiration": "2020-09-10T23:31:17.119+01:00"
}
]
},
{
"client_id": "capp4"
}
],
"uuid": "4ccc4d81-990f-4650-b26e-1d26fd22d91a"
}
]
However, this structure is perfectly valid according to the same index:
[
{
"apps": [
{
"client_id": "capp1"
},
{
"client_id": "capp2"
}
],
"uuid": "89337f58-a491-4e17-b8dd-726c9319dcaa"
},
{
"apps": [
{
"client_id": "capp3"
},
{
"client_id": "capp4"
}
],
"uuid": "4ccc4d81-990f-4650-b26e-1d26fd22d91a"
}
]
My guess is that the first test case fails because, once the first item is inserted, the index sees that it has an apps.tokens.token field that is a String and includes that whole document in the insert/update comparison.
On the other hand, the second test case does not fail, because none of the documents match the condition of apps.tokens.token being a String.
When it looks at the second item to be inserted, it somehow deduces that it has an apps.tokens.token field that is implicitly null (because there is no tokens array in one of the apps items), then checks whether the existing item matches {"apps.tokens.token": null}, and indeed it does, so the operation ends in failure.
What am I doing wrong?
I have tried to create the partial index with exists filter too, but it does not help.
Filters.and(
Filters.type("apps.tokens.token", BsonType.STRING),
Filters.exists("apps.tokens.token"),
Filters.exists("apps.tokens")
)
Is it possible to supplement the filter with some sort of function that will handle cases when tokens does not exist or is empty for each apps item in a document?
The purpose of an index in MongoDB is to map specific values to documents.
In the case of an index on an array (multikey index) there will be multiple values in the index for a single document.
An example:
Documents
#1 { apps: [
{ tokens: [
{token: "T1"},
{token: "T2"}
]},
{ tokens: [] }
]}
#2 { apps: [
{ tokens: [
{token: "T3"},
{token: "T4"}
]},
{ notokens: true }
]}
#3 { apps: [
{ notokens: true },
{ notokens: true }
]}
#4 { apps: [
{ tokens: [
{ token: "T5" },
{ token: "T5" }
]}
]}
Index
If we create an index on {"apps.tokens.token": 1}, the index will have the following:
NULL -> #1
NULL -> #2
NULL -> #3
"T1" -> #1
"T2" -> #1
"T3" -> #2
"T4" -> #2
"T5" -> #4
Unique
If we had instead created that index with a unique constraint, document #2 and #3 would have been both rejected because they would have caused the NULL value to be duplicated in the index.
Note that document #4 would be accepted. Since it is the value entered into the index that must be unique, and a value is only indexed once for a given document, the "T5" is not duplicated in the index even though it appears twice in the document, so this does not violate the unique constraint.
Partial
A partial index filter is matched against the document as a whole. If the filter matches, the document is included in the index.
If we create the index with the partial filter {"apps.tokens.token":{$type:"string"}}, it is matched in the same manner as if we had passed it to find, i.e. if any element of the array matches, the document is matched.
This would mean that documents #1, #2, and #4 would be included in the index, while #3 would be excluded.
If we had made the index both partial and unique, documents #1, #3, and #4 would be accepted, and #2 would be rejected for duplicating the NULL value.
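Expressed in mongosh, the partial unique index discussed above would look roughly like this (a sketch mirroring the Java IndexOptions from the question):
db.testCollection.createIndex(
  { "apps.tokens.token": 1 },
  {
    unique: true,
    partialFilterExpression: { "apps.tokens.token": { $type: "string" } }  // only index documents with a string token
  }
)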
Looks like the solution may be to use a sparse index, although the official documentation states:
Partial indexes offer a superset of the functionality of sparse indexes. If you are using MongoDB 3.2 or later, partial indexes should be preferred over sparse indexes.
My tests are passing with:
collection.createIndex(
Indexes.ascending("apps.tokens.token"),
new IndexOptions()
.unique(true)
.sparse(true)
);
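For reference, a rough mongosh equivalent of that sparse unique index (collection name taken from the error message above) would be:
db.testCollection.createIndex(
  { "apps.tokens.token": 1 },
  { unique: true, sparse: true }  // documents without the field are simply not indexed
)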
I wonder if this has any other implications that are not currently obvious to me.
For completeness, note that the index only enforces uniqueness across documents. It won't check for uniqueness within the same document, so it would still be possible to add a token that already exists in one of the apps of the same document. To work around this, I add a filter to the update query so that a document which already has the token I'm trying to add is excluded from the documents to be updated:
Document doc = Document.parse("{\"token\":\"t1\"}");
collection.updateOne(
Filters.and(
Filters.eq("uuid", "89337f58-a491-4e17-b8dd-726c9319dcaa"),
Filters.not(Filters.eq("apps.tokens.token", "t1"))
),
Updates.push("apps.$[app].tokens", doc),
new UpdateOptions().arrayFilters(Arrays.asList(
Filters.eq("app.client_id", "capp1")
))
);

mongodb terminal - $push/$pull - SyntaxError: invalid property id #(shell)

I'm trying to insert and update some simple data; what is wrong with my request?
I'm using the website https://www.jdoodle.com/online-mongodb-terminal
#1
db.Vendor.find()
#2
db.Vendor.insert({
employee: [
ObjectId('fffffa000000000000000002'),
ObjectId('fffffa000000000000000003')
]
});
#3
db.Vendor.update({
"employee": ObjectId("fffffa000000000000000002")
}, {
$push: {
"employee" : ObjectId("fffffa000000000000000004")
}
});
It's jdoodle-terminal specific, I guess. There, the document has to be valid JSON, not just a JS object as in the CLI mongo shell, so operator keys like $push need to be quoted.
db.Vendor.update({
"employee": ObjectId("fffffa000000000000000002")
}, {
"$push": {
"employee" : ObjectId("fffffa000000000000000004")
}
});
does the job.
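On newer shells, where update() is deprecated, a rough equivalent with updateOne works the same way:
db.Vendor.updateOne(
  { "employee": ObjectId("fffffa000000000000000002") },
  { "$push": { "employee": ObjectId("fffffa000000000000000004") } }
);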

mongodb aggregating nested values across whole collection with counts

I have documents with the following structure:
{
_id: 123,
machine_id: 456,
data: {
some_data: 100,
exceptions: [{
hash: 789,
value: 'something',
stack_trace: 'line 123: oops',
count: 5,
}]
}
}
{
_id: 234,
machine_id: 567,
data: {
some_other_data: 200,
exceptions: [{
hash: 789,
value: 'something',
stack_trace: 'line 123: oops',
count: 1,
}, {
hash: 890,
value: 'something_else',
stack_trace: 'line 678: ouch',
count: 3,
}]
}
}
The hash is a combination of the value and stack_trace (I added this specifically to try to aggregate exceptions across the whole collection). I want to run a query which returns each distinct exception, along with the total count, and the value and stack trace. In this case the result would look something like:
[{
hash: 789,
value: 'something',
stack_trace: 'line123: oops',
count: 6,
}, {
hash: 890,
value: 'something_else',
stack_trace: 'line 678: ouch',
count: 3,
}]
I'm quite new to MongoDB, and struggling to get the aggregation pipeline stages to give me any meaningful output.
Would also welcome comments on structuring this data, if you think there is a better way.
Your structure looks fine. You can drop the hash if you want and use value and stack_trace as the grouping key (see the sketch after the pipeline below).
You can use the aggregation below.
To $group on hash, you first need to $unwind the exceptions embedded array, then use $first to keep the value and stack_trace and $sum to add up the counts.
db.collection.aggregate([
  {$unwind: "$data.exceptions"},
  {$group: {
    _id: "$data.exceptions.hash",
    value: {$first: "$data.exceptions.value"},
    stack_trace: {$first: "$data.exceptions.stack_trace"},
    count: {$sum: "$data.exceptions.count"}
  }}
])
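If you drop the hash, a sketch of the same pipeline grouping on value and stack_trace instead would be:
db.collection.aggregate([
  {$unwind: "$data.exceptions"},
  {$group: {
    _id: {value: "$data.exceptions.value", stack_trace: "$data.exceptions.stack_trace"},
    count: {$sum: "$data.exceptions.count"}
  }}
])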