I have created a small MongoDB database, and I wanted to make the username field unique. So I used the createIndex() command to create an index on that field with the UNIQUE property.
I tried creating the unique index using the command below in mongosh.
db.users.createIndex({'username':'text'},{unqiue:true,dropDups: true})
To check the current indexes, I used the getIndexes() command. Below is the output:
newdb> db.users.getIndexes()
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  {
    v: 2,
    key: { _fts: 'text', _ftsx: 1 },
    name: 'username_text',
    weights: { username: 1 },
    default_language: 'english',
    language_override: 'language',
    textIndexVersion: 3
  }
]
Now that the index is created, I checked the same thing in MongoDB Compass for confirmation. But I cannot see the UNIQUE property assigned to my newly created index. Please refer to the screenshot below.
MongoDB Screenshot
I tried deleting the old index, since it was not showing the UNIQUE property, and created it again using the MongoDB Compass GUI. Now I can see the UNIQUE property assigned to the index.
MongoDB Screenshot 2
And below is the output of the getIndexes() command in mongosh.
newdb> db.users.getIndexes()
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  {
    v: 2,
    key: { _fts: 'text', _ftsx: 1 },
    name: 'username_text',
    unique: true,
    sparse: false,
    weights: { username: 1 },
    default_language: 'english',
    language_override: 'language',
    textIndexVersion: 3
  }
]
I tried searching for similar topics, but didn't find anything related. Is there anything I am missing or doing wrong here?
I misspelled the property unique as unqiue, which led to this issue.
I tried again with the correct spelling, and it is working now.
Sorry for a dumb question.
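For reference, a sketch of the corrected command (only the option spelling changes; dropDups is omitted here because that option was removed back in MongoDB 3.0):

```
db.users.createIndex({ 'username': 'text' }, { unique: true })
```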
Related
When I used numbers as keys, the compound index prefixes created were not as expected.
To define a compound index:
admin> use TestDB
TestDB> db.createCollection('coll')
TestDB> db.coll.createIndex({4:1,1:-1},{unique:true})
TestDB> db.coll.getIndexes()
Output
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  { v: 2, key: { '1': -1, '4': 1 }, name: '1_-1_4_1', unique: true }
]
I expected 4 to be the prefix of the index, but it turned out to be 1. Why?
It looks like the keys get sorted.
Yet when I use strings as keys, the result is completely different.
For example:
TestDB> db.createCollection("coll2")
TestDB> db.coll2.createIndex({'s':1, 'a':-1},{unique:true})
Output
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  { v: 2, key: { s: 1, a: -1 }, name: 's_1_a_-1', unique: true }
]
What? This time it doesn't seem to sort at all.
According to this issue, which references the official specification, this is a result of how JavaScript works.
For example, in Node:
$ node
Welcome to Node.js v16.17.0.
Type ".help" for more information.
> var pattern = { 4: 1, 1: -1 }
undefined
> pattern
{ '1': -1, '4': 1 }
You can see similar behavior in this jsfiddle.
If you are attempting to create indexes that have numeric keys, you will need to do so with a different language/driver.
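The reordering is easy to reproduce outside MongoDB entirely; a minimal standalone sketch (plain Node, no driver involved):

```javascript
// ECMAScript enumerates integer-like keys first, in ascending numeric
// order, before string keys -- so the intended prefix of the index spec
// is lost before it ever reaches the server.
const numericPattern = { 4: 1, 1: -1 };
console.log(Object.keys(numericPattern)); // [ '1', '4' ]

// Pure string keys keep their insertion order, which is why
// { s: 1, a: -1 } arrived at the server unchanged.
const stringPattern = { s: 1, a: -1 };
console.log(Object.keys(stringPattern)); // [ 's', 'a' ]
```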
I have 2 dedicated Mongo clusters which have the exact same model and indexes, and we query both environments the same way, but the results are different.
user.model.js
const mongoose = require('mongoose');
const { ObjectId } = mongoose.Schema.Types;

const schema = new mongoose.Schema({
  _id: ObjectId,
  role: {
    type: String,
    enum: ['user', 'admin'],
    required: true,
  },
  score: { type: Number, default: 0 },
  deactivated: { type: Date },
});

schema.index(
  { deactivated: 1, role: 1, score: -1 },
  { name: 'search_index', collation: { locale: 'en', strength: 2 } }
);
I noticed that one of our common queries was causing issues on the PROD environment.
The query looks like this:
db.getCollection('users')
  .find({deactivated: null, role: 'user'})
  .sort({score: -1})
  .limit(10)
  .collation({locale: 'en', strength: 2})
In the testing environment the query runs as expected, fully utilizing the index (~80K records total, 1,300 deactivated).
But in our PROD environment the query seems to use only the first part of the compound index (~50K records total, ~20K records deactivated).
The executionStats looks like:
As we can see, it uses at least the first part of the index to search only non-deactivated records, but the SORT happens in memory.
This is a legacy application so the first thing I did was ensure that the types of the indexed fields are following the schema in all the records.
I wonder if it could be the "role" collation somehow?
Any hint or clue will be greatly appreciated. Thanks in advance.
Thanks for providing the plans. It is a combination of a few things (including the multikeyness of the production index) that is causing the problem.
There are a few ways to potentially solve this, let's start with the obvious question. Is score supposed to be an array?
The schema suggests not. With MongoDB, an index becomes multikey once a single document is inserted that has an array (even an empty one) for a key in the index. There is no way to "undo" this change apart from rebuilding the index. If the field is not supposed to contain an array, then I would suggest fixing any documents that contain the incorrect data and then rebuilding the index. As this is production, you may want to build a temporary index to reduce the impact on the application while the original index is dropped and recreated. You may also want to look into schema validation to help prevent incorrect data from getting inserted in the future.
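The rebuild-with-a-temporary-index approach might look like this in mongosh (a sketch only; the collection name and index options are taken from the question, and this needs a live deployment):

```
// 1. Build a stand-in copy of the index under a temporary name,
//    so queries stay served during the rebuild.
db.users.createIndex(
  { deactivated: 1, role: 1, score: -1 },
  { name: 'search_index_tmp', collation: { locale: 'en', strength: 2 } }
)
// 2. Drop and recreate the original; with the bad array values fixed,
//    the rebuilt index is no longer multikey.
db.users.dropIndex('search_index')
db.users.createIndex(
  { deactivated: 1, role: 1, score: -1 },
  { name: 'search_index', collation: { locale: 'en', strength: 2 } }
)
// 3. Remove the stand-in.
db.users.dropIndex('search_index_tmp')
```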
If score can be an array, then we'll need to take a different approach. We can see in the UAT plan that a SORT_MERGE is used. The only reason that stage is required is because {"deactivated" : null} seems to have an additional index bound looking for undefined. That may be some internal implementation quirk as that BSON type appears to be deprecated. So updating the data to have an explicit false value for this field and using that check in the query predicate (rather than a check for null) will remove the need to split the plan out with a SORT_MERGE and will probably allow the multikey index to provide the sort:
winningPlan: {
  stage: 'LIMIT',
  limitAmount: 10,
  inputStage: {
    stage: 'FETCH',
    inputStage: {
      stage: 'IXSCAN',
      keyPattern: { deactivated: 1, role: 1, score: -1 },
      indexName: 'search_index',
      collation: {
        locale: 'en',
        caseLevel: false,
        caseFirst: 'off',
        strength: 2,
        numericOrdering: false,
        alternate: 'non-ignorable',
        maxVariable: 'punct',
        normalization: false,
        backwards: false,
        version: '57.1'
      },
      isMultiKey: true,
      multiKeyPaths: { deactivated: [], role: [], score: [ 'score' ] },
      isUnique: false,
      isSparse: false,
      isPartial: false,
      indexVersion: 2,
      direction: 'forward',
      indexBounds: {
        deactivated: [ '[false, false]' ],
        role: [
          '[CollationKey(0x514d314b0108), CollationKey(0x514d314b0108)]'
        ],
        score: [ '[MaxKey, MinKey]' ]
      }
    }
  }
}
I'm trying to $merge documents from a Read-Only View (aka ROV) into an On-Demand Materialized View (aka OMV), using Compass for now. I've created a pipeline in the Aggregations section of Compass:
[{
  $project: {
    _id: 0,
    RunDate: 1,
    Manufacturer: 1,
    Name: 1,
    Workgroup: 1,
    TotalPhysicalMemory: 1,
    Model: 1,
    PartOfDomain: 1,
    Domain: 1,
    NumberOfLogicalProcessors: 1
  }
}, {
  $merge: {
    into: 'omv_WSCReportIndex',
    on: 'Name',
    whenMatched: 'replace',
    whenNotMatched: 'insert'
  }
}]
Basically I want an OMV with a unique 'Name' field, into which I will be inserting computer docs like this one (and overwriting existing ones, if needed):
RunDate: "2022-08-23 10:54:38Z"
Manufacturer: "VMware, Inc."
Name: "SRV010"
Workgroup: null
TotalPhysicalMemory: 16
Model: "VMware7,1"
PartOfDomain: true
Domain: "test.loc"
NumberOfLogicalProcessors: 4
The problem is I'm getting an error when I click RUN: "Cannot find index to verify that join fields will be unique." I've tried creating an index on 'omv_WSCReportIndex' for the 'Name' field as a 1-type index with unique and as a text-type index with unique, and I've tried creating the index with a collation object.
I just can't figure out how to make this OMV work. Please help!
Using Mongo 3.2.
Let's say I have a collection with this schema:
{ _id: 1, type: a, source: x },
{ _id: 2, type: a, source: y },
{ _id: 3, type: b, source: x },
{ _id: 4, type: b, source: y }
Of course that my db is much larger and with many more types and sources.
I have created 4 indexes from combinations of type and source (even though 1 should be enough):
{type: 1}
{source: 1}
{type: 1, source: 1}
{source: 1, type: 1}
Now, I am running this distinct query:
db.test.distinct("source", {type: "a"})
The problem is that this query takes much more time than it should.
If I run it with runCommand:
db.runCommand({distinct: 'test', key: "source", query: {type: "a"}})
this is the result I get:
{
  "waitedMS": 0,
  "values": [
    "x",
    "y"
  ],
  "stats": {
    "n": 19400840,
    "nscanned": 19400840,
    "nscannedObjects": 19400840,
    "timems": 14821,
    "planSummary": "IXSCAN { type: 1 }"
  },
  "ok": 1
}
For some reason, Mongo uses only the type: 1 index for the query stage.
It should also use an index for the distinct stage.
Why is that? Using the {type: 1, source: 1} index would be much better, no? Right now it scans all the type: a documents even though there is an index that covers them.
Am I doing something wrong? Is there a better option for this kind of distinct?
As Alex mentioned, apparently MongoDB doesn't support this right now.
There is an open issue for it:
https://jira.mongodb.org/browse/SERVER-19507
Just drop the first 2 indexes. You don't need them. Mongo can use {type: 1, source: 1} in any query that would otherwise need the {type: 1} index.
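In mongosh, the cleanup would look roughly like this (a sketch only; it needs a live deployment, with the collection name taken from the question):

```
// Drop the redundant single-field indexes.
db.test.dropIndex({ type: 1 })
db.test.dropIndex({ source: 1 })
// { type: 1, source: 1 } still serves queries on type alone via its
// prefix; explain() should keep showing an IXSCAN on type_1_source_1.
db.test.find({ type: 'a' }).explain('queryPlanner')
```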
I have a search function that looks like this:
The criteria from this search is passed through to MongoDB's find() method as a criteria object, e.g:
{
  designer: "Designer1",
  store: "Store1",
  category: "Category1",
  name: "Keyword",
  gender: "Mens",
  price: {$gte: 50}
}
I'm only just learning about indexes in MongoDB, so please bear with me. I know I can create an index on each individual field, and I can also create a compound index on several fields. For instance, for one index I could do:
db.products.ensureIndex({designer: 1, store: 1, category: 1, name: 1, gender: 1, price: 1})
The obvious issue arises if someone searches for, say, a category but not a designer or store: that query can't use the index.
I'm currently looking up these terms using an $and operator, so my question is:
How can I create an index that allows for this type of searching with flexibility? Do I have to create an index for each possible combination of these 6 terms? Or if I use $and in my search will it be enough to index each individual term and I'll get the best performance?
$and won't work as MongoDB can only use one index per query at the moment. So if you create an index on each field that you search on, MongoDB will select the best fitting index for that query pattern. You can try with explain() to see which one is selected.
Creating an index for each possible combination is probably not a good idea, as you'd need 6 * 5 * 4 * 3 * 2 * 1 indexes, which is 720 indexes... and you can only have 63 indexes. You could pick the most likely ones perhaps but that won't help a lot.
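The arithmetic above is just the number of orderings of six fields; a quick sanity check:

```javascript
// Six fields can be arranged in 6! = 720 distinct orders,
// and each order is a different compound index.
const factorial = (n) => (n <= 1 ? 1 : n * factorial(n - 1));
console.log(factorial(6)); // 720
```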
One solution could be to store your data differently, like:
{
  properties: [
    { key: 'designer', value: "Designer1" },
    { key: 'store', value: "Store1" },
    { key: 'category', value: "Category1" },
    { key: 'name', value: "Keyword" },
    { key: 'gender', value: "Mens" },
    { key: 'price', value: 70 },
  ]
}
Then you can create one index on:
db.so.ensureIndex( { 'properties.key': 1, 'properties.value': 1 } );
And do searches like:
db.so.find( { $and: [
  { properties: { $elemMatch: { key: 'designer', value: 'Designer1' } } },
  { properties: { $elemMatch: { key: 'price', value: { $gte: 30 } } } }
] } )

db.so.find( { $and: [
  { properties: { $elemMatch: { key: 'price', value: { $gte: 45 } } } }
] } )
In both cases, the index is used, but right now only for the first part of the $and element. So do check which key has the most distinct values, and order your $and elements accordingly in the query.