I have an application where users can follow each other. Once this relationship is made, a document is added to the collection. That document has two fields, follower and followee. I want to prevent insertion of duplicate relationships. I do not want to query the db, wait for a promise, then insert, as this seems like an inefficient approach. I'd rather stop a new document from saving if its follower and followee match those of an existing document.
Look into creating a Unique Compound Index:
db.members.createIndex( { follower: 1, followee: 1 }, { unique: true } )
The created index enforces uniqueness for the combination of follower and followee values.
A unique index ensures that the indexed fields do not store duplicate values; i.e. enforces uniqueness for the indexed fields. By default, MongoDB creates a unique index on the _id field during the creation of a collection.
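For illustration, a minimal sketch of what this gives you in the shell (the user values are hypothetical): the first insert succeeds, and the second insert of the same pair is rejected by the server, with no prior query needed.
// First insert of the pair succeeds.
db.members.insertOne( { follower: "alice", followee: "bob" } )
// A second insert of the same pair fails with an E11000 duplicate key
// error, so the application never has to check for duplicates itself.
db.members.insertOne( { follower: "alice", followee: "bob" } )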
Related
https://scalegrid.io/blog/fast-paging-with-mongodb/
Example:
{
    _id,
    name,
    company,
    state
}
I've gone through the two scenarios explained in the above link, and it says sorting by object id gives good performance when retrieving and sorting results. Instead of the default sort on the object id, I want to index my own custom fields "name" and "company" and sort and paginate on these two fields (both fields hold string values).
I am not sure how we can use $gt or $lt for a name; I'm currently blocked on how to provide pagination when a user sorts by name.
How do I index and paginate on two fields?
The answer to your question is:
db.Example.createIndex( { name: 1, company: 1 } )
For the pagination itself, the link you shared in your question explains it well. For example:
db.Example.find( { name: "John", company: "Ireland" } ).limit(10);
For sorting:
db.Example.find().sort( { name: 1, company: 1 } ).skip(userPassedOffset).limit(userPassedPageSize);
If the user requests documents 21-30 after sorting on name and then company, both in ascending order:
db.Example.find().sort( { name: 1, company: 1 } ).skip(20).limit(10);
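If skip() gets slow at large offsets (the problem the ScaleGrid article is about), here is a hedged sketch of the range-based alternative for string fields. $gt and $lt compare strings lexicographically, so they work on a name just as they do on numbers: remember the sort values of the last document on the current page (lastName and lastId are hypothetical variables here) and ask for everything after them.
// Fetch the next page after the last document of the current page.
// The $or handles ties: either a strictly greater name, or the same
// name with a greater _id as the tie-breaker.
db.Example.find({
    $or: [
        { name: { $gt: lastName } },
        { name: lastName, _id: { $gt: lastId } }
    ]
}).sort({ name: 1, _id: 1 }).limit(10);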
For a basic understanding of indexing in MongoDB:
Indexes support the efficient execution of queries in MongoDB. Without indexes, MongoDB must perform a collection scan, i.e. scan every document in a collection, to select those documents that match the query statement. If an appropriate index exists for a query, MongoDB can use the index to limit the number of documents it must inspect.
Indexes are special data structures that store a small portion of the collection’s data set in an easy to traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field.
Default _id Index
MongoDB creates a unique index on the _id field during the creation of a collection. The _id index prevents clients from inserting two documents with the same value for the _id field. You cannot drop this index on the _id field.
Create an Index
Syntax to execute on Mongo Shell
db.collection.createIndex( <key and index type specification>, <options> )
Ex:
db.collection.createIndex( { name: -1 } )
For ascending order use 1; for descending order use -1.
The above method only creates an index if an index of the same specification does not already exist.
Index Types
MongoDB provides different index types to support specific types of data and queries, but I would like to mention two important types:
1. Single Field
In addition to the MongoDB-defined _id index, MongoDB supports the creation of user-defined ascending/descending indexes on a single field of a document.
2. Compound Index
MongoDB also supports user-defined indexes on multiple fields, i.e. compound indexes.
The order of fields listed in a compound index has significance. For instance, if a compound index consists of { name: 1, company: 1 }, the index sorts first by name and then, within each name value, sorts by company.
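As a hedged illustration of that ordering rule, using the index from above (field values hypothetical): queries on the prefix field name can use { name: 1, company: 1 }, but queries on company alone cannot.
// Both of these can be served by the { name: 1, company: 1 } index:
db.Example.find( { name: "John" } )
db.Example.find( { name: "John", company: "Acme" } )
// This one cannot, because "company" is not a prefix of the index;
// without another index it falls back to a collection scan:
db.Example.find( { company: "Acme" } )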
Source for my understanding and this answer, and to learn more about MongoDB indexing, see the MongoDB Indexing documentation.
Say I have a collection that records correlation values between brands (never mind how such a correlation would be generated or interpreted). Then the fields in this collection would include: 'brand1', 'brand2', and 'correlation'.
For the sake of an example, let's say that brands can take on string values such as "google", "microsoft", etc., so that each document records the correlation between various brand names.
I would want to create a unique index on the 'brand1' and 'brand2' fields so that each document records the correlation between a pair of brands only once in the collection. In order to do this, the ordering of the key in the index must be taken into account when determining uniqueness in the collection. A key of ['google', 'microsoft'] should be considered the same as a key of ['microsoft', 'google'], so that if a document already exists with the former key, an insertion of a document with the latter key would be prohibited.
Is this kind of index possible?
There is no way to enforce that kind of constraint on a MongoDB collection.
What you can do, however, is enforce a constraint in your software that the two components of this index are always stored in sorted order. (For instance, always store ["a","b"], not ["b","a"].) This makes it so that there's only one "canonical" version of any pair in the collection.
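A minimal sketch of that software-side constraint in the mongo shell (the collection name and helper function are hypothetical), paired with a unique compound index so the canonical form is also enforced by the database:
// Unique index on the canonical (sorted) pair.
db.correlations.createIndex( { brand1: 1, brand2: 1 }, { unique: true } )
// Always sort the pair before writing, so ("microsoft", "google") and
// ("google", "microsoft") map to the same stored document.
function insertCorrelation(a, b, value) {
    var pair = [a, b].sort();
    db.correlations.insertOne( { brand1: pair[0], brand2: pair[1], correlation: value } );
}
With this in place, inserting the correlation for ("microsoft", "google") after ("google", "microsoft") already exists fails with a duplicate key error.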
I have two collections: items with 120,000 entries and itemHistories with more than 20 million entries. I periodically update all items and itemHistories by fetching an API that lists all history data for an item.
What I need to do is batch insert the history data into the collection while avoiding duplicates. Also, the history API returns only date, info, and item_id values.
Is it possible to batch insert in Mongo so that it doesn't add duplicates for the two values (date, item_id) combined? So if there is already an entry with the same date and item_id, don't add it. Basically, the date is a unique index per item_id: duplicate date values are allowed in the collection, but only if the item_id is different for all the duplicates.
One item can have close to a million entries, so I don't think fetching the history from the collection and comparing it to the API response is going to be optimal.
My current idea was to add another key to the collection called hash that is an md5(date, info, item_id) and make it a unique index. Suggestions?
After a little digging in the documentation of Mongoose and MongoDB, I found out that there is a thing called a Unique Compound Index that solves my problem and answers this question. Since I'd never used indexes before, I didn't know such a thing was possible.
You can also enforce a unique constraint on compound indexes. If you use the unique constraint on a compound index, then MongoDB will enforce uniqueness on the combination of the index key values. For example, to create a unique index on the groupNumber, lastname, and firstname fields of the members collection, use the following operation in the mongo shell:
db.members.createIndex( { groupNumber: 1, lastname: 1, firstname: 1 }, { unique: true } )
Source: https://docs.mongodb.org/manual/core/index-unique/
In my case I can use the code below to avoid duplicates:
db.itemHistories.createIndex( { date: 1, item_id: 1 }, { unique: true } )
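With that index in place, one way to batch insert the API results while skipping duplicates is an unordered insertMany: the server attempts every document, duplicates fail individually, and the rest are inserted (a sketch; apiDocuments is a hypothetical array of { date, info, item_id } objects):
try {
    // ordered: false keeps going past duplicate key errors instead of
    // aborting the whole batch at the first one.
    db.itemHistories.insertMany(apiDocuments, { ordered: false });
} catch (e) {
    // e reports the duplicate key errors; the non-duplicates were still inserted.
}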
Let's say you have a collection with a field called "primary_key",
{"primary_key":"1234", "name":"jimmy", "lastname":"page"}
and I have an index on "primary_key".
This collection has millions of rows. I want to see how expensive it is to change primary_key for one of the records. Does it trigger a reindex of the entire table, or does it just reindex the changed record? In either case, is that expensive to do?
Updating an indexed field in MongoDB causes an update of the index (or indices, if more than one uses it). It does not "reindex". This shouldn't be all that expensive: effectively, the old index entry is deleted and a new one is inserted.
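For concreteness, the operation in question is just an update like this (values hypothetical); only the affected document's entry in the primary_key index is rewritten:
// The old index entry for "1234" is removed and a new entry for "5678"
// is inserted; the rest of the index is untouched.
db.collection.updateOne(
    { primary_key: "1234" },
    { $set: { primary_key: "5678" } }
)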
This document has a fair amount of detail on mongodb indexes:
http://docs.mongodb.org/master/MongoDB-indexes-guide.pdf
BTW, keep in mind that there is one special field, _id, that MongoDB uses as its primary key:
_id
A field required in every MongoDB document. The _id field must have a unique value. You can think of the _id field as the document’s primary key. If you create a new document without an _id field, MongoDB automatically creates the field and assigns a unique BSON ObjectId.
You cannot update the _id field.
I am trying to create a collection with 50+ fields. I understand that the purpose of the primary key is to uniquely identify a record. Since the primary key in MongoDB is the _id, which gets created automatically, isn't it obvious that all my records, including duplicates, would go into my DB with a unique _id for every record? Tell me where I'm going wrong. Other articles and discussions are more confusing.
How do I set any one or more of the other fields as a primary key, instead of the default _id?
In what way are compound indexes different from a compound/primary key?
There is no such notion as a primary key in MongoDB. Terminology matters. Not knowing the terminology is a sure sign someone hasn't read the docs or at least not carefully.
A document in a collection must have an _id field, which may be, and by default is, an ObjectId. This field has an index on it which enforces a unique constraint, so there cannot be two documents with the same value or combination of values in the _id field. Which, by what you describe, is presumably what you want.
My suggestion is to reuse the default _id as often as you can, since additional indices are expensive (RAM-wise). You have two options here: either use a different single value as _id, or use multiple values if the cardinality of a single field isn't enough.
Let us assume you want a clickstream recorded per user. Obviously, you need the unique user, but that alone would not be enough, since each user could then have only one entry. But since you need a timestamp for each click anyway, you move it into the _id field:
{
    _id: {
        user: "some user",
        ts: new ISODate()
    },
    ...
}
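A hedged sketch of writing such a click (the collection name and payload field are hypothetical); the built-in unique index on _id now covers the (user, ts) combination, so no extra index is needed for the constraint:
db.clicks.insertOne({
    _id: {
        user: "some user",
        ts: new ISODate()
    },
    target: "/pricing"  // hypothetical payload field
})
One caveat: BSON compares embedded documents field by field in order, so always build the _id with its fields in the same order, or { user, ts } and { ts, user } will count as different values.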
Unless your Mongo installation is sharded, you can create a unique compound index on multiple fields and use this as a surrogate composite primary key:
db.collection.createIndex( { a: 1, b: 1 }, { unique: true } )
Alternatively, you could create your own _id values. However, as the default ObjectId also embeds a timestamp, personally I find it useful for auditing purposes.
Regarding the difference between a compound index and a composite primary key: by definition, primary keys cannot be defined on missing (null) fields, and there can only be one primary key per document. In MongoDB, only the _id field can be used as a primary key, as it is added by default when missing. In contrast, a compound index can be applied to missing fields by defining it as sparse, and you can define multiple compound indices on the same document.
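For the sparse behaviour mentioned above, a minimal sketch (field names as in the earlier example):
// A sparse unique compound index skips documents that are missing all of
// the indexed fields, so such documents do not collide on a null key.
db.collection.createIndex( { a: 1, b: 1 }, { unique: true, sparse: true } )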