MongoDB: Unique and sparse compound indexes with sparse values

I'm trying to store the following link:
URL = {
    hostname: 'i.imgur.com',
    webid: 'qkELz.jpg'
}
I want a unique and sparse compound index on these two fields because:
A combination of hostname and webid should be unique.
webid will always be queried with hostname.
webid need not be globally unique.
A URL need not have a webid.
However, when I do this, I get the following error:
MongoError: E11000 duplicate key error index: db.urls.$hostname_1_webid_1 dup key: { : "imgur.com", : null }
I guess that with sparse compound indexes a document is indexed as long as it has at least one of the indexed fields (missing fields are stored as null), whereas a sparse single-field index skips the document entirely.
Any way out of this problem? For now I'm just going to index hostname and webid separately.

Keep in mind that MongoDB can generally use only one index per query (it won't combine two separate single-field indexes to make a query on both fields faster).
That said, if you want to try to check for uniqueness, you could do a query from the app before inserting (which only partially solves the problem, because there's a gap between when you query and when you insert).
You might want to vote on this JIRA issue for filtered indexes, which will probably help your use case:
https://jira.mongodb.org/browse/SERVER-785
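Update: SERVER-785 was eventually implemented as partial indexes in MongoDB 3.2, which fit this case exactly. A minimal sketch, assuming MongoDB 3.2+ and the collection name from the error message:
db.urls.createIndex(
    { hostname: 1, webid: 1 },
    { unique: true, partialFilterExpression: { webid: { $exists: true } } }
)
// Documents without a webid are left out of the index entirely,
// so missing values can no longer collide as null.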

Related

Optimizing MongoDB indexing (two fields query)

I have two fields, scheduledStamp and email, in a MongoDB collection called inventory.
I have the following JPA query:
fun findAllByScheduledStampAfterAndEmailEquals(scheduledStamp: Long, email: String): List<Inventory>
What is the best way to index this collection?
I want to have as few indexes as possible, avoiding unnecessary indexes.
Knowing that:
This collection can have more than a million entries (an index is needed).
Querying by:
db.inventory.find({ scheduledStamp: { $gt: 1594048295294 } })
is guaranteed to return only a few entries.
Querying by:
db.inventory.find({ email: "abc@gmail.com" })
is also guaranteed to return only a few entries.
If you need to support queries on email alone, an index on email is a must.
If you need to support queries on scheduledStamp alone, an index on scheduledStamp is a must.
If you want to query on both, a third index would be required. But you can create a compound index that covers both the combined query and one of the single-field queries above.
Since Mongo uses prefix matching when selecting an index:
You may have indexes on {"email": 1} and {"scheduledStamp": 1, "email": 1}
OR
You may have indexes on {"scheduledStamp": 1} and {"email": 1, "scheduledStamp": 1}
But since you said these fields each return few documents:
Just having the two indexes {"email": 1} and {"scheduledStamp": 1} may perform well, if not optimally.
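As a sketch, the second combination from above would look like this (collection name taken from the question):
// Serves queries on scheduledStamp alone:
db.inventory.createIndex({ scheduledStamp: 1 })
// Serves queries on email alone (via its email prefix) and the combined query:
db.inventory.createIndex({ email: 1, scheduledStamp: 1 })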

MongoDB Indexing a field which may not exist

I have a collection which has an optional field xy_id. About 10% of the documents (out of 500k) do not have this xy_id field.
I have quite a lot of queries to this collection like find({xy_id: <id>}).
I tried indexing it normally (.createIndex({xy_id: 1}, {"background": true})) and it does improve the query speed.
Is this the correct way to index the field in this case? or should I be using a sparse index or another way?
Yes, this is the correct way. The default behaviour of MongoDB serves well in this case. You can see in the docs that index creation supports a sparse flag, which is false by default. All of your documents missing the index key will be indexed under a single index entry (null). Queries can use this index in all cases, because all of the documents are indexed.
On the other hand, if you use a sparse index, documents missing the index key will not be indexed at all. Some operations, such as count, sort, and other queries, will not use the sparse index unless explicitly hinted to do so. If you do hint it, you have to be okay with incomplete results: documents that are not in the index will be omitted. You can read about it here.
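A quick illustration of the default (non-sparse) behaviour, using a throwaway collection:
db.test_xy.insert([{ xy_id: 1 }, {}])   // the second document has no xy_id
db.test_xy.createIndex({ xy_id: 1 })    // non-sparse: both documents are indexed
db.test_xy.find({ xy_id: 1 })           // uses the index
db.test_xy.find({ xy_id: null })        // also uses the index; matches the document missing the field
db.test_xy.find().sort({ xy_id: 1 })    // can be served by the index, since every document is in it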

Unique multi key hashed index in MongoDB

I have a collection with several billion documents and need to create a unique multi-key index for every attribute of my documents.
The problem is, I get an error if I try to do that because the generated keys would be too large.
pymongo.errors.OperationFailure: WiredTigerIndex::insert: key too large to index, failing
I found out MongoDB lets you create hashed indexes, which would resolve this problem; however, they cannot be used for multi-key indexes.
How can I resolve this?
My first idea was to add another attribute to each of my documents holding a hash of every attribute value, and then to create an index on that new field.
However, this would mean recalculating the hash every time I add a new attribute, plus the excessive amount of time necessary to create both the hashes and the indexes.
This is a limit enforced since MongoDB 2.6 to prevent the total size of an index entry from exceeding 1024 bytes (also known as the Index Key Length Limit).
In MongoDB 2.6, if you attempt to insert or update a document so that the value of an indexed field is longer than the Index Key Length Limit, the operation will fail and return an error to the client. In previous versions of MongoDB, these operations would successfully insert or modify a document but the index or indexes would not include references to the document.
For migration purposes and other temporary scenarios, you can revert to the 2.4 handling of this case, in which the exception is not raised, by setting this MongoDB server flag:
db.getSiblingDB('admin').runCommand( { setParameter: 1, failIndexKeyTooLong: false } )
This however is not recommended.
Also consider that creating indexes for every attribute of your documents may not be the optimal solution at all.
Have you examined how you query your documents and which fields you key on? Have you used explain() to view the query plan? It would be an exception to the rule if you really query on all fields all the time.
Here are the recommended MongoDB indexing strategies.
Excessive indexing has a price as well and should be avoided.
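For example, explain() shows whether a query is actually served by an index (collection and field names here are made up):
db.items.find({ attrA: "x", attrB: "y" }).explain("executionStats")
// In the output, an IXSCAN stage in winningPlan names the index used;
// a COLLSCAN stage means the query fell back to scanning the whole collection.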

Dealing with mongodb unique, sparse, compound indexes

Because MongoDB will include a document in a sparse, compound index whenever it contains at least one of the indexed fields, my unique, sparse index is failing: one of those fields is optional, and is coerced to null by MongoDB for the purposes of the index.
I need database-level assurance of uniqueness for the combination of this field and a few others, and having to manage this at the application level via some concatenated string worries me.
As an alternative, I considered setting the default value of the possibly-null indexed field to 'null ' + anObjectId, because it would allow me to keep the index without causing errors. Does this seem like a sensible (although hacky) solution? Does anyone know of a better way to enforce database-level uniqueness on a compound index?
Edit: I was asked to elaborate on the actual problem domain a bit more, so here it goes.
We get large data feeds from our customers that we need to integrate into our database. These feeds include three unique identifiers supplied by the customer, which we use to update the versions we store in our database when the feeds refresh. I need to tie the uniqueness of these identifiers to the customer, because the same identifier could appear from multiple sources, and we want to allow that.
The document structure looks like this:
{
    "identifiers": {
        "identifierA": ...,
        "identifierB": ...,
        "identifierC": ...
    },
    "client": ...
}
Because each individual identifier is optional (at least one of the three is required), I need to uniquely index the combination of each identifier with the client (e.g. one index is the combination of client plus identifierA). However, each such index must only apply when the identifier exists, and this is not supported by MongoDB (see the hyperlink above).
I was considering the above solution, but I would like to hear if anyone else has solved this or has suggestions.
https://docs.mongodb.org/manual/core/index-partial/
As of MongoDB 3.2 you can create a partial index to support this:
db.users.createIndex(
    { name: 1, email: 1 },
    { unique: true, partialFilterExpression: { email: { $exists: true } } }
)
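Adapted to the schema in the question, that becomes one unique partial index per optional identifier (a sketch; the collection name is assumed):
db.feeds.createIndex(
    { client: 1, "identifiers.identifierA": 1 },
    { unique: true,
      partialFilterExpression: { "identifiers.identifierA": { $exists: true } } }
)
// ...and likewise for identifierB and identifierC. Each index only enforces
// uniqueness among documents that actually carry that identifier.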
A sparse index skips documents that are missing the indexed field.
A unique index prevents documents with the same values for the indexed fields from being inserted.
Unfortunately as of MongoDB 2.6.7, the unique constraint is always enforced even when creating a compound index (indexing two or more fields) with the sparse and unique properties.
Example:
db = connect("localhost:27017/test");
db.a.drop();
db.a.insert([
    {},
    {a : 1},
    {b : 1},
    {a : 1, b : 1}
]);
db.a.ensureIndex({a : 1, b : 1}, { sparse : true, unique : true });
db.a.insert({a : 1}); // throws a duplicate key error, but we wanted this insert to be valid
However, it works as expected for a single-field index with the sparse and unique properties.
I feel like this is a bug that will get fixed in future releases.
Anyhow, here are two solutions to get around this problem.
1) Add a hash field to each document, computed only when all the fields required for the uniqueness check are supplied (and omitted otherwise).
Then create a sparse unique index on the hash field.
function createHashForUniqueCheck(obj) {
    // hex_md5() is built into the legacy mongo shell; use any MD5 helper elsewhere
    if (obj.firstName && obj.id) {
        return hex_md5(String(obj.firstName) + String(obj.id));
    }
    return null;
}
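A sketch of how this ties into the index. One subtlety: the hash field must be omitted entirely (not set to null) when it can't be computed, because a sparse index still indexes a field whose value is null (the field name uniqueHash is made up):
db.a.createIndex({ uniqueHash: 1 }, { sparse: true, unique: true });
var doc = { firstName: "Ada", id: 42 };
var h = createHashForUniqueCheck(doc);
if (h !== null) {
    doc.uniqueHash = h; // only set the field when the hash could be computed
}
db.a.insert(doc);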
2) On the application side, check for uniqueness before insertion into Mongodb. :-)
sparse index doc
A hash index ended up being sufficient for this

MongoDB - Unique index vs compound index

Assume a hypothetical document with 3 fields:
_id : ObjectId
emailAddress : string
account : string
Now, given a query on emailAddress AND account, which of the following two indexes will perform better:
Unique index on emailAddress alone (assume it is a unique field)
Compound index on account and emailAddress
In terms of performance, the difference will be small at best. Because your e-mail addresses are unique, a compound index that includes the e-mail field will never be more helpful than an index on the e-mail address alone. The reason is that the e-mail field already has maximal cardinality for your collection, so additional index fields will not help the database filter records more quickly; it always arrives at the correct documents with the e-mail field alone.
In terms of memory usage (which is very important for databases like MongoDB), the e-mail index alone is much smaller as well.
TL;DR: Use the index on e-mail address alone.
When it comes to indexes, the goal is to create a single index with the highest possible cardinality (or "selectivity"). Try to write queries that use one (compound) index per query. Unique indexes have maximum cardinality, and compounding a unique index with less selective fields cannot increase that maximum any further. Adding more indexes just slows down insert(), update() and remove() operations. So be "lean and mean".
However, if you are using sort() on the account field while doing a find() on the email field, then you should use a compound index:
it's common to query on multiple keys and to sort the
results. For these situations, compound indexes are best.
http://www.mongodb.org/display/DOCS/Indexing+Advice+and+FAQ
So think it through! If you need to sort data by another field, then you usually need a compound index.
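A minimal sketch of that find-plus-sort case, using the field names from the question (collection name assumed):
// The compound index serves both the equality filter and the sort:
db.accounts.createIndex({ emailAddress: 1, account: 1 })
db.accounts.find({ emailAddress: "user@example.com" }).sort({ account: 1 })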