Creating unique index for CosmosDB with MongoDB API fails - mongodb

I'm using Azure CosmosDB with the MongoDB API. I'm trying to execute the following:
db.createCollection('test')
db.test.createIndex({key: 1}, {name: 'key_1', unique: true})
However, doing so fails with the following error:
The unique index cannot be modified. To change the unique index, remove the collection and re-create a new one.
When reading about it in the documentation and on Stack Overflow, it's mentioned that you can only create a unique index on an empty collection.
However, the following command returns no documents, so my collection appears to be empty, and that apparently isn't the reason it's failing:
db.test.find()
I tried to recreate the collection several times, but to no avail.

Judging by the format of the query, it looks like a wildcard-style index definition, and unfortunately, if that is the case, wildcard indexing has a limitation around unique indexes: in the Azure Cosmos DB API for MongoDB, a wildcard index cannot be unique, because creating a wildcard index is effectively the same as indexing multiple specific fields.
https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/mongodb-indexing
See the link above for details.
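For comparison, this is what a wildcard index definition looks like (the address field name is illustrative); it is this style of index that cannot be combined with the unique option:
// Hypothetical example of a wildcard index over all fields under "address"
db.test.createIndex({"address.$**": 1})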

In my case, the problem was caused by a limitation of the continuous backup of CosmosDb.
As stated in the documentation:
Azure Cosmos DB API for SQL or MongoDB accounts that create unique index after the container is created aren't supported for continuous backup. Only containers that create unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique index using extension commands.
I had to create my unique index using these extension (custom) commands, as sketched below.
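A minimal sketch, assuming the collection and index from the question; the customAction extension command creates the collection and its unique index in a single step, which satisfies the continuous-backup restriction:
db.runCommand({
    customAction: "CreateCollection",
    collection: "test",
    indexes: [
        {key: {_id: 1}, name: "_id_1"},
        {key: {key: 1}, name: "key_1", unique: true}
    ]
})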

Related

Failed to target upsert by query :: could not extract exact shard key', details={}}.; nested exception is com.mongodb.MongoWriteException

We have recently moved the MongoDB Java driver (core/sync) from 3.12.1 to 4.4.0, Spring Data MongoDB from 2.2.5.RELEASE to 3.3.0, and Spring Boot to 2.6.2; the MongoDB server version is 4.2.5.
We are getting the above-mentioned exception when running upsert queries on a sharded collection.
There is a way to include the shard key in the query filter, but that is not feasible for us, so we tried adding the @Sharded annotation to our DTOs, since we have different shard keys for different collections.
We still get the above-mentioned error, and we are also unable to work out what "full copy of the entity" means in the statement below:
update/upsert operations replacing/upserting a single existing document as long as the given UpdateDefinition holds a full copy of the entity.
Other queries work fine, and the upsert queries also work once the shard key is added to the query filter, but that change is not feasible for us; we need a quick solution.
Please help, as we have not been able to find a solution on any platform. Thanks in advance!
So here's the deal:
You cannot upsert on a sharded collection in MongoDB UNLESS you include the shard key in the filter of the update operation.
To quote MongoDB Docs:
For a db.collection.update() operation that includes upsert: true and is on a sharded collection, you must include the full shard key in the filter:
For an update operation.
For a replace document operation (starting in MongoDB 4.2).
If you are using MongoDB 4.4+ then you have a workaround as mentioned below:
However, starting in version 4.4, documents in a sharded collection can be missing the shard key fields. To target a document that is missing the shard key, you can use the null equality match in conjunction with another filter condition (such as on the _id field). For example:
{ _id: <value>, <shardkeyfield>: null } // _id of the document missing shard key
Ref: https://docs.mongodb.com/manual/reference/method/db.collection.update/#behavior
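For completeness, a minimal sketch of a compliant upsert, assuming a collection sharded on an accountId field (the collection and field names are illustrative):
// Include the full shard key in the filter so the upsert can be targeted
db.orders.updateOne(
    {_id: ObjectId("507f1f77bcf86cd799439011"), accountId: "a-123"},
    {$set: {status: "active"}},
    {upsert: true}
)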

Azure CosmosDb with mongo error - "MongoError: query in command must target a single shard key"

I have a Cosmos DB database with a shard key. When my service runs, it removes all documents matching the shard key field and then inserts the next batch. During tests, however, I got duplicate inserts without any error notifications. I don't have permission to delete the collection directly; this is a customer environment and there is a formal process for database changes.
I'd like to remove all of the collection's documents, but there are duplicated shard key values, and this error message is thrown:
MongoError: query in command must target a single shard key
It happens both when listing by shard key and when trying to remove by shard key.
Hey friends, can somebody give me a hand?
Thank you!!!
As an engineer said on the website I mentioned in the comment:
We have got the same feedback from the PG team that you got on the support ticket: drop the entire collection using drop() instead of deleteMany(). So please follow the same for getting this issue resolved.
So the solution to your error is to use drop() instead.
To remove all of a collection's documents, you can use the drop() command; for more information, see the MongoDB documentation on drop().
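In the mongo shell that is a one-liner (the collection name here is a placeholder):
// Drops the entire collection, including its documents and indexes
db.myCollection.drop()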
It was a bug. To fix it, I had to remove all of the data and move it to another collection with a single shard key.
The best solution was to switch to MongoDB Atlas 😋

How to create a collection specific search index in Atlas mongoDB?

I am trying to create a collection-specific search index in Atlas MongoDB but am not able to find a way.
Found this documentation:
https://docs.atlas.mongodb.com/reference/api/atlas-search/#atlas-search-api-ref
but the API they mention creates indexes at the cluster level:
/groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/fts/indexes/
Can someone guide me on how I will create a collection specific search index via script?
When you use that API endpoint, you must provide the collection name in the request body; that is what scopes the index to a single collection.
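A sketch of the request body for a POST to that endpoint, with illustrative database and collection names; the database and collectionName fields tie the search index to one collection:
{
    "database": "sample_mflix",
    "collectionName": "movies",
    "name": "default",
    "mappings": {"dynamic": true}
}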

mongodb duplicate a collection within the same database

I want to clone an existing collection, including its data and indexes, to a new collection with another name within the same database, using the MongoDB JSON interface (not the command-line interface).
I've tried:
cloneCollection - didn't work; it is for cloning across databases.
aggregate with an $out operator - that copies just the data but not the indexes.
The aggregate command I've tried:
{"aggregate":"orig_coll", "pipeline":[{"$out":"orig_clone"}]}
There is no way to do this in one JSON query.
So, two solutions here:
Using mongodump/mongorestore as proposed in What's the fastest way to copy a collection within the same database?
Using two queries: one to create the destination collection with the indexes, plus the aggregate query you already have (see the sketch after this list). I understand that it's not a perfect solution, since you need to keep the index-creation query for the destination collection in sync with the indexes on the source collection, but there is no other way to do this.
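A sketch of the two-query approach in the same JSON command style (the email index is illustrative). First, create the destination collection's indexes; createIndexes implicitly creates the collection if it does not exist:
{"createIndexes": "orig_clone", "indexes": [{"key": {"email": 1}, "name": "email_1", "unique": true}]}
Then copy the data; $out preserves the indexes that already exist on the destination collection:
{"aggregate": "orig_coll", "pipeline": [{"$out": "orig_clone"}], "cursor": {}}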
What you need to understand is that the JSON interface, as you call it, is not a database administration interface but a JavaScript query language for the database: you pass queries to it, not commands. In fact, it's not really an interface at all, just a query DSL. The actual interfaces are the mongo shell, any of the MongoDB drivers (Java, Perl, ...), or any of the MongoDB admin tools.

Mongodb database with multiple unique indexes in sharded config

I have a "users" collection in a database in mongodb. I will be using sharding.
Other than the standard _id, there are 3 other fields that need to be indexed and unique: username, email, and account number. Not all three of these fields will exist on every user document; in some cases none will exist.
I'd like the fields to be indexed because users will be looked up frequently by one of these fields. I'd like the fields to be unique because this is a requirement and I'd rather not handle this logic client-side.
I understand that mongodb does have limitations, just like any other database, but I'm hoping there's a solution because this is a fairly common setup for web applications.
Is there an elegant solution for this scenario?
Not sure if it matters for this question (because the question pertains to database structure), but I am using the official mongodb C# driver.
The official MongoDB documentation says that a sharded collection can have only one unique index and requires the unique field(s) to exist. But it also says you have the option of other unique indexes if and only if the shard key is a prefix of their key pattern. So you can try that, but be aware that the unique field(s) must always exist.
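A minimal sketch of the prefix rule, assuming a hypothetical tenantId shard key:
// A unique index on a sharded collection is only allowed when the
// shard key (tenantId here) is a prefix of the index key pattern
sh.shardCollection("mydb.users", {tenantId: 1})
db.users.createIndex({tenantId: 1, email: 1}, {unique: true})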
I don't understand the business logic in which no identifying information about the user would exist. In that case, you can shard by _id and perform the uniqueness checks manually.