Unique compound index in MongoDB throwing DuplicateKeyException on first-time insertion

I have created a compound index on a collection in mongo db using mongo-java-driver(3.6.3).
IndexOptions indexOptions = new IndexOptions().unique(true);
mongoCollection.createIndex(Indexes.ascending("f1","f2"), indexOptions);
On insertion of a record,
mongoCollection.insertOne(dBObject);
The driver throws a DuplicateKeyException even though there is no record in the database with the same f1 and f2:
E11000 duplicate key error collection: db-test.test1 index: f1_1_f2_1 dup key: { : "testf1", : "testf2" }
If I check through a MongoDB client, the record has actually been inserted in the database, yet the application code still gets the exception.
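To make the constraint itself concrete, here is a minimal in-memory sketch (no real MongoDB involved; FakeCollection and its names are invented for illustration) of what a unique compound index on (f1, f2) enforces: the first insert of a pair succeeds, and any later insert of the same pair is rejected with a duplicate-key error.

```python
# In-memory simulation of a unique compound index on (f1, f2).
# This only models the constraint; it is not a MongoDB client.

class DuplicateKeyError(Exception):
    pass

class FakeCollection:
    def __init__(self, unique_keys):
        self.unique_keys = unique_keys   # e.g. ("f1", "f2")
        self.docs = []
        self._seen = set()               # stands in for the unique index

    def insert_one(self, doc):
        key = tuple(doc[k] for k in self.unique_keys)
        if key in self._seen:
            raise DuplicateKeyError("E11000 duplicate key error, dup key: %r" % (key,))
        self._seen.add(key)
        self.docs.append(doc)

coll = FakeCollection(("f1", "f2"))
coll.insert_one({"f1": "testf1", "f2": "testf2"})      # first insert succeeds
try:
    coll.insert_one({"f1": "testf1", "f2": "testf2"})  # same pair -> E11000
except DuplicateKeyError as e:
    print("second insert rejected:", e)
```

If the application really is performing only one insert and still sees E11000 while the document lands in the collection, the insert has effectively run twice against the same index entry, which is worth verifying on the server side.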

Related

Mongo's bulkWrite with updateOne + upsert works the first time but gives duplicate key error subsequent times

I'm using the Mongo-php-library to insert many documents into my collection (using bulkWrite). I want each document to be updated if it already exists, or inserted if it doesn't, so I'm using "upsert = true".
The code works fine the first time I run it (it inserts the documents), but the second time I run it it gives me this error:
Fatal error: Uncaught MongoDB\Driver\Exception\BulkWriteException: E11000 duplicate key error collection: accounts.posts index: postid dup key: { id: "2338...
I can't see anything wrong with my code. I have already gone through all SO posts but none helped.
This is my code:
// I prepare the array $post_operations with all updateOne operations
// where $data is an object that contains all the document elements I want to insert
$posts_operations = array();
foreach ($this->posts as $id => $data) {
    array_push($posts_operations, array(
        'updateOne' => [['id' => $id], ['$set' => $data], ['upsert' => true]]
    ));
}
// Then I execute the method bulkWrite to run all the updateOne operations
$insertPosts = $account_posts->bulkWrite($posts_operations);
It works the first time (when it inserts), but then it doesn't the second time (when it should update).
I have a unique index set up in the collection for 'id'.
Thanks so much for your help.
OK, I was able to fix it. I believe this might be a bug and I've reported it already in the GitHub repo.
The problem occurred only when "id" was a string of numbers. Once I converted "id" (the field that I was indexing) to an integer, it worked perfectly.
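A plausible mechanism (an assumption, not a confirmed diagnosis): PHP silently casts numeric-string array keys to integers, so the filter used an integer id while the stored document held a string. MongoDB compares filter values by type as well as value, so the filter never matched, every run upserted a fresh document, and the unique index on "id" then fired. This toy Python model imitates that type-sensitive matching (upsert here is an invented helper, not a driver call):

```python
# Toy model of why a type mismatch turns an upsert into a repeated insert.
# MongoDB matches filter values by type as well as value, so the string
# "2338" never matches the integer 2338.

def upsert(docs, filter_, update):
    """Tiny upsert model: update the first matching doc, else insert."""
    for doc in docs:
        if all(doc.get(k) == v and type(doc.get(k)) is type(v)
               for k, v in filter_.items()):
            doc.update(update)
            return "updated"
    new_doc = dict(filter_)
    new_doc.update(update)
    docs.append(new_doc)
    return "inserted"

docs = []
upsert(docs, {"id": "2338"}, {"views": 1})         # first run: inserted
result = upsert(docs, {"id": 2338}, {"views": 2})  # int filter: no match!
# result is "inserted" again -> a second document; a unique index on
# "id" would reject it with E11000
```

Converting the id to a consistent type on both the filter and the stored documents, as the answer above did, removes the mismatch.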

Why would this upsert fail with a duplicate id exception?

This one keeps cropping up from time to time. I have the operation written as an upsert, but every so often the service crashes because it runs into this error, and I don't understand how it's even possible. I do the upsert using the SurveyId as the key on which to match:
await _surveyRepository.DatabaseCollection.UpdateOneAsync(
    Builders<SurveyData>.Filter.Eq(survey => survey.SurveyId, surveyData.SurveyId),
    Builders<SurveyData>.Update
        .Set(survey => survey.SurveyLink, surveyData.SurveyLink)
        .Set(survey => survey.ClientId, surveyData.ClientId)
        .Set(survey => survey.CustomerFirstName, surveyData.CustomerFirstName)
        .Set(survey => survey.CustomerLastName, surveyData.CustomerLastName)
        .Set(survey => survey.SurveyGenerationDateUtc, surveyData.SurveyGenerationDateUtc)
        .Set(survey => survey.PortalUserId, surveyData.PortalUserId)
        .Set(survey => survey.PortalUserFirst, surveyData.PortalUserFirst)
        .Set(survey => survey.PortalUserLast, surveyData.PortalUserLast)
        .Set(survey => survey.Tags, surveyData.Tags),
    new UpdateOptions { IsUpsert = true })
    .ConfigureAwait(false);
And I'll occasionally get this error:
Message: A write operation resulted in an error. E11000 duplicate key error collection: surveys.surveys index: SurveyId dup key: { : "" }
The id is a string representation of a Guid and is set to unique in mongo.
So why would this happen? It is my understanding that if it finds the key, it'll update the defined properties, and if not, it'll insert. Is that not correct? Because that is the effect I need.
C# driver version is 2.4.1.18
This happens because, as explained in this Jira ticket:
During an update with upsert:true option, two (or more) threads may attempt an upsert operation using the same query predicate and, upon not finding a match, the threads will attempt to insert a new document. Both inserts will (and should) succeed, unless the second causes a unique constraint violation.
It is my understanding that if it finds the key, it'll update the defined properties, and if not, it'll insert. Is that not correct?
Yes, that's what upsert does. The newly inserted document will contain all fields from the criteria part (in your case, SurveyId) as well as from the update modification part (all the other specified fields) of your update query.
You need to set upsert=false in your query. Then it will only update documents with matching criteria, and update will fail if no match is found.
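The mitigation usually suggested alongside that ticket, if you do need upsert semantics, is to retry the upsert once on a duplicate-key error: on the retry, the filter matches the document the competing thread just inserted, so the operation becomes a plain update. A hedged Python sketch (the names and the simulated failure are invented for illustration):

```python
# Sketch of the retry-on-duplicate-key mitigation for the upsert race.

class DuplicateKeyError(Exception):
    pass

def upsert_with_retry(do_upsert, retries=1):
    # Retry the whole upsert on a duplicate-key error; on the retry the
    # filter should match the freshly inserted document.
    for attempt in range(retries + 1):
        try:
            return do_upsert()
        except DuplicateKeyError:
            if attempt == retries:
                raise

# Simulate losing the race on the first attempt only.
calls = []
def flaky_upsert():
    calls.append(1)
    if len(calls) == 1:
        raise DuplicateKeyError("E11000 duplicate key error")
    return "updated"

print(upsert_with_retry(flaky_upsert))  # prints "updated" after one retry
```

A real implementation would catch the driver's duplicate-key exception type and re-issue the same UpdateOne call.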

mongo_connector.errors.OperationFailed: insertDocument :: caused by :: 11000 E11000 duplicate key error index

I have an existing Python method which performs the update operation properly in MongoDB. I got a requirement where, if any document is modified in MongoDB, I need to create a new document in the same collection for audit purposes. So I added the piece of code below under the existing method to perform the insert operation.
self.mongo[db][coll].insert(update_spec)
As I'm passing the same ObjectId of the existing document to my insert operation, it fails with the exception below:
mongo_connector.errors.OperationFailed: insertDocument :: caused by :: 11000 E11000 duplicate key error index: mongotest1.test.$_id_ dup key: { : ObjectId('57dc1ef45cc819b6645af91d') }
Is there a way to ignore the ObjectId of the existing document, so that I can insert the other existing values in my new document? Kindly suggest. Below is the complete code.
def update(self, document_id, update_spec, namespace, timestamp):
    """Apply updates given in update_spec to the document whose id
    matches that of doc.
    """
    db, coll = self._db_and_collection(namespace)
    self.mongo[db][coll].insert(update_spec)
    self.meta_database[meta_collection_name].replace_one(
        {self.id_field: document_id, "ns": namespace},
        {self.id_field: document_id,
         "_ts": timestamp,
         "ns": namespace},
        upsert=True)
    no_obj_error = "No matching object found"
    updated = self.mongo[db].command(
        SON([('findAndModify', coll),
             ('query', {'_id': document_id}),
             ('update', update_spec),
             ('new', True)]),
        allowable_errors=[no_obj_error])['value']
    return updated
Delete the _id field from the update_spec object before inserting. The specific way to do this depends on which field of update_spec is the id, and what object type it is.
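For example, a minimal sketch of that fix in Python (without_id is a hypothetical helper name): copy the document and drop "_id" before inserting, so MongoDB assigns a fresh ObjectId to the audit copy instead of colliding with the original.

```python
# Copy the document without its "_id" so the insert gets a fresh ObjectId.

def without_id(update_spec):
    audit_doc = dict(update_spec)   # shallow copy; don't mutate the input
    audit_doc.pop("_id", None)      # remove the id if present
    return audit_doc

original = {"_id": "57dc1ef45cc819b6645af91d", "status": "modified"}
audit = without_id(original)
# audit == {"status": "modified"}; original still has its _id
# self.mongo[db][coll].insert(audit)   # would now get a fresh _id
```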

ReplaceOne throws duplicate key exception

My app receives data from a remote server and calls ReplaceOne to either insert a new document or replace an existing one with a given key, with Upsert = true (the key is anonymized with *). The code runs in a single thread only.
However, occasionally, the app crashes with the following error:
Unhandled Exception: MongoDB.Driver.MongoWriteException: A write operation resulted in an error.
E11000 duplicate key error collection: ****.orders index: _id_ dup key: { : "****-********-********-************" } ---> MongoDB.Driver.MongoBulkWriteException`1[MongoDB.Bson.BsonDocument]: A bulk write operation resulted in one or more errors.
E11000 duplicate key error collection: ****.orders index: _id_ dup key: { : "****-********-********-************" }
at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)
at MongoDB.Driver.MongoCollectionBase`1.ReplaceOne(FilterDefinition`1 filter, TDocument replacement, UpdateOptions options, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at MongoDB.Driver.MongoCollectionBase`1.ReplaceOne(FilterDefinition`1 filter, TDocument replacement, UpdateOptions options, CancellationToken cancellationToken)
at Dashboard.Backend.AccountMonitor.ProcessOrder(OrderField& order)
at Dashboard.Backend.AccountMonitor.OnRtnOrder(Object sender, OrderField& order)
at XAPI.Callback.XApi._OnRtnOrder(IntPtr ptr1, Int32 size1)
at XAPI.Callback.XApi.OnRespone(Byte type, IntPtr pApi1, IntPtr pApi2, Double double1, Double double2, IntPtr ptr1, Int32 size1, IntPtr ptr2, Int32 size2, IntPtr ptr3, Int32 size3)
Aborted (core dumped)
My question is: why is it possible to get a duplicate key error when I use ReplaceOne with the Upsert = true option?
The app is working in the following environment and runtime:
.NET Command Line Tools (1.0.0-preview2-003121)
Product Information:
Version: 1.0.0-preview2-003121
Commit SHA-1 hash: 1e9d529bc5
Runtime Environment:
OS Name: ubuntu
OS Version: 16.04
OS Platform: Linux
RID: ubuntu.16.04-x64
And MongoDB.Driver 2.3.0-rc1.
Upsert works based on the filter query. If the filter query doesn't match any document, it will try to insert the document.
If the filter query finds the document, it will replace it.
In your case, it could have gone either way, i.e. insert or replace. Please check the data to analyze the scenario.
Insert scenario:
The _id is created automatically by the upsert if _id is not present in the filter criteria, so _id shouldn't create a uniqueness issue. If some other fields are part of a unique index, they could create a uniqueness issue.
Replace scenario:
The field that you are trying to update may have a unique index defined on it. Please check the indexes on the collection and their attributes.
Optional. When true, replaceOne() either:
- Inserts the document from the replacement parameter if no document matches the filter.
- Replaces the document that matches the filter with the replacement document.
To avoid multiple upserts, ensure that the query fields are uniquely indexed.
Defaults to false.
MongoDB will add the _id field to the replacement document if it is
not specified in either the filter or replacement documents. If _id is
present in both, the values must be equal.
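Those two branches can be modeled in a few lines of Python (a simulation only; replace_one_upsert is an invented name, and no real MongoDB is involved):

```python
# Tiny model of replaceOne(filter, replacement, upsert=True) semantics:
# replace the matching document if one exists, otherwise insert.

def replace_one_upsert(docs, filter_, replacement):
    for i, doc in enumerate(docs):
        if all(doc.get(k) == v for k, v in filter_.items()):
            docs[i] = dict(replacement)   # replace the whole document
            return "replaced"
    docs.append(dict(replacement))        # no match -> insert
    return "inserted"

orders = []
r1 = replace_one_upsert(orders, {"_id": "A-1"}, {"_id": "A-1", "qty": 5})
r2 = replace_one_upsert(orders, {"_id": "A-1"}, {"_id": "A-1", "qty": 7})
# r1 == "inserted", r2 == "replaced", and orders still holds one document
```

The duplicate-key error arises when the insert branch runs even though a conflicting document exists, e.g. because the filter fields and the unique-index fields are not the same.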
I could not get IsUpsert = true to work correctly due to a unique index on the same field used for the filter, leading to this error: E11000 duplicate key error collection. A retry, as suggested in this Jira ticket, is not a great workaround.
What did seem to work was a Try/Catch block with InsertOne and then ReplaceOne without any options.
try
{
    // insert into MongoDB
    BsonDocument document = BsonDocument.Parse(obj.ToString());
    collection.InsertOne(document);
}
catch
{
    // on a duplicate key error, replace the existing document instead
    BsonDocument document = BsonDocument.Parse(obj.ToString());
    var filter = Builders<BsonDocument>.Filter.Eq("data.order_no", obj.data.order_no);
    collection.ReplaceOne(filter, document);
}
There is not enough information here, but the scenario is probably the following:
you receive data from the server, the replaceOne command doesn't match any record and tries to insert a new one, but the document probably contains a key that is unique and already exists in the collection. Review and, if necessary, change your data before trying to update or insert it.
I can co-sign on this one:
public async Task ReplaceOneAsync(T item)
{
    try
    {
        await _mongoCollection.ReplaceOneAsync(x => x.Id.Equals(item.Id), item, new UpdateOptions { IsUpsert = true });
    }
    catch (MongoWriteException)
    {
        var count = await _mongoCollection.CountAsync(x => x.Id.Equals(item.Id)); // lands here - and count == 1 !!!
    }
}
There is a bug in older MongoDB drivers; for example, v2.8.1 has this problem. Update your MongoDB driver and the problem will go away. Note that when you use a newer driver, the DB version also needs to be updated so that the two are compatible.

Prevent duplicates of multiple fields with an index

My document:
{
    "age": "20",
    "name": "leandro"
}
I need to prevent inserting new documents if another exists with the same age and name.
Can I do this using index?
Yes, you can do it by creating an index with unique=true as follows. After creating the index, if you try to insert a document with the same age and name, you will get a duplicate key exception.
db.myObject.createIndex({age: 1, name: 1}, {unique: true})
For details you can read Create a Unique Index document.