Remove duplicates from MongoDB 4.0

I am trying to remove duplicates from MongoDB, but all the solutions I find fail. Given the current JSON structure:
{
"_id": { "$oid": "5cee31bbca8a185b76a692db" },
"date": { "$date": "2018-10-07T19:11:38.000Z" },
"id": "1049014405130858496",
"username": "chrisoldcorn",
"text": "“The #UK can rest now. The Orange Buffoon is back in his xenophobic #WhiteHouse!” #news #politics #trump #populist #uspoli #ukpolitics #ukpoli #london #scotland #TrumpBaby #usa #america #canada #eu #europe #brexit #maga #msm #gop #elections #election2018 https://medium.com/#chrisoldcorn/trump-babys-uk-visit-a-reflection-1c2aa4ad942 …pic.twitter.com/Y6Yihs9g6K",
"retweets": 1,
"favorites": 0,
"mentions": "#chrisoldcorn",
"hashtags": "#UK #WhiteHouse #news #politics #trump #populist #uspoli #ukpolitics #ukpoli #london #scotland #TrumpBaby #usa #america #canada #eu #europe #brexit #maga #msm #gop #elections #election2018",
"geo": "",
"replies": 0,
"to": null,
"lan": "en"
}
I need to remove all duplicates based on the field "id" in the collection.
I have tried db.tweets.ensureIndex( { id:1 }, { unique:true, dropDups:true } ) but I am not sure this is the correct way. I obtain this output:
Can anyone help me?

It looks like you are running MongoDB version 3.0 or newer, and hence cannot remove duplicates by creating an index with dropDups.
According to the docs:
Changed in version 3.0: The dropDups option is no longer available.
The fastest way to do this would be to:
Create a dump
Drop the collection
Create the new unique index
Restore the dump
All duplicate documents will be dropped during the restore inserts, because they violate the unique index.
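As a sketch (the database name "twitterdb" and the ./backup path are assumptions, substitute your own), the cycle looks like this:

```shell
# Sketch of the dump / drop / index / restore cycle for deduplicating on "id".
# Dump the collection to disk first so no data is lost.
mongodump --db twitterdb --collection tweets --out ./backup

# Drop the collection, then recreate it with the unique index in place
# BEFORE restoring, so duplicate inserts are rejected.
mongo twitterdb --eval 'db.tweets.drop()'
mongo twitterdb --eval 'db.tweets.createIndex({ id: 1 }, { unique: true })'

# mongorestore reports duplicate-key errors but keeps inserting by default,
# so only the first document for each "id" survives.
mongorestore --db twitterdb ./backup/twitterdb
```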
The next best solution would be to run a script that collects all duplicate ids and removes them.
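The core of such a script is: keep the first `_id` seen for each `id`, collect the rest, then delete them. In the shell this is usually an aggregation (`$group` on `"$id"` with `$push: "$_id"` and a `$match` on `count > 1`, followed by `deleteMany`). Here is the same logic sketched in plain JavaScript over an in-memory array so it can be read and run standalone; the sample `_id`/`id` values are illustrative:

```javascript
// Collect the _ids of every document whose "id" was already seen,
// keeping the first occurrence of each "id".
function findDuplicateIds(docs) {
  const seen = new Set();
  const toDelete = [];
  for (const doc of docs) {
    if (seen.has(doc.id)) {
      toDelete.push(doc._id); // duplicate: mark for deletion
    } else {
      seen.add(doc.id); // first occurrence: keep it
    }
  }
  return toDelete;
}

const tweets = [
  { _id: 'a', id: '1049014405130858496' },
  { _id: 'b', id: '1049014405130858496' }, // duplicate of 'a'
  { _id: 'c', id: '999' },
];
console.log(findDuplicateIds(tweets)); // [ 'b' ]
```

Against the real collection, the returned array would feed a `db.tweets.deleteMany({ _id: { $in: toDelete } })` call.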

Related

MongoDB not using Index on simple find

I have a collection called "EN" and I created an index as follow:
db.EN.createIndex( { "Prod_id": 1 } );
When I run db.EN.getIndexes() I get this:
[{ "v": 2, "key": {
"_id": 1 }, "name": "_id_" }, { "v": 2, "key": {
"Prod_id": 1 }, "name": "Prod_id_1" }]
However, when I run the following query:
db.EN.find({'Icecat-interface.Product.#Prod_id':'ABCD'})
.explain()
I get this:
{ "explainVersion": "1", "queryPlanner": {
"namespace": "Icecat.EN",
"indexFilterSet": false,
"parsedQuery": {
"ICECAT-interface.Product.Prod_id": {
"$eq": "ABCD"
}
},
"queryHash": "D12BE22E",
"planCacheKey": "9F077ED2",
"maxIndexedOrSolutionsReached": false,
"maxIndexedAndSolutionsReached": false,
"maxScansToExplodeReached": false,
"winningPlan": {
"stage": "COLLSCAN",
"filter": {
"ICECAT-interface.Product.Prod_id": {
"$eq": "ABCD"
}
},
"direction": "forward"
},
"rejectedPlans": [] }, "command": {
"find": "EN",
"filter": {
"ICECAT-interface.Product.Prod_id": "ABCD"
},
"batchSize": 1000,
"projection": {},
"$readPreference": {
"mode": "primary"
},
"$db": "Icecat" }, "serverInfo": {
It's using COLLSCAN instead of the index, why is this happening?
MongoDB version is 5.0.9-8
Thanks
EDIT (and solution)
It turns out that the field name has "#" in front, and the index was created without this character, so the index was not being picked up at all.
Once I created a new index using the field name as it actually is, it worked OK.
It was interesting, though, to see how indexing works and best practices.
Your find operation is defined as
.find({'Icecat-interface.Product.#Prod_id':'ABCD'})
What is Icecat-interface.Product.#?
The parsedQuery in the explain output confirms that MongoDB is attempting to look for a document that has a value of "ABCD" for a different field name than the one you have indexed. From the explain you've provided, that field name is "ICECAT-interface.Product.Prod_id". As the field name being queried and the one that is indexed are different, MongoDB cannot use the index to perform the operation.
Marginally related: the # character that is used in the find is absent in the explain output. This appears to be because the actual operation used to generate the explain was slightly different. This is also noticeable from the fact that the explain includes a batchSize of 1000, which is absent in the operation that was shown as the one being explained.
Depending on what the Icecat-interface.Product.# prefix is supposed to be, the solution is probably to simply remove that from the query predicate in the find itself.
Edit to respond to the comment and the edit to the question. Regarding the comment first:
When I run this: .find({'Prod_id':'ABCD'}) it uses COLLSCAN which to me is wrong, as I have an index on that field, unless I'm missing something here
MongoDB will look to use an index if its first key is used by the query. So an index on { y: 1 } would not be eligible for use by a query of .find({ x: 1 }). Just as with the generic x and y example, Icecat-interface.Product.Prod_id and Prod_id are different field names. So if you query on one but only have an index on the other, a collection scan is the only way for the database to execute the query.
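This eligibility rule can be illustrated with a deliberately simplified check (real planner logic also handles compound-index prefixes, sort orders, and more; the helper name is made up for illustration):

```javascript
// Simplified sketch: an index can serve an equality query only when the
// query filters on the index's leading key, and the field paths must match
// character-for-character (including any prefix like "ICECAT-interface.Product.").
function canUseIndex(indexKey, queryFields) {
  const firstKey = Object.keys(indexKey)[0];
  return queryFields.includes(firstKey);
}

// Index on "Prod_id" vs. a query on the full dotted path: no match, so COLLSCAN.
console.log(canUseIndex({ 'Prod_id': 1 }, ['ICECAT-interface.Product.Prod_id'])); // false

// Index on the exact dotted path being queried: eligible for IXSCAN.
console.log(canUseIndex({ 'ICECAT-interface.Product.Prod_id': 1 },
                        ['ICECAT-interface.Product.Prod_id'])); // true
```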
This then overlaps some with the edit to the question. In the edited question the new explain plan shows the database successfully using an index. However, that index is { "ICECAT-interface.Product.Prod_id": 1 } which is not the index that you originally show being created or present on the collection ({ "Prod_id": 1 }).
Moreover, you also mention that you "don't get any result back, even with products I know are in the DB". Which field in the database contains the value that you are searching on ('ABCD')? This is going to directly inform what results you get back and what index is used to find the results. Remember that you can search on any arbitrary field in MongoDB, even if it doesn't exist in the database.
I would recommend some extra attention be paid to the namespaces and field names that are being used. Unless this { "ICECAT-interface.Product.Prod_id": 1 } index was created after the db.EN.getIndexes() output was gathered, you may be inadvertently connecting to different systems or namespaces since that index is definitely present somewhere.
Based on your live comments while I'm writing this, seems like you've solved the field name mystery.

Update multiple fields (partial update) in a MongoDB document

I have a document like the one below in my Mongo database, and I have a corresponding model class Flight.java with all the possible fields that can come in at any time as an update.
{
"flight": {
"event": "Leg",
"version": "2",
"key": {
"fltNum": "1111",
"fltOrgDate": "2021-01-12",
"depSta": "BBB",
"dupDepStaNum": "0"
},
"leg": {
"stations": {
"arr": "VVV",
"dupArrStaNum": "0"
},
"times": {
"STD": "2021-01-12T20:30:00",
"STA": "2021-01-12T23:21:00",
"LTD": "2021-01-12T20:30:00",
"LTA": "2021-01-12T23:05:00",
"PTA": "2021-01-12T23:05:00"
},
"status": {
"leg": " ",
"dep": "S",
"arr": "S"
}
}
}
}
So I will get updates for this document continuously, with existing fields or possibly some new fields; I am not sure which field will be updated.
How can I update the document using the MongoDB Java driver or Spring Data MongoDB?
Well, you have a POJO which has all possible fields.
Every update requires three parts:
A find command to identify the doc to be updated
A doc which specifies what will be updated
Options specifying upsert and multi
You can iterate over each field in the POJO, check for non-null fields, and add them to the update doc.
An example of such an update:
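A sketch of that field-by-field loop, written in plain JavaScript so it runs standalone. In the Java driver the same loop would call Updates.set(field, value) for each non-null field; in Spring Data MongoDB, update.set(field, value). The sample input values are illustrative:

```javascript
// Build a $set document containing only the non-null fields of an incoming
// update, using dotted paths, so fields not present in the update are left
// untouched in MongoDB.
function buildPartialUpdate(incoming, prefix = '') {
  const set = {};
  for (const [key, value] of Object.entries(incoming)) {
    if (value === null || value === undefined) continue; // skip absent fields
    const path = prefix ? `${prefix}.${key}` : key;
    if (typeof value === 'object' && !Array.isArray(value)) {
      Object.assign(set, buildPartialUpdate(value, path)); // recurse into subdocuments
    } else {
      set[path] = value;
    }
  }
  return set;
}

const update = {
  $set: buildPartialUpdate({
    flight: { version: '3', leg: { status: { dep: 'A', arr: null } } },
  }),
};
console.log(JSON.stringify(update));
// {"$set":{"flight.version":"3","flight.leg.status.dep":"A"}}
```

The resulting document is what you would pass as the update argument of updateOne, together with a filter on the key fields.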

Is updating Embedded Documents in MongoDB a Manual process?

I am not overly familiar with MongoDB yet, but I have a question about embedded documents.
I have seen a number of posts which show you how to update embedded documents through some update query.
My question is this: if I have a collection with embedded documents, which is denormalised for performance, and one of the embedded documents changes, do I need to manually update all the embedded documents, or is there some way of specifying the link in MongoDB so they auto-update?
For Example:
An Order record might look like the structure below. Note there is a Product item in one of the rows.
Let's say the ItemName field changed to "Product1a" in the product from a different collection, and I want to update the product in every single order where this exists. Is that a manual process, or is there a way of setting up MongoDB to auto-update embedded documents?
{
"id": "ccc1beb1-e022-11e9-97f0-e7e789106ab2",
"type": "order",
"orderNumber": "ORD-100209857x",
"orderDate": "2019-09-26T17:42:31.000+12:00",
"orderItems": [
{
"discount": 0,
"price": 24.4944,
"product": {
"id": "ccc1beb1-e022-11e9-97f0-e7e789106ab2",
"itemNumber": "prd1",
"itemName": "Product1"
},
"qty": 4,
"rowTotal": 97.96,
"taxAmount": 9.8
},
{
"discount": 0,
"price": 3.21,
"itemName": "Shipping",
"qty": 1,
"rowTotal": 3.21,
"taxAmount": 0
}
]
}
Not sure what you mean by manual process, but there is no automatic link between collections: denormalised copies must be updated by your application. Here is some sample code to update all the documents. Since orderItems is an array, an array filter is needed to reach into the matching elements:
db.collection.updateMany(
  { "orderItems.product.itemNumber": "prd1" },
  { $set: { "orderItems.$[item].product.itemName": "Product1a" } },
  { arrayFilters: [ { "item.product.itemNumber": "prd1" } ] }
)
Let me know if this is not what you are looking for.

MongoDB document setup and aggregation

I'm pretty new to MongoDB and while preparing data to be consumed I got into Aggregation... what a powerful little thing this database has! I got really excited and started to test some things :)
I'm saving time entries per companyId and employeeId, which can have many entries. Those are normally sorted by date, but one date can have several entries (multiple registrations in the same day).
I'm trying to come up with a good schema so I can easily get my data exactly how I need it, and as a newbie I would rather ask for guidance and check whether I'm on the right path.
My output should be like this:
[{
"company": "474A5D39-C87F-440C-BE99-D441371BF88C",
"employee": "BA75621E-5D46-4487-8C9F-C0CE0B2A7DE2",
"name": "Bruno Alexandre":
"registrations": [{
"id": 1448364,
"spanned": false,
"spannedDay": 0,
"date": "2019-01-17",
"timeStart": "09:00:00",
"timeEnd": "12:00:00",
"amount": {
"days": 0.4,
"hours": 2,
"km": null,
"unit": "days and hours",
"normHours": 5
},
"dateDetails": {
"week": 3,
"weekDay": 4,
"weekDayEnglish": "Thursday",
"holiday": false
},
"jobCode": {
"id": null,
"isPayroll": true,
"isFlex": false
},
"payroll": {
"guid": null
},
"type": "Sick",
"subType": "Sick",
"status": "APP",
"reason": "IS",
"group": "LeaveAndAbsence",
"note": null,
"createdTimeStamp": "2019-01-17T15:53:55.423Z"
}, /* more date entries */ ]
}, /* other employees */ ]
What is the best way to add the data into a collection?
Is it more efficient if I create a document per company/employee and add all registration entries inside that document (it could get really big as time passes)... or is it better to have one document per company/employee/date and add all daily events in that document instead?
Regarding aggregation, I'm still new to all this, but I'm imagining I could simply call:
RegistrationsModel.aggregate([
{
$match: {
date: { $gte: new Date('2019-01-01'), $lte: new Date('2019-01-31') },
company: '474A5D39-C87F-440C-BE99-D441371BF88C'
}
},
{
$group: {
_id: '$employee',
name: { '$first': '$name' }
}
},
{
// ... get all registrations as an Array ...
},
{
$sort: {
'registrations.date': -1
}
}
]);
P.S. I'm taking the Aggregation course to start getting familiar with all of it.
Is it more efficient if I create a document per company/employee and add all registration entries inside that document (it could get really big as time passes)... or is it better to have one document per company/employee/date and add all daily events in that document instead?
From what I understand of document oriented databases, I would say the aim is to have all the data you need, in a specific context, grouped inside one document.
So what you need to do is identify what data you're going to need (staying close to the features you want to implement) and build your data structure according to that. Be sure to identify future features too, because the more you prepare your data structure for them, the less tricky it will be to scale your database to your needs.
Your aggregation query looks OK!
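For the "... get all registrations as an Array ..." placeholder, the usual tool is the $push accumulator inside the $group stage. A sketch of that stage as a plain object (field names follow your example output; verify them against your schema):

```javascript
// $group stage that collects each employee's documents into a
// "registrations" array. '$$ROOT' pushes the whole document; push a
// sub-object of selected fields instead if you want a slimmer array.
const groupStage = {
  $group: {
    _id: '$employee',
    company: { $first: '$company' },
    name: { $first: '$name' },
    registrations: { $push: '$$ROOT' },
  },
};

console.log(JSON.stringify(groupStage.$group.registrations)); // {"$push":"$$ROOT"}
```

One caveat: a $sort stage placed after $group sorts the grouped documents, not the elements inside the registrations array, so put the $sort on date before the $group to control the order in which documents are pushed.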

MongoDB: Can a $inc increment a value inside a $addToSet

I'm relatively new to MongoDB and having trouble with a more advanced upsert. I've been Googling and reading the documentation, but I'm having trouble knowing exactly what to look for. Basically, I'm creating a hit counter which will store data for multiple domains.
My document structure is:
{
"domain": "example.com",
"hitCount": 1,
"urls": [
{
"url": "/the-url",
"hitCount": 1,
"hits": [
{
"date": "2011-10-30T04:50:01.090Z",
"IP": "123.123.123.123"
}
]
}
]
}
My upsert code so far is:
{
$set: {"domain": "example.com"},
$inc: {"hitCount": 1},
$addToSet: {"urls": {"url": "/the-url"} }
}
This bit's working great, but as you can see it's only the first part of the upsert. I'm having trouble inserting the rest of the data inside "urls", such as incrementing its "hitCount" and adding the "date" and "IP" of the hit.
I was wondering if this document structure is possible in one upsert? I'm starting to think I need to do multiple queries to achieve this?
You must perform multiple queries: a single update cannot both modify a matching array element and insert that element when it is missing.
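A common two-step pattern, sketched below as plain filter/update objects (field names follow the question; the date and IP values are the sample ones): first try to update an existing url entry using the positional $ operator, then, if nothing matched, push a new entry.

```javascript
// Step 1: update an existing entry for this URL. The positional "$" in the
// update refers to the array element matched by "urls.url" in the filter.
const updateExisting = {
  filter: { domain: 'example.com', 'urls.url': '/the-url' },
  update: {
    $inc: { hitCount: 1, 'urls.$.hitCount': 1 },
    $push: { 'urls.$.hits': { date: '2011-10-30T04:50:01.090Z', IP: '123.123.123.123' } },
  },
};

// Step 2: run only when step 1 matched nothing. The $ne in the filter keeps
// two racing writers from pushing the same URL entry twice.
const insertNewUrl = {
  filter: { domain: 'example.com', 'urls.url': { $ne: '/the-url' } },
  update: {
    $inc: { hitCount: 1 },
    $push: {
      urls: {
        url: '/the-url',
        hitCount: 1,
        hits: [{ date: '2011-10-30T04:50:01.090Z', IP: '123.123.123.123' }],
      },
    },
  },
};

console.log(Object.keys(updateExisting.update)); // [ '$inc', '$push' ]
```

In the shell these would run as db.hits.updateOne(filter, update) calls ("hits" is an assumed collection name), executing step 2 only when step 1 reports matchedCount of 0.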