Extract data lists from Mongo Documents

As a Mongo/NoSQL newbie with an RDBMS background, I'm wondering what the best way to proceed is.
I currently have a large set of documents containing, in some fields, what I consider "reference data".
I need a search interface that summarizes the possible values of those "reference fields" so that the document set can then be filtered on them.
Let's take a very simple (and silly) example about food.
Here is an extract of some mongo documents:
{ "_id": 1, "name": "apple", "category": "fruit"}
{ "_id": 1, "name": "orange", "category": "fruit"}
{ "_id": 1, "name": "cucumber", "category": "vegetable"}
In the application I'd like a select box displaying all the possible values for "category". Here it would show "fruit" and "vegetable".
What's the best way to proceed?
extract the data from the existing documents?
create reference documents listing the unique possible values (as I would do in an RDBMS)?
store the reference data in an RDBMS and programmatically link Mongo and the RDBMS?
something else?

The first option is the easiest to implement and should be efficient if you have your indexes properly set (see the distinct command), so I would go with it.
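For instance, a minimal sketch in the mongo shell (the collection name foods is assumed):

db.foods.createIndex({ category: 1 })
db.foods.distinct("category")
// => [ "fruit", "vegetable" ]

With an index on category, distinct can typically be answered from the index without scanning every document.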
You could also choose the second option (linking to a reference collection, the RDBMS way), which trades performance (you will need more queries to fetch the data) for space (you will need less of it). This option is also preferable if the category is used in other collections as well.
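A minimal sketch of that layout (collection names are assumed):

// one document per allowed value
db.categories.insertMany([ { _id: "fruit" }, { _id: "vegetable" } ])

// populate the select box
db.categories.find()

// then filter the main collection by the chosen value
db.foods.find({ category: "fruit" })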
I would advise against using a mixed system (NoSQL + RDBMS) in this case as the other options are better.
You could also store category values directly in application code - depends on your use case. Sometimes it makes sense, although any RDBMS fanatic would burst into tears (or worse) if you tell him that. YMMV. ;)

Related

How MongoDB works for this case?

I have a question about MongoDB: I know what Mongo is, but I am not sure whether this database is a good fit for a requirement I need to implement. Well, here I go.
Description:
I need to store data from devices (roughly 200 of them), and each device will report geolocation data (lat, long) every 30 seconds, so that is 576,000 objects/day (2,880 reports per device per day).
This is the structure I had in mind for the documents in the 'locations' collection:
{
  "mac": "dc:a6:32:d4:b6:dc",
  "company_id": 5,
  "locations": [
    {
      "date": "2021-02-23 10:00:02",
      "value": "-32.955465, -60.661143"
    }
  ]
}
where 'locations' is an array that accumulates a new entry every 30 seconds.
Questions:
Is MongoDB able to do this?
Is my document structure correct for solving this?
What will happen when this array becomes very big after a month?
Is there a better way to do this? (database, framework, etc.)
TIA !!
Is MongoDB able to do this?
Yes, this will be fine.
Is my document structure correct for solving this?
No, not at all!
Never store date/time values as strings; it's a design flaw. Always use a proper Date object. (This applies to any database.)
The same applies to the coordinates: don't store them as a string. I recommend GeoJSON objects; then you can also create an index on the field and run spatial queries. Example (note that GeoJSON order is longitude, latitude): location: { type: "Point", coordinates: [ -60.661143, -32.955465 ] }
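A minimal sketch of the index and a proximity query, assuming the collection is called locations and the field location:

db.locations.createIndex({ location: "2dsphere" })

db.locations.find({
  location: {
    $near: {
      $geometry: { type: "Point", coordinates: [ -60.661143, -32.955465 ] },
      $maxDistance: 500   // metres
    }
  }
})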
What will happen when this array becomes very big after a month?
The document size in MongoDB cannot exceed 16 MiB; it's a hard limit. So this does not look like a good design. Consider storing locations per day, or even one document per report.
Is there a better way to do this? (database, framework, etc.)
Well, ask 5 people and you will get 6 answers. At least your approach is not wrong.
Is MongoDB able to do this? Yes.
Is my document structure correct for solving this? No.
What will happen when this array becomes very big after a month? The maximum BSON document size is 16 megabytes.
Is there a better way to do this? (database, framework, etc.)
Yes. The Bucket Pattern is a great solution when you need to manage Internet of Things (IoT) applications like this one.
You can have one document per device per hour, and a locations sub-document whose keys are the 30-second slots within that hour.
{
  "mac": "dc:a6:32:d4:b6:dc",
  "company_id": 5,
  "date": ISODate("2021-02-23T10:00:00Z"),
  "locations": {
    "0": "-32.955465, -60.661143",
    "1": "-33.514655, -60.664143",
    "2": "-33.122435, -59.675685"
  }
}
Adjust this solution to your workload and the main queries of your system.
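For example, a new report can be added to the right hourly bucket with a single upsert; a minimal sketch, assuming the slot number ("3" here) is computed from the report's timestamp:

db.locations.updateOne(
  { "mac": "dc:a6:32:d4:b6:dc", "company_id": 5, "date": ISODate("2021-02-23T10:00:00Z") },
  { $set: { "locations.3": "-33.001122, -60.443322" } },
  { upsert: true }
)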

Is keeping a log of all document relationships an anti-pattern in CouchDB?

When we return each document in our database to be consumed by the client, we must also add a property "isInUse" to that document's response payload to indicate whether the given document is referenced by other documents.
This is needed because referenced documents cannot be deleted, so a trash-bin button should not be displayed next to their listing entry in the client-side app.
So basically we have relationships where a document can reference another like this:
{
  "_id": "factor:1I9JTM97D",
  "someProp": 1,
  "otherProp": 2,
  "defaultBank": <id of some bank document>
}
Previously we used views and selectors to query for each document's references in other documents, but this proved to be non-trivial.
So here's how someone on our team has implemented it now: we register all relationships in dedicated "relationship" documents like the one below and update them every time a document is created/updated/deleted by the server, to reflect any new references or de-references:
{
  "_id": "docInUse:bank",
  "_rev": "7-f30ffb403549a00f63c6425376c99427",
  "items": [
    {
      "id": "bank:1S36U3FDD",
      "usedBy": [
        "factor:1I9JTM97D"
      ]
    },
    {
      "id": "bank:M6FXX6UA5",
      "usedBy": [
        "salesCharge:VDHV2M9I1",
        "salesCharge:7GA3BH32K"
      ]
    }
  ]
}
The question is whether this solution is an anti-pattern and what are the potential drawbacks.
I would say using a single document to record the relationships between all other documents could be problematic because:
the document "docInUse:bank" could end up being updated frequently. Cloudant allows you to update documents, but once you get to many thousands of revisions the document size becomes non-trivial, because all the previous revision tokens are retained
updating a central document invites document conflicts if two processes attempt to update it at the same time. You are allowed to have conflicts, but it is your app's responsibility to manage them (see here)
if you have lots of relationships, this document could get very large (I don't know enough about your app to judge)
Another solution is to keep your bank:*, factor:* & salesCharge:* documents the same and create a document per relationship e.g.
{
  "_id": "1251251921251251",
  "type": "relationship",
  "doc": "bank:1S36U3FDD",
  "usedby": "factor:1I9JTM97D"
}
You can then find the documents on either side of the "join" by querying on the value of doc or usedby with a suitable index.
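A minimal Mango sketch of that approach (the index name is just illustrative): the first body is POSTed to /db/_index, the second to /db/_find, and a matching index on usedby covers the other direction of the join.

{ "index": { "fields": ["doc"] }, "name": "relationship-by-doc", "type": "json" }

{ "selector": { "doc": "bank:1S36U3FDD" } }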
I've also seen implementations where the document's _id field contains all of the information:
{
  "_id": "bank:1S36U3FDD:factor:1I9JTM97D",
  "added": "2018-02-28 10:24:22"
}
and the primary key helpfully sorts the document ids for you, allowing judicious use of GET /db/_all_docs?startkey=x&endkey=y to fetch the relationships for a given bank id.
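For instance, a sketch (the keys are JSON strings, so in a real request they need quoting and URL-encoding; the high Unicode character \ufff0 is the usual "everything with this prefix" sentinel):

GET /db/_all_docs?startkey="bank:1S36U3FDD:"&endkey="bank:1S36U3FDD:\ufff0"&include_docs=true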
If you need to undo a relationship, just delete the document!
By building a cache of relationships on every document create/update/delete, as you have currently implemented it, you are basically recreating an index manually in the database. This is the reason why I would lean towards calling it an anti-pattern.
One great way to improve your design is to store each relation as a separate document as Glynn suggested.
If your concern is consistency (which I think might be the case, judging by the document types you mentioned), try to put all the information about a transaction into a single document. You can define the relationships in a consistent place in your documents, so updating the views would not be necessary:
{
  "_id": "salesCharge:VDHV2M9I1",
  "relations": [
    { "type": "bank", "id": "bank:M6FXX6UA5" },
    { "type": "whatever", "id": "whatever:xy" }
  ]
}
Then you can keep your views consistent, and you can rely on CouchDB to keep the "relation cache" up to date.

MongoDB collections - which way will be more efficient?

I am more used to MySQL, but I decided to go with MongoDB for this project.
Basically it's a social network.
I have a posts collection where documents currently look like this:
{
  "text": "Some post...",
  "user": "3j219dj21h18skd2" // User's "_id"
}
I am looking to implement a replies system. Will it be better to simply embed an array of replies in each post, like so:
{
  "text": "Some post...",
  "user": "3j219dj21h18skd2", // User's "_id"
  "replies": [
    {
      "user": "3j219dj200928smd81",
      "text": "Nice one!"
    },
    {
      "user": "3j219dj2321md81zb3",
      "text": "Wow, this is amazing!"
    }
  ]
}
Or will it be better to have a whole separate "replies" collection with a unique ID for each reply, and then "link" to it by ID in the posts collection?
I am not sure, but it feels like the 1st way is more "NoSQL-like", and the 2nd way is how I would do it in MySQL.
Any inputs are welcome.
This is a typical data modeling question in MongoDB. Since you are planning to store just the _id of the user, the answer is definitely to embed, because those replies are part of the post object.
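A minimal sketch of appending a reply to the embedded array (collection and variable names are assumed):

db.posts.updateOne(
  { _id: postId },
  { $push: { replies: { user: replyingUserId, text: "Nice one!" } } }
)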
If those replies can number in the hundreds or thousands and you are not going to show them by default (for example, you are going to have users click to load the comments), then it would make more sense to store the replies in a separate collection.
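In that case each reply document carries a reference back to its post, and you page through them on demand; a sketch with assumed field names:

db.replies.find({ postId: postId }).sort({ _id: -1 }).limit(10)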
Finally, if you need to store more than the user _id (such as the name), you have to think about maintaining that name in two places (here and in the user maintenance page), since you are duplicating data. This can be manageable or too much work; you have to decide.
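If you do duplicate the name, a single update can patch every embedded copy when it changes; a sketch assuming each reply also stores a userName field (arrayFilters requires MongoDB 3.6+):

db.posts.updateMany(
  { "replies.user": userId },
  { $set: { "replies.$[r].userName": newName } },
  { arrayFilters: [ { "r.user": userId } ] }
)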

Mongodb real basic use case

I'm approaching the NoSQL world.
I studied a little bit around the web (not the best way to study!) and I read the MongoDB documentation.
Around the web I wasn't able to find a real-world example (only fancy flights over big architectures that are not well explained, or examples too basic to be realistic).
So I still have some huge holes in my understanding of NoSQL and MongoDB.
I'll try to summarise one of them, the worst one actually, below:
Let's imagine the data structure for a post in a simple blog:
{
  "_id": ObjectId(),
  "title": "Title here",
  "body": "text of the post here",
  "date": ISODate("2010-09-24"),
  "author": "author_of_the_post_name",
  "comments": [
    {
      "author": "comment_author_name",
      "text": "comment text",
      "date": ISODate("date")
    },
    {
      "author": "comment_author_name2",
      "text": "comment text",
      "date": ISODate("date")
    },
    ...
  ]
}
So far so good.
Everything works fine as long as author_of_the_post does not change his name (not considering profile picture and description).
The same goes for all comment_authors.
So if I want to handle this situation I have to use relationships:
"authorID": <author_of_the_post_id>,
for post's author and
"authorID": <comment_author_id>,
for comments authors.
But MongoDB does not allow joins when querying, so there will be a separate query for each authorID.
So what happens if I have 100 comments on my blog post?
1 query for the post
1 query to retrieve the post author's information
100 queries to retrieve the comment authors' information
total of 102 queries!!!
Am I right?
Where is the advantage of using NoSQL here?
In my understanding it's 102 queries vs. 1 bigger query using joins.
Or am I missing something, and is there a different way to model this situation?
Thanks for your contribution!
Have you seen this?
http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/
It sounds like what you are doing is NOT a good use case for NoSQL. Use a relational database for basic data storage to back applications; use NoSQL for caching and the like.
NoSQL databases are used for storing non-sensitive data, for instance posts, comments, and so on.
You are able to retrieve all the data with one query. Example: don't worry about outdated fields such as author_name, profile_picture_url or whatever, because it's just a post, and in the future this post will not be as visible as newer ones. But if you want up-to-date fields you have two options:
The first option is to use some kind of worker service. If a user changes his username or profile picture, you send a signal to that service to traverse all posts and comments and update those fields with the new username.
The second option is to use authorId instead of the author name; then instead of 2 queries you will make N+2 queries to fetch each comment author's profile. But use pagination: instead of querying for 100 comments, take 10 and show a "load more" button/link, so you will make 12 queries.
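As a side note, the comment-author lookups don't have to be issued one at a time; a sketch assuming each comment stores an authorId:

var post = db.posts.findOne({ _id: postId });
var authorIds = post.comments.map(function (c) { return c.authorId; });

// one query for all authors on the page instead of one per comment
db.users.find({ _id: { $in: authorIds } })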
Hope this helps.

Schema design in MongoDB — to replicate data or not

I have a Share collection which stores a document for every time a user has shared a Link in my application. The schema looks like this:
{
  "userId": String,
  "linkId": String,
  "dateCreated": Date
}
In my application I am making requests for these documents, but my application requires that the information referenced by the userId and linkId properties is fully resolved/populated/joined (not sure of the terminology) in order to display the information as needed. Thus, every request for a Share document results in a lookup of the corresponding User and Link documents. Furthermore, each Link has a parent Feed document which must also be looked up. This means I have some spaghetti-like code to perform each find operation in series (3 in total). Yet the application only needs some of the data found in these calls (one or two properties). That said, the application does need the entire Link document.
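Roughly, the current resolution looks like this (a sketch; collection and field names are assumed):

var share = db.shares.findOne({ _id: shareId });
var user  = db.users.findOne({ _id: share.userId });
var link  = db.links.findOne({ _id: share.linkId });
var feed  = db.feeds.findOne({ _id: link.feedId });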
This is very slow, and I am wondering whether I should just replicate the data in the Share document itself. In my head this is fine because most of the data will not change, but some of it might (e.g. a User's username). That suggests a Share schema design like so:
{
  "userId": String,
  "user": {
    "username": String,
    "name": String
  },
  "linkId": String,
  "link": {}, // all of the `Link` data
  "feed": {
    "title": String
  },
  "dateCreated": Date
}
What is the consensus on optimising data for the application in this way? Do you recommend that I replicate the data and write some glue code to ensure the replicated username gets updated if it changes (for example), or can you recommend a better solution (with details on why)? My other worry about replicating data in this manner is: what if I need more data in the Share document further down the line?