Statistical Data Collection with MongoDB

I'm building an application with Node.js and MongoDB to scan Stack Overflow for new content and find hot, trending topics. I need to know the right way to do this, because I'm not sure I'm doing it correctly; I come from MySQL, and my gut feeling tells me something is different here.
I'm not actually scanning Stack Overflow, it's just easy to use as an analogy. Nonetheless, I have Posts, I have Comments, and Users who posted the threads (disregarding users who posted comments for now).
My initial solution was to create three tables (collections):
Posts - where I store all the information about the post
Post Stats - where I store all the dynamic information about a post (number of comments, overall score, etc.), sampled once every X minutes
Users - where I store information about the users who have posted the Posts
Essentially I want to be able to query the database with "give me the top users of today" and "give me the history of this post", to create a graph of how a post behaved (rank, score, comments, etc.) over time.
What's the correct way of doing something like this with MongoDB? Should I store the Post Stats as part of the Posts documents?

I would personally go for a hybrid solution here.
It is inevitable that you will want some kind of aggregated data on the post for all time. So within the post I would house an extra subdocument that contains the all-time stats:
stats: {
    views: 456, // just an example
    vote_ups: 5,
    vote_downs: 4,
    rank: 1, // vote ups minus vote downs
    comments: 5,
    answers: 6
}
Then for individual periods of time I would use post_stats the way you describe, creating a document like:
{
    post_id: 45,
    // etcetera for minute-by-minute changes
    time: ISODate()
}
Use the post_id (or rather the _id) to query for the graph you wish to make. Since MongoDB is good at scaling horizontally, you will be taking full advantage of it here.
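A minimal in-memory sketch of this hybrid approach, in plain JavaScript (the arrays stand in for the posts and post_stats collections; in MongoDB the two steps would be an $inc update on posts and an insert into post_stats — function and field names here are illustrative, not from the question):

```javascript
// Hybrid stats: all-time counters embedded on the post,
// plus one time-stamped snapshot document per sampling interval.
const posts = [{ _id: 45, stats: { views: 0, comments: 0 } }];
const postStats = []; // stands in for the post_stats collection

function recordSnapshot(postId, delta, time) {
  const post = posts.find(p => p._id === postId);
  // 1. Update the embedded all-time stats (in MongoDB: $inc on posts)
  post.stats.views += delta.views;
  post.stats.comments += delta.comments;
  // 2. Append a snapshot for graphing (in MongoDB: insert into post_stats)
  postStats.push({ post_id: postId, ...post.stats, time });
}

recordSnapshot(45, { views: 100, comments: 3 }, new Date("2024-01-01T10:00:00Z"));
recordSnapshot(45, { views: 50, comments: 2 }, new Date("2024-01-01T10:05:00Z"));

// "Give me the history of this post" is then a filter/sort on post_stats.
const history = postStats
  .filter(s => s.post_id === 45)
  .sort((a, b) => a.time - b.time);
// history.map(s => s.views) -> [100, 150]
```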

Related

NoSQL database design - MongoDB

I am trying to build an app where I have just these three models:
topic (has just a title (max 100 chars.))
comment (has text (may be very long), author_id, topic_id, createdDate)
author (has just a username)
Actually a very simple DB structure. A topic may have many comments, which are created by authors, and an author may have many comments.
I am still trying to figure out the best way of designing the database structure (documents). First I thought of giving everything its own schema, as above: three documents. But since this is a NoSQL DB, I should actually try to eliminate the need for joins, and now I am seriously considering putting everything into a single document, which also sounds crazy.
These are my actually queries from ui:
Homepage query: Listing all the topics, which have received the most comments today (will run very often)
Auto suggestion list for search field: Listing all the topics, whose title contains string "X"
Main page of a topic query: Listing all the comments of a topic, with their authors' username.
Since most of my queries need data from at least two documents, should I really just merge them all into a single document like this:
Comment (text, username, topic_title, createdDate)
This way I will not need any joins, but I will also store, e.g., the title of a topic multiple times: once in every comment.
I just could not decide.
I appreciate any help.
You can go with the second design you suggested, but it all comes down to how you want to use the data. I assume you're going to be using it for a website.
If you want the comments to be clickable, such that clicking the topic name redirects to the topic's page and clicking the username redirects to the user's page (where you can see all their comments), I suggest you keep them as IDs. You can later use .populate("field1 field2") and select the fields you would like to get from that ID.
Alternatively, you can store both the topic_name and username along with their IDs in the same document to reduce queries, but you would end up storing more redundant data.
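To make the reference-then-resolve idea concrete, here is a small in-memory sketch in plain JavaScript; the `populateComment` helper is only a stand-in for what Mongoose's .populate() does at read time, and all collection and field names are illustrative:

```javascript
// Comments store only IDs; "populating" resolves them at read time.
const topics = [{ _id: 1, title: "MongoDB schema design" }];
const authors = [{ _id: 10, username: "alice" }];
const comments = [{ _id: 100, topic_id: 1, author_id: 10, text: "Nice topic" }];

// Stand-in for Mongoose's .populate(): swap the stored IDs for documents.
function populateComment(comment) {
  return {
    ...comment,
    topic: topics.find(t => t._id === comment.topic_id),
    author: authors.find(a => a._id === comment.author_id),
  };
}

const view = populateComment(comments[0]);
// view.topic.title -> "MongoDB schema design"; view.author.username -> "alice"
```

The trade-off is exactly as stated above: storing only IDs keeps the data normalized but costs extra lookups; duplicating topic_name and username into each comment removes the lookups but duplicates data.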
Revised design:
The three queries (in the question post) are likely to look like this (pseudo-code):
select all topics from comments, where date is today, group by topic and count comments, order by count (desc)
select topics from comments, where topic matches search, group by topic.
select all from comments, where topic matches topic_param, order by comment_date (desc).
So, as you had intended (in your question post) it is likely there will be one main collection, comments.
comments:
date
author
text
topic
The user and topic collections, with one field each, are optional; they exist to maintain uniqueness.
Note the group-by queries will be aggregation queries, for example, the main query will be like this:
db.comments.aggregate( [
    { $match: { date: ISODate("2019-11-15") } },
    { $group: { _id: "$topic", count: { $sum: 1 } } },
    { $sort: { count: -1 } }
] )
This will give you all of today's topic names, with the highest-counted topics first.
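For readers less familiar with the aggregation framework, the pipeline's logic can be sketched in plain JavaScript (the dates and topic names below are made up for the example):

```javascript
// Equivalent of: $match on today's date, $group by topic with a count,
// then $sort by count descending.
const comments = [
  { topic: "mongodb", date: "2019-11-15" },
  { topic: "mongodb", date: "2019-11-15" },
  { topic: "nodejs",  date: "2019-11-15" },
  { topic: "mysql",   date: "2019-11-14" }, // dropped by the $match stage
];

const counts = {};
for (const c of comments.filter(c => c.date === "2019-11-15")) { // $match
  counts[c.topic] = (counts[c.topic] || 0) + 1;                  // $group + $sum
}
const ranked = Object.entries(counts)
  .map(([topic, count]) => ({ _id: topic, count }))
  .sort((a, b) => b.count - a.count);                            // $sort
// ranked -> [{ _id: "mongodb", count: 2 }, { _id: "nodejs", count: 1 }]
```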
You could also take a slightly different approach. Storing information redundantly is not a bad thing in all cases.
1. Homepage query: Listing all the topics, which have received the most comments today (will run very often)
You could implement this as two extra fields on your Topic entity: one recording the last date a comment was added, and one counting the number of comments added that day. By doing so you do not need a join and can write a query that only looks at the Topic collection.
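A sketch of what maintaining those two fields could look like, here in-memory (in MongoDB this would be a single update using $set/$inc; the field names are my own, not from the question):

```javascript
// Keep a per-day comment counter directly on the topic document.
const topic = { _id: 1, title: "Example", lastCommentDate: "2024-05-01", commentsToday: 4 };

function registerComment(topic, today) {
  if (topic.lastCommentDate === today) {
    topic.commentsToday += 1;        // same day: just increment ($inc)
  } else {
    topic.lastCommentDate = today;   // new day: reset the counter ($set)
    topic.commentsToday = 1;
  }
}

registerComment(topic, "2024-05-01"); // same day -> counter becomes 5
registerComment(topic, "2024-05-02"); // new day  -> counter resets to 1
```

The homepage query then only sorts topics by commentsToday, touching a single collection.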
You could also store these statistics independently of the other data and update them when required. Think of this as having a document that describes your database's current state (at least the parts relevant to you).
This might add a time penalty when writing, but it improves read times.
2. Auto suggestion list for search field: Listing all the topics, whose title contains string "X"
As far as I understand this one, you only need the topic title, meaning you can query the database once and retrieve all titles. If the collection grows so big that this becomes slow, you could change the retrieval query to return only a subset (a user is not likely to go through 100 possible topics).
3. Main page of a topic query: Listing all the comments of a topic, with their authors' username.
This is actually the tricky one. If this is really what you want to do, then you are most likely best off storing all the data in one document. However, I would ask: what is the problem with making more than one query? I doubt you will show all comments at once when there are thousands (as you say). Instead of storing each comment in a separate document, or throwing them all into one document, you could also bucket them and retrieve only the 20 most recent ones (if you create buckets of size 20). Read more about the bucket pattern here, and update the ones shown when required.
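A rough in-memory sketch of that bucket idea (the bucket size and field names are illustrative; a real implementation of the bucket pattern would query buckets by topic_id and page through them):

```javascript
const BUCKET_SIZE = 20;
const buckets = []; // stands in for a comment_buckets collection

// Append a comment, starting a new bucket document when the last one is full
// (or belongs to a different topic).
function addComment(topicId, comment) {
  let last = buckets[buckets.length - 1];
  if (!last || last.topic_id !== topicId || last.comments.length >= BUCKET_SIZE) {
    last = { topic_id: topicId, comments: [] };
    buckets.push(last);
  }
  last.comments.push(comment);
}

for (let i = 0; i < 45; i++) addComment(1, { text: "comment " + i });

// Show only the most recent bucket (at most 20 comments) on the page.
const recent = buckets[buckets.length - 1].comments;
// 45 comments -> 3 buckets (20, 20, 5), so recent holds the last 5
```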
You said:
"Since most of my queries need data from at least 2 documents, should I really just use them all together in a single document like this..."
I'll make an argument from a 'domain-driven design' point of view.
Given that all your data exists within the same bounded context (business domain), it is acceptable to encapsulate it all within the same document!

How to implement unread/new posts/comments in a NoSQL document store like MongoDB?

I've searched and didn't find any exact answer to this common problem.
I would like to show users their new/unread posts, for example as a list of topics that contain unread posts.
If the user opens one of those topics, it is automatically marked as read and will no longer show in that list when he clicks on unread posts. Plus the possibility to mark all as read.
I was thinking about maybe showing unread posts only from the last 30 days, so the data would not get so big.
The obvious solution would be objects embedded inside arrays: each array element would hold a userid and the timestamp of that user's last view of the specific topic, and I would just compare the timestamp of the last post in a thread to the timestamp of the user's last view of that thread.
Only 2-3 queries would then be needed to show the results to the user.
So for example it would look like this:
{
    _id: uniqueObjectid,
    id: topic_id,
    topic: topic of the thread,
    last_update: timestamp of the last reply to this topic,
    reads: [
        { id: userid, last_view: timestamp of this user's last view of this topic },
        ...
    ]
}
I would delete from this collection all threads whose last_update field is older than 30 days.
Showing unread posts to users would then be very easy: just compare last_update with last_view for a certain userid.
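The comparison itself is indeed trivial; a sketch in plain JavaScript, with field names mirroring the document above (numeric timestamps are used for brevity):

```javascript
// A topic is unread for a user if he has never viewed it,
// or if his last view predates the topic's last update.
const topics = [
  { id: 1, topic: "first",  last_update: 200, reads: [{ id: "u1", last_view: 150 }] },
  { id: 2, topic: "second", last_update: 100, reads: [{ id: "u1", last_view: 150 }] },
  { id: 3, topic: "third",  last_update: 300, reads: [] }, // never viewed by u1
];

function unreadTopics(topics, userId) {
  return topics.filter(t => {
    const read = t.reads.find(r => r.id === userId);
    return !read || read.last_view < t.last_update;
  });
}

// For u1: topic 1 (updated after his last view) and topic 3 (never viewed).
const unread = unreadTopics(topics, "u1").map(t => t.id);
```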
But it's not a good solution: from what I've read, the way arrays are implemented in MongoDB makes this very slow. Imagine storing the last view of some topic for 1000 users; that means 1000 indexed array elements.
So it can't be done with arrays.
Here Asya from MongoDB describes why big embedded arrays should not be used: link
I am having difficulty thinking of any other efficient way to solve this.

MongoDB schema design for a swipe-card style application

What would be a good approach to designing the following swipe-card style app with skip functionality?
The core functionality of the app I'm working on is as follows.
On the main page, a user first makes a query for the list of posts.
The list should be sorted by date in reverse chronological order, or by some kind of internal score that identifies active posts (those with a large number of votes or comments, etc.).
Each post is shown to the user one by one, in the form of a card, like a Tinder- or Jelly-style feed.
For each card, the user can either skip it or vote for it.
When the user has consumed all fetched cards and queries again for the next items, cards the current user has already skipped or voted on should not appear again.
The point here is that a user could accumulate a huge number of skipped or voted posts, since skipping and voting are the only actions on the main page (the user can browse these already-processed items on his/her profile).
The approaches I have thought about are:
1. To store the list of skipped or voted post IDs for each user somewhere and use them in the query with the $nin operator:
db.posts.find({ _id: {$nin: [postid1,...,postid999]} }).sort({ date: -1 })
2. To embed the user IDs of all users who voted for or skipped the post in an array, and query using the $ne operator:
{
    _id: 'postid',
    skipOrVoteUser: ['user1', 'user2', ..., 'user999'],
    date: 1429286816366
}
db.posts.find({ skipOrVoteUser: {$ne: 'user1'} }).sort({ date: -1 })
3. To maintain a feed cache for each user and fan out on write:
FeedCache
{
    userId: 'user1',
    posts: [{ id: 1, data: {...} }, { id: 2, data: {...} }, ..., { id: n, data: {...} }]
}
Operations:
- When a user creates a post, write a copy of the post to every user's feed cache in the system.
- Fetch posts from the user's feed cache.
- When the user votes for or skips a post, delete the post from his/her feed cache.
But the list of posts that a user has skipped or voted on is ever-growing and could become really large over time. I'm concerned that with approach 1 the query would be too slow, given the large list passed to $nin.
Also with approach 2, since every user on the system (or many, depending on the filtering) could vote for or skip a post, the embedded user array of each post could become really large (at most the number of all users), and the performance of the $ne query would be poor.
With approach 3, every post created triggers a huge number of write operations, so it won't be efficient.
What would be a good approach to designing a schema that supports this kind of functionality? I've tried to come up with a good solution and could not think of a better one. Please help me solve this problem. Thanks!
On a relational database I would use approach 1. It's an obvious choice, as you have good SQL operators for the task and you can easily optimize the query.
With document databases I would choose approach 2. In this case there is a good chance the vote/skip list remains relatively small as the system grows.
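A minimal sketch of approach 2, here in-memory (in MongoDB the write would be an $addToSet update on the post and the read would be the $ne query shown in the question; names follow the question's schema):

```javascript
const posts = [
  { _id: "p1", skipOrVoteUser: [], date: 3 },
  { _id: "p2", skipOrVoteUser: [], date: 2 },
  { _id: "p3", skipOrVoteUser: [], date: 1 },
];

// $addToSet: record that the user has processed this post (no duplicates).
function markProcessed(postId, userId) {
  const post = posts.find(p => p._id === postId);
  if (!post.skipOrVoteUser.includes(userId)) post.skipOrVoteUser.push(userId);
}

// { skipOrVoteUser: { $ne: userId } }, sorted by date descending.
function nextCards(userId) {
  return posts
    .filter(p => !p.skipOrVoteUser.includes(userId))
    .sort((a, b) => b.date - a.date);
}

markProcessed("p1", "user1");
// nextCards("user1") now returns p2, then p3; p1 never reappears.
```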

MongoDB schema design to support editing subdocuments within an array in multi-user environment?

Let's suppose I have a basic blog web app using the following document schema for a blog post:
{
    _id: ObjectId(...),
    title: "Blog Post #1",
    text: "<p>This is my blog post!</p>",
    comments: [
        {
            user: "username1",
            time: Date(...),
            text: "This is a great blog post!"
        },
        {
            user: "username2",
            time: Date(...),
            text: "This is even better than sliced bread!"
        }
    ]
}
That's all well and good, but now let's suppose a user can edit or delete his comment. On top of that, it's a web app, so multiple people could be editing or deleting their comments at the same time. Now suppose I am logged in as "username2" and try to edit my comment, the 2nd item in the comments array (index position 1). Just before I click "save", username1 logs in and deletes his comment, the 1st item in the array. If my code then tries to update my comment by index position, it will fail, because the array no longer has two items.
Two ideas came to mind, but I'm not crazy about either one:
create some sort of ID on each comment
create a "lastModified" timestamp on the parent document, and only save the edit if nothing has changed on the document.
What is the best way to handle this type of situation? If I really need an ID on each comment, will I have to generate it myself? What data type should it be? Or would it be best to use both of my ideas together? Or is there another option I'm not even thinking about?
Having multiple writers is a key downside of embedded documents, in my opinion. You might want to take a look at this discussion, which presents different solutions. I'd try to avoid multiple writers to one document and use a separate Comments collection instead, where each comment is owned by its author. You can fetch all comments on a post reasonably fast via an indexed postId field. The comments then simply have a regular _id field. It makes sense to use an ObjectId, because it automatically stores the time the comment was created and is roughly monotonically increasing by default.
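An in-memory sketch of the separate-collection idea (the array stands in for a comments collection; `postId` would be indexed in a real deployment, and all names here are illustrative):

```javascript
// Each comment is its own document, keyed by postId.
const comments = [];

function addComment(postId, author, text, _id) {
  comments.push({ _id, postId, author, text });
  return _id;
}

// Edits address a comment by its own _id, so concurrent deletes of
// *other* comments cannot shift it to a different position.
function editComment(_id, author, newText) {
  const c = comments.find(c => c._id === _id && c.author === author);
  if (c) c.text = newText;
  return !!c;
}

const first = addComment(1, "username1", "This is a great blog post!", "c1");
const second = addComment(1, "username2", "Even better than sliced bread!", "c2");
comments.splice(comments.findIndex(c => c._id === first), 1); // username1 deletes his
editComment(second, "username2", "edited"); // still succeeds after the delete
```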
create a "lastModified" timestamp on the parent document, and only save the edit if nothing has changed on the document.
This is called 'optimistic locking', and it's generally not a good fit when there is a high probability of concurrent operations. In the case of blog posts, it's likely that newer posts receive a lot more comments than older ones, so I'd say the collision probability is quite high.
There's yet another nasty side effect: say the blog post author wants to modify the text, but someone adds or removes a comment in the meantime. Now even the blog author wouldn't be able to change the text, unless you use an atomic $set operation on the text and bypass the version check.
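For completeness, a sketch of the optimistic-locking check itself, in-memory (in MongoDB this would be a conditional update whose filter matches on the version field; the field names are my own):

```javascript
let post = { _id: 1, text: "v1 text", version: 1 };

// Update only if the version we read is still current (compare-and-swap).
function saveIfUnchanged(expectedVersion, newText) {
  if (post.version !== expectedVersion) return false; // someone else got there first
  post.text = newText;
  post.version += 1;
  return true;
}

const readVersion = post.version;                        // writer A loads the post
const okA = saveIfUnchanged(readVersion, "A's edit");    // succeeds, version -> 2
const okB = saveIfUnchanged(readVersion, "B's edit");    // fails: stale version
```

Writer B must re-read and retry, which is exactly the cost that makes this scheme painful under frequent concurrent comments.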

Does MongoDB fit here? Modelling event registration with arbitrary extra data

I'm writing a basic event-registration web application, and I'm wondering whether MongoDB would be a good choice for the datastore and, if so, how to model my domain. The app will be very small, so performance and scalability are not a concern. However, when I started to think out the model in RDBMS third-normal-form terms, it sounded quite complicated for what it is, and from the bits and pieces I'm picking up about Mongo this sounded like a typical use case. Is it?
The Application
The app allows creation of events, and for attendees to sign up to those events, giving their name, date of birth, etc. Easy: two tables with an n:n join. The tricky part is that the organisers wish to be able to ask attendees of certain events for information particular to that event; for example, one event might have a question about accommodation preference. I narrowed it down to two types of question: those that require selecting from certain options (an HTML select list) and those that allow free-text answers. By the way, it's a Rails app, in case that matters.
Traditional RDBMS
In an RDBMS I would perhaps need a table for constrained questions (where answers come from a list), a table for answer options, a table for free-text questions, and a table for free-text answers, and I would have to link all of this up appropriately to the event and the attendee via a signup. If you think about it, the links between the tables are rather complicated!
Mongo
Would this be simpler to model in Mongo? I thought that besides the Attendee and Event collections, there could be a Question collection with its allowed answers embedded; if there are no embedded answers, the question is free text. A Signup collection relates an Attendee to an Event, references the id of the relevant Question, and embeds the text of the answer. If the text of an answer option ever changes it might get complicated... but I guess that's the trade-off of Mongo.
Is this a good use case for Mongo, or should I stick with Postgres? Can you suggest a schema (or improve mine)?
MongoDB is an awesome tool for this job. You can pretty much utilize embedded documents here to maximize performance.
Your current schema is perfectly fine. Tweaked a little with embedded documents, it will be a blast.
For instance, instead of keeping the Question collection separate, you can nest it inside Attendee. This lets you store all relevant info about the attendee in a single place:
- Attendee
    - Info
    - Event_id
    - Questions [
        {
            question_id,
            answers: [
                { answer_id 1, or answer text },
                { answer_id 2, or answer text }
            ]
        }
    ]
You can also cache frequently used data about the attendees inside the Event collection.
This will be immensely useful for quickly displaying home page data.
For example, you may need to display the users attending an event, and their count, on the event home page. To do that you would otherwise have to query the Event first and then query the Attendees.
Instead, I suggest you store the attendee_id/name inside the Event as an array, which looks like:
Event
    - Info
    - attendees [
        {
            attendee_id: 'xx',
            name: 'Fletch'
        }
    ]
That way you can populate the event home page with a single DB call to the Event collection: it gives you the minimal info to display about each user, and the total user count for the event is retrieved right there. You query an Attendee only when you need to display more about a user, such as his questions/answers.
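A small in-memory sketch of that denormalization (field names follow the outline above; in MongoDB the sign-up would be a $push onto the event plus a full Attendee document stored elsewhere):

```javascript
// The Event caches a minimal attendee summary so the home page needs one read.
const event = {
  _id: "e1",
  info: "Annual meetup",
  attendees: [], // [{ attendee_id, name }]
};

function signUp(attendeeId, name) {
  // Real app: also insert a full Attendee document with the question answers.
  event.attendees.push({ attendee_id: attendeeId, name });
}

signUp("a1", "Fletch");
signUp("a2", "Jane");

// Home page: names and count come from this single document.
const names = event.attendees.map(a => a.name); // ["Fletch", "Jane"]
const count = event.attendees.length;           // 2
```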
Hope this helps.