I need to design a schema for a multi-room chat application that uses MongoDB for storage. I'm currently using Mongoose v2, and I've thought of the following methods:
Method 1
Each chat log (each room) has its own Mongo collection
Each chat log collection is populated with message documents (schema: from, to, message and time).
There is the user collection
The user collection is populated with user documents (a schema with info regarding the user)
Doubts:
1. How exactly can I retrieve the documents from a specific collection (the chat room)?
Method 2
There is a collection (chat_logs)
The chat_logs collection is populated with message documents (schema: from, to (chatroom), user, etc.)
There is the user collection, as above.
Doubts:
1. Is there a maximum size for collections?
Any advice is welcome... thank you for your help.
There is no good reason to have a separate collection per chatroom. The size of collections is unlimited and MongoDB offers no way to query data from more than one collection. So when you distribute your data over multiple collections, you won't be able to analyze data across more than one chatroom.
There isn't a limit on normal collections. However, do you really want to save every word ever written in any chat, forever? Most likely not. You would rather save, say, the last 1,000 or 10,000 messages written. Or even 100,000. Let's make it 1,000,000. Given the average size of a chat message, this shouldn't be more than 20 MB. So let's make it really safe and multiply that by 10.
What I would do is use a capped collection per chat room and use tailable cursors. You don't need to worry about having too many connections: the average Mongo server can take a couple of hundred of them. The queries can be made tailable quite easily, as shown in the Mongoose docs.
This approach has some advantages:
Capped collections are fast: they are basically FIFO buffers for BSON data.
The data is returned in exact insertion order for free: no sorting, no complex queries, no extra indices.
There is no need to maintain the individual rooms. Simply set a cap on creation and MongoDB will take care of the rest.
As for how to do it: simply create a connection per chat room and save the connections at the application level in an associative array with the chat room name as the key. Use the connections to create a new tailable cursor per request. Use XHR to request the chat data, respond as a stream, and process accordingly.
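A minimal sketch of that application-level bookkeeping, in plain JavaScript. The room names and the cap are made up for illustration, and a bounded in-memory array stands in for the real MongoDB capped collection (in a real app each map entry would wrap a connection/collection handle):

```javascript
// Keep one "room store" per chat room in an associative map, keyed
// by room name. The bounded array mimics a capped collection's
// FIFO behaviour: once the cap is reached, the oldest message drops.
const CAP = 1000; // max messages retained per room (illustrative)

const rooms = new Map();

function getRoom(name) {
  // Lazily create the per-room store on first use.
  if (!rooms.has(name)) {
    rooms.set(name, { name, messages: [] });
  }
  return rooms.get(name);
}

function saveMessage(roomName, msg) {
  const room = getRoom(roomName);
  room.messages.push(msg);
  // FIFO eviction, as a capped collection would do automatically.
  if (room.messages.length > CAP) room.messages.shift();
  return room;
}
```

With a real capped collection, the eviction and the insertion-order reads come for free from MongoDB; the map of per-room connections is the only part the application has to maintain itself.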
Related
I have a collection in Mongo that stores every user action of my application, and it is very large (3 million documents per day). On the UI I have a requirement to show the user actions for a period of at most 6 months.
The queries on this collection are becoming very slow with all the historic data, even though there are indexes in place. So, I want to move the documents that are older than 6 months to a separate collection.
Is this the right way to handle my issue?
Following are some of the techniques you can use to manage data growth in MongoDB:
Using capped collections
Using TTL indexes
Using multiple collections, one per month
Using different databases on same host
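To illustrate the TTL idea: a TTL index tells MongoDB to delete a document once a timestamp field is older than a given number of seconds. Here is a plain-JavaScript sketch of that eviction rule (the `createdAt` field name is an assumption, matching the usual convention):

```javascript
// Sketch of TTL-index semantics: keep only documents whose
// createdAt timestamp is newer than `expireAfterSeconds` ago.
// MongoDB's background TTL monitor applies the same rule for you.
function expireOld(docs, expireAfterSeconds, now = Date.now()) {
  const cutoff = now - expireAfterSeconds * 1000;
  return docs.filter((doc) => doc.createdAt.getTime() >= cutoff);
}
```

In MongoDB itself this is a one-liner at index-creation time (e.g. an index on `createdAt` with `expireAfterSeconds` set to six months); no application code runs at all.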
I'm developing a chat app with node.js, Redis, socket.io and MongoDB. MongoDB comes last, for persisting the messages.
My question is what would be the best approach for this last step?
I'm afraid a collection with all the messages like
{
id,
from,
to,
datetime,
message
}
can get too big too soon and is going to become very slow for reading purposes. What do you think?
Is there a better approach you already worked with?
In MongoDB, you store your data in the format you will want to read it in later.
If what you read from the database is a list of messages filtered on the 'to' field and with a dynamic datetime filter, then this schema is the perfect fit.
Don't forget to add an index on the fields you will be querying on; then it will be reasonably fast to query them, even over millions of records.
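A sketch of the query shape being described: messages for one recipient within a datetime window, newest first. In MongoDB this would be a `find()` backed by a compound index on `{ to: 1, datetime: -1 }`; here plain JavaScript stands in for the query engine, and the field names follow the schema from the question:

```javascript
// Filter on 'to' plus a dynamic datetime window, sorted newest
// first. With a compound index on (to, datetime) MongoDB answers
// this shape of query without scanning the whole collection.
function messagesFor(messages, to, since, until) {
  return messages
    .filter((m) => m.to === to && m.datetime >= since && m.datetime < until)
    .sort((a, b) => b.datetime - a.datetime);
}
```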
If you would, for example, always show a full history of a full day, you would store all messages for a single day in one document. If both types of queries occur a lot, you would even store your messages in both formats.
If storage is an issue, you could also use a capped collection (which keeps the collection at a fixed maximum size) or a TTL index (which automatically deletes messages older than, e.g., 1 year).
I think the db structure is fine, the way you mentioned in your question.
You may assign a unique id to the chat between each pair of users and keep it in each chat record. Retrieve based on that id when you want to show the chat.
Say 12 is the unique id for the chat between A and B; then retrieval should be based on 12 when you want to show the chat for A and B.
So your db structure can be like:
{
id,
from,
to,
datetime,
message,
uid
}
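One simple way to derive such a `uid` is a sketch like the following; deriving it from the two usernames (rather than storing a counter like 12) is an assumption on my part, but it guarantees that the chat between A and B gets the same id no matter who sent the message:

```javascript
// Derive a canonical chat id for a pair of users: sort the two
// names so (A, B) and (B, A) map to the same key.
function chatId(userA, userB) {
  return [userA, userB].sort().join(':');
}
```

Every message between the pair is stored with this value in `uid`, and showing the chat is a single find on that one field.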
Remember, you can optimize retrieval by applying a limit (say, 100 messages at a time). If the user scrolls beyond those 100, retrieve the next 100. This avoids a lot of unnecessary retrieval.
When using a limit, retrieve based on the creation date and use a sort on the find query as well.
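A sketch of that paging scheme (the field names are assumptions): sort by date, fetch one page of 100, and use the oldest timestamp already on screen as the cursor for the next page, the way a `find().sort().limit(100)` with a datetime filter would behave:

```javascript
// Fetch one page of messages, newest first. `before` is the
// datetime cursor: pass the oldest timestamp already loaded to get
// the next (older) page. PAGE_SIZE mirrors a find().limit(100).
const PAGE_SIZE = 100;

function pageOfMessages(messages, before = Infinity) {
  return messages
    .filter((m) => m.datetime < before)
    .sort((a, b) => b.datetime - a.datetime)
    .slice(0, PAGE_SIZE);
}
```

Cursor-style paging like this stays fast as the user scrolls deeper, whereas `skip()`-based paging gets slower the further back you go.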
Just a thought here: are the messages plain text, or are you allowed to share images and videos as well?
If it's the latter, then storing all the chats for a single day in one document might not work out.
If image and video sharing is allowed, you also need to take the 16 MB document size restriction into account.
We have a service that allows people to open a room and play YouTube songs while others listen in real time.
Among other collections in our MongoDB we have one to store the songs users add to rooms' playlists; it is called userSong.
This collection holds records for all songs added for the combination of: user-room-song.
The code makes frequent queries to the collection in those major operations:
Loading current playlist (regular find with a trivial condition)
Loading random song for a room (using Mongo aggregation FW)
Loading room top songs (using Mongo aggregation FW)
Now this collection has become big (1M+ records) and things have started to slow down. AWS is sending us CPU utilization notifications more and more often, and according to mongotop the userSong collection is responsible for most of the CPU consumption, mostly in READ operations.
We made some modifications to the collection's indexes and it helped a bit, but it's still not a solution; we need to find some other way to arrange the data because it is growing exponentially.
We thought about splitting the userSong data into a lower-level segmentation: instead of one collection keyed by user-room-song, a collection of user-song records for each room in the system. This would shorten the time needed to fetch data from the DB. Now we need to decide how to do it:
Make a new collection for each room (roomUserSong) that will hold all user-song records for a particular room. This might be good for quick fetching, but it will create an unlimited number of new collections in the database (roomusersong-1, roomusersong-2, ..., roomusersong-n), and we don't know whether that is good practice or whether there are other Mongo limitations with that kind of solution.
Create just 1 more collection in the DB with the following fields:
{room: <roomId>, userSongs: [userSong1, userSong2, ..., userSongN]}, so each room will have its own document, and inside it a subdocument (an array) that holds all user-song records for that room. This solves the previous issue (creating unlimited collections), but it will be very hard to work with in Mongoose (our ODM), because (as far as I know) we cannot define a schema in advance for such a data structure. Also, this may run into the 16 MB document size limitation, as far as I understood.
It would be nice to hear some advice from people who have Mongo experience with these kinds of situations:
Is 1M+ records really considered big, and could it be the cause of these CPU utilization issues? (We're using an AWS m3.medium with one core.)
Which of the approaches introduced above is the better solution?
Any other ideas for smart caching without changing the code too much?
Thanks, helpers!
I have a chatroom system, and I want to use MongoDB as the backend database. The following are the entities:
Room - a chatroom (room_id)
User - a chatting user in chatroom (room_id, user_name)
Msg - a message in chatroom (room_id, user_name, message)
For designing the schema, I have some ideas: First, 3 collections - room, user and msgs - and there is a parent reference in user and msg documents.
Another idea is to create collections for each room. Such as
db.chatroom.victor
db.chatroom.victor.users
db.chatroom.victor.msgs
db.chatroom.john
db.chatroom.john.users
db.chatroom.john.msgs
db.chatroom.tom
db.chatroom.tom.users
db.chatroom.tom.msgs
...
I think if I can divide the documents into different collections, it will be much more efficient to query. Also, I can use capped collections to limit the number of messages in each room. However, I am not familiar with MongoDB. I'm not sure if there is any side effect to doing that, or any performance problem with creating lots of collections. Is there any guideline for designing a MongoDB schema?
Thanks.
You should always design your schema by answering 2 questions:
What data should I store? (temporarily/permanently)
How will I access that data? (lots of reads on this, lots of writes on that, random r/w here)
You don't want to embed high-access-rate data in a document (like chat messages that are accessed by every user every second or so); it's better to have it as a separate collection.
On the other hand, the collection of users in a chat room changes rather rarely, so you can definitely embed that.
Just design using common sense and you'll be fine.
You definitely do not want to embed messages inside of other documents. They need to be stored as individual documents.
I say this because MongoDB allocates a certain amount of space for every document it writes. When it writes a document, it takes its current size and adds some empty space (padding) to the document so that if it is actually 1k large, it may become 1.5k large to leave space for the document to grow in size.
Chat messages will almost definitely each be larger than the allocated free space. Multiple messages will absolutely be larger than the free space.
The problem is that when a document doesn't fit in its current location on disk/in memory and you try to embed another document inside of it (via an update), the database must read that document and rewrite the entire thing at the tail end of the data file.
This causes a lot of disk activity that should otherwise not exist; that added I/O will destroy the performance of the database.
Think about all of the use cases. What kind of queries do you want to execute?
"Show me the rooms where a user is chatting". This won't work with the current schema; you have to add a list of rooms to the user, or keep it in a separate collection, to make it work.
"Show me all the messages a user sent in all of the rooms". This, again, won't work.
"Delete all the idle users from all the rooms". With the proposed schema you have to run this query on every room.users collection.
Also, do some approximation of the size of your collections. If you have 100 rooms with at most 1,000 users each, that's 100,000 entries for a collection where you store all these mappings. With an index on room and user that shouldn't be an issue; you don't need separate collections.
With 100 users you can even embed this to the room object as an array. Just make sure there is an index.
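A sketch of that embedded variant (the field names are illustrative): the room document carries its own small users array, and membership changes just mutate that array instead of touching a separate mapping collection:

```javascript
// Embed the (small) user list directly in the room document.
// Fine while membership stays in the low hundreds; messages, by
// contrast, should stay in their own collection as argued above.
const room = {
  _id: 'room42',
  name: 'general',
  users: ['alice', 'bob'],
};

function join(roomDoc, userName) {
  // Idempotent: joining twice leaves a single entry.
  if (!roomDoc.users.includes(userName)) roomDoc.users.push(userName);
  return roomDoc;
}

function leave(roomDoc, userName) {
  roomDoc.users = roomDoc.users.filter((u) => u !== userName);
  return roomDoc;
}
```

In MongoDB the same operations map onto `$addToSet` and `$pull` updates against the room document, with an index on the array field if you query by member.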
I have this schema for support of in-site messaging:
When I send a message to another member, the message is saved to Message table; a record is added to MessageSent table and a record per recipient is added to MessageInbox table. MessageCount is being used to keep track of number of messages in the inbox/send folders and is filled using insert/delete triggers on MessageInbox/MessageSent - this way I can always know how many messages a member has without making an expensive "select count(*)" query.
Also, when I query member's messages, I join to Member table to get member's FirstName/LastName.
Now, I will be moving the application to MongoDB, and I'm not quite sure what the collection schema should be. Because there are no joins available in MongoDB, I have to completely denormalize it, so I would have MessageInbox, MessageDraft and MessageSent collections with full message information, right?
Then I'm not sure about following:
What if a user changes his First/LastName? It will be stored denormalized as the sender in some messages and as part of the Recipients in other messages; how do I update it in an optimal way?
How do I get message counts? There will be tons of requests at the same time, so it has to be performing well.
Any ideas, comments and suggestions are highly appreciated!
I can offer you some insight as to what I have done to simulate JOINs in MongoDB.
In cases like this, I store the ID of a corresponding user (or multiple users) in a given object, such as your message object in the messages collection.
(I'm not suggesting this be your schema, just using it as an example of my approach.)
{
_id: "msg1234",
from: "user1234",
to: "user5678",
subject: "This is the subject",
body: "This is the body"
}
I would query the database to get all the messages I need then in my application I would iterate the results and build an array of user IDs. I would filter this array to be unique and then query the database a second time using the $in operator to find any user in the given array.
Then in my application, I would join the results back to the object.
It requires two queries to the database (or potentially more if you want to join other collections) but this illustrates something that many people have been advocating for a long time: Do your JOINs in your application layer. Let the database spend its time querying data, not processing it. You can probably scale your application servers quicker and cheaper than your database anyway.
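The two-query join described above can be sketched like this in plain JavaScript, with a callback standing in for the second database round trip (the `$in` query); the field names follow the example message object:

```javascript
// Application-layer join: the first "query" has already returned
// the messages; we then collect the unique user ids they reference,
// fetch those users in one $in-style lookup, and stitch each user
// object back onto its message in memory.
function joinMessagesWithUsers(messages, findUsersByIds) {
  // Deduplicated list of user ids referenced by the messages.
  const ids = [...new Set(messages.flatMap((m) => [m.from, m.to]))];
  // Second round trip: equivalent of users.find({ _id: { $in: ids } }).
  const users = findUsersByIds(ids);
  const byId = new Map(users.map((u) => [u._id, u]));
  return messages.map((m) => ({
    ...m,
    fromUser: byId.get(m.from),
    toUser: byId.get(m.to),
  }));
}
```

Because both lookups hit indexed fields (`_id`), the two round trips stay cheap, and the stitching work lands on the application servers rather than the database.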
I am using this pattern to create real-time activity feeds in my application and it works flawlessly and fast. I prefer this to denormalizing things that could change, like user information, because when writing to the database, MongoDB may need to re-write the entire object if the new data doesn't fit in the old data's place. If I needed to rewrite hundreds (or thousands) of activity items in my database, it would be a disaster.
Additionally, writes in MongoDB are blocking, so if a scenario like the one I've just described were to happen, all reads and writes would be blocked until the write operation completed. I believe this is scheduled to be addressed in some capacity in the 2.x series, but it's still not going to be perfect.
Indexed queries, on the other hand, are super fast, even if you need to do two of them to get the data.