I have an issue designing a database in MongoDB.
In general, the system will continuously gather user insight data (e.g. likes, retweets, views) from the APIs of different social websites (Twitter API, Instagram API, Facebook API), with each channel arriving at a different rate. It will also save a snapshot of each insight every hour as historical data. The current real-time insights should be viewable by users on the website.
Should I save the current insight data in a cache and the historical insight data in documents?
What is the expected write rate and query rate?
What rate will the dataset grow at?

These are key questions that will determine the size and topology of your MongoDB cluster. If your write rate does not exceed the write capacity of a single node, you should be able to host your data on a single replica set. However, this assumes that your data set is not large (over 1 TB). At that size, recovery from a single-node failure can be time-consuming; it will not cause an outage, but the longer a single node is down, the higher the risk of a second node failing.
In both cases (write capacity exceeds a single node, or the dataset is larger than 1 TB), the rough guidance is that this is the time to consider a sharded cluster. Design of a sharded cluster is beyond the scope of a single StackOverflow answer.
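For concreteness, one possible document model (a sketch only; the collection and field names are my own, not from the question or answer) keeps a small "current" document per account and channel that is overwritten on every poll, plus an append-only hourly history collection:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient()["insights"]  # assumes a local mongod for illustration

def record_current(account_id, channel, metrics):
    # Latest real-time numbers, overwritten on every API poll; this is
    # the collection (or a cache in front of it) the website reads.
    db.current.replace_one(
        {"accountId": account_id, "channel": channel},
        {"accountId": account_id, "channel": channel,
         "metrics": metrics, "updatedAt": datetime.now(timezone.utc)},
        upsert=True,
    )

def snapshot_hourly(account_id, channel, metrics):
    # Append-only historical record, written once per hour.
    db.history.insert_one({"accountId": account_id, "channel": channel,
                           "metrics": metrics,
                           "hour": datetime.now(timezone.utc)})
```

Whether the "current" data also lives in a cache (e.g. Redis) in front of MongoDB then becomes purely a latency decision; the historical collection is the system of record either way.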
I will describe the data and case.
record {
    customerId: "id",     // indexed
    binaryData: "data"    // not indexed
}
Expectations:
customerId is a random 10-digit number
Average size of a record's binary data: 1-2 kilobytes
There may be up to 100 records per customerId
Overall number of records: 500M
Write pattern #1: insert one record at a time
Write pattern #2: batches, possibly in parallel, at a rate of at least 20M records per hour
Search pattern #1: find all records by customerId
Search pattern #2: find all records for a group of customerIds, in parallel, at a rate of at least 10M customerIds per hour
The data is not critical; we can trade some aspects of reliability for speed
We plan to run in AWS / GCP; ideally the key-value store is managed by the cloud provider
We want to spend no more than 1K USD per month on cloud costs for this solution
What we have tried:
We have this approach implemented in a relational database, AWS RDS MariaDB. The server has 32 GB RAM, 2 TB GP2 SSD, and 8 CPUs. I found that IOPS usage was high and insert speed was not satisfactory. After investigation, I concluded that, due to the random nature of customerId, there is a high rate of random writes to the index. After this I did the following:
input data is sorted by customerId ASC
An additional trade-off was made to reduce index size at the cost of a small degradation in single-record read speed. For this I implemented a kind of bucketing, where records 1111111185 and 1111111186 go into the same "bucket" 11111111 (the first eight digits of the customerId). This way a bucket can't contain more than 100 customerIds, so read speed remains acceptable while write speed improves; a sketch follows.
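For illustration, a minimal sketch of that bucketing scheme (the function name is mine; the use of the first eight digits as the bucket key follows the description above):

```python
def bucket_key(customer_id: int) -> int:
    # A 10-digit customerId maps to its first 8 digits, so each bucket
    # covers at most 100 consecutive customerIds (the last two digits).
    # The index only has to track the 8-digit prefix, shrinking the
    # index, while a read scans a bounded number of records per bucket.
    return customer_id // 100

assert bucket_key(1111111185) == bucket_key(1111111186) == 11111111
```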
Even so, I could not achieve more than 1-3M record writes per hour. Different write concurrencies were tested; the current value is 4 concurrent writers. After all these modifications it's not clear what else we can improve:
IOPS usage is not at its ceiling (~4K per second),
CPU use is not high,
Network is not fully utilized,
Write and read throughputs are not capped.
Apparently, ACID guarantees are holding us back. I am looking for a horizontally scalable key-value store and would be glad to hear any ideas and rough estimates.
So if I understand you...
2 KB × 500M records ≈ 1 TB of data
20M writes/hr ≈ 5.5K writes/sec
That's quite doable in NoSQL.
The scale is not the issue. It's your cost.
$1k a month for 1 TB of data sounds like a reasonable goal. I just don't think that the public clouds are quite there yet.
Let me give an example with my recommendation: Scylla Cloud and Scylla Open Source. (Disclosure: I work for ScyllaDB.)
I will caution you that your $1k/month cap on costs might force you to consider and make some tradeoffs.
As is typical in high-availability deployments, to ensure data redundancy in case of node failure, you could use 3x i3.2xlarge instances on AWS (which can store 1.9 TB per instance).
You want the extra capacity to run compactions. We use incremental compaction, which saves on space amplification, but you don't want to go with the i3.xlarge (0.9 TB each), which is under your 1 TB requirement, unless you are really pressed for costs. In that case you'll have to do some sort of data eviction (like a TTL) to keep your data set to around <600 GB.
Even with annual reserved pricing for Scylla Cloud (see here: https://www.scylladb.com/product/scylla-cloud/#pricing) of $764.60/server, to run the three i3.2xlarge would be $2,293.80/month. More than twice your budget.
Now, if you eschew managed services and want to run self-service, you could go with Scylla Open Source and just look at the on-demand instance pricing (see here: https://aws.amazon.com/ec2/pricing/on-demand/). For 3x i3.2xlarge, you are running each at $0.624/hour. That's a raw on-demand cost of $449.28 each per month (roughly 720 hours), which doesn't include incidentals like backups, data transfer, etc. So you could get three instances for $1,347.84/month. Open source. Not managed.
Still over your budget, but closer. If you could get reserved pricing, that might just make it.
Edit: Found the reserved pricing:
3x i3.2xlarge is going to cost you:
At monthly pricing: $312.44 × 3 = $937.32/month, or
At 1-year up-front pricing: $3,482/year ÷ 12 = $290.17/month/server × 3 = $870.50/month.
So, again, backups, monitoring, and other costs come on top of that. But you should be able to bring the raw server cost under $1,000 to meet your needs using Scylla Open Source; the arithmetic is sketched below.
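To make the arithmetic above easy to re-check, a minimal sketch (the prices are the ones quoted above and will drift over time):

```python
HOURS_PER_MONTH = 720  # the approximation behind the on-demand figure

# On-demand: 3x i3.2xlarge at $0.624/hour
on_demand = 3 * 0.624 * HOURS_PER_MONTH       # ≈ $1,347.84/month

# Reserved, monthly pricing: $312.44/server/month
reserved_monthly = 3 * 312.44                 # ≈ $937.32/month

# Reserved, 1-year up-front: $3,482/server/year
reserved_upfront = 3 * 3482 / 12              # ≈ $870.50/month

for label, cost in [("on-demand", on_demand),
                    ("reserved, monthly", reserved_monthly),
                    ("reserved, 1yr up-front", reserved_upfront)]:
    print(f"{label}: ${cost:,.2f}/month")
```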
But the admin burden is on your team (and their time isn't exactly zero cost).
For example, if you want monitoring on your system, you'll need to set up something like Prometheus, Grafana, or Datadog. That means other servers or services, and they aren't free. (The cost of backups and monitoring by our team is covered with Scylla Cloud; it is part of the premium for the service.)
Another way to save money is to run only 2x replication, which puts your data in a really risky place if you lose a server. It is not recommended.
All of this was based on maximal assumptions about your data: that your records are all around 2 KB (not 1 KB), and that you're not getting much utility out of data compression, which ScyllaDB has built in – see part one (https://www.scylladb.com/2019/10/04/compression-in-scylla-part-one/) and part two (https://www.scylladb.com/2019/10/07/compression-in-scylla-part-two/).
To my mind, you should be able to squeak through within your $1k/month budget if you go with reserved pricing and open source. Though adding monitoring, backups, and other incidental costs (which I haven't calculated here) may put you back over that number again.
Otherwise, $2.3k/month in a fully-managed-cloud enterprise package and you can sleep easy at night.
Recently I got this alert from MongoDB Atlas:
Disk I/O % utilization on Data Partition has gone above 70 on nvme2n1
But I have no idea how to localize the problem: which query, index, piece of code, or collection is responsible.
How can I analyze this to find the root cause?
Not an answer, but I have seen that many people face a similar problem.
In my case the root cause was: we had a collection with huge documents containing arrays of data (in fact, lists of coordinates with some metadata), and we updated each document as many times as it had coordinates (once per newly added coordinate), plus some additional operations.
As far as I know, MongoDB cannot fetch just part of a document; it fetches the whole document. When we fetch many different, big documents, they do not fit into MongoDB's in-memory cache, so each access hits the disk, and that leads to this issue.
So we simply split each document up into several smaller ones, and this fixed the issue. While we need frequent access to update/add this data, we keep it in separate documents; finally, after the process is done, we gather all these documents back into one big document for "history check" purposes. A sketch of the pattern follows.
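A minimal pymongo sketch of that split pattern (the collection names, field names, and 1000-point chunk size are illustrative; the original answer shows no code):

```python
from pymongo import MongoClient

db = MongoClient()["tracking"]  # assumes a local mongod for illustration
CHUNK_SIZE = 1000               # points per chunk document; illustrative

def add_coordinate(trip_id, point):
    # Append to the newest non-full chunk; the upsert rolls over to a
    # fresh chunk document once the current one is full, so no single
    # document grows without bound and each update touches a small doc.
    db.track_chunks.update_one(
        {"tripId": trip_id, "count": {"$lt": CHUNK_SIZE}},
        {"$push": {"points": point}, "$inc": {"count": 1}},
        upsert=True,
    )

def finalize_trip(trip_id):
    # After processing is done, gather the chunks back into one big
    # document for "history check" purposes.
    chunks = db.track_chunks.find({"tripId": trip_id}).sort("_id", 1)
    points = [p for c in chunks for p in c["points"]]
    db.track_history.replace_one({"tripId": trip_id},
                                 {"tripId": trip_id, "points": points},
                                 upsert=True)
```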
Recently, we got this alert on MongoDB Atlas: Disk I/O % utilization on Data Partition has gone above 90, after the instance's reboot for maintenance. After a discussion with the Atlas support team, we now clearly understand this metric.
Understanding Disk I/O % Utilization
The definitions of Disk I/O % Utilization and Disk I/O % utilization on Data Partition, per the docs:
Disk I/O % Utilization alerts indicate that the percentage of time during which requests are being issued reaches a specified threshold.
Disk I/O % utilization on Data Partition occurs if the percentage of time during which requests are being issued to any partition that contains the MongoDB collection data meets or exceeds the threshold.
Two traps in iostat: %util and svctm
Device saturation occurs when this value is close to 100% for devices serving requests serially. But for devices serving requests in parallel, such as RAID arrays and modern SSDs, this number does not reflect their performance limits.
This means that if there was even just one I/O operation in progress for a given time period, the operating system would report 100% disk utilization, because the disk was in use 100% of that time.
Thus, the disk utilization percentage by itself is NOT an indicator of stress on the disk relative to its maximum IOPS capacity.
Having disk utilization at 100% does not in itself imply there is an issue. Disk utilization is the percentage of time requests are issued to any partition containing the MongoDB collection data. This includes requests from any process, not just MongoDB processes. Modern disk storage can sustain multiple I/O operations simultaneously, so having a ~100% utilization is not unusual, because it just means that the disk is constantly processing at least one operation during the 100% interval.
Conclusion
We should look at a combination of all the available disk-related metrics, as well as IOWait in the System CPU when diagnosing potential disk performance-related issues.
Possible actions to help resolve Disk Utilization % alerts
Optimize your queries
Create an Index to Support Read Operations (see the sketch after this list)
Pay attention to Query Selectivity and Covered Query
Use the Atlas Performance Advisor to view slow queries and suggested indexes.
Review Indexing Strategies for possible further indexing improvements.
Analyze Query Performance to review how your queries are using your indexes.
Use the Profiler to optimize queries with long execution times
Increase hardware resources, such as instance size and IOPS on Atlas
Source: MongoDB documentation
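As an illustration of the first two items in the list above, a minimal pymongo sketch (the collection and field names are hypothetical):

```python
from pymongo import ASCENDING, MongoClient

db = MongoClient()["app"]  # assumes a local mongod for illustration

# An index that supports the read below. Because the projection
# excludes _id and returns only indexed fields, the query can be
# covered: answered from the index alone, without fetching documents.
db.events.create_index([("userId", ASCENDING), ("ts", ASCENDING)])

cursor = db.events.find(
    {"userId": 42, "ts": {"$gte": 1700000000}},
    {"_id": 0, "userId": 1, "ts": 1},  # projection keeps it covered
)
```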
As the alert says, it is due to high utilization of the disk. The most common cause is unoptimized queries with a poor query targeting ratio, or simply reading/writing a lot of documents from/to the disk in a relatively short time window.
In order to identify these queries, start with the Profiler and look for operations with a poor Examined:Returned ratio; a sketch of such a check follows below. You can also refer to the Performance Advisor to see if it suggests any indexes for the inefficient operations. Since the Profiler's window is limited to the last 24 hours, you can also check your logs to identify slow queries.
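A minimal sketch of pulling poorly targeted operations out of the profiler (the 100:1 threshold is an arbitrary illustration, and profiling level 2 records everything, so prefer level 1 with a slowms threshold in production):

```python
from pymongo import MongoClient

db = MongoClient()["app"]  # assumes a local mongod for illustration
db.command("profile", 2)   # profile all operations (heavy; debugging only)

for op in db["system.profile"].find({"nreturned": {"$gt": 0}}):
    ratio = op.get("docsExamined", 0) / op["nreturned"]
    if ratio > 100:        # arbitrary cut-off for a "poor" ratio
        print(ratio, op.get("ns"), op.get("command"))
```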
Ultimately, the effort to solve this goes in three directions:
Optimize query execution with efficient indexing and filtering strategies.
Keep a check on the volume of data being read/written in one go.
Increase the IOPS of the cluster.
For official reference, check out the documentation here.
Our software design assumes a database per user, in an attempt to partition the data and later be able to distribute and load-balance per user.
We noticed that the mongod process is taking a lot of memory even when no user has ever logged in.
So I would like to know how/when the loading occurs, if there is a setting that could do some lazy loading or if there is a better strategy to achieve what we want.
Thank you
I believe I got the answer by digging into the MongoDB documentation: databases are always "loaded", regardless of whether they are active or idle.
To calculate how much RAM you need, you must calculate your working set size, or the portion of your data that clients use most often. This depends on your access patterns, what indexes you have, and the size of your documents. Because MongoDB uses a thread per connection model, each database connection also will need up to 1 MB of RAM, whether active or idle.
So with a very high number of databases, even when only a few are active, memory could be a big concern, and our strategy apparently needs to change. A rough sketch of the arithmetic is below.
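A back-of-the-envelope sketch of the connection cost quoted above (the database and connection counts are made up purely for illustration):

```python
# Per the quoted docs, each connection may need up to 1 MB of RAM,
# whether active or idle. Hypothetical numbers:
databases = 10_000
connections_per_db = 2  # e.g., one pooled app connection plus monitoring

connection_ram_mb = databases * connections_per_db * 1  # up to 1 MB each
print(f"Connections alone: up to {connection_ram_mb / 1024:.1f} GB RAM")
# ≈ 19.5 GB before counting the working set (data + indexes) itself.
```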
Does anybody know (from personal experience or official documentation) how many concurrent requests a single MongoDB server can handle before sharding is advised?
If your working set exceeds the RAM you can afford for a single server, or your disk I/O requirements exceed what you can provide on a single server, or (less likely) your CPU requirements exceed what you can get on one server, then you'll need to shard. All these depend tremendously on your specific workload. See http://docs.mongodb.org/manual/faq/storage/#what-is-the-working-set
One factor is hardware, though for that you have replica sets: they reduce the load on the primary by answering read-only queries from replicated data. Another option would be memcached-style caching for very frequent and repetitive queries, which would be even faster.
Another factor for whether sharding is necessary is data size and variation. When you need to access a wide range of varying data, which would render a server's cache ineffective by spreading accesses across that whole range, then you would consider sharding. Off-loading work is merely a side effect of this.
I have been reading about MongoDB and Cassandra. MongoDB is master/slave, whereas Cassandra is masterless (all nodes are equal). My doubt is about how the data is stored in each of them.
Let's say a user issues a write request to MongoDB (a cluster with a master and several slaves, each on a separate machine). This means the master will decide (possibly via some application-level logic) which slave this update should be written to. That is, the same data will not be available on all nodes in MongoDB, and each node's size may vary. Am I right? Also, when queried, will the master know which node the request should be sent to?
In the case of Cassandra, the same data will be written to all the nodes, i.e., effectively if one node holds 10 GB, then every other node also holds 10 GB. Only if this is the case will the user lose no data when one node fails, by querying another node. Am I right here? If I am right and the same data is available on all nodes, then what is the advantage of using the map/reduce function in Cassandra? If I am wrong, then how is availability maintained in Cassandra, since the same data will not be available on the other nodes?
I searched Stack Overflow for MongoDB vs Cassandra and read some 10 posts, but my questions were not cleared up by the answers in those posts. Please clear up my doubts, and if I have assumed wrongly, correct me as well.
Regarding MongoDB, yep, you're right: there is only one primary.
Any secondary can become primary as long as it is in sync, since that means the secondary has all the data. Each node doesn't have to be the same on-disk size, and this can vary depending on when replication was done; however, they do hold the same data (as long as they're in sync).
I don't know much about Cassandra, sorry!
I've written a thesis about NoSQL stores, and therefore I hope I remember most parts correctly for Cassandra:
Cassandra is a mixture of Amazon's Dynamo, from which it inherits replication and sharding, and Google's BigTable, from which it got its data model. So Cassandra basically shards your data while keeping copies of it on other nodes. Let's take a five-node cluster, with nodes called A to E. Your keys are hashed onto the keyring through consistent hashing, where contiguous areas of the keyring are stored on a given node. So if we have a value range from 1 to 100, by default each node will get 1/5 of the ring: A will cover [1,20), B [20,40), and so on. A toy version of this assignment is sketched below.
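A toy sketch of that ring assignment (real Cassandra hashes keys into a much larger token space; this uses a 0-based version of the 1-100 range from the example above):

```python
import hashlib

RING_SIZE = 100
NODES = ["A", "B", "C", "D", "E"]  # each owns a contiguous 1/5 of the ring

def owner(key: str) -> str:
    # Stand-in for Cassandra's token hash: map the key to a ring
    # position in [0, 100), then pick the node owning that range
    # (A: [0, 20), B: [20, 40), ..., E: [80, 100)).
    position = int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE
    return NODES[position * len(NODES) // RING_SIZE]

print(owner("user:42"))  # deterministic: a key always lands on one node
```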
An important concept from Dynamo is the triple (R,W,N), which tells how many nodes must acknowledge a read (R), acknowledge a write (W), and keep a copy of a given value (N); choosing R + W > N is what gives you consistent reads.
By default you have N = 3 copies of your data, stored on the primary node for the key plus the two following nodes, which hold the backups. If I remember the Dynamo paper correctly, your writes go by default to the first W nodes of your N copies; the other nodes are updated eventually via a gossip protocol.
As long as everything is going fine you'll get consistent results. If your primary node is down for some time, another node takes your data through a hinted hand-off. Once the primary comes back, your data will be merged, or an attempt will be made to merge it (this part I can't really remember, but look up the vector clocks that are used to track the update history).
So if not-too-big parts of your cluster go down, you'll have a consistent view of your data. If bigger parts of your cluster are down, or you read from only a small part of your copies, you may see inconsistencies, which will (possibly) eventually become consistent.
Hope that helped. I can highly recommend reading the original papers on Amazon Dynamo and Google BigTable, though I think you're mostly interested in Amazon Dynamo. Additionally, this post from Werner Vogels may come in handy as well.
As for the shard sizes, I think they can vary depending on your machines and on how hot given areas of your keyring are.
Cassandra does not, typically, keep all data on all nodes. As you suggest, this would defeat some of the advantages offered by its distributed data model (in particular, fast writes would be hampered). The amount of replication desired (how many nodes should keep copies of your data) is configurable by the client at write time. As such, you can set it up to replicate across all nodes, or keep your data on a single node with no replication; it's up to you. The specific node(s) to which the data gets written is determined by the hash value of the key. Each node is assigned a range of hash values it will store, so when you look up a value, the key is hashed again, and that indicates which node holds the data.