On EC2, why do I need individual EBS volumes for journal, log, and data?

According to this MongoDB tutorial which explains how to manually deploy MongoDB on EC2, one of the steps states that you should have:
"Individual PIOPS EBS volumes for data (1000 IOPS), journal (250 IOPS), and log (100 IOPS)."
Why do I need individual EBS volumes for journal, log, and data?
Can I just combine these into one EBS volume?

The MongoDB team has presumably found that the IOPS needs are highest for data, lowest for the log, and somewhere in the middle for the journal. Although I am less familiar with MongoDB, I suspect some of the reasons they suggest separate EBS volumes include:
cost saving: provisioning just the right amount of IOPS for each need saves money. If everything sat on a single volume, you would have to provision the whole thing at the highest rate and end up paying more (a rough provisioning sketch follows this list)
snapshots: you could snapshot the data volume on a different (more frequent?) schedule
contention: data, journaling and logging will not contend with each other if they are on different volumes
scaling: you could scale the data volume separately from the journal and log volumes
risk reduction: if the data volume has trouble, you could restore from backup and reapply the journal (I assume you can), and analyze the logs too
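To make the cost point concrete, here is a rough sketch of provisioning the three volumes with the AWS CLI (the sizes, the availability zone, and io1 as the PIOPS volume type are assumptions; prices vary by region):
aws ec2 create-volume --volume-type io1 --iops 1000 --size 200 --availability-zone us-east-1a   # data
aws ec2 create-volume --volume-type io1 --iops 250 --size 25 --availability-zone us-east-1a     # journal
aws ec2 create-volume --volume-type io1 --iops 100 --size 10 --availability-zone us-east-1a     # log
Because PIOPS volumes are billed per provisioned IOPS as well as per GB, each volume can be sized to exactly what that workload needs instead of provisioning one large volume for the combined worst case.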

The reason for separating your deployment storage across 3 volumes is that database journal files and log files are sequential in nature, and as such, have different access patterns compared to data files. Separating data files from journal and/or log files, particularly with a write intensive workload, will provide an increase in performance by reducing I/O contention. Depending on your workload, and if you are experiencing high I/O wait times, you may be able to benefit from separate disks for your data files, journal, and log files.
The answer was taken from https://www.mongodb.com/blog/post/maximizing-mongodb-performance-on-aws
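For reference, a minimal sketch of how the three volumes might be laid out on the instance, assuming they appear as /dev/xvdf, /dev/xvdg and /dev/xvdh and that you use XFS (the device names and filesystem are assumptions):
sudo mkfs.xfs /dev/xvdf && sudo mkdir -p /data && sudo mount /dev/xvdf /data         # data files (1000 IOPS)
sudo mkfs.xfs /dev/xvdg && sudo mkdir -p /journal && sudo mount /dev/xvdg /journal   # journal (250 IOPS)
sudo mkfs.xfs /dev/xvdh && sudo mkdir -p /log && sudo mount /dev/xvdh /log           # mongod log (100 IOPS)
sudo ln -s /journal /data/journal   # mongod looks for the journal directory under its dbpath
mongod --dbpath /data --logpath /log/mongod.log --fork   # adjust ownership/permissions for the mongod user as needed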

Related

Is there any drawback to MongoDB data being on Amazon EFS?

I have a relatively low-traffic system, but I want to keep the data safe. The data are stored in a single MongoDB instance. I don't want to run multiple replicas and manage them, so I'm planning to point the data directory at an EFS path to take advantage of its replication and other benefits. Periodic snapshots can cause data loss, and recovery from them is manual.
Is there any drawback of storing the data and the journal files on EFS caused by the additional latency?
As you alluded to, EFS objects are replicated across availability zones. By contrast, EBS volumes are only replicated within a single availability zone. The difference in pricing is significant, with EFS currently starting at $0.30/GB and EBS starting at $0.10/GB. Typical EFS use cases are data that needs to be shared across instances, like user home directories and application data. EBS is also capable of providing the lowest latency.
With those points in mind, I do not recommend EFS for MongoDB data. If EFS's multi-AZ replication is your major desire, you could achieve it with EBS by taking periodic snapshots (which are stored in S3) of the EBS volume. I think EBS will give you better performance and lower cost.
Using EFS is not really an alternative to running multiple MongoDB instances. Replication and sharding are not things that EFS can help achieve.
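If you go the EBS-snapshot route, a rough illustration (the volume ID is a placeholder; schedule it with cron or Amazon Data Lifecycle Manager, and keep the journal on the same volume, or use db.fsyncLock(), so the snapshot is consistent):
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly mongodb data snapshot"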

Performance benchmarks for attaching read-only disks to Google Compute Engine

Has anyone benchmarked the performance of attaching a singular, read-only disk to multiple Google Compute Engine instances (i.e., the same disk in read-only mode)?
The Google documentation ( https://cloud.google.com/compute/docs/disks/persistent-disks#use_multi_instances ) indicates that it is OK to attach multiple instances to the same disk, and personal experience has shown it to work at a small scale (5 to 10 instances), but soon we will be running a job across 500+ machines (GCE instances). We would like to know how performance scales out as the number of parallel attachments grows, and as the bandwidth of those attachments grows. We currently pull down large blocks of data (read-only) from Google Cloud Storage Buckets, and are wondering about the merits of switching to a Standard Persistent Disk configuration. This involves Terabytes of data, so we don't want to change course, willy-nilly.
One important consideration: It is likely that code on each of the 500+ machines will try to access the same file (400MB) at the same time. How do buckets and attached drives compare in that case? Maybe the answer is obvious - and it would save having to set up a rigorous benchmarking system (across 500 machines) ourselves. Thanks.
Persistent disks on GCE should have consistent performance. Currently that is 12MB/s and 30IOPS per 100GB of volume size for a standard persistent disk:
https://cloud.google.com/compute/docs/disks/persistent-disks#pdperformance
Using it on multiple instances should not change the disk's overall performance. It will, however, make it easier to hit those limits, since you no longer have to worry about a single instance's maximum read speed. Accessing the same data many times at once might matter, though; I do not know how either persistent disks or GCS handle that kind of contention.
If it is only a 400MB file that is in contention, it may make sense to just benchmark the fastest method of delivering it separately. One possible solution is to make duplicates of your critical file and have each node pick one at random; this should cause fewer nodes to contend for any given copy.
Duplicating the critical file also means a bigger disk, which in turn raises its I/O limits (standard persistent disk performance scales with volume size). If you already intended to increase your volume size for better performance, the copies are effectively free.
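For what it's worth, attaching one disk read-only to many instances looks roughly like this with gcloud (instance and disk names are placeholders):
gcloud compute instances attach-disk worker-001 --disk shared-dataset --mode ro
And the duplicate-file idea above could be as simple as each node picking one of, say, 8 copies at random (a sketch; the mount path and copy count are assumptions):
idx=$(( RANDOM % 8 )); cp /mnt/shared-dataset/bigfile.$idx /tmp/bigfile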

Do we need Provisioned IOPS for RDS instance that's using 60 IOPS according to monitoring?

We have a PostgreSQL instance serving tens of read/write queries per second.
Instance type: db.m3.2xlarge
Instance Provisioned IOPS (SSD): 1000
Instance storage size: 100GB , Database size is about 5-10GB.
It is serving hundreds of simultaneous clients with read-write queries. Yet when we look at CloudWatch monitoring, it shows IOPS in the range of 20-60.
And read IOPS is around 0!
This can't be right with hundreds of connections and clients performing read/write queries all the time, can it?
The Postgres configuration is standard, we did not turn off fsync.
Is the cache so effective that IOPS is not a factor with database size of 5GB?
Or is the AWS monitoring console wrong?
Paying for 1000 IOPS costs an extra $300 for this DB instance.
And minimum IOPS you can buy is 1000.
I am wondering if we can do without provisioned IOPS.
Or is the AWS monitoring not correct?
Or would the 20 IOPS we're seeing now kill the server's performance on a non-PIOPS instance?
Or does the 5GB database mostly fit in cache, making IOPS a non-factor?
@CraigRinger is correct. If your dataset is small enough to fit entirely in memory, you won't need provisioned IOPS, since insert/update traffic and logs are the only things consuming IOPS.
But in case someone finds this topic: here's what it looks like in CloudWatch when you've exhausted your GP2 credits. The Read and Write IOPS charts don't tell you much, but the read/write latency charts show massive spikes.
For context, these charts covered two weeks of a PostgreSQL read replica used for analytics. The switch from 100GB GP2 (300 base IOPS, $11.50/mo) to 100GB io1 (1000 IOPS, $112.50/mo) happened about two-thirds of the way through, after which the latency spikes stopped. The cheaper option would've been to just increase the quantity of GP2 storage. Provisioned IOPS are outrageously overpriced, but the predictable behavior during heavy workloads made sense in this instance.
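To put rough numbers on the "just increase the GP2 storage" option: gp2 baseline performance scales at 3 IOPS per GB, so matching the 1000 IOPS above would take roughly 334 GB of gp2, which at the ~$11.50 per 100 GB rate quoted here works out to about $38/mo, versus $112.50/mo for the io1 volume (prices are illustrative and will vary by region and over time).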
Your DB is almost entirely cached in RAM. (You can confirm this using the pg_buffercache extension.) Those IOPS numbers are entirely to be expected. I would expect this server to be just fine without provisioned IOPS.
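A quick way to check (a sketch, assuming a database named mydb, the default 8 kB block size, and superuser access):
psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS pg_buffercache;"
psql -d mydb -c "SELECT count(*) * 8 / 1024 AS cached_mb FROM pg_buffercache WHERE relfilenode IS NOT NULL;"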
If you restart the instance it'll be slow for a little while as it builds the cache back up, but 5GB isn't much for that. Also, having provisioned IOPS actually makes this worse, because as well as setting a minimum I/O rate, PIOPS sets the maximum too; it's a target rate, not a floor.
By contrast, regular volumes can burst to much higher read rates than PIOPS volumes, so they'll perform better while you're warming the cache back up after a restart.
BTW:
Restarting the database won't slow it much, as it only has to read data from the OS's disk cache back into shared_buffers. It's only if you restart the whole machine that you'll see a slowdown for a while. If you want to simulate this without a restart, you can use Linux's drop_caches feature:
echo 1 | sudo tee -a /proc/sys/vm/drop_caches
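# 1 drops the (clean) page cache; 3 also drops dentry and inode caches. Run sync first if you want dirty pages flushed before they are dropped.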
This is actually worse than the situation after a restart because it evicts binaries and libraries from memory too. The system will chug very heavily at first, as it reads the very frequently accessed binaries and libraries it's executing back into RAM. Then you'll start to see cache recovery behaviour like you would after a restart.
Also, you have too many connections configured. Install pgbouncer, put it in front of the database, and reduce your max_connections. You'll get better performance.
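A minimal sketch of that setup (the package name, paths and pool sizes are assumptions; adjust the auth settings for your environment):
sudo apt-get install -y pgbouncer
sudo tee /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
EOF
Then point the application at port 6432 and lower max_connections in postgresql.conf.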

MongoDB replication and EBS vs ephemeral

I've read all of the MongoDB related documentation talking about the recommended practices for deploying Mongo on AWS, but I don't understand the recommendation to install on EBS with RAID-10 (pdf) to avoid data loss.
This seems like admitting that replication doesn't work. Why shouldn't one run Mongo using ephemeral drives and a cluster of 5 servers doing replication?
Performance is much greater and latency is predictable on local disks.
If a server goes down, the EBS backed store would have to be resynced with the replica anyway. Sure you have the data, but it is already out of date.
Using EBS makes for a much more complicated setup. You need to use LVM or some other layer if you want to take snapshots, since EBS snapshots won't work across RAID. You need to monitor and manage your RAID array and rebuild in the case of failure or if one of the EBS volumes has performance issues.
What exactly does using EBS protect against if one has backups and a large replica set? It's almost admitting that replica sets won't protect you against data loss (ignoring for the moment the race condition where writes have been sent to secondaries and a failure on the master happens before acknowledgements have been sent).
Why shouldn't one run Mongo using ephemeral drives and a cluster of 5 servers doing replication?
AWS is not perfect; it can have a network failure that takes the entire set down, and with ephemeral storage you would lose all your data. Plus, block devices survive node restarts.
Those are a few things; I am sure there are more.
If a server goes down, the EBS backed store would have to be resynced with the replica anyway.
Only from the point it went down; if that is a considerable amount of time then yes, it might be easier to copy the data directory from one replica to the other.
Using EBS makes for a much more complicated setup. You need to use LVM or some other layer if you want to take snapshots, since EBS snapshots won't work across RAID.
You don't really need RAID within AWS itself; Amazon already replicates each of your block devices behind the scenes, and replica set members are effectively throwaway. You can get by with one block device per node.
What exactly does using EBS protect against if one has backups and a large replica set?
It safeguards your sanity: restoring a backup of sizeable data across 10-odd members and redoing all the firewall rules, user permissions, OS setup, etc. could be... well... nasty.
Imagine having to set up your OS again every single time you restarted it.
It's almost admitting that replica sets won't protect you against dataloss.
Hmm, you must have misread somewhere, because that is not what they guarantee. It is true that it is harder to lose data with replica sets (if they are set up right), but they are actually designed to give high availability (HA).
Backups, journalling and other consistency mechanisms are what is designed to prevent data loss.
So where do you see the recommendation to run RAID-10 on EBS for MongoDB? Their docs list it as an option but specifically recommend just EBS with Provisioned IOPS:
For almost all deployments EBS will be the better choice. For production systems we recommend using
EBS-optimized EC2 instances
Provisioned IOPS (PIOPS) EBS volumes
http://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
We run all of our mongodb instances at EC2 and all of them use EBS storage volumes with production instances using provisioned IO. Here's why:
Bringing back a failed member is faster. If an instance fails and needs to be stopped and restarted (not that frequent but it does happen) we can just detach the storage and re-attach it to another instance. Mongod comes up fine, recovers via the journal and then re-syncs with the primary for only the delta in data since the failure. This is a big deal when you have large data sets that may take many hours to restore from scratch. Storing the data on an ephemeral drive does not provide this capability.
Backups are easier (at least for replica sets under 1 TB). With a single EBS storage volume (up to 1 TB) we can take snapshots of a live secondary. As long as the journal is on the same storage volume the backup will be consistent. No need for a dedicated secondary for backups that has to be brought offline to backup.
No need for RAID except for multiple-TB replica sets or for performance. EBS is already RAIDed behind the scenes for redundancy. We do use RAID when a replica set grows beyond 1 TB in storage, but that's it, and we have not yet hit a point where a single high-IOPS EBS volume does not provide sufficient performance.
Provisioned IOPS give decent control of performance vs. cost. Being able to select EBS storage rated up to 4000 IOPS has allowed us to scale up performance for production systems (at higher cost) while still gaining the benefits of EBS storage. We use regular EBS volumes at lower cost for test systems.
Copying production data off for use in a test environment is much easier for large data sets. Snapshot the volumes, create a new storage volume from the snapshot and you're up and running.
I certainly can imagine future deployments using ephemeral storage (particularly as SSD costs drop) for certain high performance situations but EBS has been fairly reliable and dependable for us. Of course your experience and needs can and will differ but for us following the recommendation from MongoDB has served us well. In fact it's been reliable enough that for some environments we've moved to 1 Primary, 1 Secondary and an Arbiter, which helps with cost savings. Probably would not have done that without the ease of recovery and overall reliability of using EBS volumes on the Primary and Secondary.
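For reference, the snapshot-and-copy workflow described above is only a few CLI calls (all IDs, the device name and the availability zone are placeholders):
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "secondary backup"   # run against a secondary's volume, with the journal on the same volume
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a    # new volume for a test or replacement instance
aws ec2 attach-volume --volume-id vol-0aaaabbbbcccc1111 --instance-id i-0123456789abcdef0 --device /dev/sdf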

Does MongoDB replication split data or duplicate it?

I am creating a MongoDB/Node.js-based CMS and I am using GridFS to store all the uploaded docs. The question I have is this:
Do MongoDB replica sets allow an increased amount of DB storage, or simply duplicate the database? For instance, if I have 5 servers with 1TB of storage each and I replicate Mongo across all of them, would my GridFS system theoretically have 5TB of storage (minus caching and padding), or 1TB of storage duplicated several times for better read performance?
Thanks!
Informal description:
Replication = The same copy of the data on multiple nodes, i.e., 5 nodes with 1TB each provide 1TB overall.
Sharding / Partitioning = Fraction of the data goes to the nodes, i.e., 5 nodes with 1TB each provide 5TB overall.
Each approach has certain advantages and disadvantages; e.g., replication can help with read throughput and is good for backup, but slows down inserts (depending on the commit level), whereas partitioning can help with insert throughput and distributed lookups.
Again, the details are left to the storage system implementor.
Sharding means splitting your data across multiple nodes; this is useful when you have a huge amount of data.
Replication means copying the data from one node to another, and it's useful when your application is read-heavy or you want to back up your data, for example.
Resources:
http://www.mongodb.org/display/DOCS/Sharding
http://www.mongodb.org/display/DOCS/Replication
http://nosql-exp.blogspot.com/2010/09/mongodb-sharding-and-replication-with.html
Do MongoDB replica sets allow an increased amount of DB storage, or simply duplicate the database?
Mongo can do both.
The first case is called sharding.
The second case is called replication.
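As a rough illustration of the two modes (hostnames, database and collection names are placeholders; GridFS chunks are commonly sharded on files_id plus n):
mongo --host node1 --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "node1:27017"}, {_id: 1, host: "node2:27017"}, {_id: 2, host: "node3:27017"}]})'   # replication: every member holds a full copy, so 5 x 1TB still gives 1TB usable
mongo --host mongos1 --eval 'sh.enableSharding("cms"); sh.shardCollection("cms.fs.chunks", {files_id: 1, n: 1})'   # sharding: data is partitioned, so storage adds up across shards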