MongoDB in cloud hosting: benefits

I'm still fighting with MongoDB, and I don't think this war will end any time soon.
My database has a size of 15.95 GB;
Objects - 9,963,099;
Data Size - 4.65 GB;
Storage Size - 7.21 GB;
Extents - 269;
Indexes - 19;
Index Size - 1.68 GB;
Powered by:
Quad-core Xeon E3-1220, 4 × 3.10 GHz / 8 GB RAM
A dedicated server is too expensive for me.
On a VPS with 6 GB of memory, the database won't even import.
Should I migrate to a cloud service?
https://www.dotcloud.com/pricing.html
I tried to pick a suitable plan, but their MongoDB service tops out at 4 GB of memory (USD 552.96/month o_0); I can't even import my database because there isn't enough memory.
Or is there something I don't know about cloud services (I have no experience with them)?
Are cloud services simply not an option for a large MongoDB database?
2 × Xeon 3.60 GHz, 2M Cache, 800 MHz FSB / 12 GB
http://support.dell.com/support/edocs/systems/pe1850/en/UG/p1295aa.htm
Will my database work on that server?
Of course this is all fun and good development experience, but it's starting to pall... =]

You shouldn't have an issue with a DB of this size. We were running a MongoDB instance on Dotcloud with hundreds of GB of data. It may just be that Dotcloud only allows 10 GB of HDD space by default per service.
We were able to back up and restore that instance on 4 GB of RAM, although it took several hours.
I would suggest you email them directly at support#dotcloud.com to get help increasing the HDD allocation of your instance.

You can also consider ObjectRocket, which offers MongoDB as a service. For a 20 GB database the price is $149 per month - http://www.objectrocket.com/pricing

Related

MongoDB on the cloud

I'm preparing my production environment on the Hetzner cloud, but I have some doubts (I'm more a developer than a devops).
I will get 3 servers for the replica set, each with 8 cores, 32 GB RAM and a 240 GB SSD. I'm a bit worried about the size of the SSD the servers come with, and Hetzner offers the possibility to create volumes that can be attached to the servers. Since MongoDB uses a single folder for its data, I was wondering how I can use the 240 GB that comes with the server in combination with external volumes. At the beginning I can use the 240 GB, but I will have to move the data folder to a volume when it reaches capacity. I'm fine with this, but it looks to me that once I move to volumes, the 240 GB will no longer be used (although I could use it to store the MongoDB journal, which they suggest keeping on a separate partition).
So, my noob question is: how can I use both the disk that comes with the server and the external volumes?
Thank you
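For reference, moving the data folder to an attached volume mostly comes down to stopping mongod, moving the directory, and updating storage.dbPath in mongod.conf. A minimal sketch of that step, with hypothetical paths (a real migration would also preserve file ownership and restart mongod afterwards):

```python
# Sketch: relocate MongoDB's data directory to an attached volume.
# The paths are hypothetical; assumes mongod is already stopped and
# the volume is mounted (e.g. at /mnt/hc-volume on Hetzner).
import shutil
from pathlib import Path

def relocate_dbpath(old_dbpath: Path, new_dbpath: Path, conf_text: str) -> str:
    """Move the data files and return the updated mongod.conf text."""
    new_dbpath.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(old_dbpath), str(new_dbpath))
    # Point storage.dbPath at the new location.
    return conf_text.replace(f"dbPath: {old_dbpath}", f"dbPath: {new_dbpath}")
```

The freed space on the built-in SSD can then hold the journal or backups, as the question itself suggests.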

Cassandra + Redis Project Server setup

I currently have a dedicated VPS with 4 GB RAM and a 50 GB hard disk, running a SaaS solution with more than 1500 customers. I'm now going to upgrade the project's business plan: there will be 25,000 customers, with about 500-1000 of them using the product in real time. At the moment it takes 5 seconds to fetch Cassandra records from the server to the application. Then I came across Redis, and it's said that keeping a copy in Redis helps fetch the data much faster and lowers server overhead.
Am I right about this?
If I need to improve the overall performance, can anybody tell me what I need to upgrade?
Can a server with the configuration above handle Cassandra and Redis together?
Thanks in advance.
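The approach described in the question is usually called cache-aside: read from Redis first, fall back to Cassandra on a miss, and populate the cache so later reads skip the slow fetch. A minimal sketch of the idea, with a plain dict standing in for the Redis client so the snippet is self-contained (the backend callable and names are made up):

```python
import time

class CacheAsideStore:
    """Cache-aside reads: try the cache first, fall back to the
    slow backing store (Cassandra in the question) on a miss."""

    def __init__(self, backend_fetch, ttl_seconds=300):
        self._fetch = backend_fetch   # e.g. a Cassandra query function
        self._cache = {}              # stand-in for a real Redis client
        self._ttl = ttl_seconds

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            value, expires = entry
            if time.monotonic() < expires:
                return value          # cache hit: no round trip to Cassandra
        value = self._fetch(key)      # cache miss: hit the backing store
        self._cache[key] = (value, time.monotonic() + self._ttl)
        return value
```

With a real Redis client the dict operations become GET/SETEX calls; the win is that repeated reads of hot keys avoid the 5-second Cassandra fetch entirely.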
A machine with 4 GB of RAM will probably only have a single core, so it's too small for any production workload and only suitable for dev usage where you run 1 or 2 transactions per second, mostly for functional testing.
We generally recommend deploying Cassandra on machines with at least 2 cores and 8 GB allocated to the heap (so at least 16 GB of RAM) for low production loads. For moderate loads, 4 cores + 32 GB RAM is ideal so you can allocate 16 GB to the heap.
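As a quick sanity check, that guidance can be written down as a rule of thumb (the thresholds are just the numbers from this answer, not an official formula):

```python
def cassandra_tier(cores: int, ram_gb: int) -> str:
    """Classify a machine against the rough sizing guidance above
    (thresholds taken from this answer, not an official formula)."""
    if cores >= 4 and ram_gb >= 32:
        return "moderate production load (16 GB heap)"
    if cores >= 2 and ram_gb >= 16:
        return "low production load (8 GB heap)"
    return "dev/functional testing only"
```

By this rule the 4 GB VPS in the question falls into the dev-only tier.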
If you're just at the proof-of-concept stage, there's a tier on DataStax Astra that's free forever and doesn't require a credit card to create an account. I recommend it to most people because you can launch a cluster in a few clicks and you can quickly focus on developing your app. Cheers!

Why did upgrading RDS from Postgres 10.11 to 11.6 cause a storage-full error?

I have an RDS Postgres database on version 9.6.6 and was tasked with upgrading it to version 11.
To do this, I created a snapshot and then provisioned a new 9.6.6 database. Next, I clicked "modify" in the RDS console and upgraded this new instance to version 10.11. Once it was on major version 10, RDS then allowed me to upgrade again to version 11.6.
However, within an hour of completing this process, the new 11.6 database seized up and entered the storage-full status. The following CloudWatch event was produced right before this happened:
The free storage capacity for DB Instance: my_new_db is low at 2% of the provisioned storage [Provisioned Storage: 4.78 GB, Free Storage: 77.81 MB]. You may want to increase the provisioned storage to address this issue.
Both the old and the new database are allotted 5 GiB of disk space. I don't believe the size of my database tables is a factor, since the following query returns 330 MB, which is only 7% of the allotted space.
SELECT pg_size_pretty(pg_database_size('my_old_db'));
Since the size of my database tables clearly isn't the culprit, what could have possibly caused my RDS instance to run out of disk space?
Specs:
Instance class: db.t2.medium
RAM: 4 GB
vCPU: 2
Encryption: Enabled
Storage type: General Purpose (SSD)
Storage: 5GiB
I had a similar issue recently when upgrading a 9.6.22 DB instance in-place to 13.3.
The instance had only 5 GB storage, with a DB that was roughly 2.5 GB, and disk space decreased rapidly after upgrade to about 1 GB (within an hour or so), then stabilised. There was no disk-full error but storage was about 80% used.
I went through the RDS 'disk full' troubleshooting page which didn't help as the extra storage was not within the Postgres database cluster (i.e. set of DBs in this instance).
The mitigation was to resize this into a DB instance with over 20 GB storage.
I believe both issues may be down to the increased minimum size of storage for RDS instances - at some point, AWS must have increased this from 5 GB to 20 GB. See this section of AWS docs and the AWS Console for RDS when creating a new instance.
I would say there is an RDS issue here, as the console or API should tell you that you need to increase the size of storage before upgrading - similarly to how it may tell you that current instance type is not supported for upgrade.

Cassandra and MongoDB minimum system requirements for Windows 10 Pro

RAM: 4 GB
Processor: i3-5010U CPU @ 2.10 GHz
64-bit OS
Can Cassandra and MongoDB be installed on such a laptop? Will they run successfully?
The proposed hardware configuration does not meet the minimum requirements. For Cassandra, the documentation asks for a minimum of 8 GB of RAM and at least 2 cores.
MongoDB's documentation likewise states that it needs at least 2 real cores or one multi-core physical CPU. With 4 GB of RAM, WiredTiger will allocate 1.5 GB for its cache. Please also note that on NUMA (Non-Uniform Memory Access) hardware, MongoDB may require BIOS changes to enable memory interleaving; such changes will impact the performance of the laptop for other processes.
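The 1.5 GB figure follows from WiredTiger's default internal cache size, which the MongoDB docs give as the larger of 50% of (RAM minus 1 GB) and 256 MB:

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    """Default WiredTiger internal cache size per the MongoDB docs:
    max(50% of (RAM - 1 GB), 256 MB)."""
    return max(0.5 * (ram_gb - 1.0), 0.25)
```

So a 4 GB laptop yields 0.5 × (4 − 1) = 1.5 GB of cache, leaving relatively little memory for the OS and any other database running alongside.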
Will it run successfully?
That will depend on the expected workload; there are documented examples where Cassandra was installed on a Raspberry Pi array, which by design was expected to have slow performance and hold a limited amount of data in the cluster.
If you are looking for a small sandbox to start using these databases, there are other options. MongoDB has a service named Atlas, a database-as-a-service offering with a free tier that provides a three-node replica set and up to 512 MB of storage. For Cassandra there are similar options: AWS offers a small cluster of their Managed Cassandra Service (MCS) in the free tier, and DataStax is also planning to offer similar services with Constellation.

MongoDB Response Slower on Amazon EC2 Server Than Localhost

I'm loading the same amount of data (~100kb) on both my local server and a test Amazon EC2 server, but the response is 2x slower on EC2. Both are running Apache 2 and MongoDB on the same machine. On my local server, the response is about 209ms versus approximately 455ms on EC2.
I've setup a simple query and AJAX call that grabs point data to display on the map based on the current viewport of the device.
How can I debug this issue? How can I make it as fast as my local server? I even tried experimenting with different instance types to make sure the specs are the same, but no luck. I also realize it could be because of network latency.
Local computer specs:
Intel Core i5 @ 3.30 GHz
8GB RAM
64-bit Windows 8
Amazon EC2 specs (m4.large):
2.4 GHz Intel Xeon Haswell (2 vCPUs)
8GB RAM
Amazon Linux
A remote query to EC2 is unlikely to return the result of the AJAX query as fast as your local server because it has network latency, while your local server does not. Measure the time in your AJAX handler from the start of the query to the point where it is ready to return data to get a meaningful baseline for comparison.
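One way to get that baseline is to time the handler server-side and return the measurement alongside the response, so network latency is excluded. A rough sketch (the handler body is a placeholder for the real MongoDB geo query):

```python
import time
from functools import wraps

def timed(handler):
    """Wrap an AJAX handler so it reports its own server-side
    duration, excluding network latency to the client."""
    @wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = handler(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        return {"data": result, "server_ms": elapsed_ms}
    return wrapper

@timed
def fetch_points(viewport):
    # Placeholder for the real MongoDB viewport query.
    time.sleep(0.02)  # simulate a 20 ms database query
    return [{"lat": 40.7, "lng": -74.0}]
```

Comparing server_ms between localhost and EC2 isolates the database/CPU difference from the network round-trip time.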
MongoDB is very sensitive to data being in RAM vs on disk. Depending on how you configured your EC2 instance, and on your local hardware, chances are pretty good that your local hardware is faster. EC2 instances can be configured to use SSD storage, and you can configure a guaranteed IOPS figure.
Is the 100KB the size of the result set, or the amount of data needed to form the result set? If you process 4GB of data down to get a 100KB result set, there's a good chance that disk IO is involved. If the amount of data you need to pull is small, repeat the test a few times to ensure data is entirely in RAM.
Finally, if both local and EC2 are pulling data from RAM, there's a good chance that your local CPU core is just faster than the EC2 CPU core, and that your RAM access is faster as well. EC2 is designed to provide low-cost commodity hardware. Developer setups are often much faster.
If you cannot account for the speed differences given the factors above, update your question with time measurements that exclude network latency and provide more detailed specifications about your hardware. Also indicate whether the data you are retrieving from MongoDB should be entirely in RAM, given its size and the amount of RAM on your instance.