I am an avid user of Amazon AWS, but I am not sure how RDS compares to Google's Cloud SQL. On the Cloud SQL site it is mentioned that a Per Use billing plan exists.
How is that calculated? It says you are 'charged for periods of continuous use, rounded up to the nearest hour'.
How does that work in practice? If there are no visitors to my site, are there no charges? And what if I have 100 continuous users for 30 days: will I still be billed $0.025 per hour (excluding the network usage charges)?
How do I upload my existing SQL database to the Google Cloud service? Is it done the same way as on Amazon, using Oracle Workbench?
Thank you
With per-use billing, if your database isn't accessed for 15 minutes it is taken offline and you are charged only for data storage ($0.24 per GB per month). It is brought back online the next time it is accessed, which typically takes around a second for a D1 instance. The number of users doesn't affect the charge: you are charged for the database instance, not per user.
More details here
https://developers.google.com/cloud-sql/faq#how_usage_calculated
More information on importing data here:
https://developers.google.com/cloud-sql/docs/import-export
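For a rough feel for what this means in dollars, here is a minimal sketch in Python, assuming the $0.025/hour D1 rate mentioned in the question and the $0.24/GB-month storage rate above (the rounding behaviour is simplified; check the current pricing page):

```python
# Rough estimator for Cloud SQL per-use billing, using the figures from this
# thread: $0.025/hour for a D1 instance and $0.24 per GB per month for storage.
# These rates and the rounding behaviour are assumptions, not official numbers.
import math

def estimate_per_use_bill(active_hours_per_day, days, storage_gb,
                          hourly_rate=0.025, storage_rate_per_gb_month=0.24):
    # Usage is billed only while the instance is active, rounded up to the hour.
    usage_cost = math.ceil(active_hours_per_day) * days * hourly_rate
    # Storage is billed whether or not the instance is running.
    storage_cost = storage_gb * storage_rate_per_gb_month * (days / 30.0)
    return usage_cost + storage_cost

# A site with visitors around the clock keeps the instance active 24 h/day:
print(estimate_per_use_bill(24, 30, storage_gb=5))  # 18.0 usage + 1.2 storage = 19.2 USD
# A site that is idle and only wakes the instance for ~1 h/day:
print(estimate_per_use_bill(1, 30, storage_gb=5))   # 0.75 usage + 1.2 storage = 1.95 USD
```

So with 100 continuous users for 30 days the instance never goes offline and you pay the hourly rate around the clock; with no visitors at all you pay only for storage.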
For Google Cloud SQL, we need to distinguish between first-generation and second-generation MySQL instances. The FAQ link above (from Joe Faith's answer), https://developers.google.com/cloud-sql/faq#how_usage_calculated, describes a first-generation instance with the ON_DEMAND activation policy, meaning that you are charged per minute of usage.
With a second-generation MySQL instance (as answered by Se Song), however, you are charged for every minute (24 hours per day) regardless of whether you have active connections, because the instance runs with the ALWAYS activation policy. You can read the pricing details here: https://cloud.google.com/sql/pricing/#2nd-gen-pricing
You can manually stop and restart your database instance, so it is possible to write a script that activates it only under particular circumstances (see the sketch below), but this is not provided as a built-in GCP feature.
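For example, a minimal sketch of such a script, assuming the gcloud CLI is installed and authenticated; the instance name is a placeholder:

```python
# Start (ALWAYS) or stop (NEVER) a Cloud SQL instance by patching its
# activation policy through the gcloud CLI.
import subprocess

def set_activation_policy(instance: str, policy: str) -> None:
    """policy is 'ALWAYS' to run the instance or 'NEVER' to shut it down."""
    subprocess.run(
        ["gcloud", "sql", "instances", "patch", instance,
         f"--activation-policy={policy}"],
        check=True,
    )

# e.g. stop the instance outside business hours so you are not billed 24 h/day:
set_activation_policy("my-instance", "NEVER")   # placeholder instance name
set_activation_policy("my-instance", "ALWAYS")  # bring it back up when needed
```

You would schedule something like this (cron, Cloud Scheduler, etc.) around the hours when the database is actually needed.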
Watch the default settings carefully or you risk $350/month fees. Here's my experience: https://medium.com/#the-bumbling-developer/can-you-use-google-cloud-platform-gcp-cheaply-and-safely-86284e04b332
Is AWS RDS billing based purely on RAM/IO and storage, or are there additional per-database charges?
For my RDS deployment: if I have one PostgreSQL DB that holds all my data but only receives 2,000 queries per day, versus four PostgreSQL DBs that contain the same relations split across them and collectively receive the same 2,000 queries per day, will the bills for the two setups be essentially the same? The assumption is that the "size" of the data in the 1-DB and 4-DB setups is exactly the same.
I want to split the data across multiple databases to make reporting for different modules in my system easier.
You are billed based on instance size and some additional criteria (disk size, outbound traffic, etc.). If these are the same, the number of databases doesn't matter, so you can split your application across multiple databases within an instance without affecting the bill.
In the future, this kind of question is better suited to Server Fault than to Stack Overflow.
AWS RDS charges are based on instance size, data transfer, backups, storage, etc.
In your case, if you are going to keep the instance size the same, it is better to have only one instance, since the instance cost outweighs the data transfer and storage costs.
It makes no sense to have four instances of the same size, as the base bill will be four times as large. If you use a smaller instance size it may make some difference; see the rough comparison below.
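As a rough illustration (the per-hour figures below are made-up ballpark numbers for two common instance classes, not quoted AWS rates):

```python
# Compare the monthly instance cost of one larger instance vs. four instances.
# Prices are illustrative placeholders; look up real rates on the RDS pricing page.
HOURS_PER_MONTH = 730
ASSUMED_PRICE_PER_HOUR = {      # hypothetical USD/hour figures
    "db.m5.large": 0.178,
    "db.t3.medium": 0.072,
}

def monthly_instance_cost(instance_type: str, count: int = 1) -> float:
    return ASSUMED_PRICE_PER_HOUR[instance_type] * HOURS_PER_MONTH * count

print(monthly_instance_cost("db.m5.large", 1))   # ~130 USD/month: one larger instance
print(monthly_instance_cost("db.m5.large", 4))   # ~520 USD/month: same size x4 = 4x the base bill
print(monthly_instance_cost("db.t3.medium", 4))  # ~210 USD/month: four smaller instances narrow the gap
```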
Please refer to the links below:
https://aws.amazon.com/rds/postgresql/pricing/
https://calculator.aws/#/
With these you can estimate how much you will be billed for instances based on your usage and instance size.
You can also choose options such as Reserved Instances to reduce the bill.
Since there will be only one instance, I think the charges will be the same, as long as the parameters it charges on are the same.
I want to use the free TPU in Google Colab with a custom dataset, which is why I need to upload it to GCS. I created a bucket in GCS and uploaded the dataset.
I also read that there are two classes of operations on data in GCS: class A operations and class B operations [reference].
My questions are: does accessing the dataset from GCS in Google Colab fall into one of these operation classes? And what is the average price you pay for using GCS with a Colab TPU?
Yes, accessing the objects (files) in your GCS bucket may result in charges to your billing account, but there are some other factors you need to consider. Let me explain (sorry in advance for the very long answer):
Google Cloud Platform services use APIs behind the scenes to perform actions such as showing, creating, deleting or editing resources.
Cloud Storage is no exception. As mentioned in the Cloud Storage docs, operations fall into two groups: the ones performed through the JSON API and the ones performed through the XML API.
Operations performed through the Cloud Console or the client libraries (the ones used to interact via code in languages like Python, Java, PHP, etc.) are charged as JSON API operations by default, so let's focus on those.
Pay attention to the names of the methods listed under each operations column in the pricing table:
The structure can be read as follows:
service.resource.action
Since all of these methods belong to the Cloud Storage service, it is normal to see storage as the service in all of them.
In the Class B operations column, the first method is storage.*.get. There is no other get method in the other columns, which means that retrieving information about a bucket (reading its metadata) or about the objects inside it (reading a file via code, downloading files, etc.) counts towards this method.
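For example, reading your dataset from Colab with the google-cloud-storage Python client issues storage.objects.get calls, i.e. one Class B operation per object read; the bucket and object names below are placeholders:

```python
# Each object read from the bucket is billed as one Class B operation,
# plus network egress for the bytes transferred out of GCP.
from google.cloud import storage

client = storage.Client()                        # uses the credentials available in the Colab session
bucket = client.bucket("my-dataset-bucket")      # placeholder bucket name
blob = bucket.blob("datasets/train.tfrecord")    # placeholder object name

data = blob.download_as_bytes()                  # storage.objects.get -> Class B operation
print(f"downloaded {len(data)} bytes")
```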
Before talking about how to calculate costs, let me add that Google Cloud Storage charges you not only for the operation itself but also for the size of the data traveling over the network. Here are the two most common scenarios:
You are interacting with the files from another GCP service. Since this traffic uses the internal GCP network, the charges are not that big. If you go this route, I recommend using resources (App Engine, Compute Engine, Kubernetes Engine, etc.) in the same location to avoid additional charges. Please check the network egress charges within GCP.
You are interacting from an environment outside GCP. This is your scenario with Google Colab (even though it is a Google service, it sits outside the Cloud Platform). Please see the general network usage pricing for Cloud Storage.
Now, let's talk about storage classes, which also affect an object's availability and pricing. Depending on the class and location of the bucket, you are charged for the amount of data stored, as mentioned in the docs.
Even though the Nearline, Coldline and Archive classes are the cheapest in terms of storage, they charge extra for retrieving data, because they are meant for data that is accessed infrequently.
I think we have covered everything and can now move on to the important question: how much will all of this cost? It depends on the size of your files, how often you interact with them, and the storage class of your bucket.
Let's say you have one Standard bucket in North America holding your 20 GB dataset, and you read it from Google Colab 10 times a day. We can calculate the following:
Standard storage: $0.020 per GB per month → $0.020 * 20 GB = $0.40 USD per month.
Class B operations (Standard class): $0.004 per 10,000 operations → each operation costs $0.0000004 USD, so 10 reads per day cost $0.000004 USD.
Egress to worldwide destinations (excluding Asia & Australia): $0.12 per GB → $0.12 * 20 GB (the size of the dataset) = $2.40 USD per read, and 10 reads per day = $24 USD per day.
So in this example egress dominates: roughly $24 USD per day for reads, plus $0.40 USD per month for storage and a negligible amount for operations. Another example can be found in the Pricing overview section.
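Here is the same arithmetic as a small Python sketch, extended with the Always Free allowances discussed below; all rates are the illustrative figures from this answer, so check the current pricing page before relying on them:

```python
# Monthly GCS cost estimate for a Colab workflow: storage + Class B reads + egress,
# with the Always Free allowances subtracted first. Rates are illustrative.
STORAGE_PER_GB_MONTH = 0.020       # Standard storage, USD
CLASS_B_PER_OP = 0.004 / 10_000    # USD per Class B operation
EGRESS_PER_GB = 0.12               # worldwide egress excl. Asia & Australia, USD

FREE_STORAGE_GB = 5                # Always Free allowances (assumed from the docs)
FREE_CLASS_B_OPS = 50_000
FREE_EGRESS_GB = 1

def monthly_cost(dataset_gb, reads_per_day, days=30):
    reads = reads_per_day * days
    storage = max(0, dataset_gb - FREE_STORAGE_GB) * STORAGE_PER_GB_MONTH
    ops = max(0, reads - FREE_CLASS_B_OPS) * CLASS_B_PER_OP
    egress = max(0, dataset_gb * reads - FREE_EGRESS_GB) * EGRESS_PER_GB
    return storage + ops + egress

# 20 GB dataset read 10 times a day: egress dominates, roughly 720 USD/month.
print(round(monthly_cost(20, 10), 2))
```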
And finally the good news: Google Cloud Storage offers Always Free usage limits that reset every month (see the table in that link).
This means that if, over a whole month, you store less than 5 GB in a Standard class bucket, perform fewer than 50,000 Class B operations and fewer than 5,000 Class A operations, and send less than 1 GB over the network, you won't pay a thing.
Once you pass those limits, the charges apply only to the excess, i.e. if you have a 15 GB dataset, you will only be charged for 10 GB of storage.
I have a MongoDB Atlas cluster that serves many customers. Each customer has its own database on the cluster.
I would like to reduce my application's impact on MongoDB data transfer costs, which have been increasing for the last few days, but the billing info provided by Atlas does not break down prices per database. Therefore, I have no way of knowing which customers are costly and what are the most costly queries in terms of data transfer.
Moreover, looking at the daily prices and a few queries, I cannot correlate the insertion of resources in my application with the costs. For example, say my resources are Cats: one day costs $5 of data transfer with 5,000 Cats inserted across the databases, but the next day costs $13 with only 1,500 Cats inserted.
Do you know of tools, or something in the Atlas dashboard I might have missed, that could help me better track costs per customer, or a cost per Cat (in my example), so that I can build a pricing model for my customers?
Thank you
You are most likely going to need separate projects and deployments.
A MongoDB client instance is generally capable of using any database on the server (subject to authorization rules and APIs provided in the language in question), therefore to get a breakdown of data transfer by database would require the server to track bytes transferred per operation and then aggregate those counts. As far as I know this isn't a feature that currently exists.
The most practical way of tracking this today is probably to write a layer on top of the driver on the client side that looks at the data actually received, as sketched below.
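For example, a rough sketch using PyMongo's command-monitoring hooks; the reply size is approximated by re-encoding each server reply to BSON, so this is an estimate rather than exact wire bytes, and the connection string and database name are placeholders:

```python
# Accumulate approximate bytes received per database by listening to command events.
from collections import defaultdict

import bson
from pymongo import MongoClient, monitoring

class BytesPerDatabase(monitoring.CommandListener):
    def __init__(self):
        self.bytes_in = defaultdict(int)
        self._db_by_request = {}

    def started(self, event):
        # Remember which database each request targeted.
        self._db_by_request[event.request_id] = event.database_name

    def succeeded(self, event):
        db = self._db_by_request.pop(event.request_id, "unknown")
        # Approximate the reply size by encoding the reply document back to BSON.
        self.bytes_in[db] += len(bson.encode(event.reply))

    def failed(self, event):
        self._db_by_request.pop(event.request_id, None)

listener = BytesPerDatabase()
client = MongoClient("mongodb+srv://user:pass@cluster.example.net/",  # placeholder URI
                     event_listeners=[listener])

client["customer_a"].cats.find_one()   # hypothetical per-customer database
print(dict(listener.bytes_in))         # e.g. {'customer_a': 312}
```

This won't match Atlas's transfer billing exactly, but it is usually enough to see which customers and queries dominate.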
I have a postgres database running locally that I'd like to migrate to AWS Aurora (or AWS postgres).
I've pg_dump'd the database I want to migrate, and it's ~30 GB compressed.
How should I upload this file and get the AWS RDS instance to pg_restore from it?
Requirements:
There's no one else using the DB, so we're OK with a lot of downtime and an exclusive lock on it. We want the migration to be as cheap as possible.
What I've tried/looked at so far:
Running pg_restore on the local file with the remote target - unknown pricing total
I'd also like to do this as cheaply as possible, and I'm not sure I understand their pricing strategy.
Their pricing says:
Storage Rate $0.10 per GB-month
I/O Rate $0.20 per 1 million requests
Replicated Write I/Os $0.20 per million replicated write I/Os
Would pg_restore count as one request? The database has about 2.2 billion entries, and if each one is 1 request does that come out to $440 to just recreate the database?
AWS Database Migration Service - it looks like this would be the cheapest option (as it's free?), but it only works by connecting to the local database. Uncompressed, the data is about 200 GB, and I'm not sure it makes sense to do a one-for-one copy using DMS.
I've read this article but I'm still not clear on the best way of doing the migration.
We're ok with this taking a while, we'd just like to do it as cheap as possible.
Thanks in advance!
There are some points you should note when migrating:
AWS Database Migration Service - it looks like this would be the cheapest (as it's free?)
The service they provide for free is a virtual machine (with the necessary software included) that provides the computing power and functionality to move databases into their RDS services.
Even though that service is free, you will be charged the normal fees for any RDS usage.
The I/O numbers they quote relate roughly to EBS (the underlying disks) used to serve your data. A single big, complex query can consume many I/Os; queries and I/O requests are not equal to each other.
An estimation of EBS usage can be seen here:
As an example, a medium sized website database might be 100 GB in size and expect to average 100 I/Os per second over the course of a month. This would translate to $10 per month in storage costs (100 GB x $0.10/month), and approximately $26 per month in request costs (~2.6 million seconds/month x 100 I/O per second * $0.10 per million I/O).
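A quick sanity check of that arithmetic (the rates are the illustrative figures quoted above; actual Aurora/EBS pricing varies by region):

```python
# Reproduce the quoted EBS example and the worst-case estimate from the question.
def storage_cost(gb, rate_per_gb_month=0.10):
    return gb * rate_per_gb_month

def io_request_cost(requests, rate_per_million=0.20):
    return requests / 1_000_000 * rate_per_million

# Quoted EBS example: 100 GB, 100 I/O per second over ~2.6 million seconds in a month.
print(storage_cost(100))                               # 10.0 USD storage
print(io_request_cost(2_600_000 * 100, 0.10))          # ~26 USD in I/O requests

# Worst case from the question: one I/O request per row for 2.2 billion rows.
print(io_request_cost(2_200_000_000, 0.20))            # 440.0 USD
# In practice pg_restore loads data in bulk, so the real I/O count is normally
# far lower than one request per row.
```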
My personal advice: make a clone of your DB with only part of the data set (5%, maybe) and run DMS on that piece. You can see how the bills work out within a few minutes, and then estimate the price of a full DB migration.
I have a MongoDB instance provisioned on the Azure cloud as IaaS. There is a load balancer behind which sits a shard cluster, and each shard has 2 replicas. Each replica is a VM, so I can go inside that VM and check the storage space, RAM, and other hardware details.
I also have Cosmos DB provisioned, which is a managed service, so I have no control over (or visibility into) what it uses under the hood. For example, I would not know how much RAM or storage space is used.
So if I have to compare the performance of MongoDB and Cosmos DB on the Azure cloud, I am not sure how to compare apples to apples without exact information about the underlying hardware.
Can someone suggest a way to compare the performance of the two?
Why not compare on price?
Take the direct Azure charges for your IaaS MongoDB and allocate the same budget to purchasing an allowance of Cosmos DB request units. This gives a very basic comparison (see the sketch after the list below).
Next, fine-tune your comparison to genuinely reflect some advantages of PaaS Cosmos DB:
Assume you could dial down allocated RU by 30% for 10 hours per day.
Enable the new add-on provisioning for request units per minute. 20% cost savings have been cited by Microsoft when this feature is enabled.
Finally, add 10% of the salary of a database administrator to your total IaaS cost.
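A minimal sketch of the budget-based comparison; the request-unit rate below is an assumed illustrative figure, so check current Cosmos DB pricing, and the IaaS and salary numbers are placeholders:

```python
# Convert a monthly budget (IaaS spend + a slice of DBA time) into the RU/s
# you could provision 24x7 at an assumed Cosmos DB rate.
HOURS_PER_MONTH = 730
RATE_PER_100_RUS_PER_HOUR = 0.008   # assumed provisioned-throughput price, USD

def affordable_ru_per_s(monthly_budget_usd: float) -> float:
    """RU/s that the budget buys if provisioned continuously for a month."""
    cost_per_100_rus_per_month = RATE_PER_100_RUS_PER_HOUR * HOURS_PER_MONTH  # ~5.84 USD
    return monthly_budget_usd / cost_per_100_rus_per_month * 100

# Example: $800/month of IaaS MongoDB spend plus 10% of an $8,000/month DBA salary.
budget = 800 + 0.10 * 8000
print(round(affordable_ru_per_s(budget)))   # ~27,400 RU/s for the same budget
```

From there you can apply the dial-down and RU/minute savings above as percentage adjustments to the budget.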