Downtime if increasing memory for Google Cloud SQL - google-cloud-sql

I would like to increase the memory for my Google Cloud SQL instance, but I can't find the expected downtime for that operation. On this page https://cloud.google.com/sql/faq I can find the downtime for storage (zero downtime) and CPU (a few minutes), but nothing for memory.
Can somebody help me understand whether there is any downtime associated with increasing the RAM for the instance?

So I upgraded my RAM, and it seems that the downtime is about 2 minutes, roughly the time it takes to restart the underlying machine.
I hope this helps somebody.
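For anyone scripting the change, here is a minimal sketch using the gcloud CLI, assuming a custom machine type (the instance name is hypothetical). Changing CPU or memory triggers an instance restart, which is where the couple of minutes of downtime come from.
# Hypothetical instance name; this operation restarts the instance.
gcloud sql instances patch my-instance --cpu=4 --memory=16GiB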

Related

GCP CloudSQL (PostgreSQL) Crash During Stored Procedure Execution and Failover

I have a stored procedure in GCP CloudSQL (PostgreSQL v9.0.23). It works fine in lower environments, but when it runs in Production (with significantly more volume), it crashes the DB itself, which results in a failover.
When we checked the metrics, we found that memory usage is above 90% just before it crashes (15 GB out of the 16 GB instance memory). Also, the reads/writes are very high, >1000 ops per second.
The SP does some select and insert statements. Any suggestions to improve this situation would help.
Thanks in advance.
As you have mentioned, the Cloud SQL instance runs smoothly with a small workload but crashes under the much more intensive Production workload, so it seems the issue is the instance size. I would suggest increasing the instance size as per your need.
Also, you mentioned that memory usage is 15 GB out of 16 GB, which is nearly 94%. As per this document, your Cloud SQL instance will not be covered by the Cloud SQL SLA if memory usage stays above 90% for more than 6 hours. So I would suggest keeping memory usage below 90%. I would also suggest keeping CPU utilization within the limits mentioned in this document. To know when your instance reaches any threshold, I would suggest setting a monitoring alert for those metrics as mentioned here.
If increasing your instance size doesn't help, I would recommend creating a support ticket with Google Cloud Support so that they can investigate in detail.
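Before resizing, it can also help to see which statements inside the procedure drive the reads and writes. A rough sketch using pg_stat_statements, assuming the extension is enabled on the instance (the exact column set varies by PostgreSQL version):
# Top statements by blocks read from disk (run via psql against the instance)
psql -c "SELECT query, calls, shared_blks_read, shared_blks_hit FROM pg_stat_statements ORDER BY shared_blks_read DESC LIMIT 5;"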

mongodb limit server memory consumption

I'm testing running mongodb on the kubernetes platform, where I can limit the resources used by the running container.
Say I set the memory limit to 256MB. The problem is that, for example, while making a backup, memory consumption increases to the limit and the container gets restarted by kubernetes.
So the question is: is there a way to limit mongodb memory consumption in my case so that it would not crash by exceeding the memory limit set by the platform?
I could of course increase the limit, but I'm interested in a principled solution and would like to understand this process better, because I don't really know how memory is consumed by mongodb and the container OS. Is it possible to tune mongodb/the underlying linux OS to work inside the existing limits?
The limits that you have set are good enough for a mongodb pod; these are the limits used by the community as well.
The only way I think you can get around this for backups is to increase the memory limits, but it might still fail, because elsewhere on stackoverflow people have experienced OOM kills on VMs with gigabytes of memory. MongoDB basically tries to eat any and every bit of memory that is made available to it.
Also there are other ways to backup mongodb: https://dba.stackexchange.com/questions/76130/how-to-backup-large-mongodb-database
I am not sure how this aligns in the k8s world.
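On the question of tuning mongodb itself to fit inside the limit: the main knob is the WiredTiger cache size, which defaults to roughly half of RAM minus 1GB and can be set much lower. A minimal sketch, assuming MongoDB 3.2+ with the WiredTiger engine; note that 0.25GB is the documented minimum and the mongod process needs headroom beyond the cache, so a 256MB container limit may still be tight:
# Cap the WiredTiger cache at the minimum allowed value (0.25 GB)
mongod --wiredTigerCacheSizeGB 0.25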

What happens when AWS RDS CPU utilisation reaches 100%?

I would like to know what happens when AWS RDS CPU utilisation is at 100%.
Do the database requests fail or are the requests put on hold until the CPU utilisation drops below 100%?
I'm using RDS Postgres. Thanks in advance for your help.
Your query performance will degrade. Further queries will fail.
If your RDS is the sole database instance for your application, your entire application could come to a standstill.
You will need to figure out if the CPU is peaking due to high load or if there is one such query that is consuming all the resources.
If it's under heavy load, adding another replica might help if the workload is read-heavy. If it's write-heavy, you may need to scale up to the next instance size, or think about sharding your datasets.
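To tell whether it's overall load or one runaway statement, here is a quick sketch you can run against RDS Postgres (PostgreSQL 9.2 or later; the endpoint, user, and database names are placeholders):
# Longest-running non-idle statements right now
psql -h my-rds-endpoint -U myuser -d mydb -c "SELECT pid, now() - query_start AS runtime, state, query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY runtime DESC LIMIT 10;"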
This could lead to a lot of issues namely:
There is a good chance that if the CPU remains at 100% consistently, your instance will crash. Now, if this a Multi AZ instance, an automatic failover can reduce the downtime incurred due to any unexpected reboot to around 2min. However, if this is a Single AZ instance, the downtime can be significantly longer.
Your DB instance won't accept any new connections, despite not hitting the 'max_connections' value. There is a good chance that some of the existing transactions will roll back due to performance degradation.
Continuous spikes to 100% CPU may lead to memory pressure, i.e. very high swap usage and low freeable memory, and an eventual instance crash (a CLI sketch for checking these metrics follows this list).
The workload will hit the instance's throughput limits, and read and write IOPS won't go any higher.
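As mentioned above, if you want to watch the memory-pressure side from the CLI rather than the console, here is a hedged sketch using the AWS CLI (the instance identifier and time range are placeholders; swap FreeableMemory for SwapUsage or CPUUtilization as needed):
# Average freeable memory in 5-minute buckets for one day
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS --metric-name FreeableMemory \
  --dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-02T00:00:00Z \
  --period 300 --statistics Average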

Do we need Provisioned IOPS for RDS instance that's using 60 IOPS according to monitoring?

We have PostgreSQL instance serving tens of r/w queries per second.
Instance type: db.m3.2xlarge
Instance Provisioned IOPS (SSD): 1000
Instance storage size: 100GB, database size is about 5-10GB.
It is serving 100s of simultaneous clients with read-write queries. Yet, when we look at Cloudwatch Monitoring it shows IOPS in range of 20-60.
And Read IOPS is around 0!
This can't be right with 100s of connections and clients performing read/write queries all the time?
The Postgres configuration is standard, we did not turn off fsync.
Is the cache so effective that IOPS is not a factor with database size of 5GB?
Or is the AWS monitoring console wrong?
Paying for 1000 IOPS costs an extra $300 for this db instance.
And minimum IOPS you can buy is 1000.
I am wondering if we can do without provisioned IOPS?
Or is AWS monitoring not correct?
Or will the 20 IOPS we're seeing now kill the server performance if we move to a non-provisioned-IOPS server?
Or, with a 5GB database, does it mostly fit in cache so that IOPS is not a factor?
@CraigRinger is correct. If your dataset is small enough to fit entirely in memory, you won't need provisioned IOPS, since insert/update traffic and logs are the only things consuming IOPS.
But in case someone finds this topic, here's what CloudWatch looks like when you've exhausted your GP2 credits. As you can see, the Read and Write IOPS charts don't tell us much, but the read/write latency charts show massive spikes.
For context, these are 2 weeks of a PostgreSQL read replica used for analytics. The switch from 100GB GP2 (300 base IOPS, $11.50/mo) to 100GB io1 (1000 IOPS, $112.50/mo) happens about two-thirds of the way through these charts (no more latency spikes). The cheaper option would've been to just increase the quantity of GP2 storage. Provisioned IOPS are outrageously overpriced, but the predictable behavior during heavy workloads made sense in this instance.
Your DB is almost entirely cached in RAM. (You can confirm this with the pg_buffercache extension.) Those IOPS numbers are entirely to be expected. I would expect this server to be just fine without provisioned IOPS.
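For reference, a rough sketch of that check with pg_buffercache (assuming the default 8kB block size and permission to create the extension; this only shows what is sitting in shared_buffers, and the OS page cache holds more on top):
# How much of shared_buffers currently holds table/index data, in MB
psql -c "CREATE EXTENSION IF NOT EXISTS pg_buffercache;"
psql -c "SELECT count(*) * 8 / 1024 AS cached_mb FROM pg_buffercache WHERE relfilenode IS NOT NULL;"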
If you restart the instance it'll be slow for a little while as it builds the cache back up, but 5GB isn't much for that. Also, having provisioned IOPS actually makes this worse, because as well as setting a minimum I/O rate, PIOPS sets the maximum too. It's a target rate, not a minimum.
By contrast, regular volumes can burst to much higher read rates than PIOPS volumes, so they'll perform better while you're warming the cache back up after a restart.
BTW:
Restarting the database won't slow it much, as it only has to read data from the OS's disk cache back into shared_buffers. It's only if you restart the whole machine that you'll see a slowdown for a while. If you want to simulate this without a restart, you can use Linux's drop_caches feature:
echo 1 | sudo tee -a /proc/sys/vm/drop_caches
This is actually worse than the situation after a restart because it evicts binaries and libraries from memory too. The system will chug very heavily at first, as it reads the very frequently accessed binaries and libraries it's executing back into RAM. Then you'll start to see cache recovery behaviour like you would after a restart.
Also, you have too many connections configured. Install pgbouncer, put it in front of the database, and reduce your max_connections. You'll get better performance.
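In case it helps, a minimal pgbouncer.ini sketch for that setup (paths, database name, and pool sizes are illustrative only and should be tuned to your workload):
cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
EOF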

Reduce Membase quota per bucket to 5 MB

In Heroku, I notice that they limit my free Memcached bucket (actually Membase) to 5MB. However, I tried it on my own server and cannot set the bucket quota to less than 64MB (per node, for the Memcached bucket type). For the Membase bucket type, it's even more: 100MB.
Hmm, my server has a humble amount of RAM, and I need to allocate a very small amount to Memcached. Please advise.
Heroku is running a slightly modified version of our memcached software that lets them keep the bucket overhead very low. Unfortunately the "productized" version has some limits imposed to prevent the software from getting itself into trouble.
Especially for Membase buckets, we need at least 100MB in order to run safely.
You may be able to reduce/eliminate these limits if you recompile the source, but that wouldn't be a supported configuration.
Perry
Sorry for the delay in getting back to this...
As with any piece of software, there are internal data structures that need RAM to run...that's what gets allocated immediately with Membase.
If you install memcached, it will use as much RAM as you configure it to use...no more, no less.
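For a plain memcached daemon (outside Membase), the memory cap is just a startup flag, so a 5MB limit looks roughly like this (port and user are whatever your setup needs):
# Run memcached as a daemon with a 5 MB item cache
memcached -d -m 5 -p 11211 -u memcache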