Unable to Increase Cloud SQL Storage - google-cloud-sql

We currently have 4 Cloud SQL instances. We have a problem with the latest instance: we apply changes to increase its storage, but the change is not applied. The other instances are normal; we can increase their storage directly.
Has anyone else had this kind of issue?
Any help is really appreciated.
Thank you.
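For reference, a minimal sketch of one way such a storage increase can be requested programmatically, via the Cloud SQL Admin API (the project, instance name and target size below are placeholders, not from the question; if the resulting operation never reaches DONE, its error details are usually the first place to look):

```python
# Sketch: requesting a data-disk size increase via the Cloud SQL Admin API.
# Requires Application Default Credentials; names below are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

body = {"settings": {"dataDiskSizeGb": "100"}}  # target size in GB, as a string
request = service.instances().patch(
    project="my-project", instance="my-instance", body=body
)
operation = request.execute()
print(operation["status"])  # poll the operation until it reports DONE
```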

Related

How do we create our own scalable storage buckets with Kubernetes?

Instead of using Google Cloud or AWS storage buckets, how do we create our own scalable storage bucket?
For example, if someone were to hit a photo 1 billion times a day, what would the options be? Assume the photo is user-generated rather than image/app-generated.
If I have asked this in the wrong place, please redirect me.
As an alternative to Google Cloud or AWS object storage, you could consider using something like MinIO.
It's easy to set up and can run in Kubernetes. All you need is a PersistentVolumeClaim to write your data to, although you could use emptyDir volumes with ephemeral storage to evaluate the solution.
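As a rough illustration, here is a minimal sketch of how an application could talk to such a MinIO deployment through its S3-compatible API (the endpoint, credentials and bucket name are placeholders):

```python
# Sketch: talking to a MinIO deployment through its S3-compatible API.
# Endpoint, credentials and bucket name are placeholders for illustration.
from minio import Minio

client = Minio(
    "minio.example.svc.cluster.local:9000",  # in-cluster MinIO service (assumed name)
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)

if not client.bucket_exists("photos"):
    client.make_bucket("photos")

# Upload a user-generated photo and fetch it back.
client.fput_object("photos", "user-123/avatar.jpg", "/tmp/avatar.jpg")
response = client.get_object("photos", "user-123/avatar.jpg")
```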
A less obvious alternative would be something like Ceph. It's more complicated to set up, but it goes beyond object storage. If you also need block storage for your Kubernetes cluster, Ceph can provide it (RADOS Block Devices) while also offering object storage (RADOS Gateway).

Updating AWS Elasticsearch cluster settings

By default in Elasticsearch, the maximum number of open scrolls is 500, but I need to increase this number. There is no problem updating "search.max_open_scroll_context" on a local machine, but AWS Elasticsearch does not allow the change.
When trying to apply the update from the answer given in this thread (configure-search-max-open-scroll-context), the response is: {"Message":"Your request: '/_cluster/settings' payload is not allowed."}. I can perform this operation on my local Elasticsearch, but AWS Elasticsearch doesn't seem to allow it. Does anyone have an answer for this on AWS Elasticsearch, or has anyone faced something similar?
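For context, this is roughly the kind of cluster-settings update that succeeds on a self-managed cluster but is rejected with the error above on AWS Elasticsearch (a sketch using the Python elasticsearch client; the host and value are placeholders):

```python
# Sketch: cluster-settings update that works on a self-managed cluster
# but is blocked on AWS Elasticsearch. Host and value are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
es.cluster.put_settings(
    body={"persistent": {"search.max_open_scroll_context": 1000}}
)
```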
This is restricted on the customer end in AWS ES.
You need to reach out to AWS Support Team for this. Just let them know the value of "search.max_open_scroll_context" that you are looking for and they will update it from the backend.
Here is the link to AWS-supported operations on Elasticsearch.
AWS doesn't currently support updating "search.max_open_scroll_context". You can contact AWS Support to increase the scroll context count. Alternatively, you can use the search_after API instead of scroll.
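As a rough sketch of that alternative, paging with search_after might look like the following (the endpoint, index name and sort fields are assumptions, not from the question; the sort key just needs to be unique per document):

```python
# Sketch: paging with search_after instead of scroll.
# Endpoint, index and sort fields are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://my-domain.us-east-1.es.amazonaws.com")

search_after = None
while True:
    body = {
        "size": 1000,
        "query": {"match_all": {}},
        # assumed document fields forming a unique, deterministic sort key
        "sort": [{"created_at": "asc"}, {"doc_id": "asc"}],
    }
    if search_after:
        body["search_after"] = search_after
    page = es.search(index="my-index", body=body)
    hits = page["hits"]["hits"]
    if not hits:
        break
    # ... process hits ...
    search_after = hits[-1]["sort"]  # cursor for the next page
```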

PostgresQL data_directory on Google Cloud Storage, possible?

I am new to Google Cloud and was wondering whether it is possible to run a PostgreSQL container on Cloud Run with PostgreSQL's data_directory pointed at Cloud Storage?
If so, could you please point me to some tutorials/guides on this topic? Also, what are the downsides of this approach?
Edit-0: Just to clarify what I am trying to achieve:
I am learning Google Cloud and want to write a simple application to work along with it. I have decided that the backend code will run as a container under Cloud Run and the persistent data (i.e. the database file) will reside on Cloud Storage. Because this is a small app for learning purposes, I am trying to use as few moving parts as possible on the backend (and also ones that are always free). Both PostgreSQL and the backend code will reside in the same container, except for the actual data file, which will reside on Cloud Storage. Is this approach correct? Are there better approaches to achieve the same minimalism?
Edit-1: Okay, I got the answer! The Google documentation here mentions the following:
"Don't run a database over Cloud Storage FUSE!"
Buckets are not meant to store database information; some of the relevant limits are the following:
There is no limit to writes across multiple objects, which includes uploading, updating, and deleting objects. Buckets initially support roughly 1000 writes per second and then scale as needed.
There is no limit to reads of objects in a bucket, which includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5000 object reads per second and then scale as needed.
One alternative is to use Google Compute Engine with a separate persistent disk for your PostgreSQL database. You can follow the "How to Set Up a New Persistent Disk for PostgreSQL Data" community tutorial.
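As a rough illustration of that alternative, the Cloud Run backend would then connect over the network to PostgreSQL running on the Compute Engine VM (whose data_directory lives on the attached persistent disk) rather than bundling it in the same container. A minimal sketch with psycopg2, where the host, credentials and database name are placeholders:

```python
# Sketch: Cloud Run backend connecting to PostgreSQL on a Compute Engine VM.
# Host, credentials and database name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="10.128.0.5",          # internal IP of the GCE VM (assumed)
    dbname="appdb",
    user="appuser",
    password="change-me",
    connect_timeout=5,
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])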

Google Cloud SQL Postgres - randomly slow queries from Google Compute / Kubernetes

I've been testing Google Cloud SQL with PostgreSQL, but I have random queries taking ~3 s instead of a few ms.
Troubleshooting I did:
The queries themselves aren't the problem; rerunning the same query works.
Indexes are properly set. The database is also very, very small; it shouldn't behave like this even if there weren't any indexes.
The Kubernetes container is connecting to the database through the SQL Proxy (I followed this: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine). It is not the problem though, as I tried connecting directly to the database and had the same issue.
I configured net.ipv4.tcp_keepalive_time to 60 to make sure the connections weren't dropping.
I also have a pool of connections that are never disconnected, to make sure the problem wasn't coming from there (see the sketch below).
When I run queries directly through my local PostgreSQL client, I never have the problem.
I don't have this issue when developing locally either, connecting to my local database.
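For reference, here is a minimal sketch of the pooled, keepalive-enabled setup described above (the host, credentials and keepalive values are placeholders, not my exact config):

```python
# Sketch: pooled psycopg2 connections with TCP keepalives enabled.
# Host, credentials and keepalive values are placeholders.
from psycopg2.pool import SimpleConnectionPool

pool = SimpleConnectionPool(
    minconn=1,
    maxconn=10,
    host="127.0.0.1",            # Cloud SQL Proxy sidecar, or the instance IP
    port=5432,
    dbname="appdb",
    user="appuser",
    password="change-me",
    # libpq TCP keepalive settings, mirroring net.ipv4.tcp_keepalive_time=60
    keepalives=1,
    keepalives_idle=60,
    keepalives_interval=10,
    keepalives_count=3,
)

conn = pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
finally:
    pool.putconn(conn)
```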
What I'm getting at is: I feel there's some weird connection/link issue between my Google Compute instances and my Google SQL instance that I can't seem to figure out.
Any idea?
Edit:
I also noticed these logs in my Cloud SQL instance every 30s:
ERROR: recovery is not in progress
HINT: Recovery control functions can only be executed during recovery.
STATEMENT: SELECT pg_is_xlog_replay_paused(), current_timestamp
That's an interesting problem you are facing. My knowledge of Kubernetes isn't that great, but I do have a general understanding, so let's see if I can provide some suggestions.
To start with, the API that you linked to in your question does mention that it is still in beta, so I believe there may still be performance issues left to patch.
Secondly, from what I understand, Kubernetes is a great tool for handling stateless workloads. Thus, handling data where state is required for queries would be a slow operation. This article (although not entirely related) does explain some of the pitfalls of Kubernetes (not all the questions are relevant).
Thirdly, could you explain your use case a little? Do you really need to use Kubernetes, or would another tool like a powerful Compute Engine instance or a Dataflow job resolve the issue? Are you making your database queries through a programming language or an application call?
Thanks, and do let me know!

Upgrading amazon EC2 m1.large instance to m3.large with mongodb installed

If I were to upgrade an Amazon instance, I'd create a snapshot of the image, create the new instance from that image, and then upgrade that instance.
My question(s) relate to MongoDB and the best way to upgrade from an m1.large to an m3.large instance - basically, m3s are cheaper and more powerful than the old m1s.
I currently have MongoDB running on the m1.large instance, backed by 3 EBS volumes for storage, journalling and logs (essentially the MongoDB image config from the Marketplace).
When I went through setting up the new m3.large instance, I noticed that it's not EBS-optimized.
Working with MongoDB and the current config, I assume that for optimal performance it's desirable to go the EBS-optimized route - if that's the case, is the best upgrade path to go for an m3.xlarge? Would I hit a big performance penalty if I went with an m3.large?
And lastly... after taking a snapshot of an image (specifically an image backed by EBS volumes), does the new image keep the same config setup, i.e. will the new image be backed by the same volumes?
I know I can stop and start the current instance, but I want to minimise any downtime.
Any help appreciated!
Firstly, you don't need to create an entirely new instance, snapshot the EBS volumes of the old one, and attach the copies. If you're doing this to try to avoid service interruption, what happens when you switch the EIP from the old to the new instance? Yep - service interruption.
Just stop the m1, change its instance type to m3, and start it again. There will be an outage, of course, but you'll be back in less than 5 minutes and you'll have saved yourself a chunk of work replicating volumes.
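If you'd rather script it, a minimal sketch of that stop / change-type / start flow with boto3 might look like this (the region and instance ID are placeholders):

```python
# Sketch: stop the instance, change its type, and start it again with boto3.
# Region and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# The type can only be changed while the instance is stopped.
ec2.modify_instance_attribute(
    InstanceId=instance_id, InstanceType={"Value": "m3.large"}
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```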
As for EBS Optimised - do you really need that? Do you understand what it means, and what the consequences of NOT having it on the new instance are? If the answers to both are YES, then of course pick an m3 (or larger) instance type that supports it. If NO, research until you know what the feature gives you and whether you actually need it (you pay more with it active - don't spend more than you actually need to).