Can't create cloud object storage bucket under lite plan - ibm-cloud

I am unable to create any Cloud Object Storage bucket, and I need one.
I keep getting this error when creating one:
[409, Conflict] The account already has an instance created with the Lite plan.
I don't have any existing bucket.

The error indicates that creating the COS instance is failing because the account already has an instance created with the Lite plan.
Background:
In IBM Cloud you must first create an instance of the Cloud Object Storage (COS) service. Then you can start creating buckets in the instance.
A plan is a very high-level feature set and associated price point. For COS there are Lite and Standard plans. The Lite plan lets you kick the tires for free, with the expected free-tier limitations, and it is limited to one instance per account -- which is why the create call returns a 409. Either create your buckets in the existing Lite instance, delete that instance first, or create the new instance on the Standard plan.

Related

Running periodic queries on google cloud sql instance

I have a Google Cloud PostgreSQL instance and I'd like to run periodic SQL queries on it and use the monitoring system to alert the user with the results.
How can I accomplish this using just the GCP platform, without having to develop a separate app?
As far as I am aware, there is no built-in feature for recurring queries in Cloud SQL at the moment, so you have to implement your own. You can use Cloud Scheduler to trigger a Cloud Function (via an HTTP/S endpoint) that runs the query on Cloud SQL and then notifies the user in the way that suits your needs (I would recommend using Pub/Sub).
You might also want to save the result in a GCS bucket and have the user pull the result from there.
Also, you might want to check BigQuery: it has a built-in feature for scheduling queries.
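As a rough sketch of the Scheduler-to-Function-to-Pub/Sub flow described above, the handler below runs the query and publishes a message only when rows come back. This is a hypothetical outline, not an official recipe: the database call and the publisher are injected as plain callables, and the `orders`/`status` query is an invented example. In a real Cloud Function, `run_query` would use a Cloud SQL driver such as pg8000 or psycopg2, and `publish` would wrap `google.cloud.pubsub_v1.PublisherClient.publish`.

```python
import json


def check_and_alert(run_query, publish,
                    sql="SELECT id FROM orders WHERE status = 'stuck'"):
    """Run the periodic query; publish an alert message if it returns rows.

    run_query: callable taking a SQL string and returning a list of rows
               (in production: a Cloud SQL query via pg8000/psycopg2).
    publish:   callable taking a bytes payload
               (in production: a Pub/Sub publish to an alert topic).
    """
    rows = run_query(sql)
    if not rows:
        return None  # nothing to report this run
    # Pub/Sub payloads are bytes; serialize the result as JSON.
    message = json.dumps({"query": sql, "rows": rows}).encode("utf-8")
    publish(message)
    return message
```

Cloud Scheduler would simply hit the function's HTTP endpoint on a cron schedule; the function itself stays a thin piece of glue.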

How do we create our own scalable storage buckets with Kubernetes?

Instead of using Google Cloud or AWS storage buckets, how do we create our own scalable storage bucket?
For example, what would the options be if someone were to hit a photo 1 billion times a day? Assume the photo is user-generated, not image/app-generated.
If I have asked this in the wrong place, please redirect me.
As an alternative to Google Cloud or AWS object storage, you could consider using something like MinIO.
It's easy to set up and it can run in Kubernetes. All you need is a PersistentVolumeClaim to write your data to, although you could use emptyDir volumes with ephemeral storage to evaluate the solution.
A less obvious alternative would be something like Ceph. It's more complicated to set up, and it goes beyond object storage: if you need to implement block storage as well for your Kubernetes cluster, Ceph can do this (RADOS Block Devices) while also offering object storage (RADOS Gateways).
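As a minimal sketch of the PVC-backed MinIO setup described above (the names and the 10Gi size are placeholders; the image and `server /data` arguments follow MinIO's published container usage). To just evaluate it, swap the `persistentVolumeClaim` volume for `emptyDir: {}`.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi          # placeholder size
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels: {app: minio}
  template:
    metadata:
      labels: {app: minio}
    spec:
      containers:
        - name: minio
          image: quay.io/minio/minio
          args: ["server", "/data", "--console-address", ":9001"]
          ports:
            - containerPort: 9000   # S3-compatible API
            - containerPort: 9001   # web console
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio-data
```

A single replica like this is only for kicking the tires; for real scale you would run MinIO in its distributed mode across several nodes and volumes.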

PostgreSQL data_directory on Google Cloud Storage, possible?

I am new to Google Cloud and was wondering if it is possible to run a PostgreSQL container on Cloud Run with PostgreSQL's data_directory pointed at Cloud Storage?
If so, could you please point me to some tutorials/guides on this topic? And also, what are the downsides of this approach?
Edit-0: Just to clarify what I am trying to achieve:
I am learning Google Cloud and want to write a simple application to work along with it. I have decided that the backend code will run as a container under Cloud Run and the persistent data (i.e. the database files) will reside on Cloud Storage. Because this is a small app for learning purposes, I am trying to use as few moving parts as possible on the backend (and also ones that are always free). Both PostgreSQL and the backend code will reside in the same container, except for the actual data files, which will reside on Cloud Storage. Is this approach correct? Are there better approaches to achieve the same minimalism?
Edit-1: Okay, I got the answer! The Google documentation here mentions the following:
"Don't run a database over Cloud Storage FUSE!"
The buckets are not meant to store database information; some of the limits are the following:

- There is no limit to writes across multiple objects, which includes uploading, updating, and deleting objects. Buckets initially support roughly 1000 writes per second and then scale as needed.
- There is no limit to reads of objects in a bucket, which includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5000 object reads per second and then scale as needed.
One alternative is to use a separate persistent disk for your PostgreSQL database on Google Compute Engine. You can follow the "How to Set Up a New Persistent Disk for PostgreSQL Data" community tutorial.

How to apply upload limit for google storage bucket per day/month/etc

Is there a way to apply an upload limit to a Google Storage bucket per day/month/year?
Is there a way to apply a limit on the amount of network traffic?
Is there a way to apply a limit on Class A operations?
Is there a way to apply a limit on Class B operations?
I found only "Queries per 100 seconds per user" and "Queries per day" using the
https://cloud.google.com/docs/quota instructions, but these are JSON API quotas.
(I am not even sure what kind of API is used inside the StorageClient C# client class.)
To define quotas (and, for that matter, SLOs) you need an SLI: a service level indicator. That means having metrics on what you want to observe.
That's not the case here: Cloud Storage has no indicator for the volume of data per day. Thus you have no built-in indicator, no metrics, ... and no quotas.
If you want this, you have to build it yourself: wrap all the Cloud Storage calls in a service that counts the volume of blobs per day, and then apply your own rules to this personal indicator.
Of course, to prevent any bypass, you have to deny direct access to the buckets and only grant your "indicator service" access to them. The same goes for bucket creation, so that new buckets get registered in your service.
Not an easy task...
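A minimal sketch of that "indicator service" counting idea: a wrapper that tracks bytes uploaded per UTC day and rejects uploads once a daily cap is hit. Everything here is illustrative, not an existing API: the real upload call is injected (in production `upload_fn` would wrap, e.g., `google.cloud.storage`'s `Blob.upload_from_string`), and a real deployment would keep the counter in a shared store such as Firestore or Redis rather than in process memory.

```python
import datetime


class DailyUploadQuota:
    """Enforce a per-day byte limit in front of an object-storage upload."""

    def __init__(self, max_bytes_per_day, upload_fn,
                 clock=datetime.datetime.utcnow):
        self.max_bytes = max_bytes_per_day
        self.upload_fn = upload_fn  # e.g. wraps Blob.upload_from_string
        self.clock = clock          # injectable for testing
        self._day = None            # UTC date the counter applies to
        self._used = 0              # bytes uploaded so far today

    def upload(self, name, data):
        today = self.clock().date()
        if today != self._day:      # new day: reset the counter
            self._day, self._used = today, 0
        if self._used + len(data) > self.max_bytes:
            raise RuntimeError("daily upload quota exceeded")
        self.upload_fn(name, data)  # only reached while under quota
        self._used += len(data)
```

The same shape extends to counting Class A/B operations instead of bytes; the hard part, as noted above, is making this wrapper the only path to the buckets.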

Integrating MongoDB cloud with AWS autoscaling

I am absolutely not buying MongoDB Atlas :) I want to do the best I can with MongoDB Cloud. Today with MongoDB Cloud, when I spin up an AWS instance, the automation takes care of joining my MongoDB Cloud account with the API and group key I've embedded in my system image. What I want to do now is take it to the next level and have that instance register itself as an additional replica of an -existing- replica set and sync up. I can't be the first person to want to do this, but I'm coming up empty with Google. Can anyone point me to a gist, blog, rant, or other example of how to do this?
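Not a full answer, but for the "register itself as an additional replica" step, the core operation is a `replSetReconfig` against the set's primary with a bumped config version. Below is a sketch of just the pure config transformation; the surrounding plumbing is an assumption on my part: with pymongo you would fetch the current config via `client.admin.command("replSetGetConfig")["config"]` and apply the result with `client.admin.command("replSetReconfig", new_config)` from the instance's boot script.

```python
def add_member(config, host):
    """Return a new replica-set config with `host` appended as a member.

    config: the document from replSetGetConfig (has _id, version, members).
    host:   "hostname:port" of the new instance registering itself.
    """
    members = config["members"]
    # Member _id values must be unique within the set; pick the next free one.
    next_id = max(m["_id"] for m in members) + 1
    return {
        **config,
        "version": config["version"] + 1,  # reconfig requires a version bump
        "members": members + [{"_id": next_id, "host": host}],
    }
```

Once the reconfig is accepted, the new member performs an initial sync on its own; the remaining work is discovering the current primary from the seed list baked into your image.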