How to apply an upload limit to a Google Cloud Storage bucket per day/month/etc - google-cloud-storage

Is there a way to apply an upload limit to a Google Cloud Storage bucket per day/month/year?
Is there a way to apply a limit on the amount of network traffic?
Is there a way to apply a limit on Class A operations?
Is there a way to apply a limit on Class B operations?
I found only "Queries per 100 seconds per user" and "Queries per day" using the
https://cloud.google.com/docs/quota instructions, but those are JSON API quotas
(I'm not even sure which API is used inside the StorageClient C# client class).

To define quotas (and, for that matter, SLOs) you need an SLI: a service level indicator. That means having metrics for what you want to observe.
That's not the case here. Cloud Storage has no indicator for the volume of data per day. So you have no built-in indicator, no metrics... and no quotas.
If you want this, you have to build something of your own: wrap all Cloud Storage calls in a service that counts the volume of blobs per day, and then apply your own rules to that custom indicator (see the sketch below).
Of course, to prevent any bypass, you have to deny direct access to the buckets and grant only your "indicator service" access to them. The same goes for bucket creation, so that new buckets get registered in your service.
Not an easy task...
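Here is a minimal sketch of that wrapper idea, assuming the Node.js @google-cloud/storage client; the uploadWithQuota name, the 10 GiB/day cap and the in-memory counter are placeholders of mine (a real service would persist the counter somewhere durable):

    // Wrapper that counts uploaded bytes per day before touching Cloud Storage.
    import { Storage } from '@google-cloud/storage';

    const storage = new Storage();
    const DAILY_UPLOAD_LIMIT_BYTES = 10 * 1024 ** 3; // hypothetical 10 GiB/day cap
    const uploadedPerDay = new Map<string, number>(); // key: YYYY-MM-DD

    const todayKey = (): string => new Date().toISOString().slice(0, 10);

    // All callers go through this function instead of accessing the bucket directly.
    export async function uploadWithQuota(
      bucketName: string,
      destination: string,
      data: Buffer
    ): Promise<void> {
      const key = todayKey();
      const usedToday = uploadedPerDay.get(key) ?? 0;

      if (usedToday + data.length > DAILY_UPLOAD_LIMIT_BYTES) {
        throw new Error(`Daily upload quota exceeded: ${usedToday} bytes already uploaded today`);
      }

      await storage.bucket(bucketName).file(destination).save(data);
      uploadedPerDay.set(key, usedToday + data.length);
    }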

Related

How do we create our own scalable storage buckets with Kubernetes?

Instead of using Google Cloud or AWS storage buckets, how do we create our own scalable storage bucket?
For example, if someone were to hit a photo 1 billion times a day, what would the options be here? Assume the photo is user-generated and not image/app-generated.
If I have asked this in the wrong place, please redirect me.
As an alternative to GCS or AWS object storage, you could consider using something like MinIO.
It's easy to set up and it can run in Kubernetes. All you need is a PersistentVolumeClaim to write your data to, although you could evaluate the solution with emptyDirs and ephemeral storage (a minimal manifest sketch follows below).
A less obvious alternative would be something like Ceph. It's more complicated to set up, but it goes beyond object storage: if you also need block storage for your Kubernetes cluster, Ceph can provide it (RADOS Block Devices) while offering object storage as well (RADOS Gateway).
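The manifest sketch for the MinIO option, assuming the public minio/minio image; the names, the 10Gi size and the credentials are placeholders (a real deployment would keep the credentials in a Secret and probably use a StatefulSet):

    # PVC that MinIO writes its objects to
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: minio-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: minio
    spec:
      replicas: 1
      selector:
        matchLabels: { app: minio }
      template:
        metadata:
          labels: { app: minio }
        spec:
          containers:
            - name: minio
              image: minio/minio
              args: ["server", "/data"]    # serve objects from the mounted volume
              env:
                - name: MINIO_ROOT_USER
                  value: minioadmin        # placeholder credential
                - name: MINIO_ROOT_PASSWORD
                  value: change-me-please  # placeholder credential
              ports:
                - containerPort: 9000      # S3-compatible API
              volumeMounts:
                - name: data
                  mountPath: /data
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: minio-data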

Reveal GCS bucket at specific date and time

I have a Google Cloud Storage bucket that I would like to reveal (make public) at a specific date and time. How can that be achieved?
I have tried the bucket permissions, only to find out that I cannot use any condition with the principal allUsers.
Another way that comes up is to script a Google Compute Engine instance with a startup script together with Cloud Scheduler; this however has an unpredictable delay, which my purpose cannot tolerate.
So is there any other way? I do not necessarily need to use GCS; any other service that will allow me to reveal a folder/files at a specific time would be enough.
You can execute a function that makes the objects public in Cloud Functions for Firebase with functions.pubsub.schedule().onRun():
"In Cloud Functions for Firebase, scheduling logic resides in your functions code, with no special deploy-time requirements. To create a scheduled function, use functions.pubsub.schedule('your schedule').onRun((context))."
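For example, a hedged sketch in TypeScript: the bucket name, schedule and time zone are placeholders, and it assumes the function's service account is allowed to change the bucket's access controls (Bucket#makePublic from @google-cloud/storage):

    import * as functions from 'firebase-functions';
    import { Storage } from '@google-cloud/storage';

    const storage = new Storage();

    // Runs on the given cron schedule; delete or disable the function once the
    // bucket has been revealed, since the schedule would otherwise recur.
    export const revealBucket = functions.pubsub
      .schedule('0 12 24 12 *')      // hypothetical: 12:00 on 24 December
      .timeZone('Europe/Prague')     // placeholder time zone
      .onRun(async () => {
        // includeFiles also grants allUsers read access on the existing objects
        await storage.bucket('my-bucket-to-reveal').makePublic({ includeFiles: true });
        return null;
      });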

how to plan google cloud storage bucket creation when working with users

I'm trying to figure out if anyone can offer advice around bucket creation for an app that will have users, each with an album of photos. I was initially thinking of creating a single bucket and then prefixing the filename with the user id, since Google Cloud Storage doesn't recognize subdirectories, like so: /bucket-name/user-id1/file.png
Alternatively, I was considering creating a bucket per user and naming it by user id, like so: /user-id1-which-is-also-bucket-name/file.png
I was wondering what I should consider in terms of cost and organization when setting up my google cloud storage. Thank you!
There is no difference in terms of cost. In terms of organization, it's different:
- For deletion, it's simpler to delete a whole bucket than a folder inside a single shared bucket.
- For performance, sharding is better if you have separate buckets (you have less chance of creating a hotspot).
- From a billing perspective, you can add labels to the buckets and get them in the billing export to BigQuery. You can then know the cost of each user's bucket, and maybe re-bill them.
The biggest advantage of the 1-bucket-per-user model is security. You can grant a user access on a bucket (if the users access the bucket directly and don't go through a backend service) without using legacy (and almost deprecated) ACLs on objects. In addition, if you use ACLs, you can't set an ACL per folder; ACLs are per object. So every time you add an object to the single shared bucket, you have to set the ACL on it. That's harder to achieve.
IMO, 1 bucket per user is the best model.
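A hedged sketch of that model with the Node.js @google-cloud/storage client; the createUserBucket helper, bucket naming scheme, label key, location and role are placeholders of mine:

    import { Storage } from '@google-cloud/storage';

    const storage = new Storage();

    export async function createUserBucket(userId: string, userEmail: string): Promise<void> {
      const bucketName = `myapp-photos-${userId}`; // hypothetical naming scheme

      // The label shows up in the billing export to BigQuery, for per-user costing.
      await storage.createBucket(bucketName, {
        location: 'EU',                 // placeholder location
        labels: { 'app-user': userId },
      });

      // Grant the user read access on their own bucket via IAM, not object ACLs.
      const bucket = storage.bucket(bucketName);
      const [policy] = await bucket.iam.getPolicy({ requestedPolicyVersion: 3 });
      policy.bindings.push({
        role: 'roles/storage.objectViewer',
        members: [`user:${userEmail}`],
      });
      await bucket.iam.setPolicy(policy);
    }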

Google Cloud Storage quota hit - how?

When my app is trying to access files in a bucket using a SignedURL, a 429 response is received:
<Error>
<Code>InsufficientQuota</Code>
<Message>
The App Engine application does not have enough quota.
</Message>
<Details>App s~[myappname] not have enough quota</Details>
</Error>
This error continues until the end of the day, when the quota is apparently reset, then I can use storage again. It's only a small app and does not have much usage. The project that contains the storage is set up to use billing. The files are being accessed from another project, which is also set up to use billing.
I'm not aware that Google Cloud Storage has any quotas that could be hit in this fashion. The only ones I know of are the ones here: https://cloud.google.com/storage/quotas but as far as I am aware, none of them apply.
- Buckets are not being created or destroyed.
- Updates are not being made to buckets.
- There are only a couple of IAM identities.
- There are no Pub/Sub notifications.
- Objects stored in the buckets are small.
Is there any way I can find out why the quota is being exceeded?
It turns out it was because of a spending limit I had set on App Engine. I didn't think those spending limits applied any more, but it turns out that is only the case for new projects. Spending limits that have already been set on existing projects are still effective, and I can personally attest that they do work!
Thanks for the comments @KevinQuinzel and @gso_gabriel.

Multiple pods and nodes management in Kubernetes

I've been digging through the Kubernetes documentation to try to figure out what the recommended approach is for this case.
I have a private movie API with the following microservices (pods).
- summary
- reviews
- popularity
Also I have accounts that can access these services.
How do I restrict access to services per account, e.g. account A can access all the services but account B can only access summary?
Account A could be doing 100x more requests than account B. Is it possible to scale services for specific accounts?
Should I set up the accounts as Nodes?
I feel like I'm missing something basic here.
Any thoughts or animated gifs are very welcome.
It sounds like this level of control should be implemented at the application level.
Access to particular parts of your application, in this case the services, should probably be controlled via user permissions. A similar line of thought applies to scaling out the services: allow everything to scale, but rate limit up front, e.g., account A can get 10 requests per second and account B can do 100x. Designating accounts to nodes might also be possible, but should be avoided. You don't want to end up micromanaging the orchestration layer :)
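A minimal sketch of that "rate limit up front" idea as Express middleware in TypeScript; the x-account-id header, the per-minute limits and the in-memory counters are placeholders (a real setup would use a shared store like Redis or an API gateway):

    import express, { NextFunction, Request, Response } from 'express';

    // Hypothetical per-account limits; account A gets 100x account B.
    const LIMITS_PER_MINUTE: Record<string, number> = {
      'account-a': 1000,
      'account-b': 10,
    };

    const counters = new Map<string, { windowStart: number; count: number }>();

    function rateLimitByAccount(req: Request, res: Response, next: NextFunction): void {
      const account = req.header('x-account-id') ?? 'anonymous';
      const limit = LIMITS_PER_MINUTE[account] ?? 10;
      const now = Date.now();

      const entry = counters.get(account) ?? { windowStart: now, count: 0 };
      if (now - entry.windowStart > 60_000) {   // reset the one-minute window
        entry.windowStart = now;
        entry.count = 0;
      }
      entry.count += 1;
      counters.set(account, entry);

      if (entry.count > limit) {
        res.status(429).send('Rate limit exceeded');
        return;
      }
      next();
    }

    const app = express();
    app.use(rateLimitByAccount);
    app.get('/summary', (_req, res) => res.send('summary')); // same idea for reviews, popularity
    app.listen(8080);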