The system I am building currently stores videos in Google Cloud Storage; my server returns the Google Cloud Storage link, which is used to play the video on mobile platforms. Is there a limit on how many users can access that link at the same time? Thank you!
All of the known limits for Cloud Storage are listed in the documentation. It says:
There is no limit to reads of objects in a bucket, which includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5000 object reads per second and then scale as needed.
So, no, there are effectively no limits to the number of concurrent downloads.
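As an aside, if the objects are not public, the server can hand back a time-limited signed URL instead of a raw link; the same absence of concurrent-download limits applies. A minimal sketch with the google-cloud-storage Java client, where the bucket and object names are placeholders, not anything from the question:

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class SignedUrlExample {
    public static void main(String[] args) {
        // Requires credentials that can sign, e.g. a service account key.
        Storage storage = StorageOptions.getDefaultInstance().getService();
        // "my-videos" and "movie.mp4" are placeholder names.
        BlobInfo video = BlobInfo.newBuilder(BlobId.of("my-videos", "movie.mp4")).build();
        // Any number of mobile clients can fetch this URL until it expires.
        URL url = storage.signUrl(video, 1, TimeUnit.HOURS);
        System.out.println(url);
    }
}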
I am new to Google Cloud and was wondering if it is possible to run a PostgreSQL container on Cloud Run with PostgreSQL's data_directory pointed at Cloud Storage?
If so, could you please point me to some tutorials/guides on this topic? And what are the downsides of this approach?
Edit-0: Just to clarify what I am trying to achieve:
I am learning Google Cloud and want to write a simple application to work along with it. I have decided that the backend code will run as a container under Cloud Run and the persistent data (i.e., the database files) will reside on Cloud Storage. Because this is a small app for learning purposes, I am trying to use as few moving parts as possible on the backend (and also ones that are always free). Both PostgreSQL and the backend code will reside in the same container, except for the actual data files, which will reside in Cloud Storage. Is this approach correct? Are there better approaches to achieve the same minimalism?
Edit-1: Okay, I got the answer! The Google documentation here mentions the following:
"Don't run a database over Cloud Storage FUSE!"
Buckets are not meant to store database files; some of the relevant limits are the following:
There is no limit to writes across multiple objects, which includes uploading, updating, and deleting objects. Buckets initially support roughly 1000 writes per second and then scale as needed.
There is no limit to reads of objects in a bucket, which includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5000 object reads per second and then scale as needed.
One alternative is to keep your PostgreSQL database on a separate persistent disk using Google Compute Engine. You can follow the “How to Set Up a New Persistent Disk for PostgreSQL Data” Community Tutorial.
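Once PostgreSQL runs on a GCE instance with its own persistent disk, the Cloud Run backend talks to it like any other Postgres host. A minimal JDBC sketch; the org.postgresql driver must be on the classpath, and DB_HOST, DB_USER, DB_PASS, and the appdb database are assumed names for illustration, not from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PgConnect {
    public static void main(String[] args) throws Exception {
        // DB_HOST etc. are assumed environment variables for illustration.
        String url = "jdbc:postgresql://" + System.getenv("DB_HOST") + ":5432/appdb";
        try (Connection conn = DriverManager.getConnection(
                 url, System.getenv("DB_USER"), System.getenv("DB_PASS"));
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connected: " + rs.getInt(1));
        }
    }
}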
On AWS I use the S3 + Lambda combination. As a new image is uploaded to a bucket, a Lambda is triggered and creates 3 different sizes of the image (small, medium, large). How can I do this with GCS + Cloud Functions?
PS: I know that there's getImageServingUrl(), but can this be used with GCE, or is it for App Engine only?
Would really appreciate any input.
Thanks.
Google Cloud Functions directly supports triggers for new objects being uploaded to GCS: https://cloud.google.com/functions/docs/calling/storage
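In Java, such a function implements BackgroundFunction from the Functions Framework and is deployed with the google.storage.object.finalize trigger. A minimal sketch; the GcsEvent POJO and the resize step are placeholders you would fill in:

import com.google.cloud.functions.BackgroundFunction;
import com.google.cloud.functions.Context;
import java.util.logging.Logger;

public class ResizeOnUpload implements BackgroundFunction<ResizeOnUpload.GcsEvent> {
    private static final Logger logger = Logger.getLogger(ResizeOnUpload.class.getName());

    // Minimal POJO for the GCS event payload; only the fields used here.
    public static class GcsEvent {
        public String bucket;
        public String name;
    }

    @Override
    public void accept(GcsEvent event, Context context) {
        logger.info("New object: gs://" + event.bucket + "/" + event.name);
        // Placeholder: download the image, render small/medium/large variants
        // (e.g. with javax.imageio.ImageIO), and upload them back to GCS.
    }
}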
For finer control, you can also configure a GCS bucket to publish object upload notifications to a Cloud Pub/Sub topic, and then set a subscription on that topic to trigger Google Cloud Functions: https://cloud.google.com/functions/docs/calling/pubsub
Note that there are some quotas on Cloud Functions uploading and downloading resources, so if you need to process more than 1 GB of image data per 100 seconds or so, you may need to request a quota increase: https://cloud.google.com/functions/quotas
I would like to use the QuickBlox service in my application. Users can send images/audio/videos to a dialog. QuickBlox supports this, but I can't find more information about its limitations:
What is the storage size for files?
How long can files be stored there?
What will happen if the storage is full?
Storage is unlimited (subject to fair usage).
There is no expiration date for files.
The maximum file size is 100 MB per file.
What does eventual or strong mean in the context of Google Cloud Storage consistency?
From the Consistency section of the documentation:
Google Cloud Storage provides strong global consistency for all read-after-write, read-after-update, and read-after-delete operations, including both data and metadata. When you upload a file (PUT) to Google Cloud Storage, and you receive a success response, the object is immediately available for download (GET) and metadata (HEAD) operations, from any location in Google's global network.
In other words, replication takes time, and the object is not made visible until that replication has finished; that is what makes the consistency strong. The documentation makes this concrete: "When you upload an object, the object is not available until it is completely uploaded." That is also why the write latency of a globally consistent, replicated store may be slightly higher than that of a non-replicated or non-committed store: a success response is returned only when multiple writes complete, not just one. The documentation has more details here.
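The guarantee is easy to observe with the google-cloud-storage Java client. A minimal sketch, assuming a bucket you own (the names here are placeholders):

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.charset.StandardCharsets;

public class ReadAfterWrite {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        BlobId id = BlobId.of("my-bucket", "consistency-demo.txt");
        // create() returns only after the upload is fully committed...
        storage.create(BlobInfo.newBuilder(id).build(),
            "hello".getBytes(StandardCharsets.UTF_8));
        // ...so an immediate read is guaranteed to see the new object.
        byte[] content = storage.readAllBytes(id);
        System.out.println(new String(content, StandardCharsets.UTF_8));
    }
}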
I want to add user metadata that is calculated from the stream as it is uploaded
I am using the Google Cloud Storage Client from inside a Servlet during a file upload.
The only solutions I have come up with and tried are not really satisfactory, for a couple of reasons:
1. Buffer the stream in memory, calculate the metadata as the stream is buffered, then write the stream out to Cloud Storage after it has been completely read.
2. Write the stream to a temporary bucket and calculate the metadata. Then read the object from the temporary bucket and write it to its final location with the calculated metadata.
3. Pre-calculate the metadata on the client and send it with the upload.
Why these aren't acceptable:
1. Doesn't work for large objects, which some of these will be.
2. Will cost a fortune if lots of objects are uploaded, which there will be.
3. The clients can't be trusted, and some of them can't calculate some of what I need.
Is there any way to update the metadata of a Google Cloud Storage object after the fact?
You are likely using the Google Cloud Storage Java Client for App Engine library. This library is great for App Engine users, but it offers only a subset of the features of Google Cloud Storage. To my knowledge, it does not support updating the metadata of existing objects. However, Google Cloud Storage itself definitely supports this.
You can use the Google API Java client library, which exposes Google Cloud Storage's JSON API. With this library, you'll be able to use the storage.objects.update method or the storage.objects.patch method, both of which can update metadata (the difference is that update replaces all of the object's writable properties, while patch changes only the specified fields). The code would look something like this:
// Assumes `storage` is an authorized com.google.api.services.storage.Storage client.
StorageObject objectMetadata = new StorageObject()
    .setName("OBJECT_NAME")
    .setMetadata(ImmutableMap.of("key1", "value1", "key2", "value2"));
// patch() takes the bucket name, the object name, and the fields to change.
Storage.Objects.Patch patchObject =
    storage.objects().patch("mybucket", "OBJECT_NAME", objectMetadata);
patchObject.execute();