I am currently structuring a web application to serve segments of our database as HTML iframes. I need to host my Django app's static files (such as Bootstrap) in a static file store on Google Cloud Storage so that the HTML elements render correctly. However, when I try to create a bucket called 'static', GCS replies with the following error:
Sorry, that name is not available. Please try a different one.
On top of that, it does not allow me to access or modify that URI, displaying a "Forbidden" message when I try.
Does anyone know how to change this default behavior on Google's side? There is no documentation regarding this.
It seems that a bucket with that name has already been created by someone else. You have to choose a globally unique name.
Bucket names reside in a single Google Cloud Storage namespace. As a consequence, every bucket name must be unique across the entire Google Cloud Storage namespace. If you try to create a bucket with a bucket name that is already taken, Google Cloud Storage responds with an error message.
Use another name or use the default bucket. If your app was created after the App Engine 1.9.0 release, it should have a default GCS bucket named [your-app-id].appspot.com available. You can upload your static files to that bucket and mimic a directory structure as follows:
[your-app-id].appspot.com/static/my-file-1
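If you are serving the Django app's static files from that bucket, one way (among several) is the django-storages package with its Google Cloud Storage backend. The settings below are only a minimal sketch under that assumption; the bucket name, credentials path and the GS_LOCATION prefix are placeholders you would adapt:

# settings.py -- minimal sketch assuming the django-storages package
# (pip install django-storages[google]); names and paths are placeholders.
from google.oauth2 import service_account

GS_BUCKET_NAME = "your-app-id.appspot.com"   # the default App Engine bucket
GS_LOCATION = "static"                       # prefix so objects land under static/
GS_CREDENTIALS = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json"          # hypothetical key file
)

STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATIC_URL = "https://storage.googleapis.com/%s/%s/" % (GS_BUCKET_NAME, GS_LOCATION)

Running python manage.py collectstatic would then push Bootstrap and the rest of your static assets into the bucket under that prefix.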
I want to link multiple domains to one bucket with GCS.
However, according to the official documentation the bucket name has to match the domain exactly, so it seems that you cannot associate multiple domains with a single bucket.
Does anyone know a way to do this?
GCS does not support this directly. Instead, you'd likely need to put Google Cloud Load Balancing in front of your GCS bucket, with the bucket as a backing store. With it, you can obtain a dedicated static IP address to which you can map several domains; it also lets you serve static and dynamic content under the same domain, and lets you swap out which bucket is served at a given path. The main downside is added complexity and cost.
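As a rough sketch of how the pieces fit together: the bucket is attached to the load balancer as a backend bucket, while the URL map, target HTTP(S) proxy, forwarding rule (with the static IP), SSL certificates and the DNS records for each domain are configured as separate resources. The snippet below only shows the backend-bucket step and assumes the google-cloud-compute Python client library; the project and resource names are placeholders:

# Hedged sketch: attach an existing GCS bucket to Cloud Load Balancing as a
# backend bucket. The URL map, proxy, forwarding rule and per-domain DNS/SSL
# configuration are separate steps not shown here. Names are placeholders.
from google.cloud import compute_v1

client = compute_v1.BackendBucketsClient()
backend_bucket = compute_v1.BackendBucket(
    name="my-static-backend",      # placeholder resource name
    bucket_name="my-gcs-bucket",   # the existing GCS bucket
    enable_cdn=True,               # optional: serve it through Cloud CDN
)
operation = client.insert(project="my-project", backend_bucket_resource=backend_bucket)
# insert() returns a long-running operation that can be polled for completion.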
This is the first time I've used Google Cloud, so I might be asking this question in the wrong place.
An information provider uploads a new file to Google Cloud Storage every day.
The file contains information for all of my clients/departments.
I have to sort through the information and create new file(s) containing the relevant information for each department in my company, so that each department only receives the information that is relevant to it (for security).
I can't figure out what steps I need to follow to complete this task.
Can you help me?
You want a process that starts automatically and generates the new files whenever something is uploaded to Google Cloud Storage.
The easiest way to handle this is using Object Change Notifications. You can set up Object Change Notifications per bucket, and this will send a POST request to a URL that you define.
You can then set up a server (or run it on App Engine) that executes an action based on the POST request it receives.
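As a rough illustration, the receiving endpoint can be quite small. The sketch below assumes a Flask server and a hypothetical split_for_departments() helper for the per-department processing; Object Change Notification delivers the changed object's metadata as JSON in the POST body and the event type in the X-Goog-Resource-State header:

# Hedged sketch of a server receiving Object Change Notification POSTs,
# assuming Flask; split_for_departments() is a hypothetical placeholder.
from flask import Flask, request

app = Flask(__name__)

@app.route("/gcs-notifications", methods=["POST"])
def handle_notification():
    state = request.headers.get("X-Goog-Resource-State")
    if state == "exists":             # a new or overwritten object
        obj = request.get_json()      # object metadata: bucket, name, ...
        split_for_departments(obj["bucket"], obj["name"])
    return "", 200                    # also acknowledges the initial sync ping

def split_for_departments(bucket, name):
    """Hypothetical: read the daily file and write one file per department."""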
There is an even simpler option (although still in alpha) named Cloud Functions. Cloud Functions is a serverless offering that provides event-based microservices (e.g. 'do this' whenever a new file is uploaded to GCS). This means you only have to write the code that defines what should happen when a new file is uploaded, and Cloud Functions takes care of executing it whenever you upload a file to GCS. See this tutorial on using Cloud Functions with Google Cloud Storage.
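For completeness, with Cloud Functions the only code you write is the event handler itself. A hedged Python sketch (the alpha product was Node.js-only at the time, so treat this purely as an illustration) of a background function deployed with a google.storage.object.finalize trigger on the bucket:

# Hedged sketch of a background Cloud Function that runs when an object is
# finalized (uploaded) in the watched bucket.
def on_file_uploaded(event, context):
    bucket = event["bucket"]
    name = event["name"]
    print("Processing gs://%s/%s" % (bucket, name))
    # ...split the file and write one output object per department here
    # (application-specific logic, not shown)...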
I am trying to understand the general architecture and components needed to link metadata with blob objects stored in the cloud, such as Azure Blob Storage or AWS.
Consider an application that allows users to upload blob files to the cloud. With each file there would be a myriad of metadata describing the file, its cloud URL, and perhaps the emails of users the file is shared with.
In this case, the file gets saved to the cloud and the metadata goes into some kind of database somewhere else. How would you go about doing this transactionally, so that it is guaranteed that both the file and the metadata were saved? If either one fails, the application needs to notify the user so that another attempt can be made.
There's no built-in mechanism to span transactions across two disparate systems, such as Neo4j/MongoDB and Azure/AWS blob storage as you mentioned. This is up to your app to manage, and how you go about it is really a matter of opinion/discussion.
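One common way to manage it (sketched below under the assumption that the blob is uploaded first and the metadata write is treated as the commit point) is to compensate by deleting the blob when the metadata write fails and surfacing the error to the user. The sketch assumes the azure-storage-blob Python client; save_metadata() and notify_user() are hypothetical placeholders for your database layer and UI:

# Hedged sketch of an upload-then-record pattern with manual compensation,
# assuming the azure-storage-blob client library.
from azure.storage.blob import BlobServiceClient

def save_metadata(blob_url, metadata):
    """Hypothetical: write the metadata document to your database."""

def notify_user(message):
    """Hypothetical: surface the failure to the uploading user."""

def upload_with_metadata(conn_str, container, name, data, metadata):
    service = BlobServiceClient.from_connection_string(conn_str)
    blob = service.get_blob_client(container=container, blob=name)
    blob.upload_blob(data)                    # step 1: store the file
    try:
        save_metadata(blob.url, metadata)     # step 2: record the metadata
    except Exception:
        blob.delete_blob()                    # compensate: undo step 1
        notify_user("Upload failed, please try again")
        raise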
I am trying to use a Google Cloud Storage bucket to serve static files from a web server on GCE. I see in the docs that I have to copy files manually, but I am looking for a way to copy files dynamically, on demand, just like other CDN services do. Is that possible?
If you're asking whether Google Cloud Storage will automatically and transparently cache frequently-accessed content from your web server, then the answer is no, you will have to copy files to your bucket explicitly yourself.
However, if you're asking if it's possible to copy files dynamically (i.e., programmatically) to your GCS bucket, rather than manually (e.g., via gsutil or the web UI), then yes, that's possible.
I imagine you would use something like the following process:
# pseudocode, not actual code in any language
HandleRequest(request) {
    gcs_uri = computeGcsUrlForRequest(request)
    if exists(gcs_uri) {
        data = read(gcs_uri)
        return data to user
    } else {
        new_data = computeDynamicData(request)
        # important! serve data to user first, to ensure low latency
        return new_data to user
        storeToGcs(new_data)  # asynchronously, don't block the request
    }
}
If this matches what you're planning to do, then there are several ways to accomplish this, e.g.,
language-specific libraries (recommended; see the sketch after this list)
JSON API
XML API
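As a concrete (though hedged) version of the pseudocode above using the Python client library; the bucket name, compute_dynamic_data() and the request-to-object mapping are placeholders for your own application logic:

# Hedged Python sketch of the pseudocode above, using the google-cloud-storage
# client library; the bucket name and compute_dynamic_data() are placeholders.
from concurrent.futures import ThreadPoolExecutor
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-static-bucket")       # placeholder bucket name
executor = ThreadPoolExecutor(max_workers=4)

def compute_dynamic_data(path):
    """Hypothetical: generate the response body for this path."""
    return b"generated content for " + path.encode()

def handle_request(path):
    blob = bucket.blob(path.lstrip("/"))
    if blob.exists():
        return blob.download_as_bytes()          # serve the stored copy
    new_data = compute_dynamic_data(path)
    # Store asynchronously so the response is not blocked on the upload.
    executor.submit(blob.upload_from_string, new_data)
    return new_data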
Note that to avoid filling up your Google Cloud Storage bucket indefinitely, you should configure a lifecycle management policy to automatically remove files after some time or set up some other process to regularly clean up your bucket.
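For example, with the Python client library an age-based delete rule could look like this (30 days is just an example value):

# Hedged sketch: add a lifecycle rule that deletes objects older than 30 days,
# using the google-cloud-storage client library.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-static-bucket")   # placeholder bucket name
bucket.add_lifecycle_delete_rule(age=30)         # delete objects older than 30 days
bucket.patch()                                   # persist the updated policy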
I want to add user metadata that is calculated from the stream as it is uploaded.
I am using the Google Cloud Storage Client from inside a Servlet during a file upload.
The only solutions I have come up with and tried are not really satisfactory, for a couple of reasons:
1. Buffer the stream in memory, calculate the metadata as the stream is buffered, then write the stream out to Cloud Storage after it has been completely read.
2. Write the stream to a temporary bucket and calculate the metadata. Then read the object from the temporary bucket and write it to its final location with the calculated metadata.
3. Pre-calculate the metadata on the client and send it with the upload.
Why these aren't acceptable:
1. Doesn't work for large objects, which some of these will be.
2. Will cost a fortune if lots of objects are uploaded, which there will be.
3. Can't trust the clients, and some of the clients can't calculate some of what I need.
Is there any way to update the metadata of a Google Cloud Storage object after the fact?
You are likely using the Google Cloud Storage Java Client for App Engine library. This library is great for App Engine users, but it offers only a subset of the features of Google Cloud Storage. To my knowledge, it does not support updating the metadata of existing objects. However, Google Cloud Storage itself definitely supports this.
You can use the Google API Java client library, which exposes Google Cloud Storage's JSON API. With this library, you'll be able to use the storage.objects.update method or the storage.objects.patch method, both of which can update metadata (the difference is that update overwrites all of the object's writable metadata, while patch changes only the fields you specify). The code would look something like this:
// Assumes an authorized com.google.api.services.storage.Storage client named "storage",
// plus com.google.api.services.storage.model.StorageObject and
// com.google.common.collect.ImmutableMap on the classpath.
StorageObject objectMetadata = new StorageObject()
    .setName("OBJECT_NAME")
    .setMetadata(ImmutableMap.of("key1", "value1", "key2", "value2"));
Storage.Objects.Patch patchObject =
    storage.objects().patch("mybucket", "OBJECT_NAME", objectMetadata);
patchObject.execute();