I want to link multiple domains to one bucket with gcs - google-cloud-storage

I want to link multiple domains to one bucket with GCS.
However, according to the official documentation, the bucket name has to be the domain itself, so it seems you cannot associate multiple domains with a single bucket.
Does anyone know a way to do this?

GCS does not support this directly. To accomplish it, you'd likely need to put Google Cloud Load Balancing in front of your GCS bucket, with the bucket acting as a backing store. With it, you can obtain a dedicated, static IP address and point several domains at it; it also lets you serve static and dynamic content under the same domain, and swap out which bucket is served at a given path. The main downside is added complexity and cost.
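A minimal sketch of that setup, assuming the google-cloud-compute Python client and placeholder project, bucket, and resource names (the remaining steps are summarized in the final comment):

from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

# 1. Expose the existing GCS bucket as a load-balancer "backend bucket".
backend = compute_v1.BackendBucket(
    name="static-content-backend",  # hypothetical resource name
    bucket_name="my-gcs-bucket",    # the existing GCS bucket
    enable_cdn=True,                # optional: also enable Cloud CDN
)
compute_v1.BackendBucketsClient().insert(
    project=project, backend_bucket_resource=backend
)

# 2. Route all paths to that backend bucket by default.
url_map = compute_v1.UrlMap(
    name="static-content-map",
    default_service=f"projects/{project}/global/backendBuckets/static-content-backend",
)
compute_v1.UrlMapsClient().insert(project=project, url_map_resource=url_map)

# 3. The remaining pieces (target HTTP(S) proxy, reserved global static IP,
#    global forwarding rule) follow the same pattern; each domain then gets a
#    DNS A record pointing at that one static IP.

This is a sketch of the moving parts rather than a complete deployment; the same resources can also be created in the Cloud Console or with gcloud.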

Related

how to plan google cloud storage bucket creation when working with users

I'm trying to figure out if anyone can offer advice around bucket creation for an app that will have users with an album of photos. I was initially thinking of creating a single bucket and then prefixing the filename by user id, since google cloud storage doesn't recognize subdirectories, like so: /bucket-name/user-id1/file.png
Alternatively, I was considering creating a bucket and naming it by user id like so: /user-id1-which-is-also-bucket-name/file.png
I was wondering what I should consider in terms of cost and organization when setting up my google cloud storage. Thank you!
There is no difference in terms of cost. In terms of organization, it's different:
For deletion, it's simpler to delete a whole bucket than a folder inside one shared bucket.
For performance, sharding works better with separate buckets (you are less likely to create a hotspot).
From a billing perspective, you can add labels to the buckets and get them in the billing export to BigQuery. That way you know the cost of each user's bucket and can, if needed, bill it back to them.
The biggest advantage of the one-bucket-per-user model is security. You can grant a user access on a bucket (if users access the bucket directly rather than through a backend service) without using the legacy (and almost deprecated) object ACLs. In addition, ACLs cannot be set per folder, only per object, so with a single shared bucket you would have to set the ACL on every object you add, which is harder to get right.
IMO, one bucket per user is the best model; a short sketch of the per-user setup follows.
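A minimal sketch, assuming the google-cloud-storage Python client and placeholder names, of creating a per-user bucket with a billing label and granting that user read access via bucket-level IAM instead of object ACLs:

from google.cloud import storage

client = storage.Client()

def create_user_bucket(user_id: str, user_email: str) -> storage.Bucket:
    # One bucket per user; the label shows up in the billing export to BigQuery.
    bucket = client.bucket(f"myapp-user-{user_id}")  # hypothetical naming scheme
    bucket.labels = {"user-id": user_id}
    bucket = client.create_bucket(bucket, location="us-central1")  # assumed region

    # Grant the user read access on the whole bucket with IAM,
    # rather than setting an ACL on every object.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectViewer",
        "members": {f"user:{user_email}"},
    })
    bucket.set_iam_policy(policy)
    return bucket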

While scaling up, how to make user-uploaded files available across multiple servers?

I have a website on which users upload various files and later access them.
The files are currently stored under a specific path on the server. Now, if I need multiple servers for the website, what is the best way to make the user-uploaded files accessible across all of them? Amazon S3 is one option that has crossed my mind. What other options do I have?
First, you can try using a CDN (http://en.wikipedia.org/wiki/Content_delivery_network).
Alternatively, you can build it in house by setting up servers dedicated to static content. You will probably need a lookup service that knows, for each file, which server it can be found on; it would also contain the logic for choosing the best server to save a new file to. This is more complicated, as you will have to handle load balancing yourself and take the users' geographic location into account. A rough sketch of such a lookup appears below.
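One simple way to implement that lookup without a central database is a deterministic hash of the file path, so every front-end server agrees on where a file lives; this is a hypothetical sketch with placeholder server names (the answer above does not prescribe a specific technique):

import hashlib

# Hypothetical pool of dedicated static-content servers.
STORAGE_SERVERS = ["files-1.example.com", "files-2.example.com", "files-3.example.com"]

def server_for(file_path: str) -> str:
    # Hash the path and map it deterministically onto one of the servers.
    digest = hashlib.sha1(file_path.encode("utf-8")).hexdigest()
    return STORAGE_SERVERS[int(digest, 16) % len(STORAGE_SERVERS)]

# Example: server_for("user-42/photo.png") always returns the same host,
# so both the upload and later reads go to the same place.

Note that with plain modulo hashing, adding or removing a server remaps most files; a consistent-hashing ring or an explicit lookup table avoids that.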

Using Google storage bucket as on-demand pull CDN

I am trying to use the Google Cloud Storage bucket to serve static files from a web server on GCE. I see in the docs that I have to copy files manually but I am searching for a way to dynamically copy files on demand just like other CDN services. Is that possible?
If you're asking whether Google Cloud Storage will automatically and transparently cache frequently-accessed content from your web server, then the answer is no, you will have to copy files to your bucket explicitly yourself.
However, if you're asking if it's possible to copy files dynamically (i.e., programmatically) to your GCS bucket, rather than manually (e.g., via gsutil or the web UI), then yes, that's possible.
I imagine you would use something like the following process:
# A sketch of that process in Python with the google-cloud-storage client;
# the helper functions and bucket name are hypothetical.
from google.cloud import storage

bucket = storage.Client().bucket("my-cache-bucket")

def handle_request(request):
    blob = bucket.blob(compute_gcs_path_for_request(request))  # hypothetical helper
    if blob.exists():
        # Already cached in GCS: serve the stored copy.
        return blob.download_as_bytes()
    new_data = compute_dynamic_data(request)  # hypothetical helper
    # important! serve data to the user first, to ensure low latency:
    # schedule the GCS upload asynchronously so it doesn't block the request
    run_async(lambda: blob.upload_from_string(new_data))  # hypothetical async helper
    return new_data
If this matches what you're planning to do, then there are several ways to accomplish this, e.g.,
language-specific libraries (recommended)
JSON API
XML API
Note that to avoid filling up your Google Cloud Storage bucket indefinitely, you should configure a lifecycle management policy to automatically remove files after some time or set up some other process to regularly clean up your bucket.
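For example, a minimal sketch with the google-cloud-storage Python client that deletes cached objects 30 days after creation (the bucket name and the 30-day age are assumptions):

from google.cloud import storage

bucket = storage.Client().get_bucket("my-cache-bucket")
bucket.add_lifecycle_delete_rule(age=30)  # delete objects more than 30 days old
bucket.patch()                            # push the updated lifecycle configuration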

Google Cloud Storage 'static' bucketname not available

I am currently structuring a web application to serve out segments of our database as HTML iframes. I need to host my Django app's static files (such as Bootstrap) in a static file store on Google Cloud Storage in order to render the HTML elements correctly. However, when I try to create a bucket called 'static', GCS replies with the following error:
Sorry, that name is not available. Please try a different one.
Not only that, it does not allow me to access or modify that URI, displaying a "Forbidden" message when I attempt to.
Does anyone know how to change this default setting by Google? There is no documentation regarding this.
It seems that a bucket with that name has already been created by someone else. You have to choose a globally unique name.
Bucket names reside in a single Google Cloud Storage namespace. As a consequence, every bucket name must be unique across the entire Google Cloud Storage namespace. If you try to create a bucket with a bucket name that is already taken, Google Cloud Storage responds with an error message.
Use another name or use the default bucket. If your app was created after the App Engine 1.9.0 release, it should have a default GCS bucket named [your-app-id].appspot.com available. You can put your static files in that bucket and mimic a directory structure as follows.
[your-app-id].appspot.com/static/my-file-1
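Since GCS has no real directories, the "static/" part is just a prefix on the object name; a minimal sketch of uploading a file there with the google-cloud-storage Python client (file paths are hypothetical):

from google.cloud import storage

bucket = storage.Client().bucket("your-app-id.appspot.com")  # the default bucket
blob = bucket.blob("static/css/bootstrap.min.css")  # "static/" is only a name prefix
blob.upload_from_filename("staticfiles/css/bootstrap.min.css")  # locally collected file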

Amazon S3 + CloudFront Queries

I am currently making a social-sharing app and I have run into a problem.
First off, S3 in my experience is slow, so I need to sync the data across multiple servers around the world to make it faster for users.
So my question is, do I need to create a bucket for each country? Amazon has a list of their server locations. For each user, do I calculate the nearest server and then upload there? How?
Next question: in my app, people can subscribe to others and check for their updates. Realistically, this would not create a speed difference. If someone in Singapore uploaded a piece of text and has a subscriber in the United States, it wouldn't be any quicker for that subscriber, because they have to download a piece of text stored all the way in Singapore.
All of this is making me confused! I personally find S3 very slow, which is why I am using CloudFront.
Any help? Am I misunderstanding the process? Thanks!
Buckets are not per country; they are per region (EU, US, Asia, etc.).
Secondly, you do not have to work out the closest URL to your S3 buckets yourself; that's what CloudFront is for. You just get a single URL for each bucket, and CloudFront routes the user's request to the closest edge location.
PS: In addition, Amazon replicates data uploaded to your bucket across all edge locations transparently.
Amazon in no way "automatically" replicates your content out to the edge locations. Instead, your content is copied to a single edge location if (and only if) it is not already there (it could be the first pull, or the cached copy could have expired) when a user tries to access it from that edge. It is a pull mechanism, not a push. See the "Download Distributions for HTTP Delivery" section of http://aws.amazon.com/cloudfront/
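Because caching happens only after that first pull, one practical lever is the Cache-Control header set on the S3 object, which tells CloudFront how long to keep the edge copy; a minimal sketch with boto3 (bucket and key names are hypothetical):

import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-social-app-bucket",  # hypothetical bucket name
    Key="posts/123.txt",
    Body=b"a piece of text",
    CacheControl="max-age=86400",   # CloudFront keeps the edge copy up to a day
)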