I created a bucket for my root domain in the US Standard region, but I placed the www subdomain bucket that redirects to it in the Oregon region.
The redirect from the address bar is failing (I set it up via the bucket's Properties > Redirect). AWS doesn't seem to allow this mixing of regions, so I deleted the www bucket and tried to recreate it, this time in the US Standard region, but it now gives the error, "A conflicting conditional operation is currently in progress against this resource. Please try again."
In short, is there a way to change a bucket's region, since AWS apparently does not allow multiple buckets with the same name (even in separate regions)? I am planning to point the domain name I registered at the buckets using Route 53 anyway, so does this issue even matter? (I won't use 'http://example.com.s3-website-us-east-1.amazonaws.com' or 'http://www.example.com.s3-website-us-east-1.amazonaws.com', because I will hopefully be using 'example.com' or 'www.example.com'.)
Thank you all for the help; I hope this post is specific enough. Cheers from a first post.
“AWS doesn't seem to allow this swapping between regions”
That's not correct. A bucket configured for redirection does not care where it's redirecting to -- it can redirect to any web site, and the destination doesn't have to be another bucket...so this is a misdiagnosis of the problem you were/are experiencing.
“AWS is apparently not allowing multiple buckets with the same name (even in separate regions)?”
Well... no:
“The bucket namespace is global - just like domain names”
— http://aws.amazon.com/articles/1109#02
Only one bucket with a given name can exist in S3 at any point in time. Because S3 is a massive, distributed, global system, it can take time (though typically only a few minutes) before you can create a bucket with that name again. That's your conflict -- the deletion hasn't globally propagated yet.
“After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, some other account could create a bucket with that name. Note, too, that it might take some time before the name can be reused.”
— http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
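If you are scripting the recreation, a minimal retry sketch (assuming the AWS SDK for Python, boto3, with a placeholder bucket name; 'OperationAborted' is the error code behind that "conflicting conditional operation" message):

import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-east-1")

# Retry until the deleted name has finished propagating out of the namespace.
for attempt in range(12):
    try:
        s3.create_bucket(Bucket="www.example.com")  # placeholder name
        break
    except ClientError as err:
        if err.response["Error"]["Code"] != "OperationAborted":
            raise  # a different problem; don't mask it
        time.sleep(30)  # deletion not yet propagated -- wait and retry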
Once you get it created, focus on fixing the redirect. If you haven't yet configured the DNS in Route 53, that would be the reason the redirect didn't work -- it can't redirect to something that isn't resolving. S3 accomplishes this magic by sending the browser an HTTP redirect -- which is why you can redirect anywhere -- it doesn't resolve the destination bucket internally.
You should be able to redirect using "Redirect all requests to another host name" as long as you have static website hosting enabled on the bucket you are redirecting to.
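That console option corresponds to the RedirectAllRequestsTo website configuration; a minimal sketch with boto3 (bucket and host names are placeholders):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Send every request on the www bucket to the apex domain.
s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "example.com",
            # "Protocol" is optional; omit it to keep the request's protocol.
        }
    },
)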
No, there's no way to change the region other than deleting the bucket in one region and recreating it in another. Bucket names are unique across all of S3.
You can use Route 53 to point a hostname at the bucket by adding a CNAME record; that way www.example.com maps to something like http://www.example.com.s3-website-us-east-1.amazonaws.com (the naked domain needs an alias A-record instead, since a CNAME can't live at the zone apex).
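A sketch of that record with boto3 (the hosted-zone ID and names are placeholders):

import boto3

route53 = boto3.client("route53")

# Map www.example.com to the bucket's website endpoint via CNAME.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "www.example.com.s3-website-us-east-1.amazonaws.com"}
                ],
            },
        }]
    },
)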
Hope this helps.
Related
I created a basic website for a school project and I am trying to host it somewhere so that my teacher can access it. I came across a tutorial that shows how to host a static website on Google Cloud Storage for free: https://web.archive.org/web/20211023134543/https://cloud.google.com/storage/docs/hosting-static-website-http
Steps from the tutorial that I followed (as requested by the comments; a scripted sketch of the bucket-side steps follows the list):

1. In the Google Cloud Console, on the project selector page, select a Google Cloud project.
2. Make sure that billing is enabled for your Cloud project.
3. Have a domain that you own or manage.
4. Verify that you own or manage the domain that you will be using.
5. Make sure you are verifying the top-level domain.
6. Create a CNAME record (school.mydomain.com) that points to c.storage.googleapis.com.
7. In the Google Cloud Console, go to the Cloud Storage Browser page.
8. Click Create bucket to open the bucket creation form.
9. Enter your bucket information and click Continue to complete each step:
   - The Name of your bucket, which matches the hostname associated with your CNAME record.
   - Select the Location type and Location of your bucket. For example, Region and us-east1.
   - Select Standard Storage for the Storage class.
   - Select Uniform for Access control.
10. Click Create.
11. In the list of buckets, click on the name of the bucket that you created.
12. Click the Upload files button in the Objects tab.
13. In the file dialog, upload the html, js, xml, and css files that make up the website.
14. Select the Permissions tab near the top of the page.
15. Click the + Add button. The Add principals dialog box appears.
16. In the New principals field, enter allUsers.
17. In the Select a role drop-down, select the Cloud Storage sub-menu and click the Storage Object Viewer option.
18. Click Save.
19. Click Allow public access.
20. In the Google Cloud Console, go to the Cloud Storage Browser page.
21. In the list of buckets, find the bucket you created.
22. Click the Bucket overflow menu (...) associated with the bucket and select Edit website configuration.
23. In the website configuration dialog, specify the main page.
24. Click Save.
25. Verify that content is served from the bucket by requesting the domain name in a browser. You can do this with a path to an object or with just the domain name, if you set the MainPageSuffix property. For example, if you have an object named test.html stored in a bucket named www.example.com, check that it's accessible by going to www.example.com/test.html in your browser.
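For reference, steps 8-19 and 22-24 roughly correspond to this sketch using the google-cloud-storage Python client (the bucket and file names are the tutorial's placeholders; the authenticated account must have verified the domain):

from google.cloud import storage

client = storage.Client()

# Steps 8-10: the bucket name must exactly match the hostname in the CNAME record.
bucket = client.create_bucket("school.mydomain.com", location="us-east1")

# Steps 12-13: upload the site files (index.html stands in for the real files).
bucket.blob("index.html").upload_from_filename("index.html")

# Steps 14-19: grant allUsers the Storage Object Viewer role (uniform access control).
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {"allUsers"}}
)
bucket.set_iam_policy(policy)

# Steps 22-24: point the website configuration at the main page.
bucket.configure_website(main_page_suffix="index.html")
bucket.patch()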
Everything completed without error messages until step 25.
I followed this tutorial and everything is accessible using direct links, but it doesn't work with domain forwarding. I set up a CNAME entry for school.mydomain.com and forwarded it to c.storage.googleapis.com., and I named the bucket school.mydomain.com as the tutorial states, but I keep getting an error and I am not sure why. I verified to Google that I own the domain and the subdomain I am trying to use.
<Error>
<Code>InvalidBucketName</Code>
<Message>The specified bucket is not valid.</Message>
<Details>Bucket names must be at least 3 characters in length, got 1: 'c'</Details>
</Error>
After getting the error, I tried recreating the CNAME record with path forwarding both on and off, as well as with both temporary and permanent redirects. The domain is managed by Google Domains on a separate Google account. I tried using their domain forwarding wizard as well as just creating a standard CNAME rule.
Going to school.mydomain.com.storage.googleapis.com works, but if I forward to that address it doesn't follow the forwarding rules set for the bucket; namely, the main page is at /5500/index.html.
I am totally new to this and I have no idea what to try next. I was able to turn in my assignment by providing a direct link to the file in the bucket, but I would like to get domain forwarding to work for any future projects.
I set up a Google Cloud Storage bucket with a domain I own, after going through the ownership verification step.
I was thinking I would be able to use the named bucket to provide links under my own domain. I can't really explain this clearly, so bear with me please. I'll give an example.
I uploaded a file with name test. After making it publicly available, I can get a URL so anybody can access the file. The URL looks like this:
https://storage.googleapis.com/media.example.com/test
What I thought I'd be able to do is have it generate a link that looks like this:
https://media.example.com/test
Is this possible at all? I realize I can set something up so requests to https://media.example.com/test redirect to https://storage.googleapis.com/media.example.com/test, but that's kind of messy, among other things.
Any advice would be appreciated.
You can do this by creating a CNAME record in the DNS configuration for your domain.
Simply add a CNAME record for media.example.com with c.storage.googleapis.com as the canonical name, and the mapping happens automagically.
There's an entire page dedicated to doing exactly this in the GCP docs here: https://cloud.google.com/storage/docs/hosting-static-website
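A quick way to confirm the record is in place (a sketch assuming the third-party dnspython package):

import dns.resolver

# Look up the CNAME for the media hostname.
for record in dns.resolver.resolve("media.example.com", "CNAME"):
    print(record.target)  # expect: c.storage.googleapis.com.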
I am trying to host a static website on S3 and CloudFront, like many others before me. I've got it mostly set up, and have created Let's Encrypt certificates for both my naked domain (call it example.com) and www.example.com.
I have two S3 buckets, called www.example.com and example.com, two corresponding CloudFront distributions that simply point to their respective S3 buckets, and Route 53 set up with two alias A-records.
The example.com S3 bucket contains my website, and the www.example.com bucket is set to redirect to https://example.com.
This is working fine. However, the annoying thing is that when I need to renew the Let's Encrypt certificates for both domains, I have to turn off the HTTP-to-HTTPS redirect in CloudFront and then disable the redirect on the www.example.com S3 bucket so that the .well-known challenge can be reached by the Let's Encrypt CA. This is annoying, as it means users hitting the www site won't get redirected to the naked domain during certificate renewal.
I was thinking of defining an S3 redirection rule on both buckets that would always redirect to https://example.com but exclude the .well-known folder for the Let's Encrypt CA. That way I could let CloudFront serve both HTTP and HTTPS, handle the redirect in S3, and Let's Encrypt renewal would be fully transparent. But this doesn't seem possible to express with the S3 routing grammar: https://docs.aws.amazon.com/AmazonS3/latest/dev/HowDoIWebsiteConfiguration.html
There doesn't seem to be a way to express an "if key prefix doesn't equal" condition, or any way to express "if condition does not match, do nothing" so any attempt at excluding a folder would seem to necessarily lead to a redirect loop.
Can someone tell me what I'm missing or if this is in fact not possible with S3? Seems too basic a feature to be missing!
Well, the feature is indeed missing, but what you want should be possible all the same, because for objects that aren't publicly accessible or are not present at all, the website endpoint returns a 403 Forbidden response... and you can override that behavior with a routing rule, redirecting instead of returning the error.
<RoutingRules>
<RoutingRule>
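<!-- fires on any request S3 would answer with 403 (object missing or not public) -->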
<Condition>
<HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
<Protocol>https</Protocol>
<HostName>example.com</HostName>
</Redirect>
</RoutingRule>
</RoutingRules>
Using this, any object that isn't present and publicly readable results in a redirect, while objects that are present and publicly readable are served normally... which is the behavior you're looking for. Be sure you don't have your bucket set to allow "Everyone" the "List" privilege -- with list access, a missing object returns 404 instead of 403, and the rule won't match.
You can also bypass the forced redirection for this specific path by creating a second cache behavior in the CloudFront distribution.
Set the Path Pattern in this new behavior to /.well-known/acme-challenge* (or whatever the appropriate pattern is) and set the Viewer Protocol Policy to "HTTP and HTTPS". Requests matching that path pattern will then be forwarded to S3 without the forced protocol redirect.
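As a sketch, the extra cache behavior might look like this in the DistributionConfig that boto3's update_distribution expects (the origin ID is a placeholder, and the remaining distribution fields are omitted):

# Additional cache behavior for the ACME challenge path only.
acme_behavior = {
    "PathPattern": "/.well-known/acme-challenge/*",
    "TargetOriginId": "S3-www.example.com",   # placeholder origin ID
    "ViewerProtocolPolicy": "allow-all",      # i.e. "HTTP and HTTPS" in the console
    "MinTTL": 0,
    "ForwardedValues": {
        "QueryString": False,
        "Cookies": {"Forward": "none"},
    },
    "TrustedSigners": {"Enabled": False, "Quantity": 0},
}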
Why can't I reuse names from endpoints that have been previously deleted? For example, if I create an endpoint named "acme-cdn1", delete it, and try to create a new endpoint with the same name, I get the following message: "Error, that endpoint name already exists." Is it necessary to delete the entire CDN profile in order to reuse old endpoint names?
No -- deleting the profile won't help; the endpoint name simply cannot be reused right away.
A CDN endpoint name stays reserved for some time after deletion. This prevents someone else from creating a CDN endpoint right after you delete yours and receiving your traffic, since CDN setup takes three hours or more to propagate.
For example, say I created a CDN endpoint called myendpoint.azureedge.net and was using it to serve my pictures, and then deleted it. If you immediately created an endpoint called myendpoint.azureedge.net, visitors to that URL could still see my pictures even though you had already configured a different origin.
That propagation takes at least a couple of hours; in the meantime your CDN endpoint would not be usable, yet you would be billed for traffic that isn't yours -- which is not acceptable.
A month or so ago I put up a static website using Google Cloud Storage. Before I could create a public bucket, I was asked to verify that I actually owned the domain after which I was naming the bucket; I had to upload a file from Google to the existing host in order for Google to verify domain ownership.
I do understand the need to do this. However, if I had just bought a domain and had no other host, I don't see how I would have been able to prove that I owned the domain.
Did I miss a way around this limitation? Is there another, more user friendly way of creating public sites on google cloud storage?
There are three ways to verify domain ownership:
Adding a special Meta tag to a site's homepage.
Uploading a special HTML file to a site.
Adding a DNS TXT record to a domain's DNS configuration.
The first two require the domain to be hosted somewhere, but the third method is purely DNS configuration, so it can be accomplished without hosting the domain. You can read more details about these methods in Google's site-verification documentation.
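If you go the DNS route, a quick check that the TXT record has propagated (a sketch assuming the third-party dnspython package and a placeholder domain):

import dns.resolver

# List the domain's TXT records and look for the verification token.
for record in dns.resolver.resolve("example.com", "TXT"):
    print(record.strings)  # expect a google-site-verification=... entry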
Alternatively, add a CNAME record with the information Google gives you; that also solves the problem of verifying your domain ownership.