S3 redirection rules to exclude one folder

I am trying to host a static website on S3 and Cloudfront like many others before me. I've got it mostly all set up, and have created LetsEncrypt certificates for both my naked domain (call it example.com) and www.example.com.
I have two S3 buckets, called www.example.com and example.com, and two corresponding Cloudfront distributions that simply point to their respective S3 buckets, and Route 53 is set up with two ALIAS A-records.
The example.com S3 bucket contains my website, and the www.example.com bucket is set to redirect to https://example.com.
This is working fine. The annoying part is renewing the LetsEncrypt certificates for both domains: I have to turn off the HTTP-to-HTTPS redirect in Cloudfront and then disable the redirect on the www.example.com S3 bucket so that the LetsEncrypt CA can reach the .well-known challenge. That means users hitting the www site aren't redirected to the naked domain while a renewal is in progress.
I was thinking of defining an S3 redirection rule on both buckets that would always redirect to https://example.com, but exclude the .well-known folder for the LE CA. This way I could let Cloudfront serve both HTTP and HTTPS, handle the redirect in S3, and LetsEncrypt renewal would be fully transparent. But this doesn't seem possible to express with the S3 routing grammar: https://docs.aws.amazon.com/AmazonS3/latest/dev/HowDoIWebsiteConfiguration.html
There doesn't seem to be a way to express an "if key prefix doesn't equal" condition, or any way to say "if the condition does not match, do nothing", so any attempt at excluding a folder seems bound to lead to a redirect loop.
Can someone tell me what I'm missing or if this is in fact not possible with S3? Seems too basic a feature to be missing!

Well, the feature is indeed missing, but what you want should be possible all the same: for objects that aren't publicly accessible, or aren't present at all, the web site endpoint wants to return a 403 Forbidden response... and you can override that behavior with a routing rule, redirecting instead of returning the error.
<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <Protocol>https</Protocol>
      <HostName>example.com</HostName>
    </Redirect>
  </RoutingRule>
</RoutingRules>
Using this, any object that isn't present and readable results in a redirect, while objects that are present and publicly readable are served normally... which is the behavior you're looking for. Be sure you don't have your bucket set to allow "Everyone" the "List" privilege (with public list access, a missing object returns a 404 instead of a 403, so the rule above wouldn't fire).
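If you script your bucket setup, the same rule can be applied with boto3 (a minimal sketch; the bucket and host names are the placeholders from this question):

import boto3

s3 = boto3.client("s3")

# Apply the routing rule above to the www bucket: serve readable objects
# (such as the .well-known challenge files), redirect any 403 to the naked domain.
s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "RoutingRules": [
            {
                "Condition": {"HttpErrorCodeReturnedEquals": "403"},
                "Redirect": {"Protocol": "https", "HostName": "example.com"},
            }
        ],
    },
)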
You can also bypass the forced redirection for this specific path by creating a second cache behavior in the CloudFront distribution.
Set the Path Pattern in this new behavior to /.well-known/acme-challenge* (or whatever the appropriate pattern is) and set the Viewer Protocol Policy to HTTP and HTTPS. Then requests will forward to S3 without a forced redirect for a protocol change, but only for requests matching that path pattern.
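In the CloudFront API (e.g. a boto3 update_distribution call) that extra behavior would look roughly like the fragment below; the origin ID is a placeholder, and several required fields of a real cache behavior are omitted:

# Hypothetical fragment of a CloudFront DistributionConfig: a second cache
# behavior that lets the ACME challenge path through over plain HTTP.
acme_behavior = {
    "PathPattern": "/.well-known/acme-challenge/*",
    "TargetOriginId": "S3-www.example.com",  # placeholder origin ID
    "ViewerProtocolPolicy": "allow-all",     # "HTTP and HTTPS" in the console
    # ...remaining required cache behavior fields go here...
}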

Related

SSL DNS hidden redirect from sub1.domain1.com to sub2.domain2.com

I need to do a setup where users would access the URL sub1.domain1.com, which would be mapped by DNS to sub2.domain2.com, so all further communication would appear to be with sub1.domain1.com while in reality it would just be "redirected" to sub2.domain2.com. HTTPS is required too, so a simple CNAME won't do it.
So far I have found out about SAN certificates. With such a certificate it seems like this would be possible. However, it has one drawback for me: with every new domain added to the certificate, all other domain owners must confirm it, and that is not very suitable for my case, because I expect new domains to be added on a regular basis.
All domains would point to one particular subdomain (for example: sub1.domain1.com -> sub2.domain2.com, sub3.domain3.com -> sub2.domain2.com, sub4.domain4.com -> sub2.domain2.com, ...), so the certificate doesn't have to cover redirection between all domains mutually; it would be enough to cover redirection from all domains to that one domain (sub2.domain2.com).
Are there more suitable alternatives to accomplish this?
If, when a user types https://sub4.domain4.com in their browser's address bar, you don't want the address in the bar to change to https://sub2.domain2.com once the page is displayed, then technically there is no HTTP redirection involved. You just have one website/webapp that is reachable via multiple hostnames (which is nothing unusual).
You need the CNAMEs to be in place.
If you can't get one SSL/TLS cert with all hostnames (or it is complicated to maintain, which is expected, especially if you do not own the domains), then you can always configure your webserver with multiple virtual hosts, each with its own certificate, and keep adding virtual hosts as needed. All virtual hosts can be configured to serve the same content (or just reverse proxy requests to the same one webapp running behind the proxy). The technical implementation depends on the platform used, but it is typically not complicated.
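What makes per-hostname certificates on a single listener work is SNI, which every mainstream web server supports. Purely as an illustration of the mechanism, here is a minimal Python sketch of SNI-based certificate selection (the hostnames and certificate paths are hypothetical):

import ssl

# Hypothetical per-hostname certificate/key pairs.
CERTS = {
    "sub1.domain1.com": ("/etc/ssl/sub1.pem", "/etc/ssl/sub1.key"),
    "sub3.domain3.com": ("/etc/ssl/sub3.pem", "/etc/ssl/sub3.key"),
}

# One SSLContext per hostname, each loaded with its own certificate.
contexts = {}
for host, (cert, key) in CERTS.items():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert, key)
    contexts[host] = ctx

def select_certificate(ssl_socket, server_name, initial_context):
    # Called during the TLS handshake with the hostname the client sent
    # via SNI; swap in the matching context if we have one.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]

default_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_context.sni_callback = select_certificate
# (a default certificate would also be loaded into default_context here)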

whitelist api endpoint based on host or domain

I'm building an endpoint which returns images. I want to allow requests to this endpoint only from the same domain, so that other people won't have access to it. I can't use CORS, because you can essentially make the call inside an image tag and bypass any CORS restrictions. Is there any way to do this?
If your goal is to prevent simple hotlinking, you can do a referrer check: check the Referer [sic!] header and make sure it contains a whitelisted domain.
Keep in mind that the Referer header is sometimes missing, e.g. because it has been removed by security software concerned about the user’s privacy.
Also, needless to say, referrer-based checks are easily circumvented by anybody who is determined to abuse your service.
Although you cannot (as far as I know) forge the referrer in a browser request (e.g. to download the image with AJAX), you could simply set up a proxy server which would download the images with a forged referrer header and deliver them to the actual client.
But, at least, it would take some energy to do so, and you could easily block such a server by IP address (unless it's a pool of IP addresses).
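A minimal sketch of such a referrer check as a Flask handler (the whitelist and the image path layout are hypothetical):

from urllib.parse import urlparse
from flask import Flask, abort, request, send_file

app = Flask(__name__)

ALLOWED_HOSTS = {"example.com", "www.example.com"}  # hypothetical whitelist

@app.route("/images/<name>")
def serve_image(name):
    referrer = request.headers.get("Referer", "")
    host = urlparse(referrer).hostname
    # Reject requests whose Referer is missing or not whitelisted.
    # (Beware: this also locks out users whose software strips the header.)
    if host not in ALLOWED_HOSTS:
        abort(403)
    return send_file(f"images/{name}")  # hypothetical storage layout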

How to make Azure Traffic Manager work when 301 redirecting custom domains?

I have a couple of web apps in azure (same codebase, in different regions) that I need to set up as end points in Traffic Manager.
One of those sites is already Live. It is configured to support multiple domains, but all requests are 301 redirected to a specific domain, for SEO reasons. The other site needs to work in the same way of course, within the Traffic Manager setup.
The issue is that Traffic Manager needs to be able to ping the *.azurewebsites.net domain and receive a 200 response to work, but with the current redirect rule in place on the endpoints, this will not work.
If I remove the redirect rule then Traffic Manager will work, but it means that requests for the sites at *.azurewebsites.net will not be redirected (and so presents an SEO concern).
The solution I'm heading towards is serving up a different robots.txt file (with a Disallow: / rule) if the request is for the azurewebsites.net domain. Is this feasible? How might I go about doing this?
Are there any other ways I could make this work?
Thanks.
I'm going to rework the current redirect rule so that it doesn't redirect for one particular path on the azurewebsites.net domain (*.azurewebsites.net/favicon.ico), which should enable Traffic Manager to ping the site, while keeping SEO OK for the rest of the URLs.
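The sites presumably do this with IIS rewrite rules, but the logic amounts to something like this Flask-style sketch (the canonical host name is a placeholder):

from flask import Flask, redirect, request

app = Flask(__name__)

CANONICAL_HOST = "www.example.com"  # placeholder for the SEO domain

@app.before_request
def canonical_redirect():
    # Let the probe path through on *.azurewebsites.net; 301 everything else.
    if request.host != CANONICAL_HOST and request.path != "/favicon.ico":
        # (query string omitted in this sketch)
        return redirect(f"https://{CANONICAL_HOST}{request.path}", code=301)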
Seven years and some months later: the answer seems to be in Traffic Manager's monitoring config, under expected status code ranges. You can add 301-302 to that list to make your host's health show Online.

S3 Bucket Region - www. subdomain issue

I created a bucket for my root domain in the US Standard region, but when I created the www. subdomain bucket that redirects to the root, I placed it in the Oregon region.
The redirect from the address bar is failing (I set it up using buckets > properties > redirect). AWS doesn't seem to allow this swapping between regions, so I deleted the bucket and tried to recreate the www. subdomain, this time in the US Standard region, but it now gives the error, "A conflicting conditional operation is currently in progress against this resource. Please try again."
In short, is there a way to change the region, as AWS is apparently not allowing multiple buckets with the same name (even in separate regions)? I am planning to redirect from the domain name I registered using Route 53 anyway, so does this issue even matter (as I won't use 'http://example.com.s3-website-us-east-1.amazonaws.com' or 'http://www.example.com.s3-website-us-east-1.amazonaws.com', because I will hopefully be using 'example.com' or 'www.example.com')?
Thank you all for the help; I hope this post is specific enough. Cheers from a first post.
“AWS doesn't seem to allow this swapping between regions”
That's not correct. A bucket configured for redirection does not care where it's redirecting to -- it can redirect to any web site, and the destination doesn't have to be another bucket...so this is a misdiagnosis of the problem you were/are experiencing.
“AWS is apparently not allowing multiple buckets with the same name (even in separate regions)?”
Well... no:
“The bucket namespace is global - just like domain names”
— http://aws.amazon.com/articles/1109#02
Only one bucket of a given name can exist within S3 at any point in time. Because S3 is a massive distributed global system, it can take time (though it should typically only take a few minutes) before you can create the bucket again. That's your conflict -- the deletion hasn't globally propagated.
“After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, some other account could create a bucket with that name. Note, too, that it might take some time before the name can be reused.”
— http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
Once you get it created, focus on fixing the redirect. If you haven't yet configured the DNS in Route 53, then that would be the reason the redirect didn't work -- it can't redirect to something that isn't working. S3 accomplishes this magic by sending a browser redirect -- which is why you can redirect anywhere -- it doesn't resolve the new bucket destination internally.
You should be able to redirect using Redirect all requests to another host name, as long as you have Static Website hosting enabled on the bucket you are redirecting to.
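That setting corresponds to the RedirectAllRequestsTo website configuration; for example, with boto3 (a sketch with placeholder bucket and host names):

import boto3

s3 = boto3.client("s3")

# "Redirect all requests to another host name" on the www bucket.
s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {"HostName": "example.com"}
    },
)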
No there's no way to change the region other than deleting the bucket in a region and recreating it in another region. Bucket names are unique across all of S3.
You can use Route 53 to create an alias for any bucket by adding a CNAME record; that way www.yoursite.com maps to something like www.example.com.s3-website-us-east-1.amazonaws.com.
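With boto3, for instance, that record could be created like this (the hosted zone ID and names are placeholders):

import boto3

route53 = boto3.client("route53")

# CNAME www.yoursite.com to the bucket's website endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.yoursite.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": "www.example.com.s3-website-us-east-1.amazonaws.com"}
                    ],
                },
            }
        ]
    },
)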
Hope this helps.

If an https domain is redirecting to an http domain, is there any point in having it?

I looked around the internet to see if there was a clear answer to this, but it looks like there isn't. So, I work for a small company, and one of the domains we have has an SSL certificate (https://hmc2agency.com); however, it redirects to the new "brand image" domain (http://www.wearehmc.com). I'm trying to figure out if we should even keep the certificate, since it'll be expiring soon.
It's not like we sell things, or need the encryption stream (a term I could be pulling out of nowhere); however, we do host a few Facebook page tabs on the site (I was told they need HTTPS domains). But they don't use the HTTPS URL for the "app."
Ehh I don't know, I just like to code, I'm no network administrator.
There is no difference, except that you have encryption on that domain. DNS doesn't know or care about certificates; the cert on https://hmc2agency.com only secures the initial connection to that host, and once the request is redirected to the other domain, the cert loses its 'power': the rest of the traffic happens unencrypted on http://www.wearehmc.com. Keep in mind that if you let the cert lapse, anyone following an old https://hmc2agency.com link will hit a certificate warning before the redirect can even happen.
In this case nothing really happens. It's a simple 302 redirect... you should change this to a 301 redirect for SEO purposes.
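For what it's worth, the 302-to-301 change is usually a one-line tweak; e.g., in a Python/Flask sketch (hostnames taken from the question):

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def brand_redirect(path):
    # 301 (permanent) instead of Flask's default 302 (found), for SEO.
    return redirect(f"http://www.wearehmc.com/{path}", code=301)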
This is a good article on how HTTPS works.