HAProxy to CloudFront

I have two components to my application: an API server (which is shared between several versions of the app), and static asset servers for the different distributions (mobile/desktop). I am using HAProxy to make the API server and the static asset servers behave as though they are on the same domain (to prevent CORS nastiness). My static asset servers are on CloudFront. Eventually, the HTML will reference the CloudFront URLs for the assets it depends on (to leverage global distribution). Temporarily, for ease, I'm just having everything go through HAProxy. I'm having a hard time, however, getting HAProxy to route requests properly to CloudFront.
My backend definition looks like this:
backend music_static
    http-request set-header Host <hash>.cloudfront.net
    option httpclose
    server cloudfront <hash>.cloudfront.net
I figured that by setting the Host header value, I would be "spoofing" things correctly on their way to CloudFront. Visiting <hash>.cloudfront.net directly behaves exactly as I expect, obviously.

You probably moved on from this issue, but I see it's not answered yet.
One solution to this issue is to enable SNI on CloudFront (this cost money, but worked for me; see http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html). The Host header above doesn't help, because the HTTP Host header is only sent after the TLS handshake completes; to select the right certificate, CloudFront needs the hostname during the handshake itself, and that is exactly what SNI provides.
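If you stay on HAProxy, newer releases (1.6 and later) can set the SNI value on the backend connection themselves via the `sni` server keyword. A minimal sketch, reusing the backend from the question (the `verify none` here skips certificate validation and is only for illustration; in production, point `ca-file` at your CA bundle and use `verify required`):

```
backend music_static
    http-request set-header Host <hash>.cloudfront.net
    # Connect to CloudFront over TLS on port 443 and send the
    # distribution hostname in the ClientHello (SNI); HAProxy 1.6+.
    server cloudfront <hash>.cloudfront.net:443 ssl sni str(<hash>.cloudfront.net) verify none
```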

Related

Cloudflare redirect to Github Pages from the non-primary domain

I have my Github Pages set up with a custom domain: mark.gg. This domain is set in the CNAME file in the repository. The Enforce HTTPS option is also on.
I use Cloudflare for DNS and for the mark.gg domain I have the four A records and one www subdomain CNAME record set to point to Github. Everything works fine if I access my site on www.mark.gg, mark.gg, http://mark.gg, https://www.mark.gg.
In the Crypto section of Cloudflare I have SSL set to Full, Always Use HTTPS set to On, Onion Routing set to On, and Opportunistic Encryption set to On.
I'm having issues getting other domains to redirect to mark.gg through Cloudflare. For example, for my markcerqueira.com domain, my current DNS setup is:
The 1.2.3.4 is a dummy IP address. The key here is that traffic routes through Cloudflare so that it can trigger a Forwarding URL Page Rule:
I used to have just one Page Rule that forwarded *markcerqueira.com/* to https://www.mark.gg and that didn't work so this image is just the most recent stab in the dark.
The Page Rule works as I see the address updated to mark.gg when I visit markcerqueira.com but I get an insecure connection error: SSL_ERROR_BAD_CERT_DOMAIN.
At this point, I'm unsure whether I'm just missing some option or whether what I'm trying to do is impossible using Cloudflare alone.
The issue was rooted in the SSL setting available in the Crypto tab. I had SSL set to Flexible under the (very incorrect) assumption that Flexible would be less error-prone than Full or Full (Strict). Flexible SSL makes Cloudflare talk to the origin over plain HTTP, which conflicts with the Enforce HTTPS option that GitHub Pages enables. Switching the setting to Full or Full (Strict) cleared up my redirect issue. For good measure, here are all the Crypto settings I have configured for my redirecting domain that currently work without issue:
SSL - Full (Strict)
Always Use HTTPS - On
Authenticated Origin Pulls - On
Minimum TLS Version - TLS 1.0
Opportunistic Encryption - On
Onion Routing - On
Automatic HTTPS Rewrites - On

S3: "Redirect all requests to another host name" over HTTPS

I created an S3 bucket and enabled "Redirect all requests to another host name" under "Static Website Hosting".
This works and when I visit http://www.XXXX.com.s3-website-us-east-1.amazonaws.com, I am redirected to my end destination.
If however, I try to access the same URL over HTTPS: https://www.XXXX.com.s3-website-us-east-1.amazonaws.com, the connection times out.
Is it possible to specify an SSL certificate to use so that the redirect can handle HTTPS traffic?
With S3 by itself, no, it isn't possible. The web site endpoints don't speak SSL at all, and the REST endpoints don't handle redirects or allow any cert other than the *.s3(-region).amazonaws.com cert.
However, you can do it with CloudFront and S3 combined, if your clients support SNI.
Create a CloudFront distribution configured with your hostname and SSL certificate, but don't use an "S3 Origin." Use a "custom origin" pointing at your S3 website endpoint hostname, with all requests forwarded to the origin over HTTP (even though it's HTTPS on the front end).
If you are not familiar with CloudFront, this probably sounds a little convoluted, but I use it for exactly this purpose (among others).
Requests hit CloudFront, which allows you to use your own SSL cert... and then CloudFront forwards the request to S3, which returns the redirect, which CloudFront will cache and return to the requester (as well as to future requesters hitting the same CloudFront edge location).
The cost for this extra layer is negligible and you should not see any meaningful change in performance (if anything, there's potential for a slight speed improvement).
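The critical piece of the setup above is the origin configuration. A sketch of the relevant fragment of a CloudFront distribution config (field names follow the CloudFront API; the domain name is the placeholder from the question, and the origin ID is arbitrary):

```
"Origins": [{
    "Id": "s3-website-redirect",
    "DomainName": "www.XXXX.com.s3-website-us-east-1.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only"
    }
}]
```

The `"http-only"` protocol policy is what makes CloudFront forward every request to the website endpoint over plain HTTP, since that endpoint does not speak SSL at all.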

How do SNI clients send server_name in CLIENT HELLO

I hope I am in the right place asking this question; it's about understanding SNI.
According to https://devcentral.f5.com/articles/ssl-profiles-part-7-server-name-indication#.U5wEnfmSzOz
"With the introduction of SNI, the client can indicate the name of the server to which he is attempting to connect as part of the "Client Hello" message in the handshake process"
My question is: how does a client like a browser or any HTTP client (say, java.net) send this server name in the CLIENT HELLO? Does the client do it by itself, or do you have to add it programmatically to the HTTPS request (e.g. how, in java.net's HttpsURLConnection)?
Reading from http://www.ietf.org/rfc/rfc4366.txt
"Currently, the only server names supported are DNS hostnames"
So is the hostname the server_name sent by an SNI-compliant client, or can any other name be sent by the client?
I hope I am clear; please improve the question/wording if it's unclear, or let me know.
thanks
If you are using an https library to which you can give a URL, and the library will fetch the contents of that URL for you, then the clean way to add SNI support is to perform it entirely within the library.
It is the library that parses the URL to find the hostname; the caller never needs to know which part of the URL is the hostname, so the caller couldn't tell the library which hostname to send in the SNI extension. If the caller had to somehow figure out the hostname in order to tell the library, that would be a poorly designed library.
You might look a level deeper in the software stack and find that an https library is built on top of an SSL library. In that case, even the https library does not need to know about SNI: it simply tells the SSL library that it wants a connection to a particular hostname. The SSL library resolves the hostname to get the IP address to connect to, and it also performs the SSL handshake, during which the client may send a hostname as part of SNI and the server sends a hostname as part of the certificate for the client to verify.
During connection setup, the SSL client library thus needs the hostname for three different purposes: DNS resolution, SNI, and certificate verification. It would be trivial to support three different hostnames for those three purposes, and since the https library already knows the hostname, passing it to the SSL library three times rather than once would be no significant extra work. But it would hardly make sense to support this anyway.
In fact, SNI can be entirely transparent to the https library: it makes sense to extend the SSL library with SNI support without changing the API the https library uses. There is little reason to turn off SNI in a client that supports it, so enabling SNI by default makes sense.
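To make this concrete: in Python's standard library the caller never touches SNI at all. The URL-fetching layer extracts the hostname and passes it as `server_hostname`, and that single value is what the ssl module sends in the ClientHello and later checks the certificate against. A small sketch, assuming Python 3:

```python
import socket
import ssl
from urllib.parse import urlsplit

url = "https://example.com/some/page"

# The library, not the caller, extracts the hostname from the URL.
host = urlsplit(url).hostname

# Passing server_hostname here is what makes the TLS layer send SNI
# in the ClientHello; the same value is also used to verify the
# server certificate once the handshake runs.
ctx = ssl.create_default_context()
conn = ctx.wrap_socket(socket.socket(), server_hostname=host)
print(conn.server_hostname)  # prints example.com
```

No handshake happens until the socket actually connects, so the SNI value is simply recorded on the wrapped socket here; a real client would call `conn.connect((host, 443))` next.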

Force HTTPS in Neo4j configuration

Is it possible to force HTTPS URLs even when the X-Forwarded-Proto header is not present?
Update:
We are using HAProxy in front of the Neo4j server. The configuration is
frontend proxy-ssl
    bind 0.0.0.0:1591 ssl crt /etc/haproxy/server.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend neo-1
This works well when every connection contains only one request. However, for Neo4j drivers that use keep-alive (like py2neo), the header is added only to the first request.
Without the X-Forwarded-Proto header, the generated URLs are http://host:1591, instead of https://host:1591.
According to the HAProxy documentation, this is the normal behavior:
since HAProxy's HTTP engine does not support keep-alive, only headers
passed during the first request of a TCP session will be seen. All subsequent
headers will be considered data only and not analyzed. Furthermore, HAProxy
never touches data contents, it stops analysis at the end of headers.
The workaround is to add option http-server-close in the frontend, so that every request runs in its own connection, but it would be nicer if we could support keep-alive.
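The workaround amounts to one extra line in the frontend shown above (a sketch only; the port, certificate path, and backend name come from the config in the question):

```
frontend proxy-ssl
    bind 0.0.0.0:1591 ssl crt /etc/haproxy/server.pem
    # Close the server-side connection after each response so every
    # request is parsed individually and gets the header re-added.
    option http-server-close
    reqadd X-Forwarded-Proto:\ https
    default_backend neo-1
```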
Put something like Apache or Nginx in front of your Neo4j server to perform that task.
In terms of py2neo, I can add some functionality to cater for this situation quite easily. What if I were to include X-Forwarded-Proto: https for all https connections? Would that cause a problem in cases where a proxy isn't used?

SSL offloading / redirecting specific URLs using HAproxy?

I have a working setup using a hardware load balancer that controls redirection in such a fashion that all requests to http://example.com/login/* are redirected (using HTTP 302) to https://example.com/login/* and all requests that are NOT for /login are inversely redirected from HTTPS to HTTP.
This allows me to wrap the login functions and user/password exchange in SSL but otherwise avoid slowing connections with encryption, and it also solves some mixed-content warnings for embedded content in some browsers.
The load balancer, however, is end-of-life, and I am looking for a replacement solution, preferably in software.
I think HAProxy is going to be able to serve as my load-balancing solution, but I have only been able to find configuration examples and documentation for redirecting everything from HTTP to HTTPS, or vice versa.
Is it possible to do what I am proposing using HAproxy or should I look for a different solution?
I realize I will need to use the development version of HAproxy to support SSL at all.
I would suggest you do not use a DEV build for your production environment.
To answer your question, I would assume you're going to use HAProxy version 1.4:
Is it possible to do what I am proposing using HAProxy or should I look for a different solution?
Yes, it is possible, but you have to use another piece of software to handle the HTTPS traffic. Stunnel has proven good at this. So the setup is going to be:
HAProxy 1.4
# Redirect http://../login to https://../login
frontend HTTPSRedirect
    bind 1.2.3.4:80
    default_backend AppServers
    # "redirect prefix" appends the original path to the prefix,
    # so the prefix must not itself end in /login
    redirect prefix https://www.domain.com if { path_beg -i /login }

# Handler for requests coming from Stunnel4.
frontend HTTPReceiver
    bind 5.6.7.8:80
    default_backend AppServers
Stunnel4
[https]
; connect points at the HAProxy HTTPReceiver frontend IP
accept = 443
connect = 5.6.7.8:80
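For completeness: HAProxy 1.5 and later terminate SSL natively, so the Stunnel hop is no longer required. A hedged sketch of the same selective redirect using built-in SSL (the IP, domain, and certificate path are placeholders, following the examples above):

```
frontend Combined
    bind 1.2.3.4:80
    bind 1.2.3.4:443 ssl crt /etc/haproxy/www.domain.com.pem
    # /login must be HTTPS; everything else goes back to plain HTTP
    redirect scheme https code 302 if { path_beg -i /login } !{ ssl_fc }
    redirect scheme http  code 302 if !{ path_beg -i /login } { ssl_fc }
    default_backend AppServers
```

The `ssl_fc` fetch is true when the frontend connection arrived over SSL, which is what lets one frontend redirect in both directions.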