Cloudflare Load Balancer - redirect reveals backend server hostname

I have a Cloudflare Load Balancer configuration with two origin servers:
app.example.com -> backend1.example.com
                -> backend2.example.com
This works fine most of the time. However, when a backend server issues an HTTP redirect, the backend hostname is revealed to the browser. For example, if the backend redirects /a to /b, the request/response looks like this (some headers omitted for brevity):
Request
GET /a HTTP/1.1
Host: app.example.com
Response
HTTP/1.1 302 Found
Location: https://backend1.example.com/b
This means the browser tries to connect to the backend server directly, bypassing the load balancer.
What I want
Is it possible for the Location to be corrected by the Cloudflare Load Balancer, similar to what ProxyPassReverse does in an Apache reverse proxy?
For example:
HTTP/1.1 302 Found
Location: https://app.example.com/b
or even
HTTP/1.1 302 Found
Location: /b
Or do I need to find a way to fix this on the backend server?

Here's an approach that may work, if the backend supports it.
The X-Forwarded-Host request header is (a) injected by some reverse proxies and (b) honoured by some application servers. It lets the application see which hostname the browser originally connected to before the request was reverse proxied, and then use that hostname when constructing redirects.
Because the header is easily spoofed by clients, application servers often don't trust it automatically; you usually have to opt in.
Here's how to use it.
Add a Cloudflare Transform Rule (HTTP Request Header Modification):
Rule Name: Add X-Forwarded-Host
When: Hostname equals app.example.com
Set: Dynamic
Header Name: X-Forwarded-Host
Value: http.host
Then deploy the rule.
Now on the backend, configure the application server to support it (if required).
For example, JBoss or Wildfly:
/subsystem=undertow/server=default-server/https-listener=default:write-attribute(name=proxy-address-forwarding,value=true)
Express for Node.js: Use the trust proxy setting
Your application server may support it out of the box, it may need a bit of configuration, or it may not support it at all. Look for X-Forwarded-Host in the docs.
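For Express, a minimal sketch of the backend side (the route and redirect target are illustrative; 'trust proxy' and req.hostname are standard Express features):

const express = require('express');
const app = express();

// Trust the first proxy hop so Express reads the X-Forwarded-* headers
// instead of the values from the raw TCP connection.
app.set('trust proxy', 1);

app.get('/a', (req, res) => {
  // With trust proxy enabled, req.hostname is taken from X-Forwarded-Host,
  // so this redirect points at app.example.com rather than the backend.
  res.redirect(302, 'https://' + req.hostname + '/b');
});

app.listen(8080);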

Related

Redirecting HTTP to HTTPS behind load balancer

I'm moving an ASP.NET Core application to AWS Beanstalk and I'm having an issue forcing HTTPS for all requests. The useful error from the logs is:
Failed to determine the https port for redirect.
According to the docs on enforcing HTTPS:
If requests are forwarded in a reverse proxy configuration, use Forwarded Headers Middleware before calling HTTPS Redirection Middleware. Forwarded Headers Middleware updates the Request.Scheme, using the X-Forwarded-Proto header
Based on my setup it looks like it should be correct:
public void Configure(IApplicationBuilder app, IHostingEnvironment env) {
    // aws ssl termination
    app.UseForwardedHeaders(new ForwardedHeadersOptions() {
        ForwardedHeaders = ForwardedHeaders.XForwardedProto
    });

    if (env.IsDevelopment()) {
        app.UseDeveloperExceptionPage();
    } else {
        app.UseExceptionHandler("/error/500");
        app.UseHsts();
    }

    app.UseHttpsRedirection();
    // lots of other stuff removed for brevity
}
The load balancer accepts requests on HTTP (80) and HTTPS (443), and the application is set up in IIS to accept requests only on HTTP (80). This, together with the error message, makes it seem related to an announcement they made, but based on the docs I would expect the forwarded headers middleware to resolve the issue.
Update
If instead of using UseHttpsRedirection I switch to using the RequireHttpsAttribute and AddRedirectToHttps rewrite middleware the redirects work correctly. It's just the UseHttpsRedirection middleware that I can't get working.
Okay, let me summarize the comments:
My load balancer isn't performing the HTTPS redirection, so that's why I think I need the middleware. Unless I'm misunderstanding?
So you send an HTTP request to the proxy, which is redirected to your application and then you get that error?
This is because the X-Forwarded-Proto header has the value http, so the HTTPS Redirection middleware won't recognize the request as secure and will try to redirect.
As per the documentation, an HTTPS port configuration is required for UseHttpsRedirection:
A port must be available for the middleware to redirect an insecure request to HTTPS. If no port is available:
Redirection to HTTPS doesn't occur.
The middleware logs the warning "Failed to determine the https port for redirect."
HTTPS requests to the proxy should then work, since the X-Forwarded-Proto header is set to https and the redirection middleware skips them.
In this case, you have to configure HTTPS on the application too (since it's required by the middleware). It can be a self-signed certificate; in a reverse proxy configuration, port 443 on the ASP.NET Core app should never be hit directly. You don't even have to expose the port (when using Docker).
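If you would rather not terminate TLS in the app at all, the middleware's port can also be named explicitly via the documented HttpsRedirectionOptions, which satisfies the "port must be available" requirement. A sketch, assuming the load balancer's HTTPS listener is on 443:

public void ConfigureServices(IServiceCollection services) {
    // Tell the HTTPS redirection middleware which port to put in the
    // Location header, since the app itself only listens on HTTP.
    services.AddHttpsRedirection(options => {
        options.HttpsPort = 443;
    });
    // other service registrations...
}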
Alternatively, handle the HTTPS redirection on the reverse proxy itself. This is the better approach, as requests never hit your application in the first place unless they are HTTPS.

Use haproxy as a reverse proxy with an application behind Internet proxy

I need to integrate several web applications, on-premise and off-site, under a common internally hosted URL. The on-premise applications are in the same data center as the haproxy, but the off-site applications can only be reached via an HTTP proxy, because the server on which haproxy is running has no direct Internet access. Therefore I have to use an HTTP Internet proxy; SOCKS might be an option too.
How can I tell haproxy that a backend can only be reached via a proxy?
I would rather not use an additional component like socksify / proxifier / proxychains / tsocks / ... because this introduces additional overhead.
(Diagram omitted: the components involved in the setup.)
When I run this on a machine with direct Internet connection I can use this config and it works just fine:
frontend main
    bind *:8000
    acl is_extweb1 path_beg -i /policies
    acl is_extweb2 path_beg -i /produkte
    use_backend externalweb1 if is_extweb1
    use_backend externalweb2 if is_extweb2

backend externalweb1
    server static www.google.com:80 check

backend externalweb2
    server static www.gmx.net:80 check
(Obviously these are not the URLs I am talking to, this is just an example)
Haproxy health-checks the external applications and routes traffic to them.
In the locked-down environment of the company I work at, however, I have to use a proxy, and haproxy is unable to connect to the external applications.
How can I enable haproxy to use those external web application servers behind an HTTP proxy (no authentication needed), while providing access to them through a common HTTP page via the browser?
How about using DeleGate (http://delegate.org/documents/) for this, just as an idea?
haproxy -> delegate -f -vv -P127.0.0.1:8081 PROXY=<your-proxy>
See http://delegate9.org/delegate/Manual.shtml?PROXY for the PROXY option.
I know it's not that elegant, but it could work.
I have tested this setup with a local squid and this curl call:
echo 'GET http://www.php.net/' | curl -v telnet://127.0.0.1:8081
The curl call simulates the haproxy TCP call.
I was intrigued enough to try to make this work, but I really could not find anything in the haproxy documentation. I googled a bit and found that nginx might do the trick, but it didn't for me. After a bit more googling I ended up finding a configuration for Apache that works.
Here is the important part:
Listen 80
SSLProxyEngine on

ProxyPass        /example/ https://www.example.com/
ProxyPassReverse /example/ https://www.example.com/
ProxyRemote      https://www.example.com/ http://corporateproxy:port

ProxyPass        /google/ https://www.google.com/
ProxyPassReverse /google/ https://www.google.com/
ProxyRemote      https://www.google.com/ http://corporateproxy:port
I'm quite sure there should be a way to translate this configuration to nginx and even to haproxy... If I manage to find the time, I will update the answer with my findings.
For Apache to work you should also enable a few modules, as sketched below. I put up a GitHub repository with a basic Docker configuration that showcases this setup; feel free to have a look at it to see the full working configuration.
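On a Debian-style Apache layout, enabling those modules would look something like this (an assumption on my part; the exact list depends on your distribution and the rest of your config):

# mod_proxy + mod_proxy_http for ProxyPass/ProxyRemote,
# mod_ssl for SSLProxyEngine and the https:// backends
a2enmod proxy proxy_http ssl
systemctl restart apache2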

Spinnaker Gate is redirecting to the incorrect authentication URL

So I have Spinnaker running behind an HTTPS load balancer; externally I use the standard port 443, which gets mapped to the Spinnaker instance still on port 9000. I've gotten pretty much everything to work, except that a redirect from Gate is still appending the :9000 port to my URL.
Requests sent to https://my.url.com/gate/auth/redirect?to=https://my.url.com/#/infrastructure come back with a 301 whose Location header is https://my.url.com:9000/gate/login, which fails because the load balancer is only listening on 443. If I manually delete the port and go straight to https://my.url.com/gate/login, the OAuth flow works as expected, and once authed, all Deck functionality and subsequent Gate queries work as expected.
In my /etc/default/spinnaker file I have
SPINNAKER_DECK_BASEURL=https://my.url.com
SPINNAKER_GATE_BASEURL=https://my.url.com/gate
in /opt/spinnaker/config/gate-googleOAuth.yml I have
spring:
  oauth2:
    client:
      preEstablishedRedirectUri: ${SPINNAKER_GATE_BASEURL}/login
      useCurrentUri: false
and I've run /opt/spinnaker/bin/reconfigure_spinnaker.sh, plus restarts, to make sure Deck and Gate get updated. Does anyone have any ideas what I might be missing?
I figured out my problem. With the help of this issue pointing me in the right direction (https://github.com/spinnaker/spinnaker/issues/1112) and some digging I found that the issue was with apache2 and the reverse proxy back to gate.
ProxyPassReverse
This directive lets Apache httpd adjust the URL in the Location, Content-Location
and URI headers on HTTP redirect responses. This is essential when Apache httpd
is used as a reverse proxy (or gateway) to avoid bypassing the reverse proxy because
of HTTP redirects on the backend servers which stay behind the reverse proxy.
(from the apache2 documentation: https://httpd.apache.org/docs/current/mod/mod_proxy.html#proxypassreverse)
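For reference, the relevant part of the apache2 vhost ends up looking something like this (a sketch; the backend address and scheme are illustrative, not copied from my actual config):

ProxyPass        /gate http://localhost:9000/gate
ProxyPassReverse /gate http://localhost:9000/gate

ProxyPassReverse rewrites matching Location headers on the way back out, so the :9000 never reaches the browser. Note that the ProxyPassReverse URL has to match the prefix Gate actually writes into Location; if Gate emits absolute URLs with a different host, use that URL here instead.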

HAProxy to CloudFront

I have two components to my application: an API server (which is shared between several versions of the app), and static asset servers for the different distributions (mobile/desktop). I am using HAProxy to make the API server and the static asset servers behave as though they are on the same domain (to prevent CORS nastiness). My static asset servers are on CloudFront. Eventually, the HTML will reference the CloudFront URLs for the assets it depends on (to leverage global distribution). Temporarily, for ease, I'm just having everything go through HAProxy. I'm having a hard time, however, getting HAProxy to send requests properly to CloudFront.
My backend definition looks like this:
backend music_static
    http-request set-header Host <hash>.cloudfront.net
    option httpclose
    server cloudfront <hash>.cloudfront.net
I figured that by setting the Host header value, I would be "spoofing" things correctly on their way to CloudFront. Obviously, visiting <hash>.cloudfront.net directly behaves exactly as I expect.
You have probably moved on from this issue, but I see it's not answered yet.
One solution is to enable SNI on CloudFront (this costs money, but it worked for me: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html). The Host header above doesn't help, because the HTTP Host header is sent after the connection is established; to support SNI, CloudFront needs the hostname during the TLS handshake.
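In HAProxy terms, that means the server line has to speak TLS and present the SNI itself rather than relying on the Host header. A minimal sketch (the sni server keyword is standard HAProxy; verify none is only to keep the example short):

backend music_static
    http-request set-header Host <hash>.cloudfront.net
    # Connect over TLS and send the CloudFront hostname in the TLS handshake.
    # verify none skips certificate validation; use 'verify required' with a
    # CA bundle in production.
    server cloudfront <hash>.cloudfront.net:443 ssl sni str(<hash>.cloudfront.net) verify none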

Force HTTPS in Neo4j configuration

Is it possible to force HTTPS URLs even when the X-Forwarded-Proto header is not present?
Update:
We are using HAProxy in front of the Neo4j server. The configuration is
frontend proxy-ssl
    bind 0.0.0.0:1591 ssl crt /etc/haproxy/server.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend neo-1
This works well when every connection contains only one request. However, for Neo4j drivers that use keep-alive (like py2neo), the header is added only to the first request.
Without the X-Forwarded-Proto header, the generated URLs are http://host:1591 instead of https://host:1591.
According to the HAProxy documentation, this is the normal behavior:
since HAProxy's HTTP engine does not support keep-alive, only headers
passed during the first request of a TCP session will be seen. All subsequent
headers will be considered data only and not analyzed. Furthermore, HAProxy
never touches data contents, it stops analysis at the end of headers.
The workaround is to add option http-server-close to the frontend (shown below), which forces every request into its own connection, but it would be nicer if we could keep keep-alive.
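With the workaround applied, the frontend from above becomes (same configuration, plus the one extra option):

frontend proxy-ssl
    bind 0.0.0.0:1591 ssl crt /etc/haproxy/server.pem
    # Close each request's connection so every request, not just the first
    # on a TCP session, passes through header processing and gets the header.
    option http-server-close
    reqadd X-Forwarded-Proto:\ https
    default_backend neo-1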
Put something like Apache or Nginx in front of your Neo4j server to perform that task.
In terms of py2neo, I can add some functionality to cater for this situation quite easily. What if I were to include X-Forwarded-Proto: https for all https connections? Would that cause a problem in cases where a proxy isn't used?