I have haproxy as a load balancer in front of my web servers (IIS), and that works well.
Now I want to have mod_security configured with Apache for request filtering, and then pass the requests on to haproxy for load balancing.
I have already installed mod_security on CentOS; now how can I tell my WAF (mod_security) to forward requests to haproxy?
Okay, I got it solved... add the following lines to the httpd.conf file:
ProxyPreserveHost On
ProxyRequests off
ProxyVia Off
ProxyPass / http://x.x.x.x:80/
ProxyPassReverse / http://x.x.x.x:80/
Note: Replace x.x.x.x with your actual IP
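If the proxying doesn't kick in, it's worth confirming that mod_proxy, mod_proxy_http and mod_security are actually loaded before restarting; a quick check, assuming a standard CentOS httpd install:
# list the loaded modules and filter for the ones this setup needs
httpd -M | grep -Ei 'proxy|security'
# then apply the configuration change
systemctl restart httpd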
I have a Cloudflare Load Balancer configuration with two origin servers:
app.example.com -> backend1.example.com
-> backend2.example.com
This works fine most of the time. However, when a backend server does an HTTP redirect, it reveals the backend server hostname to the browser. For example, if there is a redirect from /a to /b the request/response would look like this (with some headers omitted for brevity):
Request
GET /a HTTP/1.1
Host: app.example.com
Response
HTTP/1.1 302 Found
Location: https://backend1.example.com/b
This means the browser tries to connect to the backend server directly, bypassing the load balancer.
What I want
Is it possible for the Location to be corrected by the Cloudflare Load Balancer, similar to what ProxyPassReverse does in an Apache reverse proxy?
For example:
HTTP/1.1 302 Found
Location: https://app.example.com/b
or even
HTTP/1.1 302 Found
Location: /b
Or do I need to find a way to fix this on the backend server?
Here's an approach that may work, if the backend supports it.
The X-Forwarded-Host request header is (a) injected by some reverse proxies and (b) honoured by some application servers. It allows the application to see what original hostname the browser connected to before it was reverse proxied, and then use that hostname when constructing redirects.
It's easily spoofed (any client can send it directly), so it's often not trusted automatically by the application server and has to be explicitly enabled.
Here's how to use it.
Add a Cloudflare Transform Rule:
Rule Name: Add X-Forwarded-Host
When: Hostname equals app.example.com
HTTP Request Header Modification
Set Dynamic
Header Name: X-Forwarded-Host
Value: http.host
Deploy
Now on the backend, configure the application server to support it (if required).
For example, JBoss or Wildfly:
/subsystem=undertow/server=default-server/https-listener=default:write-attribute(name=proxy-address-forwarding,value=true)
Express for Node.js: Use the trust proxy setting
Your application server may support it out of the box, it may need a bit of configuration, or it may not support it at all. Look for X-Forwarded-Host in the docs.
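A quick way to check whether your backend honours the header is to hit it directly while supplying the header by hand; a hedged example using the placeholder hostnames from the question:
# ask the backend for /a as if the request had come through Cloudflare
curl -sD - -o /dev/null -H 'X-Forwarded-Host: app.example.com' https://backend1.example.com/a
# if the header is honoured, the Location response header should now point at app.example.com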
I deploy Piranha CMS from my Debian development machine to a CentOS 7.9.2009 server. On the development machine I have no issue with the manager login: after logging in with the default user and password I am redirected to the proper page. However, on the production server, after logging in at http://10.10.10.10:5010/manager the server refuses to connect, and I'm not even able to access the http://10.10.10.10:5010/manager page again until I clear the cache. The piranha.db on the development machine and the server are exactly the same. Piranha CMS is served by Kestrel on port 5010. The home page is accessible normally at http://10.10.10.10:5010, and all other pages are accessible as well, except the manager. Here is my website conf:
<VirtualHost *:80>
RequestHeader set X-Forwarded-Proto "http"
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:5010/
ProxyPassReverse / http://127.0.0.1:5010/
ProxyPass / http://10.10.10.10:5010/
ProxyPassReverse / http://10.10.10.10:5010/
ServerName kid.domain.com
ServerAlias kid.domain.com
DocumentRoot /var/www/kid.domain.com/public_html
ErrorLog /var/www/kid.domain.com/error.log
CustomLog /var/www/kid.domain.com/request.log combined
</VirtualHost>
Any advice is highly appreciated.
Could this be something to do with HTTPS? I know that I had an issue logging in to my manager interface when I hadn't realised that I was trying to access it via HTTP rather than HTTPS.
Is your server and/or your CMS requiring HTTPS somewhere along the line?
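If it turns out the manager does need HTTPS, a minimal sketch of a TLS-terminating vhost in front of Kestrel could look like the following (the certificate paths are placeholders and mod_ssl has to be enabled):
<VirtualHost *:443>
    ServerName kid.domain.com
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/kid.domain.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/kid.domain.com.key
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:5010/
    ProxyPassReverse / http://127.0.0.1:5010/
</VirtualHost>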
I need to integrate several web applications, on-premise and off-site, under a common internally hosted URL. The on-premise applications are in the same data center as the haproxy, but the off-site applications can only be reached via an HTTP proxy, because the server on which haproxy is running has no direct Internet access. Therefore I have to use an HTTP Internet proxy; SOCKS might be an option too.
How can I tell haproxy that a backend can only be reached via a proxy?
I would rather not use an additional component like socksify / proxifier / proxychains / tsocks / ... because this introduces additional overhead.
This picture shows the components involved in the setup:
When I run this on a machine with a direct Internet connection I can use this config and it works just fine:
frontend main
    bind *:8000
    acl is_extweb1 path_beg -i /policies
    acl is_extweb2 path_beg -i /produkte
    use_backend externalweb1 if is_extweb1
    use_backend externalweb2 if is_extweb2

backend externalweb1
    server static www.google.com:80 check

backend externalweb2
    server static www.gmx.net:80 check
(Obviously these are not the URLs I am talking to; this is just an example.)
Haproxy is able to check the external applications and routes traffic to them.
In the restricted environment of the company I work at, I have to use a proxy, and haproxy is unable to connect to the external applications.
How can I enable haproxy to reach those external web application servers behind an HTTP proxy (no authentication needed), while providing access to them through a common HTTP page / via the browser?
How about using DeleGate (http://delegate.org/documents/) for this, just as an idea?
haproxy -> delegate -f -vv -P127.0.0.1:8081 PROXY=<your-proxy>
http://delegate9.org/delegate/Manual.shtml?PROXY
I know it's not that elegant but it could work.
I have tested this setup with a local Squid and this curl call:
echo 'GET http://www.php.net/' |curl -v telnet://127.0.0.1:8081
The curl call simulates the TCP call that haproxy makes.
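In haproxy the backends would then point at the local DeleGate listener instead of the external hosts; a minimal sketch, assuming DeleGate is running on 127.0.0.1:8081 as in the command above (DeleGate itself may still need MOUNT rules so it knows where to forward the origin-form requests haproxy sends it):
backend externalweb1
    # send traffic to the local DeleGate instance, which relays it
    # through the corporate HTTP proxy to the real destination
    server delegate1 127.0.0.1:8081 check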
I was intrigued and wanted to make it work, but I really could not find anything in the haproxy documentation, so I googled a bit and found that nginx might do the trick. It didn't for me; after a bit more googling I ended up finding a configuration for Apache that works.
Here is the important part:
Listen 80
SSLProxyEngine on
ProxyPass /example/ https://www.example.com/
ProxyPassReverse /example/ https://www.example.com/
ProxyRemote https://www.example.com/ http://corporateproxy:port
ProxyPass /google/ https://www.google.com/
ProxyPassReverse /google/ https://www.google.com/
ProxyRemote https://www.google.com/ http://corporateproxy:port
I'm quite sure there should be a way to translate this configuration to nginx and even to haproxy... if I manage to find the time I will update the answer with my findings.
For Apache to work you should also enable a few modules. I put up a GitHub repository with a basic Docker configuration that showcases this; feel free to have a look at it to see the full working configuration.
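For reference, a sketch of the modules that configuration depends on, using Debian-style tooling (module names and commands differ on other distributions):
# enable the proxy and TLS modules used by the ProxyPass/ProxyRemote setup above
a2enmod proxy proxy_http proxy_connect ssl
systemctl reload apache2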
On AWS, I'm hosting multiple (totally different) domains on EC2, covered by an ELB on top. I already have one wildcard SSL cert for one domain and its subdomains (xxxx.site1.com).
Can I now add one more single SSL cert (on the same ELB) for another, different domain, like www.site2.com?
I'm asking this because some articles say it won't work and will just crash.
Please kindly advise.
No. The only way you could do it is to use a second port for HTTPS connections (other than 443), which doesn't apply to real-world scenarios since 443 is the default port for HTTPS.
Having said that, you can simply create a second ELB and assign your second wildcard certificate to it. You can also forward your traffic to the same backend server as the one where the first ELB is forwarding its traffic to.
Hope this helps.
Yes. But not by terminating SSL on the load balancer. You have to enable Proxy Protocol on the ELB and transparently forward TCP requests to the web server. There are more details in this article on how to configure the ELB with example NGINX configurations:
Multiple SSL domains on AWS ELB with Nginx
Using the AWS CLI to enable it:
aws elb create-load-balancer-policy \
    --load-balancer-name acme-balancer \
    --policy-name EnableProxyProtocol \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=True

aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name acme-balancer \
    --instance-port 9443 \
    --policy-names EnableProxyProtocol

aws elb describe-load-balancers --load-balancer-name acme-balancer
There is also a mod_proxy_protocol module available if you are using Apache.
This does NOT add an additional distribution layer; the ELB still handles distributing the traffic and connection draining. However, SSL termination is handled by each individual server.
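If you're on Apache 2.4.31 or newer, the bundled mod_remoteip can also consume the PROXY protocol header instead of the third-party module; a minimal sketch:
# requires Apache >= 2.4.31
LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPProxyProtocol On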
Since October 10th, 2017, it's possible to do this with an Application Load Balancer. You can bind multiple certificates to the same secure listener on your load balancer, and the ALB will automatically choose the optimal TLS certificate for each client. For more information see: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
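With an ALB the additional certificates are simply attached to the existing HTTPS listener; a hedged CLI sketch with placeholder ARNs:
# attach an extra ACM certificate to an existing HTTPS listener
aws elbv2 add-listener-certificates \
    --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/abc123/def456 \
    --certificates CertificateArn=arn:aws:acm:eu-west-1:123456789012:certificate/11111111-2222-3333-4444-555555555555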
I agree with the above answer for Nginx by Garth Kerr.
In the case of Apache:
You can terminate SSL certificates either at the ELB or at the Apache/Nginx (server) level.
In the case of a multi-tenant (multi-client) architecture, we may need to support different customers (with different domains, e.g. *.abc.com, *.xyz.com) under a single ELB, which will not work in the existing ELB setup.
Solution:
You can do this by adding listeners to the ELB like below:
TCP 443 (instead of HTTPS 443) - this passes the 443 requests straight through (see the CLI sketch after this list).
Then, you can terminate the SSL certificates at the server level.
You have to purchase the certificates from an external vendor (like GoDaddy) and install and terminate them at the server level.
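On the ELB side this amounts to a plain TCP listener on port 443; a hedged CLI sketch with a placeholder load balancer name:
# pass port 443 straight through to the instances as TCP
aws elb create-load-balancer-listeners \
    --load-balancer-name my-elb \
    --listeners Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443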
For example, the Apache virtual host for abc.com might look like this:
NameVirtualHost *:443
<VirtualHost *:443>
    ServerName abc.com

    #### abc HTTPS Certificate
    SSLEngine on
    SSLCertificateFile /opt/organization/site/ssl_keys/abc/abc_gd.crt
    SSLCertificateKeyFile /opt/organization/site/ssl_keys/abc/abc.pem
    SSLCertificateChainFile /opt/organization/site/ssl_keys/abc/abc_gd_bundle.crt

    WSGIScriptAlias / /opt/organization/site/deployment-config/abc.wsgi
    ServerSignature On

    Alias /media/ /opt/organization/site/media/
    <Directory /opt/organization/site/media/>
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>

NameVirtualHost *:80
<VirtualHost *:80>
    ServerName abc.com

    # Rewrite to HTTPS in case of HTTP
    RewriteEngine On
    RewriteCond %{SERVER_NAME} abc.com
    RewriteCond %{HTTP:X-Forwarded-Proto} !https
    RewriteRule . https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

    WSGIScriptAlias / /opt/organization/site/deployment-config/abc.wsgi
    ServerSignature On

    Alias /media/ /opt/organization/site/media/
    <Directory /opt/organization/site/media/>
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>
Can somebody tell me why
ProxyPass /melonfire/ http://www.melonfire.com/
works when accessing http://localhost/melonfire/, but
ProxyPass /facebook/ http://www.facebook.com/
is redirecting the browser to http://www.facebook.com/.
Why? How could I make these redirections stop?