HAProxy backend uses IP instead of domain name

I am trying to use a domain name in my HAProxy backend configuration:
backend prod_auth
    balance leastconn
    option httpchk GET /auth-service/generalhealthcheck
    http-check expect string "Service is up and reachable"
    server auth-service-1 domain-service.com:8080 check
but HAProxy uses the IP (10.1.122.83) of domain-service.com instead of the domain name itself to do the health check. This is an issue because my service works on the domain name, not on the IP:
root@ram:~$ curl http://domain-service.com/auth-service/generalhealthcheck
["Service is up and reachable"]
root@ram:~$ curl http://10.1.122.83/auth-service/generalhealthcheck
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
I cannot make my service work on the IP, as there are multiple other services running on the same server, each using a different domain name (rDNS).
I don't know why HAProxy is using the IP instead of the domain name; I verified this with Wireshark. Is there any way I can force HAProxy to use the domain name specified in the backend?
I tried adding this to /etc/hosts, but that does not work:
domain-service.com domain-service.com
I am using HA-Proxy version 1.7.8 2017/07/07

You can customize the Host header sent by the health check using the same option httpchk directive, as follows:
backend domain-service-backend
    mode http
    option httpchk GET /get HTTP/1.1\r\nHost:\ domain-service.com
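Adapted to the backend from the question, a sketch might look like this (HAProxy will still open the check connection to the resolved IP; the extra header just makes the request match the virtual host the service expects):

backend prod_auth
    balance leastconn
    # send the health check with an explicit Host header
    option httpchk GET /auth-service/generalhealthcheck HTTP/1.1\r\nHost:\ domain-service.com
    http-check expect string "Service is up and reachable"
    server auth-service-1 domain-service.com:8080 check

Regular proxied requests should be unaffected: in HTTP mode, HAProxy forwards the client's original Host header, so only the health check needs this adjustment.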

Related

Cloudflare Load Balancer - redirect reveals backend server hostname

I have a Cloudflare Load Balancer configuration with two origin servers:
app.example.com -> backend1.example.com
-> backend2.example.com
This works fine most of the time. However, when a backend server does an HTTP redirect, it reveals the backend server hostname to the browser. For example, if there is a redirect from /a to /b the request/response would look like this (with some headers omitted for brevity):
Request
GET /a HTTP/1.1
Host: app.example.com
Response
HTTP/1.1 302 Found
Location: https://backend1.example.com/b
This means the browser tries to connect to the backend server directly, bypassing the load balancer.
What I want
Is it possible for the Location to be corrected by the Cloudflare Load Balancer, similar to what ProxyPassReverse does in an Apache reverse proxy?
For example:
HTTP/1.1 302 Found
Location: https://app.example.com/b
or even
HTTP/1.1 302 Found
Location: /b
Or do I need to find a way to fix this on the backend server?
Here's an approach that may work, if the backend supports it.
The X-Forwarded-Host request header is (a) injected by some reverse proxies and (b) honoured by some application servers. It allows the application to see what original hostname the browser connected to before it was reverse proxied, and then use that hostname when constructing redirects.
Because the header is easily spoofed by clients, it's often not automatically trusted by the application server.
Here's how to use it.
Add a Cloudflare Transform Rule:
Rule Name: Add X-Forwarded-Host
When: Hostname equals app.example.com
HTTP Request Header Modification
Set Dynamic
Header Name: X-Forwarded-Host
Value: http.host
Deploy
Now on the backend, configure the application server to support it (if required).
For example, JBoss or Wildfly:
/subsystem=undertow/server=default-server/https-listener=default:write-attribute(name=proxy-address-forwarding,value=true)
Express for Node.js: Use the trust proxy setting
Your application server may support it out of the box, it may need a bit of configuration, or it may not support it at all. Look for X-Forwarded-Host in the docs.
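For comparison, in an HAProxy frontend (the proxy from the main question) the equivalent header injection is a one-liner. This is just an illustration, not part of the Cloudflare setup; the frontend and backend names are placeholders:

frontend main
    bind *:80
    mode http
    # preserve the hostname the browser used, so the backend can
    # reconstruct redirects against it
    http-request set-header X-Forwarded-Host %[req.hdr(host)]
    default_backend app-servers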

Encrypt & Decrypt data between Kubernetes API Server and Client

I have two Kubernetes clusters set up with kubeadm, and I'm using HAProxy to redirect and load balance traffic between the different clusters. Now I want to redirect requests to the respective API server of each cluster.
Therefore, I need to decrypt the SSL requests, read the "Host" HTTP header, and encrypt the traffic again. My example HAProxy config file looks like this:
frontend k8s-api-server
    bind *:6443 ssl crt /root/k8s/ssl/apiserver.pem
    mode http
    default_backend k8s-prod-master-api-server

backend k8s-prod-master-api-server
    mode http
    option forwardfor
    server master 10.0.0.2:6443 ssl ca-file /root/k8s/ssl/ca.crt
If I now access the API server via kubectl, I get the following errors:
kubectl get pods
error: the server doesn't have a resource type "pods"
kubectl get nodes
error: the server doesn't have a resource type "nodes"
I think I'm using the wrong certificates for decryption and encryption.
Do I need to use the apiserver.crt, apiserver.key, and ca.crt files in the directory /etc/kubernetes/pki?
Your setup probably entails authenticating with your Kubernetes API server via client certificates; when your HAProxy reinitiates the connection it is not doing so with the client key and certificate on your local machine, and it's likely making an unauthenticated request. As such, it probably doesn't have permission to know about the pod and node resources.
An alternative is to proxy at L4 by reading the SNI header and forwarding traffic that way. This way, you don't need to read any HTTP headers, and thus you don't need to decrypt and re-encrypt the traffic. This is possible to do with HAProxy.
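A minimal sketch of that approach, with mode tcp and SNI inspection (the hostnames and the second cluster's address are placeholders):

frontend k8s-api-servers
    bind *:6443
    mode tcp
    # wait for the TLS ClientHello so the SNI field can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend k8s-prod-master-api-server if { req_ssl_sni -i k8s-prod.example.com }
    use_backend k8s-test-master-api-server if { req_ssl_sni -i k8s-test.example.com }

backend k8s-prod-master-api-server
    mode tcp
    server master 10.0.0.2:6443

backend k8s-test-master-api-server
    mode tcp
    server master 10.0.0.3:6443

Because the TLS stream is passed through untouched, kubectl's client certificate reaches the API server directly. The kubeconfig entries must use these hostnames rather than IPs, otherwise the ClientHello carries no usable SNI.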

Use haproxy as a reverse proxy with an application behind Internet proxy

I need to integrate several web applications, on-premise and off-site, under a common internally hosted URL. The on-premise applications are in the same data center as HAProxy, but the off-site applications can only be reached via an HTTP proxy, because the server on which HAProxy runs has no direct Internet access. Therefore I have to use an HTTP Internet proxy; SOCKS might be an option too.
How can I tell HAProxy that a backend can only be reached via a proxy?
I would rather not use an additional component like socksify / proxifier / proxychains / tsocks / ... because this introduces additional overhead.
This picture shows the components involved in the setup:
When I run this on a machine with a direct Internet connection, I can use this config and it works just fine:
frontend main
    bind *:8000
    acl is_extweb1 path_beg -i /policies
    acl is_extweb2 path_beg -i /produkte
    use_backend externalweb1 if is_extweb1
    use_backend externalweb2 if is_extweb2

backend externalweb1
    server static www.google.com:80 check

backend externalweb2
    server static www.gmx.net:80 check
(Obviously these are not the URLs I am talking to, this is just an example)
HAProxy is able to check the external applications and route traffic to them.
In the secured environment of the company I work at, I have to use a proxy, and HAProxy is unable to connect to the external applications.
How can I enable HAProxy to use those external web application servers behind an HTTP proxy (no authentication needed), while providing access to them through a common HTTP page / via browser?
How about using DeleGate ( http://delegate.org/documents/ ) for this, just as an idea.
haproxy -> delegate -f -vv -P127.0.0.1:8081 PROXY=<your-proxy>
http://delegate9.org/delegate/Manual.shtml?PROXY
I know it's not that elegant but it could work.
I have tested this setup with a local Squid and this curl call:
echo 'GET http://www.php.net/' | curl -v telnet://127.0.0.1:8081
The curl call simulates the HAProxy TCP call.
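On the HAProxy side, the backend would then point at the local DeleGate listener instead of the external host. Since the curl test sends an absolute URI, the request likely has to be rewritten into proxy (absolute) form as well; http-request set-uri (available since HAProxy 1.6) can do that. A sketch, reusing a backend from the question:

backend externalweb1
    mode http
    # rewrite to absolute form so the upstream proxy accepts the request
    http-request set-uri http://www.google.com%[url]
    # DeleGate on 127.0.0.1:8081 relays this via the corporate proxy
    server static 127.0.0.1:8081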
I was intrigued to make it work, but I really could not find anything in the HAProxy documentation. I googled a bit and found that nginx might do the trick, but it didn't work for me; after a bit more googling I ended up finding a configuration for Apache that works.
Here is the important part:
Listen 80
SSLProxyEngine on
ProxyPass /example/ https://www.example.com/
ProxyPassReverse /example/ https://www.example.com/
ProxyRemote https://www.example.com/ http://corporateproxy:port
ProxyPass /google/ https://www.google.com/
ProxyPassReverse /google/ https://www.google.com/
ProxyRemote https://www.google.com/ http://corporateproxy:port
I'm quite sure there should be a way to translate this configuration to nginx and even to HAProxy... if I manage to find the time I will update the answer with my findings.
For Apache to work you should also enable a few modules; see the sketch below. I put up a GitHub repository with a basic Docker configuration that showcases this; feel free to have a look at it to see the full working configuration.
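On a Debian/Ubuntu-style Apache installation, enabling the modules behind the directives above would look something like this (a sketch; mod_proxy provides ProxyPass/ProxyPassReverse/ProxyRemote, mod_proxy_http the HTTP backend support, and mod_ssl the SSLProxyEngine directive):

# enable the required modules and restart Apache
a2enmod proxy proxy_http ssl
systemctl restart apache2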

Intercept all outgoing connections made by a process to redirect it to a localhost proxy

I am working in an environment where multiple services are hosted. A service (consider web services) exposes some APIs and also acts as a client to call other services.
Now what I want to achieve is this: if Service A (acting as client) wants to talk to Service B (acting as server here) using HTTP, I want to intercept the outgoing HTTP request and redirect it to a localhost proxy.
There are multiple services running on a host, and a service also talks to multiple other services, so I don't want to change every outgoing endpoint configuration to point to the proxy.
Sample configurations:
Following are the service endpoints which Service A connects to while doing some processing:
a1.example.com:2430
a2.example.com:8280
a3.example.com:4380
a4.example.com:4280
a5.example.com:3158
a6.example.com:8238
I have looked into configuring Squid as a transparent proxy, but how should I force every outgoing connection (with different destination ports) to redirect to the localhost proxy?
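The usual mechanism for this is an iptables REDIRECT rule on the OUTPUT chain of the nat table, with an exemption for the proxy's own traffic so redirected connections don't loop. A sketch, assuming Squid listens on port 3128, runs as user proxy, and has its port configured for interception (http_port 3128 intercept):

# let traffic generated by the proxy itself leave normally (avoids a loop)
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner proxy -j RETURN
# redirect the service ports from the question to the local proxy
iptables -t nat -A OUTPUT -p tcp -m multiport \
    --dports 2430,8280,4380,4280,3158,8238 -j REDIRECT --to-ports 3128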

Apiary Proxy Request Timed Out

I'm trying to use Apiary to document my API and test requests, but I keep getting a 504 Proxy Request Timed Out response.
My API is running on my machine under http://localhost:3000/ and I specified that under the HOST metadata.
When I click Compare under the call, it shows that Apiary added a "host" header that specifies a user-specific proxy.
Is there something I am missing, or does Apiary just not like localhost?
Because the proxy remotely calls your specified HOST, you cannot directly call localhost. You could use https://ngrok.com to set up a tunnel and use the tunnel URL as the HOST.
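For example, a sketch assuming the ngrok client is installed and the API runs on port 3000 as above:

# expose local port 3000 through a public tunnel
ngrok http 3000
# then set the tunnel's forwarding URL (e.g. https://abc123.ngrok.io)
# as the HOST in the API Blueprint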