Haproxy ability for multiple port 80, 443 frontends - haproxy

I need both layer 7 and layer 4 handling: an API where I divert requests to 2 different backends, one backend for GET requests and one backend for PUT, GET & DELETE requests. That is my layer 7 frontend.
The layer 4 frontend handles the websites that make the API requests; those are just split round robin.
Layer 7 was working consistently before I added this layer 4 section, i.e. it just did our load balancing for the API. As I migrate the websites with the layer 4 frontend, API requests, i.e. layer 7, get lost sometimes but not always.
As you can see, sometimes the request is lost and sometimes it works:
[richv#lb2 ~]$ curl api
curl: (52) Empty reply from server <-- this is not OK, HTTP 503
[richv#lb2 ~]$ curl api
{"error_code":256,"error_message":"No Site specified"} <-- this is ok, cause didnt give all the headers.
Front ends are:
frontend layer7-http-listener
bind *:80
bind *:443 ssl crt /etc/httpd/certs/haproxy.pem
mode http
option httpclose
option httplog
option forwardfor
option accept-invalid-http-request
frontend layer4-listener
bind *:80 transparent
bind *:443 transparent
bind *:3306
mode tcp
option tcplog
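For reference, the GET-vs-other-method split described above could be expressed with method ACLs inside the layer 7 frontend. This is only a sketch; the backend names api_get and api_write are placeholders, not taken from the original config:
frontend layer7-http-listener
bind *:80
bind *:443 ssl crt /etc/httpd/certs/haproxy.pem
mode http
# route by HTTP method; backend names are placeholders
acl is_get method GET
use_backend api_get if is_get
default_backend api_write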

Related

HAProxy redirect port and mask url

I have a couple of webservers that are reachable directly through the following URLs:
https://abcd.example.com:8445/desktop/container/landing.jsp?locale=en_US
https://wxyz.example.com:8445/desktop/container/landing.jsp?locale=en_US
I need to use HAProxy to load balance between the two and use the following URLs instead when hitting the frontend:
http://1234.example.com/desktop/container/landing.jsp?locale=en_US
or
https://1234.example.com:8445/desktop/container/landing.jsp?locale=en_US
Other requirements besides the two above:
If initial traffic is port 80, convert to port 8445
Mask the URL so that, while the browser is redirected to HTTPS and port 8445, the host remains intact, like so: https://1234.example.com:8445/desktop/container/landing.jsp?locale=en_US
Here's my config so far:
frontend WebApp_frontend
mode http
bind 10.4.34.11:80
acl is80 dst_port 80
http-request set-uri https://%[req.hdr(Host)]:8445%[path]?%[query] if is80
default_backend WebApp_backend
backend WebApp_backend
description WebApp
balance roundrobin
mode http
server webserver1 10.2.89.222:8445 check inter 5s fall 3 rise 5 downinter 1m ssl verify none
server webserver2 10.4.89.223:8445 check inter 5s fall 3 rise 5 downinter 1m ssl verify none
The problem I'm facing right now is that when you access the frontend, HAProxy redirects you to one of the webservers and forces your client to hit the webserver directly instead of going through HAProxy. I need the connection to remain through HAProxy.
If all your application is doing is redirecting to HTTPS, then you should probably just handle that directly within HAProxy. You might also want to explore whether your application supports X-Forwarded-Proto and X-Forwarded-Host.
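For instance, a minimal sketch of doing the HTTP-to-HTTPS redirect directly in the frontend (the host and port mirror the question; this assumes the Host header carries no port, and is an illustration rather than a drop-in config):
frontend WebApp_frontend
mode http
bind 10.4.34.11:80
# send plain-HTTP clients a 301 to the HTTPS URL on port 8445, same host and URI
http-request redirect location https://%[req.hdr(host)]:8445%[capture.req.uri] code 301
default_backend WebApp_backend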
Another option is to have HAProxy rewrite the redirects from the backend application to the hostname you choose. With HAProxy 2.1 you would do something like this:
http-response replace-header location https?://[^:/]*(:?[0-9]+/.*) https://1234.example.com\1 if { status 301:302 }

Proxy requests to backend using h2c

Re-asking the question from the HAProxy Discourse site here in the hope of getting more eyes on it.
I am using HA-Proxy version 1.9.4 2019/02/06 to proxy HTTP traffic to an h2c backend. However, I am seeing HA-Proxy set the :scheme to https (and, as far as I can tell, use SSL in the request) when proxying the request. When I hit the backend directly, the :scheme is set to http and the request is non-SSL as expected. I have verified this HA-Proxy behavior using Wireshark.
Any suggestions on what I should change in my configuration to make sure that the :scheme gets set to http when proxying the request to the backend?
I am using curl 7.54.0 to make requests:
$ curl http://localhost:9090
where HA-Proxy is listening on port 9090.
My HA-Proxy config file:
global
maxconn 4096
daemon
defaults
log global
option http-use-htx
timeout connect 60s
timeout client 60s
timeout server 60s
frontend waiter
mode http
bind *:9090
default_backend local_node
backend local_node
mode http
server localhost localhost:8080 proto h2
It's not supported yet. The client=>haproxy connection can be HTTP/2; the haproxy=>server connection cannot.
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#1.1
HTTP/2 is only supported for incoming connections, not on connections going to servers.
Just add proto h2 to the server definition.
Quoting from the example:
server server1 192.168.1.13:80 proto h2
It's an experimental feature of haproxy-1.9; you must enable option http-use-htx to use it.
option http-use-htx is enabled by default since haproxy-2.0-dev3.
This was reported as an issue on the HAProxy GitHub and has been fixed in version 2.0.
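For completeness, a minimal sketch of the same setup on HAProxy 2.0+, where HTX is enabled by default and proto h2 on the server line gives cleartext HTTP/2 (h2c) to the backend; the addresses and ports simply mirror the question:
frontend waiter
mode http
bind *:9090
default_backend local_node
backend local_node
mode http
# no "ssl" keyword on the server line, so the connection stays cleartext h2c
server localhost localhost:8080 proto h2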

how to prevent 502 status code as response by haproxy as load balancer

I have 3 servers:
server (A) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (B) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (C) = HAProxy as a load balancer for port 80 of servers (A) and (B)
Servers A & B are quite similar.
Everything works very well and HAProxy forwards requests to servers (A) & (B), but if Kestrel on one of the servers (e.g. A) is killed, nginx responds with a 502 Bad Gateway error, and HAProxy does not detect this issue and still sends requests to that server. This is a mistake! It should send requests to server (B) in that case.
global
log 127.0.0.1 local2 info
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
retries 3
timeout connect 5s
timeout client 50s
timeout server 50s
stats enable
stats hide-version
stats auth admin:admin
stats refresh 10s
stats uri /stat?stats
frontend http_front
bind *:80
mode http
option httpclose
option forwardfor
reqadd X-Forwarded-Proto:\ http
default_backend http_back
backend http_back
balance roundrobin
mode http
cookie SERVERID insert indirect nocache
server ServerA 192.168.1.2:80 check cookie ServerA
server ServerB 192.168.1.3:80 check cookie ServerB
How can I resolve this issue?
Thanks very much.
You are only checking whether nginx is running, not whether the application is healthy enough to use.
In the backend, add option httpchk.
option httpchk GET /some/path HTTP/1.1\r\nHost:\ example.com
Replace /some/path with a path that will prove the application is usable on that server if it returns 200 OK (or any 2xx or 3xx response), and replace example.com with the HTTP Host header the application expects.
option httpchk
By default, server health checks only consist in trying to establish a TCP connection. When option httpchk is specified, a complete HTTP request is sent once the TCP connection is established, and responses 2xx and 3xx are considered valid, while all other ones indicate a server failure, including the lack of any response.
This will mark the server as unhealthy if the app is not healthy, so HAProxy will stop sending traffic to it. You will want to configure a check interval for each server using the inter, downinter, and fastinter options on each server entry to specify how often HAProxy should perform the check.
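A sketch of the backend above with the health check and explicit check intervals added; the check path, Host value, and interval values are examples you would adjust:
backend http_back
balance roundrobin
mode http
option httpchk GET /some/path HTTP/1.1\r\nHost:\ example.com
cookie SERVERID insert indirect nocache
server ServerA 192.168.1.2:80 check inter 5s fastinter 1s downinter 10s cookie ServerA
server ServerB 192.168.1.3:80 check inter 5s fastinter 1s downinter 10s cookie ServerB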

Transmission Torrent behind HAProxy - HTTP Response Header used as session identifier and stickiness token

I've been trying, and failing so far, to run Transmission behind HAProxy.
If I just add a new backend and route traffic as follows:
frontend http-in
bind *:80
reqadd X-Forwarded-Proto:\ http
acl host1 hdr_end(host) -i web.host1.host
use_backend apache_backend if host1
acl transmission_host hdr_end(host) -i transmission.host1.host
use_backend transmission_backend if transmission_host
Then I get a 409 Conflict error stating I have an invalid session-id header. That's pretty obvious and expected, since there's a proxy in the middle.
I thought of recompiling Transmission to get rid of the check, but decided in the end to face the challenge of learning a bit more about HAProxy. What did I have in mind?
Client reaches HAProxy
HAProxy connects to transmission-daemon
Daemon replies with X-Transmission-Session-Id
HAProxy stores the Session-Id somehow and replaces the Session-Id sent by the client with the one captured by HAProxy.
After a lot of Googling and playing with the settings, I got an almost working configuration:
frontend http-in
bind *:80
reqadd X-Forwarded-Proto:\ http
capture response header X-Transmission-Session-Id len 48
acl host1 hdr_end(host) -i web.host1.host
use_backend apache_backend if host1
acl transmission_host hdr_end(host) -i transmission.host1.host
use_backend transmission_backend if transmission_host
backend transmission_backend
mode http
http-request set-header X-Transmission-Session-Id %hs
server transmission-daemon transmission.intranet:9091
My configuration examples are summarized.
It works, sort of. I get a login prompt for Transmission, but the page loads incredibly slowly. I'm more than 10 minutes in and it still hasn't fully loaded.
More pages go through this proxy: HTTP, HTTPS, TCP, some load balanced, some set up as fail-overs. They all load normally and fast. If I connect directly to the transmission-daemon server, it loads fast as well.
I'll keep looking around.
Any ideas?
Thanks in advance!
3 years later,
from what I've seen in https://gist.github.com/yuezhu/93184b8d8d9f7d0ada0a186cbcda9273
you should capture the request and response headers in frontend http-in,
I didn't dig much more, but the backend seems to need
stick-table type binary len 48 size 30k expire 30m
stick store-response hdr(X-Transmission-Session-Id)
stick on hdr(X-Transmission-Session-Id)
to work
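Putting those pieces together, a backend along these lines is one way to combine them; this is a sketch based on the lines quoted above (the server address comes from the question), not a verified config:
backend transmission_backend
mode http
# learn the session id from the daemon's response, then stick requests
# carrying the same X-Transmission-Session-Id to the same server
stick-table type binary len 48 size 30k expire 30m
stick store-response hdr(X-Transmission-Session-Id)
stick on hdr(X-Transmission-Session-Id)
server transmission-daemon transmission.intranet:9091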

HAProxy - Request getting Broadcast to every server

I am hosting two different application versions on the same servers, on different ports. In this basic setup I expect that the following configuration should send requests in round-robin fashion to the different ports. But what I am observing is that the request gets broadcast to ALL of my server endpoints. Meaning, in the example below, my main request to port 8080 gets forwarded to both www.myappdemo.com:5001 and www.myappdemo.com:5002... although the response sent by the proxy is ALWAYS from www.myappdemo.com:5001.
Can anyone tell me what is wrong here?
global
debug
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http-in
bind *:8080
default_backend servers
backend servers
balance roundrobin
server svr_50301 www.myappdemo.com:5001 maxconn 32 check
server svr_50302 www.myappdemo.com:5002 maxconn 32 check
I can advise you to enable logging and the web interface (stats page); after that you can provide us more logs, and you can also check in the web interface whether HAProxy detects your second server (svr_50302) as alive.
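A minimal sketch of what enabling the stats web interface and logging could look like; the stats port, credentials, and syslog facility here are assumptions, not from the question:
global
# syslog target and facility are examples
log 127.0.0.1 local2 info
defaults
log global
option httplog
listen stats
bind *:8404
mode http
stats enable
stats uri /stats
stats refresh 10s
stats auth admin:admin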
References to the HAProxy 1.5 docs:
Web Interface - http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stats%20admin
Good info on how to enable logging - http://webdevwonders.com/haproxy-load-balancer-setup-including-logging-on-debian/
Best Regards,
Dani