HAProxy timeout after 120 seconds

I made a load balancer using HAProxy. My connections can take up to 1-4 minutes, so I increased the default timeout values in HAProxy to 300s as follows:
global
    daemon
    log 127.0.0.1 local0 notice
    maxconn 2000

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    timeout connect 300s
    timeout client 300s
    timeout server 300s
    option http-keep-alive

frontend LOAD_BALANCER_TIER
    bind *:80
    default_backend WEB_SERVER_TIER

backend WEB_SERVER_TIER
    balance leastconn
    mode tcp
    server segmentingApi01 some_private_ip:7331 check tcp-ut 300000
    server segmentingApi02 some_private_ip:7331 check tcp-ut 300000
    server segmentingApi03 some_private_ip:7331 check tcp-ut 300000
As you can see, I even increased the TCP timeout (tcp-ut) in the server options. Yet my requests to the load balancer time out after exactly 120s. Please note that I believe the issue is with the load balancer, as a request sent to the servers directly (some_private_ip:7331) does not time out.
I was wondering if somebody could help me with this.

First, I don't think "option redispatch" and "option http-keep-alive" do anything in TCP mode, as HAProxy does not deal with application (HTTP) information in TCP mode.
Maybe you should give "option tcpka" a try. It enables TCP keepalives, so the OS won't drop the connection when no data is exchanged for a while - which I guess is what is happening here.
Also, you should not set "timeout connect" to such a high value, because this timeout only covers making the initial TCP connection to the server.
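As a minimal sketch (keeping the log, retry and 300s timeout settings from the question), the defaults section could look like this; the keepalive intervals themselves are left to the OS defaults:

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    option tcpka             # send TCP keepalives so idle connections are not dropped
    retries 3
    timeout connect 5s       # only covers establishing the TCP connection to the server
    timeout client 300s      # client-side inactivity timeout
    timeout server 300s      # server-side inactivity timeout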

Related

PHP application is too slow when using HAProxy

In my PHP application, when I use a direct MySQL connection it works perfectly. But when I use the HAProxy IP for HA, it takes too long to retrieve results.
The HAProxy configuration is as follows:
global
    maxconn 5000
    nbproc 5

defaults
    retries 3
    option redispatch
    timeout client 120s
    timeout connect 10s
    timeout server 120s

frontend galera_cluster_frontend
    bind 0.0.0.0:3307
    mode tcp
    log global
    option dontlognull
    option tcplog
    default_backend galera_cluster_backend

backend galera_cluster_backend
    mode tcp
    option tcpka
    option tcplog
    option log-health-checks
    retries 3
    balance leastconn
    server db-server-01 e2e-34-201:3306 check weight 1
    server db-server-02 e2e-34-202:3306 check weight 3
    server db-server-03 e2e-34-203:3306 check weight 2

Chaining haproxys with optimal connection re-use and keep-alive

I need to understand the optimal connection configurations for chaining 2 haproxys to provide maximum connection reuse between the edge proxy, the app proxy and the app containers.
Where should I be correctly using http-server-close, http-reuse or keep-alive settings?
I have haproxy instances sitting in edge regions which proxy over private networks to a central data center where another haproxy provides the application routing based on url paths to application containers. All application containers are completely stateless rest servers.
The setup is as follows:
haproxy-edge(s) -> haproxy-app(s) -> app-component(s)
Each haproxy-edge serves thousands of concurrent browser and api connections and does ssl offloading etc.
haproxy-app can only be reached via connections from haproxy-edge and does path routing, sets consistent response headers etc.
haproxy-edge connection settings
defaults
    mode http
    option redispatch
    option httplog
    option dontlognull
    option log-health-checks
    option http-ignore-probes
    option http-server-close
    timeout connect 5s
    timeout client 15s
    timeout server 300s
    timeout http-keep-alive 4s
    timeout http-request 10s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s
    ...

backend ...
    # Routes to haproxy-app. No backend specific connection settings at all
haproxy-app connection settings
defaults
    mode http
    balance roundrobin
    option redispatch
    option httplog
    option dontlognull
    option http-ignore-probes
    option http-server-close
    timeout connect 5s
    timeout client 15s
    timeout server 300s
    #timeout http-keep-alive 4s
    timeout http-request 10s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s

frontend http-in
    ...
    tcp-request inspect-delay 5s
    option http-server-close
    ...

backend ...
    # Routes to app components. No backend specific connection settings at all
I see no reuse of connections in the HAProxy stats page, and the number of sessions/connections seems to be similar at both haproxys, but I would expect many connections at the edge mapping to fewer, reused connections at haproxy-app.
After testing various combinations, the simple (and obvious) change of removing option http-server-close from both haproxy-edge and haproxy-app allowed connection reuse to work effectively. HAProxy 2.x has some nice new stats page values for reporting on new vs. reused connections.
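For illustration, a minimal sketch of what the shared defaults could look like after that change; http-reuse safe is shown explicitly for clarity (as far as I know it is already the default in HAProxy 2.x), and the timeouts are the ones from the question:

defaults
    mode http
    option redispatch
    option httplog
    option dontlognull
    # no "option http-server-close": server-side keep-alive stays enabled
    http-reuse safe            # allow idle server connections to be shared between requests
    timeout connect 5s
    timeout client 15s
    timeout server 300s
    timeout http-request 10s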

How to configure haproxy to use a different backend for each request

I have HAProxy 1.5.4. I would like to configure HAProxy so that a different backend is used for each request. I currently use the following config:
global
    daemon
    maxconn 500000
    nbproc 2
    log 127.0.0.1 local0 info

defaults
    mode tcp
    timeout connect 50000ms
    timeout client 500000ms
    timeout server 500000ms
    timeout check 5s
    timeout tunnel 50000ms
    option redispatch

listen httptat *:3310
    mode http
    stats enable
    stats refresh 5s
    stats uri /httpstat
    stats realm HTTPS proxy stats
    stats auth https:xxxxxxxxxxx

listen HTTPS *:5008
    mode tcp
    #maxconn 50000
    balance leastconn
    server backend1 xxx.xxx.xxx.xxx:125 check
    server backend1 xxx.xxx.xxx.xxx:126 check
    server backend1 xxx.xxx.xxx.xxx:127 check
    server backend1 xxx.xxx.xxx.xxx:128 check
    server backend1 xxx.xxx.xxx.xxx:129 check
    server backend1 xxx.xxx.xxx.xxx:130 check
    ......
Simply change the balance setting from leastconn to roundrobin.
From the HAProxy 1.5 manual:
roundrobin: Each server is used in turns, according to their weights. This is the smoothest and fairest algorithm when the server's processing time remains equally distributed. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. It is limited by design to 4095 active servers per backend. Note that in some large farms, when a server becomes up after having been down for a very short time, it may sometimes take a few hundreds requests for it to be re-integrated into the farm and start receiving traffic. This is normal, though very rare. It is indicated here in case you would have the chance to observe it, so that you don't worry.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-balance
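For example, only the balance line in the listen section from the question needs to change (placeholder addresses kept as in the question, server names made unique for clarity):

listen HTTPS *:5008
    mode tcp
    balance roundrobin         # rotate through servers in turn instead of picking the least loaded
    server backend1 xxx.xxx.xxx.xxx:125 check
    server backend2 xxx.xxx.xxx.xxx:126 check
    server backend3 xxx.xxx.xxx.xxx:127 check
    ......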

Can haproxy be configured to understand SSL sessions without being sticky to time

I am using HAProxy 1.4.24 on a SLES 11 SP3 server. I need to load balance (using least connections or round robin) between 3 servers which talk only SSL. A session from client to server starts with a client/server handshake, followed by a series of "chatty" messages, and then the session is closed.
I do not want to use the stick on src directive, since it needs a time limit argument, which makes my load balancing ineffective.
Below is the configuration file I am using. Can someone let me know how to achieve per-session stickiness (one client sticks to one server until the SSL session ends)?
global
    log /dev/log local0
    log /dev/log local1 notice
    #chroot /var/lib/haproxy
    #stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    #user haproxy
    #group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend localnodes
    bind *:80
    bind *:443
    mode tcp
    default_backend nodes

backend nodes
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server s1 s1.mydomain.com:443 check
    server s2 s2.mydomain.com:443 check
    server s3 s3.mydomain.com:443 check

How can I make HAProxy reject TCP connections when all backend servers are down

We are using HAProxy to forward incoming TCP connections to a separate server that uses raw TCP. The issue we are seeing is that the client connection is accepted and then closed, rather than rejected immediately. Since we have enabled a health check, is there any way for HAProxy to unbind from the port so that the initial connection fails?
listen custom_forward
    mode tcp
    bind *:11144
    default-server inter 10m fastinter 20s downinter 1m maxconn 100
    server custom_server hostname:10144 check
You want to explicitly reject the connection if backend servers are down:
    acl site_dead nbsrv lt 1
    tcp-request connection reject if site_dead
Or use acl site_dead nbsrv(backend_name) lt 1, where backend_name is the name of a different backend.
nbsrv documentation
acl documentation
tcp-reject documentation
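Put together, a minimal sketch of the listen section from the question with the rejection rules added (hostname and ports as in the question) could look like this:

listen custom_forward
    mode tcp
    bind *:11144
    # reject new client connections outright when no backend server is up
    acl site_dead nbsrv lt 1
    tcp-request connection reject if site_dead
    default-server inter 10m fastinter 20s downinter 1m maxconn 100
    server custom_server hostname:10144 check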