HAProxy, "timeout tunnel" vs "timeout client/server" - haproxy

When configuring HAProxy, what is the difference between setting "timeout tunnel" alone vs setting both "timeout client" and "timeout server", all to the same value?

timeout client and timeout server apply on the client side and the server side respectively, from HAProxy's point of view. They set the inactivity timeout for that leg of the connection. Both apply in TCP and HTTP mode; that said, in HTTP mode timeout server also acts as the maximum time allowed for the server to generate a response.
timeout tunnel applies only in HTTP mode, when HAProxy is operating in tunnel mode or when a WebSocket has been established. More information about timeout tunnel for WebSockets (and a comparison with the timeouts mentioned above) is available here:
https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy/
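For illustration only (the values below are placeholders, not recommendations), a configuration that sets all three timeouts could look like the sketch below; once the connection becomes a tunnel, for example after a WebSocket upgrade, timeout tunnel supersedes the client and server inactivity timeouts:

defaults
    mode http
    timeout connect 5s
    # inactivity timeout on the client/server side during normal HTTP exchanges
    timeout client 30s
    timeout server 30s
    # once a tunnel (e.g. an established WebSocket) is in place, this single
    # value replaces the client and server inactivity timeouts
    timeout tunnel 1h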

Related

Understanding keepalive between client and cockroachdb with haproxy

We are facing a problem where our client, let's name it A, is attempting to connect to a DB server (CockroachDB), name it B, load balanced via HAProxy:
A <--> haproxy <--> B
Now, every once in a while, our client A receives a Broken Pipe error, but I'm not able to understand why.
The Cockroach server already has the default value below, i.e. 60 seconds:
COCKROACH_SQL_TCP_KEEP_ALIVE ## which is enabled to send for 60 seconds
Plus our haproxy config has the following settings:
defaults
    mode tcp
    # Timeout values should be configured for your specific use.
    # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    # TCP keep-alive on client side. Server already enables them.
    option clitcpka
So what is causing the TCP connection to drop when keepalive is enabled on both ends?
Keepalive is what makes connections go away if one of the endpoints has died without closing the connection. Investigate in that direction.
The only time keepalive actually keeps a connection alive is when an ill-configured firewall drops idle connections.
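Not part of the original answer, but as a hedged sketch of the kind of adjustment that usually follows from it: if an intermediate device is dropping idle connections, sending TCP keep-alives toward the server as well and keeping the inactivity timeouts above the 60-second keep-alive interval is the typical change (values are placeholders):

defaults
    mode tcp
    timeout connect 10s
    # keep inactivity timeouts comfortably above the 60s keep-alive interval
    timeout client 5m
    timeout server 5m
    # send TCP keep-alives on both the client and the server side of the proxy
    option clitcpka
    option srvtcpka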

Azure Load Balancer keeps socket on server side open after timeout

The timeout documentation says that the client receives an error when the client sends data to the load balancer:
When the connection is closed, your client application may receive the following error message: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."
But our Service Fabric endpoint still has that TCP socket open even though it has already timed out, even after days now.
The client just sent a TCP close after the timeout was already applied.
Why does the load balancer not inform the Service Fabric node that the TCP connection was closed because of the timeout?
Can the Windows OS on the Service Fabric node close the socket after a period of no activity? I found TCP keep-alive documentation, but the TCP keep-alive feature is currently not usable by our application.
The load balancer does not send a TCP RST when a session is dropped due to the idle timeout. Please investigate with Service Fabric how to manage this scenario and enable sending of TCP keep-alives.

How to configure haproxy to use a different backend for each request

I have HAProxy 1.5.4. I would like to configure HAProxy so that a different backend is used for each request. I currently use the following config:
global
    daemon
    maxconn 500000
    nbproc 2
    log 127.0.0.1 local0 info

defaults
    mode tcp
    timeout connect 50000ms
    timeout client 500000ms
    timeout server 500000ms
    timeout check 5s
    timeout tunnel 50000ms
    option redispatch

listen httptat *:3310
    mode http
    stats enable
    stats refresh 5s
    stats uri /httpstat
    stats realm HTTPS proxy stats
    stats auth https:xxxxxxxxxxx

listen HTTPS *:5008
    mode tcp
    #maxconn 50000
    balance leastconn
    server backend1 xxx.xxx.xxx.xxx:125 check
    server backend1 xxx.xxx.xxx.xxx:126 check
    server backend1 xxx.xxx.xxx.xxx:127 check
    server backend1 xxx.xxx.xxx.xxx:128 check
    server backend1 xxx.xxx.xxx.xxx:129 check
    server backend1 xxx.xxx.xxx.xxx:130 check
    ......
Simply change the balance setting from leastconn to roundrobin.
From the HAProxy 1.5 manual:
roundrobin Each server is used in turns, according to their weights.
This is the smoothest and fairest algorithm when the server's
processing time remains equally distributed. This algorithm
is dynamic, which means that server weights may be adjusted
on the fly for slow starts for instance. It is limited by
design to 4095 active servers per backend. Note that in some
large farms, when a server becomes up after having been down
for a very short time, it may sometimes take a few hundreds
requests for it to be re-integrated into the farm and start
receiving traffic. This is normal, though very rare. It is
indicated here in case you would have the chance to observe
it, so that you don't worry.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-balance
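As a sketch of what that change looks like in the listen section from the question (addresses elided as in the original; the distinct server names are an assumption added for clarity):

listen HTTPS *:5008
    mode tcp
    # roundrobin hands each new connection to the next server in turn
    balance roundrobin
    server backend1 xxx.xxx.xxx.xxx:125 check
    server backend2 xxx.xxx.xxx.xxx:126 check
    server backend3 xxx.xxx.xxx.xxx:127 check
    ......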

Haproxy still dispatches connections to backend server when it's graceful restarting

We are using HAProxy for thrift (RPC) server load balancing in TCP mode, but we've encountered a problem when a backend server restarts.
When our thrift (RPC) server restarts, it first stops listening on the port HAProxy is configured to connect to, but it still processes the running requests until they are all done (graceful restart).
So during the restart period there are still connected sockets from clients to the backend server via HAProxy, while the backend server is not accepting any new connections; yet HAProxy still treats this backend server as healthy and will dispatch new connections to it. Any new connection dispatched to this server takes quite a long time to connect and then times out.
Is there any way to notify HAProxy that the server has stopped listening, so that it does not dispatch any more connections to it?
I've tried the following:
timeout connect set to very low + redispatch + retry 3
option tcp-check
Neither solved the problem.
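No accepted answer is quoted here, but for reference, the usual direction is to make the health check fail faster and to tear down sessions that are already established when the server is marked down. A minimal sketch, assuming HAProxy 1.5+ and with placeholder server names and addresses:

backend thrift_rpc
    mode tcp
    # check every second and mark the server down after a single failed check;
    # shutdown-sessions also closes connections already established to that server
    server rpc1 192.0.2.11:9090 check inter 1s fall 1 rise 2 on-marked-down shutdown-sessions
    server rpc2 192.0.2.12:9090 check inter 1s fall 1 rise 2 on-marked-down shutdown-sessions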

HAProxy - Request getting Broadcast to every server

I am hosting two different application versions on the same servers, on different ports. In the basic version I expect that the following configuration should send requests in round-robin fashion to the different ports. But what I am observing is that the request is getting broadcast to ALL of my server endpoints. Meaning, in the example below, my main request to port 8080 gets forwarded to both www.myappdemo.com:5001 and www.myappdemo.com:5002... although the response sent by the proxy is ALWAYS from www.myappdemo.com:5001.
Can anyone tell me what is wrong here?
global
    debug
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:8080
    default_backend servers

backend servers
    balance roundrobin
    server svr_50301 www.myappdemo.com:5001 maxconn 32 check
    server svr_50302 www.myappdemo.com:5002 maxconn 32 check
I can advise you to enable logs and the web interface; after that you can provide us with more logs, and you can also check in the web interface whether HAProxy detects your second server (svr_50302) as alive.
References to the HAProxy 1.5 docs:
Web interface - http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stats%20admin
Good info on how to enable logging - http://webdevwonders.com/haproxy-load-balancer-setup-including-logging-on-debian/
Best Regards,
Dani
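As an illustrative sketch of the suggested logging and stats setup applied to the configuration in the question (the syslog target, stats port and credentials are placeholders, not taken from the thread):

global
    debug
    maxconn 256
    # send logs to a local syslog daemon listening on UDP 514
    log 127.0.0.1 local0 info

defaults
    mode http
    log global
    option httplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

# stats web interface on a separate port
listen stats *:9000
    stats enable
    stats uri /stats
    stats refresh 5s
    stats auth admin:changeme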