HAProxy closes connection after 1 minute - haproxy

This config:
frontend https_frontend
    bind *:4055
    mode tcp
    maxconn 8192
    use_backend https_web

backend https_web
    mode tcp
    balance roundrobin
    option http-keep-alive
    server haproxy2 xxx.xxx.xxx.xxx:4055 send-proxy-v2
A new connection sends keep-alive packets every 30 seconds, but the connection drops after 1 minute.

I think this is because you're using mode tcp, but option http-keep-alive is a mode http option. In this case, it would most likely be using whatever value you have for timeout client or timeout server before dropping the connection.
For more details about option http-keep-alive and mode http, see:
https://www.haproxy.com/documentation/aloha/7-5/traffic-management/lb-layer7/http-modes/#http-modes-in-haproxy

frontend https_frontend
    bind *:4055
    mode tcp
    maxconn 8192
    use_backend https_web

backend https_web
    mode tcp
    balance roundrobin
    timeout client 600000
    timeout server 600000
    server haproxy2 147.78.65.172:4055 send-proxy-v2
Now I send keep-alive packets and real data every 30 seconds, but the connection still drops after 2 minutes.
It's not an HTTP/HTTPS query; it's simple TCP communication with random data. Maybe that is the problem?
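One thing worth noting for anyone with the same symptom: since this is plain TCP, option http-keep-alive does nothing, and the inactivity limits come entirely from timeout client and timeout server. timeout client is only honoured in defaults, frontend, and listen sections, so placing it inside the backend (as in the updated config above) leaves the client side on whatever default applies. A minimal sketch of where the two timeouts would go, reusing the 10-minute values from the question (plain numbers are milliseconds):

frontend https_frontend
    bind *:4055
    mode tcp
    maxconn 8192
    timeout client 600000        # 10 minutes of client-side inactivity
    use_backend https_web

backend https_web
    mode tcp
    balance roundrobin
    timeout server 600000        # 10 minutes of server-side inactivity
    server haproxy2 xxx.xxx.xxx.xxx:4055 send-proxy-v2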

Related

Haproxy agent-check DRAIN(agent) status still accepts new connections

I am trying to set up load balancing for my Node.js application, which uses WebSockets. I need HAProxy to stop load-balancing new connections to a server that has reached its maximum number of connections, while keeping the existing connections intact.
I do this by performing an agent-check for each of my servers. If a server cannot accept new connections it responds with "drain" to an agent-check. If a server is able to respond to a new connection, it responds with "ready" to an agent-check.
Here is my haproxy.cfg configuration file:
global
    daemon
    maxconn 240000
    log /dev/log local0 debug
    log /dev/log local1 notice
    tune.ssl.default-dh-param 2048

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option dontlog-normal
    option http-server-close
    option redispatch
    timeout connect 20000ms
    timeout http-request 1m
    timeout client 2100000ms
    timeout server 2100000ms
    timeout queue 30s
    timeout check 5s
    timeout http-keep-alive 180s
    timeout tunnel 3600s
    timeout tarpit 60s

frontend stats
    bind *:8084
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE

frontend test
    mode http
    bind *:5000
    default_backend ws

backend ws
    mode http
    fullconn 100000
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server 1 backend1:9999 check agent-check agent-port 8080 cookie 1 inter 500 fall 1 rise 2
And here is how I respond to the HAProxy agent-check in my Node.js app:
const net = require('net');

// currentConn and MAX_CONN are maintained elsewhere in the application
const healthCheckServer = net.createServer((c) => {
  let data = '';
  if (currentConn < MAX_CONN) {
    data += 'ready';
  } else {
    data += 'drain';
  }
  // send the agent-check response and close the connection
  c.end(data + '\r\n');
});
healthCheckServer.listen(8080, '0.0.0.0');
When the number of connections to my application reaches its maximum, HAProxy correctly changes the server status to DRAIN (agent) (I can observe this in the HAProxy web dashboard). The problem is that new connections are still accepted by the application.
I am new to HAProxy, so can someone point me to where I am wrong?
I found out that when a server is drained by the agent (status set to DRAIN (agent)) and it is the only server in the backend, it still accepts new connections.
When multiple servers are present and each of them is drained, the behavior is as expected: HAProxy returns HTTP 503.
UPDATE:
It turned out that I was looking in the wrong direction all the time.
First, I had to mark the backend that processes WebSocket connections as non-HTTP (remove the mode http line). My guess is that HAProxy incorrectly counts current sessions for an HTTP backend when using WebSockets. Removing mode http solved my problem.
Second, returning maxconn:<conn> in the agent-check response looks like a much simpler and more idiomatic way to limit the number of concurrent connections (a sketch of that variant follows after the sources below).
Sources:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-agent-check
https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy/
I hope this will help someone.
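For completeness, a rough sketch of what the maxconn-based variant could look like on the HAProxy side, assuming a version whose agent-check protocol accepts the maxconn: keyword (see the first link above); the agent on port 8080 would simply reply with a line such as maxconn:100 instead of ready/drain:

backend ws
    fullconn 100000
    balance roundrobin
    cookie SERVERID insert indirect nocache
    # the agent reply "maxconn:<n>" dynamically sets this server's maxconn;
    # connections beyond that limit are queued rather than sent to the server
    server 1 backend1:9999 check agent-check agent-port 8080 cookie 1 inter 500 fall 1 rise 2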

HAProxy close front-end connection after N HTTP requests

I'm attempting to configure HAProxy to close a client TCP connection after it has been used to process N requests. My goal is to have our long-lived clients occasionally re-establish connections that are otherwise kept-alive by HTTP Keep-Alive.
Basically I'm trying to implement the equivalent of nginx's keepalive_requests (http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests).
I currently have something like:
frontend https-in
    bind *:443 ssl crt /etc/ssl/private/cert.pem
    stick-table type binary len 32 size 1000 expire 75s store gpc0
    acl close_connection sc0_get_gpc0 gt 3
    acl exceeded_connection sc0_get_gpc0 gt 4
    http-response set-header Connection Keep-Alive unless close_connection
    http-response set-header Keep-Alive timeout=75\ max=3 unless close_connection
    http-response set-header Connection Close if close_connection
    timeout http-keep-alive 75s
    timeout client 75s
    tcp-request content track-sc0 ssl_fc_session_id
    tcp-request content reject if exceeded_connection
    http-request sc-inc-gpc0
    default_backend https

backend https
    option httpchk GET /health
    server localhost 127.0.0.1:8080 maxconn 1000
But some problems with this include:
The SSL session ID may be re-used across connections
This ends up abruptly closing a connection from a client once they've exceeded the threshold (assuming they ignore the Connection: Close)
Are there any recommended approaches for something like this? Ideally I would like to:
Track the counter per individual TCP connection (not per src, to avoid the case where the same IP has multiple connections established)
Close the connection on the final response (at the same time I send Connection: Close)
But I haven't been able to track down ways to do either of those.
Thanks!
Edit
I was able to devise a better way to track unique TCP connections by hashing a tuple of src, src_port, dst, and dst_port:
http-request set-header X-Unique-Id %[src]:%[src_port]:%[dst]:%[dst_port]
http-request set-header X-Unique-Id-SHA %[req.fhdr(X-Unique-Id),sha1]
http-request track-sc0 req.fhdr(X-Unique-Id-SHA)
I'm not crazy about having to create the dummy headers, but this seems to work.
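For reference, here is how those pieces fit together in one frontend. This is just the question's own directives assembled (same 75s timeouts and gt 3 threshold); the tcp-request content reject rule from the first attempt is left out, since with header-based tracking it would need to be rethought:

frontend https-in
    bind *:443 ssl crt /etc/ssl/private/cert.pem
    timeout http-keep-alive 75s
    timeout client 75s
    # one gpc0 counter per hashed (src, src_port, dst, dst_port) tuple
    stick-table type binary len 32 size 1000 expire 75s store gpc0
    acl close_connection sc0_get_gpc0 gt 3
    # derive a per-connection key and count each request against it
    http-request set-header X-Unique-Id %[src]:%[src_port]:%[dst]:%[dst_port]
    http-request set-header X-Unique-Id-SHA %[req.fhdr(X-Unique-Id),sha1]
    http-request track-sc0 req.fhdr(X-Unique-Id-SHA)
    http-request sc-inc-gpc0
    # advertise keep-alive until the threshold, then ask the client to close
    http-response set-header Connection Keep-Alive unless close_connection
    http-response set-header Keep-Alive timeout=75\ max=3 unless close_connection
    http-response set-header Connection Close if close_connection
    default_backend https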

Why does HAProxy show that a server's check URL is a 404 when running curl on this URL is successful?

I'm setting up HAProxy to load-balance a resource between 3 back-ends. Here is the HAProxy config (in the following snippets I replaced the actual domain name with example.net):
global
    log 127.0.0.1 local2
    log-send-hostname
    maxconn 2000
    pidfile /var/run/haproxy.pid
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 30s
    daemon
    # SSL ciphers
    ...

defaults
    mode http
    option forwardfor
    option contstats
    option http-server-close
    option log-health-checks
    option redispatch
    timeout connect 5000
    timeout client 10000
    timeout server 10000
    ...

frontend front
    bind *:443 ssl crt /usr/local/etc/haproxy/front.pem
    reqadd X-Forwarded-Proto:\ https if { ssl_fc }
    stats uri /haproxy?stats
    option httpclose
    option forwardfor
    default_backend back
    balance source

backend back
    balance roundrobin
    option httpchk GET /healthcheck HTTP/1.0
    server server1 xxx.xxx.xxx.xxx:80 check inter 5s fall 2 rise 1
    server server2 yyy.yyy.yyy.yyy:8003 check backup
    server mysite example.net:80 check backup
The issue is the following: even though the first 2 servers respond correctly, the domain-based one always shows as a 404.
What is counter-intuitive to me is that if I use curl to access this same healthcheck, I get an HTTP 200 (as I would expect to see in the HAProxy stats):
curl -I http://example.net/healthcheck
HTTP/1.1 200 OK
When I ping my site, I get:
# ping example.net
PING example.net (217.160.0.195) 56(84) bytes of data.
64 bytes from 217-160-0-195.elastic-ssl.ui-r.com (217.160.0.195): icmp_seq=1 ttl=50 time=45.7 ms
Is it because the IP of my domain is shared with other domains (1&1 shared hosting) that HAProxy can't reach it? Why is that, and how can I make HAProxy reach it correctly?
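One likely explanation: the curl test sends a Host: example.net header, while option httpchk GET /healthcheck HTTP/1.0 sends no Host header at all, so a name-based shared-hosting server answers for its default site and returns 404. A minimal sketch of the usual fix, embedding the Host header in the check request (example.net again standing in for the real domain; on HAProxy 2.2+ the http-check send directive is the more modern way to achieve the same thing):

backend back
    balance roundrobin
    option httpchk GET /healthcheck HTTP/1.1\r\nHost:\ example.net
    server server1 xxx.xxx.xxx.xxx:80 check inter 5s fall 2 rise 1
    server server2 yyy.yyy.yyy.yyy:8003 check backup
    server mysite example.net:80 check backup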

Redirecting to backend based on port

I'm fairly new to HAProxy, so I'm just looking for a little direction here. Here's a log of the problem and the config for it as well. I'm trying to force specific destination ports to use a specific backend, and it's not working.
Dec 18 18:49:34 localhost HAPLB[8405]: x.x.x.x:64725 [18/Dec/2014:18:49:27.157] 890_imappop_25 890_imappop_25-smtp/<NOSRV> -1/-1/7084 187 PR 225/35/35/0/3 0/0
backend 890_imappop_25-smtp
    balance roundrobin
    option redispatch
    stick-table type ip size 60k peers mypeers
    server filter1-mail 192.168.115.38:25 check
    server filter2-mail 192.168.115.39:25 check

listen 890_imappop_25
    bind 192.168.115.100:25
    mode tcp
    balance roundrobin
    option redispatch
    option tcplog
    log 127.0.0.1 local0 debug
    stick-table type ip size 60k peers mypeers
    acl smtp_25 dst_port 25
    acl smtp_225 dst_port 225
    acl smtp_587 dst_port 587
    use_backend 890_imappop_25-smtp if smtp_25
    use_backend 890_imappop_225-smtp if smtp_225
    use_backend 890_imappop_587-smtp if smtp_587
    server imappop1-mail 192.168.115.42:25 check
    server imappop2-mail 192.168.115.43:25 check
The fix was to add mode tcp to the backend section; it was defaulting to HTTP, which SMTP obviously doesn't know how to speak. Can't believe I forgot that.
backend 890_imappop_25-smtp
    balance roundrobin
    mode tcp
    option redispatch
    stick-table type ip size 60k peers mypeers
    server filter1-mail 192.168.115.38:25 check
    server filter2-mail 192.168.115.39:25 check
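Since all of these services speak SMTP, another option worth considering is to set mode tcp once in a defaults section so that the other backends referenced by the ACLs (890_imappop_225-smtp and 890_imappop_587-smtp) inherit it too. A minimal sketch, assuming no existing defaults section already forces mode http (the timeout values are only illustrative):

defaults
    mode tcp                  # inherited by every frontend/listen/backend below
    option redispatch
    timeout connect 5s        # illustrative values; tune to the environment
    timeout client  1m
    timeout server  1m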

JBoss thread count increased after upgrading haproxy from 1.5dev21 to 1.5.1

I upgraded my HAProxy from 1.5-dev21 to the 1.5.1 stable version with the same configuration. At the backend, I am using JBoss.
As soon as we upgraded, I encountered a serious issue with JBoss thread counts: they increased tremendously.
After rolling back to 1.5-dev21, everything works fine.
Please find my HAProxy configuration file below. Kindly suggest any changes required to migrate/upgrade to 1.5.1.
global
    daemon
    maxconn 20000

defaults
    mode http
    timeout connect 15000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout queue 60s
    stats enable
    stats refresh 5s

backend backend_http
    mode http
    cookie JSESSIONID prefix
    balance leastconn
    option forceclose
    option persist
    option redispatch
    option forwardfor
    server server3 192.168.58.211:80 cookie server3_cookie maxconn 1024 check
    server server4 192.168.58.212:80 cookie server4_cookie maxconn 1024 check
    acl force_sticky_server3 hdr_sub(server3_cookie) TEST=true
    force-persist if force_sticky_server3
    acl force_sticky_server4 hdr_sub(server4_cookie) TEST=true
    force-persist if force_sticky_server4
    rspidel ^Server:.*
    rspidel ^X-Powered-By:.*
    rspidel ^AMF-Ver:.*

listen frontend_http *:80
    mode http
    maxconn 20000
    default_backend backend_http

listen frontend_https
    mode http
    maxconn 20000
    bind *:443 ssl crt /opt/haproxy-ssl/conf/ssl/testsite.pem
    reqadd X-Forwarded-Proto:\ https
    reqadd X-Forwarded-Protocol:\ https
    reqadd X-Forwarded-Port:\ 443
    reqadd X-Forwarded-SSL:\ on
    acl valid_domains hdr_end(host) -i gateway.testsite.com www.testsite.com m.testsite.com
    redirect scheme http if !valid_domains
    default_backend backend_http if valid_domains
I found this in the HAProxy manual; it may be of help:
Option "http-tunnel" disables any HTTP processing past the first request and
the first response. This is the mode which was used by default in versions
1.0 to 1.5-dev21. It is the mode with the lowest processing overhead, which
is normally not needed anymore unless in very specific cases such as when
using an in-house protocol that looks like HTTP but is not compatible, or
just to log one request per client in order to reduce log size. Note that
everything which works at the HTTP level, including header parsing/addition,
cookie processing or content switching will only work for the first request
and will be ignored after the first response.
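Based on that excerpt, one thing to try on 1.5.1 would be enabling the old tunnel behaviour explicitly in the backend. A minimal sketch (only the relevant part of the backend is shown; the acl, force-persist and rspidel lines stay as they are, and I dropped option forceclose here so that only one connection-mode option is active; whether tunnel mode is really what the application needs is something to verify):

backend backend_http
    mode http
    option http-tunnel        # the default connection mode up to 1.5-dev21, per the excerpt above
    option persist
    option redispatch
    option forwardfor
    cookie JSESSIONID prefix
    balance leastconn
    server server3 192.168.58.211:80 cookie server3_cookie maxconn 1024 check
    server server4 192.168.58.212:80 cookie server4_cookie maxconn 1024 check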