I'm trying to configure HAProxy to reuse connections (HTTP keep-alive) to reduce the cost of establishing TCP connections.
It works fine for normal requests, but for health checks HAProxy always creates a new connection to the backend.
How can we reuse a connection for health-check purposes?
Here is my config:
balance roundrobin
mode http
option http-keep-alive
option httpchk GET /actuator/health HTTP/1.1\r\nHost:\ 10.1.1.2:8003
server micro-dev 10.1.1.2:8003 check maxconn 2000
Thanks
A few days ago I had the same question about the Connection: close header, which is unfortunately always added to the request.
The support replied that this behavior is normal.
In fact, the single-request health check tells the server to close the connection once the response is returned.
It is not possible to change this behavior.
For more information, read this: https://github.com/haproxy/haproxy/issues/560
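For reference, the check request produced by an httpchk line like the one above looks roughly like this on the wire, with the Connection: close header appended by HAProxy itself (a sketch for illustration, not a capture):
GET /actuator/health HTTP/1.1
Host: 10.1.1.2:8003
Connection: close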
Related
I'm currently investigating an issue of random "Layer4" timeouts being reported by health checks in HAProxy. The backend server being checked is demonstrably up and responding at the time of these errors, since other traffic to the server keeps flowing through.
This makes me suspect the problem is caused by our configuration.
Server health is currently configured as follows:
option httpchk GET /health HTTP/1.1\r\nHost:\ Haproxy\r\nConnection:\ close
http-check expect string OK
server server1 server1.internal.example.com check check-ssl port 443 verify none inter 3s fall 2 backup
Trying to understand the docs, I see the "http-check connect" and "linger" options mentioned. Would the "connect" directive make any actual difference to how the connection for the health check is set up, compared to our current configuration?
Any other feedback/observations on the above config is welcome.
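For comparison, here is a sketch of the same check expressed with the explicit http-check connect / http-check send directives (this assumes HAProxy 2.2 or newer, where these directives exist; the connection is established much the same way as with check-ssl, but the linger option makes HAProxy close the check connection cleanly instead of with an RST, which may be worth trying if something in the path reacts badly to resets):
option httpchk
http-check connect port 443 ssl sni server1.internal.example.com linger
http-check send meth GET uri /health ver HTTP/1.1 hdr Host Haproxy hdr Connection close
http-check expect string OK
server server1 server1.internal.example.com check verify none inter 3s fall 2 backup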
I have 3 servers:
server (A) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (B) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (C) = HAProxy as a load balancer for port 80 of servers (A) and (B)
Servers A and B are quite similar.
Everything works very well and HAProxy forwards requests to servers (A) and (B). But if Kestrel on one of the servers (e.g. A) is killed, nginx responds with a 502 Bad Gateway error, and HAProxy does not detect this and keeps sending requests to that server. This is wrong! It should send requests only to server (B) in that situation.
global
log 127.0.0.1 local2 info
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
retries 3
timeout connect 5s
timeout client 50s
timeout server 50s
stats enable
stats hide-version
stats auth admin:admin
stats refresh 10s
stats uri /stat?stats
frontend http_front
bind *:80
mode http
option httpclose
option forwardfor
reqadd X-Forwarded-Proto:\ http
default_backend http_back
backend http_back
balance roundrobin
mode http
cookie SERVERID insert indirect nocache
server ServerA 192.168.1.2:80 check cookie ServerA
server ServerB 192.168.1.3:80 check cookie ServerB
How can I resolve this issue?
Thanks very much.
You are only checking whether nginx is running, not whether the application behind it is healthy enough to use.
In the backend, add option httpchk.
option httpchk GET /some/path HTTP/1.1\r\nHost:\ example.com
Replace /some/path with a path that proves the application is usable on that server when it returns 200 OK (or any 2xx or 3xx response), and replace example.com with the HTTP Host header the application expects.
option httpchk
By default, server health checks only consist in trying to establish a TCP connection. When option httpchk is specified, a complete HTTP request is sent once the TCP connection is established, and responses 2xx and 3xx are considered valid, while all other ones indicate a server failure, including the lack of any response.
This will mark the server as unhealthy when the app is not healthy, so HAProxy will stop sending traffic to it. You will also want to configure a check interval using the inter, downinter, and fastinter options on each server entry to specify how often HAProxy should perform the check.
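Applied to a backend like the one in the question, that could look roughly like this (the /health path, the Host value, and the 3s/fall/rise numbers are placeholders to adapt):
backend http_back
balance roundrobin
mode http
option httpchk GET /health HTTP/1.1\r\nHost:\ example.com
cookie SERVERID insert indirect nocache
server ServerA 192.168.1.2:80 check inter 3s fall 2 rise 2 cookie ServerA
server ServerB 192.168.1.3:80 check inter 3s fall 2 rise 2 cookie ServerB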
So if this is the backend config:
backend main
mode http
balance leastconn
cookie serverid insert indirect nocache
stick-table type string len 36 size 1m expire 8h
stick on cookie(JSESSIONID)
option httpchk HEAD /web1 HTTP/1.0
http-check expect ! rstatus ^5
server monintdevweb 10.333.33.33:443 check cookie check ssl verify none #web1
server monintdevweb2 10.222.22.122:443 check cookie check ssl verify none #web2
server localmaint 10.100.00.105:9042 backup #maint
option log-health-checks
option redispatch
timeout connect 1s
timeout queue 5s
timeout server 3600s
It seems it always sends all users to web1, i.e. it's not evenly load balancing with the leastconn algorithm specified. I tried a stick table using the source IP and that does what I want: it persists sessions, but each new session gets balanced between the servers. Is it not possible to have that using cookies?
Another problem I noticed with cookies is that if I bring down the services on web1, all users get redirected to web2, and then redirected back to web1 when web1 is restored!! What real-life scenario would find this behavior useful?
After scouring the internet for a whole day and going through every link that talks about load balancing and sticky sessions, I found the answer immediately after I posted the question. I need to use the application's session ID, which lets the load balancer differentiate between user sessions so it can keep balancing new requests. I am not using IIS/ASP.NET, so that's why it didn't hit me earlier. But here is the config that works.
Change these lines:
cookie serverid insert indirect nocache
stick-table type string len 36 size 1m expire 8h
stick on cookie(JSESSIONID)
to:
stick-table type string len 36 size 1m expire 8h
stick on cookie(DWRSESSION)
where DWRSESSION is my application's session ID.
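Put together, the backend from the question then looks roughly like this (addresses as posted; the cookie directives are dropped since persistence now comes from the stick table):
backend main
mode http
balance leastconn
stick-table type string len 36 size 1m expire 8h
stick on cookie(DWRSESSION)
option httpchk HEAD /web1 HTTP/1.0
http-check expect ! rstatus ^5
server monintdevweb 10.333.33.33:443 check ssl verify none #web1
server monintdevweb2 10.222.22.122:443 check ssl verify none #web2
server localmaint 10.100.00.105:9042 backup #maint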
I have HAProxy 1.5.4. I would like to configure HAProxy to use a different backend server for each request; this way I want to ensure that a different server handles each request. I currently use the following config:
global
daemon
maxconn 500000
nbproc 2
log 127.0.0.1 local0 info
defaults
mode tcp
timeout connect 50000ms
timeout client 500000ms
timeout server 500000ms
timeout check 5s
timeout tunnel 50000ms
option redispatch
listen httptat *:3310
mode http
stats enable
stats refresh 5s
stats uri /httpstat
stats realm HTTPS proxy stats
stats auth https:xxxxxxxxxxx
listen HTTPS *:5008
mode tcp
#maxconn 50000
balance leastconn
server backend1 xxx.xxx.xxx.xxx:125 check
server backend2 xxx.xxx.xxx.xxx:126 check
server backend3 xxx.xxx.xxx.xxx:127 check
server backend4 xxx.xxx.xxx.xxx:128 check
server backend5 xxx.xxx.xxx.xxx:129 check
server backend6 xxx.xxx.xxx.xxx:130 check
......
Simply change the balance setting from leastconn to roundrobin.
From the HAProxy 1.5 manual:
roundrobin: Each server is used in turns, according to their weights. This is the smoothest and fairest algorithm when the server's processing time remains equally distributed. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. It is limited by design to 4095 active servers per backend. Note that in some large farms, when a server becomes up after having been down for a very short time, it may sometimes take a few hundred requests for it to be re-integrated into the farm and start receiving traffic. This is normal, though very rare. It is indicated here in case you would have the chance to observe it, so that you don't worry.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-balance
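Applied to the config above, only the balance line in the listen section changes (addresses kept as the placeholders from the question):
listen HTTPS *:5008
mode tcp
balance roundrobin
server backend1 xxx.xxx.xxx.xxx:125 check
server backend2 xxx.xxx.xxx.xxx:126 check
......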
I am hosting two different application versions on the same servers, on different ports. In this basic setup I expect the following configuration to send requests in round-robin fashion to the different ports. But what I am observing is that the request gets broadcast to ALL of my server endpoints. Meaning, in the example below, my main request to port 8080 gets forwarded to both www.myappdemo.com:5001 and www.myappdemo.com:5002, although the response sent by the proxy is ALWAYS from www.myappdemo.com:5001.
Can anyone tell me what is wrong here?
global
debug
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http-in
bind *:8080
default_backend servers
backend servers
balance roundrobin
server svr_50301 www.myappdemo.com:5001 maxconn 32 check
server svr_50302 www.myappdemo.com:5002 maxconn 32 check
I can advise you to enable logging and the web interface; after that you can provide us more logs, and you can also check in the web interface whether HAProxy detects your second server (svr_50302) as alive.
References to the HAProxy 1.5 docs:
Web Interface - http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stats%20admin
Good info on how to enable logging - http://webdevwonders.com/haproxy-load-balancer-setup-including-logging-on-debian/
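A minimal sketch of enabling both in the config from the question (the syslog target, stats port, and credentials are placeholder choices):
global
log 127.0.0.1 local0 info
maxconn 256
defaults
mode http
log global
option httplog
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
listen stats
bind *:9090
mode http
stats enable
stats uri /stats
stats refresh 5s
stats auth admin:admin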
Best Regards,
Dani