I need to understand the optimal connection configuration for chaining two HAProxy layers so that connections are reused as much as possible between the edge proxy, the app proxy and the app containers.
Where should I be using http-server-close, http-reuse or keep-alive settings, and at which layer?
I have HAProxy instances sitting in edge regions that proxy over private networks to a central data center, where another HAProxy routes to application containers based on URL paths. All application containers are completely stateless REST servers.
The setup is as follows:
haproxy-edge(s) -> haproxy-app(s) -> app-component(s)
Each haproxy-edge serves thousands of concurrent browser and API connections and handles SSL offloading, etc.
haproxy-app can only be reached via connections from haproxy-edge; it does path routing, sets consistent response headers, etc.
haproxy-edge connection settings
defaults
    mode http
    option redispatch
    option httplog
    option dontlognull
    option log-health-checks
    option http-ignore-probes
    option http-server-close
    timeout connect 5s
    timeout client 15s
    timeout server 300s
    timeout http-keep-alive 4s
    timeout http-request 10s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s

...

backend ...
    # Routes to haproxy-app. No backend specific connection settings at all
haproxy-app connection settings
defaults
    mode http
    balance roundrobin
    option redispatch
    option httplog
    option dontlognull
    option http-ignore-probes
    option http-server-close
    timeout connect 5s
    timeout client 15s
    timeout server 300s
    #timeout http-keep-alive 4s
    timeout http-request 10s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s

frontend http-in
    ...
    tcp-request inspect-delay 5s
    option http-server-close
    ...

backend ...
    # Routes to app components. No backend specific connection settings at all
I see no connection reuse on the HAProxy stats page, and the number of sessions/connections looks similar at both layers, whereas I would expect the many edge-side connections to map onto fewer, reused connections at haproxy-app.
After testing various combinations, the simple (and obvious) change of removing option http-server-close from both haproxy-edge and haproxy-app allowed connection reuse to work effectively. HAProxy 2.x has some nice new stats-page values for reporting new vs. reused connections.
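For illustration, here is a minimal sketch of the resulting defaults. The http-reuse line is an assumption on my part (safe is the usual starting point; aggressive/always trade strictness for more sharing), and the timeout values are simply the ones from the question:

defaults
    mode http
    # no "option http-server-close" / "option httpclose", so end-to-end keep-alive stays enabled
    http-reuse safe
    timeout http-keep-alive 4s
    timeout connect 5s
    timeout client 15s
    timeout server 300s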
Related
In my PHP application, when I use a direct MySQL connection it works perfectly, but when I go through the HAProxy IP for HA it takes too long to retrieve results.
The HAProxy configuration is as follows:
global
    maxconn 5000
    nbproc 5

defaults
    retries 3
    option redispatch
    timeout client 120s
    timeout connect 10s
    timeout server 120s

frontend galera_cluster_frontend
    bind 0.0.0.0:3307
    mode tcp
    log global
    option dontlognull
    option tcplog
    default_backend galera_cluster_backend

backend galera_cluster_backend
    mode tcp
    option tcpka
    option tcplog
    option log-health-checks
    retries 3
    balance leastconn
    server db-server-01 e2e-34-201:3306 check weight 1
    server db-server-02 e2e-34-202:3306 check weight 3
    server db-server-03 e2e-34-203:3306 check weight 2
I made a load balancer using HAProxy. My connections can take 1-4 minutes, so I increased the default timeout values in HAProxy to 300s as follows:
global
    daemon
    log 127.0.0.1 local0 notice
    maxconn 2000

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    timeout connect 300s
    timeout client 300s
    timeout server 300s
    option http-keep-alive

frontend LOAD_BALANCER_TIER
    bind *:80
    default_backend WEB_SERVER_TIER

backend WEB_SERVER_TIER
    balance leastconn
    mode tcp
    server segmentingApi01 some_private_ip:7331 check tcp-ut 300000
    server segmentingApi02 some_private_ip:7331 check tcp-ut 300000
    server segmentingApi03 some_private_ip:7331 check tcp-ut 300000
As you can see, I even increased the TCP timeout (tcp-ut) in the server options. Yet my requests to the load balancer time out after exactly 120s. Note that I believe the issue is with the load balancer, because when I send a request to the servers directly (some_private_ip:7331) it does not time out.
I was wondering if somebody could help me with this.
First, I don't think "redispatch" and "http-keep-alive" work in TCP mode, as HAProxy does not deal with application-layer (HTTP) information in tcp mode.
Maybe you should give "option tcpka" a try. It enables TCP keep-alive, so the OS won't drop the connection when no data is exchanged, which I suspect is what is happening here.
You should also not set the connect timeout to such a high value, because it only covers establishing the initial connection to the server.
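A minimal sketch of what that suggestion could look like in this configuration (the timeout values are only examples, not recommendations):

defaults
    mode tcp
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    option tcpka            # enable TCP keep-alive probes so idle connections are not silently dropped
    timeout connect 10s     # only covers establishing the connection, so it can stay small
    timeout client 300s
    timeout server 300s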
I have 3 servers:
server (A) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (B) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (C) = HAProxy as a load balancer for port 80 of servers (A) and (B)
Servers A and B are essentially identical.
Everything works very well and HAProxy forwards requests to servers (A) and (B). But if Kestrel on one of the servers (e.g. A) is killed, nginx responds with a 502 Bad Gateway error and HAProxy does not detect the issue; it keeps sending requests to that server, which is wrong. It should send requests only to server (B) in that situation.
global
    log 127.0.0.1 local2 info
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option redispatch
    retries 3
    timeout connect 5s
    timeout client 50s
    timeout server 50s
    stats enable
    stats hide-version
    stats auth admin:admin
    stats refresh 10s
    stats uri /stat?stats

frontend http_front
    bind *:80
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ http
    default_backend http_back

backend http_back
    balance roundrobin
    mode http
    cookie SERVERID insert indirect nocache
    server ServerA 192.168.1.2:80 check cookie ServerA
    server ServerB 192.168.1.3:80 check cookie ServerB
How can I resolve this issue?
Thanks very much.
You are only checking whether nginx is running, not whether the application is healthy enough to use.
In the backend, add option httpchk.
option httpchk GET /some/path HTTP/1.1\r\nHost:\ example.com
Replace /some/path with a path that will prove the application is usable on that server if it returns 200 OK (or any 2xx or 3xx response), and replace example.com with the HTTP Host header the application expects.
option httpchk
By default, server health checks only consist in trying to establish a TCP connection. When option httpchk is specified, a complete HTTP request is sent once the TCP connection is established, and responses 2xx and 3xx are considered valid, while all other ones indicate a server failure, including the lack of any response.
This will mark the server as unhealthy when the app is not healthy, so HAProxy will stop sending traffic to it. You will also want to configure the check interval using the inter, downinter and fastinter options on each server entry, to control how often HAProxy performs the check.
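A sketch of the backend with the health check and example check intervals added (/some/path and example.com are placeholders as above; the interval values are only examples):

backend http_back
    balance roundrobin
    mode http
    option httpchk GET /some/path HTTP/1.1\r\nHost:\ example.com
    cookie SERVERID insert indirect nocache
    server ServerA 192.168.1.2:80 check inter 5s fastinter 1s downinter 10s cookie ServerA
    server ServerB 192.168.1.3:80 check inter 5s fastinter 1s downinter 10s cookie ServerB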
The HAProxy documentation (http://cbonte.github.io/haproxy-dconv/1.7/intro.html#3.3.2) lists this as a basic feature:
authentication with the backend server lets the backend server know it's really the expected haproxy node that is connecting to it
I have been attempting to do just that and have been unable to. So here's the question:
How do I send a request to a backend that uses self-signed certificates for authentication? The front-end request that uses this backend is plain HTTP.
Here's my haproxy.cfg file:
global
    maxconn 4096
    daemon
    log 127.0.0.1 local0

defaults
    log global
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5s
    timeout client 15min
    timeout server 15min

frontend public
    bind *:8213
    use_backend api if { path_beg /api/ }
    default_backend web

backend web
    mode http
    server blogweb1 127.0.0.1:4000

backend api
    mode tcp
    acl clienthello req.ssl_hello_type 1
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    server blogapi 127.0.0.1:8780
I eventually got this working. I believe what was throwing me off was that running haproxy -f <configFile> -st didn't actually stop the old process as I thought it would, so none of my changes/updates took effect. I killed (kill -9) the dozens of lingering haproxy processes, reran the command (haproxy -f <configFile>), and now it's working.
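For what it's worth, -st (like -sf) expects the list of old PIDs to signal, so without it the old processes keep running. A common reload pattern (assuming the default pid file location) looks like this:

# start a new process and tell it which old PIDs to signal so they finish and exit
haproxy -f <configFile> -sf $(cat /var/run/haproxy.pid)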
Now, this is a hypothesis, albeit one I am very confident in. I will still present my final configuration in case someone can glean something from it. I used https://www.haproxy.com/doc/aloha/7.0/deployment_guides/tls_layouts.html, which answers the question I had of "how do you authenticate to the backend using SSL", as the docs say you can.
global
    maxconn 4096
    daemon
    log 127.0.0.1 local0

defaults
    log global
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5s
    timeout client 15min
    timeout server 15min

frontend public
    bind *:443
    mode http
    use_backend api if { path_beg /api/ }

backend api
    mode http
    option httplog
    server blogapi 127.0.0.1:4430 ssl ca-file <caFile.Pem> crt <clientCert.pem> verify required
I have HAProxy 1.5.4. I would like to configure HAProxy to use a different backend server for each request; this way, I want to ensure that a different server is used for each request. I currently use the following config:
global
    daemon
    maxconn 500000
    nbproc 2
    log 127.0.0.1 local0 info

defaults
    mode tcp
    timeout connect 50000ms
    timeout client 500000ms
    timeout server 500000ms
    timeout check 5s
    timeout tunnel 50000ms
    option redispatch

listen httptat *:3310
    mode http
    stats enable
    stats refresh 5s
    stats uri /httpstat
    stats realm HTTPS proxy stats
    stats auth https:xxxxxxxxxxx

listen HTTPS *:5008
    mode tcp
    #maxconn 50000
    balance leastconn
    server backend1 xxx.xxx.xxx.xxx:125 check
    server backend1 xxx.xxx.xxx.xxx:126 check
    server backend1 xxx.xxx.xxx.xxx:127 check
    server backend1 xxx.xxx.xxx.xxx:128 check
    server backend1 xxx.xxx.xxx.xxx:129 check
    server backend1 xxx.xxx.xxx.xxx:130 check
    ......
Simply change the balance setting from leastconn to roundrobin (see the sketch after the quoted documentation below).
From the HAProxy 1.5 manual:
roundrobin: Each server is used in turns, according to their weights. This is the smoothest and fairest algorithm when the server's processing time remains equally distributed. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. It is limited by design to 4095 active servers per backend. Note that in some large farms, when a server becomes up after having been down for a very short time, it may sometimes take a few hundred requests for it to be re-integrated into the farm and start receiving traffic. This is normal, though very rare. It is indicated here in case you would have the chance to observe it, so that you don't worry.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-balance
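A sketch of the listen section with only the balance line changed (addresses and ports are the placeholders from the question, and unique server names are assumed):

listen HTTPS *:5008
    mode tcp
    balance roundrobin
    server backend1 xxx.xxx.xxx.xxx:125 check
    server backend2 xxx.xxx.xxx.xxx:126 check
    ......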