url_param routing algorithm in HAProxy - haproxy

haproxy.cfg:
global
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 5000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 10m
timeout server 10m
timeout http-keep-alive 10s
timeout check 10s
maxconn 4000
listen stats
bind :9000
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /haproxy
frontend http-in
bind *:80
acl acl_is_host hdr(host) -i example.com
acl acl_is_param url_param(id) -m str server1 server2 server3 server4
use_backend worker_cluster if acl_is_host acl_is_param
backend worker_cluster
cookie APPID insert secure httponly
balance url_param id
hash-type consistent
server w1 localhost:8080 weight 40 maxconn 1000 check
server w2 localhost:8081 weight 40 maxconn 1000 check
server w3 localhost:8082 weight 10 maxconn 1000 check
server w4 localhost:8083 weight 10 maxconn 1000 check
Previously, when I had only 2 servers (w1, w2) in the backend without any weights, it worked perfectly fine. But after I added 2 more servers (w3, w4) with weights (all server weights adding up to 100), I see issues such as requests with param id=server1 reaching server w2.
Is there any mistake in the config file?
Do the weights always have to add up to 100?
Haproxy version used: 1.5.18
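For reference: balance url_param id hashes the value of the id parameter and maps that hash onto the weighted servers; it does not match a value like server1 against a server's name, so id=server1 landing on w2 is possible whenever the server set or the weights change. The weights also do not have to add up to 100; they are relative values (0-256). If the intent is to pin each id value to one specific server, a minimal sketch using use-server (available in HAProxy 1.5; the value-to-server mapping below is purely illustrative) could look like this:

backend worker_cluster
    balance url_param id
    hash-type consistent
    # Pin each id value to a named server instead of relying on the hash.
    # Illustrative mapping only -- adjust to the real id values.
    use-server w1 if { url_param(id) -m str server1 }
    use-server w2 if { url_param(id) -m str server2 }
    use-server w3 if { url_param(id) -m str server3 }
    use-server w4 if { url_param(id) -m str server4 }
    server w1 localhost:8080 maxconn 1000 check
    server w2 localhost:8081 maxconn 1000 check
    server w3 localhost:8082 maxconn 1000 check
    server w4 localhost:8083 maxconn 1000 check

Requests whose id value matches none of the use-server rules fall back to the backend's normal balance algorithm.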

Related

haproxy Infinite page refresh loop

On different hosts there are two identical sites on IIS web servers, and access to them is open on two ports, 443 and 9000. If I enable both hosts in haproxy, then when the site page is opened, an endless page-refresh cycle occurs. It works if I disable one of the hosts or specify the weights as srv1 weight 100, srv2 weight 0. How can I fix this?
My haproxy config:
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
# log /dev/log local0
# log /dev/log local1 notice
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 10000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats mode 666 level user
ssl-default-bind-options no-sslv3
ssl-default-bind-ciphers EECDH:+AES256:-3DES:RSA+AES:RSA+3DES:!NULL:!RC4
# ssl-server-verify none
defaults
mode http
log global
option httplog
option dontlognull
#option http-server-close
#option forwardfor except 127.0.0.0/8
#option redispatch
retries 10
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 150s
timeout http-keep-alive 65s
timeout check 10s
maxconn 10000
log-format "%{+Q}o\ client = %ci:%cp, server = %si:%sp path = %HU, status = %ST, %b, t_простоя = %Ti, t_отклика = %Tr, t_сеанса = %Tt, request = %r, byte = %B"
frontend stats
bind *:8404
http-request use-service prometheus-exporter if { path /metrics }
no log
stats enable
stats uri /stats
stats refresh 10s
stats auth adm:123
frontend http
bind *:80
mode http
redirect scheme https if !{ ssl_fc }
frontend https
bind *:443 ssl crt /etc/haproxy/tls/haproxy.pem alpn h2,http/1.1
bind *:9000 ssl crt /etc/haproxy/tls/haproxy.pem alpn h2,http/1.1
option forwardfor
option http-keep-alive
http-request add-header X-Forwarded-Proto https
http-request set-header X-Client-IP %[src]
errorloc 404 https://xxxx.com/500
errorloc 500 https://xxxx.com/500
errorloc 503 https://xxxx.com/500
errorloc 504 https://xxxx.com/500
use_backend https if { hdr(host) -i xxxx.com }
use_backend 9000 if { hdr(host) -i xxxx.com:9000 }
backend https
balance roundrobin
server srv1 192.168.1.11:443 check cookie srv1 weight 80 ssl verify none
server srv2 192.168.1.111:443 check cookie srv2 weight 20 ssl verify none
backend 9000
balance roundrobin
server srv1 192.168.1.11:9000 check cookie srv1 weight 80 ssl verify none
server srv2 192.168.1.111:9000 check cookie srv2 weight 20 ssl verify none
You are using the "cookie" strategy for load balancing, but nothing ever sets this cookie:
balance roundrobin
server srv1 192.168.1.11:443 check cookie srv1 weight 80 ssl verify none
server srv2 192.168.1.111:443 check cookie srv2 weight 20 ssl verify none
As a result, the client communicates with both servers alternately. This does not work well with IIS (and/or the underlying technologies).
Simply tell HAProxy which cookie to use/set. For example, a cookie named SERVERID:
balance roundrobin
cookie SERVERID insert indirect nocache
server srv1 192.168.1.11:443 check cookie srv1 weight 80 ssl verify none
server srv2 192.168.1.111:443 check cookie srv2 weight 20 ssl verify none
If the client already has a SERVERID cookie, then HAProxy sends the traffic to the corresponding server.
If the client does not have a SERVERID cookie, then HAProxy chooses a server and instructs the client to store it, for example Set-Cookie: SERVERID=s1.
Source : https://www.haproxy.com/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
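Applied to the configuration in the question, a minimal sketch adds the same cookie line to both backends (the srv1/srv2 cookie values reuse the names already present on the server lines):

backend https
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server srv1 192.168.1.11:443 check cookie srv1 weight 80 ssl verify none
    server srv2 192.168.1.111:443 check cookie srv2 weight 20 ssl verify none
backend 9000
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server srv1 192.168.1.11:9000 check cookie srv1 weight 80 ssl verify none
    server srv2 192.168.1.111:9000 check cookie srv2 weight 20 ssl verify none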

haproxy configuration for rewriting to /myapp

Here is my haproxy configuration
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 16384
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/run/haproxy.cmd
defaults
mode http
log global
option httplog
option dontlognull
option httpclose
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 20s
timeout client 45s
timeout server 45s
timeout check 20s
maxconn 16384
listen stats :9000
mode http
stats enable
stats uri /haproxy
stats realm HAProxy\ Statistics
stats auth haproxy:password
stats admin if TRUE
listen http :80
#balance leastconn
#balance roundrobin
balance source
option http-server-close
option forwardfor
server web1 10.0.2.10:8080 check inter 3000 rise 2 fall 3
server web2 10.0.2.11:8080 check inter 3000 rise 2 fall 3
# acl has_www hdr_beg(host) -i www
# http-request redirect code 301 location myapp
It is working like this:
I type http://www.example.com:8000 or http://www.example.com and it goes to JBoss's 8080 port.
My application is actually accessible through example.com/suite, but because port 80 is blocked by the ISP I am using port 8000; because of this, my application is accessible through example.com:8000/myapp.
I want to use haproxy config to forward whoever types example.com:8000 to example.com:8000/myapp
How to achieve it?
I am missing something, right?
When you define a "server" line in your backend nodes section, you can have a URI attached to the IP and port, like below, to achieve what you are looking for:
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend localnodes
bind 0.0.0.0:9876
mode http
default_backend nodes
backend nodes
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server web01 127.0.0.1:8443/myApp check
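If attaching a path to the server address does not behave as expected, a hedged alternative sketch is the redirect rule already commented out in the question, applied to requests for the bare / (assumes HAProxy 1.5+ syntax; the /myapp path comes from the question):

listen http :80
    balance source
    option http-server-close
    option forwardfor
    # send clients requesting the site root to /myapp
    redirect location /myapp code 301 if { path / }
    server web1 10.0.2.10:8080 check inter 3000 rise 2 fall 3
    server web2 10.0.2.11:8080 check inter 3000 rise 2 fall 3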

HAProxy - Partial SSL Offloading

Currently, I'm using tcp mode to redirect SSL traffic to two backend nodes where the certs are loaded and processed. This is working as expected, and in Apache with SNI I can process the necessary domains.
For one of these domains, I need to move it to a different backend and server. My question: is it possible to set this up in haproxy within the same frontend IP/port? I do not want to load ALL the certs into HAProxy, but it would be fine to load just the cert for the site that needs to point at a different backend.
Basically, can I run something like this without needing to load all SSL certs into haproxy?
acl host_mysite hdr(host) -i mysite.example.com
Update. Here are the existing config sections:
frontend http_frontend
bind :::80 v4v6
tcp-request inspect-delay 5s
default_backend http_backend
acl host_mysite hdr(host) -i mysite.example.com
use_backend http_backend2 if host_mysite
backend http_backend
balance roundrobin
mode http
default-server inter 4s rise 3 fall 2
fullconn 20000
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
server server1 192.168.192.1:80 maxconn 10000 check
server server2 192.168.192.2:80 maxconn 10000 check
backend http_backend2
mode http
default-server inter 4s rise 3 fall 2
fullconn 20000
server server3 192.168.192.3:80 maxconn 10000 check
frontend ssl_frontend
bind :::443 v4v6
mode tcp
default_backend ssl_backend
backend ssl_backend
balance roundrobin
mode tcp
default-server inter 4s rise 3 fall 2
fullconn 20000
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
option ssl-hello-chk
server server1 192.168.192.1:443 maxconn 10000 check
server server2 192.168.192.2:443 maxconn 10000 check
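One commonly used approach, sketched below under the assumption of HAProxy 1.5+, is to inspect the SNI name in the TLS ClientHello while staying in tcp mode, so no certificate needs to be loaded into HAProxy at all (the ssl_backend2 name and the 192.168.192.3:443 server are illustrative):

frontend ssl_frontend
    bind :::443 v4v6
    mode tcp
    default_backend ssl_backend
    # wait for the TLS ClientHello so the SNI name can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # route by SNI without terminating TLS
    use_backend ssl_backend2 if { req_ssl_sni -i mysite.example.com }
backend ssl_backend2
    mode tcp
    default-server inter 4s rise 3 fall 2
    server server3 192.168.192.3:443 maxconn 10000 check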

HAproxy web server redirects

I have only one PUBLIC IP address and two web servers, one for each site I run. I have been trying to set up HAProxy to redirect requests to the correct web server, as follows:
If a user calls www.domain1.com, HAProxy sends the request to internal IP 192.168.0.249
If a user calls www.domain2.com, it is redirected to internal IP 192.168.0.100
After some searching on the Internet, I have put together the following HAProxy configuration.
global
log 127.0.0.1 local0 notice
maxconn 2000
user haproxy
group haproxy
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
timeout connect 5000
timeout client 10000
timeout server 10000
#stats
listen StatServer 0.0.0.0:80
mode http
stats enable
stats uri /haproxy?stats
stats realm Strictly\ Private
stats auth user:pass
frontend http-in
bind *:80
default_backend domain1
reqadd X-forwarded-Proto:\ http
acl req_domain2 hdr_beg(host) -i www.domain2.com
acl req_domain2 hdr_beg(host) -i domain2.com
use_backend domain2 if req_domain2
backend domain2
mode http
balance roundrobin
option httplog
option httpclose
option forwardfor
cookie JSESSIONID prefix
option httpchk HEAD /check.txt HTTP/1.0
server domain2 192.168.0.249 cookie avi_cr weight 1 maxconn 2000 check port 80 inter 2000 rise 2 fall 2
backend domain1
mode http
balance roundrobin
option httplog
option httpclose
option forwardfor
cookie JSESSIONID prefix
option httpchk HEAD /check.txt HTTP/1.0
server domain1 192.168.0.100 cookie atc_srv weight 1 maxconn 2000 check port 80 inter 2000 rise 2 fall 2
On the HAProxy stats page I always see both domain1 and domain2 as DOWN, and pages get a 503 error. Any help will be appreciated.
Regards.
Marcio
A 503 error means HAProxy could not find a working web server to send the request to. You can check this in the frontend part of the HAProxy stats page.
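For reference, a minimal sketch of Host-based routing for the two domains, using the domain-to-IP mapping described at the top of the question (the /check.txt health check is kept from the original config and is assumed to exist on both servers):

frontend http-in
    bind *:80
    acl req_domain1 hdr_beg(host) -i www.domain1.com domain1.com
    acl req_domain2 hdr_beg(host) -i www.domain2.com domain2.com
    use_backend domain2 if req_domain2
    default_backend domain1
backend domain1
    option httpchk HEAD /check.txt HTTP/1.0
    server domain1 192.168.0.249:80 check
backend domain2
    option httpchk HEAD /check.txt HTTP/1.0
    server domain2 192.168.0.100:80 check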

haproxy with 2 backends - when 1 backend queues, the other is also affected

I have haproxy setup with 2 backends: be1 and be2
I'm using ACL to route based on the path.
When be2 begins to develop a queue, the requests to be1 are negatively affected -- what normally takes 100ms takes 2-3 seconds (just like what happens to the requests going to be2).
Is there a way to allow be2 to queue up without affecting performance on be1?
At peak, I was serving about 2000 req/s.
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
#log loghost local0 info
maxconn 2000
#chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
#debug
#quiet
ulimit-n 65535
stats socket /var/run/haproxy.sock
nopoll
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend http_in *:80
option httpclose
option forwardfor
acl vt path_beg /route/1
use_backend be2 if vt
default_backend be1
backend be1
balance leastconn
option httpchk HEAD /redirect/are_you_alive HTTP/1.0
server 01-2C2P9HI x:80 check inter 3000 rise 2 fall 3 maxconn 500
backend be2
balance leastconn
option httpchk HEAD /redirect/are_you_alive HTTP/1.0
server 01-3TPDP27 x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-3CR0FKC x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-3E9CVMP x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-211LQMA x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-3H974V3 x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-13UCFVO x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-0HPIGGT x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-2LFP88F x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-1TIQBDH x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-2GG2LBB x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-1H5231E x:80 check inter 3000 rise 2 fall 3 maxconn 250
server 01-0KIOVID x:80 check inter 3000 rise 2 fall 3 maxconn 250
listen stats 0.0.0.0:7474 #Listen on all IPs on port 7474
mode http
balance
timeout client 5000
timeout connect 4000
timeout server 30000
#This is the virtual URL to access the stats page
stats uri /haproxy_stats
#Authentication realm. This can be set to anything. Escape space characters with a backslash.
stats realm HAProxy\ Statistics
#The user/pass you want to use. Change this password!
stats auth ge:test123
#This allows you to take down and bring up back end servers.
#This will produce an error on older versions of HAProxy.
stats admin if TRUE
Not sure how I didn't notice this yesterday, but I see that maxconn is set to 2000... so that is likely one of my issues?
There are two different maxconn settings: one for frontends and another for backends. The frontend setting limits incoming connections, so even though your backend is available it will not get the request, because the request is queued on the frontend side. Only once a request gets through the frontend does backend queuing take place. Frontends are affected by the maxconn setting in the "defaults" section, so I would increase that to 4000, for example, as the backends should be able to handle it.
Please note that maxconn does not limit requests per second but simultaneous connections. You might have some HTTP keep-alive connections active that could limit the available throughput a lot.
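A minimal sketch of the change suggested above (4000 is the example value from the answer; note that the global maxconn also caps every frontend, so it would have to be raised together with the defaults value):

global
    # process-wide connection cap; also limits every frontend
    maxconn 4000
defaults
    mode http
    log global
    # frontend connection limit discussed in the answer
    maxconn 4000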