Can HAProxy use an endpoint to determine load balancing?

Is it possible for HAProxy to query a (REST HTTP) endpoint to determine whether a specific server is available for load balancing?
See the example below for our setup. If 'a' is down, the HAProxy instance 'ha' should not route over 'x'.
    x--a
   /    \
ha       out
   \    /
    y--b

Yes. Expose a page that returns an HTTP status code, and HAProxy will use that status to decide whether the server is up.
For example:
backend web-backend
    option httpchk HEAD /status.php
    server servera servera check inter 2s rise 1 fall 2
    server serverb serverb check inter 2s rise 1 fall 2
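What that /status.php page does is up to you; in this setup it could probe whether 'a' is reachable and answer 200 or 503 accordingly. A minimal sketch of such an endpoint in Python (the dependency address is a placeholder, not something from the question):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import socket

DEPENDENCY = ("127.0.0.1", 9999)  # placeholder address for service "a"

def dependency_up(addr, timeout=0.5):
    """Return True if a TCP connection to the dependency succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

class StatusHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        # HAProxy treats 2xx/3xx as "up", anything else as "down"
        self.send_response(200 if dependency_up(DEPENDENCY) else 503)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep health-check noise out of the log

def serve(port=8080):
    """Blocking loop answering HAProxy's HEAD health checks."""
    HTTPServer(("", port), StatusHandler).serve_forever()
```

Point `option httpchk HEAD /status` at this port and 'x' drops out of rotation as soon as 'a' stops accepting connections.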


HAProxy: how to listen to a REST API

I'm trying to listen to a REST API with HAProxy. I don't know if this is possible, which is my first problem; I've read about using a REST API to check HAProxy's logs.
For example:
listen IAM
    bind *:5000
    balance roundrobin
    server node1.openiam.com 10.10.0.0:9080 check fall 5 inter 2000 rise 2
    server node2.openiam.com 10.10.0.1:9080 check fall 5 inter 2000 rise 2
but I want to consume a REST API to check the health of the server:
curl -v -X GET 'http://dev1.openiamdemo.com:8080/idp/oauth2/userinfo?token=rdSOyor6hqJ2CrQ5QrpeXgX.ItgVEx1.nskN'
Can this be done?
Thanks for the help
It was resolved with the following configuration, where HAProxy calls a web service with a token:
    option httpchk GET /webconsole/rest/api/system/system/info?token=****
    server nodo1 10.10.x.x:8080 check
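For what it's worth, on HAProxy 2.2 and later the same check can also be written with the newer `http-check send` syntax (backend name illustrative, token still redacted as in the answer):

```
backend iam_back
    option httpchk
    http-check send meth GET uri /webconsole/rest/api/system/system/info?token=****
    server nodo1 10.10.x.x:8080 check
```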

haproxy stick to group of backend servers

So I am struggling to find the correct config for my haproxy:
I have Ruby on Rails web application which is served by two physical hosts, each having 3 workers. The two hosts each have a database, and the two databases are synchronised in real time.
I am trying to have sessions stick to the same host, while requests are still load balanced across the 3 workers within each host.
The objective is to avoid two consecutive requests from the same client being sent to different hosts.
My config looks like this:
frontend web_front
    bind *:4100
    default_backend web_back
backend web_back
    cookie SERVERID insert indirect nocache
    balance roundrobin
    server host_1_web_1 129.168.0.1:3000 maxconn 1 check cookie SRV_1
    server host_1_web_2 129.168.0.1:3001 maxconn 1 check cookie SRV_1
    server host_1_web_3 129.168.0.1:3002 maxconn 1 check cookie SRV_1
    server host_2_web_1 129.168.0.2:3000 maxconn 1 check cookie SRV_2
    server host_2_web_2 129.168.0.2:3001 maxconn 1 check cookie SRV_2
    server host_2_web_3 129.168.0.2:3002 maxconn 1 check cookie SRV_2
As you can see, I have set the cookie of each host's servers to the same value, hoping that requests would still be load balanced properly across workers, but now only the first worker of each host seems to get requests.
Is there a way around this? Perhaps using stick-tables?
If I am correctly understanding your requirements, you want two different levels of load balancing:
1. Server load balancing using session persistence
2. Worker load balancing without session persistence.
One solution would be to run a service on each server that listens for HAProxy connections and does the load balancing across the workers itself.
But you can still do this entirely within HAProxy by using a dummy backend:
frontend web_front
    bind *:4100
    default_backend web_back
backend web_back
    cookie SERVERID insert indirect nocache
    balance roundrobin
    server server1 127.0.0.1:3001 cookie SRV_1
    server server2 127.0.0.1:3002 cookie SRV_2
listen lt_srv1
    bind 127.0.0.1:3001
    server host_1_web_1 129.168.0.1:3000 check
    server host_1_web_2 129.168.0.1:3001 check
    server host_1_web_3 129.168.0.1:3002 check
listen lt_srv2
    bind 127.0.0.1:3002
    server host_2_web_1 129.168.0.2:3000 check
    server host_2_web_2 129.168.0.2:3001 check
    server host_2_web_3 129.168.0.2:3002 check
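The stick-table approach mentioned in the question can also handle the outer persistence layer. A sketch keyed on the client's source address (this trades the cookie for source-IP persistence, which can be coarse behind NAT):

```
backend web_back
    balance roundrobin
    # remember which server each client IP last used, for 30 minutes
    stick-table type ip size 200k expire 30m
    stick on src
    server server1 127.0.0.1:3001
    server server2 127.0.0.1:3002
```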

haproxy rest service fail-over not working

frontend localnodes
    bind *:9999 ssl crt /etc/ssl/haproxy.pem
    mode http
    default_backend servers
backend servers
    mode http
    balance roundrobin
    option forwardfor
    server A 192.168.101.129:10007 check backup ssl verify none weight 255 #fall 1 rise 1
    server B 192.168.101.129:10008 check ssl verify none weight 1 #fall 1 rise 1
I am trying to route REST services. The problem is that even when the first server is down, fail-over does not switch to the available server. SSL is working fine, though. Please tell me where I went wrong.
Thanks
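One thing worth double-checking in a config like the one above: the `backup` keyword means a server only receives traffic once every non-backup server is down. If both servers are meant to serve traffic and fail over to each other, a sketch without `backup` (reusing the addresses from the question) would be:

```
backend servers
    mode http
    balance roundrobin
    option forwardfor
    # both servers active; either one takes over alone if the other's check fails
    server A 192.168.101.129:10007 check ssl verify none
    server B 192.168.101.129:10008 check ssl verify none
```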

HAProxy random HTTP 503 errors

We've set up 3 servers:
Server A with Nginx + HAproxy to perform load balancing
backend server B
backend server C
Here is our /etc/haproxy/haproxy.cfg:
global
    log /dev/log local0
    log 127.0.0.1 local1 notice
    maxconn 40096
    user haproxy
    group haproxy
    daemon
defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 50000
    clitimeout 50000
    srvtimeout 50000
    stats enable
    stats uri /lb?stats
    stats realm Haproxy\ Statistics
    stats auth admin:admin
listen statslb :5054 # choose different names for the 2 nodes
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth admin:admin
listen Server-A 0.0.0.0:80
    mode http
    balance roundrobin
    cookie JSESSIONID prefix
    option httpchk HEAD /check.txt HTTP/1.0
    server Server-B <server.ip>:80 cookie app1inst2 check inter 1000 rise 2 fall 2
    server Server-C <server.ip>:80 cookie app1inst2 check inter 1000 rise 2 fall 3
All three servers have a good amount of RAM and CPU cores to handle requests.
Random HTTP 503 errors are shown when browsing: 503 Service Unavailable - No server is available to handle this request.
And also on server's console:
Message from syslogd@server-a at Dec 21 18:27:20 ...
haproxy[1650]: proxy Server-A has no server available!
Note that 90% of the time there are no errors; they happen randomly.
I had the same issue. After days of pulling my hair out I found the issue.
I had two HAProxy instances running. One was a zombie that somehow never got killed, perhaps during an update or an HAProxy restart. I noticed this when refreshing the /haproxy stats page: the PID would alternate between two different numbers, and the page for one of them showed absurd connection stats. To confirm, I ran
netstat -tulpn | grep 80
Or
sudo lsof -i:80
and saw two haproxy processes listening to port 80.
To fix the issue I ran "kill xxxx", where xxxx is the PID with the suspicious statistics.
Adding my answer here for anyone else who encounters this exact problem but finds none of the listed solutions applicable. Note that my answer does not apply to the original config listed above.
Check your config for the same "bind" line mistakenly repeated in multiple sections. HAProxy does not check for this during startup, and I plan to propose it as a validation check to the developers. In my case, I had 3 different sections in the config and mistakenly put the same IP binding in two of them. It was about a 50/50 shot whether the correct or the incorrect section would be used, and even when the correct section was used, about half of the requests still got a 503.
It is possible your servers share, perhaps, a common resource that is timing out at certain times, and that your health check requests are being made at the same time (and thus pulling the backend servers out at the same time).
You can try using the HAProxy option spread-checks to randomize health checks.
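A minimal sketch of that option, which lives in the global section (the value 5 is an arbitrary choice; HAProxy accepts 0-50, in percent):

```
global
    # add up to 5% random jitter to health-check intervals so all
    # servers are not probed (and potentially failed) at the same instant
    spread-checks 5
```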
I had the same issue, due to 2 HAProxy services running on the Linux box with different names/PIDs/resources. Until I stopped the unwanted one, the required instance threw 503 errors randomly, say 1 in 5 times.
I was trying to use a single Linux box for multiple URL routing, but this looks like a limitation in HAProxy, or in the config file I have defined.
Hard to say without more details, but is it possible you are exceeding the configured maxconn for each backend? The stats UI shows these counters for both the frontend and the individual backends.
I resolved my intermittent 503s with HAProxy by adding option http-server-close to the backend. It looks like uWSGI (which is upstream) does not handle keep-alive well. I'm not sure what's really behind the problem, but after adding this option I haven't seen a single 503 since.
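For reference, that option sits in the backend section; a sketch with a placeholder server line:

```
backend servers
    mode http
    # close the server-side connection after each response instead of
    # keeping it alive; works around upstreams with flaky keep-alive
    option http-server-close
    server app1 127.0.0.1:8000 check
```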
Don't use the same "bind" line in multiple sections of your haproxy.cfg.
For example, this would be wrong:
frontend stats
    bind *:443 ssl crt /etc/ssl/certs/your.pem
frontend Main
    bind *:443 ssl crt /etc/ssl/certs/your.pem
Fix it like this:
frontend stats
    bind *:8443 ssl crt /etc/ssl/certs/your.pem
frontend Main
    bind *:443 ssl crt /etc/ssl/certs/your.pem

How to configure HAProxy to send GET and POST HTTP requests to two different application servers

I am using a RESTful architecture with two application servers running. One should serve only GET requests and the other only POST requests. I want to configure HAProxy to load balance requests according to this condition. Please help me.
Here's a partial HAProxy configuration which can do this for you:
frontend webserver
    bind :80
    mode http
    acl is_post method POST
    use_backend post_app if is_post
    default_backend get_app
backend post_app
    mode http
    option forwardfor
    balance source
    option httpclose
    option httpchk HEAD / HTTP/1.0
    server post_app1 172.16.0.11:80 weight 1 check inter 1000 rise 5 fall 1
    server post_app2 172.16.0.12:80 weight 1 check inter 1000 rise 5 fall 1
    server post_app3 172.16.0.13:80 weight 1 check inter 1000 rise 5 fall 1 backup
backend get_app
    mode http
    option forwardfor
    balance source
    option httpclose
    option httpchk HEAD / HTTP/1.0
    server get_app1 172.16.0.21:80 weight 1 check inter 1000 rise 5 fall 1
    server get_app2 172.16.0.22:80 weight 1 check inter 1000 rise 5 fall 1
    server get_app3 172.16.0.23:80 weight 1 check inter 1000 rise 5 fall 1 backup
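To sanity-check the routing, the decision the is_post ACL makes can be mirrored in a few lines of Python (backend names taken from the config above). Note one consequence of this config: every non-POST method, including PUT and DELETE, falls through to get_app.

```python
def choose_backend(method: str) -> str:
    """Mirror the frontend's ACL: POST -> post_app, everything else -> get_app."""
    return "post_app" if method.upper() == "POST" else "get_app"
```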