I am using HAProxy to load balance my application on RHEL7. I have two servers, server1 and server2, and I want server1 to be the preferred server. My requirement is: server1 should serve all requests by default; if server1 fails, server2 should take over; and when server1 comes back up, it should become active again and process requests. Here is the frontend/backend section of my haproxy.conf:
frontend frontend_2143
    bind *:2143
    default_backend backend_2143

backend backend_2143
    balance roundrobin
    mode tcp
    server server1 192.160.0.3:2143 check weight 255
    server server2 192.160.0.4:2143 check
With this configuration all my requests are received at server1 at the beginning, and at server2 after server1 goes down, but when server1 comes back up, the requests are still being received at server2.
Can anyone help here?
Try to use the "backup" directive in your config.
See detailed explanation here: http://blog.haproxy.com/2013/12/23/failover-and-worst-case-management-with-haproxy/
And maybe you should check this too: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-option%20prefer-last-server
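For example, a minimal sketch of the backend from the question with server2 marked as backup (this matches the stated requirement: server1 serves everything while healthy, server2 only steps in while server1 is down):

backend backend_2143
    mode tcp
    # server1 receives all traffic while its health check passes
    server server1 192.160.0.3:2143 check
    # server2 is only used while server1 is down; traffic returns
    # to server1 automatically once its check succeeds again
    server server2 192.160.0.4:2143 check backup

With only one non-backup server, the balance algorithm and the weight setting no longer matter for this failover scenario.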
I have been using HAProxy for some time and it works fine, but my team raised one question: what happens when the primary servers come back online? Requests should then go to the primary servers instead of the backup servers. My configuration details are below; please help me resolve this.
Basically, with the configuration below my requests go to the primary servers, Server1 and Server2, in a round-robin fashion. If both primary servers go down, HAProxy switches to the backup servers, Server3 and Server4. Now, once both primary servers become live again, I want all requests to go to the primary servers and not to the backup servers. How can I configure this? Any help will be highly appreciated.
frontend Local_Server
    bind ssbbct1076:39250
    mode http
    default_backend Web_Server

backend Web_Server
    balance roundrobin
    option httplog
    option log-health-checks
    option allbackups
    server Server1 s2bbct01:39249 check
    server Server2 s2bbct02:39249 check
    server Server3 c1bbct01:39249 check backup
    server Server4 c1bbct02:39249 check backup
You can use option redispatch to redispatch requests to the active servers.
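As a rough sketch, this is how option redispatch might be added to the backend from the question (the retries value is just an illustrative choice):

backend Web_Server
    balance roundrobin
    option allbackups
    # if the selected server is down, retry the connection and allow
    # the request to be redispatched to another available server
    option redispatch
    retries 3
    server Server1 s2bbct01:39249 check
    server Server2 s2bbct02:39249 check
    server Server3 c1bbct01:39249 check backup
    server Server4 c1bbct02:39249 check backup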
I have a layer 4 HAProxy setup with three servers, configured as follows:
listen db_rw
    bind *:3306
    log global
    mode tcp
    option tcpka
    default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 1024 weight 100
    server server1 192.168.0.101:3306 check
    server server2 192.168.0.102:3306 check backup
    server server3 192.168.0.103:3306 check backup
Here server1 always receives the traffic, since the others are configured as backups; when server1 goes down, requests are then sent to the other two servers (2 and 3).
My intention is: when server1 is down, requests should be forwarded to server2; when server2 is also down, requests should go to server3 only. I am using a listen section.
Could anyone tell me how to accomplish this situation?
I think using an ACL and srv_is_up could be one possibility.
I think you have done it right already; have a look at this article: failover-and-worst-case-management-with-haproxy
It seems HAProxy will only use one backup at a time, moving to the next only when that one fails in turn.
If you want to use both backups at once, you need to specify option allbackups in the backend.
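For completeness, a sketch of the listen section from the question with allbackups added, in case both backups should ever be in rotation at the same time (the default behaviour, without it, already matches the server2-then-server3 ordering asked for):

listen db_rw
    bind *:3306
    mode tcp
    # with allbackups, all operational backup servers share the traffic
    # once server1 is down, instead of only the first declared backup
    option allbackups
    server server1 192.168.0.101:3306 check
    server server2 192.168.0.102:3306 check backup
    server server3 192.168.0.103:3306 check backup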
So I am struggling to find the correct config for my HAProxy setup:
I have a Ruby on Rails web application which is served by two physical hosts, each running 3 workers. The two hosts each have a database, and the two databases are synchronised in real time.
I am trying to have sessions stick to the same host, while requests are still load balanced across the 3 workers within each host.
The objective is to avoid two consecutive requests from the same client being sent to different hosts.
My config looks like this:
frontend web_front
    bind *:4100
    default_backend web_back

backend web_back
    cookie SERVERID insert indirect nocache
    balance roundrobin
    server host_1_web_1 129.168.0.1:3000 maxconn 1 check cookie SRV_1
    server host_1_web_2 129.168.0.1:3001 maxconn 1 check cookie SRV_1
    server host_1_web_3 129.168.0.1:3002 maxconn 1 check cookie SRV_1
    server host_2_web_1 129.168.0.2:3000 maxconn 1 check cookie SRV_2
    server host_2_web_2 129.168.0.2:3001 maxconn 1 check cookie SRV_2
    server host_2_web_3 129.168.0.2:3002 maxconn 1 check cookie SRV_2
As you can see, I have set the cookie of each host's workers to the same value, hoping that requests would still be load balanced properly across workers, but now only the first worker of each host seems to be getting requests.
Is there a way around this? Perhaps using stick tables?
If I am understanding your requirements correctly, you want two different levels of load balancing:
1. Server load balancing with session persistence.
2. Worker load balancing without session persistence.
One solution would be to have a service on the server side listening for HAProxy connections and doing the load balancing across the workers.
But you can still do this with HAProxy alone by using a dummy backend:
frontend web_front
    bind *:4100
    default_backend web_back

backend web_back
    cookie SERVERID insert indirect nocache
    balance roundrobin
    server server1 127.0.0.1:3001 cookie SRV_1
    server server2 127.0.0.1:3002 cookie SRV_2

listen lt_srv1
    bind 127.0.0.1:3001
    server host_1_web_1 129.168.0.1:3000 check
    server host_1_web_2 129.168.0.1:3001 check
    server host_1_web_3 129.168.0.1:3002 check

listen lt_srv2
    bind 127.0.0.1:3002
    server host_2_web_1 129.168.0.2:3000 check
    server host_2_web_2 129.168.0.2:3001 check
    server host_2_web_3 129.168.0.2:3002 check
I am trying to set up HAProxy on an EC2 instance but am facing the error below:
503 Service Unavailable. No server is available to handle this request.
Any help is highly appreciated. I tried many ways but all in vain.
My HAProxy version is 1.5 and this is my haproxy.cfg file:
frontend main
    bind *:80
    default_backend server

backend server
    balance roundrobin
    server node1 xx.xx.xx.xx:80 check maxconn 32
    server node2 xx.xx.xx.xx:80 check maxconn 32
The config file you shared is probably incomplete. It should contain mode http in the frontend and in the backend section if it is not set in the global/defaults settings.
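As a rough sketch, the frontend and backend might look like this with mode http added (keeping the placeholder addresses from the question):

frontend main
    bind *:80
    mode http
    default_backend server

backend server
    mode http
    balance roundrobin
    # health checks must succeed for a server to receive traffic;
    # otherwise HAProxy returns 503 because no server is available
    server node1 xx.xx.xx.xx:80 check maxconn 32
    server node2 xx.xx.xx.xx:80 check maxconn 32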
Also check that the web servers are up and running and that you can access them.
Make sure connections to the web servers are allowed through the firewall.
You can also share the full config file so the exact issue can be identified.
Hope this helps!
I have two Windows 2008 R2 Standard servers with IIS 7.5 installed (Server1 and Server2). On Server1 I have installed Web Farm Framework 2.2 and created a server farm "myFarm.com". I have also installed ARR on Server1.
In the server farm, I have added Server2 and Server1 as the secondary servers. I have configured ARR with the default options. Load balancing is set to "Round Robin" so that requests can go to either server.
To test my setup I created a Test.ASPX page and deployed it on both servers. This is a simple page which returns the name of the server on which the page is executed, so I can tell whether load balancing is working.
Then I opened Internet Explorer and tried to browse my Test.ASPX page from Server1, which hosts the web farm and ARR. Every time I hit the page, the request goes to Server2 only. I then marked Server2 as unhealthy in the web farm to check whether Server1 would handle the request. When I tried to hit Test.aspx in the browser, I was surprised to get the following error:
The request cannot be routed because it has reached the Max-Forwards limit. The server may be self-referencing itself in request routing topology.
From the error message it appears that when Server2 is not available, ARR sends the request to Server1, which again sends it to itself, causing a loop. I couldn't find a way to stop this loopback.
One of the solutions I found while searching is that I should not add Server1 to the web farm since it hosts ARR, but I have only two servers and I don't want to dedicate one of them to ARR alone.
As soon as I mark Server2 healthy again, requests start getting executed by Server2.
Could someone suggest what should be configured to resolve this error?
Thanks
You can have ARR reference itself without hitting the Max-Forwards limit if you configure ARR on port 80 and your web farm on another port, e.g. 8080.
That way, when ARR routes a request to itself, it does so on the other port, which prevents the request from being forwarded again and again.
Enjoy :-)
I had the same problem recently and this is the configuration that helped me (following what Cedric suggested in another post).
So, here is what you can do:
In your website configuration, add an additional binding for Server2, for example on port 88 (i.e. you should be able to get a response by navigating to http://Server2:88/Test.ASPX).
In your server farm configuration, add a condition to your routing (Routing Rules -> URL Rewrite) so that requests going to port 88 are not processed: