How to disable HAProxy after backend servers are down

Can anyone help me with this issue? I have installed the HAProxy load balancer and it is working perfectly, but the problem is elsewhere. When the application connects to the backend server directly, without the load balancer, and the server is down, the application says "trying to reconnect". This is good, because the user knows that the server is down. But when the application connects through the load balancer and the server is down, the application stays open and doesn't say "trying to reconnect". This is because the app is connected directly to HAProxy, so the app thinks everything is fine with the connection. Do you have any ideas how to make HAProxy shut down (or its service stop) when all backend servers are down, and of course come back up when some of the servers are up again?

I think you're asking the same question as "How can I make HAProxy reject TCP connections when all backend servers are down".
You want to explicitly reject the connection if all backend servers are down:
acl site_dead nbsrv lt 1
tcp-request reject if site_dead
Or acl site_dead nbsrv(backend_name) lt 1, where backend_name is the name of a different backend.
nbsrv documentation
acl documentation
tcp-request reject documentation
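For context, here is a minimal sketch of how these directives might fit together in a full configuration (the names, ports, and addresses are illustrative, not from the question; note that recent HAProxy versions spell the rule tcp-request connection reject):

frontend fe_app
    bind *:5000
    mode tcp
    # nbsrv(be_app) counts the usable servers in the backend; when none
    # are up, reject the TCP connection outright so the client sees a
    # refused connection instead of a silently half-open one
    acl site_dead nbsrv(be_app) lt 1
    tcp-request connection reject if site_dead
    default_backend be_app

backend be_app
    mode tcp
    server srv1 10.0.0.1:5000 check
    server srv2 10.0.0.2:5000 check

With this in place, clients connecting through HAProxy should get the same "connection refused" behavior they would get when connecting directly to a dead server.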

Related

Communication fail between Zabbix-Proxy and Server at port 10051 in a k8s cluster with HAProxy

I have a communication problem between Zabbix Proxy and Zabbix Server on port 10051. I'm using HAProxy version 2.0.13. Here is my Kubernetes scenario:
HAProxy works fine when I access my website zabbix.domain.com on ports 80 and 443.
Zabbix-Proxy has a "Server" parameter that I set to the IP address of worker-1, and that communication works fine, but only because the traffic doesn't pass through the HAProxy server. When I try to set the Server parameter to my domain address zabbix.domain.com, which points to my HAProxy server, the communication doesn't work, giving the impression that HAProxy can't handle the request.
zabbix_proxy.conf: works with the worker-1 IP address, but doesn't work with the domain name.
The domain name, as I said, points to the HAProxy server (10.0.0.110). I think zabbix-proxy is trying to reach port 10051 on the HAProxy server, and HAProxy can't handle those requests and forward them to my worker nodes.
This is my HAProxy configuration. I tested with frontend and backend sections, but for now I have just rewritten it as a listen section.
listen zabbix
    mode tcp
    bind :10051
    # note: option forwardfor applies only to HTTP mode and is ignored
    # in TCP mode, so it is dropped here; server names must also be unique
    server worker-1 10.10.10.112:10051 check
    server worker-2 10.10.10.113:10051 check
    server worker-3 10.10.10.114:10051 check
Can someone help? Is there a way to point zabbix.domain.com at HAProxy and have it forward requests on port 10051 to my worker nodes? Please tell me if you need more information.
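As a first sanity check (my suggestion, not from the question), it is worth confirming that HAProxy is actually listening on 10051 and reachable through the domain name:

# on the HAProxy host: is anything bound to port 10051?
ss -ltn | grep 10051

# from the zabbix-proxy host: can a TCP handshake complete via the domain?
nc -vz zabbix.domain.com 10051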

In HAProxy, is it possible to stop routing traffic to a specific server with active sessions by disabling the server?

I am trying to implement the following setup:
HA -|
    |- Redis1
    |- Redis2
At any time only one of the redis instances should serve the incoming requests.
Going by the documentation, it seems that you can disable a server dynamically and HA would stop directing the traffic to the disabled server.
While this worked for new client connections, existing client connections are still served content from the disabled server.
But if I kill the redis instance, even the existing client connections are redirected to the other instance.
Is it possible to achieve this behavior without killing the instance?
Here's my HA config:
global
    stats socket /opt/haproxy/admin.sock mode 660 level admin
    stats socket ipv4@*:19999 level admin
defaults
    log global
    mode tcp
listen myproxy
    mode tcp
    bind *:4444
    balance roundrobin
    server redis1 127.0.0.1:6379 check
    server redis2 127.0.0.1:7379 check
Found the answer. I needed to add the following directive:
on-marked-down shutdown-sessions
This closes any existing sessions when the server is marked down, e.g.:
server redis1 127.0.0.1:6379 check on-marked-down shutdown-sessions
server redis2 127.0.0.1:7379 check on-marked-down shutdown-sessions
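Given the admin-level stats socket in the config above, this can also be driven at runtime without killing Redis. A sketch using socat (the socket path comes from the config above; both commands are standard HAProxy runtime API calls):

# put redis1 into maintenance so no new connections reach it
echo "disable server myproxy/redis1" | socat stdio /opt/haproxy/admin.sock

# force-close any sessions still attached to it
echo "shutdown sessions server myproxy/redis1" | socat stdio /opt/haproxy/admin.sock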

AWS TCP ELB refuse connection when there is no available back-end server

We have a TCP application that receives connections in a protocol that we did not design and don’t control.
This protocol will assume that if it can establish a TCP connection, then it can send a message and that message is acknowledged.
This works OK when connecting directly to a machine: if the machine or application is down, the TCP connection is refused or dropped and the client attempts to redeliver the message.
When we use an AWS Elastic Load Balancer, the ELB establishes a TCP connection with the client regardless of whether there is an available back-end server to fulfil the request.
As a result, if our application or server crashes, we lose messages.
The ELB will close the TCP connection shortly thereafter, but that's not good enough.
Is there a way to make the ELB establish a connection only if it can reach a back-end server?
What options do we have (within the AWS ecosystem) for balancing a TCP-based service while still refusing connections that cannot be served?
I don't think that's achievable through ELB. By design a load balancer manages 2 sets of connections (frontend to LB, and LB to backend). The load balancer tries to minimize the time it takes to serve the traffic it receives, which means the FE-LB connection is established while the LB looks for a backend connection to use or reuse. The case in which all of the backend hosts are dead is enough of an edge case that you end up with the behavior you are seeing. Normally it's not a big deal, as the requester just gets disconnected once the LB figures out that it cannot serve the traffic.
Back to your protocol: to me it seems really weird that you would treat the ability to establish a connection as equal to message delivery. It sounds like you're using TCP but not waiting for confirmation that the messages were actually received at the destination. That seems wrong to me and will eventually get you in trouble, with or without a load balancer.
And not to sound too pessimistic (I do understand we are not living in an ideal world), what I would do in this specific scenario, if you can deploy additional software on the client, is use a TCP proxy on the client that is disabled automatically whenever the load balancer is unhealthy or unable to serve traffic. Instruct the client to connect to this proxy. Far from ideal, but it should do the trick.
You could create a health check from your ELB to verify if the backend EC2 instances respond on the TCP port. See ELB Health Checks
Then, you monitor the health status of the EC2 instances sent by the ELB to CloudWatch.
Once you determine that none of the EC2 instances are responding on the TCP port, you can remove the TCP listener from the ELB. See Delete ELB Listeners
Hopefully, at that point the ELB stops accepting TCP connections.
Note, I have not tested this solution.
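A sketch of how that could look with the classic ELB CLI (the load balancer name and port are hypothetical, and as noted above this approach is untested):

# health check: probe the backend TCP port directly
aws elb configure-health-check --load-balancer-name my-lb \
    --health-check Target=TCP:8000,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# once monitoring shows every instance unhealthy, drop the listener so
# new connections are refused rather than accepted and then dropped
aws elb delete-load-balancer-listeners --load-balancer-name my-lb --load-balancer-ports 8000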

haproxy: set timeout if <condition>

The question seems quite straightforward; however, I have not been able to find a proper answer.
In haproxy I have 1 backend, say:
backend-1
and 2 frontends, say:
frontend-1
frontend-2
In the backend stanza I want to set a "timeout server" parameter, but only if the connection comes from frontend-1.
As I didn't find anything, I tried to figure it out myself:
backend backend-1
    bind *:80
    option <blahblah_option>
    timeout server 1d if frontend frontend-1
This syntax does not work; I mention it only to show what I am trying to achieve.
This is not doable yet in HAProxy.
Later, you will be able to set timeouts using tcp-request and http-request rules.
What we usually do to work around this for now is to set up 2 backends using the same parameters but different server timeouts.
This is useful when only a few URLs deserve a long server timeout.
Edit, following up on your comment about multiple health checks:
Well, that's why the server's 'track' directive exists:
backend my_app
    server srv1 10.0.0.1:80 check

backend my_app_longtime
    server srv1 10.0.0.1:80 track my_app/srv1
In the conf above, the server in the my_app_longtime backend won't be checked. That said, it will follow the same state as srv1 in the backend my_app.
Baptiste
I did it like this and it worked. It made it possible to extend the timeout on specific app URLs, which are more time consuming. Used that track trick for the health check, thanks Baptiste.
frontend www-http
    bind 10.0.0.1:80
    acl long_url path_beg -i /long_url
    use_backend app-extended if long_url
    default_backend app

backend app
    server web-1 10.0.0.2:80 check

backend app-extended
    server web-1 10.0.0.2:80 track app/web-1
    timeout server 10m

Can different hosts (not IPs) forward to the same port externally?

I'm just wondering: can 2 or more different external hostnames/DNS records redirect to multiple local servers on the same port?
Let's say I have 2 DNS internet domains, for example myserver1.com and myserver2.com, and both have the same A record pointing to my forwarded server IP (e.g. 102.123.123.123). The server behind 102.123.123.123 runs 2 application servers; rather than trying to make them share a port, I use a different port for each application, for example serverApp1 listening on 0.0.0.0:2010 and serverApp2 listening on 0.0.0.0:2020.
My point is: is there any way to forward myserver1.com:2000 to serverApp1 (port 2010) and myserver2.com:2000 to serverApp2 (port 2020), even though myserver1.com and myserver2.com have the same A record?
I'm quite sure it's a matter of iptables, /etc/hosts, or BIND, but guide me if I've missed something. By the way, the servers and DNS records are accessible from the internet, and the firewalls are configured properly. Thanks.
I don't have much experience with that, but I think you will need a third server/firewall/proxy listening for the incoming hostname and routing accordingly.
Again, I don't have much experience here, so I'm not sure a firewall alone can do it: the firewall only sees the shared destination IP and port, not which domain name the client used, so the hostname has to be read from the application protocol itself.
I think you can use a redirection server like Apache.
In my application we wanted to access a lot of intranet servers from the internet. So we configured an Apache server with all the mappings in httpd.conf.
So whenever a request comes to Apache, it is redirected appropriately.
For example, I have two servers/hostnames on the intranet:
1) abc.com:7300/context1
2) xyz.com:8900/context2
We configured Apache with the hostname abcxyz.com:9000. When a request like abcxyz.com:9000/context1 comes in, it is redirected to abc.com:7300/context1, and when a request like abcxyz.com:9000/context2 comes in, it is redirected to xyz.com:8900/context2.
In your case, since the requests all go through a single server (102.123.123.123), you can use redirection.
Hope it helps.
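Since this page is otherwise about HAProxy, here is a sketch of that redirection idea in HAProxy terms, assuming the two applications speak HTTP so the requested hostname is visible in the Host header (the hostnames and ports come from the question; the section and server names are illustrative):

frontend fe_shared
    bind *:2000
    mode http
    # both names resolve to the same IP, so route on the Host header
    acl is_app1 hdr_beg(host) -i myserver1.com
    acl is_app2 hdr_beg(host) -i myserver2.com
    use_backend be_app1 if is_app1
    use_backend be_app2 if is_app2

backend be_app1
    mode http
    server app1 127.0.0.1:2010 check

backend be_app2
    mode http
    server app2 127.0.0.1:2020 check

For plain TCP protocols that carry no hostname (and no TLS SNI), this kind of name-based routing is not possible, because the proxy only ever sees the shared IP and port.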