How can I scale the Punjab connection manager?
server1 and server2 are behind a load balancer. Now, say I first get connected to server1 and it creates the session. On the second request, say I get connected to server2. server2 will not recognize my previous session and will disconnect the request.
My Openfire and Punjab servers each run in an EC2 box behind the load balancer.
You could add info to the HTTP header and use it for load balancing, if your balancer software supports this.
As Alex said, you can use HTTP headers to make sure you keep reaching the same node after a session is established. There is also a blog post I wrote some time ago about this issue: http://thetofu.livejournal.com/71339.html
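For instance, with HAProxy in front of the two Punjab nodes, cookie-based stickiness would look roughly like this (names, addresses, and the BOSH port are illustrative, and this assumes the clients keep cookies):

backend punjab_nodes
    balance roundrobin
    # pin each client to the node that created its BOSH session
    cookie SERVERID insert indirect nocache
    server server1 10.0.0.1:5280 check cookie s1
    server server2 10.0.0.2:5280 check cookie s2

Any balancer that can stick on a cookie, or hash a request header, can achieve the same effect.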
In some cluster environments there are paired servers that are HA two by two. For example, I have server1 with IP 22.1.1.1 and server2 with IP 22.1.1.2.
server1 is providing the service and server2 is standby. There is a virtual IP, 22.1.1.3, that other servers connect to in order to get service from server1 and server2.
Now I need to monitor this virtual IP to see whether it is up and whether servers outside its VLAN can connect to it. How can I do this in Zabbix?
I don't have an actual physical server to create in Zabbix, as described in this question. I tried to create one but got errors. Also, that question was asked three years ago; are there any new features I can use to solve this problem?
You can create a host with agent IP 22.1.1.3 and monitor it in agentless mode.
You can ping it (icmpping), connect to a TCP port that you know is open (net.tcp.service) or, in the case of a web service, make an HTTP call with the HTTP agent and react accordingly.
Just create the appropriate items/templates according to the simple check and HTTP agent documentation.
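For example, two simple-check item keys against the VIP could be (the port is an assumption; use whatever the service actually listens on):

icmpping — ICMP ping of the host interface (22.1.1.3)
net.tcp.service[tcp,,443] — checks that TCP port 443 on the host interface accepts connections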
You do not need a physical server to create a host.
You can create a host with the target IP address and use various items against it. Based on your question, you do not need agent items, but some other (remote) type.
We have a TCP application that receives connections in a protocol that we did not design and don’t control.
This protocol assumes that if it can establish a TCP connection, then it can send a message and treat that message as acknowledged.
This works OK when connecting directly to a machine: if the machine or application is down, the TCP connection is refused or dropped and the client will attempt to redeliver the message.
When we use the AWS Elastic Load Balancer, the ELB establishes a TCP connection with the client regardless of whether there is an available back-end server to fulfil the request.
As a result, if our application or server crashes, we lose messages.
The ELB closes the TCP connection shortly thereafter, but that's not good enough.
Is there a way to make the ELB establish a connection only if it can reach a back-end server?
What options do we have (within the AWS ecosystem) for balancing a TCP-based service while still refusing connections if they cannot be served?
I don't think that's achievable through ELB. By design, a load balancer manages two sets of connections (frontend-to-LB and LB-to-backend). The load balancer attempts to minimize the time it takes to serve the traffic it receives, which means the frontend connection is established while the LB looks for a backend connection to use or reuse. The case in which all of the backend hosts are dead is such an edge case that you end up with the behavior you are seeing. Normally it's not a big deal, as the request just gets disconnected once the LB figures out that it cannot serve the traffic.
Back to your protocol: to me it seems really weird that you would interpret the ability to establish a connection as equivalent to message delivery. It sounds like you're using TCP but not waiting for confirmation that the messages were actually received at the destination. That seems wrong to me and will get you in trouble eventually, with or without a load balancer.
And not to sound too pessimistic (I do understand we are not living in an ideal world): what I would do in this specific scenario, if you can deploy additional software on the client, is use a TCP proxy on the client that gets disabled automatically whenever the load balancer is unhealthy or unable to serve traffic. Instruct the client to connect to this proxy. Far from ideal, but it should do the trick. A rough sketch of the idea is below.
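As a minimal HAProxy sketch of that client-side proxy (the names, port, and check script are all hypothetical): note that a plain TCP check would still pass against the ELB, so the probe has to be application-aware, e.g. an external check script; depending on your HAProxy version, external checks may also need to be enabled in the global section.

listen local_relay
    bind 127.0.0.1:5000
    mode tcp
    # run an application-level probe of the service behind the ELB
    option external-check
    external-check command /usr/local/bin/probe_service.sh
    acl lb_dead nbsrv lt 1
    # refuse local connections when the probe fails, so the client sees
    # a refused connection exactly as it would without a balancer
    tcp-request connection reject if lb_dead
    server elb my-elb.example.amazonaws.com:5000 check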
You could create a health check on your ELB to verify whether the backend EC2 instances respond on the TCP port. See ELB Health Checks.
Then, you monitor the health status of the EC2 instances, which the ELB reports to CloudWatch.
Once you determine that none of the EC2 instances are responding on the TCP port, you can remove the TCP listener from the ELB. See Delete ELB Listeners.
Hopefully, at that point the ELB stops accepting TCP connections.
Note: I have not tested this solution.
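For reference, with the classic ELB CLI the listener toggle could look like this (the load balancer name and port are assumptions):

# remove the TCP listener once CloudWatch shows no healthy instances
aws elb delete-load-balancer-listeners --load-balancer-name my-elb --load-balancer-ports 5000

# restore it when instances report healthy again
aws elb create-load-balancer-listeners --load-balancer-name my-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=5000,InstanceProtocol=TCP,InstancePort=5000"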
I want to use this as a proxy server to connect many different clients with servers. Here is what I'm looking to do:
The server software on a user's computer would connect to a proxy server running on a VPS. It would pass in some kind of key or authentication info to identify itself, and then would maintain a persistent TCP connection to the proxy server.
A client application running on a mobile device or another computer would connect to the proxy server and pass in some kind of key or authentication info. The proxy server would match the connection between the client and server based on their authentication info, and then forward all data back and forth between the connections.
The proxy server would need to handle multiple clients and servers connecting to it at once and use the authentication info to pair them up. There could be multiple clients connecting to the same server at the same time, too. The connections from both the client and the server would be outbound so that they are not blocked by firewalls. I wrote the client and server software, so I can make them work with any specific proxy. A rough sketch of the pairing logic I have in mind is below.
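Something like this, in Python, just to illustrate the pairing (the port is made up, and a real version would need proper authentication, multi-client pairing, and cleanup):

import asyncio

waiting = {}  # auth key -> (reader, writer) of whichever side connected first

async def pump(reader, writer):
    # copy bytes one way until the source closes
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(reader, writer):
    # the first line sent on a connection is the shared auth key
    key = (await reader.readline()).strip()
    if key in waiting:
        peer_reader, peer_writer = waiting.pop(key)
        # splice the two connections: forward data in both directions
        await asyncio.gather(pump(reader, peer_writer),
                             pump(peer_reader, writer))
    else:
        waiting[key] = (reader, writer)

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())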
What is the name of this kind of proxy server? And can anyone recommend any?
Thanks!
Can anyone help me with this issue? I have installed the HAProxy load balancer and it works fine, but the problem is elsewhere. When the application connects directly to the backend server, without the load balancer, and the server is down, the application says "trying to reconnect". This is good, because the user knows the server is down. But when the application connects through the load balancer and the server is down, the application stays open and doesn't say "trying to reconnect". This is because the app is connected directly to HAProxy and thinks the connection is fine. Do you have any ideas how to make HAProxy refuse connections (or its service shut down) when all backend servers are down, and of course come back up when some of the servers are up again?
I think you're asking the same question as How can I make HAProxy reject TCP connections when all backend servers are down
You want to explicitly reject the connection if backend servers are down:
# no usable servers left in the backend
acl site_dead nbsrv lt 1
# refuse the TCP connection outright in that case
tcp-request connection reject if site_dead
Or acl site_dead nbsrv(backend_name) lt 1 where backend_name is the name of a different backend.
nbsrv documentation
acl documentation
tcp-request reject documentation
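Putting the pieces together, a fuller configuration could look roughly like this (backend name, addresses, and ports are illustrative):

frontend app_front
    bind *:5000
    mode tcp
    # reject clients when no server in app_back is usable, so they see
    # the same refusal they would get from a direct connection
    acl site_dead nbsrv(app_back) lt 1
    tcp-request connection reject if site_dead
    default_backend app_back

backend app_back
    mode tcp
    server app1 10.0.0.1:5000 check
    server app2 10.0.0.2:5000 check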
I have two Windows 2008 R2 Standard servers with IIS 7.5 installed (Server1 and Server2). On Server1 I have installed Web Farm Framework 2.2 and created a server farm, "myFarm.com". I have also installed ARR on Server1.
In the server farm, I have added Server2 and Server1 as the secondary servers. I have configured ARR with the default options. Load balancing is set to "Round Robin" so that requests can go to both servers.
To test my setup I created a Test.aspx page and deployed it on both servers. This is a simple page that returns the name of the server on which the page executes. This way I would know whether load balancing is working.
Then I opened Internet Explorer and browsed to my Test.aspx page from Server1, which hosts the Web Farm and ARR. Every time I hit the page, the request goes to Server2 only. I marked Server2 as unhealthy in the web farm to check whether Server1 would handle the request. When I hit Test.aspx in the browser, I was surprised to get the following error:
The request cannot be routed because it has reached the Max-Forwards limit. The server may be self-referencing itself in request routing topology.
From the error message it appears that when Server2 is unavailable, ARR sends the request to Server1, which then sends it to itself again, causing a loop. I couldn't find a way to stop this loopback.
One solution I found while searching is to not add Server1 to the web farm, since it hosts ARR; but I have only two servers and I don't want to dedicate one server just to ARR.
As soon as I mark Server2 healthy, requests start getting executed by Server2 again.
Could someone suggest what should be configured to resolve this error?
Thanks
You can have ARR reference itself and avoid hitting the Max-Forwards limit if you configure ARR on port 80 and your web farm on another port, e.g. 8080.
That way, when ARR routes a request to itself, it does so on the other port, which avoids forwarding the same request again and again.
Enjoy :-)
I had the same problem recently, and this is the configuration that helped me (following what Cedric suggested in another post).
So, here is what you can do:
In your web site configuration, add an additional binding for Server2, for example on port 88 (i.e. you should be able to get a response by navigating to http://Server2:88/Test.aspx).
In your server farm configuration, add a condition to your routing (Routing Rules -> URL Rewrite) to avoid processing requests that go to port 88:
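The condition in the ARR-generated URL Rewrite rule would look something like this (the rule name is an assumption derived from the farm name above):

<rule name="ARR_myFarm.com_loadbalance" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*" />
    <conditions>
        <!-- do not re-route requests that already arrived on the farm port -->
        <add input="{SERVER_PORT}" pattern="88" negate="true" />
    </conditions>
    <action type="Rewrite" url="http://myFarm.com/{R:0}" />
</rule>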