How to set a custom number of backup backend servers in HAProxy?

Let's say I have 3 main servers and 3 backup servers. I want HAProxy to replace a main server with a backup server as soon as it goes down.
To elaborate: if Main Server 1 goes down, HAProxy should still use 3 servers in total, 2 main and 1 backup. Similarly, if 2 main servers go down, HAProxy should still use 3 servers in total, 1 main and 2 backup.
Also, once a main server is active again, HAProxy should stop using the backup and switch back to the main server.

You can use the backup directive on the server line and option allbackups in the backend section.
You can also add a weight to each backup server to control how traffic is shared between the backups when they are in use.
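A minimal sketch of such a backend (server names, addresses, and weights are hypothetical):

backend app
    balance roundrobin
    option allbackups
    # main servers, health-checked so traffic returns to them as soon as they recover
    server main1 10.0.0.1:8080 check
    server main2 10.0.0.2:8080 check
    server main3 10.0.0.3:8080 check
    # backup servers only receive traffic once every non-backup server is down;
    # with option allbackups, all of them are used instead of just the first one,
    # and weights share the load between them proportionally
    server backup1 10.0.0.11:8080 check backup weight 30
    server backup2 10.0.0.12:8080 check backup weight 20
    server backup3 10.0.0.13:8080 check backup weight 10

Note that, by default, HAProxy falls back to backup servers only when all main servers are down, rather than swapping servers one for one.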

Related

Ansible release with serial: 50% for two different backends in HAProxy

I have the following configuration in HAProxy:
backend backend1
    server machine-1 machine-1.com:8080
    server machine-2 machine-2.com:8080
    server machine-3 machine-3.com:8080
    server machine-4 machine-4.com:8080
    server machine-5 machine-5.com:8080
    server machine-6 machine-6.com:8080
    server machine-7 machine-7.com:8080
    server machine-8 machine-8.com:8080
    server machine-9 machine-9.com:8080
    server machine-10 machine-10.com:8080
backend backend2
    server machine-11 machine-11.com:8080
    server machine-12 machine-12.com:8080
Serial is set to 50% in the Ansible rolling deployment. We also put the machines into maintenance mode during this window. Thus Ansible puts machines 1-6 into maintenance in the first batch and machines 7-12 in the second batch.
Since machines 7-12 go into maintenance together in the second batch, the backend2 cluster has no nodes online to take traffic. This causes a huge number of issues on the application side.
How should I remediate this? I am using Ansible 2.0.0.
EDIT 1
Two solutions that I can think of:
Make two releases, one for each backend.
Replace one machine from 1-6 with one machine from backend2, say machine-11.
I am looking for solutions other than these, more along the lines of using Ansible to solve it.
Creating a host group for each backend and running the update for each backend group in a separate play would IMHO be the best solution. If there is no way to do that, it is possible to define batch sizes as a list since Ansible 2.2.
So this should work:
- name: test play
  hosts: backend_servers
  serial:
    - 5
    - 1
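For the host-group approach, here is a sketch with two plays in one playbook, assuming hypothetical inventory group names backend1_servers and backend2_servers and a placeholder task where the real deployment steps would go:

- name: release backend 1
  hosts: backend1_servers
  serial: "50%"
  tasks:
    - name: deploy application
      # real deployment / maintenance-mode tasks go here
      debug:
        msg: "deploying to {{ inventory_hostname }}"

- name: release backend 2
  hosts: backend2_servers
  serial: 1
  tasks:
    - name: deploy application
      debug:
        msg: "deploying to {{ inventory_hostname }}"

With serial: 1 on the two-node backend2, at least one of machine-11 and machine-12 stays online at all times, so neither backend is ever fully drained.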

Restarting a multi-tier server architecture

My project has 4 servers: 2 on one layer and 2 on another. I use a context switch to load balance each layer, so it shares requests amongst the two servers. 2 servers lie in the presentation tier and the other 2 servers lie in the application tier (or, as we call it, the business tier). The presentation tier has a dependency on the application tier. Now, the question I have is: if one of the servers in the application tier fails to start but the other three servers start up correctly, can you just restart the one application server that failed, or do you have to restart all 4 servers? We are using JBoss on these servers, if that helps. If more info is needed, please ask.
I did some tests, and to reiterate what was mentioned in a comment by alpha: yes, you can restart just the one application-tier server without restarting all of the other servers. I did notice that when you restart an application-tier server, most transactions tend to hit the other application-tier server that wasn't restarted; I don't know if that is a configuration thing or a JBoss thing. It isn't a problem, though: the transactions that do end up going to the restarted server, though few, work fine, and after some time the balance returns to 50-50.

LiveRebel Update Strategy

I am trying to utilize LiveRebel in my production environment. After most parts were configured, I tried to perform an update of my application from, let's say, version 1.1 to 1.3.
Does this mean that LiveRebel requires two server installations on 2 physical IP addresses? Can I have two servers on 2 virtual IP addresses?
Rolling restarts use request routing to achieve zero downtime for the users. Sessions are first drained by waiting for old sessions to expire and routing new ones to an identical application on another server. When all sessions are drained, the application is updated while the other server handles the requests.
So, as you can see, for zero downtime you need an additional server to handle the requests while the application is updated. A full restart doesn't have that requirement, but it results in downtime for users.
As for the question about IPs: as long as the two server (virtual) machines can see each other, it doesn't really make much difference.

Going from a single zookeeper server to a clustered configuration

I have a single server that I now want to replicate for higher availability. One of the elements in my software stack is ZooKeeper, so it seems natural to move it to a clustered configuration.
However, I have data on my single server, and I couldn't find any guide on going to a clustered setup. I tried setting up two independent instances and then going to a clustered configuration, but only data present on the elected master was preserved.
So, how can I safely go from a single server setup to a clustered setup without losing data?
If you go from 1 server straight to 3 servers, you may lose data: the 2 new servers are sufficient to form a quorum on their own and elect one of themselves as leader, ignoring the old server and losing all data on that machine.
If you instead grow your cluster from 1 to 2, a quorum can't form without the old server being involved, so data will not be lost. When the cluster finishes starting, all data will be synced to both servers.
Then you can grow your cluster from 2 to 3; again a quorum can't form without at least 1 server that has a copy of the database, and again when the cluster finishes starting all data will be synced to all three servers.
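A sketch of the intermediate 2-node step, with hypothetical hostnames and paths; the same zoo.cfg is deployed to both servers, and each server gets its own myid file (1 on the old server, 2 on the new one) in dataDir:

# zoo.cfg on both servers
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# a 2-node ensemble needs both votes for quorum, so the old server
# (and therefore its data) must take part in the leader election
server.1=zk-old.example.com:2888:3888
server.2=zk-new.example.com:2888:3888

Once both nodes are in sync, add a server.3 line on all three machines, create the new server's myid file, and restart the ensemble to grow it from 2 to 3.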

How is the primary server going down handled automatically in MongoDB replication?

I have never really had my hands on coding. I have a doubt regarding MongoDB replica sets.
Below is the situation:
I have an alert-monitoring application.
It is using MongoDB with a replica set of 3 nodes.
The application's Java code base keeps connecting to the primary and doing some transactions.
Now my question is:
if the primary server goes down, how will it affect the application server?
I mean, will the app server report errors such as "connection failed"?
OR
will the replica set automatically pick one of the slaves as the new master and let the application server carry on with its activity? How does that happen?
The replica set will try to pick another server as the new primary. If you have three nodes and one goes down, the other two will negotiate which of them becomes the new primary. If two go down, or communication between the remaining nodes somehow breaks down, there will be no new primary until the situation is recovered.
The official drivers support this automatic failover, as does the mongos routing server if you use it, so the application code does not need to do anything here.
I am not sure whether there will be connection errors during the brief period this failover negotiation takes (you will probably get errors for a few seconds).
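For the driver to find the new primary after an election, the connection string should list the replica set members and the set name. A sketch with hypothetical hostnames and replica set name:

mongodb://node1.example.com:27017,node2.example.com:27017,node3.example.com:27017/?replicaSet=rs0

Given such a URI, the official drivers monitor the set, notice the election, and re-route writes to the new primary automatically, which is why the application code itself does not need to handle the failover.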