I would like to be able to use leastconn in HAProxy while still having different weights on backend hosts.
Context: I have ~100 backend hosts that are accessed by ~2000 front-end hosts. All backend hosts process requests the same way (no faster hosts), but some backend hosts can handle more requests (they have more cores). The problem is that I cannot use round robin as-is: sometimes a backend host gets stuck with long connections, and with round robin it keeps receiving more and more front-end connections, which it never recovers from. Currently I use leastconn, so all backend hosts process roughly the same number of requests, but I don't optimize their CPU usage.
What I would like to achieve is to keep using leastconn while allowing more connections to certain hosts. For example, if we have only 2 hosts: host A with 1 core and host B with 2 cores. At any moment, I would like HAProxy to decide which host to pick based on:
x = num_current_connections_A, y = 0.5 * num_current_connections_B. If x <= y, go to A; otherwise go to B.
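For illustration, a minimal Python sketch of that rule, generalized to "pick the host with the lowest connections-to-weight ratio" (this is my phrasing of the desired behavior, not anything HAProxy itself exposes; the names and counts are made up):

def pick_backend(backends):
    # Weighted leastconn: choose the host minimizing conns / weight.
    # With A (weight 1) and B (weight 2) this is exactly the x <= y rule above.
    return min(backends, key=lambda b: b["conns"] / b["weight"])

backends = [
    {"name": "A", "weight": 1, "conns": 10},  # 1 core
    {"name": "B", "weight": 2, "conns": 18},  # 2 cores
]

# x = 10, y = 0.5 * 18 = 9, so x > y and B gets the next connection.
print(pick_backend(backends)["name"])  # -> B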
I read this post, which describes the same issue, but no answer really solved my problem: http://haproxy.formilux.narkive.com/I6hSmq8H/balance-leastconn-does-not-honor-weight
Thank you
My load balancer's GUI shows that its backend has a limit of 20,000 sessions, no matter what maxconn parameter I set in the global or backend sections.
When running a test, the Max value under Sessions exceeds this limit.
Any idea how to solve it or if it's even a problem?
HAProxy version: 2.2.11-1ppa1~bionic 2021/03/18
Nothing complex, but slightly tricky: the number shown as the backend session limit is fullconn. Adding fullconn to your backend will change the limit shown on the stats page:
backend my-backend
fullconn 200000
However, if fullconn is not set, it defaults to the sum of the session limit (maxconn) values of all frontends that route to this backend, divided by 10. There is a lot more explained in the official documentation.
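For example (these frontend maxconn values are hypothetical), that default would produce exactly a 20,000 session limit:

frontend fe-a
    maxconn 100000    # routes to my-backend
frontend fe-b
    maxconn 100000    # routes to my-backend

# No fullconn set on my-backend, so it defaults to
# (100000 + 100000) / 10 = 20000 -- the limit shown on the stats page.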
Reference documentation
Suppose we have several (100) nodes in an IoT network. Each node has limited resources. There is a PostgreSQL database server running on one of these nodes. Every node has several (4-5) processes which need to interact with this server to perform insert and select queries. Each query response has to be as fast as possible for the processes to work as they should. Some ways I can think of to do this are:
Each process in a node opens its own database client (connection) and performs its queries directly.
All processes in a node send their queries to an intermediary on localhost, which executes them through an optimal number of database clients. This gives some control over how the queries are performed, e.g., ordering them through a priority queue, or running each client in a separate thread/process. In this case we control the number of clients, the number of threads/processes, and the priority order in which queries are executed.
Each node sends all queries over some network protocol directly to the database server's host, which executes them through a limited number of database clients against its local database and returns the responses to each node over the same channel. This increases latency but keeps the number of clients to a minimum, and the same optimizations (one client per thread/process, etc.) can be applied there too. The database interaction itself can be faster in this case, since the small set of clients runs on the same machine as the database, but transferring the query responses back to the node's process adds some overhead.
To keep resource usage in every node as low as possible and query responses as fast as possible, what is the best strategy for this problem?
Without knowing the networking details, option 3 is what would normally be used (a minimal sketch follows the reasons below).
Reasons:
Authentication: Typically you do not want to use database users to authenticate IoT devices.
Security: By using a specific IoT protocol, you can be sure to use TLS and certificate based server authentication.
Protocol compatibility: When upgrading, you must be able to upgrade the client nodes independently of the server nodes, or vice versa. The database protocol may not guarantee this.
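A minimal sketch of option 3, assuming Python with psycopg2 on the database host; the HTTP transport, port, and JSON request shape are illustrative stand-ins for whatever IoT protocol you choose, and TLS/authentication are omitted for brevity:

import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from psycopg2.pool import ThreadedConnectionPool

# Runs on the database host: a small, fixed pool keeps the number of
# database clients minimal no matter how many nodes send queries.
pool = ThreadedConnectionPool(minconn=1, maxconn=4,
                              dbname="iot", user="iot", host="localhost")

class QueryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        conn = pool.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute(body["sql"], body.get("params", []))
                # Only SELECT-like statements produce a result set.
                rows = cur.fetchall() if cur.description else []
            conn.commit()
        finally:
            pool.putconn(conn)
        payload = json.dumps(rows, default=str).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

ThreadingHTTPServer(("", 8080), QueryHandler).serve_forever()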
What is the best practice for building a geo-distributed cluster over asynchronous network channels?
I suspect I would need some "load balancer" that redirects connections within its own DC; do you know of anything like this already in place?
Second question: should we use one HA cluster, or create a dedicated cluster for each DC?
The assumption of the Kubernetes development team is that cross-cluster federation will be the best way to handle cross-zone workloads. The tooling for this is easy to imagine, but has not emerged yet. You can (on your own) set up regional or global load balancers and direct traffic to different clusters based on things like GeoIP.
You should look into Byzantine clients. My team is currently working on a solution for erasure-coded storage in asynchronous networks that prevents some problems caused by faulty clients, but it relies on correct clients to establish a consistent state across the servers.
The network consists of a set of servers {P1, ..., Pn} and a set of clients {C1, ..., Cn}, all of which are PITMs (probabilistic interactive Turing machines) with running time bounded by a polynomial in a given security parameter. Servers and clients together are called parties. There is an adversary, which is also a PITM with polynomially bounded running time. Some servers and clients may be controlled by the adversary; these are called corrupted, while the others are called honest. An adversary that controls up to t servers is called t-limited.
If protecting honest clients from getting inconsistent values is a priority, then you should go with one of these solutions; but from the point of view of a client, problems caused by faulty clients don't really hurt the system.
I am using httperf to benchmark web servers. My configuration: i5 processor and 4 GB RAM. How do I stress this configuration to get accurate results? I mean I have to put 100% load on this server (Ubuntu 12.04 LTS).
You can use httperf like this:
$ httperf --server <host> --port <port> --wsesslog=200,0,urls.log --rate 10
Here urls.log contains the different URIs/paths to be requested. Check the documentation for details.
Now try changing the rate or session values and see how many requests per second you can achieve and what the reply time is. Meanwhile, monitor CPU and memory utilization using mpstat or top to see whether the server is reaching 100%.
What's tricky about httperf is that it often saturates the client first, because of 1) the per-process open-files limit, 2) the TCP port range (excluding the reserved ports 0-1023, only 64512 ports are available for outgoing connections, which caps you at roughly 64512 / 60 ≈ 1075 new connections per second if each port stays tied up, e.g. in TIME_WAIT, for one minute), and 3) the socket buffer sizes. You probably need to tune these limits to avoid saturating the client.
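On Linux, the usual knobs look like this (the values are illustrative, not recommendations; tune them for your workload):

$ ulimit -n 65535                                        # raise the per-process open-files limit
$ sysctl -w net.ipv4.ip_local_port_range="1024 65535"    # widen the local port range
$ sysctl -w net.ipv4.tcp_tw_reuse=1                      # reuse TIME_WAIT ports for outgoing connections
$ sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"      # socket read buffers: min default max
$ sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"      # socket write buffers: min default max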
To saturate a server with 4 GB of memory, you would probably need multiple physical machines. I tried 6 clients, each invoking 300 req/s against a 4 GB VM, and that saturated it.
However, there are still other factors impacting the result, e.g., the pages deployed on your Apache server and the workload access patterns. The general suggestions are:
1. Test the request workload that is closest to your target scenarios.
2. Add more physical clients and watch whether the response rate, response time, or error count changes, to make sure you are not saturating the clients.
Use case: 100 servers in a pool; I want to start a ZooKeeper service on each server, and the server applications (ZooKeeper clients) will use the ZooKeeper cluster (read/write). Then there is no single point of failure.
Is this solution possible for this use case? What about the performance?
What if there are 1000 Servers in the pool?
If you are simply trying to avoid a single point of failure, then you only need 3 servers. In a 3-node ensemble, a single failure can be tolerated, with the remaining 2 nodes forming the quorum. The more servers you have, the worse write performance will be, since every write must be acknowledged by a quorum (a majority) of the servers; 100 servers is the extreme of this, if ZooKeeper can even handle it.
However, having that many clients is no problem at all. ZooKeeper has active deployments with many more than 1000 clients. If you find that you need more servers to handle your read load, you can always add Observers, as in the sketch below. I highly recommend you join the mailing list; it is an excellent way to get your questions answered quickly, and likely in much more detail than anyone will give you on SO.
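For illustration, a minimal zoo.cfg sketch for a 3-node ensemble plus one Observer (hostnames are placeholders):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181

# Three voting members: any 2 form a quorum, so one failure is tolerated.
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888

# Observers serve reads and forward writes but do not vote, so they add
# read capacity without slowing down the write quorum.
server.4=zk4.example.com:2888:3888:observer

The observer node itself additionally sets peerType=observer in its own zoo.cfg.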
Maybe ZooKeeper is not the right tool?
Hazelcast does what you want, I think. You can have hundreds of peers, and if the master is lost a new one is elected from among the peers.
You don't need to use all of Hazelcast. You can use just the maps, or just the worker pools, or just the synchronisation primitives, or just the messaging, etc.
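For example, a minimal sketch of the maps part using the Hazelcast Python client (the cluster address and map name are placeholders):

import hazelcast

# Connect to any running cluster member; the client discovers the rest.
client = hazelcast.HazelcastClient(cluster_members=["10.0.0.1:5701"])

# A distributed map shared by all peers; .blocking() makes calls synchronous.
peers = client.get_map("peers").blocking()
peers.put("node-1", "alive")
print(peers.get("node-1"))  # -> alive

client.shutdown()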