I see in my HAProxy Statistics Report that
the Sessions Curr, Max, and Limit are all at 2000.
How do I increase the Max and Limit to more than 2000?
Use the maxconn setting. From the documentation:
Sets the maximum per-process number of concurrent connections to <number>
Source: https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#maxconn
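For instance, in the global section of haproxy.cfg (8000 is only an illustrative figure):
global
    maxconn 8000
HAProxy needs to be reloaded or restarted for the new limit to take effect.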
It's the maxconn option, but not in the global section as @Ton Hong Tat mentions.
maxconn
Fix the maximum number of concurrent connections on a frontend
Documentation for frontend max sessions:
https://cbonte.github.io/haproxy-dconv/1.8/snapshot/configuration.html#4-maxconn
In order to increase maxconn, edit the HAProxy configuration file with your favorite editor and add the following under your frontend section:
maxconn 4000
For example:
frontend localnodes
    bind *:8080
    mode http
    default_backend nodes
    maxconn 4000
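Putting the two answers together, the relevant parts of haproxy.cfg might look roughly like this (4000 and 8000 are purely illustrative; the 2000 shown under "Limit" on the stats page is just the default frontend maxconn):
global
    maxconn 8000

frontend localnodes
    bind *:8080
    mode http
    maxconn 4000
    default_backend nodes
The frontend value is still effectively bounded by the global one, so both may need raising.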
Which parameters in postgresql.conf should be changed, and how, to increase the maximum number of connections to 3000 if the server has 60 GB of RAM?
To increase the connection limit, change max_connections.
But I suggest you use https://pgtune.leopard.in.ua/#/; that website helps you configure the Postgres server for your hardware.
PS:
In my opinion, don't raise the limit to 3000 connections, because the working memory available to each client shrinks as the connection count grows. Use PgBouncer instead.
PgBouncer is a proxy and connection manager for when too many clients are connecting to the database; it handles the clients and maintains a connection pool.
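For reference, a minimal sketch of the relevant postgresql.conf lines (the figures are illustrative, not tuned for 60 GB of RAM; PgTune will give values specific to your workload, and changing max_connections requires a restart):
max_connections = 3000          # hard cap on concurrent client connections
shared_buffers = 15GB           # roughly 25% of RAM is a common starting point
work_mem = 4MB                  # per-sort/per-hash memory, multiplied across concurrent queries
And if you go the PgBouncer route instead, a bare-bones pgbouncer.ini (names, paths and pool sizes are only placeholders):
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 3000
default_pool_size = 100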
I'm running HAProxy 1.5.18 to front a MySQL Percona XtraDB Cluster with everything setup as per guidelines on Percona website.
I'm seeing that the "Current Sessions" statistic is not updating as I would expect for a backend that has gone down and then come back up again.
It's therefore pretty confusing to get an accurate picture of which backend MySQL node is taking all the traffic.
Here's the frontend/backend config I'm using:
frontend pxc_frontend
    mode tcp
    bind *:6033
    timeout client 28800s
    default_backend pxc_backend

backend pxc_backend
    mode tcp
    timeout connect 10s
    timeout server 28800s
    balance leastconn
    option httpchk
    stick-table type ip size 1
    stick on dst
    server percona-node1 10.99.1.21:3306 check port 9200 inter 1000 rise 3 fall 3
    server percona-node2 10.99.1.22:3306 backup check port 9200 inter 1000 rise 3 fall 3
    server percona-node3 10.99.1.23:3306 backup check port 9200 inter 1000 rise 3 fall 3
Here's what I've tried:
1) Start my application up. This makes 50 connections to the DB (via HAProxy), so the "current sessions" stat in the HAProxy UI shows 50 for the active backend (percona-node1 in my case). I verify this using netstat to check the number of connections between HAProxy and the backend MySQL node (a sketch of that netstat check appears after this list).
2) I then shut down the backend MySQL node with all the connections (percona-node1) and let HAProxy fail the connections over to the next backend in the list (percona-node2). I verify using netstat that HAProxy has 0 connections to the old backend (obviously) and now has 50 connections to the new backend. The "current sessions" stat in the HAProxy UI shows 50 for the new backend but typically shows a number <50 for the old backend.
3) I then bring the old backend MySQL node back up again (percona-node1). I verify again using netstat that HAProxy has 0 connections to the newly restarted backend and keeps its 50 connections to percona-node2. The "current sessions" stat in the HAProxy UI shows the same non-zero number for percona-node1 as before and 50 for percona-node2. I would expect it to show 0 for percona-node1 and 50 for percona-node2.
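For reference, the netstat check at each step is a command along these lines, run on the HAProxy host (the IP:port is whichever backend is being verified):
netstat -ant | grep '10.99.1.21:3306' | grep ESTABLISHED | wc -l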
So does the "current sessions" stat not get cleared down for a node that has gone down and then come back up again?
Thanks in advance for your wisdom.
In JMeter I am testing 100,000 concurrent MQTT users with a ramp-up of 10,000 and a loop count of 1.
The library that I am using for MQTT in JMeter is https://github.com/emqx/mqtt-jmeter . But I am getting
SEVERE: No buffer space available (maximum connections reached?): connect exception after reaching 64378.
Specification:
OS: Windows 10
RAM: 64 GB
CPU: i7
Configuration in the registry editor: (screenshot omitted)
This is due to Windows running out of ephemeral ports because of too many active client connections.
The default number of ephemeral TCP ports is 5000. Sometimes this is insufficient when the machine has too many active client connections; the ephemeral TCP ports are all used up, no more can be allocated to a new client connection request, and you get the error message above (for a Java application).
You should adjust the TCP/IP settings by editing the following registry values under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters registry subkey:
MaxUserPort
Specifies the maximum port number for ephemeral TCP ports.
TcpNumConnections
Specifies the maximum number of concurrent connections that TCP can open. If the value for TcpNumConnections is too low, Windows cannot assign TCP ports to new connections, and applications that need many parallel connections cannot run.
These keys are not added to the registry by default.
Follow this link to Configuring the Windows registry: Specifying TCP/IP settings and make the necessary edits.
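If you prefer to script it, the same edits can be made from an elevated command prompt with reg add; the values below are only illustrative, and a reboot is required for them to take effect:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 65534 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpNumConnections /t REG_DWORD /d 16777214 /f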
Hope this will help.
I've configured the Apache web server on my CentOS server machine. I want to handle 5000 concurrent requests with MPM Prefork. Please suggest the best Prefork configuration for that. I've done the Prefork configuration in httpd.conf, but it's not working.
My Prefork configuration:
<IfModule mpm_prefork_module>
    StartServers            5
    MinSpareServers        30
    MaxSpareServers        40
    MaxClients           5000
    ServerLimit            20
    MaxRequestsPerChild   500
</IfModule>
KeepAlive On
MaxKeepAliveRequests 5000
KeepAliveTimeout 5
Some suggestions.
MaxRequestsPerChild: if your Apache is stable, you can raise this to a MUCH higher value. This prevents your child processes from dying too often. I have a high-volume web site set at 10 000.
MaxClients (renamed MaxRequestWorkers in Apache 2.4) is set to 5000, but ServerLimit is at 20, so MaxRequestWorkers is effectively capped at 20. ServerLimit is the highest value MaxRequestWorkers is allowed to grow to, so set ServerLimit higher.
You said your tests showed 1000 requests at a time with ServerLimit == 20, so do not jump straight to ServerLimit 5000!
If you expect high traffic, increase StartServers so workers are ready before the load arrives.
When you perform your tests, ramp the load up to its maximum, then let it sit there for a while. Do not try 5000 in one go.
Set up server-status in your Apache. This lets you view the state of Apache (number of workers, what they are doing, ...). If you see all workers busy and you still have not reached your 5000, increase the values accordingly.
And finally, realise that 5000 concurrent requests means 5000 browsers actively requesting data at the same time, each with an open connection. Real users have think time and read time, so their requests are staggered far more than with a load-testing tool.
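Putting those suggestions together, a revised prefork block might look roughly like the sketch below; the worker counts are purely illustrative and should be sized against available RAM, since every prefork child is a full process (on Apache 2.2 keep the MaxClients name instead of MaxRequestWorkers):
<IfModule mpm_prefork_module>
    StartServers           50
    MinSpareServers        30
    MaxSpareServers       100
    ServerLimit          1200
    MaxRequestWorkers    1200
    MaxRequestsPerChild 10000
</IfModule>
And a minimal server-status setup for Apache 2.4, restricted to local requests:
ExtendedStatus On
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>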
I use postgresql as database. I have a master/slave with streaming replication. I want to use HAProxy for load balancing. I want to send the writes to the master, and the reads to the slave. Can I do this with haproxy?
No, you can't. HAProxy doesn't understand the PostgreSQL protocol so it has no idea what "reads" or "writes" are.
Take a look at PgPool-II, which can do this to a limited extent. In practice it's usually better to configure the application so it knows to route its read-only queries to a different server if possible.
We are doing it by defining one frontend for reads and another for writes, each listening on a different port, and routing them to backends organized around the DB cluster.
Example of HAProxy config:
frontend writes
    bind *:5439
    default_backend writes_db

frontend reads
    bind *:5438
    default_backend reads_db

backend writes_db
    option pgsql-check user haproxy
    server master_db ip.for.my.master:5432 check

backend reads_db
    balance roundrobin
    option pgsql-check user haproxy
    server replica_db ip.for.my.replica:5432 check
In our case we use Django, so we define the routers and settings.DATABASES so that all write operations go to one port of the HAProxy server (5439, the writes frontend above) and all read operations go to the other one (5438).
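For what it's worth, a minimal sketch of that Django side, assuming the HAProxy ports above (all host names, credentials and module paths are placeholders):

# settings.py -- two aliases pointing at the two HAProxy frontends
DATABASES = {
    "default": {                      # writes frontend
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "HOST": "haproxy.internal",
        "PORT": "5439",
        "USER": "app",
        "PASSWORD": "secret",
    },
    "replica": {                      # reads frontend
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "HOST": "haproxy.internal",
        "PORT": "5438",
        "USER": "app",
        "PASSWORD": "secret",
    },
}
DATABASE_ROUTERS = ["myproject.routers.ReadWriteRouter"]

# routers.py -- send reads to the replica alias and writes to default
class ReadWriteRouter:
    def db_for_read(self, model, **hints):
        return "replica"

    def db_for_write(self, model, **hints):
        return "default"

    def allow_relation(self, obj1, obj2, **hints):
        return True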