What is connections per host in MongoDB?

I followed this tutorial and it has a configuration option called connections per host.
What is this?

connectionsPerHost is the maximum number of physical connections a single Mongo client instance (it's a singleton, so you usually have one per application) can establish to a mongod/mongos process. At the time of writing, the Java driver will eventually establish this many connections even if the actual query throughput is low (in other words, you will see the "conn" statistic in mongostat rise until it hits this number per app server).
There is no need to set this higher than 100 in most cases, but this setting is one of those "test it and see" things. Do note that you will have to make sure you set this low enough that the total number of connections to your server, across all app servers, does not exceed what the server can accept.
Found here: How to configure MongoDB Java driver MongoOptions for production use?
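For reference, a minimal sketch of where this setting lives when using the legacy Java driver's MongoClientOptions (host, port and the value 100 are placeholders; newer drivers expose the same idea as the pool's maximum size, e.g. maxPoolSize in the connection string):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class MongoClientFactory {
    // One shared client per application; it maintains the connection pool internally.
    public static MongoClient create() {
        MongoClientOptions options = MongoClientOptions.builder()
                .connectionsPerHost(100) // upper bound on pooled connections to each mongod/mongos
                .build();
        return new MongoClient(new ServerAddress("db.example.com", 27017), options);
    }
}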

Related

Maximising concurrent request handling with PostgreSQL / Npgsql client

I have a DB and a client app that does reads and writes. I need to handle a lot of concurrent reads but be sure that writes get priority, while also respecting my DB's connection limit.
Long version:
I have a single-instance PostgreSQL database which allows 100 connections.
My .NET microservice uses Npgsql to connect to the DB. It has to do read queries that can take 20-2000 ms and writes that can take about 500-2000 ms. Right now there are 2 instances of the app, connecting with the same user credentials. I am trusting Npgsql to manage my connection pooling, and am preparing my read queries as there are basically just 2 or 3 variants with different parameter values.
As user requests increased, I started having problems with the database's connection limit: errors like 'Too many connections' from the DB.
To deal with this I introduced a simple gate system in my repo class:
private static readonly SemaphoreSlim _writeGate = new(20, 20);
private static readonly SemaphoreSlim _readGate = new(25, 25);

public async Task<IEnumerable<SomeDataItem>> ReadData(string query, CancellationToken ct)
{
    await _readGate.WaitAsync(ct);
    try
    {
        // (elided) run the read query and return the rows
    }
    finally
    {
        _readGate.Release();
    }
}

public async Task WriteData(IEnumerable<SomeDataItem> items, CancellationToken ct)
{
    await _writeGate.WaitAsync(ct);
    try
    {
        // (elided) execute the write
    }
    finally
    {
        _writeGate.Release();
    }
}
I chose to have separate gates for read and write because I wanted to be confident that reads would not get completely blocked by concurrent writes.
The limits are hardcoded as above: a total limit of 45 on each of the 2 app instances, connecting to 1 DB server instance.
It is more important that attempts to write data do not fail than attempts to read. I have some further safety here with a Polly retry pattern.
This was alright for a while, but as the concurrent read requests increase, I see that the response times start to degrade, as a backlog of read requests begins to accumulate.
So, for this question, assume my SQL queries and DB schema are optimized to the max: what can I do to improve my throughput?
I know that there are times when my _readGate is maxed out, but there is free capacity in the _writeGate. However I don’t dare reduce the hardcoded limits because at other times I need to support concurrent writes. So I need some kind of QoS solution that can allow more concurrent reads when possible, but will give priority to writes when needed.
Queue management is pretty complicated to me but is also quite well known to many, so is there a good NuGet package that can help me out? (I'm not even sure what to Google.)
Is there a simple change to my code to improve on what I have above?
Would it help to have different conn strings / users for reads vs writes?
Anything else I can do with npgsql / connection string that can improve things?
I think that PostgreSQL recommends limiting connections to 100; there's an SO thread on this here: How to increase the max connections in postgres?
There's always a limit to how many simultaneous queries you can run before performance stops improving and eventually drops off.
However, I can see in my Azure telemetry that my DB server is not coming close to fully using CPU, RAM or disk IO (CPU doesn't exceed 70% and is often less, memory is the same, and IOPS is under 30% of its capacity), so I believe there is more to be squeezed out somewhere :)
Maybe there are other places to investigate, but for the sake of this question I'd just like to focus on how to better manage connections.
First, if you're getting "Too many connections" on the PostgreSQL side, that means that the total number of physical connections being opened by Npgsql exceeds the max_connections setting in PG. You need to make sure that the aggregate total of Npgsql's Max Pool Size across all app instances doesn't exceed that, so if your max_connections is 100 and you have two app instances, each needs to run with Max Pool Size=50.
Second, you can indeed have different connection pools for reads vs. writes, by having different connection strings (a good trick for that is to set the Application Name to different values). However, you may want to set up one or more read replicas (primary/secondary setup); this would allow all read workload to be directed to the read replica(s), while keeping the primary for write operations only. This is a good load-balancing technique, and Npgsql 6.0 has introduced great support for it (https://www.npgsql.org/doc/failover-and-load-balancing.html).
Apart from that, you can definitely experiment with increasing max_connections on the PG side - and accordingly Max Pool Size on the clients' side - and load-test what this does to resource utilization.
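As a hedged illustration of the two points above (host, database, credentials and sizes are placeholders; Npgsql's documented keywords for pool size and application name are Maximum Pool Size and Application Name), two connection strings that split one app instance's budget into separate read and write pools:

Host=db.example.com;Database=mydb;Username=app;Password=...;Maximum Pool Size=20;Application Name=myservice-writes
Host=db.example.com;Database=mydb;Username=app;Password=...;Maximum Pool Size=25;Application Name=myservice-reads

With two app instances this caps the aggregate at 90 physical connections, staying under a max_connections of 100.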

PostgreSQL: what does max_connections mean?

What does "max connections" mean in PostgreSQL?
Maybe I have max_connections = 500, and 1000 users come to my site then for the other 500 is the database not available? Or did I misunderstand this?
If you set max_connections to 500 then the database will accept at most 500 connections.
The typical approach to support a (much) higher number of online users is to use a connection pool inside your application - especially for web applications. It's not unheard of for the ratio between the actual number of used (physical) database connections and the number of "online users" to be in the area of 1:100 (so for 1000 users you would only need a connection pool that maintains 10 connections). Depending on the application and workload, that ratio might even be higher.
How exactly you enable a connection pool in your application depends on the technology you are using. Java web applications typically do that by default (through a JNDI datasource).
If your application (or technology) doesn't support a connection pool directly you need to use an external pooler like pgPool or pgBouncer.
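As a sketch of what the pooled approach looks like from Java application code (the JNDI name and query are examples; the container defines and sizes the actual pool):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class UserDao {
    public int countUsers() throws Exception {
        // Look up the container-managed connection pool by its JNDI name (name is an example).
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyAppDB");
        // Borrowing and "closing" a connection only takes it from, and returns it to, the pool,
        // which is how a pool of ~10 physical connections can serve many more concurrent users.
        try (Connection conn = ds.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT count(*) FROM users")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}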

Why WSO2 APIm needs 50+ DB connections at startup?

In our WSO2 setup, whenever the APIm comes up, it creates close to 50+ DB connections towards the Postgres DB. In the stable phase, each APIm instance has only 4 DB connections. I would like to understand why it needs 50+ connections at startup. Is it a bug or by design?
We run WSO2 in a Kubernetes setup, Postgres has a max connection limit set to 100, and two instances of APIm are not able to come up due to this issue.
Within the WSO2 platform, Tomcat JDBC pooling is used as the default pooling framework due to its production-ready stability and high performance. The goal of tuning the pool properties is to maintain a pool that is large enough to handle peak load without unnecessarily utilizing resources. These pooling configurations can be tuned for your production server in the <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml file. This applies if you are using an APIM version less than or equal to 2.6. If you are using APIM 3.x.x, then these configurations can be found in the <PRODUCT_HOME>/repository/conf/deployment.toml file.
The following parameters should be considered when tuning the connection pool:
The application's concurrency requirement.
The average time used for running a database query.
The maximum number of connections the database server can support.
The maxActive value is the maximum number of active connections that can be allocated from the connection pool at the same time. The default value is 100. The maximum latency (approximately) = (P / M) * T, where,
M = maxActive value
P = Peak concurrency value
T = Time (average) taken to process a query
Therefore, by increasing the maxActive value (up to the expected highest concurrency), the time that requests wait in the queue for a connection to be released will decrease. But before increasing the maxActive value, consult the database administrator, as it will create up to maxActive connections from a single node during peak times, and it may not be possible for the DBMS to handle the accumulated count of these active connections.
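For example (hypothetical numbers): with a peak concurrency of P = 300 requests, maxActive of M = 100 and an average query time of T = 50 ms, the approximate maximum latency is (300 / 100) * 50 ms = 150 ms; raising maxActive to 150 would bring that estimate down to (300 / 150) * 50 ms = 100 ms.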
Note that this value should not exceed the maximum number of connections allowed by your database.
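As a rough illustration of the pool knobs described above (URL, credentials and values are placeholders, and this configures the Tomcat JDBC pool programmatically rather than through WSO2's master-datasources.xml or deployment.toml):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolExample {
    public static DataSource build() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://db.example.com:5432/apim");
        p.setDriverClassName("org.postgresql.Driver");
        p.setUsername("wso2user");
        p.setPassword("changeme");
        p.setInitialSize(4);   // connections opened up front at startup
        p.setMinIdle(4);       // idle connections kept around in the stable phase
        p.setMaxActive(50);    // hard cap on connections this node opens at peak
        p.setMaxWait(30000);   // ms a request waits for a free connection before failing
        return new DataSource(p);
    }
}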
For more details on this topic please refer to the official documents[1, 2].
[1] https://docs.wso2.com/display/ADMIN44x/Performance+Tuning#PerformanceTuning-JDBCpoolconfiguration
[2] http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html

httperf for benchmarking web servers

I am using httperf to benchmark web servers. My configuration: i5 processor and 4 GB RAM. How do I stress this configuration to get accurate results? I mean, I have to put 100% load on this server (12.04 LTS server).
You can use httperf like this:
$ httperf --server <server> --port <port> --wsesslog=200,0,urls.log --rate 10
Here urls.log contains the different URIs/paths to be requested. Check the documentation for details.
Now try to change the rate or session value, then see how many RPS you can achieve and what the reply time is. Also, in the meantime, monitor the CPU and memory utilization using mpstat or top to see whether it is reaching 100%.
What's tricky about httperf is that it often saturates the client first, because of 1) the per-process open-files limit, 2) the TCP port number limit (excluding the reserved ports 0-1023, there are only 64512 ports available for TCP connections, which means a sustained rate of only about 1075 new connections per second if each port stays tied up for a minute), and 3) the socket buffer size. You probably need to tune the above limits to avoid saturating the client.
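A few of the usual client-side limits to check before blaming the server (commands are examples; exact values depend on your distro and test):

ulimit -n                                        # current per-process open-files limit
ulimit -n 65535                                  # raise it for the shell running httperf
sysctl net.ipv4.ip_local_port_range              # ephemeral ports available for outgoing connections
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
cat /proc/sys/net/ipv4/tcp_fin_timeout           # time sockets spend in FIN-WAIT-2 before being freed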
To saturate a server with 4GB memory, you would probably need multiple physical machines. I tried 6 clients, each of which invokes 300 req/s to a 4GB VM, and it saturates it.
However, there are still other factors impacting the result, e.g., the pages deployed on your Apache server and the workload access patterns. The general suggestions are:
1. Test the request workload that is closest to your target scenarios.
2. Add more physical clients and watch how the response rate, response time, and error count change, to make sure you are not saturating the clients.

Persistent connections or connection pooling with PHP 5.4 + Nginx + PHP-FPM + MongoDB

I am using the PECL mongo 1.4.x driver (http://pecl.php.net/package/mongo/1.4.1), with the setup mentioned in the title, on a moderate-traffic service (5K-10K requests per minute).
I've found that in MongoDB the auth command is taking a large chunk of traffic, and the connection request rate is around 30-50 per second.
This impacts performance seriously (lock ratio goes up, memory management doesn't cope nicely).
And if I run netstat on a box (I have 5-8 boxes in total), I see 2-3K mongo connections per box (some in a WAIT state, some in ESTABLISHED).
My question is: how can I reduce the number of connections to MongoDB, and how do I set up persistent connections properly?
It seems the way persistent connections work in the PECL MongoDB driver has changed from 1.2 to 1.3, and it behaves slightly differently in 1.4.
Here is the way I invoke the client with the driver:
$mongo = new MongoClient(
    "host1:11004,host2:11004",
    array(
        'replicaSet'     => MG_REPLICASET,
        'password'       => "superpasswd",
        'username'       => "myuser",
        'db'             => "mydb",
        'journal'        => true,
        'readPreference' => MongoClient::RP_SECONDARY_PREFERRED,
    )
);
In the 1.4 version all connections are persistent, unless you close them yourself - which you should never do. You will see a connection per IP/username/password/database combination from each PHP processing unit - in your case, each PHP-FPM process. In order to reduce the number of connections, you need to have fewer username/password/database combinations. However, with 8 boxes, 50 FPM processes each, and 3 nodes in your replica set, you're already at 1200 connections - without even taking into account database/username/password differences. There is not much that you can do about that, but it shouldn't have a big influence on performance. You are much more likely to hit RAM/slow-disk limitations.
I guess I found a solution to avoid excessive mongo connection requests.
We need to set PHP_FCGI_MAX_REQUESTS (or pm.max_requests in php-fpm) to a larger number, so the processes won't recycle too often.
Also, I need to make sure request_terminate_timeout is not too small, so workers won't be killed too often.
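For example, in the php-fpm pool config (values are illustrative; note that request_terminate_timeout lives at the pool level, without the pm. prefix):

; pool config, e.g. /etc/php-fpm.d/www.conf (illustrative values)
pm = dynamic
pm.max_children = 50
pm.max_requests = 10000            ; recycle workers rarely so persistent mongo connections survive longer
request_terminate_timeout = 120s   ; avoid killing long-running workers too aggressively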