I found that it is far lower than the total number of get commands. Is this the number of server-to-server connections that are reused?
Total connections is the number of connections that have been made to the server since it started. Current connections is the number of connections the server has open right now. Total connections should indeed be far lower than the total number of get commands, because typically you connect to memcached once and send many get/set requests over the same connection.
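A minimal sketch of that reuse, assuming a memcached instance listening on localhost:11211 (host, port, and key are illustrative): several commands travel over one TCP connection, so total_connections grows by 1 while cmd_get grows by 3.

import socket

with socket.create_connection(("localhost", 11211)) as sock:
    f = sock.makefile("rwb")

    # One set followed by several gets, all over the same connection.
    f.write(b"set greeting 0 0 5\r\nhello\r\n")
    f.flush()
    print(f.readline())        # b'STORED\r\n'

    for _ in range(3):
        f.write(b"get greeting\r\n")
        f.flush()
        while f.readline().strip() != b"END":
            pass               # skip the VALUE/data lines

    # Ask the server for its counters: total_connections counts accepts
    # since startup, curr_connections counts sockets open right now.
    f.write(b"stats\r\n")
    f.flush()
    while (line := f.readline()).strip() != b"END":
        if b"connections" in line:
            print(line.decode().strip())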
Related
I have connection pools set up for my system to handle concurrent connections to my managed database clusters in DigitalOcean.
Each client I have has their own DB, and I create a pool for that database to avoid the error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
Yesterday I ran into connection issues with a default database that my system also uses; for one reason or another I hadn't thought connection pooling was needed there. I started getting flooded with error emails and then fixed the system to use the correct pooling mechanism.
This is where my question comes in: with the pooling on DigitalOcean you are given a specific "size" depending on your subscription, and my subscription allows a total pool "size" of 97 across the clusters. As my clients grow I will be creating new pools and databases for them, so eventually I will run out of slots to assign to a pool. What does this "size" actually dictate?
For example, one client of mine has an allotted size of 10 for their connection pool. Speaking to support:
The connection pool with a size of 1 will only allow 1 connection at a time. As for how you can estimate the number of simultaneous users, this is something you'll need to look over as your user and application grow. We don't have a way to give you that estimate from our back end.
So that client with a pool size of 10 has 88 staff users who use the system simultaneously throughout the day, plus about 4,000 managed users who could all theoretically sign in at once.
That is a lot more than 10 connections, and I get no errors about connection size, at least none that I've seen so far.
Given that I have a limited amount, how do I determine the appropriate size to use? Does anybody have experience with this in production?
For example, with the numbers listed above, is 10 too much, too little, or just right?
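(For reference, a rough back-of-the-envelope sketch using Little's law: connections needed is roughly the request rate times the average time each request holds a connection. Every number below is an illustrative assumption, not a measurement from my system.)

requests_per_second = 40     # assumed peak rate of DB-hitting requests from ~88 staff users
seconds_per_query = 0.05     # assumed average time a request holds a pooled connection
connections_needed = requests_per_second * seconds_per_query
print(connections_needed)    # 2.0 -> a pool of 10 would have plenty of headroom at this rate

The point is that what matters is how many queries are in flight at the same instant, not how many users are signed in.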
Update 2/14/23
I have tested the capabilities a bit because I was curious and can't get any semi-logical answer. When I use a connection pool of size 1 for my 4,000-user client (although all users would not hit their DB/pool at the same time), I get connection errors (specifically when running background tasks from django-celery and Celery in the middle of the night).
Here are those errors; overall it is just connection already closed, raised from here:
File "/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/base.py", line 269, in create_cursor cursor = self.connection.cursor()
This issue happened on two separate nights, but never during the day under normal user activity.
Once I upped the connection pool for that 4,000-user client from 1 to 2, the connection already closed error never occurred again.
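For context, this is roughly how the app points Django at the pool; a minimal sketch, assuming a DigitalOcean pool (PgBouncer) exposed on the pooled port 25061, with hostname, names, and password as placeholders:

# settings.py (sketch -- host, names, and password are placeholders)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "client-pool",   # the pool name, not the underlying database name
        "USER": "doadmin",
        "PASSWORD": "...",
        "HOST": "db-cluster-do-user-12345.b.db.ondigitalocean.com",
        "PORT": 25061,           # pooled port; 25060 goes straight to Postgres
        # With an external pooler, closing Django's connection after each request
        # frees the pool slot instead of holding it open all night.
        "CONN_MAX_AGE": 0,
        "OPTIONS": {"sslmode": "require"},
    }
}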
I have a local app that connects to a local MongoDB. It has 2 databases and about 60 collections in total.
I open one connection and then get an object to access each collection.
I let the system run for the whole afternoon and, checking the stats, I found this:
I don't understand why I have over 750k connections; but I also don't really understand the metrics, for example the number below, total connections created, hovering at 1770...
Can someone explain what is going on?
Total Connections Created refers to the number of times the server has accepted a connection since it started running, so if it has been running for many months it is likely to be high. It doesn't mean those connections are still active (and most won't be).
You can choose to show only active connections under Current Connections by clicking on the menu icon and choosing In Use.
Here's more info on why you might be seeing a large number of available connections: MongoDB available connections
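If you want to read the same counters directly rather than from the dashboard, here is a minimal sketch with PyMongo (the connection string is a placeholder); serverStatus reports both the currently-open and the cumulative counts:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
conn = client.admin.command("serverStatus")["connections"]
print(conn["current"])       # sockets open right now (including this client's)
print(conn["available"])     # how many more the server will still accept
print(conn["totalCreated"])  # accepts since the server started -- this only ever grows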
I followed this tutorial and there is a configuration option, connections per host.
What is this?
connectionsPerHost is the number of physical connections a single Mongo client instance (it is a singleton, so you usually have one per application) can establish to a mongod/mongos process. At the time of writing, the Java driver will eventually establish this number of connections even if the actual query throughput is low (in other words, you will see the "conn" statistic in mongostat rise until it hits this number per app server).
There is no need to set this higher than 100 in most cases, but this setting is one of those "test it and see" things. Do note that you will have to make sure you set it low enough that the total number of connections to your server does not exceed
Found here How to configure MongoDB Java driver MongoOptions for production use?
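connectionsPerHost is specific to the Java driver; purely for illustration, the analogous knob in PyMongo is maxPoolSize (the option names below are real PyMongo options, the values are assumptions):

from pymongo import MongoClient

client = MongoClient(
    "mongodb://localhost:27017",
    maxPoolSize=50,           # at most 50 sockets from this client to each host
    waitQueueTimeoutMS=2000,  # fail fast instead of queueing forever when the pool is exhausted
)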
I would like to create a server infrastructure that allows 500 clients to connect at the same time and stay connected indefinitely. The plan is to have clients connect to TCP/IP sockets on the server and stay connected, with the server sending data to clients at random times and clients sending data to the server at random times, similar to a small MMOG, but with very little data. I came up with this plan as an alternative to having each client poll over TCP every 15-30 seconds.
My question is: by leaving these connections open, will this cause a large amount of server bandwidth usage when idle? Is this the best approach without getting into the guts of TCP?
TCP uses no bandwidth when idle, except maybe a few bytes every so often (default is 2 hours) if "keep-alive" is enabled.
500 connections is nothing, though epoll() is a good choice to reduce system overhead. 5000 connections might start to become an issue.
Bandwidth isn't your major concern, but there's a limit to the number of connections you can have open (though it is quite high).
But if polling every 15 seconds would be fast enough, I'd consider it a waste to keep the connection open.
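A minimal sketch of the keep-them-connected approach using Python's asyncio, which sits on top of epoll on Linux; the port and payload are placeholders. Idle clients cost no bandwidth, only an open socket and a little memory each:

import asyncio

clients = set()

async def handle(reader, writer):
    clients.add(writer)
    try:
        while await reader.read(1024):   # returns b"" when the client disconnects
            pass                         # handle incoming client data here
    finally:
        clients.discard(writer)
        writer.close()

async def broadcast(payload: bytes):
    # Called whenever the server decides to push something to everyone.
    for w in list(clients):
        w.write(payload)
        await w.drain()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())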
I have read that you should keep the number of connections in your database connection pool lower than the number of threads running in the application server that might use that pool. Is that correct?
I have also read that having a high number of connections is not good, but I don't really know why. Would it use more memory?
Right now, during peak times, my server is running out of connections, and I don't know whether it would be good to just increase the number of connections.
Thank you
With a small connection pool you have faster access to the connection table, but you may not have enough connections to satisfy requests.
On the other hand, with a large connection pool there are more connections to fulfill requests and requests will spend less (or no) time in the queue, but access to the connection table is slower.
http://docs.oracle.com/cd/E19316-01/820-4343/abehs/index.html
core_count = 4
effective_spindle_count = 1
connections = (core_count * 2) + effective_spindle_count = 9
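As a sketch of putting that formula to work, assuming psycopg2 and a placeholder DSN; the spindle count of 1 matches the example above:

from psycopg2.pool import ThreadedConnectionPool

core_count = 4
effective_spindle_count = 1
pool_size = (core_count * 2) + effective_spindle_count   # 9

pool = ThreadedConnectionPool(
    minconn=1,
    maxconn=pool_size,
    dsn="dbname=app user=app password=secret host=localhost",  # placeholder DSN
)

conn = pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
finally:
    pool.putconn(conn)   # always return the connection so other requests don't queue

The idea behind the formula is that beyond roughly twice the core count plus the number of disks that can seek independently, extra connections mostly just wait on the same CPU and I/O, which is why a bigger pool is not automatically faster.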