I'm using the Django ORM with Postgres.
After any operation on my models (even a simple select), a new connection shows up in Postgres in the IDLE state.
I've tried every transaction manipulation I could think of, and I've tried calling
connection.close()
manually. None of it helps.
Sooner or later I receive a "FATAL: connection limit exceeded for non-superusers" message.
What could I be doing wrong?
Well, false alarm. This is normal behavior for PgPool, which sits in front of this database.
Related
If there are many open sessions in the idle state on a PostgreSQL instance, I've read that it can slow down the "snapshot" created for each SQL statement. I don't understand the mechanism that leads to this.
I am architecting a database where I expect to have thousands of tenants, with some data shared between tenants. I am currently planning to use Postgres with row-level security for tenant isolation. I am also using knex and Objection.js to model the database in Node.js.
Most of the tutorials I have seen look like this, where you create a separate knex connection per tenant. However, I've run into a problem on my development machine: after I create ~100 connections, I receive this error: "remaining connection slots are reserved for non-replication superuser connections".
I'm investigating a few possible solutions/work-arounds, but I was wondering if anyone has been able to make this setup work the way I'm intending. Thanks!
Perhaps one solution might be to cache a limited number of connections, and destroy the oldest cached connection when the limit is reached. See this code as an example.
That code should probably be improved, however, to use a Map as the knexCache instead of an object, since a Map remembers the insertion order.
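The linked code isn't reproduced here, but a rough sketch of that idea, assuming knex with one Postgres schema per tenant, might look like the following. The names getTenantKnex and MAX_CACHED and the searchPath-per-tenant convention are illustrative, not taken from the original code.

```typescript
import knex, { Knex } from "knex";

// Assumed connection string; in the real setup this would point at the shared database.
const DATABASE_URL = process.env.DATABASE_URL ?? "postgres://localhost/app";

const MAX_CACHED = 20;                      // keep well under the server's max_connections
const knexCache = new Map<string, Knex>();  // a Map remembers insertion order

export function getTenantKnex(tenantId: string): Knex {
  const cached = knexCache.get(tenantId);
  if (cached) {
    // Re-insert so this tenant becomes the most recently used entry.
    knexCache.delete(tenantId);
    knexCache.set(tenantId, cached);
    return cached;
  }

  if (knexCache.size >= MAX_CACHED) {
    // Evict the oldest (first-inserted) tenant and close its connection pool.
    const oldestId = knexCache.keys().next().value as string;
    void knexCache.get(oldestId)!.destroy();
    knexCache.delete(oldestId);
  }

  const instance = knex({
    client: "pg",
    connection: DATABASE_URL,
    searchPath: [tenantId],                 // assumes one schema per tenant
    pool: { min: 0, max: 2 },               // small per-tenant pool
  });
  knexCache.set(tenantId, instance);
  return instance;
}
```

With thousands of tenants, the key is keeping each cached pool small and the cache bound low, so the worst-case total stays under the Postgres connection limit.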
I have an API running in AWS Lambda and AWS Gateway using Up. My API creates a database connection on startup, and therefore Lambda does this when the function is triggered for the first time. My API is written in node using Express and pg-promise to connect to and query the database.
The problem is that Lambda creates new instances of the function as it sees fit, and sometimes it appears as though there are multiple instances of it at one time.
I keep running out of DB connections as my Lambda function is using up too many database handles. If I log into Postgres and look at the pg_stat_activity table I can see lots of connections to the database.
What is the recommended pattern for solving this issue? Can one limit the number of simultaneous instances of a function in Lambda? Can you share a connection pool across instances of a function? (I doubt it.)
UPDATE
AWS now provides a product called RDS Proxy which is a managed connection pooling solution to solve this very issue: https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
There are a couple of ways that you can run out of database connections:
You have more concurrent Lambda executions than you have available database connections. This is certainly possible.
Your Lambda function is opening database connections but not closing them. This is a likely culprit, since web frameworks tend to keep database connections open across requests (which is more efficient), but on Lambda they have no opportunity to close them, since AWS will silently terminate the instance.
You can solve 1 by controlling the number of available connections on the database server (the max_connections setting on PostgreSQL) and the maximum number of concurrent Lambda function invocations (as documented here). Of course, that just trades one problem for another, since Lambda will return 429 errors when it hits the limit.
Addressing 2 is trickier. The traditional and correct way of dealing with database connection exhaustion is connection pooling. But with Lambda you can't do that on the client, and with RDS you don't have the option to do it on the server. You could set up an intermediary persistent connection pooler, but that makes for a more complicated setup.
In the absence of pooling, one option is to create and destroy a database connection on each function invocation. Unfortunately that will add quite a bit of overhead and latency to your requests.
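With node-postgres, for instance, the per-invocation approach could look roughly like this (the users table, the userId field and the DATABASE_URL environment variable are invented for illustration; the question uses pg-promise, but the shape is the same):

```typescript
import { Client } from "pg";

// One connection per invocation: open it, use it, and always close it
// before the handler returns, so nothing is left idle when AWS freezes
// or terminates this instance.
export const handler = async (event: { userId: string }) => {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const { rows } = await client.query(
      "SELECT id, name FROM users WHERE id = $1",
      [event.userId]
    );
    return { statusCode: 200, body: JSON.stringify(rows) };
  } finally {
    await client.end(); // release the server-side connection no matter what
  }
};
```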
Another option is to carefully control your client-side and server-side connection parameters. The idea is first to have the database close connections after a relatively short idle time (on PostgreSQL this is controlled by the tcp_keepalives_* settings). Then, to make sure that the client never tries to use a closed connection, you set a connection timeout on the client (how to do so will be framework dependent) that is shorter than that value.
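On the client side, with node-postgres that could amount to pool settings along these lines; the numbers are purely illustrative and would need to be tuned against whatever idle window the server actually enforces:

```typescript
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // assumed env var
  max: 2,                         // keep each Lambda instance's footprint small
  idleTimeoutMillis: 10_000,      // drop idle clients before the server would
  connectionTimeoutMillis: 5_000, // fail fast instead of queueing indefinitely
});

// pool.query() checks out a client, runs the statement, and releases it
// back to the pool.
export function query(text: string, params: unknown[] = []) {
  return pool.query(text, params);
}
```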
My hope is that AWS will give us a solution for this at some point (such as server-side RDS connection pooling). You can see various proposed solutions in this AWS forum thread.
You have two options to fix this:
You can tweak Postgres to disconnect those idle connections. This is the best way but may require some trial-and-error.
You have to make sure that you connect to the database inside your handler and disconnect before your function returns or exits. In Express, you'll have to connect and disconnect inside your route handlers, as in the sketch below.
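A minimal sketch of that pattern in an Express route, again with node-postgres; the /items route and items table are invented for illustration:

```typescript
import express from "express";
import { Client } from "pg";

const app = express();

app.get("/items", async (_req, res) => {
  // Connect inside the handler...
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const { rows } = await client.query("SELECT * FROM items");
    res.json(rows);
  } catch (err) {
    res.status(500).json({ error: String(err) });
  } finally {
    // ...and disconnect before the request finishes, so no connection
    // outlives the invocation.
    await client.end();
  }
});

export default app;
```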
I have the following setup, where a service layer, using jOOQ, contacts a PostgreSQL database.
In this scenario, whenever multiple requests happen quickly one after another (or even not that quickly), I get the following error message:
Internal error processing createItem: Cannot get autoCommit
My queries all run within transactions (using jOOQ's transactionResult methods).
Searching has not yielded many results, and I do not see why autoCommit should even be enabled in those cases. Is this most likely a configuration issue, or is there something else I can try to troubleshoot this issue better?
I noticed the same problem and message when running massive batch uploads at the limit of physical memory, with a limited number of DB connections (specific to my environment). It would be hard to provide a reproduction case, but to me this is a sign of DB performance/memory starvation. Reducing the number of Java execution threads helped in my case.
All,
SQL 2005 SP3, database is about 70 GB in size. Once in a while when I reindex all of the indexes in all of my tables, the front end seems to freeze up or run very slowly. These are queries coming from the front end, not stored procedures in SQL Server. The front end uses a jTDS JDBC connection to access SQL Server. If we stop and restart the web services sending the queries, the problem seems to go away. It is my understanding that we have a connection pool in which we reuse connections and don't establish a new connection each time.
This problem does not happen every time we reindex. I have tried it both ways, with DBCC DBREINDEX and with ALTER INDEX using ONLINE = ON and SORT_IN_TEMPDB = ON.
Any insight into why this problem occurs once in a while and how to prevent this problem would be very helpful.
Thanks in advance,
Gary Abbott
When this happens next time, look into sys.dm_exec_requests to see what is blocking the requests from the clients. The blocking_session_id column will indicate who is blocking, and the wait_type and wait_resource columns will indicate what they are blocked on. You can also use the Activity Monitor to the same effect.
On a pre-grown database an online index rebuild will not block normal activity (select/insert/update/delete). The load on the server may increase as a result of the online index rebuild, and this could result in overall slower responses, but it should not cause blocking.
If the database is not pre-grown, though, then the extra allocations of the index rebuild will trigger database growth events, which can be very slow if left at the default 10% increments and without instant file initialisation enabled. During a database growth event all activity in that database is frozen, and this may be your problem even if the indexes are rebuilt online. Again, Activity Monitor and sys.dm_exec_requests would both clearly show this happening.