Heroku Postgres - How to Auto-Close Lingering DB Connections - Flask Python App - postgresql

I have a Python/Flask app that runs on the Heroku platform. The problem is that with every read/write request to Heroku Postgres, the connection that gets created remains open even after the underlying job is complete. My backend code always has conn.close() wherever I query the database, but regardless, Heroku keeps the connection alive until the server is restarted or the connections are manually killed using:
heroku pg:killall
The problem is that Heroku has a connection limit of 20 for free/hobby databases, and this limit gets saturated pretty quickly.
I want to know if there is a way to automatically shut down the connection once the underlying job is complete, i.e. when the backend code calls:
conn.close()
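
One thing worth ruling out: conn.close() never runs if an exception is raised before it, so every failed request can leak a connection until the dyno restarts. A minimal sketch of the usual Flask pattern that guarantees the close, assuming psycopg2 and Heroku's DATABASE_URL config var (the get_db/close_db helpers are illustrative, not from the question):

import os
import psycopg2
from flask import Flask, g

app = Flask(__name__)

def get_db():
    # Reuse a single connection per request context.
    if "db" not in g:
        g.db = psycopg2.connect(os.environ["DATABASE_URL"])
    return g.db

@app.teardown_appcontext
def close_db(exc):
    # Runs after every request, even when the view raised an exception,
    # so the connection is always handed back to Postgres.
    db = g.pop("db", None)
    if db is not None:
        db.close()

@app.route("/items")
def items():
    conn = get_db()
    # Note: in psycopg2, "with conn:" only commits or rolls back the
    # transaction; it does NOT close the connection. Teardown does that.
    with conn, conn.cursor() as cur:
        cur.execute("SELECT 1")
        return {"result": cur.fetchone()[0]}

If every code path really does close its connection, the other usual suspect is the number of concurrent clients (each Gunicorn worker, thread, and one-off dyno holds its own), which can saturate the 20-slot limit even with no leaks.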

Related

Heroku Postgres filling up connections without any use

I have a Heroku Postgres DB (free tier) that's connected to my backend API for testing purposes. Today I tried accessing the database and kept getting the error "too many connections for role 'role'". Note that I haven't connected to this API today, yet for some reason all 20 connections have been used up.
I can't even connect to the DB through PgAdmin to try and kill some of the connections, as I get the same error there.
Any help please?
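
If you can get even one session open (for example right after heroku pg:killall clears the slots), pg_stat_activity shows who is holding the other connections, and pg_terminate_backend can drop the idle ones. A sketch, assuming psycopg2 and Heroku's DATABASE_URL config var:

import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
try:
    with conn, conn.cursor() as cur:
        # Who is connected to this database, and what are they doing?
        cur.execute("""
            SELECT pid, usename, state, query_start
            FROM pg_stat_activity
            WHERE datname = current_database()
        """)
        for row in cur.fetchall():
            print(row)

        # Terminate sessions sitting idle (skipping our own backend).
        # pg_terminate_backend only works on backends your role may signal.
        cur.execute("""
            SELECT pg_terminate_backend(pid)
            FROM pg_stat_activity
            WHERE state = 'idle' AND pid <> pg_backend_pid()
        """)
finally:
    conn.close()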

PgBouncer running on a VM instance stops connecting to Cloud SQL

The issue:
I'm not sure how to begin troubleshooting this or where to look for errors.
Setup:
Running a Node.js backend via GCR that hits a Postgres DB on Cloud SQL through a GCE VM running PgBouncer for connection pooling.
Every once in a while, I get hit with an 'unable to reach the db' error. It happens once every 1-2 months, and I'm not sure what's causing it.
When it does happen, I have to re-terraform the PgBouncer VM to be able to connect to my DB again.
I used this terraform module: cloud-sql-pgbouncer
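
Nothing in the question pins down a cause, but when the error strikes it is worth checking which hop is dead: PgBouncer on the VM, or Cloud SQL behind it. A sketch of a probe to run from inside the network, assuming psycopg2; the hosts, ports, and credentials are placeholders, not values from the question:

import psycopg2

# Placeholder targets: the PgBouncer VM and the Cloud SQL private IP.
TARGETS = {
    "pgbouncer": {"host": "10.0.0.5", "port": 6432},
    "cloud-sql": {"host": "10.0.0.9", "port": 5432},
}

for name, target in TARGETS.items():
    try:
        conn = psycopg2.connect(
            dbname="postgres", user="app", password="...",
            connect_timeout=5, **target,
        )
        conn.close()
        print(f"{name}: reachable")
    except psycopg2.OperationalError as exc:
        # If only pgbouncer fails, try restarting the pgbouncer service on
        # the VM before re-terraforming; if both fail, look at networking.
        print(f"{name}: FAILED -> {exc}")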

Gunicorn always times out when doing DB query with Flask app using SQLAlchemy/PostgreSQL

I have a Flask app (technically Dash) that uses long-running DB queries, up to 5 minutes. When I run it with the development server (python app2.py), the app and DB queries run fine. But when I run the app with Gunicorn, no matter how I tweak the settings, the DB query times out and the app does not run correctly.
I know that unlike the Flask development server, Gunicorn generally tries to avoid long I/O requests (like a DB query), but no matter how I change the timeout and worker type settings, nothing seems to fix the problem. I tried increasing the number of workers and changing the worker type to gevent, since my research suggested it's better at handling I/O requests, but there is no change in behavior. Does anyone know what would solve this or where to even look?

Below is the command I'm using to run Gunicorn, and the failure message in the log when the DB query times out and spins endlessly. I am running this on Ubuntu Server, using SQLAlchemy in my Flask app to connect to a PostgreSQL DB. Thanks, and let me know if you need any more details!
gunicorn --bind 127.0.0.1:8050 --workers 4 --worker-class gevent --timeout 600 app2:server
[29512] [CRITICAL] WORKER TIMEOUT (pid:29536)
[29512] [WARNING] Worker with pid 29536 was terminated due to signal 9
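
One detail that often bites gevent setups: psycopg2 is a C extension and does not yield to the gevent event loop on its own, so a long query blocks the whole worker until Gunicorn's heartbeat kills it, which matches the signal 9 above. A sketch of a Gunicorn config file that makes psycopg2 cooperative via the psycogreen package; this assumes SQLAlchemy is using the psycopg2 driver, which the question doesn't actually confirm:

# gunicorn.conf.py -- run with: gunicorn -c gunicorn.conf.py app2:server
bind = "127.0.0.1:8050"
workers = 4
worker_class = "gevent"
timeout = 600

def post_fork(server, worker):
    # Make psycopg2 yield to the gevent loop. Without this, a long DB call
    # blocks the entire worker and trips the heartbeat timeout regardless
    # of the --timeout value. Requires: pip install psycogreen
    from psycogreen.gevent import patch_psycopg
    patch_psycopg()

With the default no-preload setup, post_fork runs in each worker before the app (and hence the SQLAlchemy engine) is loaded, so every connection the worker creates is patched.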

"error: too many connections for database 'postgres'" when trying to connect to any Postgres 13 instance

My team and I are currently experiencing an issue where we can't connect to our Cloud SQL Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in PgAdmin, DBeaver, and our Node TypeORM/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we don't actually have too many connections, as I am able to query pg_stat_activity from the psql command line and see the following:
Only one of those (postgres username) connections is ours.
My coworker also can't connect at all - not even from the psql CLI.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single-zone availability, a db-g1-small instance with 1.7GB memory and a 10GB HDD, and we have public IP enabled with the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However, my psql CLI no longer works - it appears that only the first client to connect after a restart is allowed to connect (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1), and apparently it is quite small. Why is everyone trying to connect to that database anyway? Usually the 'postgres' database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or something Google sets on its own for their cloud offering.
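
If the limit does turn out to be set on the database, it can be raised, or removed entirely with -1, once you manage to get a session. A sketch via psycopg2, assuming a DATABASE_URL env var and a role that is allowed to ALTER the database:

import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
conn.autocommit = True  # apply each statement immediately
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT datconnlimit FROM pg_database WHERE datname = 'postgres'"
        )
        print("current limit:", cur.fetchone()[0])  # -1 means unlimited
        cur.execute("ALTER DATABASE postgres CONNECTION LIMIT -1")
finally:
    conn.close()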
jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres and then it was fine.
It wasn't anything to do with maximum connection settings (those are managed by Google Cloud SQL) or with not closing connections (TypeORM/pg already handles that).

Is there any way to restart postgres on Heroku?

I deploy a Rails app using Unicorn. After every deployment, and after every tweak I make to DB_POOL, I see Postgres still holding some connections as idle, and new changes take effect very inconsistently, making me wonder whether the service restarts at all after every pool change.
I haven't found any documentation regarding this. Is there any command similar to pg_ctl on Heroku?
No, you cannot restart your Postgres database on Heroku. If you have lingering connections, it's likely an app issue. Try installing the pg-extras plugin and looking for IDLE connections:
https://github.com/heroku/heroku-pg-extras
Also, you can try setting up a custom ActiveRecord connection in your after_fork block and enabling the connection reaper, which should clean up any lingering dead connections it finds:
https://devcenter.heroku.com/articles/concurrency-and-database-connections
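
The Unicorn pattern from that article boils down to dropping the master process's connection before forking and reconnecting in each worker. A sketch in Ruby (this answer concerns a Rails/Unicorn app); the hooks are standard Unicorn config and the rest follows the linked article:

# config/unicorn.rb
before_fork do |server, worker|
  # Close the master's DB connection so forked workers don't share it.
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker opens its own connection pool (sized via DB_POOL).
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end

The reaper itself is switched on in database.yml via the reaping_frequency setting, which tells ActiveRecord how often, in seconds, to sweep dead connections from the pool.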