SQLAlchemy - Postgres Connection Issue while running via Flask - postgresql

Flask with SQLAlchemy
Flask==0.10.1
SQLAlchemy==1.0.8
In production, after heavy usage (many connections), we are getting the error below. Restarting the server helps temporarily; what would be a permanent solution?
OperationalError: (psycopg2.OperationalError) server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
We are creating a scoped session and calling session.close() when we are done. I also tried NullPool, which doesn't help.
Any ideas on that?
Relevant finding
Is it odd that my SQLAlchemy MySQL connection always ends up sleeping?
Handle mysql restart in SQLAlchemy

We need to handle this in SQLAlchemy; see "Dealing with Disconnects":
https://docs.sqlalchemy.org/en/latest/core/pooling.html#dealing-with-disconnects
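The gist of that page is letting the pool detect and replace dead connections. A minimal sketch of the usual knobs (the values are illustrative; note that pool_pre_ping was added in SQLAlchemy 1.2, so it would require upgrading from 1.0.8, while pool_recycle works on 1.0.x as well):

```python
# Illustrative pool settings for surviving dropped server connections.
# pool_pre_ping needs SQLAlchemy >= 1.2; pool_recycle works in 1.0.x too.
ENGINE_OPTS = dict(
    pool_pre_ping=True,   # test each connection with a cheap ping before handing it out
    pool_recycle=1800,    # retire connections older than 30 minutes
)

def make_engine(db_url):
    from sqlalchemy import create_engine  # imported lazily; assumes SQLAlchemy is installed
    return create_engine(db_url, **ENGINE_OPTS)
```

With pre-ping enabled, a stale connection is detected and replaced transparently instead of raising OperationalError mid-request.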

Related

SQLAlchemy and RDS Proxy & Lambda randomly cannot establish a connection

I'm using SQLAlchemy inside a Lambda with RDS Proxy (PostgreSQL).
This configuration works, but when I invoke about 100 Lambdas at the same time, some of them hit a timeout (3 seconds); even after increasing the timeout to 30 seconds, I still get timeouts.
After investigating where this timeout comes from, I concluded that SQLAlchemy hangs when it tries to get a connection to the database.
I thought the RDS proxy could easily handle 100+ concurrent connections.
Here is my engine configuration:
db_url = f"postgresql://{db_username}:{secret}@{db_hostname}:{db_port}/{db_name}"
engine = create_engine(
    db_url,
    echo=True,
    echo_pool="debug",
    poolclass=NullPool,
)
Usually, when SQLAlchemy creates a connection it logs the following:
DEBUG sqlalchemy.pool.impl.NullPool Created new connection <connection object at 0x7f074ebe5580; dsn: 'user=user password=xxx dbname=dbname host=rds-default.{rds-proxy-id}.{region}.rds.amazonaws.com port=port', closed: 0>
But sometimes it will just hang and won't log anything and eventually timeout.
Any help would be appreciated
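Two things are worth ruling out here, sketched below under assumptions (connect_timeout is a standard libpq/psycopg2 parameter, but the 5-second value is arbitrary): the credentials separator in the URL must be @ rather than #, and a driver-level connect timeout makes a hung handshake fail fast instead of silently consuming the whole Lambda timeout:

```python
from urllib.parse import quote_plus

def build_pg_url(user, password, host, port, dbname):
    # '@' separates credentials from host; quote_plus escapes special
    # characters (e.g. '#', '@') that a generated secret may contain
    return f"postgresql://{user}:{quote_plus(password)}@{host}:{port}/{dbname}"

def make_lambda_engine(user, password, host, port, dbname):
    from sqlalchemy import create_engine        # lazy import; assumes SQLAlchemy is installed
    from sqlalchemy.pool import NullPool
    return create_engine(
        build_pg_url(user, password, host, port, dbname),
        poolclass=NullPool,
        connect_args={"connect_timeout": 5},    # libpq connect timeout in seconds
    )
```

With the timeout set, a Lambda that cannot reach the proxy raises an OperationalError quickly, which at least makes the failure visible in the logs instead of a silent hang.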

Heroku Postgres filling up connections without any use

I have a Heroku Postgres DB (free tier) connected to my backend API for testing purposes. Today I tried accessing the database and kept getting the error "too many connections for role 'role'". Note that I have not connected through this API today, yet for some reason all 20 connections have been used up.
I can't even connect to the DB through pgAdmin to try and kill some of the connections, as I get the same error there.
Any help please?
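If you can get even one connection in (or use Heroku's CLI, where heroku pg:killall does this for you), the idle sessions can be terminated server-side. A hedged sketch assuming psycopg2 and a DATABASE_URL-style DSN; the SQL itself is standard PostgreSQL:

```python
# Terminate this role's idle backends so fresh connections can get in.
TERMINATE_IDLE_SQL = """
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE usename = current_user
      AND pid <> pg_backend_pid()   -- never kill our own session
      AND state = 'idle'
"""

def kill_idle_connections(dsn):
    import psycopg2                  # assumed installed
    conn = psycopg2.connect(dsn)
    try:
        conn.autocommit = True       # pg_terminate_backend needs no transaction
        with conn.cursor() as cur:
            cur.execute(TERMINATE_IDLE_SQL)
    finally:
        conn.close()
```

The longer-term fix is finding what is leaking connections (an app that opens without closing, or a pool sized larger than the plan's 20-connection limit).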

Troubleshoot org.postgresql.util.PSQLException: Connection attempt timed out that only occasionally happens

I have several applications running on Tomcat with a local PostgreSQL database, and Tomcat occasionally reports the following error:
org.postgresql.util.PSQLException: Connection attempt timed out.
I am able to connect to the database using other tools such as DBeaver, and this problem only seems to happen when several applications are connecting to the database at once. How can I troubleshoot this? Is there a log in PostgreSQL that I can check?
PostgreSQL does have logging. It is very configurable, and we can't tell you how you have it configured. Common locations are /var/log/postgresql/ and PGDATA/log/. However, a connection timeout will probably not show up in the PostgreSQL log, as the client probably never reached the PostgreSQL server in the first place.
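To find where that logging goes (and to confirm connection events are being logged at all while debugging), the relevant settings can be read from any working client. A sketch assuming an open DB-API connection, e.g. one from psycopg2; the setting names are standard PostgreSQL:

```python
# Server settings that say where (and whether) connection events are logged.
LOG_SETTINGS = ("log_destination", "logging_collector", "log_directory", "log_connections")

def show_log_settings(conn):
    # conn: an open DB-API connection to the server being debugged
    with conn.cursor() as cur:
        out = {}
        for name in LOG_SETTINGS:
            cur.execute("SHOW " + name)   # SHOW takes no bind parameters
            out[name] = cur.fetchone()[0]
        return out
```

Turning log_connections on temporarily makes it easy to see which attempts actually reached the server and which timed out before contact.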

Heroku PostgreSQL Connection reset by peer

We are using the PostgreSQL Crane plan and are getting a lot of log lines like this:
app postgres - - [5-1] ... LOG: could not receive data from client: Connection reset by peer
We are using about 50 dynos.
Is PostgreSQL running out of connections with this many dynos? Can someone explain this case?
Thanks
From what I've found, the cause of these errors is the client not disconnecting at the end of the session, or a new connection not being created.
New connection solving the problem:
Postgres error on Heroku with Resque
Explicit disconnection solving the problem:
https://github.com/resque/resque/issues/367 (comment #2)
There's a Heroku FAQ entry on this: Understanding Heroku Postgres Log Statements and Common Errors: could not receive data from client: Connection reset by peer.
Although this log is emitted from postgres, the cause for the error has nothing to do with the database itself. Your application happened to crash while connected to postgres, and did not clean up its connection to the database. Postgres noticed that the client (your application) disappeared without ending the connection properly, and logged a message saying so.
If you are not seeing your application’s backtrace, you may need to ensure that you are, in fact, logging to stdout (instead of a file) and that you have stdout sync’d.
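For the crash-without-cleanup case described above, one belt-and-braces measure is registering the engine's dispose() as an exit handler, so any shutdown path that still runs Python's exit handlers returns connections cleanly (a hard kill of the process will, of course, still skip it). A sketch assuming a SQLAlchemy engine, though any object with a dispose()-style close-all method fits the same shape:

```python
import atexit

def register_db_cleanup(engine):
    # Close pooled connections on interpreter shutdown; engine.dispose is a
    # real SQLAlchemy method, but any close-all callable works the same way.
    atexit.register(engine.dispose)
    return engine.dispose   # returned so callers can also invoke it early
```

This does not prevent the Postgres-side log line when a dyno dies abruptly, but it does cover graceful restarts and unhandled exceptions that still unwind the interpreter.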

Does database connection stay opened except for errors and explicit closure?

Assuming that no statements to close the connection are made before my script ends, and no exception is encountered before closing the connection, does the database connection stay open?
I'm connecting to the database programmatically via Python Psycopg2 and via Java JDBC4 driver.
Not entirely sure what you want exactly, but let's try:
You can see the connections that exist at any time with PGAdmin or this SQL command
SELECT * FROM pg_stat_activity;
It should be fairly simple to spot when - for your specific use case - the connection closes.
If an SQL query is running at the time you close a connection, I think it will run to completion, i.e. the backend serving it will remain alive even if the connection is closed from the client side.