Finding exact query in pg_stat_activity - postgresql

Recently I keep getting "org.postgresql.util.PSQLException: FATAL: sorry, too many clients already" in pgAdmin 4 (at the same time my application hangs and becomes unresponsive). After reading another thread here about using pg_stat_activity to troubleshoot this, I still couldn't find the exact query from the connection that perhaps wasn't closed properly. Here is a screenshot of the data I extracted from pg_stat_activity. I also combined it with pg_stat_statements but was still unable to find it.
Is there any way I can get the actual query, so I can find the connection that wasn't closed properly? Sorry, I'm not a backend dev, just a junior helping to troubleshoot a server issue.
What I have tried from other threads:
Surveyed a few PostgreSQL monitoring tools.
CREATE EXTENSION pg_stat_statements;
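For reference, the kind of query I ran against pg_stat_activity looks like this (a minimal sketch; these are standard pg_stat_activity columns, nothing specific to my setup is assumed):
-- List connections sitting idle, oldest first. For an idle connection,
-- "query" holds the LAST statement it executed before going idle, which
-- is usually the best clue to which code path never closed it.
SELECT pid, usename, application_name, client_addr, state, state_change, query
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY state_change;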

Related

Database hosts close connection after some time of inactivity

Currently I use Supabase PostgreSQL hosting for my site and node-postgres to make queries. After some time (about 5-7 minutes) of inactivity, the connection is terminated and it's impossible to get or send data to the DB. It is not a node-postgres problem, because the connection behaves the same way even when you make queries from a psql terminal.
I even tried changing hosting providers, but the second provider (bit.io) had the same problem.
I came up with the idea of an interceptor function that catches the connection error and retries the request, but I don't think that is the most efficient approach.
How to solve this problem?
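One thing worth checking first (a minimal sketch, assuming the termination comes from a server-side idle timeout rather than the hosting layer; note that idle_session_timeout only exists on PostgreSQL 14+, and a pooler such as PgBouncer in front of the database can enforce its own, shorter idle timeout):
-- Show the server-side settings that can terminate idle sessions.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('idle_session_timeout',
               'idle_in_transaction_session_timeout',
               'tcp_keepalives_idle');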

Why does QGIS create multiple connections to the PostgreSQL server?

We have a PostgreSQL database with PostGIS running, and today we ran into the issue that too few connections were available. We mostly use QGIS to access the database. We noticed the issue because multiple users got the following error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
When checking the number of connections in pgAdmin, I noticed something I had seen before but never cared much about, since it had never caused problems:
QGIS creates multiple connections to PostgreSQL for the same user to the same database.
Now I am wondering why this is the case and how I can maybe change that behaviour.
Could this happen for example if a person got access rights to a database through different user groups?
One possible cause: some users find that adding layers to a QGIS project created earlier can prompt multiple times for login credentials if those credentials have changed. This suggests to me that different credentials are saved with the project, and that therefore multiple connections might be used. Can anyone confirm or disprove this? Suggestions for a test scenario to check this are also welcome.
Any ideas, hints, or solutions are welcome.
By the way: yes, we increased max_connections, but I want to understand why this happens and get closer to the core of the situation.
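To see who is actually holding the connection slots, a minimal pg_stat_activity sketch like the following should work on stock PostgreSQL (standard view and columns; nothing QGIS-specific is assumed):
-- Count open connections per user / application / client address.
-- QGIS connections usually report an application_name, which makes
-- them easy to spot in the output.
SELECT usename, application_name, client_addr, count(*) AS connections
FROM pg_stat_activity
GROUP BY usename, application_name, client_addr
ORDER BY connections DESC;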

Google Cloud SQL Migration Job stuck on Running

I've got a database on Google Cloud SQL that is used by our application running on Kubernetes in GKE.
The MySQL instance is running 5.6, and I need to upgrade it to 5.7, so I tried the new migration jobs.
I've set up the connection profile and all the required permissions for the source DB, then followed the instructions to make a continuous migration.
The job says it's running, migrating the ~450GB database. After about a day, it's still running, the storage used seems to have stopped growing, and the replication delay is at 0. The source database is not currently in use (that's why I'm using it to try this out before doing the same with a more important DB).
According to this, once the dump phase is done I should be able to promote the instance, but the promote button remains greyed out, and there's no way to check the running state (it only says "running", with no way to tell whether it's dumping, doing CDC, or anything else).
The documentation seems a bit lacking, and I couldn't find anything by googling around. Has anyone been using this?
In short, my questions are:
Why can't I promote the instance?
And how can I check what phase the migration is in?
Here's a screencap of my job:
link because SO doesn't let me embed images yet
Thanks.
P.S.: the tag that the documentation says should be used on Stack Overflow is google-cloud-database-migration-service, which is too long and Stack Overflow doesn't allow it, so I used google-cloud-sql instead :/
I am seeing an issue like this, but possibly more frustrating: after a week migrating a 2TB database, the storage used resets to near zero and the full dump restarts, without any errors or any indication of what happened.
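If the destination instance can be reached directly, one way to peek at the phase is a sketch like the following, under the assumption that the Database Migration Service destination behaves like an ordinary MySQL 5.7 replica (which I haven't been able to confirm):
-- On the destination instance: Slave_SQL_Running_State describes what the
-- replica is currently doing, and Seconds_Behind_Master = 0 with running
-- threads suggests the dump is done and CDC replication has caught up.
SHOW SLAVE STATUS;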

Trouble Connecting with PostgreSQL

To be completely up front, I am a novice when it comes to a lot of database-related topics. I am a data scientist, but my background lends itself more to statistical modeling. So please be patient with me :)
I am trying to use a query service of Adobe Experience Platform (AEP), which relies on PostgreSQL. For the life of me, I cannot get it working. I've tried lots of things, and I get the following:
Error: server closed the connection unexpectedly This probably means
the server terminated abnormally before or while processing the
request.
Can anyone help me determine the root cause of the issue?

Google Cloud SQL Postgres - randomly slow queries from Google Compute / Kubernetes

I've been testing Google Cloud SQL with PostgreSQL, but I have random queries taking ~3s instead of a few ms.
Troubleshooting I did:
The queries themselves aren't the problem; rerunning the same query works fine (see the EXPLAIN sketch after this list).
Indexes are properly set. The database is also very small; it shouldn't behave like this even if there were no indexes at all.
The Kubernetes container connects to the database through the SQL Proxy (I followed this: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine). That is not the problem though, as I tried connecting directly to the database and saw the same issue.
I configured net.ipv4.tcp_keepalive_time to 60 to make sure the connections weren't dropping.
I also use a pool of connections that are never disconnected, to rule that out.
When I run queries directly through my local PostgreSQL client, I never have the problem.
I don't have this issue when developing locally either and connecting to my local database.
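A minimal sketch of the check I used to separate server-side execution time from the connection path ("some_table" is a placeholder for one of the real tables):
-- If the server reports a few ms here while the client observes ~3 s for
-- the same statement, the time is being lost between client and server,
-- not in query execution.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table WHERE id = 42;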
What I'm getting at is: I feel there's some weird connection/link issue between my Google Compute instances and my Google SQL instance that I can't seem to figure out.
Any idea?
Edit:
I also noticed these logs in my SQL Cloud instance every 30s:
ERROR: recovery is not in progress
HINT: Recovery control functions can only be executed during recovery.
STATEMENT: SELECT pg_is_xlog_replay_paused(), current_timestamp
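Side note: those errors look like a monitoring agent polling a standby-only function on a primary; pg_is_xlog_replay_paused() was also renamed to pg_is_wal_replay_paused() in PostgreSQL 10. A quick sanity check (standard PostgreSQL, nothing Cloud SQL specific):
-- Returns false on a primary, true on a standby; the recovery-control
-- function in the log can only run while recovery is in progress.
SELECT pg_is_in_recovery();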
That's an interesting problem you are facing. My knowledge of Kubernetes isn't that great, but I do have a general understanding, so let's see if I can provide some suggestions.
To start with, the page you linked to in your question does mention that the feature is still in beta, so I believe there may still be issues to patch regarding speed and performance.
Secondly, from what I understand, Kubernetes is a great tool for handling stateless workloads. Handling data where state is required for queries can therefore be a slow operation. This article (although not entirely related) explains some of the pitfalls of Kubernetes (not all the questions are relevant).
Thirdly, could you explain your use case a little? Do you really need to use Kubernetes, or would another tool like a powerful Compute Engine instance or a Dataflow job resolve the issue? Are you making your database queries through a programming language or an application call?
Thanks, and do let me know!