PgBouncer running on a VM instance stops connecting to Cloud SQL - PostgreSQL

The issue:
I'm not sure how to begin troubleshooting this or where to look for errors.
Setup:
A Node.js backend running via GCR hits a Postgres DB on Cloud SQL through a GCE VM running PgBouncer for connection pooling.
Every once in a while I get hit with an 'unable to reach the db' error. It happens once every 1-2 months and I'm not sure what's causing it.
When it does happen, I have to re-terraform the PgBouncer VM before I can connect to my DB again.
I used this terraform module: cloud-sql-pgbouncer
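
When this happens again, it may be worth checking whether PgBouncer itself is still healthy before re-terraforming the VM. Below is a minimal sketch (not part of the module) that connects to PgBouncer's admin console and dumps pool state; the host, port, and credentials are placeholders you would take from your Terraform outputs.

# Hedged health probe for the PgBouncer VM; host, port, and credentials are
# placeholders to be taken from the Terraform module's outputs.
# Connecting to the special "pgbouncer" database opens the admin console,
# where SHOW POOLS / SHOW SERVERS reveal whether PgBouncer still has working
# server connections to Cloud SQL.
import psycopg2

conn = psycopg2.connect(
    host="10.0.0.5",          # internal IP of the PgBouncer VM (placeholder)
    port=6432,                # PgBouncer's default listen port
    dbname="pgbouncer",       # admin console pseudo-database
    user="pgbouncer_stats",   # must be listed in stats_users/admin_users (placeholder)
    password="change-me",
)
conn.autocommit = True        # the admin console rejects transactions, so skip the implicit BEGIN
cur = conn.cursor()
for command in ("SHOW POOLS", "SHOW SERVERS"):
    cur.execute(command)
    print(command, *cur.fetchall(), sep="\n")
cur.close()
conn.close()

If SHOW SERVERS shows no usable server connections while Cloud SQL itself is still reachable from the VM, restarting the pgbouncer service on the VM may be enough to recover, instead of recreating the whole instance.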

Related

Heroku Postgres - how to auto-close lingering DB connections - Flask Python app

I have a Python/Flask app that runs on the Heroku platform. The problem is that with every read/write request to Heroku Postgres, the connection that gets created remains open even after the underlying job is complete. My backend code always has conn.close() wherever I query the database, but regardless, Heroku keeps the connection alive until the server is restarted or the connections are manually killed using:
heroku pg:killall
The problem is that Heroku has a connection limit of 20 for free/hobby databases, and this limit gets saturated pretty quickly.
I want to know whether there is a way to automatically close the connection once the underlying job is complete, i.e. when the backend code calls:
conn.close()
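
One common cause is that conn.close() only runs on the happy path, so any exception leaves the connection open until Heroku or Postgres times it out. A minimal sketch of a defensive pattern, assuming psycopg2 and Heroku's DATABASE_URL config var (the helper name is illustrative, not from the original app):

# Illustrative helper: the connection is closed even if the query raises.
import os
from contextlib import closing

import psycopg2

def fetch_rows(query, params=None):
    # closing() guarantees conn.close() on exit, success or exception alike;
    # sslmode="require" is what Heroku Postgres normally expects.
    with closing(psycopg2.connect(os.environ["DATABASE_URL"], sslmode="require")) as conn:
        with conn.cursor() as cur:
            cur.execute(query, params)
            return cur.fetchall()

Also note that if the app goes through a pooling layer (for example SQLAlchemy's engine), conn.close() only returns the connection to that pool rather than closing the TCP session, which looks exactly like this symptom from Heroku's side.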

Heroku Postgres filling up connections without any use

I have a Heroku Postgres DB (free tier) that's connected to my backend API for testing purposes. Today I tried accessing the database and kept getting the error "too many connections for role 'role'". Note that I have not connected to this API today, and for some reason all 20 connections have been used up.
I can't even connect to the DB through pgAdmin to try to kill some of the connections, because I get the same error there.
Any help please?
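
Once even one connection slot can be grabbed (for example right after heroku pg:killall or a dyno restart), pg_stat_activity shows who is holding the other slots and pg_terminate_backend can clear the idle ones. A hedged sketch, assuming psycopg2 and the Heroku-provided DATABASE_URL:

# Sketch: once a single slot is free, list the sessions and terminate idle ones.
import os

import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"], sslmode="require")
conn.autocommit = True
cur = conn.cursor()

# Show who is holding the 20 slots and what state they are in.
cur.execute("""
    SELECT pid, usename, application_name, state, state_change
    FROM pg_stat_activity
    WHERE datname = current_database()
""")
for row in cur.fetchall():
    print(row)

# Terminate every idle session except our own.
cur.execute("""
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE datname = current_database()
      AND pid <> pg_backend_pid()
      AND state = 'idle'
""")
conn.close()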

"error: too many connections for database 'postgres'" when trying to connect to any Postgres 13 instance

My team and I are currently experiencing an issue where we can't connect to our Cloud SQL Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node typeorm/pg backend). It initially happened on our (only) Postgres database instance. After restarting it, stopping and starting it again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we don't actually have too many connections, as I am able to query pg_stat_activity from the psql command line and see the following:
Only one of those (postgres username) connections is ours.
My coworker also can't connect at all - not even from the psql CLI.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single-zone availability, a db-g1-small instance with 1.7 GB memory and a 10 GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (so it is no longer a shared core), and I managed to successfully connect my backend to it. However, my psql CLI no longer works - it appears that only the first client to connect after a restart is allowed to connect (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1), and apparently it is quite small. Why is everyone trying to connect to that database anyway? Usually the 'postgres' database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
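
To keep the examples in one language, here is a hedged Python version of that check together with the fix; it assumes you can still open one connection as a role allowed to run ALTER DATABASE (on Cloud SQL the default postgres user normally can), and the host and password are placeholders. A datconnlimit of -1 means no per-database limit.

# Sketch: inspect and lift a custom per-database connection limit.
import psycopg2

conn = psycopg2.connect(host="<instance-ip>", dbname="postgres",
                        user="postgres", password="<password>")
conn.autocommit = True
cur = conn.cursor()
cur.execute("SELECT datname, datconnlimit FROM pg_database WHERE datname = 'postgres'")
print(cur.fetchone())                                       # -1 means unlimited
cur.execute("ALTER DATABASE postgres CONNECTION LIMIT -1")  # remove the custom limit
conn.close()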
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres, and then it was fine.
It didn't have anything to do with maximum connection settings (those are managed within Google Cloud SQL) or with not closing connections (TypeORM/pg does this already).

How to connect a running container (Tomcat) on Amazon EC2 to RDS Postgres

In AWS, I have an Amazon Linux instance with Docker installed and my app running as a container in Tomcat. However, I need to connect it to my database.
I made this work earlier with a Postgres container by doing this:
docker run --link <dbcontainername>:db -P -d tomcat-image
But to make the database more reliable, we want to use Amazon RDS instead.
I have created a VPC with two subnets that both the instance and the RDS instance use,
and they are also both in the same security group.
I am able to access Tomcat fine through the public IP, but it throws errors because it isn't connected to the DB.
Networking is not my strong suit, so there might be something there I am missing, but I find it hard to find anything describing this process without mentioning Elastic Beanstalk. (It is my impression that it should be possible to do everything EBS does manually.)
There's a similar question asked here about 8 months ago, but he didn't get any responses so I'm trying again.
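
Since --link no longer applies, the usual pattern is to pass the RDS endpoint to the container as configuration (for example with docker run -e DB_HOST=<rds-endpoint> ...) and to make sure the security group has an inbound rule for port 5432. A quick, hedged check to run on the EC2 host itself, with a placeholder endpoint:

# Sketch: verify the EC2 host can reach the RDS endpoint on port 5432 at the TCP level.
# If this times out, the problem is the security group or subnet routing,
# not the application inside the container.
import socket

RDS_ENDPOINT = "mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com"  # placeholder endpoint
PORT = 5432

with socket.create_connection((RDS_ENDPOINT, PORT), timeout=5) as sock:
    print("TCP connection to RDS succeeded:", sock.getpeername())

Note that putting two resources in the same security group does not by itself allow traffic between them; the group needs an inbound rule on 5432 whose source is the security group itself (or the instance's private address range).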

Google Cloud SQL: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 0

I'm desperate, since my Google Cloud SQL instance went down. I could connect to it yesterday without a problem, but since this morning I'm unable to connect to it in any way; it produces the following error: The database server returned this error: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 0
This is what I did to try to fix this:
restarted the instance
added authorized IP addresses in CIDR notation
reset the root password
restored a backup
pinged the IP address and got a response
All these actions completed, but I'm still unable to connect through:
PHP
MySQL workbench
Ubuntu MySQL command line
All without luck. What can I do to repair my Cloud SQL instance? Is anyone else having this problem?
I'm from the Cloud SQL team. We are looking into this issue; it should be resolved soon. See https://groups.google.com/forum/#!topic/google-cloud-sql-announce/SwomB2zuRDo. Updates will be posted on that thread (and if there's anything particularly important I'll edit this post).
The problem seems to only affect connections from outside Google Cloud. Clients connecting from App Engine and Compute Engine should work fine.
Our company has the same problem.
We are unable to connect through either MySQL Workbench or the MySQL command line.
Our Google App Engine application has no problem connecting since it's not using an external IP.
Hi there. I encountered the same problem. You need to find out your public IP address; to do that, type "my public ip" into Google. Now click on the Cloud SQL instance that you created, open the ACCESS CONTROL tab, and then click the Authorization tab under that. Under Authorized networks, give the network any name you want and paste your public IP address into it. Now save the changes and try to run the command from the console. It should work fine.
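
Once your public IP is in the authorized networks list, a small check like the following (PyMySQL, with placeholder host and credentials) confirms that external connectivity is back before retrying Workbench or PHP:

# Sketch: confirm external connectivity to the Cloud SQL MySQL instance.
import pymysql

conn = pymysql.connect(
    host="203.0.113.10",   # public IP of the Cloud SQL instance (placeholder)
    user="root",
    password="<password>",
    database="mysql",
    connect_timeout=5,
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()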