I have two cloud PostgreSQL servers. The first one works fine, but on the second one, after about 30 minutes I am no longer able to connect from my Java application. When I connect with pgAdmin it shows 30 to 40 connections, and after killing those connections everything runs smoothly again.
Its configuration:
postgresql/9.3
max_connections = 100
shared_buffers = 4GB
When the same application connects to the other PostgreSQL server, with the same schema, everything works fine indefinitely.
Configuration:
postgresql/9.1
max_connections = 100
shared_buffers = 32MB
Can you please help me understand or fix the issue?
I work on a PostgreSQL 9.3 instance with hundreds of open connections, so I agree that the open connections themselves shouldn't be a problem. Since we don't have much information, what follows is a description of how to get started troubleshooting.
Check the server logs for anything unusual. Maybe there is an issue at the OS level with initiating connections?
Try logging in with psql as the application user. Does the problem persist? If not, the problem is not with PostgreSQL, and I would take a closer look at the Java code to see if something is happening there.
Note that psql and other libpq clients may not give you the full picture. Try connecting locally over a non-SSL connection while watching a packet capture; that way you can see (and look up) the SQLSTATE of the connection error. For legacy and backwards-compatibility reasons, libpq does not pass the SQLSTATE up to the client application when the connection attempt itself fails.
My bet, though, is that this is not a PostgreSQL issue. It may be an operating system issue, a resource issue, or a client application issue.
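If you want to see what those 30 to 40 connections are actually doing before killing them from pgAdmin, you can ask PostgreSQL directly. A minimal sketch for 9.3 (on 9.1 the column names differ slightly, e.g. procpid instead of pid; the 30-minute cutoff is just an example):
-- show who is connected, whether they are idle, and since when
select pid, usename, application_name, state, state_change, query
from pg_stat_activity
order by state_change;
-- terminate sessions that have been idle for more than 30 minutes
-- (assumes you run this as a superuser or as the role owning those sessions)
select pg_terminate_backend(pid)
from pg_stat_activity
where state = 'idle'
  and state_change < now() - interval '30 minutes'
  and pid <> pg_backend_pid();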
I have a Heroku Postgres DB (free tier) that's connected to my backend API for testing purposes. Today I tried accessing the database and I kept getting the error "too many connections for role 'role'". Please note that I've not connected to this API today, and for some reason all 20 connections have been used up.
I can't even connect to the DB through pgAdmin to try and kill some of the connections, as I get the same error there.
Any help please?
My team and I are currently experiencing an issue where we can't connect to our Cloud SQL Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node TypeORM/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we don't actually have too many connections, as I am able to query pg_stat_activity from the psql command line and see the following:
Only one of those (postgres username) connections is ours.
My coworker also can't connect at all - not even from psql cli.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (so that it is no longer a shared core), and I managed to successfully connect my backend to it. However, my psql CLI no longer works; it appears that only the first client to connect after a restart is allowed in (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1), and apparently it is quite small. Why is everyone trying to connect to that database anyway? Usually the 'postgres' database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or something Google does on its own for their cloud offering.
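If the limit does turn out to be set on the postgres database, it can be raised or removed again with plain SQL (assuming your user has sufficient privileges on the instance; the value 25 below is just an example, and -1 means no per-database limit):
-- remove the custom per-database connection limit (-1 means unlimited)
alter database postgres connection limit -1;
-- or raise it to a specific number instead
alter database postgres connection limit 25;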
jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres, and then it was fine.
It wasn't anything to do with maximum connection settings (this was within Google Cloud SQL) or with failing to close connections (TypeORM/pg already handles that).
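For anyone hitting the same thing, the fix really was just creating and connecting to a dedicated database; a minimal sketch (the name appdb is only an example):
-- create a dedicated application database instead of connecting to 'postgres'
create database appdb;
-- then point the application's connection string, pgAdmin, and DBeaver at appdb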
I am facing the issue below
ERROR: canceling statement due to user request
inconsistently, after I enabled a query timeout for the XA datasource in my xxx-ds.xml. I have added the following to my ds XML file:
<query-timeout>180</query-timeout>
The query timeout is set to 180 seconds, which means any SQL query that takes more than 180 seconds should get cancelled from the application server side.
But the issue I am facing is inconsistent: queries get timed out now and then without actually taking 180 seconds.
We are also using connection pooling.
While searching Stack Overflow I found this question, which discusses possible causes of this issue when using connection pooling.
The solution suggested there was to set statement_timeout in postgresql.conf. But it is a bit difficult for me to enable statement_timeout in my database environment, as the DB server is shared by multiple applications.
I would like a way to terminate timed-out queries from the client side effectively and consistently while using connection pooling. I am using:
JBoss 4.2.2-GA
postgresql 9.2 (64 bit)
java 1.7
postgresql-9.2-1002.jdbc4.jar
It looks like the issue is with the PostgreSQL JDBC driver 9.2. When I upgraded to the 9.3 driver, the issue was fixed.
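As an addendum: even when editing postgresql.conf is off the table because the server is shared, statement_timeout can be scoped to a single role or session from the client side. A rough sketch, assuming the application connects as a hypothetical role named appuser:
-- server-side timeout applied only to the application's role (persists across connections)
alter role appuser set statement_timeout = '180s';
-- or set it per session, e.g. right after the pool hands out a connection
set statement_timeout = '180s';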
I want to increase PostgreSQL Studio's login/session timeout. When I leave PostgreSQL Studio idle for some time, I get the following message:
You have been logged out due to inactivity. Please relogin or exit.
I am using PostgreSQL from the BigSQL 5.0.3 package bundle. Actually, I am researching the compatibility of MS SQL Server and PostgreSQL queries.
As I am using Postgres for learning purposes rather than anything requiring security, I find it annoying to log in frequently.
How can I increase the login/session timeout inside PostgreSQL Studio?
Postgres itself doesn't have an idle connection timeout. This is coming from something else.
Unfortunately, the timeout value is hard-coded to 30 minutes in PostgreSQL Studio. You should only see that message if you have not used Studio at all for 30 minutes. It's a pretty simple matter to move that property to the config file so it can be changed; we just need to write a patch.
The timeout is there to prevent Studio from holding open idle connections to PostgreSQL. PostgreSQL Studio uses connection pooling to manage connections back to the database, so if the browser session goes away without logging out, we need a way to remove those connections.
I've been a MySQL guy, and now I'm working with Postgres, so I am learning. I'm wondering if someone can tell me why the postgres process on my MacBook is sending and receiving data over my network. I am just noticing this for the first time, so maybe it's been going on before and I just never noticed that postgres does this.
What has me a bit nervous is that I pulled down a production data dump from our server, which is set up with replication, and imported it into my local postgres DB. The settings in my postgresql.conf don't indicate that replication is turned on, so it shouldn't be streaming out to anything, right?
If someone has some insight into what may be happening, or why postgres is sending/receiving packets, I'd love to hear the easy answer (and the complex one if there's more to what's happening).
This is a postgres install via Homebrew on Mac OS X.
Thanks in advance!
Some final thoughts: it's entirely possible, I guess, that the Mac's Activity Monitor also counts local 'network' traffic in its stats. Maybe this isn't going out to the internet...
In short, I would not expect replication to be enabled for a DB that was dumped from a server that had it, as long as the server it was restored to had no replication configured at all.
More detail:
Normally, to get a local copy of a database in Postgres, one would do a pg_dump of the remote database (this could be done from your laptop, pointed at your server), followed by a createdb on your laptop to create the database stub, and then a pg_restore pointed at the dump to populate its contents. [Edit: Re-reading your post, it seems you may have done exactly this, but meant that the dump you used came from a server with replication enabled.]
That would be entirely local (assuming no connections into the DB from off-box), as long as you didn't explicitly set up any replication or anything else that would go off-box. Can you elaborate on what exactly you mean by importing with replication?
Also, if you're concerned about remote traffic coming from Postgres, try running this command a few times over the course of a minute or two (while you are seeing the traffic):
netstat | grep postgres
In general, replication in Postgres is configured at the server level, and has to do with things such as the primary server shipping WAL to the standby server (for streaming replication). You would almost certainly have had to set up entries in postgresql.conf and pg_hba.conf to give the standby server access (such as a replication entry in the latter conf file). Assuming you didn't do steps like this, I think it can pretty safely be concluded that there's no replication going on (especially in conjunction with double-checking via netstat).
You might also double-check the Postgres log to see if it's doing anything replication-related. In a default install, that would probably be in /var/log/postgresql (although I'm not 100% sure whether Homebrew installs put it somewhere else).
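You can also check from inside the database whether anything replication-related is configured or active; a quick sketch (these settings and views exist in all recent Postgres versions, including 9.x):
-- 'minimal' means the server is not generating WAL for archiving or streaming
show wal_level;
-- 0 means the server will not accept any replication connections
show max_wal_senders;
-- any rows here would mean a standby is currently connected and streaming
select * from pg_stat_replication;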
If it's UDP traffic to and from a high port, it's likely to be PostgreSQL's internal statistics collector.
Those sockets are pre-bound to prevent interference and should not be accessible from outside PostgreSQL.