Is there any way to restart postgres on Heroku?

I deploy a Rails app using Unicorn. After every deployment, and after every tweak I make to DB_POOL, I see Postgres still holding some connections as idle, and the new settings take effect so inconsistently that I wonder whether the service is restarted at all after a pool change.
I haven't found any documentation about this. Is there a command similar to pg_ctl on Heroku?

No, you cannot restart your Postgres database on Heroku. If you have lingering connections, it's likely an app issue. Try installing the pg-extras plugin and looking for IDLE connections:
https://github.com/heroku/heroku-pg-extras
Also, you can try setting up a custom ActiveRecord connection in your after_fork block and enabling the connection reaper, which should clean up any lingering dead connections it finds:
https://devcenter.heroku.com/articles/concurrency-and-database-connections
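If you want to see those idle connections yourself, the pg_stat_activity view shows them from any client, not just the plugin. A minimal sketch with psycopg2 (illustration only; the question is about a Rails app, and DATABASE_URL is assumed to be set the way Heroku sets it):

# Sketch: list idle connections via pg_stat_activity.
# Assumes psycopg2 and a DATABASE_URL in libpq URI form, as Heroku provides.
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
cur = conn.cursor()
# On PostgreSQL 9.2+ the column is "state"; older versions expose
# current_query = '<IDLE>' instead.
cur.execute("""
    SELECT pid, usename, application_name, state, state_change
    FROM pg_stat_activity
    WHERE state = 'idle'
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()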

Related

Heroku Postgres - How to Auto Close alive DB Connections - Flask Python App

I have a Python/Flask app that runs on the Heroku platform. The problem is that with every read/write request to Heroku Postgres, the connection that gets created remains open even after the underlying job is complete. My backend code always calls conn.close() wherever I read from the database, but regardless, Heroku keeps the connection alive until the server is restarted or the connections are manually killed using:
heroku pg:killall
The problem is that Heroku has a connection limit of 20 for free/hobby databases, and this limit gets saturated pretty quickly.
I want to know whether there is a way to automatically shut off the connection once the underlying job is complete, i.e. when the backend code calls:
conn.close()
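For illustration only (this is not from the original question): connections usually linger either because close() is skipped on an error path, or because a pool (for example SQLAlchemy's default QueuePool) deliberately keeps checked-in connections open to the server. A minimal psycopg2 sketch that releases the server-side connection deterministically; the helper function is hypothetical:

# Sketch: guarantee the connection is closed even if the query raises,
# so no idle session is left behind on the Heroku Postgres side.
import os
from contextlib import closing
import psycopg2

def fetch_rows(query, params=None):  # hypothetical helper, not from the original app
    with closing(psycopg2.connect(os.environ["DATABASE_URL"])) as conn:
        with conn.cursor() as cur:
            cur.execute(query, params)
            return cur.fetchall()

If the app goes through SQLAlchemy instead of raw psycopg2, note that closing a pooled connection only returns it to the pool; using NullPool (or disposing the engine) is what actually ends the server-side session.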

PostgreSQL 9.2.1: normal user mode vs. standalone backend mode

I have a remote machine running PostgreSQL 9.2.1. Suddenly I can't start my PostgreSQL server (the pg_isready command reports that connections are being rejected). My question is: is there any way I can start my database in standalone backend mode when it won't start in normal user mode?
And what is the difference between starting the PostgreSQL server in those two modes?
Thanks in advance.
Rather than using single user mode, look into the PostgreSQL server log file. That should tell you what the problem is.
In single-user mode, there will be just a single process accessing the database; none of the background processes are started. You'll be superuser, and the database process will last only for the duration of your session. This is something for emergency recovery, like when system tables are corrupted, you forgot your superuser password and so on.
In your case, single-user mode will probably only help if the database shut down because of an impending transaction ID wraparound. You can then run the rescuing VACUUM (FREEZE) in single-user mode.
As soon as you have fixed your problem, upgrade to a supported release of PostgreSQL.
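The wraparound scenario mentioned above can be checked for before it becomes an emergency: the age of each database's datfrozenxid shows how close you are to the limit. A minimal sketch with psycopg2 (connection details are placeholders; on a server that no longer starts you would run the equivalent SQL from single-user mode instead):

# Sketch: check how close each database is to transaction ID wraparound.
# Host and credentials are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres host=localhost")
cur = conn.cursor()
cur.execute("SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC")
for datname, xid_age in cur.fetchall():
    # Autovacuum normally freezes tuples long before the ~2 billion limit;
    # values creeping toward it mean VACUUM (FREEZE) is overdue.
    print(datname, xid_age)
cur.close()
conn.close()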

Not able to connect to PostgreSQL after 30 connections

I have two cloud PostgreSQL servers. The first one works fine, but on the second one, after about 30 minutes I am no longer able to connect from my Java application. When I connect from pgAdmin it shows 30 to 40 connections, and after killing those connections everything runs smoothly.
Its configuration:
postgresql/9.3
max_connections = 100
shared_buffers = 4GB
When the same application connects to the other PostgreSQL server with the same schema, everything works fine indefinitely.
Configuration:
postgresql/9.1
max_connections = 100
shared_buffers = 32MB
Can you please help me understand or fix the issue?
I work on a PostgreSQL 9.3 instance with hundreds of open connections. I agree that the open connections themselves shouldn't be a problem. Since we don't have much information, what follows is a description of how to get started troubleshooting.
Check server logs for anything wrong. Maybe there is an issue on the OS level with initiating connections?
Try logging in with psql as the application user. Does the problem persist? If not, the problem is not with PostgreSQL. I would take a closer look at the Java code and see if something is happening there.
Note that psql and other libpq-based clients may not give you the full picture. Try connecting locally over a non-SSL connection while watching a packet capture; that way you can find (and look up) the SQLSTATE error of the failed connection. This is necessary because, for legacy and backwards-compatibility reasons, libpq does not pass the SQLSTATE up to the client application when connecting to the database.
My bet, though, is that this is not a PostgreSQL issue. It may be an operating system issue, a resource issue, or a client application issue.
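Before killing the 30-40 connections, it is also worth seeing who is holding them and in what state; grouping pg_stat_activity usually makes the culprit obvious (for example, a leaking connection pool in the Java application). A minimal psycopg2 sketch; the connection parameters are placeholders:

# Sketch: summarize which users/applications hold connections and in what state.
# The "state" column needs PostgreSQL 9.2+, which covers the 9.3 server here.
import psycopg2

conn = psycopg2.connect("host=localhost dbname=postgres user=postgres")
cur = conn.cursor()
cur.execute("""
    SELECT usename, application_name, client_addr, state, count(*)
    FROM pg_stat_activity
    GROUP BY usename, application_name, client_addr, state
    ORDER BY count(*) DESC
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()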

PostgreSQL will not listen unless restarted after every server restart

I have had a weird problem since I updated my PostgreSQL to 9.4 on my Ubuntu server.
Every time I restart the server, I also need to restart my PostgreSQL database manually so that it listens on the IP addresses configured in the postgresql.conf file:
Do you have any idea why it might happen?
Thanks!
Edit: here is the order of services on startup, in case that helps:

Why is Postgres sending data somewhere? [duplicate]

I've been a MySQL guy, and now I'm working with Postgres so I am learning. Wondering if someone can tell me why my postgres process on my macbook is sending and receiving data over my network. I am just noticing this is happening for the first time - so maybe it's been going on before this and I just never noticed postgres does this.
What has me a bit nervous, is that I pulled down a production datadump from our server which is set up with replication and I imported it to my local postgres db. The settings in my postgresql.conf don't indicate replication is turned on. So it shouldn't be streaming out to anything, right?
If someone has some insight into what may be happening, or why postgres is sending/receiving packets, I'd love to hear the easy answer (and the complex one if there's more to what's happening).
This is a postgres install via Homebrew on MacOSX.
Thanks in advance!
Some final thoughts: It's entirely possible, I guess, that Mac's activity monitor also shows local 'network' traffic stats. Maybe this isn't going out to the internets.....
In short: I would not expect replication to be enabled for a database that was dumped from a server with replication, if the server it was restored to has no replication configured at all.
More detail:
Normally, to get a local copy of a database in Postgres, one would do a pg_dump of the remote database (this could be done from your laptop, pointing at your server), followed by a createdb on your laptop to create the database stub, and then a pg_restore pointed at the dump to populate its contents. [Edit: Re-reading your post, it seems you may have done exactly this, but meant that the dump you used came from a server with replication enabled.]
That would be entirely local (assuming no connections into the DB from off-box), so long as you didn't explicitly set up any replication or anything else that would go off-box. Can you elaborate on what exactly you mean by importing with replication?
Also, if you're concerned about remote traffic coming from Postgres, try running this command a few times over the period of a minute or two (when you are seeing the traffic):
netstat | grep postgres
In general, replication in Postgres is configured at the server level, and has to do with things such as the master server shipping WAL files to the standby server (for streaming replication). You would almost certainly have had to set up entries in postgresql.conf and pg_hba.conf to give the standby server access (such as a replication entry in the latter file). Assuming you didn't take steps such as this, I think it can pretty safely be concluded that there's no replication going on (especially in conjunction with double-checking via netstat).
You might also double-check the Postgres log to see if it's doing anything replication related. In a default install, that'd probably be in /var/log/postgresql (although I'm not 100% sure if Homebrew installs put it somewhere else).
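A quick cross-check alongside netstat is to ask the server itself whether it could be replicating at all. A minimal psycopg2 sketch (the connection string assumes a default local install):

# Sketch: confirm the local server has no outgoing replication.
# Connection string is a placeholder for a default local install.
import psycopg2

conn = psycopg2.connect("dbname=postgres host=localhost")
cur = conn.cursor()
# pg_stat_replication lists active WAL senders, i.e. connected standbys.
cur.execute("SELECT count(*) FROM pg_stat_replication")
print("active replication connections:", cur.fetchone()[0])
# wal_level = 'minimal' means the server cannot stream WAL to a standby at all.
cur.execute("SHOW wal_level")
print("wal_level:", cur.fetchone()[0])
cur.close()
conn.close()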
If it's UDP traffic to and from a high port, it's likely to be PostgreSQL's internal statistics collector.
Those sockets are pre-bound to prevent interference and should not be accessible from outside PostgreSQL.