I have a resource-release problem.
PostgreSQL runs on server 1.
A Go service runs on server 2 in a Docker container.
There is an SSH tunnel in its own Docker container for the connection between the database and the service. That container knows nothing except SSH.
Docker is in swarm mode.
The service on server 2 connects to the database via Go's database/sql package. I call sql.Open() with driverName = "postgres", and at first everything is OK. After some time, maybe 30 minutes, (*DB).Query() returns the error read: connection reset by peer. If I call (*DB).Ping() beforehand, Ping() does not return an error, but the Query() call that follows does.
If I call Query() again some time later, a new connection is created; I can see it with a select * from pg_stat_activity; query in the database (state = idle). But the previous connection has not been removed.
So I call (*DB).Close(), create a new DB object, and call sql.Open(). The documentation says:
Close closes the database, releasing any open resources.
(https://golang.org/pkg/database/sql/#DB.Close)
But after the Close call I can still see the old connection in a select * from pg_stat_activity; query in the database (state = idle).
I see the bad connection and the new one.
As a result, there is a resource leak.
What is the correct way to handle a read: connection reset by peer error? Why am I getting it?
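For reference, here is a minimal sketch of the database/sql pool settings that are commonly used to recycle connections before an idle intermediary (such as the SSH tunnel here) resets them. The DSN and the 25-minute lifetime are assumptions, not values taken from this setup:

package main

import (
    "database/sql"
    "log"
    "time"

    _ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
    // sql.Open only validates its arguments; no connection is made yet.
    // The DSN below is a placeholder.
    db, err := sql.Open("postgres", "host=server1 dbname=mydb user=me sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Recycle pooled connections before the tunnel's idle timeout can
    // reset them; 25 minutes is an assumed value, chosen to stay under
    // the ~30-minute failure window described above.
    db.SetConnMaxLifetime(25 * time.Minute)
    db.SetMaxIdleConns(2)
    db.SetMaxOpenConns(10)
}

Note that (*DB).Ping checks only a single connection from the pool, so a successful Ping does not prove that the connection the next Query picks is still alive.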
Related
I have a Python/Flask app that runs on the Heroku platform. The problem is that with every read/write request to Heroku Postgres, the connection that gets created remains open even after the underlying job is complete. My backend code always has conn.close() wherever I read from the database, but regardless, Heroku keeps the connection alive until the server is restarted or the connections are manually killed using:
heroku pg:killall
The problem is that Heroku has a connection limit of 20 for free/hobby databases, and this limit gets saturated pretty quickly.
I want to know whether there is a way to automatically shut off the connection once the underlying job is complete, i.e. when the backend code says:
conn.close()
So I can create a dblink connection, e.g.
select * from dblink( 'dbname=whatever host=the_host user=the_user password=my_password', 'select x, y, z from blah')
works fine.
I can even make what appears to be a persistent connection:
select * from dblink_connect( 'dev', 'dbname=whatever host=the_host user=the_user password=my_password');
select * from dblink( 'dev', 'select x, y, z from blah' );
works fine.
For a while.
And then, after a while, if I try to use dev again, it starts telling me "no open connection". But if I try to run the connect command again, it tells me a connection with that name already exists.
So how do I establish a named connection that I, and others, can just use directly forever afterwards without having to do any sort of connect/disconnect?
You can give dblink() the name of a foreign server, rather than the name of a connection.
create server dev foreign data wrapper dblink_fdw options (host 'thehost', dbname 'whatever');
create user mapping for public server dev options (user 'the_user', password 'my_password');
Then run your dblink query just as you currently are, using 'dev' as the name.
Note that this will increase the number of connections made; it is just that the system manages them so that you don't need to. So it is good for convenience, but not for performance.
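As a usage sketch, in Go since that is the language used elsewhere in this thread (hypothetical: the DSN is a placeholder, and the column types in the AS clause are assumed, since the two-argument form of dblink returns SETOF record and therefore needs a column definition list):

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    db, err := sql.Open("postgres", "dbname=whatever sslmode=disable") // placeholder DSN
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // 'dev' is the foreign server created above. The AS clause spells out
    // the result columns because dblink returns a generic record set;
    // the types here are assumed.
    rows, err := db.Query(`SELECT * FROM dblink('dev', 'select x, y, z from blah')
                           AS t(x integer, y integer, z text)`)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    for rows.Next() {
        var x, y int
        var z string
        if err := rows.Scan(&x, &y, &z); err != nil {
            log.Fatal(err)
        }
        fmt.Println(x, y, z)
    }
}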
The documentation says:
The connection will persist until closed or until the database session is ended.
So I suspect that you are using a connection pool, and:
- you may get a different database session for each transaction (but the dblink connection is open in only one of them)
- the connection pool may close the backend connections after a while, thereby also closing the dblink connection
If you want to use a feature like dblink, where session state outlives the duration of a transaction, you need session-level pooling.
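For example, if the pool in front of the database is PgBouncer (an assumption; the thread does not say which pooler is in use), session state such as dblink connections survives only in session mode:

; pgbouncer.ini (hypothetical sketch) -- in session mode a server
; connection stays assigned to one client for the whole session, so
; session state such as dblink connections persists across transactions.
[pgbouncer]
pool_mode = session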
I have a PostgreSQL database deployed in Google Cloud that I am trying to connect to from a Cloud Run instance. I have tried the following two packages, both of them eventually leading to the same exception:
https://pub.dev/packages/postgres
https://pub.dev/packages/database_adapter_postgre
The exception I am getting is:
SocketException: Failed host lookup: '/cloudsql/{INSTANCE_CONNECTION_NAME}' (OS Error: Name or service not known, errno = -2)
I get this in both cases when trying to establish the connection; in the case of the first package:
connection = new PostgreSQLConnection(
    '/cloudsql/{INSTANCE_CONNECTION_NAME}',
    5432,
    'postgres',
    username: 'username',
    password: 'password');
await connection.open(); // <-- exception thrown here
I have tried changing the host string to /cloudsql/{INSTANCE_CONNECTION_NAME}/.s.PGSQL.5432, but that did not work. My first thought was permissions: the service account the Cloud Run instance is using (xxx-compute@developer.gserviceaccount.com) has the Cloud SQL Editor role (I tried Client and Admin too).
Running the same database code locally from a Dart console app, I can connect to my database via its public IP address as the host with both packages, so the database itself is up and running.
Can someone point me in the right direction with this exception, or share example code for any of the packages above that shows how to connect to a Cloud SQL instance from Cloud Run?
Edit:
I tried setting up a proxy locally, to test whether the connection string is wrong, like so:
.\cloud_sql_proxy.exe -instances={INSTANCE_CONNECTION_NAME}=tcp:5433 psql
Then I changed the connection host value in the code to localhost, and the port to 5433.
To my surprise, it works, so locally I am seemingly able to connect to the DB using that connection string. It still doesn't work when I use it from a Cloud Run instance, though. Any help is appreciated!
It seems Dart doesn't support connecting through a Unix socket; you need to configure an IP (public or private, as you need).
Alternatively, you can use pg, which supports Unix socket connections.
Hope this helps.
Just for those who come across this question in the future: as it stands right now, I had to resort to the suggestion posted by Daniele Ricci and use the public IP for the database. The one thing to point out here is that since Cloud Run instances don't have a static IPv4 address to run from, the DB had to be set to allow connections from anywhere (I had to add an authorized network of 0.0.0.0/0), which is unsafe. Until the kind development team of Dart figures out how to use Unix sockets, this seems to be the only way to get it working.
Not having actually tested this myself, according to the source code of the postgres package, you have to specify that you want a Unix socket:
connection = PostgreSQLConnection(
  ...
  isUnixSocket: true, // <-- here
);
The default is false.
The host you pass must also be valid. The docs say:
[host] must be a hostname, e.g. "foobar.com" or IP address. Do not include scheme or port.
I was struggling with the same issue.
The solution is to create a connection as follows:
import 'dart:io'; // for Platform.environment
import 'package:postgres/postgres.dart';

PostgreSQLConnection getProdConnection() {
  final String connectionName = Platform.environment['CLOUD_SQL_CONNECTION_NAME']!;
  final String databaseName = Platform.environment['DB_NAME']!;
  final String user = Platform.environment['DB_USER']!;
  final String password = Platform.environment['DB_PASS']!;

  // Cloud SQL exposes a Unix socket at this path inside the Cloud Run container.
  final String socketPath = '/cloudsql/$connectionName/.s.PGSQL.5432';

  return PostgreSQLConnection(
    socketPath,
    5432,
    databaseName,
    username: user,
    password: password,
    isUnixSocket: true,
  );
}
Then, when you create the Cloud Run service, you need to define the 'Environment variables' used above (CLOUD_SQL_CONNECTION_NAME, DB_NAME, DB_USER and DB_PASS).
You also need to select your SQL instance on the 'Connections' tab.
The last thing needed is to configure a Cloud Run service account.
Then the connection to the instance should succeed, and there is no longer any need for the 0.0.0.0/0 authorized network.
However, if you try to run this connection locally on a Windows device during development, the connection will not be allowed and you will be presented with this error message: 'Unix domain sockets are not available on this operating system.'
Therefore, I recommend that you open Cloud SQL networking to your public address and define a local environment that uses the 'Public IP address' of your SQL instance.
For more information on this topic, I can recommend these resources that have guided me to the right solution:
https://cloud.google.com/sql/docs/postgres/connect-instance-cloud-run#console_5
https://github.com/dart-lang/sdk/issues/47899
We are using the PostgreSQL Crane plan, and we get a lot of log lines like this:
app postgres - - [5-1] ... LOG: could not receive data from client: Connection reset by peer
We are using about 50 dynos.
Is PostgreSQL running out of connections with this many dynos?
Can someone help me understand this case?
Thanks
From what I've found, the cause of the errors is the client not disconnecting at the end of the session, or a new connection not being created.
New connection solving the problem:
Postgres error on Heroku with Resque
Explicit disconnection solving the problem:
https://github.com/resque/resque/issues/367 (comment #2)
There's a Heroku FAQ entry on this: Understanding Heroku Postgres Log Statements and Common Errors: could not receive data from client: Connection reset by peer.
Although this log is emitted from postgres, the cause for the error has nothing to do with the database itself. Your application happened to crash while connected to postgres, and did not clean up its connection to the database. Postgres noticed that the client (your application) disappeared without ending the connection properly, and logged a message saying so.
If you are not seeing your application’s backtrace, you may need to ensure that you are, in fact, logging to stdout (instead of a file) and that you have stdout sync’d.
Assuming that no statements to close the connection are made before my script ends, and that no exception is encountered along the way, does the database connection stay open?
I'm connecting to the database programmatically via Python Psycopg2 and via Java JDBC4 driver.
Not entirely sure what you want exactly, but let's try:
You can see the connections that exist at any time with PGAdmin or this SQL command
SELECT * FROM pg_stat_activity;
It should be fairly simple to spot when, for your specific use case, the connection closes.
If an SQL query is running at the time you close a connection, I think it will run to completion, i.e. the backend serving it will remain alive, even if the connection is closed from the client side.
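To make that visible, here is a small sketch in Go (the language used earlier in this thread; the DSN is a placeholder) that counts backend sessions in pg_stat_activity before and after closing a client connection:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

// countBackends asks the server how many sessions it currently sees.
func countBackends(db *sql.DB) int {
    var n int
    if err := db.QueryRow(`SELECT count(*) FROM pg_stat_activity`).Scan(&n); err != nil {
        log.Fatal(err)
    }
    return n
}

func main() {
    const dsn = "dbname=mydb sslmode=disable" // placeholder

    observer, err := sql.Open("postgres", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer observer.Close()

    client, err := sql.Open("postgres", dsn)
    if err != nil {
        log.Fatal(err)
    }
    if err := client.Ping(); err != nil { // force a real connection to open
        log.Fatal(err)
    }

    fmt.Println("backends while client is open:", countBackends(observer))
    if err := client.Close(); err != nil {
        log.Fatal(err)
    }
    // The client's backend should disappear from pg_stat_activity shortly
    // after Close returns.
    fmt.Println("backends after client.Close():", countBackends(observer))
}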