PostgreSQL on remote database: No buffer space available (maximum connections reached)?

I'm trying to load a huge amount of data into PostgreSQL (PostGIS, to be precise).
There are about 100 scenes; each scene contains 12 bands of raster imagery, and each image is about 100 MB.
What I do:
For each scene in scenes (
    for each band in scene (
        Open connection to PostGIS db
        Add band
    )
    SET PGPASSWORD=password
    psql -h 192.168.2.1 -p 5432 -U user -d spatial_db -f combine_bands.sql
)
It ran well until scene #46, then failed with the error: No buffer space available (maximum connections reached).
I run the script on Windows 7; my remote server is on Ubuntu 12.04 LTS.
UPDATE: connect to the remote server and run the SQL file there.

This message:
No buffer space available (maximum connections reached?)
comes from a Java exception, not the PostgreSQL server. A Java stack trace may be useful to get some context.
If the connection was rejected by PostgreSQL, the message would be:
FATAL: connection limit exceeded for non-superusers
Still, it may be that the program exceeds its maximum number of open sockets by not closing its connections to PostgreSQL. Your script should close each DB connection as soon as it's finished with it, or open just one and reuse it throughout the whole process.
Simultaneous connections from the same program are only needed when issuing queries in parallel, which doesn't seem to be the case here.
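To see from the server side whether connections really are piling up while the loop runs, here is a quick check in plain SQL (assuming you can connect as a user allowed to read pg_stat_activity):

-- open connections, grouped by user and client machine
SELECT usename, client_addr, count(*)
FROM pg_stat_activity
GROUP BY usename, client_addr
ORDER BY count(*) DESC;
-- the server-side ceiling
SHOW max_connections;

If the count for the Windows machine keeps growing from scene to scene, the connections opened per band are not being released.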

Related

postgresql / postgis : low disk space

I'm trying to run a query on two tables in my PostgreSQL database.
The query is like this:
psql -d gis -c "create table hgb as (select osm.*, h.geometry from osm_polygon osm join hautegaronne h on ST_contains(h.geometry,osm.way));"
The osm table has 1.3G rows and the h table has 1 row.
I get the error:
"CAUTION: Connection terminated due to crash of another server process
DETAIL: The postmaster instructed this server process to roll back the transaction
running and exiting because another server process exited abnormally
and that there is probably corrupted shared memory.
TIP: In a moment, you should be able to reconnect to the database.
data and relaunch your order.
the connection to the server was cut unexpectedly
The server may have terminated abnormally before or during the
request processing.
the connection to the server has been lost"
And my Linux is showing me a notification: "the / volume has only 438.2 MB of free disk space".
The PostgreSQL log file says:
ERROR: could not write block 16700650 in file 'base/16384/486730.127': No space left on device
I have tried to set temp_file_limit = 10000000 in postgresql.conf but it didn't change anything.
Could I set an external hard drive as a temp folder for PostgreSQL to process my query?
Can I set a cap on the temp file size?
Thank you
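One way to answer the "external hard drive" question is with a tablespace. Here is a hedged sketch, assuming the drive is mounted at /mnt/external with a directory owned by the postgres OS user (the mount point and the tablespace name spill are made up for illustration):

-- create a tablespace on the external drive
CREATE TABLESPACE spill LOCATION '/mnt/external/pg_spill';
-- route this session's temporary files (sorts, hashes) to it
SET temp_tablespaces = 'spill';
-- and/or build the result table itself on the external drive, since the
-- out-of-space error above is on a relation file under base/, not a temp file
CREATE TABLE hgb TABLESPACE spill AS
SELECT osm.*, h.geometry
FROM osm_polygon osm
JOIN hautegaronne h ON ST_Contains(h.geometry, osm.way);

Note that temp_file_limit is expressed in kilobytes and only caps temporary file usage per session; it does not limit the size of the table being created, which is likely why setting it did not help here.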

Google Cloud SQL MySQL 5.7 denies connection when doing large import

I'm having difficulties migrating a database (a ~3 GB SQL file) from MySQL 5.6 to MySQL 5.7 on Google Cloud SQL.
First I made a dump of the MySQL 5.6 server database:
mysqldump -hxx.xx.xx.xx -uroot -pxxxx dbname --opt --hex-blob --default-character-set=utf8 --no-autocommit > dbname.sql
I then tried to import the database with cloudsql-import:
.go/bin/cloudsql-import --dump=dbname.sql --dsn='root:password@tcp(xx.xx.xx.xx:3306)/dbname'
The import starts but after a while (around 10 minutes) I receive the following error message:
2016/06/29 13:55:48 dial tcp xx.xx.xx.xx:3306: getsockopt: connection refused
Any further connection attempts to the MySQL server are denied with the following error message:
ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (111)
Only a full restart (made from the Google Cloud Platform console) makes it possible to connect again.
I made a full migration from 5.5 to 5.6 using this method not so long ago. Any ideas why this doesn't work with 5.7?
Would you check the storage disk usage on the Console Overview page of the instance? If the storage is full, you can increase the storage size of your instance by changing the storage size value on the Edit page.
If binary logging is enabled, a lot of space will be taken up by binary logs. You could consider turning it off while you are running the import.
If you still have trouble with the instance, you can send an email to cloud-sql@google.com for further investigation. Thanks.
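If you want to check from a MySQL client whether binary logging is currently enabled, and how much space the logs occupy, these standard MySQL statements should work (nothing Cloud-SQL-specific is assumed):

-- is binary logging on?
SHOW VARIABLES LIKE 'log_bin';
-- list the binary log files and their sizes
SHOW BINARY LOGS;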
I tried analyzing the different rows where the import had timed out, but didn't find anything out of the ordinary. I then fiddled with the available parameters in Google Cloud SQL and in mysqldump.
I finally just tried a better machine type (from 2 cores / 8 GB RAM to 8 cores / 30 GB RAM) and it "solved" the problem.

pg_dump gets SSL error, seems to time out

I'm trying to download a database to my local machine using pg_dump. The command I'm using is:
pg_dump --host xx.xx.xx.xx --port xxxx --username "xxx" --password --format custom --blobs --verbose --file "testing.db" "xxx"
When it gets to dumping the last table in the database it always crashes with this error:
pg_dump: Dumping the contents of table "versions" failed: PQgetCopyData() failed.
pg_dump: Error message from server: SSL error: sslv3 alert handshake failure
pg_dump: The command was: COPY public.xxx (columns) TO stdout;
I SSH'd into a server that's a bit closer to the server I'm downloading from (I'm in Brisbane, it's in San Francisco) and was able to do the pg_dump without issue. So I know the database server is fine. I suspect it's a timeout because it's getting to the last table before failing; if it was actually an SSL error I'd have expected it to come up sooner. That said, the timeout occurs after a different amount of time each time it fails (the two most recent tests failed after 1300s and 1812s respectively).
Any tips on how to debug are welcome.
I'm on OS X 10.8.5. Local pg_dump is 9.2.4; the server is Ubuntu Server running PostgreSQL 9.1.9.
It might be an SSL renegotiation problem.
See this parameter on the server (postgresql.conf) and the associated warning about old SSL client libraries, although OS X 10.8 seems newer than this.
From the 9.1 documentation:
ssl_renegotiation_limit (integer)
Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
Note: SSL libraries from before November 2009 are insecure when using SSL renegotiation, due to a vulnerability in the SSL protocol. As a stop-gap fix for this vulnerability, some vendors shipped SSL libraries incapable of doing renegotiation. If any such libraries are in use on the client or server, SSL renegotiation should be disabled.
EDIT:
Updating this parameter in postgresql.conf does not require a server restart, only a reload with /etc/init.d/postgresql reload or service postgresql reload.
The value can also be checked in SQL with show ssl_renegotiation_limit;
Even if the size of the dump is smaller than 512 MB, the amount of data transmitted may be much larger, since pg_dump compresses the data locally when using the custom format (--format custom).
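As a sketch of that check-and-reload cycle (run on the server as a superuser; pg_reload_conf() simply re-reads postgresql.conf):

-- after setting ssl_renegotiation_limit = 0 in postgresql.conf
SELECT pg_reload_conf();
-- confirm the value the running server is using
SHOW ssl_renegotiation_limit;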

Heroku Postgres: Too many connections. How do I kill these connections?

I have an app running on Heroku. This app has a Postgres 9.2.4 (Dev) add-on installed. To access my online database I use Navicat Postgres. Sometimes Navicat doesn't cleanly close the connections it sets up with the Postgres database. The result is that after a while there are 20+ open connections to the Postgres database. My Postgres plan only allows 20 simultaneous connections, so with 20+ open connections my Postgres database is now unreachable (too many connections).
I know this is a problem with Navicat and I'm trying to solve it on that end. But when it does happen (too many connections), how can I fix it (e.g. close all connections)?
I've tried all of the following things, without result.
Closed Navicat & restarted my computer (OS X 10.9)
Restarted my Heroku application (heroku restart)
Tried to restart the online database, but I found out there is no option to do this
Manually closed all connections from OS X to the IP of the Postgres server
Restarted our router
I think it's obvious there are some 'dead' connections on the Postgres side. But how do I close them?
Maybe have a look at what heroku pg:kill can do for you? https://devcenter.heroku.com/articles/heroku-postgresql#pg-ps-pg-kill-pg-killall
heroku pg:killall will kill all open connections, but that may be a blunt instrument for your needs.
Interestingly, you can actually kill specific connections using heroku's dataclips.
To get a detailed list of connections, you can query via dataclips:
SELECT * FROM pg_stat_activity;
In some cases, you may want to kill all connections associated with an IP address (your laptop or in my case, a server that was now destroyed).
You can see how many connections belong to each client IP using:
SELECT client_addr, count(*)
FROM pg_stat_activity
WHERE client_addr is not null
AND client_addr <> (select client_addr from pg_stat_activity where pid=pg_backend_pid())
GROUP BY client_addr;
which will list the number of connections per IP excluding the IP that dataclips itself uses.
To actually kill the connections, you pass their "pid" to pg_terminate_backend(). In the simple case:
SELECT pg_terminate_backend(1234)
where 1234 is the offending PID you found in pg_stat_activity.
In my case, I wanted to kill all connections associated with a (now dead) server, so I used:
SELECT pg_terminate_backend(pid), host(client_addr)
FROM pg_stat_activity
WHERE host(client_addr) = 'IP HERE';
1) First, log in to Heroku with the correct account (in case you have multiple) using heroku login.
2) Then run heroku apps to get a list of your apps and copy the name of the one that has the PostgreSQL db installed.
3) Finally, run heroku pg:killall --app appname to terminate all connections.
From the Heroku documentation (emphasis is mine):
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This occurs on Starter Tier (dev and basic) plans, which have a max connection limit of 20 per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has created connection leaks over time. A discussion on handling connections in a Rails application can be found here.
Because Heroku does not provide superuser access your options are rather limited to the above.
Restart server
heroku restart --app <app_name>
It will close all connections and restart.
As the superuser (eg. "postgres"), you can kill every session but your current one with a query like this:
select pg_cancel_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();
If they do not go away, you might have to use a stronger "kill", but certainly test with pg_cancel_backend() first.
select pg_terminate_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();

Timeout of remote connection to Postgresql

I have two EC2 instances; one of them needs to insert large amounts of data into a PostgreSQL db that lives on the other one. Incidentally it's OpenStreetMap data and I am using the osm2pgsql utility to insert the data; not sure if that's relevant, I don't think so.
For smaller inserts everything is fine but for very large inserts something times out after around 15 minutes and the operation fails with:
COPY_END for planet_osm_point failed: SSL SYSCALL error: Connection timed out
Is this timeout enforced by PostgreSQL, Ubuntu, or AWS? Not too sure where to start troubleshooting.
Thanks
This could be caused by SSL renegotiation. Check the server log, and maybe tweak
ssl_renegotiation_limit = 512MB (the default)
Setting it to zero will disable renegotiation.
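To confirm what the server is currently using before editing postgresql.conf, here is a quick check from any client session (plain SQL, nothing osm2pgsql-specific):

-- current renegotiation threshold on the server
SHOW ssl_renegotiation_limit;

If you set it to 0 in postgresql.conf, a reload (service postgresql reload) is enough to apply the change, as noted in the pg_dump question above.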