I am trying to back up a database named products with pg_dump.
The total size of the database is 1.6 GB. One of the tables in the database is product_image, which is 1 GB in size.
When I run pg_dump on the database, the backup fails with this error:
pg_dump: Dumping the contents of table "product_image" failed: PQgetCopyData() failed.
pg_dump: Error message from server: lost synchronization with server: got message type "d", length 6036499
pg_dump: The command was: COPY public.product_image (id, username, projectid, session, filename, filetype, filesize, filedata, uploadedon, "timestamp") TO stdout;
If I try to back up the database while excluding the product_image table, the backup succeeds.
I tried increasing shared_buffers in postgresql.conf from 128MB to 1.5GB, but the issue still persists. How can this issue be resolved?
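For reference, a sketch of excluding the table and then dumping it separately (the database name products comes from the question; connection options are omitted):
# dump everything except the large table
pg_dump --format=custom --exclude-table=public.product_image --file=products_rest.dump products
# dump the large table on its own so nothing is lost
pg_dump --format=custom --table=public.product_image --file=product_image.dump products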
I ran into the same error and it was due to a buggy patch from Red Hat for OpenSSL in early June 2015. There is related discussion on the PostgreSQL mailing list.
If you use SSL connections and cross a transferred size threshold, which depends on your PG version (default 512MB for PG < 9.4), the tunnel attempts to renegotiate the SSL keys and the connection dies with the errors you posted.
The fix that worked for me was setting the ssl_renegotiation_limit to 0 (unlimited) in postgresql.conf, followed by a reload.
ssl_renegotiation_limit (integer)
Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
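Concretely, the change is a single line in postgresql.conf followed by a reload (the service name and reload command may differ on your distribution):
# postgresql.conf
ssl_renegotiation_limit = 0     # 0 disables SSL renegotiation
# apply without a restart
sudo service postgresql reload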
I'm trying to run a query on two tables in my PostgreSQL database.
The query is like this:
psql -d gis -c "create table hgb as (select osm.*, h.geometry from osm_polygon osm join hautegaronne h on ST_contains(h.geometry,osm.way));"
The osm table has 1.3G rows and the h table has 1 row.
I get the error:
"CAUTION: Connection terminated due to crash of another server process
DETAIL: The postmaster instructed this server process to roll back the transaction
running and exiting because another server process exited abnormally
and that there is probably corrupted shared memory.
TIP: In a moment, you should be able to reconnect to the database.
data and relaunch your order.
the connection to the server was cut unexpectedly
The server may have terminated abnormally before or during the
request processing.
the connection to the server has been lost"
And Linux is showing me a notification: "the / volume has only 438.2 MB of free disk space".
The PostgreSQL log file says:
ERROR: could not write block 16700650 in file "base/16384/486730.127": No space left on device
I have tried to set temp_file_limit = 10000000 in postgresql.conf but it didn't change anything.
Could I set an external hard drive as a temp folder for PostgreSQL to process my query?
Can I cap the temp file size?
Thank you
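One possible direction, as a sketch: create a tablespace on the external drive and point temporary files at it, optionally capping per-session temp usage. The location /mnt/external/pg_temp is a hypothetical mount point that must exist and be owned by the postgres OS user; note that temp_file_limit cancels a query that exceeds it rather than making it need less space.
-- run as a superuser; the location below is a hypothetical mount point on the external drive
CREATE TABLESPACE temp_space LOCATION '/mnt/external/pg_temp';
-- send sort/hash temporary files for this session to the new tablespace
SET temp_tablespaces = 'temp_space';
-- cancel any query in this session that needs more than ~10 GB of temp files
SET temp_file_limit = '10GB';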
I'm running a Red Hat 4.8.3-9 server in the AWS cloud with PostgreSQL 9.4. The database has consumed 100% of my disk space. I went into the database and truncated the table with the most data. After viewing the size of the tables with \d+, there weren't any tables over a couple of MBs, yet when I ran du -h * --max-depth=1 I found that /var/lib/pgsql94/data/base/16384 held 472G of the 500G total storage. Even after truncating the tables, my disk usage is still at 99%. I'm wondering if there is a way to release that space, because I believe simply deleting all the OID files in data/base/16384 would be bad. I have tried stopping and restarting the postgres service. I am not allowed to reboot the machine, unfortunately.
df -ih shows inode usage is 1%
sudo lsof +L1 does not show any large files at all
Thank you
Log files: 8K worth of this repeating pattern:
LOG: could not write temporary statistics file "pg_stat_tmp/db_16384.tmp": No space left on device
LOG: could not close temporary statistics file "pg_stat_tmp/db_0.tmp": No space left on device
LOG: could not close temporary statistics file "pg_stat_tmp/global.tmp": No space left on device
LOG: using stale statistics instead of current ones because stats collector is not responding
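To see which relations the large files under base/16384 actually belong to (and spot any that belong to nothing), something along these lines should work on 9.4; 12345 below is only a placeholder, so substitute a numeric file name reported by du:
-- map a numeric file name under base/16384/ back to its relation (0 = default tablespace)
SELECT pg_filenode_relation(0, 12345);
-- or list the biggest tables in the current database with their filenodes
SELECT oid::regclass AS relation,
       pg_relation_filenode(oid) AS filenode,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;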
I have a problem with postgres not being able to start:
* Starting PostgreSQL 9.5 database server
* The PostgreSQL server failed to start. Please check the log output:
2016-08-25 04:20:53 EDT [1763-1] FATAL: could not map anonymous shared memory: Cannot allocate memory
2016-08-25 04:20:53 EDT [1763-2] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 1124007936 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
This is the response that I got. I checked the log with tail -f /var/log/postgresql/postgresql-9.5-main.log and I can see the same message.
Can someone suggest what the problem might be?
I used the following commands to stop/start the Postgres server on Ubuntu 14.04 with Postgres 9.5:
sudo service postgresql stop
sudo service postgresql start
As other people suggested, the error was simply a lack of memory. I increased the droplet's memory on DigitalOcean and that fixed it.
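If adding memory isn't an option, the HINT in the error points at shrinking the shared memory request instead; a minimal sketch of the relevant postgresql.conf settings (values are illustrative, and shared_buffers only changes on a full restart):
# postgresql.conf
shared_buffers = 128MB       # smaller shared memory request
max_connections = 50         # fewer connections also shrink the request
# shared_buffers requires a restart, not just a reload
sudo service postgresql restart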
I'm trying to download a database to my local machine using pg_dump. The command I'm using is:
pg_dump --host xx.xx.xx.xx --port xxxx --username "xxx" --password --format custom --blobs --verbose --file "testing.db" "xxx"
When it gets to dumping the last table in the database it always crashes with this error:
pg_dump: Dumping the contents of table "versions" failed: PQgetCopyData() failed.
pg_dump: Error message from server: SSL error: sslv3 alert handshake failure
pg_dump: The command was: COPY public.xxx (columns) TO stdout;
I SSH'd into a server that's a bit closer to the server I'm downloading from (I'm in Brisbane, it's in San Francisco) and was able to do the pg_dump without issue. So I know the database server is fine. I suspect it's a timeout because it's getting to the last table before failing; if it was actually an SSL error I'd have expected it to come up sooner. That said, the timeout occurs after a different amount of time each time it fails (the two most recent tests failed after 1300s and 1812s respectively).
Any tips on how to debug are welcome.
I'm on OS X 10.8.5. Local pg_dump is 9.2.4, server is Ubuntu Server running psql 9.1.9.
It might be an SSL renegotiation problem.
See this parameter on the server (postgresql.conf) and the associated warning about old SSL client libraries, although OS X 10.8 seems newer than this.
From the 9.1 documentation:
ssl_renegotiation_limit (integer)
Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
Note: SSL libraries from before November 2009 are insecure when using SSL renegotiation, due to a vulnerability in the SSL protocol. As a stop-gap fix for this vulnerability, some vendors shipped SSL libraries incapable of doing renegotiation. If any such libraries are in use on the client or server, SSL renegotiation should be disabled.
EDIT:
Updating this parameter in postgresql.conf does not require a server restart, but a server reload with /etc/init.d/postgresql reload or service postgresql reload.
The value can also be checked in SQL with show ssl_renegotiation_limit;
Even if the size of the dump is smaller than 512MB, it may be that the amount of data transmitted is much larger, since pg_dump compresses the data locally when using the custom format (--format custom).
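To get a rough sense of how much uncompressed data each table pushes over the connection (and whether the 512MB threshold can be crossed even though the dump file ends up smaller), a query along these lines can be run on the server; pg_table_size is available from 9.0 on:
-- approximate on-disk size per table in the schema being dumped
SELECT c.relname,
       pg_size_pretty(pg_table_size(c.oid)) AS on_disk
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
ORDER BY pg_table_size(c.oid) DESC;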
I have two EC2 instances; one of them needs to insert large amounts of data into a PostgreSQL database that lives on the other one. Incidentally, it's OpenStreetMap data and I am using the osm2pgsql utility to insert it. Not sure if that's relevant; I don't think so.
For smaller inserts everything is fine but for very large inserts something times out after around 15 minutes and the operation fails with:
COPY_END for planet_osm_point failed: SSL SYSCALL error: Connection timed out
Is this timeout enforced by PostgreSQL, Ubuntu or AWS? Not too sure where to start troubleshooting.
Thanks
Could be caused by renegotiation. Check the log, and maybe tweak ssl_renegotiation_limit (512MB is the default); setting it to zero will disable renegotiation.
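A quick sanity check from psql shows whether renegotiation is even in play on that server before changing anything (0 means it is already disabled):
-- run against the server receiving the COPY
SHOW ssl_renegotiation_limit;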