I'm trying to download a database to my local machine using pg_dump. The command I'm using is:
pg_dump --host xx.xx.xx.xx --port xxxx --username "xxx" --password --format custom --blobs --verbose --file "testing.db" "xxx"
When it gets to dumping the last table in the database it always crashes with this error:
pg_dump: Dumping the contents of table "versions" failed: PQgetCopyData() failed.
pg_dump: Error message from server: SSL error: sslv3 alert handshake failure
pg_dump: The command was: COPY public.xxx (columns) TO stdout;
I SSH'd into a server that's a bit closer to the server I'm downloading from (I'm in Brisbane, it's in San Francisco) and was able to do the pg_dump without issue, so I know the database server itself is fine. I suspect it's a timeout, since the dump always gets to the last table before failing; if it were actually an SSL error I'd expect it to show up sooner. That said, the failure occurs after a different amount of time on each run (the two most recent tests failed after 1300s and 1812s respectively).
Any tips on how to debug are welcome.
I'm on OS X 10.8.5. My local pg_dump is 9.2.4; the server is Ubuntu Server running PostgreSQL 9.1.9.
It might be an SSL renegotiation problem.
Check this parameter on the server (in postgresql.conf) and the associated warning about old SSL client libraries, although OS X 10.8 should be newer than the libraries that warning refers to.
From the 9.1 documentation:
ssl_renegotiation_limit (integer)
Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
Note: SSL libraries from before November 2009 are insecure when using SSL renegotiation, due to a vulnerability in the SSL protocol. As a stop-gap fix for this vulnerability, some vendors shipped SSL libraries incapable of doing renegotiation. If any such libraries are in use on the client or server, SSL renegotiation should be disabled.
EDIT:
Updating this parameter in postgresql.conf does not require a server restart, only a reload with /etc/init.d/postgresql reload or service postgresql reload.
The value can also be checked in SQL with show ssl_renegotiation_limit;
Even if the dump file is smaller than 512MB, the amount of data transmitted over the connection may be much larger, since pg_dump compresses the data locally when using the custom format (--format custom).
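As a concrete sketch (paths and service names vary by distribution, so adjust as needed), disabling renegotiation and verifying the change looks roughly like this:
# on the server, in postgresql.conf
ssl_renegotiation_limit = 0
# reload the configuration; no restart needed
sudo service postgresql reload
# verify from any SQL session
psql -c "SHOW ssl_renegotiation_limit;"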
I am having problems with the ODBC PostgreSQL link on CentOS 8, using postgres-odbc version 10.3 (the default one in the repository).
I have defined my SSL cert files like this:
pqopt={sslrootcert=/etc/ssl/certs/db_ssl_cert/client.crt \
sslcert=/etc/ssl/certs/db_ssl_cert/postgresql_client.crt \
sslkey=/etc/ssl/certs/db_ssl_cert/postgresql_client.key}
But I keep getting this error (when using isql), even though I can connect fine using psql:
[08001][unixODBC]libpq connection parameter error:invalid connection option "{sslrootcert"
I get this error no matter what I put in front of the first = sign.
Should the certificates be in ~/.postgresql/ directory?
What is the problem? On my Windows machine I pass the same parameters with driver version 11.0 and have no problems. Should I simply update the driver to the latest version, or what would be the solution, given that pqopt has been supported since postgres-odbc 9.6?
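For comparison, here is the same set of options collapsed onto a single line in an odbc.ini entry. This is only an illustrative sketch: the DSN skeleton (name, Driver, Servername, Database) is made up, and whether your odbc.ini accepts backslash line continuations, and whether your driver version actually supports pqopt, are assumptions to verify.
[PostgreSQLDSN]
Driver     = PostgreSQL
Servername = db.example.com
Database   = mydb
pqopt      = {sslrootcert=/etc/ssl/certs/db_ssl_cert/client.crt sslcert=/etc/ssl/certs/db_ssl_cert/postgresql_client.crt sslkey=/etc/ssl/certs/db_ssl_cert/postgresql_client.key}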
I'm setting up a server with PostgreSQL running as a service. I can use nmap to get the current PostgreSQL version:
nmap -p 5432 -sV [IP]
It returns:
PORT STATE SERVICE VERSION
5432/tcp open postgresql PostgreSQL DB 9.3.1
Is there a way to hide the PostgreSQL version from nmap scanning? I've searched, but everything I found is about hiding OS detection.
Thank you.
There's only one answer here: Firewall it.
If you have your Postgres port open, you will be probed. If you can be probed, your service can be disrupted. Most databases are not intended to be open to the public like this; they're not hardened against denial-of-service attacks.
Maintain a very narrow white-list of IPs that are allowed to connect to it, and whenever possible use a VPN or an SSH tunnel to connect to Postgres instead of doing it directly. This has the additional advantage of encrypting all your traffic that would otherwise be plain-text.
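As a rough sketch of that approach (the addresses, hostnames, and tunnel port are placeholders, and your firewall tooling may differ):
# allow a single trusted address to reach Postgres, block everyone else
sudo ufw allow from 203.0.113.10 to any port 5432 proto tcp
sudo ufw deny 5432/tcp
# or keep the port closed entirely and tunnel over SSH instead
ssh -N -L 5433:localhost:5432 deploy@db.example.com
psql -h localhost -p 5433 -U myuser mydb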
You have a few options, but first understand how Nmap does it: the PostgreSQL server responds to a malformed handshake with an error message containing the line number in the source code where the error occurred. Nmap has a list of possible PostgreSQL versions and the line number where that error happens in each version. The source file in question changes frequently enough that Nmap can usually tell the exact version in use, or at least narrow it to a range of 2 or 3 version numbers.
So what options do you have?
Do nothing. Why does it matter if someone can tell what version of PostgreSQL you are running? Keep it up to date and implement proper security controls elsewhere and you have nothing to worry about.
Restrict access. Use a firewall to limit access to the database system to only trusted hosts. Configure PostgreSQL to listen only on localhost if network communication is not required. Isolate the system so that unauthorized users can't even talk to it (see the sketch after these options).
Patch the source and rebuild. Change PostgreSQL so that it does not return the source line where the error happened. Or just add a few hundred blank lines to the top of postmaster.c so Nmap's standard fingerprints can't match. But realize you'll have to do this every time there's a new version or security patch.
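A minimal sketch of the restrict-access option (the address is a placeholder, and file locations vary by distribution):
# postgresql.conf: listen only where needed
listen_addresses = 'localhost'      # changing this requires a restart
# pg_hba.conf: allow only a trusted application host, over SSL
hostssl  all  all  10.0.0.5/32  md5
# pg_hba.conf changes take effect on reload; listen_addresses needs a restart
sudo service postgresql restart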
I am trying to back up a database named products with pg_dump.
The total size of the database is 1.6 GB. One of the tables in the database is product_image, which is 1 GB in size.
When I run pg_dump on the database, the backup fails with this error:
pg_dump: Dumping the contents of table "product_image" failed: PQgetCopyData() failed.
pg_dump: Error message from server: lost synchronization with server: got message type "d", length 6036499
pg_dump: The command was: COPY public.product_image (id, username, projectid, session, filename, filetype, filesize, filedata, uploadedon, "timestamp") TO stdout;
If I try to back up the database while excluding the product_image table, the backup succeeds.
I tried increasing shared_buffers in postgresql.conf from 128MB to 1.5GB, but the issue persists. How can this be resolved?
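For reference, the exclusion mentioned above, plus dumping the large table separately, would look roughly like this (a sketch; the connection options, database name, and file names are placeholders):
pg_dump --format custom --file products_no_images.dump --exclude-table=product_image products
pg_dump --format custom --file product_image.dump --table=product_image products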
I ran into the same error, and it was due to a buggy patch from RedHat for OpenSSL in early June 2015. There is related discussion on the PostgreSQL mailing list.
If you use SSL connections and cross a transferred size threshold, which depends on your PG version (default 512MB for PG < 9.4), the tunnel attempts to renegotiate the SSL keys and the connection dies with the errors you posted.
The fix that worked for me was setting the ssl_renegotiation_limit to 0 (unlimited) in postgresql.conf, followed by a reload.
ssl_renegotiation_limit (integer)
Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
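If you cannot change the server's postgresql.conf, a possible client-side variant is to disable renegotiation just for the dump session through PGOPTIONS. This is only a sketch under the assumption that your server version still has ssl_renegotiation_limit and allows it to be set per session; the host, user, database, and file names are placeholders:
PGOPTIONS="-c ssl_renegotiation_limit=0" pg_dump --host db.example.com --username myuser --format custom --blobs --file products.dump products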
We are using the Heroku Postgres Crane plan, and we get a lot of log entries like this:
app postgres - - [5-1] ... LOG: could not receive data from client: Connection reset by peer
We are using about 50 dynos.
Is PostgreSQL running out of connections with this many dynos?
Can someone help me understand what's happening here?
Thanks
From what I've found, the cause of these errors is the client not disconnecting at the end of the session, or a new connection not being created when one is needed.
New connection solving the problem:
Postgres error on Heroku with Resque
Explicit disconnection solving the problem:
https://github.com/resque/resque/issues/367 (comment #2)
There's a Heroku FAQ entry on this: Understanding Heroku Postgres Log Statements and Common Errors: could not receive data from client: Connection reset by peer.
Although this log is emitted from postgres, the cause for the error has nothing to do with the database itself. Your application happened to crash while connected to postgres, and did not clean up its connection to the database. Postgres noticed that the client (your application) disappeared without ending the connection properly, and logged a message saying so.
If you are not seeing your application’s backtrace, you may need to ensure that you are, in fact, logging to stdout (instead of a file) and that you have stdout sync’d.
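If you also want to rule out connection exhaustion (the other thing the question asks about), a quick check from any psql session is (assuming you can run queries against the database):
-- how many connections are currently open
select count(*) from pg_stat_activity;
-- the server-side limit
show max_connections;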
I'm trying to run a Drupal migration via SSH and drush (a command-line shell), copying data from a PostgreSQL database to MySQL.
It works fine for a while (~5 mins or so), but then I get the error:
SQLSTATE[HY000]: General error: 7 SSL [error] SYSCALL error: EOF detected
The postgres database connection seems to have gone, and I just get errors:
SQLSTATE[HY000]: General error: 7 no [error] connection to the server
It works fine locally, so I think the problem must be with postgres and running a script over SSH - but googling these errors returns nothing useful. Does anyone know what could be causing this?
Could be a timeout. First inspect the log (and maybe change ssl_renegotiation_limit).
BTW: IIRC, the renegotiation does not take place after a fixed amount of time, but after a certain amount of transmitted data (2GB?).
You should check both your PostgreSQL and MySQL logs for further details. If there's not much in the PostgreSQL log, look at log_min_error_statement in postgresql.conf. As you'll find through that link, you can tune it to increase the amount of logging. If there are still no clues in the PostgreSQL log, I would look at other components in your system for the problem.
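If the PostgreSQL log is too quiet, a minimal sketch of turning up logging (the setting names are real, but the chosen values and the service name are assumptions to adapt):
# postgresql.conf
log_min_error_statement = warning   # also log statements that only trigger warnings
log_connections = on                # record each new connection
log_disconnections = on             # record session ends, with duration
# apply without a restart
sudo service postgresql reload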