I have two EC2 instances, and one of them needs to insert large amounts of data into a PostgreSQL database that lives on the other one. Incidentally, it's OpenStreetMap data and I am using the osm2pgsql utility to do the inserts; I'm not sure if that's relevant, but I don't think so.
For smaller inserts everything is fine, but for very large inserts something times out after around 15 minutes and the operation fails with:
COPY_END for planet_osm_point failed: SSL SYSCALL error: Connection timed out
Is this timeout enforced by PostgreSQL, Ubuntu, or AWS? I'm not sure where to start troubleshooting.
Thanks
This could be caused by SSL renegotiation. Check the log, and maybe tweak
ssl_renegotiation_limit = 512MB (the default)
Setting it to zero will disable renegotiation.
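If you're not sure what the server is currently using, you can check it from psql before changing anything (this assumes a 9.x server; the parameter was removed in 9.5):
SHOW ssl_renegotiation_limit;   -- 512MB is the default; 0 means renegotiation is disabled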
I am facing the issue below:
ERROR: canceling statement due to user request
intermittently, after I enabled a query timeout for the XA datasource in my xxx-ds.xml. I added the following to my ds XML file:
<query-timeout>180</query-timeout>
The query timeout is set to 180 seconds, which means any SQL query
that takes more than 180 seconds will be cancelled from the
application server side.
But the issue I am facing is inconsistent: queries get cancelled now and then without having run for anything close to 180 seconds.
We are also using connection pooling.
While searching Stack Overflow I found this question, which discusses the possible causes of this issue when connection pooling is in use.
The solution suggested there was to set statement_timeout in postgresql.conf. But it is a bit difficult for me to enable statement_timeout in my database environment, as the DB server is shared by multiple applications.
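From the documentation I understand that statement_timeout does not have to be global; it can also be scoped to a single role or session, roughly like this (app_user is just a placeholder for my application's database user):
-- per role: applies only to sessions opened by this role
ALTER ROLE app_user SET statement_timeout = '180s';
-- or per session, issued by the application after it obtains a connection from the pool
SET statement_timeout = '180s';
Still, that means changing things on the shared database server.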
I would like a way to terminate timed-out queries from the client side effectively and consistently while using connection pooling. I am using:
JBoss 4.2.2-GA
postgresql 9.2 (64 bit)
java 1.7
postgresql-9.2-1002.jdbc4.jar
It looks like the issue is with the PostgreSQL JDBC driver 9.2. When I upgraded to the 9.3 driver, the issue was fixed.
I have two cloud PostgreSQL servers. The first one works fine, but on the second, after about 30 minutes I am no longer able to connect from my Java application. When I connect from pgAdmin it shows 30 to 40 connections, and after killing those connections everything runs smoothly again.
Its configuration:
postgresql/9.3
max_connections = 100
shared_buffers = 4GB
When the same application connects to the other PostgreSQL server with the same schema, everything works fine indefinitely.
Configuration:
postgresql/9.1
max_connections = 100
shared_buffers = 32MB
Can you please help me understand or fix the issue?
I work on a PostgreSQL 9.3 instance with hundreds of open connections. I agree that the open connections themselves shouldn't be a problem. Since we don't have much information, what follows is a description of how to get started troubleshooting.
Check the server logs for anything unusual. Maybe there is an issue at the OS level with initiating connections?
Try logging in with psql as the application user. Does the problem persist? If not, the problem is not with PostgreSQL. I would take a closer look at the Java code and see if something is happening there.
Note that psql and other libpq-based tools may not give you the full picture. Try connecting locally over a non-SSL connection while watching a packet capture; that way you can see (and look up) the SQLSTATE error code of any connection failure. For legacy and backwards-compatibility reasons, libpq does not pass the SQLSTATE up to the client application when a connection attempt fails.
My bet, though, is that this is not a PostgreSQL issue. It may be an operating system issue, a resource issue, or a client application issue.
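Also, since killing the 30 to 40 connections from pgAdmin clears things up, it is worth looking at what those sessions are actually doing before killing them. A rough starting point from psql (column names are for 9.2+, which covers your 9.3 server; the pid below is just a placeholder):
SELECT pid, usename, state, waiting, now() - query_start AS running_for, query
FROM pg_stat_activity
ORDER BY query_start;
-- terminate a stuck session by pid only if you really have to
SELECT pg_terminate_backend(12345);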
I am trying to back up a database, products, with pg_dump.
The total size of the database is 1.6 GB. One of the tables in the database, product_image, is 1 GB in size.
When I run pg_dump on the database, the backup fails with this error:
pg_dump: Dumping the contents of table "product_image" failed: PQgetCopyData() failed.
pg_dump: Error message from server: lost synchronization with server: got message type "d", length 6036499
pg_dump: The command was: COPY public.product_image (id, username, projectid, session, filename, filetype, filesize, filedata, uploadedon, "timestamp") T
If I try to back up the database while excluding the product_image table, the backup succeeds.
I tried increasing shared_buffers in postgresql.conf from 128 MB to 1.5 GB, but the issue still persists. How can this be resolved?
I ran into the same error, and it was due to a buggy patch from Red Hat for OpenSSL in early June 2015. There is related discussion on the PostgreSQL mailing list.
If you use SSL connections and cross a transferred size threshold, which depends on your PG version (default 512MB for PG < 9.4), the tunnel attempts to renegotiate the SSL keys and the connection dies with the errors you posted.
The fix that worked for me was setting the ssl_renegotiation_limit to 0 (unlimited) in postgresql.conf, followed by a reload.
ssl_renegotiation_limit (integer)
Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
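In practice the fix is a one-line change in postgresql.conf followed by a reload; from a superuser psql session that looks roughly like this (you can also reload with pg_ctl or your init system):
-- in postgresql.conf: ssl_renegotiation_limit = 0
SELECT pg_reload_conf();
SHOW ssl_renegotiation_limit;   -- should now report 0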
I am using PostgreSQL 8.4 and PostGIS 1.5. What I'm trying to do is INSERT data from one table into another (but not strictly the same data). For each column, a few queries are run, and there are a total of 50143 rows stored in the table. But the query is quite resource-heavy: after it has run for a few minutes, the connection is lost. It happens about 21-22k ms into the execution of the query, after which I have to start the DBMS manually again. How should I go about solving this issue?
The error message is as follows:
[Err] server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Additionally, here is the psql error log:
2013-07-03 05:33:06 AZOST HINT: In a moment you should be able to reconnect to the database and repeat your command.
2013-07-03 05:33:06 AZOST WARNING: terminating connection because of crash of another server process
2013-07-03 05:33:06 AZOST DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
My guess, reading your problem, is that you are hitting out-of-memory issues. Craig's suggestion to turn off overcommit is a good one. You may also need to reduce work_mem if this is a big query. This may slow down your query, but it will free up memory. work_mem is per operation, so a single query can use many times that setting.
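If you want to try a lower work_mem for just this statement rather than changing postgresql.conf, you can set it for the session; the 16MB value here is only an illustrative starting point:
-- lower the per-operation memory budget for this session only
SET work_mem = '16MB';
-- ... run the big INSERT ... SELECT here ...
RESET work_mem;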
Another possibility is that you are hitting some sort of bug in a C-language module in PostgreSQL. If that is the case, try updating to the latest version of PostGIS and any other C-language extensions you use.
I'm trying to run a Drupal migration via SSH and drush (a command-line shell), copying data from a Postgres database to MySQL.
It works fine for a while (~5 mins or so), but then I get the error:
SQLSTATE[HY000]: General error: 7 SSL [error] SYSCALL error: EOF detected
The Postgres database connection seems to have gone away, and I just get errors:
SQLSTATE[HY000]: General error: 7 no [error] connection to the server
It works fine locally, so I think the problem must be with Postgres and running a script over SSH, but googling these errors returns nothing useful. Does anyone know what could be causing this?
It could be a timeout. First inspect the log (and maybe change ssl_renegotiation_limit).
BTW: the renegotiation does not take place after a fixed amount of time, but after a certain amount of transferred data (512MB by default).
You should check both your PostgreSQL and MySQL logs for further details. If there's not much in the PostgreSQL log, look at log_min_error_statement in postgresql.conf; you can tune it to increase the amount of logging. If there are still no clues in the PostgreSQL log, I would look at other components in your system for the problem.
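To see where that setting currently stands, something along these lines works from a superuser psql session (the warning level below is just one possible choice):
SHOW log_min_error_statement;   -- default is 'error'
-- in postgresql.conf, a lower threshold such as log_min_error_statement = warning
-- will log the statement text for warnings as well; reload the server afterwards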