Database connection failed when reading from copy - postgresql

I am using the Kafka source connector (Debezium) to capture CDC from RDS Aurora PostgreSQL and I am getting the error below.
Please assist if someone knows this issue.
Caused by: org.postgresql.util.PSQLException: Database connection failed when reading from copy
at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1074)
at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:37)
at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:158)
at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:123)
at org.postgresql.core.v3.replication.V3PGReplicationStream.readPending(V3PGReplicationStream.java:80)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:397)
at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:119)
at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:99)
... 5 more
Caused by: java.net.SocketException: Socket is closed
at java.base/java.net.Socket.setSoTimeout(Socket.java:1155)
at java.base/sun.security.ssl.BaseSSLSocketImpl.setSoTimeout(BaseSSLSocketImpl.java:639)
at java.base/sun.security.ssl.SSLSocketImpl.setSoTimeout(SSLSocketImpl.java:73)
at org.postgresql.core.PGStream.setNetworkTimeout(PGStream.java:589)
at org.postgresql.core.PGStream.hasMessagePending(PGStream.java:139)
at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1109)
at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1072)
... 12 more

Debezium 1.1.0.CR1 already handles auto-reconnects in these cases: https://debezium.io/blog/2020/03/13/debezium-1-1-c1-released/

Yes, Jiri Pechanec. The problem was with the Debezium version: the older version does not automatically reconnect to Postgres when the connection is lost due to a temporary issue. The newer version of Debezium (1.1.0) supports automatic reconnection.
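For reference, a minimal sketch of what a Debezium 1.1.x Postgres connector configuration can look like, including the wait applied before the connector restarts after a retriable error such as a dropped replication connection. The hostname, credentials, slot and server names below are placeholders, not values from this setup; check your Debezium version's documentation for the exact properties it supports.

import java.util.Properties;

// Sketch of a Debezium 1.1.x Postgres connector configuration; all values are
// placeholders, only the property names are standard Debezium settings.
public class DebeziumPostgresConfigSketch {
    public static Properties connectorProps() {
        Properties props = new Properties();
        props.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector");
        props.setProperty("database.hostname", "my-aurora-endpoint.rds.amazonaws.com"); // placeholder
        props.setProperty("database.port", "5432");
        props.setProperty("database.user", "cdc_user");          // placeholder
        props.setProperty("database.password", "***");           // placeholder
        props.setProperty("database.dbname", "mydb");            // placeholder
        props.setProperty("database.server.name", "aurora-pg");  // logical name / topic prefix
        props.setProperty("slot.name", "debezium_slot");         // replication slot name
        // How long to wait before restarting the connector after a retriable error
        // (e.g. the replication connection being dropped); verify availability in your version.
        props.setProperty("retriable.restart.connector.wait.ms", "10000");
        return props;
    }
}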

Related

encoding error accessing postgresql from wildfly

I'm just setting up a new computer and I can't get my WildFly to connect to Postgres. I'm using the same standalone.xml as on the old computer.
The Postgres database is configured as UTF8 (the default). Using pgAdmin, I restored from a backup and it shows German umlauts correctly.
But when I start WildFly, I get the following error:
Caused by: java.io.IOException: Ungültige UTF-8-Sequenz: das erste Byte ist 10xxxxxx: 187
at org.postgresql.core.UTF8Encoding.decode(UTF8Encoding.java:104)
at org.postgresql.core.PGStream.ReceiveString(PGStream.java:331)
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:705)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 35 more
Sorry for the German error message (it says "Invalid UTF-8 sequence: the first byte is 10xxxxxx: 187"); I have no idea why this message is in German.
Any ideas what could be wrong?
It turned out there is an issue with parsing error messages that arrive in a different locale: apparently the PostgreSQL JDBC driver can only handle English error messages.
Root cause: I made a spelling mistake in a table name in the database. That caused PostgreSQL to throw an error, but it threw it with a German error message, and the PostgreSQL JDBC driver was unable to parse it and threw the new error shown in the question.
I fixed the original spelling mistake, and with the root cause gone there was no more error message to parse.
A year later (now) I finally fixed the locale issue itself by editing standalone.xml:
<datasource jndi-name="java:jboss/datasources/PostgresDS" ...>
...
<new-connection-sql>SET lc_messages TO 'en_US.UTF-8'</new-connection-sql>
...
</datasource>
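For a plain JDBC setup without WildFly's new-connection-sql element, the same effect can be achieved by running the SET statement on each freshly opened connection. A minimal sketch, assuming a standard pgjdbc connection; the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Rough equivalent of <new-connection-sql> for plain JDBC: force English server
// messages per session so the driver never has to parse a localized error string.
public class EnglishMessagesConnection {
    public static Connection open(String url, String user, String password) throws Exception {
        Connection conn = DriverManager.getConnection(url, user, password);
        try (Statement st = conn.createStatement()) {
            st.execute("SET lc_messages TO 'en_US.UTF-8'");
        }
        return conn;
    }
}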

DriverClass not found for database:aurora when exporting PostgreSQL to S3

I am trying to export Aurora PostgreSQL to S3 through AWS Data Pipeline. However, I get this error: DriverClass not found for database:aurora
amazonaws.datapipeline.taskrunner.TaskExecutionException: Error copying record
at amazonaws.datapipeline.activity.copy.SingleThreadedCopyActivity.processAll(SingleThreadedCopyActivity.java:65)
at amazonaws.datapipeline.activity.copy.SingleThreadedCopyActivity.runActivity(SingleThreadedCopyActivity.java:35)
at amazonaws.datapipeline.activity.CopyActivity.runActivity(CopyActivity.java:22)
at amazonaws.datapipeline.objects.AbstractActivity.run(AbstractActivity.java:16)
at amazonaws.datapipeline.taskrunner.TaskPoller.executeRemoteRunner(TaskPoller.java:136)
at amazonaws.datapipeline.taskrunner.TaskPoller.executeTask(TaskPoller.java:105)
at amazonaws.datapipeline.taskrunner.TaskPoller$1.run(TaskPoller.java:81)
at private.com.amazonaws.services.datapipeline.poller.PollWorker.executeWork(PollWorker.java:76)
at private.com.amazonaws.services.datapipeline.poller.PollWorker.run(PollWorker.java:53)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: DriverClass not found for database:aurora
at private.com.amazonaws.services.datapipeline.database.RdsHelper.getDriverClass(RdsHelper.java:24)
at amazonaws.datapipeline.database.ConnectionFactory.getRdsDatabaseConnection(ConnectionFactory.java:151)
at amazonaws.datapipeline.database.ConnectionFactory.getConnection(ConnectionFactory.java:73)
at amazonaws.datapipeline.database.ConnectionFactory.getConnectionWithCredentials(ConnectionFactory.java:278)
at amazonaws.datapipeline.connector.SqlDataNode.createConnection(SqlDataNode.java:100)
at amazonaws.datapipeline.connector.SqlDataNode.getConnection(SqlDataNode.java:94)
at amazonaws.datapipeline.connector.SqlDataNode.prepareStatement(SqlDataNode.java:162)
at amazonaws.datapipeline.connector.SqlInputConnector.open(SqlInputConnector.java:48)
at amazonaws.datapipeline.connector.SqlInputConnector.<init>(SqlInputConnector.java:25)
at amazonaws.datapipeline.connector.SqlDataNode.getInputConnector(SqlDataNode.java:79)
at amazonaws.datapipeline.activity.copy.SingleThreadedCopyActivity.processAll(SingleThreadedCopyActivity.java:47)
The Data Pipeline node configuration is as below:
type: RdsDatabase
Jdbc Driver Jar Uri: S3Url
The value of S3Url is the PostgreSQL driver downloaded from https://jdbc.postgresql.org/download.html and uploaded to a fixed S3 location.
According to the error message above, the PostgreSQL driver cannot be found. Where can this PostgreSQL JDBC driver be found, or is there a wrong configuration in the Data Pipeline?
The issue was resolved after changing the PostgreSQL connection node as follows:
Type: JdbcDatabase
ConnectionString: jdbc:postgresql://.....
Jdbc Driver Class: org.postgresql.Driver
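A quick way to rule out driver problems is to test the same combination outside Data Pipeline: load org.postgresql.Driver from the jar that was uploaded to S3 and connect with a plain jdbc:postgresql:// URL. A minimal sketch, with a placeholder endpoint and credentials:

import java.sql.Connection;
import java.sql.DriverManager;

// Sanity check mirroring the JdbcDatabase node settings above: the driver class
// and a direct connection to the Aurora PostgreSQL endpoint.
public class AuroraJdbcCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver"); // same value as "Jdbc Driver Class"
        String url = "jdbc:postgresql://my-aurora-endpoint:5432/mydb"; // placeholder endpoint
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}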

org.postgresql.util.PSQLException: This connection has been closed. error for long running transactions

We are getting "org.postgresql.util.PSQLException: This connection has been closed." on one of our deployments for only long running transactions (more than a few minutes):
Caused by: org.hibernate.TransactionException: rollback failed
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.rollback(AbstractTransactionImpl.java:217)
at org.springframework.orm.hibernate4.HibernateTransactionManager.doRollback(HibernateTransactionManager.java:604)
... 87 more
Caused by: org.hibernate.TransactionException: unable to rollback against JDBC connection
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doRollback(JdbcTransaction.java:167)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.rollback(AbstractTransactionImpl.java:211)
... 88 more
Caused by: org.postgresql.util.PSQLException: This connection has been closed.
at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:822)
at org.postgresql.jdbc2.AbstractJdbc2Connection.rollback(AbstractJdbc2Connection.java:839)
at org.apache.commons.dbcp2.DelegatingConnection.rollback(DelegatingConnection.java:492)
at org.apache.commons.dbcp2.DelegatingConnection.rollback(DelegatingConnection.java:492)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doRollback(JdbcTransaction.java:163)
... 89 more
Our stack is as follows:
Postgresql 9.2 (on db server Ubuntu 16.03)
PgBouncer (on application server Ubuntu 16.03)
Jars (on application server Ubuntu 16.03)
org.postgresql:postgresql:9.2-1004-jdbc41
javax.transaction:jta:1.1
org.apache.commons:commons-pool2:2.4.2
org.apache.commons:commons-dbcp2:2.1.1
PostgreSQL and PgBouncer use the default parameters, and we use the following parameters for dbcp (a dbcp2 mapping of these settings is sketched right after the list):
database-initial-size = 2
database-max-total = 200
database-validation-query = SELECT 1
database-test-on-borrow = true
database-test-while-idle = true
database-max-wait-millis = 3000
database-time-between-eviction-runs-millis = 34000
database-min-evictable-idle-time-millis = 55000
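For reference, this is how those parameters map onto commons-dbcp2 2.1.1 setters. A sketch only; the URL and credentials are placeholders rather than the deployment's real values:

import org.apache.commons.dbcp2.BasicDataSource;

// Mapping of the listed pool parameters onto commons-dbcp2 BasicDataSource setters.
public class PoolConfigSketch {
    public static BasicDataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://pgbouncer-host:6432/mydb"); // placeholder, via PgBouncer
        ds.setUsername("app");   // placeholder
        ds.setPassword("***");   // placeholder
        ds.setInitialSize(2);                        // database-initial-size
        ds.setMaxTotal(200);                         // database-max-total
        ds.setValidationQuery("SELECT 1");           // database-validation-query
        ds.setTestOnBorrow(true);                    // database-test-on-borrow
        ds.setTestWhileIdle(true);                   // database-test-while-idle
        ds.setMaxWaitMillis(3000);                   // database-max-wait-millis
        ds.setTimeBetweenEvictionRunsMillis(34000);  // database-time-between-eviction-runs-millis
        ds.setMinEvictableIdleTimeMillis(55000);     // database-min-evictable-idle-time-millis
        return ds;
    }
}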
We have other deployments with the same parameters, but we are not having the same problem there.
I suspect there is a firewall/NAT which resets the connection after some timeout, but I don't know how to check whether this is the case. I would very much appreciate guidance on which logs/parameters/configurations to check that may cause this exception.
I have tested that if PostgreSQL and PgBouncer are on the same server, this problem does not occur.
I have also investigated the PostgreSQL logs and no error messages are logged.
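One thing worth trying when a firewall/NAT is suspected of silently dropping idle connections is enabling TCP keepalives on the client side; pgjdbc exposes a tcpKeepAlive connection property for this. Whether it helps in this particular deployment is an assumption, not a confirmed fix. A minimal sketch:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Enable TCP keepalives so an idle connection sends periodic probes instead of
// being silently dropped by intermediate network devices. The keepalive interval
// itself is controlled by the operating system's TCP settings.
public class KeepAliveConnection {
    public static Connection open(String url, String user, String password) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("tcpKeepAlive", "true"); // pgjdbc connection parameter
        return DriverManager.getConnection(url, props);
    }
}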

Streamsets DC and Crate exception. ERROR: SQLParseException: line 1:13: no viable alternative at input 'CHARACTERISTICS'

I am trying to connect to Crate as a StreamSets Data Collector pipeline origin (JDBC Consumer). However, I get this error: "JDBC_00 - Cannot connect to specified database: com.streamsets.pipeline.api.StageException: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.PoolInitializationException: Exception during pool initialization: ERROR: SQLParseException: line 1:13: no viable alternative at input 'CHARACTERISTICS'"
Why am I getting this error? The Crate JDBC driver version is 2.1.5 and the StreamSets Data Collector version is 2.4.0.0.
#gashey already solved the issue: within StreamSets DC, uncheck Enforce Read-only Connection on the Advanced tab of the JDBC Query Consumer configuration
(see https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/crateio/hBexxel2KQw/kU34mrsJBgAJ).
We will update the StreamSets documentation with the workaround: https://crate.io/docs/tools/streamsets/
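The likely mechanism (an assumption based on the error text, not stated in the thread) is that read-only enforcement makes the connection pool call Connection.setReadOnly(true), which PostgreSQL-protocol drivers typically translate into SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY, a statement CrateDB's parser rejects at 'CHARACTERISTICS'. A minimal sketch of that call path, with a placeholder URL and credentials:

import java.sql.Connection;
import java.sql.DriverManager;

// Illustration of the suspected cause: marking the connection read-only can make the
// driver send "SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY" to the server,
// which CrateDB does not parse.
public class CrateReadOnlyRepro {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:crate://crate-host:5432/"; // placeholder URL
        try (Connection conn = DriverManager.getConnection(url, "crate", "")) { // placeholder credentials
            conn.setReadOnly(true); // what "Enforce Read-only Connection" triggers in the pool
        }
    }
}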

Sqoop import failing with exception interface org.apache.hadoop.mapreduce.lib.db.DBWritable not org.apache.sqoop.mapreduce.DBWritable

I have to migrate code from Teradata to Hive. While importing data from Teradata using Sqoop, it fails
with the error below:
ERROR tool.ImportTool: Encountered IOException running import job:
java.io.IOException: java.lang.RuntimeException: interface
org.apache.hadoop.mapreduce.lib.db.DBWritable not
org.apache.sqoop.mapreduce.DBWritable
at com.cloudera.sqoop.teradata.imports.TeradataImportJob.configureInputFormat(TeradataImportJob.java:111)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:231)
at com.cloudera.sqoop.teradata.TeradataManager.importTable(TeradataManager.java:86)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:413)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
Has anyone faced an issue like this?
Can you check the version of the Teradata connector that you are using? Try using a different version of the connector jar. I faced a similar issue when importing from a MySQL table, and changing to an earlier version of the MySQL connector fixed it.