I wrote a server that connects to PostgreSQL via Slick whenever a client sends a request. Now I have one client that sends a request every second. The problem is that after several requests, the error below is raised:
[SEVERE][slick.psql.db connection adder][Driver] Connection error: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
at org.postgresql.Driver.makeConnection(Driver.java:450)
at org.postgresql.Driver.connect(Driver.java:252)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:117)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:123)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:365)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:194)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:460)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:697)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:683)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This is my Slick PostgreSQL config:
psql {
  profile = "slick.jdbc.PostgresProfile$"
  driver = "slick.driver.PostgresDriver$"
  db {
    driver = "org.postgresql.Driver"
    url = "jdbc:postgresql://localhost:5432/mydb"
    user = postgres
    password = 123
    numThreads = 2
    queueSize = 100
  }
}
And the PostgreSQL config has max_connections = 100.
The server-side code ran about 50 times (queueSize / numThreads) before the error appeared.
Also, this is the code that sets up the db connection:
lazy val psqlDbConfig: DatabaseConfig[PostgresProfile] = DatabaseConfig.forConfig("psql")
val psqlDb: JdbcBackend#DatabaseDef = psqlDbConfig.db

// SQL commands
psqlDb.run(sql"""select * from mytable""".as[MyObj])
I assumed that when the maximum number of connections was reached, HikariCP would release connections back to the pool and manage them, but that is not what happened.
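One common cause of this error (an assumption here, since the request-handling code is not shown) is that DatabaseConfig.forConfig ends up being evaluated once per request: every call builds a brand-new HikariCP pool, and the old pools keep their connections open until PostgreSQL's max_connections is exhausted. A minimal sketch of the fix, reusing a single shared Database instance:

import slick.basic.DatabaseConfig
import slick.jdbc.PostgresProfile
import slick.jdbc.PostgresProfile.api._

import scala.concurrent.Future

// Build the config and the Database exactly once, at startup.
object Db {
  lazy val config: DatabaseConfig[PostgresProfile] = DatabaseConfig.forConfig("psql")
  lazy val instance = config.db // one HikariCP pool for the whole server
}

// Hypothetical request handler: every request reuses the shared pool,
// so the number of open connections stays bounded by the pool size.
def handleRequest(): Future[Vector[Int]] =
  Db.instance.run(sql"select 1".as[Int])

With this arrangement the pool hands connections back after each run, and the server never opens more than the configured number of connections no matter how many requests arrive.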
I am trying to import data from my local PostgreSQL database into Neo4j.
First, I load the JDBC driver into memory:
CALL apoc.load.driver("org.postgresql.Driver")
Then I run this query to ingest data from PostgreSQL:
WITH "jdbc:postgresql://localhost:5432/graph-test?user=kt" as url
CALL apoc.load.jdbc(url,"os.operating_systems") YIELD row AS line
MERGE (o:Os {name: line.name})
MERGE (of:OsFamily {name: line.familly})
MERGE (o)-[:FROM]->(of)
Unfortunately, I received this error:
Failed to invoke procedure `apoc.load.jdbc`: Caused by: java.net.ConnectException: Connection refused (Connection refused)
Could this error be caused by an incorrect URL, the JDBC plugin version, or something else?
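"Connection refused" is a TCP-level failure: nothing was listening at localhost:5432 from where the query ran, so the URL's host is the first thing to check rather than the plugin version. In particular, if Neo4j runs in a Docker container, localhost resolves to the container itself, not to the machine where PostgreSQL listens. A sketch of the query under that assumption (host.docker.internal is Docker's alias for the host; substitute the host's actual address otherwise):

WITH "jdbc:postgresql://host.docker.internal:5432/graph-test?user=kt" AS url
CALL apoc.load.jdbc(url, "os.operating_systems") YIELD row
RETURN count(row)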
I am doing a PoC of the ksqlDB Elasticsearch connector.
I am following two documents online.
The first one:
https://ksqldb.io/quickstart.html
Everything worked fine, and then I followed the second one:
https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/demo_build-a-streaming-pipeline.adoc
I am getting this issue when I run this command:
CREATE SINK CONNECTOR SINK_ES_sample_1 WITH (
  'connector.class' = 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector',
  'topics' = 'sample_1',
  'connection.url' = 'http://localhost:9200',
  'type.name' = '_doc',
  'key.ignore' = 'false',
  'schema.ignore' = 'true',
  'transforms' = 'ExtractTimestamp',
  'transforms.ExtractTimestamp.type' = 'org.apache.kafka.connect.transforms.InsertField$Value',
  'transforms.ExtractTimestamp.timestamp.field' = 'sample_1'
);
Error:
io.confluent.ksql.util.KsqlServerException:
org.apache.hc.client5.http.HttpHostConnectException: Connect to
http://localhost:8083 [localhost/127.0.0.1] failed: Connection refused
(Connection refused) Caused by:
org.apache.hc.client5.http.HttpHostConnectException: Connect to
http://localhost:8083 [localhost/127.0.0.1] failed: Connection
refused (Connection refused) Caused by: Could not connect to the
server. Please check the server details are correct and that the
server is running.
That suggests that you've misconfigured the ksqlDB server in its connection to Kafka Connect.
If you're following that demo script, you should use the associated Docker Compose file, which is configured correctly:
KSQL_KSQL_CONNECT_URL: http://kafka-connect-01:8083
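That variable maps to the ksql.connect.url server property and belongs in the environment block of the ksqlDB server service. A hypothetical excerpt of such a Compose service (service and image names are assumptions, not taken from the demo file):

ksqldb-server:
  image: confluentinc/ksqldb-server:0.6.0
  environment:
    KSQL_BOOTSTRAP_SERVERS: "kafka:9092"
    KSQL_KSQL_CONNECT_URL: "http://kafka-connect-01:8083"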
I am trying to connect Corda 4.1 (open source) to Azure PostgreSQL, with the following in node.conf:
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks"
    dataSource.user = "me#my-dev-corda-db"
    dataSource.password = Password
}
It throws the error:
[ERROR] 2019-08-08T23:44:45,301Z [main] internal.NodeStartupLogging.invoke - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the
database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=uz1y94, moreInformationAt=https://errors.corda.net/OS/4.1/uz1y94]
net.corda.nodeapi.internal.persistence.CouldNotCreateDataSourceException: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.
....
....
Suppressed: org.postgresql.util.PSQLException: FATAL: SSL connection is required. Please specify SSL options and retry.
...
So I add ssl=true to the URL:
dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks?ssl=true"
and it throws the error:
[ERROR] 2019-08-08T23:49:45,409Z [main] internal.NodeStartupLogging.invoke - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the
database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=17q5mal, moreInformationAt=https://errors.corda.net/OS/4.1/17q5mal]
net.corda.nodeapi.internal.persistence.CouldNotCreateDataSourceException: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.
...
...
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Could not open SSL root certificate file /home/corda/.postgresql/root.crt.
...
Caused by: org.postgresql.util.PSQLException: Could not open SSL root certificate file /home/corda/.postgresql/root.crt.
...
I then tried setting sslmode=require:
dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks?ssl=true&sslmode=require"
which then errors with:
[ERROR] 2019-08-08T23:53:38,323Z [main] internal.NodeStartupLogging.invoke - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the
database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=uz1y94, moreInformationAt=https://errors.corda.net/OS/4.1/uz1y94]
net.corda.nodeapi.internal.persistence.CouldNotCreateDataSourceException: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.
...
...
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: FATAL: no pg_hba.conf entry for host "57.211.24.3", user "me", database "banks", SSL on
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:512) ~[HikariCP-2.5.1.jar:?]
...
Caused by: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "57.211.24.3", user "me", database "banks", SSL on
...
What are the full correct steps to use Azure PostgreSQL with Corda?
When you connect from the Internet to Azure PostgreSQL, you need to allow your IP in the server's firewall; see: https://learn.microsoft.com/en-us/azure/postgresql/concepts-firewall-rules#connecting-from-the-internet.
You can do it simply from the Azure portal: go to Connection Security and click Add Client IP (if you don't have a static IP, you will need to repeat this each time your IP changes).
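The same rule can also be created with the Azure CLI; a sketch, using the client IP from the error message above (resource group and rule name are placeholders):

az postgres server firewall-rule create \
  --resource-group my-resource-group \
  --server-name my-dev-corda-db \
  --name AllowMyClientIP \
  --start-ip-address 57.211.24.3 \
  --end-ip-address 57.211.24.3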
For the JDBC connection settings, only ?sslmode=require is needed, so in your node configuration use:
dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks?sslmode=require"
We are getting "org.postgresql.util.PSQLException: This connection has been closed." on one of our deployments, but only for long-running transactions (more than a few minutes):
Caused by: org.hibernate.TransactionException: rollback failed
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.rollback(AbstractTransactionImpl.java:217)
at org.springframework.orm.hibernate4.HibernateTransactionManager.doRollback(HibernateTransactionManager.java:604)
... 87 more
Caused by: org.hibernate.TransactionException: unable to rollback against JDBC connection
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doRollback(JdbcTransaction.java:167)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.rollback(AbstractTransactionImpl.java:211)
... 88 more
Caused by: org.postgresql.util.PSQLException: This connection has been closed.
at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:822)
at org.postgresql.jdbc2.AbstractJdbc2Connection.rollback(AbstractJdbc2Connection.java:839)
at org.apache.commons.dbcp2.DelegatingConnection.rollback(DelegatingConnection.java:492)
at org.apache.commons.dbcp2.DelegatingConnection.rollback(DelegatingConnection.java:492)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doRollback(JdbcTransaction.java:163)
... 89 more
Our stack is as follows:
PostgreSQL 9.2 (on the DB server, Ubuntu 16.04)
PgBouncer (on the application server, Ubuntu 16.04)
JARs (on the application server, Ubuntu 16.04):
org.postgresql:postgresql:9.2-1004-jdbc41
javax.transaction:jta:1.1
org.apache.commons:commons-pool2:2.4.2
org.apache.commons:commons-dbcp2:2.1.1
PostgreSQL and PgBouncer use the default parameters, and we use the following parameters for DBCP:
database-initial-size = 2
database-max-total = 200
database-validation-query = SELECT 1
database-test-on-borrow = true
database-test-while-idle = true
database-max-wait-millis = 3000
database-time-between-eviction-runs-millis = 34000
database-min-evictable-idle-time-millis = 55000
We have other deployments with the same parameters, but we are not seeing the same problem there.
I suspect there is a firewall/NAT that resets the connection after some idle timeout, but I don't know how to check whether this is the case. I would very much appreciate guidance on which logs, parameters, and configurations to check that may cause this exception.
I have verified that if PostgreSQL and PgBouncer are on the same server, this problem does not occur.
I have also investigated the PostgreSQL logs, and no error messages are logged.
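One way to test the firewall/NAT theory (a sketch, not a confirmed fix): make sure TCP keepalives are sent on the PgBouncer-to-PostgreSQL leg, the one that actually crosses the network, at an interval shorter than any suspected idle timeout. PgBouncer exposes the kernel keepalive knobs in pgbouncer.ini; the values below are illustrative assumptions:

; pgbouncer.ini: probe idle server connections every couple of minutes
; so a stateful firewall never sees them as dead
tcp_keepalive = 1
tcp_keepidle = 120
tcp_keepintvl = 30
tcp_keepcnt = 3

If the exception stops once keepalives are enabled, an idle-timeout device between the two servers is the likely culprit.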
We started getting the error below intermittently during the last week. So far we have not been able to trace the problem to anything in particular. The query in question is an aggregation over a collection that holds around 400k objects. We run the same application for different clients, and the error started happening for the clients that have passed that 400k mark. I ran the query directly and it took about 1.5 seconds.
The same exact exception took place when we were iterating over the results of another aggregation:
DBCursor cursor = db.cMD.find([colaborador: [$in: listP], data: data], [colab: 1, _id: 0])
def listW = []
while (cursor.hasNext()) { // Exception happened here
    def resultMap = cursor.next().toMap()
    listW.add(resultMap.colab)
}
2015-05-20 14:03:43,511 [quartzScheduler_Worker-6] ERROR listeners.ExceptionPrinterJobListener - Exception occurred in job: Grails Job
org.quartz.JobExecutionException: com.mongodb.MongoException$Network: Read operation to server localhost:27017 failed on database application1 [See nested exception: com.mongodb.MongoException$Network: Read operation to server localhost:27017 failed on database application1]
at grails.plugins.quartz.GrailsJobFactory$GrailsJob.execute(GrailsJobFactory.java:111)
at grails.plugins.quartz.QuartzDisplayJob.execute(QuartzDisplayJob.groovy:27)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: com.mongodb.MongoException$Network: Read operation to server localhost:27017 failed on database application1
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:300)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:271)
at com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
at com.mongodb.DB.command(DB.java:317)
at com.mongodb.DB.command(DB.java:296)
at com.mongodb.DBCollectionImpl.aggregate(DBCollectionImpl.java:99)
at com.mongodb.DBCollection.aggregate(DBCollection.java:1571)
at com.gmongo.internal.Patcher._invoke(Patcher.groovy:49)
at com.gmongo.internal.Patcher$__patchInternal_closure1.doCall(Patcher.groovy:38)
at
OUR APPLICATION CODE
at GrailsMelodyGrailsPlugin$_closure4_closure16_closure17.doCall(GrailsMelodyGrailsPlugin.groovy:184)
at
OUR APPLICATION CODE
at grails.plugins.quartz.GrailsJobFactory$GrailsJob.execute(GrailsJobFactory.java:102)
... 3 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.bson.io.Bits.readFully(Bits.java:48)
at org.bson.io.Bits.readFully(Bits.java:35)
at org.bson.io.Bits.readFully(Bits.java:30)
at com.mongodb.Response.<init>(Response.java:42)
at com.mongodb.DBPort$1.execute(DBPort.java:141)
at com.mongodb.DBPort$1.execute(DBPort.java:135)
at com.mongodb.DBPort.doOperation(DBPort.java:164)
at com.mongodb.DBPort.call(DBPort.java:135)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:292)
... 16 more
Any ideas?
I was seeing that same message with a simpler db.collection.find(criteria). In my case I wasn't using an index. Once one was created, things were much faster, and hence no more timeouts.
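For the query shown in the question, that means indexing the fields the find() filters on. A sketch in the mongo shell (field names taken from the query above; confirm with explain() that the index is actually used):

db.cMD.createIndex({ colaborador: 1, data: 1 })

An aggregation benefits in the same way when its initial $match stage can be satisfied from the index, which keeps reads fast enough to stay under the socket timeout.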