I am doing a PoC of the ksqlDB Elasticsearch connector.
I am following two documents online:
First:
https://ksqldb.io/quickstart.html
Everything worked fine, and then I followed the
second one:
https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/demo_build-a-streaming-pipeline.adoc
I am getting this error when I run the following command:
CREATE SINK CONNECTOR SINK_ES_sample_1 WITH (
    'connector.class' = 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector',
    'topics' = 'sample_1',
    'connection.url' = 'http://localhost:9200',
    'type.name' = '_doc',
    'key.ignore' = 'false',
    'schema.ignore' = 'true',
    'transforms' = 'ExtractTimestamp',
    'transforms.ExtractTimestamp.type' = 'org.apache.kafka.connect.transforms.InsertField$Value',
    'transforms.ExtractTimestamp.timestamp.field' = 'sample_1'
);
Error:
io.confluent.ksql.util.KsqlServerException:
org.apache.hc.client5.http.HttpHostConnectException: Connect to http://localhost:8083 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Caused by: org.apache.hc.client5.http.HttpHostConnectException: Connect to http://localhost:8083 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Caused by: Could not connect to the server. Please check the server details are correct and that the server is running.
That error suggests that you've misconfigured the ksqlDB server's connection to Kafka Connect.
If you're following that demo script, you should use the associated Docker Compose file, which is configured correctly:
KSQL_KSQL_CONNECT_URL: http://kafka-connect-01:8083
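For reference, that setting lives in the environment of the ksqlDB server service in the Compose file. A minimal sketch of the relevant part (service and image names here are assumptions; the demo's own Compose file is authoritative):

ksqldb-server:
  image: confluentinc/ksqldb-server:latest
  environment:
    KSQL_BOOTSTRAP_SERVERS: kafka:9092   # assumed broker service name
    # Point ksqlDB at the Kafka Connect worker so CREATE SINK CONNECTOR is forwarded there;
    # the default of http://localhost:8083 refers to the ksqlDB container itself, hence "Connection refused".
    KSQL_KSQL_CONNECT_URL: http://kafka-connect-01:8083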
I am using docker-compose for InfluxDB and Cassandra in my application, but when I try to run the application locally on my Mac I get the errors below. I am new to both, so I am not sure where to change the IPs for the two Docker images.
For Cassandra:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused"), '::1': error(99, "Tried connecting to [('::1', 9042, 0, 0)]. Last error: Cannot assign requested address")})
For InfluxDB, when trying to create a user programmatically:
curl: (6) Could not resolve host: influxdb
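For context, a Compose service name like influxdb only resolves inside the Compose network; an application running directly on the Mac has to reach the containers through ports published on localhost. A minimal sketch of what such port mappings could look like (service names, image versions and ports are assumptions):

services:
  cassandra:
    image: cassandra:3.11
    ports:
      - "9042:9042"   # CQL port published on localhost for the app running on the host
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"   # HTTP API; from the host, use http://localhost:8086 instead of http://influxdb:8086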
I am trying to connect Corda 4.1 (open source) to Azure PostgreSQL, with the following in the node.conf:
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks"
    dataSource.user = "me#my-dev-corda-db"
    dataSource.password = Password
}
It throws the error:
[ERROR] 2019-08-08T23:44:45,301Z [main] internal.NodeStartupLogging.invoke - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=uz1y94, moreInformationAt=https://errors.corda.net/OS/4.1/uz1y94]
net.corda.nodeapi.internal.persistence.CouldNotCreateDataSourceException: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.
....
....
Suppressed: org.postgresql.util.PSQLException: FATAL: SSL connection is required. Please specify SSL options and retry.
...
So I added ssl=true to the URL:
dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks?ssl=true"
and it throws the error:
[ERROR] 2019-08-08T23:49:45,409Z [main] internal.NodeStartupLogging.invoke - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=17q5mal, moreInformationAt=https://errors.corda.net/OS/4.1/17q5mal]
net.corda.nodeapi.internal.persistence.CouldNotCreateDataSourceException: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.
...
...
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Could not open SSL root certificate file /home/corda/.postgresql/root.crt.
...
Caused by: org.postgresql.util.PSQLException: Could not open SSL root certificate file /home/corda/.postgresql/root.crt.
...
I then tried setting sslmode=require as well:
dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks?ssl=true&sslmode=require"
which then errors with:
[ERROR] 2019-08-08T23:53:38,323Z [main] internal.NodeStartupLogging.invoke - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=uz1y94, moreInformationAt=https://errors.corda.net/OS/4.1/uz1y94]
net.corda.nodeapi.internal.persistence.CouldNotCreateDataSourceException: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.
...
...
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: FATAL: no pg_hba.conf entry for host "57.211.24.3", user "me", database "banks", SSL on
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:512) ~[HikariCP-2.5.1.jar:?]
...
Caused by: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "57.211.24.3", user "me", database "banks", SSL on
...
What are the full correct steps to use Azure PostgreSQL with Corda?
When you connect from the Internet to Azure PostgreSQL, you need to allow your IP in the server's firewall; see: https://learn.microsoft.com/en-us/azure/postgresql/concepts-firewall-rules#connecting-from-the-internet.
You can do this directly from the Azure portal: go to Connection Security and use Add Client IP (if you don't have a static IP, you will need to repeat this whenever your IP changes).
For the JDBC connection settings, only ?sslmode=require is needed, so in your node configuration use:
dataSource.url = "jdbc:postgresql://my-dev-corda-db.postgres.database.azure.com:5432/banks?sslmode=require"
I wrote a server that connects to PostgreSQL via Slick when a client sends a request. Now I have one client that sends a request every second. The problem is that after several requests, the error below is raised:
[SEVERE][slick.psql.db connection adder][Driver] Connection error: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
at org.postgresql.Driver.makeConnection(Driver.java:450)
at org.postgresql.Driver.connect(Driver.java:252)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:117)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:123)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:365)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:194)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:460)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:697)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:683)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This is my Slick PostgreSQL config:
psql {
    profile = "slick.jdbc.PostgresProfile$"
    driver = "slick.driver.PostgresDriver$"
    db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://localhost:5432/mydb"
        user = postgres
        password = 123
        numThreads = 2
        queueSize = 100
    }
}
And the PostgreSQL config has max_connections = 100.
Here, the server-side code ran 50 times (queueSize/numThreads) and then showed the error.
Also, this is the code for the DB connection:
// imports assuming Slick 3.2+
import slick.basic.DatabaseConfig
import slick.jdbc.{JdbcBackend, PostgresProfile}
import slick.jdbc.PostgresProfile.api._

lazy val psqlDbConfig: DatabaseConfig[PostgresProfile] = DatabaseConfig.forConfig("psql")
val psqlDb: JdbcBackend#DatabaseDef = psqlDbConfig.db
// SQL commands
psqlDb.run(sql""" select * from mytable """.as[MyObj])
I assumed that when the maximum number of connections was reached, HikariCP would release connections and manage the pool, but that did not happen.
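For what it's worth, a HikariCP pool only caps the connections it opens itself; a pattern that commonly produces "sorry, too many clients already" is building a new DatabaseConfig (and therefore a new pool) for every request, so the server-wide session count keeps growing. A minimal sketch of creating the pool once and sharing it, assuming Slick 3.2+ and that the request handlers can use a singleton:

import slick.basic.DatabaseConfig
import slick.jdbc.PostgresProfile

// Built once at application startup and reused by every request handler,
// so PostgreSQL never sees more sessions than this one pool opens.
object Db {
  lazy val config: DatabaseConfig[PostgresProfile] = DatabaseConfig.forConfig("psql")
  lazy val instance = config.db   // call instance.close() only on shutdown
}

In the psql.db block, maxConnections (together with numThreads) bounds how many sessions the pool opens; that bound should stay below PostgreSQL's max_connections.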
We are getting "org.postgresql.util.PSQLException: This connection has been closed." on one of our deployments, but only for long-running transactions (more than a few minutes):
Caused by: org.hibernate.TransactionException: rollback failed
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.rollback(AbstractTransactionImpl.java:217)
at org.springframework.orm.hibernate4.HibernateTransactionManager.doRollback(HibernateTransactionManager.java:604)
... 87 more
Caused by: org.hibernate.TransactionException: unable to rollback against JDBC connection
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doRollback(JdbcTransaction.java:167)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.rollback(AbstractTransactionImpl.java:211)
... 88 more
Caused by: org.postgresql.util.PSQLException: This connection has been closed.
at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:822)
at org.postgresql.jdbc2.AbstractJdbc2Connection.rollback(AbstractJdbc2Connection.java:839)
at org.apache.commons.dbcp2.DelegatingConnection.rollback(DelegatingConnection.java:492)
at org.apache.commons.dbcp2.DelegatingConnection.rollback(DelegatingConnection.java:492)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doRollback(JdbcTransaction.java:163)
... 89 more
Our stack is as follows:
Postgresql 9.2 (on db server Ubuntu 16.03)
PgBouncer (on application server Ubuntu 16.03)
Jars (on application server Ubuntu 16.03)
org.postgresql:postgresql:9.2-1004-jdbc41
javax.transaction:jta:1.1
org.apache.commons:commons-pool2:2.4.2
org.apache.commons:commons-dbcp2:2.1.1
PostgreSQL and PgBouncer use the default parameters, and we use the following parameters for DBCP:
database-initial-size = 2
database-max-total = 200
database-validation-query = SELECT 1
database-test-on-borrow = true
database-test-while-idle = true
database-max-wait-millis = 3000
database-time-between-eviction-runs-millis = 34000
database-min-evictable-idle-time-millis = 55000
We have other deployments with the same parameters, but we are not having this problem there.
I suspect there is a firewall/NAT that resets the connection after some timeout, but I don't know how to check whether this is the case. I would very much appreciate guidance on which logs, parameters, or configurations to check that may cause this exception.
I have tested that if PostgreSQL and PgBouncer are on the same server, this problem does not occur.
I have also investigated the PostgreSQL logs, and no error messages are logged.
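One way to probe the firewall/NAT hypothesis is to have the OS send TCP keepalives on PgBouncer's server-side connections at an interval well below the suspected idle timeout; if the errors then disappear, an intermediate device was dropping idle connections. PgBouncer exposes the kernel keepalive knobs in pgbouncer.ini (the values below are illustrative assumptions, not tuned recommendations):

; pgbouncer.ini
; turn keepalive probing on, send the first probe after 60s idle,
; then every 10s, and give up after 5 failed probes
tcp_keepalive = 1
tcp_keepidle = 60
tcp_keepintvl = 10
tcp_keepcnt = 5

Capturing traffic with tcpdump on the application server and looking for RST packets around the failure time is another way to confirm which side is closing the connection.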
After following the instructions in the official reference "Use the Azure Cosmos DB Emulator for local development and testing", we attempted to connect to MongoDB with MongoChef using the connection string pasted below:
mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==#localhost:10255/admin?ssl=true
But when we test the connection through MongoChef, we get the error pasted below:
Connection failed.
SERVER [localhost:10255] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Socket error: Connection refused: connect
Details:
Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27018, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: connect}}]
If we check the ports currently in use on our system, we do not see 10255 being used at all.
Could someone please help us understand what's wrong here?
The connection string is correct. Studio 3T has a very nasty bug: when you use the From URI function while creating a new connection, it strips the "+" characters that are present in the key, which is why you need to copy the key manually into the corresponding field of the connection properties.
Also, make sure the emulator is actually launched (its tray-area icon should be present).
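If nothing is listening on port 10255 at all, the emulator's MongoDB endpoint may simply not be enabled. The emulator documents a command-line switch for this; a sketch (install path and wire-protocol version are assumptions for a typical setup):

"C:\Program Files\Azure Cosmos DB Emulator\Microsoft.Azure.Cosmos.Emulator.exe" /EnableMongoDbEndpoint=3.2

After restarting the emulator with that switch, port 10255 should show up as listening and the connection string above should work.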