Kafka Connect FileConfigProvider not working - DB2

I'm running Kafka Connect with JDBC Source Connector for DB2 in standalone mode. Everything works fine, but I'm putting the passwords and other sensitive info into my connector file in plain text. I'd like to remove this, so I found that FileConfigProvider can be used:
https://docs.confluent.io/current/connect/security.html#fileconfigprovider
However, when I try to use this it does not seem to pick up my properties file. Here's what I'm doing:
connect-standalone.properties -
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
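For context, those two lines sit in the worker config alongside the usual standalone settings; a minimal sketch of connect-standalone.properties (broker address, converters, and offset file are illustrative, only the last two lines relate to the provider):
# minimal standalone worker config sketch; only the config.providers lines are specific to FileConfigProvider
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider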
secrets.properties -
password=thePassword
Source Config -
"connection.password": "${file:/Users/me/app/context/src/main/kafkaconnect/connector/secrets.properties:password}",
"table.whitelist": "MY_TABLE",
"mode": "timestamp",
When I try to load my source connector (via the REST API), I get the following error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 2 error(s):\nInvalid value com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][2013][11249][4.26.14] Connection authorization failure occurred. Reason: User ID or Password invalid. ERRORCODE=-4214, SQLSTATE=28000 for configuration Couldn't open connection to jdbc:db2://11.1.111.111:50000/mydb\nInvalid value com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][2013][11249][4.26.14] Connection authorization failure occurred. Reason: User ID or Password invalid. ERRORCODE=-4214, SQLSTATE=28000 for configuration Couldn't open connection to jdbc:db2://11.1.111.111:50000/mydb\nYou can also find the above list of errors at the endpoint /{connectorType}/config/validate"}
The password I'm providing is correct. It works if I just hardcode it into my source json. Any ideas? Thanks!
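For reference, the connector is loaded with a standard Connect REST call along these lines (host, port, and file name are illustrative):
# POST the connector JSON to the Connect worker's REST endpoint
curl -X POST -H "Content-Type: application/json" --data @source-connector.json http://localhost:8083/connectors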
Edit: As a note, I get similar results on the sink side inserting into a Postgres database.
Edit: Result of GET /connectors:
{
  "name": "jdbc_source_test-dev",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "timestamp.column.name": "UPDATED_TS",
    "connection.password": "${file:/opt/connect-secrets.properties:dev-password}",
    "validate.non.null": "false",
    "table.whitelist": "MY_TABLE",
    "mode": "timestamp",
    "topic.prefix": "db2-test-",
    "transforms.extractInt.field": "kafka_id",
    "_comment": "The Kafka topic will be made up of this prefix, plus the table name ",
    "connection.user": "username",
    "name": "jdbc_source_test-dev",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "connection.url": "jdbc:db2://11.1.111.111:50000/mydb",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter"
  },
  "tasks": [
    {
      "connector": "jdbc_source_test-dev",
      "task": 0
    }
  ],
  "type": "source"
}

Related

Errors with BigQuery Sink Connector Configuration

I am trying to ingest data from MySQL to BigQuery. I am using Debezium components running on Docker for this purpose.
Any time I try to deploy the BigQuery sink connector to Kafka Connect, I get this error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 2 error(s):\nFailed to construct GCS client: Failed to access JSON key file\nAn unexpected error occurred while validating credentials for BigQuery: Failed to access JSON key file\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"}
It looks like an issue with locating the service account key.
I granted the service account BigQuery Admin and Editor permissions, but the error persists.
This is my BigQuery connector configuration file:
{
  "name": "kcbq-connect1",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "tasks.max": "1",
    "topics": "kcbq-quickstart1",
    "sanitizeTopics": "true",
    "autoCreateTables": "true",
    "autoUpdateSchemas": "true",
    "schemaRetriever": "com.wepay.kafka.connect.bigquery.retrieve.IdentitySchemaRetriever",
    "schemaRegistryLocation": "http://localhost:8081",
    "bufferSize": "100000",
    "maxWriteSize": "10000",
    "tableWriteWait": "1000",
    "project": "dummy-production-overview",
    "defaultDataset": "debeziumtest",
    "keyfile": "/Users/Oladayo/Desktop/Debezium-Learning/key.json"
  }
}
Can anyone help?
Thank you.
I needed to mount the service account key from my local directory into the Kafka Connect container. That was how I solved the issue. Thank you :)
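For anyone hitting the same error, the fix boils down to bind-mounting the key file into the Kafka Connect container so the keyfile path in the connector config resolves; a docker-compose excerpt as a sketch (the service name is illustrative):
services:
  connect:
    volumes:
      # mount the host key file at the same path referenced by the "keyfile" setting
      - /Users/Oladayo/Desktop/Debezium-Learning/key.json:/Users/Oladayo/Desktop/Debezium-Learning/key.json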

Using Debezium to connect to PostgreSQL 11: Couldn't obtain encoding for database test

I am using Debezium CDC to connect to PostgreSQL. I built the PostgreSQL 11 instance with Docker and it runs well. When I use the Debezium connector in Kafka Connect, it reports:
Couldn't obtain encoding for database test
The curl command is:
curl -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8083/connectors/ -d '{
"name": "debezium",
"config": {
"name": "debezium",
"connector.class": "io.debezium.connector.postgresql.PostgresConnector",
"tasks.max": "1",
"database.hostname": "localhost",
"database.port": "5432",
"database.dbname": "test",
"database.user": "pg",
"database.password": "135790",
"database.server.name": "ls",
"table.whitelist": "public.test",
"plugin.name": "pgoutput"
}
}'
The Kafka Connect exception is:
[2020-07-08 09:24:35,076] ERROR Uncaught exception in REST call to /connectors/ (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
java.lang.RuntimeException: Couldn't obtain encoding for database test
at io.debezium.connector.postgresql.connection.PostgresConnection.determineDatabaseCharset(PostgresConnection.java:434)
at io.debezium.connector.postgresql.connection.PostgresConnection.<init>(PostgresConnection.java:77)
at io.debezium.connector.postgresql.connection.PostgresConnection.<init>(PostgresConnection.java:87)
at io.debezium.connector.postgresql.PostgresConnector.validate(PostgresConnector.java:102)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:277)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:534)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:531)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:267)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:216)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: FATAL: database "test" does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532)
at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2644)
at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:137)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:255)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:217)
at org.postgresql.Driver.makeConnection(Driver.java:458)
at org.postgresql.Driver.connect(Driver.java:260)
at io.debezium.jdbc.JdbcConnection.lambda$patternBasedFactory$1(JdbcConnection.java:190)
at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:788)
at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:783)
at io.debezium.connector.postgresql.connection.PostgresConnection.determineDatabaseCharset(PostgresConnection.java:431)
... 13 more
[2020-07-08 09:24:35,128] INFO 127.0.0.1 - - [08/Jul/2020:01:24:34 +0000] "POST /connectors/ HTTP/1.1" 500 73 330 (org.apache.kafka.connect.runtime.rest.RestServer:60)
It seems to me that the database named test either does not exist or is not visible to the user pg.
A couple of things in your payload look inaccurate. First, use a resolvable name instead of localhost; second, set the correct database namespace:
"database.hostname": "FQDN",
"database.server.name": "test_table_name",
It may also be that the PostgreSQL host has not authorized authentication for the pg user. It needs an entry in pg_hba.conf (on the PostgreSQL server) to establish trust/auth from the client machine, i.e. the Kafka Connect host.
# host DATABASE USER ADDRESS METHOD [OPTIONS]
# hostssl DATABASE USER ADDRESS METHOD [OPTIONS]
host test pg Kafka.connector.server.ip/32 md5
hostssl test pg Kafka.connector.server.ip/32 md5
Then reload or restart the PostgreSQL server so the auth change for the pg user takes effect; in my case, pg_ctl reload was enough.
Since the curl call goes through the REST API, also make sure the Kafka REST (8082) and Kafka Connect REST (8083) ports are allowed in the firewall settings of the PostgreSQL server.
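A quick way to check both the pg_hba.conf entry and the network path is to try a connection from the Kafka Connect host itself (the host placeholder is illustrative):
# run on the Kafka Connect machine; a password prompt followed by a "1" result means auth and network are fine
psql "host=<postgres-host> port=5432 dbname=test user=pg" -c "SELECT 1"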
Yeah, the message is definitely misleading. In my case the problem was closed ports between Kafka Connect and the database server.

How do you connect over SSL to Postgres in Loopback v3

My datasource.json file looks like this...
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "mydb": {
    "host": "mydbhost.db.ondigitalocean.com",
    "port": 25060,
    "url": "",
    "database": "mydb-staging",
    "password": "mypassword",
    "name": "mydb",
    "user": "myuser",
    "connector": "postgresql",
    "ssl": true
  }
}
But DigitalOcean managed Postgres provides you with a CA file to use.
Where do I put it?
How do I configure LB3 to know about it?
Loopback docs say https://loopback.io/doc/en/lb3/PostgreSQL-connector.html
The PostgreSQL connector uses node-postgres as the driver. For more information about configuration parameters, see node-postgres documentation. https://node-postgres.com/features/ssl
I just don't understand how to set this up in LB.
When I start my server up I get...
Unhandled rejection error: permission denied for database mydb-staging
If you are using the managed database services on DigitalOcean, only the default "doadmin" user can read and write on any database; any other added user can only read data.
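As for the CA file itself, one hedged approach (a sketch, not verified against this exact setup) is to move the datasource into a datasources.local.js file, since LoopBack 3 merges JS datasource files and loopback-connector-postgresql forwards its settings to node-postgres, which accepts an ssl object with a ca field:
// datasources.local.js - a sketch; the CA path is illustrative
const fs = require('fs');

module.exports = {
  mydb: {
    connector: 'postgresql',
    host: 'mydbhost.db.ondigitalocean.com',
    port: 25060,
    database: 'mydb-staging',
    user: 'myuser',
    password: 'mypassword',
    // node-postgres accepts an ssl object instead of a plain boolean
    ssl: {
      rejectUnauthorized: true,
      // CA certificate downloaded from the DigitalOcean control panel
      ca: fs.readFileSync('/path/to/ca-certificate.crt').toString(),
    },
  },
};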

How to connect a Loopback API with a MongoDB port in Google Kubernetes Engine

I am following this blog to deploy MongoDB in GKE, and I have come to the point where I need my Loopback API image, running in a different pod in the same cluster, to talk to the database.
Local development works as expected with the following datasource.json:
{
  "db": {
    "host": "database",
    "port": 27017,
    "url": "",
    "database": "test",
    "password": "",
    "name": "mongoDS",
    "user": "",
    "connector": "mongodb"
  }
}
In the tutorial, it is written that the connection string URI would be:
“mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?”
I am not sure how to apply it to the above datasource.json. Any help will be appreciated.
The tutorial creates a headless service with the name "mongo" in the default namespace.
Replace your "host": "database" with "host": "mongo" in your pod's datasource.json.
First, in Loopback's datasource the name attribute should be the same as the datasource key. Second, the host attribute ought to be the name of the pod/service that hosts the MongoDB instance.
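Putting both suggestions together, the datasource from the question might be adjusted roughly like this (a sketch; values carried over from the question, with the host pointed at the headless mongo service and name matching the datasource key):
{
  "db": {
    "host": "mongo",
    "port": 27017,
    "url": "",
    "database": "test",
    "password": "",
    "name": "db",
    "user": "",
    "connector": "mongodb"
  }
}
Alternatively, the replica-set form from the tutorial can go in the url field instead, e.g. mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/test.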

Bluemix - error connecting Loopback starter with MongoDB Compose

I am trying to set up the Loopback starter app on Bluemix with MongoDB. I have set up the MongoDB instance in Compose. However, I keep getting a connection error even though I have followed all instructions and can even connect using the mongo shell.
{ [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
Take a look here: http://www.technicaladvices.com/2015/10/06/deploying-your-existing-strongloop-application-to-ibm-bluemix/
It shows the details of deploying a "StrongLoop" app in IBM Bluemix.
If the issue is still there, open a support request directly from your Bluemix console, or open a new ticket here: https://support.ng.bluemix.net/gethelp/
I was able to solve the problem by using the following configuration format in datasources.json:
"mongoDs": {
"host": "candidate.53.mongolayer.com",
"port": 10852,
"database": "SiteRite",
"username": "xxxx",
"password": "xxxx",
"name": "mongoDs",
"connector": "mongodb"
}
NOT using the 'url' key, and instead using 'host' and 'port' with a separate 'username' and 'password' for the database, is what seemed to fix it.