I have installed the JDBC connector by running confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.2.5 inside my Kafka Connect container, but when I try to create a new sink that uses it I get the following error: Failed to find any class that implements Connector and which name matches io.confluent.connect.jdbc.JdbcSinkConnector
The sink I'm trying to use:
{
  "name": "jdbc-sink-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "my_topic",
    "connection.url": "jdbc:postgresql://ip:port/postgres",
    "connection.user": "postgres",
    "connection.password": "PASSWORD",
    "auto.create": "true"
  }
}
I'm using the confluentinc/cp-kafka-connect:6.1.0 image.
If I instead build an image that runs confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.2.5 at build time and use that image, it works.
So it looks like we need to restart Kafka Connect after the install?
We need to restart Kafka Connect after the install?
Yes, the JVM doesn't pick up new plugins until it is (re)started.
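For example, one common pattern (a sketch, assuming the stock cp-kafka-connect base image, where the Confluent Hub client installs into a directory already on the plugin path) is to install the plugin at image build time so it is present before the worker starts:

FROM confluentinc/cp-kafka-connect:6.1.0
# Install the JDBC connector during the image build so the worker
# discovers it when it scans the plugin path at startup.
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.2.5

If you install into an already running container instead, restart that container (not just re-POST the connector) so the plugin path gets scanned again.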
I have created a custom producer interceptor (AuditProducerInterceptor) which accepts some custom configs (application_id, type, etc.). I have built a jar from the AuditProducerInterceptor project and placed it inside the Kafka Connect worker at /usr/share/java/monitoring-interceptors. When I try to post a JDBC source connector with the below configuration, my audit interceptor is not triggered.
{
  "name": "jdbc-source-xx-xxxx-xxx-xxx",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:sqlserver://{{ip}}:1433;databaseName=XX;useNTLMv2=true",
    "connection.user": "SA",
    "connection.password": "Admin1234",
    "producer.interceptor.classes": "com.optum.payer.common.kafka.audit.interceptor.AuditProducerInterceptor",
    "topic.prefix": "MyTestTopic",
    "query": "SELECT ID, chart_id, request_id, UpdatedDate FROM xxx.xxx WITH (NOLOCK)",
    "mode": "timestamp",
    "timestamp.column.name": "UpdatedDate",
    "producer.audit.application.id": "HelloApplication",
    "producer.audit.type": "test type",
    "poll.interval.ms": "10",
    "tasks.max": "1",
    "batch.max.rows": "100",
    "validate.non.null": "false",
    "numeric.mapping": "best_fit",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://{{ip}}:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://{{ip}}:8081"
  }
}
As you can see in the configuration, I have added the below props to the connector config to trigger the custom interceptor, but I don't see any logs in Kafka Connect related to AuditProducerInterceptor.
"producer.interceptor.classes": "com.optum.payer.common.kafka.audit.interceptor.AuditProducerInterceptor"
"producer.audit.application.id": "HelloApplication",
"producer.audit.type": "test type"
I tried adding these three configs to the Kafka Connect worker config and the interceptor is triggered. But I want to trigger the interceptor via the JDBC source connector so that I can pass the custom props (application_id, type, etc.) per connector.
Please help me solve this issue.
If you have allowed client overrides in the Connect worker (they are enabled by default), you'll want to use the producer.override. prefix.
From the docs:
Starting with 2.3.0, client configuration overrides can be configured individually per connector by using the prefixes producer.override. and consumer.override. for Kafka sources or Kafka sinks respectively.
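In other words, a sketch of how the connector config fragment from the question might change (hedged: whether non-standard producer keys such as the audit.* ones are passed through to the interceptor depends on the worker's connector.client.config.override.policy):

"producer.override.interceptor.classes": "com.optum.payer.common.kafka.audit.interceptor.AuditProducerInterceptor",
"producer.override.audit.application.id": "HelloApplication",
"producer.override.audit.type": "test type"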
I'm pretty new to the Kafka and Kafka Connect world. I am trying to implement CDC using Kafka (on MSK), Kafka Connect (using the Debezium connector for PostgreSQL) and an RDS Postgres instance. Kafka Connect runs in a Kubernetes pod in our cluster deployed in AWS.
Before diving into the details of the configuration used, I'll try to summarise the problem:
Once the connector starts, it sends messages to the topic as expected (snapshot).
Once we make any change to a table (create, update, delete), no messages are sent to the topic. We would expect to see messages about the changes made to the table.
My connector config looks like:
{
  "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
  "database.user": "root",
  "database.dbname": "insights",
  "slot.name": "cdc_organization",
  "tasks.max": "1",
  "column.blacklist": "password, access_key, reset_token",
  "database.server.name": "insights",
  "database.port": "5432",
  "plugin.name": "wal2json_rds_streaming",
  "schema.whitelist": "public",
  "table.whitelist": "public.kafka_connect_cdc_test",
  "key.converter.schemas.enable": "false",
  "database.hostname": "de-test-sre-12373.cbplqnioxomr.eu-west-1.rds.amazonaws.com",
  "database.password": "MYSECRETPWD",
  "value.converter.schemas.enable": "false",
  "name": "source-postgres",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "snapshot.mode": "initial"
}
We have tried different values for the plugin.name property: wal2json, wal2json_streaming and wal2json_rds_streaming.
There's no connection problem between the connector and the DB, as we already see messages flowing through as soon as the connector starts.
Is there a configuration issue in the connector described above that prevents us from seeing messages for new changes in the topic?
Thanks
Your connector config looks a bit confusing. I'm pretty new to Kafka as well, so I don't know the exact issue, but here is a connector config that works for me:
{
  "name": "<connector_name>",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.server.name": "<server>",
    "database.port": "5432",
    "database.hostname": "<host>",
    "database.user": "<user>",
    "database.dbname": "<dbname>",
    "tasks.max": "1",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "<kafka_topic_name>",
    "plugin.name": "pgoutput",
    "include.schema.changes": "true"
  }
}
If this configuration doesn't work either, look further up in the console log; sometimes the relevant error isn't the last thing written to the console.
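Besides the console log, you can also ask the Connect REST API for the connector and task state, which often surfaces the actual stack trace (a sketch, assuming the worker listens on localhost:8083 and the connector is named source-postgres as in the question):

# shows the connector state and each task's state; a FAILED task includes its trace
curl -s localhost:8083/connectors/source-postgres/status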
I'm trying to create a Kafka sink connector that uses a protobuf value converter. I've got a version of this configuration working with JSON; however, I now need to change it to use protobuf messages.
I'm trying to create a connector with the following request:
curl -X POST localhost:8083/connectors -H "Content-Type: application/json" -d '
{
  "name": "jdbc-sink-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "TEST_PROTO",
    "connection.url": "${DB_URL}",
    "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
    "key.converter": "io.confluent.connect.protobuf.ProtobufConverter",
    "auto.create": true,
    "auto.evolve": true,
    "type": "sink",
    "connection.user": "${DB_USER}",
    "connection.password": "${DB_PASS}"
  }
}'
This gives the following 400 error message:
Invalid value io.confluent.connect.protobuf.ProtobufConverter for configuration value.converter: Class io.confluent.connect.protobuf.ProtobufConverter could not be found
I'm not quite understanding why I'm not able to include this here. From what I can see the documentation suggests this is an appropriate value: https://docs.confluent.io/current/connect/userguide.html
Please can anyone help?
I guess that in this case, you are missing these configurations:
value.converter.schema.registry.url
key.converter.schema.registry.url
key.converter.schemas.enable
value.converter.schemas.enable
In addition to these, also try using the latest JDBC connector jars and the latest Confluent Platform version. If this doesn't work, please let me know.
Adding to the above answer for clarity, you need to set values for the below keys while configuring Kafka Connect.
value.converter = "io.confluent.connect.protobuf.ProtobufConverter"
key.converter = "io.confluent.connect.protobuf.ProtobufConverter"
value.converter.schema.registry.url = URL (you should have a Schema Registry service running, and all producers should register their schemas with it before writing to the broker)
key.converter.schema.registry.url = can be the same URL as above
key.converter.schemas.enable = true (if using Protobuf)
value.converter.schemas.enable = true (if using Protobuf)
To verify if the converter is loaded successfully, you can check the INFO logs.
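For example, a hedged sketch of those keys inside the sink config from the question (the Schema Registry URL http://schema-registry:8081 is a placeholder assumption; use your own registry address):

"key.converter": "io.confluent.connect.protobuf.ProtobufConverter",
"key.converter.schema.registry.url": "http://schema-registry:8081",
"key.converter.schemas.enable": "true",
"value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
"value.converter.schema.registry.url": "http://schema-registry:8081",
"value.converter.schemas.enable": "true"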
I have a Kafka topic with Avro-serialized values.
I am trying to set up a JDBC (Postgres) sink connector to dump these messages into a Postgres table.
But I am getting the below error:
"org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.avro.AvroConverter for configuration value.converter: Class io.confluent.connect.avro.AvroConverter could not be found."
My Sink.json is
{"name": "postgres-sink",
"config": {
"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector",
"tasks.max":"1",
"topics": "<topic_name>",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "instaclustr_schema_registry_host:8085",
"connection.url": "jdbc:postgresql://postgres:5432/postgres?currentSchema=local",
"connection.user": "postgres",
"connection.password": "postgres",
"auto.create": "true",
"auto.evolve":"true",
"pk.mode":"none",
"table.name.format": "<table_name>"
}
}
Also, I have made changes in connect-distributed.properties (bootstrap servers).
The command I am running is:
curl -X POST -H "Content-Type: application/json" --data @postgres-sink.json https://<instaclustr_schema_registry_host>:8083/connectors
io.confluent.connect.avro.AvroConverter is not part of the Apache Kafka distribution. You can either just run Apache Kafka as part of Confluent Platform (which ships with the converter and is easier) or you can download it separately and install it yourself.
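If you install it yourself, a rough sketch (the component coordinate and version tag are assumptions; adjust them to your setup):

confluent-hub install --no-prompt confluentinc/kafka-connect-avro-converter:latest

Then make sure the directory it was installed into is listed in plugin.path in your worker properties (connect-distributed.properties in your case) and restart the Connect worker so it rescans the plugins.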
I am trying to use Debezium to connect to a mysql database on my local machine.
I'm trying to launch Kafka Connect in standalone mode with the following command:
sudo kafka/bin/connect-standalone.sh kafka/config/connect-standalone.properties kafka/config/connector.properties
Here is the config in connector.properties:
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "127.0.0.1",
    "tasks.max": "1",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "Password#123",
    "database.server.id": "1",
    "database.server.name": "fullfillment",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "dbhistory.fullfillment",
    "include.schema.changes": "true",
    "type": "null"
  }
}
Getting the following error while running the mentioned command:
[2018-12-07 10:58:17,102] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {"config"={, "type"="null", "database.user"="debezium",, "database.port"="3306",, "include.schema.changes"="true",, "database.server.name"="fullfillment",, "connector.class"="io.debezium.connector.mysql.MySqlConnector",, "tasks.max"="1",, "database.history.kafka.topic"="dbhistory.fullfillment",, "database.server.id"="1",, "database.whitelist"="inventory",, "name"="inventory-connector",, "database.hostname"="127.0.0.1",, {=, "database.password"="Password#123",, }=, "database.history.kafka.bootstrap.servers"="localhost:9092",} contains no connector type
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:110)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {"config"={, "type"="null", "database.user"="debezium",, "database.port"="3306",, "include.schema.changes"="true",, "database.server.name"="fullfillment",, "connector.class"="io.debezium.connector.mysql.MySqlConnector",, "tasks.max"="1",, "database.history.kafka.topic"="dbhistory.fullfillment",, "database.server.id"="1",, "database.whitelist"="inventory",, "name"="inventory-connector",, "database.hostname"="127.0.0.1",, {=, "database.password"="Password#123",, }=, "database.history.kafka.bootstrap.servers"="localhost:9092",} contains no connector type
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:259)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:189)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
Any help will be highly appreciated.
connector.properties for standalone mode requires the Java properties file format. So please take the config section and rewrite it like:
connector.class=io.debezium.connector.mysql.MySqlConnector
.
.
.
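Expanded from the JSON in the question, the full connector.properties would look roughly like this (values carried over as-is; the "type": "null" entry is not a connector property, so it is omitted here):

name=inventory-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=127.0.0.1
database.port=3306
database.user=debezium
database.password=Password#123
database.server.id=1
database.server.name=fullfillment
database.whitelist=inventory
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=dbhistory.fullfillment
include.schema.changes=true
tasks.max=1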
You have a JSON file, not a property file.
A JSON file like this is meant to be used with connect-distributed mode, and POSTed via HTTP to the Kafka Connect REST API, not passed as a CLI argument.
For connect-standalone, you provide both the Connect worker properties and the connector properties files at the same time, as Java .properties files.
The connector.properties file format should be Java properties, not JSON.