Kafka Connect JDBC Sink Connector: Class Loader Not Found

Using the Confluent Docker all-in-one package with tag 5.4.1, I am struggling to get a JDBC sink connector up and running.
When I launch the following connector:
{
  "name": "mySink",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "topics": [
    "myTopic"
  ],
  "connection.url": "jdbc:sqlserver://sqlserver:1433;databaseName=myDB",
  "connection.user": "user",
  "connection.password": "**********",
  "dialect.name": "SqlServerDatabaseDialect",
  "insert.mode": "insert",
  "table.name.format": "TableSink",
  "pk.mode": "kafka",
  "fields.whitelist": [
    "offset",
    "value"
  ],
  "auto.create": "true"
}
(Some attributes edited)
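For completeness, I launch it by sending that JSON to the Connect REST API, roughly like this (the file name is illustrative):
curl -X PUT -H "Content-Type: application/json" \
  --data @mySink.json \
  http://localhost:8083/connectors/mySink/config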
I get the following error from the connect container:
ERROR Plugin class loader for connector: 'io.confluent.connect.jdbc.JdbcSinkConnector' was not found.
I have the following environment variable for connect:
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
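In the cp-all-in-one docker-compose.yml this is set on the connect service; trimmed to the relevant lines it looks roughly like this (a sketch, not the full service definition):
connect:
  image: cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
  environment:
    CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"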
And if I check in the container I have:
/usr/share/java/kafka-connect-jdbc
With the following files:
./jtds-1.3.1.jar
./common-utils-5.4.0.jar
./slf4j-api-1.7.26.jar
./kafka-connect-jdbc-5.4.0.jar
./postgresql-9.4.1212.jar
./sqlite-jdbc-3.25.2.jar
./mssql-jdbc-8.2.0.jre8.jar
./mssql-jdbc-8.2.0.jre13.jar
./mssql-jdbc-8.2.0.jre11.jar
The only changes I have made from the base connect image are the mssql jdbc drivers. These are working fine for a jdbc source connector.
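Roughly, the extra jars get into the image like this (an illustrative sketch only; the base image is the connect image from the compose file below, and a volume mount onto the same directory works just as well):
FROM cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
# drop the SQL Server drivers next to the bundled kafka-connect-jdbc plugin
COPY mssql-jdbc-8.2.0.jre8.jar /usr/share/java/kafka-connect-jdbc/
COPY mssql-jdbc-8.2.0.jre11.jar /usr/share/java/kafka-connect-jdbc/
COPY mssql-jdbc-8.2.0.jre13.jar /usr/share/java/kafka-connect-jdbc/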
Extra information as requested:
Output from curl -s localhost:8083/connector-plugins | jq '.[].class':
"io.confluent.connect.activemq.ActiveMQSourceConnector"
"io.confluent.connect.elasticsearch.ElasticsearchSinkConnector"
"io.confluent.connect.gcs.GcsSinkConnector"
"io.confluent.connect.ibm.mq.IbmMQSourceConnector"
"io.confluent.connect.jdbc.JdbcSinkConnector"
"io.confluent.connect.jdbc.JdbcSourceConnector"
"io.confluent.connect.jms.JmsSourceConnector"
"io.confluent.connect.s3.S3SinkConnector"
"io.confluent.connect.storage.tools.SchemaSourceConnector"
"io.confluent.kafka.connect.datagen.DatagenConnector"
"org.apache.kafka.connect.file.FileStreamSinkConnector"
"org.apache.kafka.connect.file.FileStreamSourceConnector"
"org.apache.kafka.connect.mirror.MirrorCheckpointConnector"
"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector"
"org.apache.kafka.connect.mirror.MirrorSourceConnector"
docker-compose from:
https://github.com/confluentinc/examples/tree/5.4.1-post/cp-all-in-one
Images:
confluentinc/cp-ksql-cli:5.4.1
confluentinc/cp-enterprise-control-center:5.4.1
confluentinc/cp-ksql-server:5.4.1
cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
confluentinc/cp-kafka-rest:5.4.1
confluentinc/cp-schema-registry:5.4.1
confluentinc/cp-server:5.4.1
confluentinc/cp-zookeeper:5.4.1

Related

Error in Confluent Kafka Source Connector Tasks (DatagenConnector) Data Serialization into Avro Format

I am trying to produce data from a source connector using the Confluent Kafka DatagenConnector, converting the value into Avro format with the Confluent Schema Registry. The configuration I am using to create the source connector is:
{
  "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
  "kafka.topic": "inventories_un3",
  "quickstart": "inventory",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "key.converter.schemas.enable": false,
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://schema-registry:8083",
  "value.converter.schemas.enable": true,
  "max.interval": 1000,
  "iterations": 10000000,
  "tasks.max": "1",
  "compatibility": "NONE",
  "auto.register.schemas": false,
  "use.latest.version": true
}
The Schema registered is:
{"schema": "{"type":"record","name":"Payment","namespace":"my.examples","fields":[{"name":"id","type":"long"},{"name":"quantity","type":"long"}, {"name":"productid","type":"long"}]}"}
Getting the following errors:
"id": 0,
"state": "FAILED",
"worker_id": "kafka-connect:8083",
"trace": "org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)\n\tat org
g.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data from topic inventories_un3 :\n\tat io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:93)\n\tat org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$3(WorkerSourceTask.java:329)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\n\t... 11 more\nCaused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message\n\tat io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:154)\n\tat io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:153)\n\tat io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:86)\n\t... 15 more\nCaused by: java.net.ConnectException: Connection refused (Connection refused)\n\tat java.base/java.net.PlainSocketImpl.socketConnect(Native Method)\n\tat java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)\n\tat java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)\n\tat java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)\n\tat java.base/java.net.Socket.connect(Socket.java:615)\n\tat java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)\n\tat java.base/sun.net.www.http.HttpClient.(HttpClient.java:242)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1258)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1192)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1086)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1020)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1372)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1347)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:268)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:367)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:544)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:532)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:490)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:257)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:366)

Filter condition not working in Kafka HTTP sink connector

I'm using the Confluent Kafka HTTP sink connector. I want to filter on this condition: if item.productID is 12, allow the event; otherwise ignore it. But my filter condition is not working:
"transforms.filterExample1.filter.condition": "$.[*][?(#.item.productID == '12')]",
What am I doing wrong here?
Could you please help me to fix this issue?
{
  "connector.class": "io.confluent.connect.http.HttpSinkConnector",
  "confluent.topic.bootstrap.servers": "localhost:9092",
  "topics": "http-messages",
  "tasks.max": "1",
  "http.api.url": "http://localhost:8080/api/messages",
  "reporter.bootstrap.servers": "localhost:9092",
  "transforms.filter.type": "org.apache.kafka.connect.transforms.Filter",
  "transforms": "filterExample1",
  "transforms.filterExample1.type": "io.confluent.connect.transforms.Filter$Value",
  "transforms.filterExample1.filter.condition": "$.[*][?(#.item.productID == '12')]",
  "transforms.filterExample1.filter.type": "include",
  "transforms.filterExample1.missing.or.null.behavior": "fail",
  "reporter.error.topic.name": "error-responses",
  "reporter.result.topic.name": "success-responses",
  "reporter.error.topic.replication.factor": "1",
  "confluent.topic.replication.factor": "1",
  "value.converter.schemas.enable": "false",
  "name": "HttpSink",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "reporter.result.topic.replication.factor": "1"
}
My Event
[
  {
    "name": "apple",
    "salary": "3243",
    "item": {
      "productID": "12"
    }
  }
]
Since your data is in a JSON array, this transform won't work.
I tested with local data using the latest version of that transform, and saw this log from the Connect server:
Caused by: org.apache.kafka.connect.errors.DataException: Only Map objects supported in absence of schema for [filtering record without schema], found: java.util.ArrayList
at io.confluent.connect.transforms.util.Requirements.requireMap(Requirements.java:30) ~[?:?]
at io.confluent.connect.transforms.Filter.shouldDrop(Filter.java:218) ~[?:?]
at io.confluent.connect.transforms.Filter.apply(Filter.java:161) ~[?:?]
"Map Objects" implying JSON objects
Also, you have a setting transforms.filter.type that's not doing anything
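For illustration, the same record sent as a single JSON object (not wrapped in an array) is the shape this transform can evaluate:
{
  "name": "apple",
  "salary": "3243",
  "item": {
    "productID": "12"
  }
}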

Kafka Connect with MySQL using Kafka version 2.12-2.4.1

I am completely new to Kafka and am trying to get data from MySQL using Kafka Connect. For that I have placed two jars, kafka-connect-jdbc-5.3.1 and mysql-connector-java-8.0.17, inside the libs folder of Kafka
(path: ....\kafka_2.12-2.4.1\libs). I am using Windows 10 with Java 8.
I followed this tutorial : https://www.youtube.com/watch?v=r7LUbtOFcQI
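For context, the relevant defaults in the shipped connect-distributed.properties look roughly like this (a sketch of the stock file, for reference only):
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# plugin.path is left commented out, so plugins are picked up from the classpath (the libs folder)
#plugin.path=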
Once I start the Connect worker, I get the issue below. Any solutions will be appreciated.
E:\2020\kafka_2.12-2.4.1>.\bin\windows\connect-distributed.bat .\config\connect-distributed.properties
[2020-04-30 17:46:49,773] WARN could not get type for name org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter from any class loader (org.reflections.Reflections)
org.reflections.ReflectionsException: could not get type for name org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
at org.reflections.Reflections.<init>(Reflections.java:126)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.<init>(DelegatingClassLoader.java:428)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:327)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:263)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:211)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:204)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
Caused by: java.lang.ClassNotFoundException: org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388)
... 10 more
[2020-04-30 17:46:49,837] WARN could not get type for name org.osgi.framework.BundleListener from any class loader (org.reflections.Reflections)
org.reflections.ReflectionsException: could not get type for name org.osgi.framework.BundleListener
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
at org.reflections.Reflections.<init>(Reflections.java:126)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.<init>(DelegatingClassLoader.java:428)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:327)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:263)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:211)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:204)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
Caused by: java.lang.ClassNotFoundException: org.osgi.framework.BundleListener
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388)
... 10 more
[2020-04-30 17:46:49,848] WARN could not get type for name io.netty.internal.tcnative.CertificateVerifier from any class loader (org.reflections.Reflections)
org.reflections.ReflectionsException: could not get type for name io.netty.internal.tcnative.CertificateVerifier
The POST request I am sending:
{
  "name": "jdbc_source_mysql_03",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:mysql://localhost:3306/user",
    "connection.user": "root",
    "connection.password": "root",
    "topic.prefix": "mysql-02-",
    "table.whitelist": "test",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "poll.interval.ms": "1000",
    "timestamp.column.name": "modified",
    "tasks.max": "2",
    "name": "jdbc_source_mysql_03"
  },
  "tasks": [],
  "type": "source"
}

Issue with Oracle JDBC Source Connector

We have an Oracle source from which we need to get data, and we are facing errors in both Avro and JSON formats.
Connector File
{
  "name": "LITERAL_VALUES",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
    "value.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
    "connection.user": "<user>",
    "connection.password": "<Password>",
    "tasks.max": "1",
    "connection.url": "jdbc:oracle:thin:#<server>:<Port>/<Schema>",
    "mode": "bulk",
    "topic.prefix": "LITERAL_VALUES",
    "batch.max.rows": 1000,
    "numeric.mapping": "best_fit",
    "query": "SELECT abc from xyz"
  }
}
Error while consuming with Avro format
DataException: Cannot deserialize type int64 as type float64
Error while consuming with JSON format
WARN task [0_0] Skipping record due to deserialization error. topic=[LITERAL_VALUES_JSON] partition=[0] offset=[12823] (org.apache.kafka.streams.processor.internals.RecordDeserializer:86)
org.apache.kafka.common.errors.SerializationException: KsqlJsonDeserializer failed to deserialize data for topic: LITERAL_VALUES_JSON
Caused by: java.io.CharConversionException: Invalid UTF-32 character 0xf01ae03 (above 0x0010ffff) at char #1, byte #7)
at com.fasterxml.jackson.core.io.UTF32Reader.reportInvalid(UTF32Reader.java:195)
at com.fasterxml.jackson.core.io.UTF32Reader.read(UTF32Reader.java:158)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._loadMore(ReaderBasedJsonParser.java:243)
I also tried to create a connector with the "table.whitelist" property and consume with KSQL:
Unable to verify the AVRO schema is compatible with KSQL. Subject not found. io.confluent.rest.exceptions.RestNotFoundException: Subject not found.
io.confluent.rest.exceptions.RestNotFoundException: Subject not found.
at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.subjectNotFoundException(Errors.java:50)
I checked the schema registered in the Schema Registry.
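I retrieved it with something like this (the registry host and port are assumptions):
curl -s http://localhost:8081/subjects/RAW-LITERAL_VALUES-value/versions/1 | jq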
{
"subject": "RAW-LITERAL_VALUES-value",
"version": 1,
"id": 16,
"schema": "{\"type\":\"record\",\"name\":\"LITERAL_VALUES\",\"fields\":[{\"name\":\"LITERAL_ID\",\"type\":[\"null\",{\"type\":\"bytes\",\"scale\":127,\"precision\":64,\"connect.version\":1,\"connect.parameters\":{\"scale\":\"127\"},\"connect.name\":\"org.apache.kafka.connect.data.Decimal\",\"logicalType\":\"decimal\"}],\"default\":null},{\"name\":\"LITERAL_NAME\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"LITERAL_VALUE\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"SOURCE_SYSTEM_ID\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"SOURCE_SYSTEM_INSTANCE_ID\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"EFF_STRT_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"EFF_END_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"STRT_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"END_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"CRTD_BY\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"CRTD_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"LST_UPD_BY\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"LST_UPD_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null}],\"connect.name\":\"LITERAL_VALUES\"}"
}
Any help is highly appreciated.

Replication slot already exists

Whenever I restart the debezium kafka-connect container, or deploy another instance, I get the following error:
io.debezium.jdbc.JdbcConnectionException: ERROR: replication slot "debezium" already exists
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:136)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:79)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:38)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$ReplicationConnectionBuilder.build(PostgresReplicationConnection.java:349)
at io.debezium.connector.postgresql.PostgresTaskContext.createReplicationConnection(PostgresTaskContext.java:80)
at io.debezium.connector.postgresql.RecordsStreamProducer.<init>(RecordsStreamProducer.java:75)
at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:112)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:157)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: ERROR: replication slot "debezium" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2412)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2125)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:297)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:301)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:287)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:260)
at org.postgresql.replication.fluent.logical.LogicalCreateSlotBuilder.make(LogicalCreateSlotBuilder.java:48)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:102)
... 14 more
I'm using this image: https://github.com/debezium/docker-images/tree/master/connect/0.8
And I have a config for it like this:
{
  "name": "record-loader-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.dbname": "record_loader?ssl",
    "database.user": "postgres",
    "database.hostname": "redacted",
    "database.history.kafka.bootstrap.servers": "redacted",
    "database.history.kafka.topic": "dbhistory.recordloader",
    "database.password": "redacted",
    "name": "record-loader-connector",
    "database.server.name": "recordLoaderDb",
    "database.port": "20023",
    "table.whitelist": ".*sync"
  },
  "tasks": [
    {
      "connector": "record-loader-connector",
      "task": 0
    }
  ],
  "type": "source"
}
I've noticed these two config options (slot.name and slot.drop_on_stop), but it is not clear to me if/how I should change them:
http://debezium.io/docs/connectors/postgresql/#connector-properties
If you deploy multiple instances of the Debezium Postgres connector, you must make sure to use distinct replication slot names. You can specify a name when setting up the connector:
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "postgres",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory",
    "slot.name": "my-slot-name"
  }
}
I can't reproduce the issue you describe when restarting a given connector instance. It should detect that the slot already exists and re-use it (one possible cause may be that you also changed the logical decoding plug-in, "decoderbufs" vs. "wal2json"?). If you have a reproducer for this, could you please open an entry in our bug tracker?
To proceed, you can manually delete the slot in Postgres:
select pg_drop_replication_slot('debezium');
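Note that pg_drop_replication_slot() will refuse to drop a slot that is still marked active (i.e. a walsender/connector is still attached to it). To check which slots exist first, a minimal query against the catalog view:
select slot_name, plugin, active from pg_replication_slots;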