Issue with Oracle JDBC Source Connector

We have an Oracle source we need to pull data from, and we are hitting errors in both Avro and JSON format.
Connector file:
{
  "name": "LITERAL_VALUES",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "key.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
    "value.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
    "connection.user": "<user>",
    "connection.password": "<Password>",
    "tasks.max": "1",
    "connection.url": "jdbc:oracle:thin:@<server>:<Port>/<Schema>",
    "mode": "bulk",
    "topic.prefix": "LITERAL_VALUES",
    "batch.max.rows": 1000,
    "numeric.mapping": "best_fit",
    "query": "SELECT abc from xyz"
  }
}
Error while consuming in Avro format:
DataException: Cannot deserialize type int64 as type float64
Error while consuming in JSON format:
WARN task [0_0] Skipping record due to deserialization error. topic=[LITERAL_VALUES_JSON] partition=[0] offset=[12823] (org.apache.kafka.streams.processor.internals.RecordDeserializer:86)
org.apache.kafka.common.errors.SerializationException: KsqlJsonDeserializer failed to deserialize data for topic: LITERAL_VALUES_JSON
Caused by: java.io.CharConversionException: Invalid UTF-32 character 0xf01ae03 (above 0x0010ffff) at char #1, byte #7)
at com.fasterxml.jackson.core.io.UTF32Reader.reportInvalid(UTF32Reader.java:195)
at com.fasterxml.jackson.core.io.UTF32Reader.read(UTF32Reader.java:158)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._loadMore(ReaderBasedJsonParser.java:243)
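This CharConversionException is typically what Jackson throws when it is handed Avro bytes in the Confluent Schema Registry wire format (a 0x00 magic byte followed by a 4-byte schema id) and tries to auto-detect them as UTF-32 JSON. A quick check, assuming a local broker and registry on the default ports, is to read the topic with the Avro console consumer instead:

kafka-avro-console-consumer --bootstrap-server localhost:9092 \
  --topic LITERAL_VALUES \
  --from-beginning \
  --property schema.registry.url=http://localhost:8081

If that decodes cleanly, the records are Avro, and a JSON deserializer will never be able to read them.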
We also tried creating the connector with the "table.whitelist" property and consuming with KSQL, but got:
Unable to verify the AVRO schema is compatible with KSQL. Subject not found.
io.confluent.rest.exceptions.RestNotFoundException: Subject not found.
at io.confluent.kafka.schemaregistry.rest.exceptions.Errors.subjectNotFoundException(Errors.java:50)
Checked the registered schema through the Schema Registry REST API:
{
"subject": "RAW-LITERAL_VALUES-value",
"version": 1,
"id": 16,
"schema": "{\"type\":\"record\",\"name\":\"LITERAL_VALUES\",\"fields\":[{\"name\":\"LITERAL_ID\",\"type\":[\"null\",{\"type\":\"bytes\",\"scale\":127,\"precision\":64,\"connect.version\":1,\"connect.parameters\":{\"scale\":\"127\"},\"connect.name\":\"org.apache.kafka.connect.data.Decimal\",\"logicalType\":\"decimal\"}],\"default\":null},{\"name\":\"LITERAL_NAME\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"LITERAL_VALUE\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"SOURCE_SYSTEM_ID\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"SOURCE_SYSTEM_INSTANCE_ID\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"EFF_STRT_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"EFF_END_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"STRT_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"END_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"CRTD_BY\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"CRTD_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null},{\"name\":\"LST_UPD_BY\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"LST_UPD_DT\",\"type\":[\"null\",{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}],\"default\":null}],\"connect.name\":\"LITERAL_VALUES\"}"
}
Any help is highly appreciated.
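A few notes on the configuration above. First, key.serializer and value.serializer are producer properties that Connect ignores; connectors take converters instead. A minimal sketch of the converter lines, assuming a Confluent Schema Registry at a placeholder address:

"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "http://<schema-registry>:8081",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://<schema-registry>:8081"

Second, the registered schema shows LITERAL_ID as a Connect Decimal with scale 127, which is how the JDBC source represents an Oracle NUMBER with no declared precision; "numeric.mapping": "best_fit" only applies when the column declares a precision and scale. One workaround, offered as a sketch rather than a confirmed fix, is to cast to a sized type in the query, e.g. "query": "SELECT CAST(LITERAL_ID AS NUMBER(9,0)) LITERAL_ID FROM xyz". Third, the subject is registered as RAW-LITERAL_VALUES-value, while KSQL looks up <topic>-value for the topic it reads, which lines up with the "Subject not found" error; the subjects actually registered can be listed with:

curl -s http://<schema-registry>:8081/subjects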

Related

Error in MSK Kafka Connect sink connector due to Schema Registry

I currently have the following data in my Kafka topic:
{"tran_slip":"00002060","tran_amount":"111.22"}
{"tran_slip":"00000005","tran_amount":"123"}
{"tran_slip":"00000006","tran_amount":"123"}
{"tran_slip":"00000007","tran_amount":"123"}
Since the data in my Kafka topic does not have a schema, I figured I can force a schema using AWS Glue Schema Registry.
So I created an Avro Schema in the following manner:
{
  "type": "record",
  "namespace": "int_trans",
  "name": "transaction",
  "fields": [
    {
      "name": "tran_slip",
      "type": "string"
    },
    {
      "name": "tran_amount",
      "type": "string"
    }
  ]
}
Now I created a Confluent sink connector on MSK Kafka Connect to sink data from the Kafka topic back to an Oracle DB with the properties below:
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
value.converter.schemaAutoRegistrationEnabled=true
connection.password=******
transforms.extractKeyFromStruct.type=org.apache.kafka.connect.transforms.ExtractField$Key
tasks.max=1
key.converter.region=*******
transforms=RenameField
key.converter.schemaName=KeySchema
value.converter.avroRecordType=GENERIC_RECORD
internal.key.converter.schemas.enable=false
value.converter.schemaName=ValueSchema
auto.evolve=false
transforms.RenameField.type=org.apache.kafka.connect.transforms.ReplaceField$Value
key.converter.avroRecordType=GENERIC_RECORD
value.converter=com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter
insert.mode=upsert
key.converter=org.apache.kafka.connect.storage.StringConverter
transforms.RenameField.renames=tran_slip:TRAN_SLIP, tran_amount:TRAN_AMOUNT
table.name.format=abc.transactions_sink
topics=aws-db.abc.transactions
batch.size=1
value.converter.registry.name=registry_transactions
value.converter.region=*****
key.converter.registry.name=registry_transactions
key.converter.schemas.enable=false
internal.key.converter=com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter
delete.enabled=false
key.converter.schemaAutoRegistrationEnabled=true
connection.user=*******
internal.value.converter.schemas.enable=false
value.converter.schemas.enable=true
internal.value.converter=com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter
auto.create=false
connection.url=*********
pk.mode=record_value
pk.fields=tran_slip
With these settings I keep getting the following error:
org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter.toConnectData(AWSKafkaAvroConverter.java:118)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertValue(WorkerSinkTask.java:545)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:501)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:501)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:478)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:328)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:238)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.amazonaws.services.schemaregistry.exception.AWSSchemaRegistryException: Didn't find secondary deserializer.
at com.amazonaws.services.schemaregistry.deserializers.SecondaryDeserializer.deserialize(SecondaryDeserializer.java:65)
at com.amazonaws.services.schemaregistry.deserializers.avro.AWSKafkaAvroDeserializer.deserializeByHeaderVersionByte(AWSKafkaAvroDeserializer.java:150)
at com.amazonaws.services.schemaregistry.deserializers.avro.AWSKafkaAvroDeserializer.deserialize(AWSKafkaAvroDeserializer.java:114)
at com.amazonaws.services.schemaregistry.kafkaconnect.AWSKafkaAvroConverter.toConnectData(AWSKafkaAvroConverter.java:116)
Can someone please guide me on what I am doing wrong in the configurations since I'm rather new to this topic?
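Reading the stack trace, deserializeByHeaderVersionByte fell through to a secondary deserializer that was never configured, which means the record bytes did not carry the Glue serializer's header: the sample records above are plain JSON, and registering a schema in Glue does not retroactively re-encode data already in the topic. For AWSKafkaAvroConverter to read the topic, the records would have to be produced with the matching Glue Avro serializer, along these lines on the producer side (class name inferred from the same aws-glue-schema-registry library as the converter in the trace, so verify it against your version):

value.serializer=com.amazonaws.services.schemaregistry.serializers.avro.AWSKafkaAvroSerializer

The library also offers a secondary-deserializer setting for topics that mix in records written by another serializer. Note too that the JDBC sink needs schema'd records in general, so schemaless JSON would have to be re-produced with a schema one way or another.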

Error in Confluent Kafka Source Connector Tasks (DatagenConnector) Data Serialization into Avro Format

I am trying to produce data with the Confluent Kafka DatagenConnector, converting the value into Avro format using the Confluent Schema Registry. The configuration I am using for creating the source connector is:
{
  "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
  "kafka.topic": "inventories_un3",
  "quickstart": "inventory",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "key.converter.schemas.enable": false,
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://schema-registry:8083",
  "value.converter.schemas.enable": true,
  "max.interval": 1000,
  "iterations": 10000000,
  "tasks.max": "1",
  "compatibility": "NONE",
  "auto.register.schemas": false,
  "use.latest.version": true
}
The schema registered is:
{
  "schema": "{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"my.examples\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"quantity\",\"type\":\"long\"},{\"name\":\"productid\",\"type\":\"long\"}]}"
}
Getting the following error in the connector task status:
"id": 0,
"state": "FAILED",
"worker_id": "kafka-connect:8083"
The trace, unescaped for readability:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.DataException: Failed to serialize Avro data from topic inventories_un3 :
    at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:93)
    at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$3(WorkerSourceTask.java:329)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
    ... 11 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:154)
    at io.confluent.connect.avro.AvroConverter$Serializer.serialize(AvroConverter.java:153)
    at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:86)
    ... 15 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
    at java.base/java.net.Socket.connect(Socket.java:615)
    at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
    at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
    at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
    at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
    at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
    at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1258)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1192)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1086)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1020)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1372)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1347)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:268)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:367)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:544)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:532)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:490)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:257)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:366)
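The innermost cause is a plain Connection refused while registering a schema, and the converter above points at http://schema-registry:8083. Port 8083 is normally the Kafka Connect REST port (note worker_id kafka-connect:8083 in the status above), while Schema Registry listens on 8081 by default. A hedged fix, assuming default ports in this stack:

"value.converter.schema.registry.url": "http://schema-registry:8081"

Reachability from the Connect container can be checked with:

curl -s http://schema-registry:8081/subjects

Separately, with "auto.register.schemas": false and "use.latest.version": true, the serializer will look for the latest schema under the subject inventories_un3-value; the Payment schema shown above would need to be registered under that subject and be compatible with what the Datagen inventory quickstart actually produces, which is worth double-checking once the connection error is resolved.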

Filter condition not working in kafka http sink connector

I'm using the Confluent Kafka HTTP sink connector, and I want to filter on this condition: if item.productID == 12, allow the event, otherwise drop it. But my filter condition is not working:
"transforms.filterExample1.filter.condition": "$.[*][?(#.item.productID == '12')]",
What am I doing wrong here? Could you please help me fix this issue?
{
  "connector.class": "io.confluent.connect.http.HttpSinkConnector",
  "confluent.topic.bootstrap.servers": "localhost:9092",
  "topics": "http-messages",
  "tasks.max": "1",
  "http.api.url": "http://localhost:8080/api/messages",
  "reporter.bootstrap.servers": "localhost:9092",
  "transforms.filter.type": "org.apache.kafka.connect.transforms.Filter",
  "transforms": "filterExample1",
  "transforms.filterExample1.type": "io.confluent.connect.transforms.Filter$Value",
  "transforms.filterExample1.filter.condition": "$.[*][?(#.item.productID == '12')]",
  "transforms.filterExample1.filter.type": "include",
  "transforms.filterExample1.missing.or.null.behavior": "fail",
  "reporter.error.topic.name": "error-responses",
  "reporter.result.topic.name": "success-responses",
  "reporter.error.topic.replication.factor": "1",
  "confluent.topic.replication.factor": "1",
  "value.converter.schemas.enable": "false",
  "name": "HttpSink",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "reporter.result.topic.replication.factor": "1"
}
My event:
[
  {
    "name": "apple",
    "salary": "3243",
    "item": {
      "productID": "12"
    }
  }
]
Since your data is in a JSON array, this transform won't work.
Tested with local data using the latest version of that transform, and saw this log from the Connect server:
Caused by: org.apache.kafka.connect.errors.DataException: Only Map objects supported in absence of schema for [filtering record without schema], found: java.util.ArrayList
at io.confluent.connect.transforms.util.Requirements.requireMap(Requirements.java:30) ~[?:?]
at io.confluent.connect.transforms.Filter.shouldDrop(Filter.java:218) ~[?:?]
at io.confluent.connect.transforms.Filter.apply(Filter.java:161) ~[?:?]
"Map Objects" implying JSON objects
Also, you have a setting transforms.filter.type that's not doing anything
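For what it's worth, if each record were a single JSON object rather than an array, the usual JsonPath form for the condition would reference the current node with @ rather than #; a sketch, to be verified against the Confluent Filter SMT documentation for your version:

"transforms.filterExample1.filter.condition": "$[?(@.item.productID == '12')]"

with the event reshaped as a plain object:

{
  "name": "apple",
  "salary": "3243",
  "item": {
    "productID": "12"
  }
}

That shape gives the transform the Map object it requires.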

Kafka Schemaregistry Protobuf Unsupported root schema of type STRING

I am using Kafka Connect with the Google PubSub Connector to write messages from GCP Pub/Sub into Kafka topics.
My connector has the following configuration:
{
  "name": "MyTopicSourceConnector",
  "config": {
    "connector.class": "com.google.pubsub.kafka.source.CloudPubSubSourceConnector",
    "tasks.max": "10",
    "kafka.topic": "myTopic",
    "cps.project": "my-project-id",
    "cps.subscription": "myTopic-sub",
    "name": "MyTopicSourceConnector",
    "key.converter": "io.confluent.connect.protobuf.ProtobufConverter",
    "key.converter.schema.registry.url": "http://myurl-schema-registry:8081",
    "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
    "value.converter.schema.registry.url": "http://myurl-schema-registry:8081"
  }
}
The proto message value schema looks like this:
syntax = "proto3";
message value_myTopic {
bytes message = 1;
string notificationConfig = 2;
string eventTime = 3;
string bucketId = 4;
string payloadFormat = 5;
string eventType = 6;
string objectId = 7;
string objectGeneration = 8;
}
This setup works when I use Avro or JSON (with the appropriate converters), but with Protobuf the connector throws the following error right after deployment and fails:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:292)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:245)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Unsupported root schema of type STRING
at io.confluent.connect.protobuf.ProtobufData.rawSchemaFromConnectSchema(ProtobufData.java:315)
at io.confluent.connect.protobuf.ProtobufData.fromConnectSchema(ProtobufData.java:304)
at io.confluent.connect.protobuf.ProtobufData.fromConnectData(ProtobufData.java:109)
at io.confluent.connect.protobuf.ProtobufConverter.fromConnectData(ProtobufConverter.java:83)
at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$1(WorkerSourceTask.java:292)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more
I don't know why you have this issue.
In case you are interested, instead of the Confluent Protobuf converter there is a community one from Blue Apron, and it just requires one extra config entry to specify which Protobuf (de)serialization class to use.
An example from Snowflake using the two converters can be seen here.
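For reference, a sketch of what the Blue Apron converter configuration looks like, going by its README (the protoClassName value below is a hypothetical compiled Java class generated from the .proto above, not something from the original question):

"value.converter": "com.blueapron.connect.protobuf.ProtobufConverter",
"value.converter.protoClassName": "com.example.protos.ValueMyTopic"

Unlike the Confluent converter, it does not talk to Schema Registry; the schema comes from the compiled Protobuf class on the worker's plugin path.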

Kafka Connect JDBC Sink Connector: Class Loader Not Found

Using the Confluent Docker all-in-one package with tag 5.4.1, I am struggling to get a JDBC sink connector up and running.
When I launch the following connector:
{
  "name": "mySink",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "topics": [
    "myTopic"
  ],
  "connection.url": "jdbc:sqlserver://sqlserver:1433;databaseName=myDB",
  "connection.user": "user",
  "connection.password": "**********",
  "dialect.name": "SqlServerDatabaseDialect",
  "insert.mode": "insert",
  "table.name.format": "TableSink",
  "pk.mode": "kafka",
  "fields.whitelist": [
    "offset",
    "value"
  ],
  "auto.create": "true"
}
(Some attributes edited)
I get the following exception from the connect container:
ERROR Plugin class loader for connector: 'io.confluent.connect.jdbc.JdbcSinkConnector' was not found.
I have the following environment variable for connect:
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
And if I check in the container I have:
/usr/share/java/kafka-connect-jdbc
With the following files:
./jtds-1.3.1.jar
./common-utils-5.4.0.jar
./slf4j-api-1.7.26.jar
./kafka-connect-jdbc-5.4.0.jar
./postgresql-9.4.1212.jar
./sqlite-jdbc-3.25.2.jar
./mssql-jdbc-8.2.0.jre8.jar
./mssql-jdbc-8.2.0.jre13.jar
./mssql-jdbc-8.2.0.jre11.jar
The only changes I have made from the base Connect image are the MSSQL JDBC drivers. These are working fine for a JDBC source connector.
Extra information as requested:
Output from curl -s localhost:8083/connector-plugins | jq '.[].class':
"io.confluent.connect.activemq.ActiveMQSourceConnector"
"io.confluent.connect.elasticsearch.ElasticsearchSinkConnector"
"io.confluent.connect.gcs.GcsSinkConnector"
"io.confluent.connect.ibm.mq.IbmMQSourceConnector"
"io.confluent.connect.jdbc.JdbcSinkConnector"
"io.confluent.connect.jdbc.JdbcSourceConnector"
"io.confluent.connect.jms.JmsSourceConnector"
"io.confluent.connect.s3.S3SinkConnector"
"io.confluent.connect.storage.tools.SchemaSourceConnector"
"io.confluent.kafka.connect.datagen.DatagenConnector"
"org.apache.kafka.connect.file.FileStreamSinkConnector"
"org.apache.kafka.connect.file.FileStreamSourceConnector"
"org.apache.kafka.connect.mirror.MirrorCheckpointConnector"
"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector"
"org.apache.kafka.connect.mirror.MirrorSourceConnector"
docker-compose from:
https://github.com/confluentinc/examples/tree/5.4.1-post/cp-all-in-one
Images:
confluentinc/cp-ksql-cli:5.4.1
confluentinc/cp-enterprise-control-center:5.4.1
confluentinc/cp-ksql-server:5.4.1
cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
confluentinc/cp-kafka-rest:5.4.1
confluentinc/cp-schema-registry:5.4.1
confluentinc/cp-server:5.4.1
confluentinc/cp-zookeeper:5.4.1
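Two hedged observations on the connector config above. First, the "Plugin class loader ... was not found" message is known to appear even when the connector resolves through Connect's DelegatingClassLoader (and the plugin clearly loads, since JdbcSinkConnector shows up in /connector-plugins), so it may be masking a different failure. Second, Connect REST configuration values must be strings, so the JSON arrays used for topics and fields.whitelist may not parse as intended; as comma-separated strings they would be:

"topics": "myTopic",
"fields.whitelist": "offset,value"

Also note that when POSTing to /connectors, everything except "name" normally nests under a "config" object; the flat map shown is the shape accepted by PUT /connectors/<name>/config.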