Confluent Replicator Failed to Reconfigure Connector's Tasks? - apache-kafka

I have used MirrorMaker in the past but not Replicator, and I am getting an error but am not sure where to start debugging it.
Here is the error:
[2019-08-12 18:04:09,672] ERROR Failed to reconfigure connector's
tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:958)
org.apache.kafka.connect.errors.ConnectException: Could not obtain timely topic metadata update from source cluster
at io.confluent.connect.replicator.TopicMonitorThreadWithZk.assignments(TopicMonitorThreadWithZk.java:138)
at io.confluent.connect.replicator.ReplicatorSourceConnector.taskConfigs(ReplicatorSourceConnector.java:99)
at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:317)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:997)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:950)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:914)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:110)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:924)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:920)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
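The failing frame (io.confluent.connect.replicator.TopicMonitorThreadWithZk.assignments) is where Replicator polls the source cluster for topic metadata, so the source-cluster connection settings in the connector config are a natural first place to look. A minimal Replicator source connector sketch for orientation; the hostnames, topic name, and task count are placeholders, not values from the question:
name=replicator-source
connector.class=io.confluent.connect.replicator.ReplicatorSourceConnector
tasks.max=4
topic.whitelist=my-topic
key.converter=io.confluent.connect.replicator.util.ByteArrayConverter
value.converter=io.confluent.connect.replicator.util.ByteArrayConverter
src.kafka.bootstrap.servers=source-broker:9092
src.zookeeper.connect=source-zookeeper:2181
dest.kafka.bootstrap.servers=dest-broker:9092
dest.zookeeper.connect=dest-zookeeper:2181
If the Connect worker cannot reach the source cluster (src.zookeeper.connect / src.kafka.bootstrap.servers) within the metadata timeout, an error of this shape is plausible.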

Related

Flink job fails continuously after Kafka topic partition reassignment

Environment:
kafka 1.0.1
flink 1.7.1
Problem:
I use a topic with 200 partitions, and Flink consumes from this topic.
Recently, I did a manual partition reassignment.
When I reassigned the partitions, Flink continuously fails with the errors below.
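For context, a manual reassignment like this is usually driven with the kafka-reassign-partitions tool; a sketch, assuming a hypothetical plan file named reassignment-plan.json:
./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassignment-plan.json --execute
./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassignment-plan.json --verify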
Error 1:
[2021-07-28 18:21:15,926] WARN Attempting to send response via channel for which there is no open connection, connection id ..(kafka.network.Processor)
Error 2:
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for -126: 30042 ms has passed since batch creation plus linger time
Error 3:
java.lang.Exception: Error while triggering checkpoint 656 for Source: Custom Source -> Sink: ... (32/200)
at org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1174)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not perform checkpoint 656 for operator Source: Custom Source -> Sink: ... (32/200).
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:570)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:116)
at org.apache.flink.runtime.taskmanager.Task$1.run(Task.java:1163)
... 5 more
Caused by: java.lang.Exception: Could not complete snapshot 656 for operator Source: Custom Source -> Sink: ... (32/200).
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:422)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1113)
at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1055)
at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:729)
at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:641)
at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:564)
... 7 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for ...-86: 30049 ms has passed since batch creation plus linger time
And when I restarted the failed job, this error occurred continuously:
ClassLoader info: URL ClassLoader:
file: '/blobStore-29c572a3-4ed4-48a6-b604-d93b7e4a9a10/job_8bd41a7e0690e75bd61d148d89dca963/blob_p-5c10d03a5cbb09c9a9459f1bc2a70804d0b08290-26b5562cbe83b0403b06717637e7ab47' (invalid JAR: /blobStore-29c572a3-4ed4-48a6-b604-d93b7e4a9a10/job_8bd41a7e0690e75bd61d148d89dca963/blob_p-5c10d03a5cbb09c9a9459f1bc2a70804d0b08290-26b5562cbe83b0403b06717637e7ab47 (Too many open files))
Class not resolvable through given classloader.
So I restarted the whole Mesos and Flink cluster and cleared ZooKeeper.
Is there any cause I should be looking for?
There were network issues with certain brokers in the cluster.
If a request for a specific partition is processed slowly because of a network issue, this message is expected.
The job consuming that partition then stops working properly, and the Flink checkpoint failures follow.
The problem was solved by replacing the affected broker's hardware.

Getting so many exceptions while running the Kafka Connect worker

Getting so many exceptions while running the Kafka Connect worker.
I have set all the worker properties, and all of the JAR paths look fine.
The exceptions are below:
2020-07-23 18:41:58 WARN Reflections:104 - could not create Dir
using jarFile from url
file:/kafka/bin/../clients/build/libs/kafka-clients*.jar. skipping.
java.lang.NullPointerException
at java.util.zip.ZipFile.<init>(ZipFile.java:213)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:130)
at org.reflections.vfs.Vfs$DefaultUrlTypes$1.createDir(Vfs.java:216)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:99)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:240)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.scan(DelegatingClassLoader.java:373)
at org.reflections.Reflections$1.run(Reflections.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-07-23 18:41:58 WARN Reflections:377 - could not create Vfs.Dir from url. ignoring the exception and continuing
org.reflections.ReflectionsException: Could not open url connection
at org.reflections.vfs.JarInputDir$1$1.<init>(JarInputDir.java:37)
at org.reflections.vfs.JarInputDir$1.iterator(JarInputDir.java:33)
at org.reflections.Reflections.scan(Reflections.java:243)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.scan(DelegatingClassLoader.java:373)
at org.reflections.Reflections$1.run(Reflections.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: /kafka/bin/../clients/build/libs/kafka-clients*.jar (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
(full log: kafkaconnectsladev.log)
If you are seeing WARN Reflections, then this is a warning, not an error, and it is safe to ignore.
You can edit the log4j.properties file to silence the warnings if you want; with the Confluent Docker images, this is done via the CONNECT_LOG4J_LOGGERS variable, as sketched below.
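For example, something along these lines should quiet the Reflections scanner (the exact level is a matter of taste):
# in the Connect worker's log4j.properties
log4j.logger.org.reflections=ERROR
# or, when using the Confluent Docker images, as an environment variable
CONNECT_LOG4J_LOGGERS=org.reflections=ERROR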
Thanks guys; yes, these are all warnings, and with a newer version of the Kafka client library I am not seeing them.

Consume an array of JSON in Avro format and store in S3

I have a JSON array in the following form:
[{"a":74,"b":1519202998533,"c":"Shipped","d":7318},{"a":11,"b":1519202998546,"c":"Shipped","d":40481}]
I have created the topic ord_avro_multiple, for which I have defined the Avro schema as follows:
./bin/kafka-avro-console-producer --broker-list localhost:9092 --topic ord_avro_multiple --property value.schema='{"type":"array","name":"arrayjson","items":{"type":"record","name":"arraysjs","fields":[{"name":"a","type":"int"},{"name":"b","type":"long"},{"name":"c","type":"string"},{"name":"d","type":"int"}]}}'
When I used kafka-connect-s3 to write it to the S3 bucket, it gave me the following error:
[2018-02-21 09:02:51,700] ERROR Task s3-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
org.apache.kafka.connect.errors.ConnectException: com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class org.apache.kafka.connect.data.Struct and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: java.util.ArrayList[0])
at io.confluent.connect.s3.format.json.JsonRecordWriterProvider$1.write(JsonRecordWriterProvider.java:81)
at io.confluent.connect.s3.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:402)
at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:204)
at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class org.apache.kafka.connect.data.Struct and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: java.util.ArrayList[0])
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:284)
at com.fasterxml.jackson.databind.SerializerProvider.mappingException(SerializerProvider.java:1110)
at com.fasterxml.jackson.databind.SerializerProvider.reportMappingProblem(SerializerProvider.java:1135)
at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.failForEmpty(UnknownSerializer.java:69)
at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.serialize(UnknownSerializer.java:32)
at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:119)
at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:79)
at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:18)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:292)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:2493)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:378)
at io.confluent.connect.s3.format.json.JsonRecordWriterProvider$1.write(JsonRecordWriterProvider.java:77)
... 14 more
[2018-02-21 09:02:51,704] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2018-02-21 09:02:51,705] ERROR Task s3-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
How can I solve this issue?
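For reference, the stack trace goes through io.confluent.connect.s3.format.json.JsonRecordWriterProvider, which points at a sink configuration roughly like the sketch below; the connector name, bucket, region, and flush size are placeholders rather than values from the question:
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=ord_avro_multiple
s3.bucket.name=my-bucket
s3.region=us-east-1
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE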

Kafka Couchbase sink connector getting disconnected after dumping some records

I've installed Confluent 3.3.0 and started ZooKeeper, Schema Registry, and the Kafka broker.
I downloaded the Couchbase connector from the link below:
https://github.com/couchbase/kafka-connect-couchbase
I am running the sink connector using the command below:
./bin/connect-standalone etc/kafka/connect-standalone.properties /home/nayangiri/couch-connect-test/kafka-connect-couchbase/config/quickstart-couchbase-sink.properties
After starting the connector, I begin publishing JSON using the kafka-python library.
The problem is that the connector disconnects without dumping all of the published messages, failing with the error below:
[2017-11-07 20:12:39,815] WARN This transcoder (JsonBinaryTranscoder) does not support mutation tokens - this method is a stub and needs to be implemented on custom transcoders. (com.couchbase.client.java.transcoder.AbstractTranscoder:150)
[2017-11-07 20:12:44,821] WARN This transcoder (JsonBinaryTranscoder) does not support mutation tokens - this method is a stub and needs to be implemented on custom transcoders. (com.couchbase.client.java.transcoder.AbstractTranscoder:150)
[2017-11-07 20:12:44,821] WARN This transcoder (JsonBinaryTranscoder) does not support mutation tokens - this method is a stub and needs to be implemented on custom transcoders. (com.couchbase.client.java.transcoder.AbstractTranscoder:150)
[2017-11-07 20:12:44,823] ERROR Task test-couchbase-sink-1 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
com.couchbase.client.java.error.CannotRetryException: maximum number of attempts reached after 5 retries
at com.couchbase.client.java.util.retry.RetryWithDelayHandler.call(RetryWithDelayHandler.java:101)
at com.couchbase.client.java.util.retry.RetryWithDelayHandler.call(RetryWithDelayHandler.java:42)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
at rx.internal.operators.OperatorZip$Zip.tick(OperatorZip.java:252)
at rx.internal.operators.OperatorZip$Zip$InnerSubscriber.onNext(OperatorZip.java:323)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77)
at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:302)
at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:284)
at rx.internal.operators.NotificationLite.accept(NotificationLite.java:135)
at rx.subjects.SubjectSubscriptionManager$SubjectObserver.emitNext(SubjectSubscriptionManager.java:253)
at rx.subjects.BehaviorSubject.onNext(BehaviorSubject.java:160)
at rx.observers.SerializedObserver.onNext(SerializedObserver.java:91)
at rx.subjects.SerializedSubject.onNext(SerializedSubject.java:67)
at rx.internal.operators.OnSubscribeRedo$2$1.onError(OnSubscribeRedo.java:237)
at rx.internal.operators.OperatorMerge$MergeSubscriber.reportError(OperatorMerge.java:266)
at rx.internal.operators.OperatorMerge$MergeSubscriber.checkTerminate(OperatorMerge.java:818)
at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:579)
at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
at rx.internal.operators.OperatorMerge$InnerSubscriber.onError(OperatorMerge.java:852)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onError(OnSubscribeMap.java:88)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:73)
at rx.observers.Subscribers$5.onNext(Subscribers.java:235)
at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onNext(OnSubscribeDoOnEach.java:101)
at rx.internal.producers.SingleProducer.request(SingleProducer.java:65)
at rx.Subscriber.setProducer(Subscriber.java:211)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.subjects.AsyncSubject.onCompleted(AsyncSubject.java:103)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.completeResponse(AbstractGenericHandler.java:390)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.access$000(AbstractGenericHandler.java:72)
at com.couchbase.client.core.endpoint.AbstractGenericHandler$1.call(AbstractGenericHandler.java:408)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.UnsupportedOperationException
at com.couchbase.connect.kafka.util.JsonBinaryTranscoder.newDocument(JsonBinaryTranscoder.java:40)
at com.couchbase.connect.kafka.util.JsonBinaryTranscoder.newDocument(JsonBinaryTranscoder.java:30)
at com.couchbase.client.java.transcoder.AbstractTranscoder.newDocument(AbstractTranscoder.java:133)
at com.couchbase.client.java.CouchbaseAsyncBucket$16.call(CouchbaseAsyncBucket.java:568)
at com.couchbase.client.java.CouchbaseAsyncBucket$16.call(CouchbaseAsyncBucket.java:560)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
... 19 more
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: com.couchbase.client.core.message.kv.UpsertResponse.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(OnErrorThrowable.java:118)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:73)
... 19 more
[2017-11-07 20:12:44,830] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2017-11-07 20:12:44,830] ERROR Task test-couchbase-sink-1 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2017-11-07 20:12:44,831] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
[2017-11-07 20:12:44,836] INFO Closed bucket test (com.couchbase.client.core.config.ConfigurationProvider:115)
[2017-11-07 20:12:44,836] INFO Disconnected from Node 10.103.2.76/localhost (com.couchbase.client.core.node.Node:115)
[2017-11-07 20:12:44,839] INFO [null][KeyValueEndpoint]: Got notified from Channel as inactive, attempting reconnect. (com.couchbase.client.core.endpoint.Endpoint:115)
Thank you for reading.
Thanks for raising this issue. This is a regression in version 3.2.0 of the connector. It is being tracked as KAFKAC-83.
The fix is included in version 3.2.1 (originally scheduled for release on November 21, 2017; released on November 8, 2017).
In the meantime, you may wish to temporarily downgrade to version 3.1.3, or build the connector from the latest source code.
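If you go the build-from-source route, the repository is a Maven project, so a sketch along these lines should work (assuming Git and Maven are installed):
git clone https://github.com/couchbase/kafka-connect-couchbase.git
cd kafka-connect-couchbase
mvn clean package
# the packaged connector ends up under target/; put it on the Connect worker's plugin path or classpath as usual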
PSA: The Couchbase forums have a dedicated section for discussion related to the Kafka connector.

DataException: Converting byte[] to Kafka Connect data failed due to serialization error

I am using Kafka Connect to persist data from the Kafka broker to Elasticsearch on the Confluent Platform.
I have written a SinkConnector to persist data to Elasticsearch.
The connect-avro-standalone.properties configuration is:
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
But when I push valid JSON from the Avro producer onto Kafka, it results in the exception below:
[2016-10-05 15:02:25,225] ERROR Task kafka-elasticsearch-sink1-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:328)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:356)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: java.lang.ArrayIndexOutOfBoundsException: 6
Caused by: java.lang.ArrayIndexOutOfBoundsException: 6
at com.fasterxml.jackson.core.io.UTF32Reader.read(UTF32Reader.java:138)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.loadMore(ReaderBasedJsonParser.java:153)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:1854)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:571)
I had two different versions of the jackson-databind JAR on the classpath, 2.5.4 and 2.6.3. Deleting the 2.5.4 version of the JAR from the classpath fixed the issue.
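If you need to check for the same kind of conflict, a quick way to spot duplicate copies is a find over the directories on the worker's classpath; the paths below are typical Confluent install locations and may differ on your machine:
find /usr/share/java /opt -name 'jackson-databind-*.jar' 2>/dev/null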