Getting many exceptions while running the Kafka Connect worker

I am getting many exceptions while running the Kafka Connect worker.
I have set all the worker properties, and all the jar paths look fine.
The exceptions are below:
2020-07-23 18:41:58 WARN Reflections:104 - could not create Dir
using jarFile from url
file:/kafka/bin/../clients/build/libs/kafka-clients*.jar. skipping.
java.lang.NullPointerException
at java.util.zip.ZipFile.<init>(ZipFile.java:213)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:130)
at org.reflections.vfs.Vfs$DefaultUrlTypes$1.createDir(Vfs.java:216)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:99)
at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
at org.reflections.Reflections.scan(Reflections.java:240)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.scan(DelegatingClassLoader.java:373)
at org.reflections.Reflections$1.run(Reflections.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-07-23 18:41:58 WARN Reflections:377 - could not create Vfs.Dir from url. ignoring the exception and continuing
org.reflections.ReflectionsException: Could not open url connection at org.reflections.vfs.JarInputDir$1$1.<init>(JarInputDir.java:37)
at org.reflections.vfs.JarInputDir$1.iterator(JarInputDir.java:33)
at org.reflections.Reflections.scan(Reflections.java:243)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.scan(DelegatingClassLoader.java:373)
at org.reflections.Reflections$1.run(Reflections.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: /kafka/bin/../clients/build/libs/kafka-clients*.jar (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
(full log: kafkaconnectsladev.log)

If you are seeing WARN Reflections, then it is a warning, not an error, and it is safe to ignore.
You can edit the log4j.properties file to silence the warnings if you want. With the Confluent Docker images, this is done via the CONNECT_LOG4J_LOGGERS environment variable.
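For example, a minimal way to silence the Reflections scanner in the worker's log4j.properties (the exact file name and location depend on your distribution):
log4j.logger.org.reflections=ERROR
With the Confluent Docker images, the equivalent environment variable would look something like:
CONNECT_LOG4J_LOGGERS=org.reflections=ERROR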

Thanks, yes, these are all warnings, and with a newer version of the Kafka client library I am no longer seeing them.

Related

Kafka Ignite sink connector

I am trying to replicate data from a Kafka topic to an Apache Ignite cluster. I have added ignite-core.jar along with all the other required files to the plugins directory, and it is loaded by the connector according to the logs. Still, I am facing this error. Any suggestions would be helpful. Thank you.
INFO Instantiated connector string-ignite-connector with version 6.1.1-ccs of type class org.apache.ignite.stream.kafka.connect.IgniteSinkConnector (org.apache.kafka.connect.runtime.Worker:284)
[2021-05-15 08:52:43,717] INFO Finished creating connector string-ignite-connector (org.apache.kafka.connect.runtime.Worker:310)
[2021-05-15 08:52:43,719] ERROR WorkerConnector{id=string-ignite-connector} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:193)
java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:185)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:210)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:349)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:332)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:140)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.ignite.internal.util.typedef.internal.A
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 12 more
[2021-05-15 08:52:43,723] ERROR Failed to create job for myconfig/ignite-connector.properties (org.apache.kafka.connect.cli.ConnectStandalone:110)
[2021-05-15 08:52:43,723] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:121)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to transition connector string-ignite-connector to state STARTED
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:115)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:99)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:118)
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to transition connector string-ignite-connector to state STARTED
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:335)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:140)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:185)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:210)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:349)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:332)
... 7 more
Caused by: java.lang.ClassNotFoundException: org.apache.ignite.internal.util.typedef.internal.A
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 12 more
It looks like you have the ignite-kafka module on your classpath, but not ignite-core and its dependencies, which are also needed. My understanding ends there.
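As a sketch, fixing that usually means copying the Ignite core jars into the same plugin directory as the connector and pointing the worker at the parent directory; the paths below are illustrative, not taken from the question:
cp $IGNITE_HOME/libs/ignite-core-*.jar /usr/share/kafka/plugins/ignite-sink/
cp $IGNITE_HOME/libs/cache-api-*.jar /usr/share/kafka/plugins/ignite-sink/
and in the worker properties:
plugin.path=/usr/share/kafka/plugins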

Confluent Replicator Failed to Reconfigure Connectors Task?

I have used MirrorMaker in the past but not Replicator, and I am getting an error but am not sure where to start debugging it.
Here is the error:
[2019-08-12 18:04:09,672] ERROR Failed to reconfigure connector's tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:958)
org.apache.kafka.connect.errors.ConnectException: Could not obtain timely topic metadata update from source cluster
at io.confluent.connect.replicator.TopicMonitorThreadWithZk.assignments(TopicMonitorThreadWithZk.java:138)
at io.confluent.connect.replicator.ReplicatorSourceConnector.taskConfigs(ReplicatorSourceConnector.java:99)
at org.apache.kafka.connect.runtime.Worker.connectorTaskConfigs(Worker.java:317)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:997)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:950)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:914)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:110)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:924)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:920)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Zeppelin throwing NullPointerException while configuring

I am trying to set up zeppelin-0.8.0 on my Windows 8 R2 OS. I already have Spark running on my console, i.e. SPARK_HOME, JAVA_HOME, and HADOOP_HOME are set up and running fine. But when I try to execute println("hello") in the Zeppelin Spark interpreter, it throws the error below ...
I have already set SPARK_HOME and JAVA_HOME in the zeppelin-env.cmd file.
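For reference, the relevant zeppelin-env.cmd entries look something like the following; the paths are illustrative, not the actual values from this setup:
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0
set SPARK_HOME=C:\Software\spark
set HADOOP_HOME=C:\Software\hadoop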
Error
DEBUG [2019-01-22 10:05:34,129] ({pool-2-thread-2} RemoteInterpreterManagedProcess.java[start]:153) - callbackServer is serving now
INFO [2019-01-22 10:05:34,143] ({pool-2-thread-2} RemoteInterpreterManagedProcess.java[start]:190) - Run interpreter process [C:\Software\Zepplin\zepplin\bin\interpreter.cmd, -d, C:\Software\Zepplin\zepplin/interpreter/spark, -c, 10.188.16
DEBUG [2019-01-22 10:05:34,419] ({Exec Stream Pumper} RemoteInterpreterManagedProcess.java[processLine]:298) - When was unexpected at this time.
INFO [2019-01-22 10:05:34,435] ({Exec Default Executor} RemoteInterpreterManagedProcess.java[onProcessFailed]:250) - Interpreter process failed {}
org.apache.commons.exec.ExecuteException: Process exited with an error: 255 (Exit value: 255)
at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404)
at org.apache.commons.exec.DefaultExecutor.access$200(DefaultExecutor.java:48)
at org.apache.commons.exec.DefaultExecutor$1.run(DefaultExecutor.java:200)
at java.lang.Thread.run(Thread.java:748)
ERROR [2019-01-22 10:06:34,177] ({pool-2-thread-2} Job.java[run]:190) - Job failed
java.lang.RuntimeException: When was unexpected at this time.
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterManagedProcess.start(RemoteInterpreterManagedProcess.java:205)
at org.apache.zeppelin.interpreter.ManagedInterpreterGroup.getOrCreateInterpreterProcess(ManagedInterpreterGroup.java:64)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getOrCreateInterpreterProcess(RemoteInterpreter.java:111)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:164)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:407)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:307)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
ERROR [2019-01-22 10:06:52,103] ({pool-2-thread-2} NotebookServer.java[afterStatusChange]:2294) - Error
java.lang.RuntimeException: When was unexpected at this time.
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterManagedProcess.start(RemoteInterpreterManagedProcess.java:205)
at org.apache.zeppelin.interpreter.ManagedInterpreterGroup.getOrCreateInterpreterProcess(ManagedInterpreterGroup.java:64)

Consume array of JSON in Avro format and store in S3

I have a JSON array of the following form:
[{"a":74,"b":1519202998533,"c":"Shipped","d":7318},{"a":11,"b":1519202998546,"c":"Shipped","d":40481}]
I have created the topic ord_avro_multiple, for which I have defined the Avro schema as follows:
./bin/kafka-avro-console-producer --broker-list localhost:9092 --topic ord_avro_multiple --property value.schema='{"type":"array","name":"arrayjson","items":{"type":"record","name":"arraysjs","fields":[{"name":"a","type":"int"},{"name":"b","type":"long"},{"name":"c","type":"string"},{"name":"d","type":"int"}]}}'
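Once the producer is running, each array is entered as a single JSON-encoded record per line on stdin, e.g. the sample data from above:
[{"a":74,"b":1519202998533,"c":"Shipped","d":7318},{"a":11,"b":1519202998546,"c":"Shipped","d":40481}]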
When I used kafka-connect-s3 to write the data to an S3 bucket, it gave me the following error:
[2018-02-21 09:02:51,700] ERROR Task s3-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
org.apache.kafka.connect.errors.ConnectException: com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class org.apache.kafka.connect.data.Struct and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: java.util.ArrayList[0])
at io.confluent.connect.s3.format.json.JsonRecordWriterProvider$1.write(JsonRecordWriterProvider.java:81)
at io.confluent.connect.s3.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:402)
at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:204)
at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class org.apache.kafka.connect.data.Struct and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: java.util.ArrayList[0])
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:284)
at com.fasterxml.jackson.databind.SerializerProvider.mappingException(SerializerProvider.java:1110)
at com.fasterxml.jackson.databind.SerializerProvider.reportMappingProblem(SerializerProvider.java:1135)
at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.failForEmpty(UnknownSerializer.java:69)
at com.fasterxml.jackson.databind.ser.impl.UnknownSerializer.serialize(UnknownSerializer.java:32)
at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:119)
at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:79)
at com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:18)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:292)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:2493)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:378)
at io.confluent.connect.s3.format.json.JsonRecordWriterProvider$1.write(JsonRecordWriterProvider.java:77)
... 14 more
[2018-02-21 09:02:51,704] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2018-02-21 09:02:51,705] ERROR Task s3-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
How can I solve this issue?

Kafka Couchbase sink connector getting disconnected after dumping some records

I've installed Confluent 3.3.0 and started ZooKeeper, Schema Registry, and the Kafka broker.
I downloaded the Couchbase connector from the link below:
https://github.com/couchbase/kafka-connect-couchbase
I am running the sink connector with the command below:
./bin/connect-standalone etc/kafka/connect-standalone.properties /home/nayangiri/couch-connect-test/kafka-connect-couchbase/config/quickstart-couchbase-sink.properties
After starting the connector, I begin publishing JSON using the kafka-python library.
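For reference, the publishing side would be along these lines (a minimal kafka-python sketch; the topic name and broker address are assumptions, not taken from the question):
from kafka import KafkaProducer
import json

# Serialize each dict to a JSON-encoded byte string before sending
producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda v: json.dumps(v).encode("utf-8"))
producer.send("test", {"id": 1, "status": "Shipped"})
producer.flush()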
The problem is that the connector disconnects without dumping all the published messages, failing with the error below:
[2017-11-07 20:12:39,815] WARN This transcoder (JsonBinaryTranscoder) does not support mutation tokens - this method is a stub and needs to be implemented on custom transcoders. (com.couchbase.client.java.transcoder.AbstractTranscoder:150)
[2017-11-07 20:12:44,821] WARN This transcoder (JsonBinaryTranscoder) does not support mutation tokens - this method is a stub and needs to be implemented on custom transcoders. (com.couchbase.client.java.transcoder.AbstractTranscoder:150)
[2017-11-07 20:12:44,821] WARN This transcoder (JsonBinaryTranscoder) does not support mutation tokens - this method is a stub and needs to be implemented on custom transcoders. (com.couchbase.client.java.transcoder.AbstractTranscoder:150)
[2017-11-07 20:12:44,823] ERROR Task test-couchbase-sink-1 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
com.couchbase.client.java.error.CannotRetryException: maximum number of attempts reached after 5 retries
at com.couchbase.client.java.util.retry.RetryWithDelayHandler.call(RetryWithDelayHandler.java:101)
at com.couchbase.client.java.util.retry.RetryWithDelayHandler.call(RetryWithDelayHandler.java:42)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
at rx.internal.operators.OperatorZip$Zip.tick(OperatorZip.java:252)
at rx.internal.operators.OperatorZip$Zip$InnerSubscriber.onNext(OperatorZip.java:323)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77)
at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:302)
at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:284)
at rx.internal.operators.NotificationLite.accept(NotificationLite.java:135)
at rx.subjects.SubjectSubscriptionManager$SubjectObserver.emitNext(SubjectSubscriptionManager.java:253)
at rx.subjects.BehaviorSubject.onNext(BehaviorSubject.java:160)
at rx.observers.SerializedObserver.onNext(SerializedObserver.java:91)
at rx.subjects.SerializedSubject.onNext(SerializedSubject.java:67)
at rx.internal.operators.OnSubscribeRedo$2$1.onError(OnSubscribeRedo.java:237)
at rx.internal.operators.OperatorMerge$MergeSubscriber.reportError(OperatorMerge.java:266)
at rx.internal.operators.OperatorMerge$MergeSubscriber.checkTerminate(OperatorMerge.java:818)
at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:579)
at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
at rx.internal.operators.OperatorMerge$InnerSubscriber.onError(OperatorMerge.java:852)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onError(OnSubscribeMap.java:88)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:73)
at rx.observers.Subscribers$5.onNext(Subscribers.java:235)
at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onNext(OnSubscribeDoOnEach.java:101)
at rx.internal.producers.SingleProducer.request(SingleProducer.java:65)
at rx.Subscriber.setProducer(Subscriber.java:211)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.subjects.AsyncSubject.onCompleted(AsyncSubject.java:103)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.completeResponse(AbstractGenericHandler.java:390)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.access$000(AbstractGenericHandler.java:72)
at com.couchbase.client.core.endpoint.AbstractGenericHandler$1.call(AbstractGenericHandler.java:408)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.UnsupportedOperationException
at com.couchbase.connect.kafka.util.JsonBinaryTranscoder.newDocument(JsonBinaryTranscoder.java:40)
at com.couchbase.connect.kafka.util.JsonBinaryTranscoder.newDocument(JsonBinaryTranscoder.java:30)
at com.couchbase.client.java.transcoder.AbstractTranscoder.newDocument(AbstractTranscoder.java:133)
at com.couchbase.client.java.CouchbaseAsyncBucket$16.call(CouchbaseAsyncBucket.java:568)
at com.couchbase.client.java.CouchbaseAsyncBucket$16.call(CouchbaseAsyncBucket.java:560)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
... 19 more
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: com.couchbase.client.core.message.kv.UpsertResponse.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(OnErrorThrowable.java:118)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:73)
... 19 more
[2017-11-07 20:12:44,830] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerSinkTask:456)
[2017-11-07 20:12:44,830] ERROR Task test-couchbase-sink-1 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:148)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:457)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2017-11-07 20:12:44,831] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:149)
[2017-11-07 20:12:44,836] INFO Closed bucket test (com.couchbase.client.core.config.ConfigurationProvider:115)
[2017-11-07 20:12:44,836] INFO Disconnected from Node 10.103.2.76/localhost (com.couchbase.client.core.node.Node:115)
[2017-11-07 20:12:44,839] INFO [null][KeyValueEndpoint]: Got notified from Channel as inactive, attempting reconnect. (com.couchbase.client.core.endpoint.Endpoint:115)
Thank you for reading.
Thanks for raising this issue. This is a regression in version 3.2.0 of the connector. It is being tracked as KAFKAC-83.
The fix is included in version 3.2.1, released on November 8, 2017.
In the meantime you may wish to temporarily downgrade to version 3.1.3, or build the connector from the latest source code.
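Building from source should be a standard Maven build of the repository linked above (assuming Maven and a JDK are installed):
git clone https://github.com/couchbase/kafka-connect-couchbase.git
cd kafka-connect-couchbase
mvn package -DskipTests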
PSA: The Couchbase forums have a dedicated section for discussion related to the Kafka connector.