NoClassDefFoundError on creating IgniteSinkConnector - apache-kafka

I am trying to get a distributed setup of the ignite-connector to run. Sadly, it does not work. I was able to grab the log from the creation of the connector via the API.
API POST payload to /connectors
{
  "name": "ignite-connector",
  "config": {
    "connector.class": "org.apache.ignite.stream.kafka.connect.IgniteSinkConnector",
    "tasks.max": "2",
    "topics": "someTopic1",
    "cacheName": "myCache",
    "cacheAllowOverwrite": true,
    "igniteCfg": "/opt/ignite/examples/config/example-cache.xml"
  }
}
I set up the ignite-connector as a plugin. I built an uber-jar from the repo, put it in a separate directory, and included it as a plugin in the .properties file I am using to start connect-distributed.sh.
I set the classpath for the jobs for both the connector and Kafka, which I am managing with systemd:
Environment=CLASSPATH=/opt/kafka/ignite-connector/*
The full error log follows:
[2022-11-17 19:49:30,268] INFO [ignite-connector|worker] SinkConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = ignite-connector
predicates = []
tasks.max = 2
topics = [someTopic1]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.SinkConnectorConfig:376)
[2022-11-17 19:49:30,272] INFO [ignite-connector|worker] EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = ignite-connector
predicates = []
tasks.max = 2
topics = [someTopic1]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:376)
[2022-11-17 19:49:30,276] INFO [ignite-connector|worker] Instantiated connector ignite-connector with version 3.3.1 of type class org.apache.ignite.stream.kafka.connect.IgniteSinkConnector (org.apache.kafka.connect.runtime.Worker:322)
[2022-11-17 19:49:30,276] INFO [ignite-connector|worker] Finished creating connector ignite-connector (org.apache.kafka.connect.runtime.Worker:347)
[2022-11-17 19:49:30,277] ERROR [ignite-connector|worker] WorkerConnector{id=ignite-connector} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:201)
java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:193)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:218)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:363)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:346)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:146)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-11-17 19:49:30,277] INFO [Worker clientId=connect-1, groupId=connect-cluster] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1687)
[2022-11-17 19:49:30,280] ERROR [ignite-connector|worker] [Worker clientId=connect-1, groupId=connect-cluster] Failed to start connector 'ignite-connector' (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1811)
org.apache.kafka.connect.errors.ConnectException: Failed to start connector: ignite-connector
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$startConnector$35(DistributedHerder.java:1782)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:349)
at org.apache.kafka.connect.runtime.WorkerConnector.doRun(WorkerConnector.java:146)
at org.apache.kafka.connect.runtime.WorkerConnector.run(WorkerConnector.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to transition connector ignite-connector to state STARTED
... 8 more
Caused by: java.lang.NoClassDefFoundError: org/apache/ignite/internal/util/typedef/internal/A
at org.apache.ignite.stream.kafka.connect.IgniteSinkConnector.start(IgniteSinkConnector.java:55)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:193)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:218)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:363)
at org.apache.kafka.connect.runtime.WorkerConnector.doTransitionTo(WorkerConnector.java:346)
... 7 more
The mentioned class (A) is included in the ignite-core-2.9.1.jar that is bundled in the uber-jar in the plugin directory.
Any pointers are appreciated.

There seems to be a misunderstanding about what "plugins" are. They are only classes defined as implementations of Converters, Transforms, and Connectors.
Internal Ignite classes are none of these, so they won't be loaded by the plugin.path classloader.
To fix this, you'll need to ensure you export CLASSPATH=/path/to/ignite-files/* (Java expands a trailing /* classpath entry to all JARs in that directory), and you can use jar -tf to validate that the class exists in a specific JAR before starting the Connect process.
It is not a hack; that's just how the Java classloader works.
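For example, a rough way to check and wire this up from a shell (a sketch; the paths mirror the directory layout from the question and the uber-jar name is a placeholder):
# Confirm the missing class really is inside the JAR on disk
jar -tf /opt/kafka/ignite-connector/your-uber.jar | grep 'org/apache/ignite/internal/util/typedef/internal/A.class'
# Put the Ignite JARs on the worker JVM's classpath (with systemd this is the
# Environment=CLASSPATH=... line from the question), then start the worker
export CLASSPATH=/opt/kafka/ignite-connector/*
bin/connect-distributed.sh config/connect-distributed.properties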

Related

java.nio.file.InvalidPathException : "Illegal char <:> at index 2" error with Kafka Connector in Mule 4 Munits

I am facing a very tough situation running MUnit on my project. I am using Mule 4.3.0 and Anypoint Studio 7.4.
Apparently, the error occurs while loading a certs/cacerts file as a property used in an Apache Kafka Connector TLS configuration.
It works absolutely fine when running the normal Mule code (which has the TLS context),
but fails when running MUnit.
I have tried many ways to resolve this with my team, but couldn't fix it. Although similar errors have been reported occasionally by other developers, I couldn't conclude whether this is a bug in the Mule runtime or specifically a Kafka Connector problem.
Finally, once I remove the TLS context from the Kafka config, MUnit works fine. But without TLS enabled, my project is basically useless.
I need your help in resolving this and making my tests work, with the TLS configuration present in its rightful place. Also, please take a look at this question in the forums: Mule Kafka Problem
Given below are two snapshots of the configuration used and the error reported in the MUnit console.
Kafka Consumer TLS Configuration Snapshot:
Error Snapshot:
Complete error report given below:
INFO 2020-10-21 18:44:12,077 [main] org.mule.munit.remote.container.SuiteRunDispatcher: Suite errortopic-db-test-suite.xml will not be deployed: Suite was filtered from running
INFO 2020-10-21 18:44:12,078 [munit.01] org.mule.munit.runner.remote.api.server.RunnerServer: Waiting for client connection
INFO 2020-10-21 18:44:12,086 [munit.01] org.mule.munit.runner.remote.api.server.RunnerServer: Client connection received from 127.0.0.1 - true
WARN 2020-10-21 18:44:19,766 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:19,767 [munit.01] org.mule.runtime.api.tls.AbstractTlsContextFactoryBuilderFactory: Loaded TlsContextFactoryBuilderFactory implementation 'org.mule.runtime.module.tls.api.DefaultTlsContextFactoryBuilderFactory' from classloader 'java.net.URLClassLoader@7fd8c559'
INFO 2020-10-21 18:44:21,684 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-HTTP_Request_configuration_oauth
INFO 2020-10-21 18:44:21,736 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-HTTP_Request_configuration-By
WARN 2020-10-21 18:44:21,800 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,802 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-Apache_Kafka_Consumer_configuration
WARN 2020-10-21 18:44:21,810 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,872 [munit.01] org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager: Initialising Bean: org.mule.runtime.module.extension.internal.runtime.config.ConfigurationProviderToolingAdapter-Apache_Kafka_Producer_configuration
WARN 2020-10-21 18:44:21,879 [munit.01] org.mule.runtime.core.internal.security.tls.TlsProperties: File tls-default.conf not found, using default configuration.
INFO 2020-10-21 18:44:21,917 [munit.01] org.apache.kafka.clients.producer.ProducerConfig: ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [hiding intentional]
buffer.memory = 1024000
client.dns.lookup = default
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class com.mulesoft.connectors.kafka.internal.model.serializer.InputStreamSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = PLAIN
security.protocol = SASL_SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = SunJSSE
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
ssl.truststore.password = [hidden]
ssl.truststore.type = jks
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class com.mulesoft.connectors.kafka.internal.model.serializer.InputStreamSerializer
INFO 2020-10-21 18:44:21,985 [munit.01] org.apache.kafka.common.security.authenticator.AbstractLogin: Successfully logged in.
INFO 2020-10-21 18:44:21,996 [munit.01] org.apache.kafka.clients.producer.KafkaProducer: [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
org.mule.runtime.api.exception.MuleRuntimeException: org.mule.runtime.api.lifecycle.InitialisationException: The consumer has an invalid configuration
Caused by: org.mule.runtime.api.lifecycle.InitialisationException: The consumer has an invalid configuration
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:434)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
at com.mulesoft.connectors.kafka.internal.connection.provider.ProducerConnectionProvider.initialise(ProducerConnectionProvider.java:437)
at com.mulesoft.connectors.kafka.internal.connection.provider.KafkaConnectionProvider.initialise(KafkaConnectionProvider.java:129)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.module.extension.internal.runtime.config.ClassLoaderConnectionProviderWrapper.initialise(ClassLoaderConnectionProviderWrapper.java:96)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.core.internal.connection.AbstractConnectionProviderWrapper.initialise(AbstractConnectionProviderWrapper.java:113)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.doInitialise(LifecycleAwareConfigurationInstance.java:297)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationInstance.initialise(LifecycleAwareConfigurationInstance.java:145)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:117)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.lambda$null$0(LifecycleAwareConfigurationProvider.java:83)
at org.mule.runtime.core.privileged.lifecycle.AbstractLifecycleManager.invokePhase(AbstractLifecycleManager.java:132)
at org.mule.runtime.core.internal.lifecycle.DefaultLifecycleManager.fireInitialisePhase(DefaultLifecycleManager.java:46)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.lambda$initialise$1(LifecycleAwareConfigurationProvider.java:81)
at org.mule.runtime.core.api.util.ExceptionUtils.tryExpecting(ExceptionUtils.java:224)
at org.mule.runtime.core.api.util.ClassUtils.withContextClassLoader(ClassUtils.java:966)
at org.mule.runtime.module.extension.internal.runtime.config.LifecycleAwareConfigurationProvider.initialise(LifecycleAwareConfigurationProvider.java:80)
at org.mule.runtime.core.api.lifecycle.LifecycleUtils.initialiseIfNeeded(LifecycleUtils.java:56)
at org.mule.runtime.core.api.util.func.CheckedConsumer.accept(CheckedConsumer.java:19)
at org.mule.runtime.core.internal.lifecycle.phases.DefaultLifecyclePhase.applyLifecycle(DefaultLifecyclePhase.java:115)
at org.mule.runtime.core.internal.lifecycle.phases.MuleContextInitialisePhase.applyLifecycle(MuleContextInitialisePhase.java:73)
at org.mule.runtime.config.internal.SpringRegistryLifecycleManager$SpringContextInitialisePhase.applyLifecycle(SpringRegistryLifecycleManager.java:128)
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleManager.doApplyLifecycle(RegistryLifecycleManager.java:175)
at org.mule.runtime.core.internal.lifecycle.RegistryLifecycleManager.applyPhase(RegistryLifecycleManager.java:146)
at org.mule.runtime.config.internal.SpringRegistry.applyLifecycle(SpringRegistry.java:289)
at org.mule.runtime.core.internal.registry.MuleRegistryHelper.applyLifecycle(MuleRegistryHelper.java:339)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:287)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.lambda$applyLifecycle$4(LazyMuleArtifactContext.java:250)
at org.mule.runtime.core.internal.context.DefaultMuleContext.withLifecycleLock(DefaultMuleContext.java:531)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.applyLifecycle(LazyMuleArtifactContext.java:248)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:329)
at org.mule.runtime.config.internal.LazyMuleArtifactContext.initializeComponents(LazyMuleArtifactContext.java:317)
at org.mule.munit.runner.config.TestComponentLocator.initializeComponents(TestComponentLocator.java:63)
at org.mule.munit.runner.model.builders.SuiteBuilder.build(SuiteBuilder.java:78)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.buildSuite(RunMessageHandler.java:108)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.parseSuiteMessage(RunMessageHandler.java:94)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.parseAndRun(RunMessageHandler.java:81)
at org.mule.munit.runner.remote.api.server.RunMessageHandler.handle(RunMessageHandler.java:75)
at org.mule.munit.runner.remote.api.server.RunnerServer.handleClientMessage(RunnerServer.java:145)
at org.mule.munit.runner.remote.api.server.RunnerServer.run(RunnerServer.java:91)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.mule.service.scheduler.internal.AbstractRunnableFutureDecorator.doRun(AbstractRunnableFutureDecorator.java:111)
at org.mule.service.scheduler.internal.RunnableFutureDecorator.run(RunnableFutureDecorator.java:54)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: java.nio.file.InvalidPathException: Illegal char <:> at index 2: \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:172)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:157)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:73)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:442)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:423)
... 56 more
Caused by: java.nio.file.InvalidPathException: Illegal char <:> at index 2: \C:/Users/xxx/AppData/Local/Temp/munit-temp-dir/munitworkingdir5345007588634892776/container/apps/app/cacerts
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.lastModifiedMs(SslEngineBuilder.java:298)
at org.apache.kafka.common.security.ssl.SslEngineBuilder$SecurityStore.<init>(SslEngineBuilder.java:275)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.createTruststore(SslEngineBuilder.java:182)
at org.apache.kafka.common.security.ssl.SslEngineBuilder.<init>(SslEngineBuilder.java:100)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:95)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:168)
... 61 more
This has been identified as a bug in Anypoint Studio.
MuleSoft Support has suggested the following:
Upgrade the Studio version to the latest patch, 4.3.0-20200925, which has all the fixes included.
The patches are cumulative, so all fixes from the August release are available in the latest one too.
The distributions are stored in the MuleSoft Nexus EE repository, so you will need to ensure you have the credentials configured in your settings.xml.
Follow the Patching documentation, which explains how to patch MUnit using Maven or Studio.
Note: the first execution may take a while because the runtime artifacts will be downloaded to your local repository.
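If you run the suites from Maven, the command-line flow is short (a sketch; it assumes the patched runtime version 4.3.0-20200925 and the Nexus EE credentials are already configured in your pom.xml and settings.xml as described above):
# The first run downloads the patched runtime artifacts into the local repository,
# so it can take a while
mvn clean test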

Confluent BigQuerySinkConnector Schema Registry error

I am trying to create a Kafka to BigQuery data pipeline using the Confluent BigQuerySinkConnector. The test environment is a cp-all-in-one Docker container; I added the connector to it (it is not included by default). I made all the definitions on the Google BigQuery side (I hope...), but it just gives a Schema Registry error and I cannot understand why it occurs. I created a table named rest_avro in the BigQuery dataset. The topic schema:
{
  "fields": [
    {
      "name": "name",
      "type": "string"
    },
    {
      "name": "age",
      "type": [
        "null",
        "int"
      ]
    }
  ],
  "name": "User",
  "type": "record"
}
I defined this schema on the BigQuery table manually.
There is no error while the connector is running.
My connector configuration is loaded successfully:
[2020-08-19 13:21:46,803] INFO SinkConnectorConfig values:
config.action.reload = restart
connector.class = com.wepay.kafka.connect.bigquery.BigQuerySinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = kcbq-connect1
tasks.max = 1
topics = [rest-avro]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.SinkConnectorConfig)
[2020-08-19 13:21:46,803] INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = com.wepay.kafka.connect.bigquery.BigQuerySinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = kcbq-connect1
tasks.max = 1
topics = [rest-avro]
topics.regex =
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig)
[2020-08-19 13:21:46,804] INFO [Worker clientId=connect-1, groupId=compose-connect-group] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2020-08-19 13:21:55,940] INFO [Consumer clientId=connector-consumer-kcbq-connect1-0, groupId=connect-kcbq-connect1] Seeking to offset 8 for partition rest-avro-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
But the console prints errors like the ones below. Do you have any idea?
[2020-08-19 13:09:45,663] ERROR Task failed with org.apache.kafka.connect.errors.ConnectException error: Exception encountered while trying to fetch latest schema metadata from Schema Registry (com.wepay.kafka.connect.bigquery.write.batch.KCBQThreadPoolExecutor)
Exception in thread "pool-3-thread-79" org.apache.kafka.connect.errors.ConnectException: Exception encountered while trying to fetch latest schema metadata from Schema Registry
at com.wepay.kafka.connect.bigquery.schemaregistry.schemaretriever.SchemaRegistrySchemaRetriever.retrieveSchema(SchemaRegistrySchemaRetriever.java:67)
at com.wepay.kafka.connect.bigquery.SchemaManager.updateSchema(SchemaManager.java:58)
at com.wepay.kafka.connect.bigquery.write.row.AdaptiveBigQueryWriter.attemptSchemaUpdate(AdaptiveBigQueryWriter.java:129)
at com.wepay.kafka.connect.bigquery.write.row.AdaptiveBigQueryWriter.performWriteRequest(AdaptiveBigQueryWriter.java:96)
at com.wepay.kafka.connect.bigquery.write.row.BigQueryWriter.writeRows(BigQueryWriter.java:117)
at com.wepay.kafka.connect.bigquery.write.batch.TableWriter.run(TableWriter.java:77)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:153)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:188)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getLatestVersion(RestService.java:359)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getLatestVersion(RestService.java:351)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getLatestSchemaMetadata(CachedSchemaRegistryClient.java:136)
at com.wepay.kafka.connect.bigquery.schemaregistry.schemaretriever.SchemaRegistrySchemaRetriever.retrieveSchema(SchemaRegistrySchemaRetriever.java:63)
... 8 more
The problem is that the BigQuery Sink Connector is unable to retrieve the current schema from the Schema Registry container.
It seems that the schemaRegistryLocation field has been misconfigured in the BigQuery Sink connector properties. Most probably it has been set to http://localhost:8081.
As the docker-compose.yml shipped in Confluent's repository shows, this endpoint needs to be defined as http://schema-registry:8081 in Dockerized environments.
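You can verify the endpoint before touching the connector by querying Schema Registry from inside the Connect container (a sketch; the service name "connect" matches Confluent's cp-all-in-one compose file and may differ in your setup):
# Should return the registered subjects, e.g. ["rest-avro-value"]
docker-compose exec connect curl -s http://schema-registry:8081/subjects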
With the properties below I managed to create the BigQuery Sink Connector:
{
  "name": "customer-connect1",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "tasks.max": "1",
    "topics": "dbserver1.venue_organization.customers",
    "topicsToTables": "dbserver1.venue_organization.customers=customers",
    "sanitizeTopics": "true",
    "autoCreateTables": "true",
    "autoUpdateSchemas": "true",
    "schemaRetriever": "com.wepay.kafka.connect.bigquery.schemaregistry.schemaretriever.SchemaRegistrySchemaRetriever",
    "schemaRegistryLocation": "http://schema-registry:8081",
    "bufferSize": "100000",
    "maxWriteSize": "10000",
    "tableWriteWait": "1000",
    "project": "venue-organization",
    "datasets": ".*=venue_organization",
    "keyfile": " /data/venue-organization-service-account.json"
  }
}
More here: Google BigQuery Sink Connector Configuration Properties

Cassandra Sink Connector : Error while attempting to create/find topic(s) '_confluent-command'

Cannot create a Kafka -> Cassandra sink connector using ksqlDB:
CREATE SINK CONNECTOR cassandra WITH (
  "connector.class" = 'io.confluent.connect.cassandra.CassandraSinkConnector',
  "tasks.max" = '1',
  "topics" = 'tst',
  "cassandra.contact.points" = 'cassandra',
  "cassandra.keyspace" = 'test',
  "cassandra.write.mode" = 'Update',
  "confluent.topic.bootstrap.servers" = 'kafka:9092'
);
ERROR [CASS|worker] WorkerConnector{id=CASS} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:118)
org.apache.kafka.connect.errors.ConnectException: Error while attempting to create/find topic(s) '_confluent-command'
at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:262)
at io.confluent.license.LicenseStore$1.run(LicenseStore.java:161)
at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:128)
at io.confluent.license.LicenseStore.start(LicenseStore.java:190)
at io.confluent.license.LicenseManager.<init>(LicenseManager.java:155)
at io.confluent.license.LicenseManager.<init>(LicenseManager.java:140)
at io.confluent.connect.utils.licensing.ConnectLicenseManager$Builder.lambda$build$0(ConnectLicenseManager.java:210)
at io.confluent.connect.utils.licensing.ConnectLicenseManager.registerOrValidateLicense(ConnectLicenseManager.java:255)
at io.confluent.connect.cassandra.CassandraSinkConnector.doStart(CassandraSinkConnector.java:50)
at io.confluent.connect.cassandra.CassandraSinkConnector.start(CassandraSinkConnector.java:45)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:110)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:135)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:257)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1190)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:126)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1206)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$14.call(DistributedHerder.java:1202)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:229)
... 21 more
Caused by: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
The Confluent Cassandra sink connector's licensing-topic replication factor (confluent.topic.replication.factor) has a default value of 3, but only one broker is available.
Overriding the default value in the connector config solved the problem!
"confluent.topic.replication.factor" = '1',

Kafka Confluent error - java.net.BindException: Address already in use

I am running Kafka via the Confluent Platform and have followed the steps below, but I get java.net.BindException: Address already in use.
As per the documentation here, https://docs.confluent.io/2.0.0/quickstart.html#quickstart
Start ZooKeeper:
$ ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties
Start Kafka:
$ ./bin/kafka-server-start ./etc/kafka/server.properties
Next, when I run the schema-registry command,
$ ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
I observe the error java.net.BindException: Address already in use.
I am running all of this locally on a MacBook. Could someone please help me solve this "address already in use" error?
Console log:
EFGHS-MER648W:confluent-4.0.0 user$ sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
Password:
[2018-01-09 13:09:03,510] INFO SchemaRegistryConfig values:
metric.reporters = []
kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
response.mediatype.default = application/vnd.schemaregistry.v1+json
kafkastore.ssl.trustmanager.algorithm = PKIX
authentication.realm =
ssl.keystore.type = JKS
kafkastore.topic = _schemas
metrics.jmx.prefix = kafka.schema.registry
kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
kafkastore.topic.replication.factor = 3
ssl.truststore.password = [hidden]
kafkastore.timeout.ms = 500
host.name = 192.168.0.13
kafkastore.bootstrap.servers = []
schema.registry.zk.namespace = schema_registry
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
kafkastore.sasl.kerberos.service.name =
schema.registry.resource.extension.class =
ssl.endpoint.identification.algorithm =
compression.enable = false
kafkastore.ssl.truststore.type = JKS
avro.compatibility.level = backward
kafkastore.ssl.protocol = TLS
kafkastore.ssl.provider =
kafkastore.ssl.truststore.location =
response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
kafkastore.ssl.keystore.type = JKS
ssl.truststore.type = JKS
kafkastore.ssl.truststore.password = [hidden]
access.control.allow.origin =
ssl.truststore.location =
ssl.keystore.password = [hidden]
port = 8081
kafkastore.ssl.keystore.location =
master.eligibility = true
ssl.client.auth = false
kafkastore.ssl.keystore.password = [hidden]
kafkastore.security.protocol = PLAINTEXT
ssl.trustmanager.algorithm =
authentication.method = NONE
request.logger.name = io.confluent.rest-utils.requests
ssl.key.password = [hidden]
kafkastore.zk.session.timeout.ms = 30000
kafkastore.sasl.mechanism = GSSAPI
kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
kafkastore.ssl.key.password = [hidden]
zookeeper.set.acl = false
schema.registry.inter.instance.protocol = http
authentication.roles = [*]
metrics.num.samples = 2
ssl.protocol = TLS
schema.registry.group.id = schema-registry
kafkastore.ssl.keymanager.algorithm = SunX509
kafkastore.connection.url = localhost:2181
debug = false
listeners = [http://0.0.0.0:8081]
kafkastore.group.id =
ssl.provider =
ssl.enabled.protocols = []
shutdown.graceful.ms = 1000
ssl.keystore.location =
ssl.cipher.suites = []
kafkastore.ssl.endpoint.identification.algorithm =
kafkastore.ssl.cipher.suites =
access.control.allow.methods =
kafkastore.sasl.kerberos.min.time.before.relogin = 60000
ssl.keymanager.algorithm =
metrics.sample.window.ms = 30000
kafkastore.init.timeout.ms = 60000
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:175)
[2018-01-09 13:09:03,749] INFO Logging initialized @629ms (org.eclipse.jetty.util.log:186)
[2018-01-09 13:09:04,202] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://192.168.0.13:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore:103)
[2018-01-09 13:09:04,475] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore:228)
[2018-01-09 13:09:04,482] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:242)
[2018-01-09 13:09:04,567] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:138)
[2018-01-09 13:09:04,567] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:72)
[2018-01-09 13:09:04,651] INFO Wait to catch up until the offset of the last message at 7 (io.confluent.kafka.schemaregistry.storage.KafkaStore:277)
[2018-01-09 13:09:04,675] INFO Joining schema registry with Zookeeper-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:210)
[2018-01-09 13:09:04,682] INFO Created schema registry namespace localhost:2181/schema_registry (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector:161)
[2018-01-09 13:09:04,705] INFO Successfully elected the new master: {"host":"192.168.0.13","port":8081,"master_eligibility":true,"scheme":"http","version":1} (io.confluent.kafka.schemaregistry.masterelector.zookeeper.ZookeeperMasterElector:102)
[2018-01-09 13:09:04,715] INFO Wait to catch up until the offset of the last message at 8 (io.confluent.kafka.schemaregistry.storage.KafkaStore:277)
[2018-01-09 13:09:04,778] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.Application:182)
[2018-01-09 13:09:04,844] INFO jetty-9.2.22.v20170606 (org.eclipse.jetty.server.Server:327)
[2018-01-09 13:09:05,411] INFO HV000001: Hibernate Validator 5.1.3.Final (org.hibernate.validator.internal.util.Version:27)
[2018-01-09 13:09:05,547] INFO Started o.e.j.s.ServletContextHandler@54c62d71{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2018-01-09 13:09:05,555] WARN FAILED NetworkTrafficServerConnector@4879dfad{HTTP/1.1}{0.0.0.0:8081}: java.net.BindException: Address already in use (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:366)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
[2018-01-09 13:09:05,557] WARN FAILED io.confluent.rest.Application$1@388526fb: java.net.BindException: Address already in use (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:366)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
[2018-01-09 13:09:05,558] ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:366)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
O2771-C02K648W:confluent-4.0.0 user$
Please help me solve this error.
Thanks.
When I run the command ps aux | grep schema-registry, I get:
O2771-C02K648W:~ user$ ps aux | grep schema-registry
root 20888 0.1 1.5 4584980 255588 ?? S 6:14PM 1:05.15 /usr/bin/java -Xmx512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dschema-registry.log.dir=/Users/user/Downloads/confluent-4.0.0/bin/../logs -Dlog4j.configuration=file:/Users/user/Downloads/confluent-4.0.0/bin/../etc/schema-registry/log4j.properties -cp :/Users/user/Downloads/confluent-4.0.0/bin/../package-schema-registry/target/kafka-schema-registry-package-*-development/share/java/schema-registry/*:/Users/user/Downloads/confluent-4.0.0/bin/../share/java/confluent-common/*:/Users/user/Downloads/confluent-4.0.0/bin/../share/java/rest-utils/*:/Users/user/Downloads/confluent-4.0.0/bin/../share/java/schema-registry/* io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain ./etc/schema-registry/schema-registry.properties
root 20887 0.0 0.0 2456112 3452 ?? S 6:14PM 0:00.02 sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
user 25256 0.0 0.0 2432804 1984 s001 S+ 1:41PM 0:00.00 grep schema-registry
O2771-C02K648W:~ user$
You can try to find out which process is using port 8081 with the commands below:
netstat -vanp tcp | grep 8081
For OS X El Capitan and newer (or if your netstat doesn't support -p), use lsof:
sudo lsof -i tcp:8081
Or use the equivalent command on Windows, and force-kill that process using:
kill -9 {PID}
You can also try changing the Schema Registry default port to another port which is not in use.
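For example (a sketch; port 8082 is just an illustration, pick any free port):
# In ./etc/schema-registry/schema-registry.properties change the listener, e.g.
#   listeners=http://0.0.0.0:8082
# then start the Schema Registry again
./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties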
Check the Kafka Schema Registry port using the command below:
[root@in-ibmibm3718 /]# netstat -tnpl |grep 8093
tcp 0 0 0.0.0.0:8093 0.0.0.0:* LISTEN 16862/java
[root@in-ibmibm3718 /]# ps -eaf |grep -i 16862
kafka 16862 1 35 14:35 ? 01:15:19 /usr/jdk64/java-1.8.0-openjdk/bin/java -Xmx512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dschema-registry.log.dir=/var/log/kafka -Dlog4j.configuration=file:/opt/IBM/basecamp/basecamp-schema-registry/bin/../etc/schema-registry/log4j.properties -cp :/opt/IBM/basecamp/basecamp-schema-registry/bin/../package-schema-registry/target/kafka-schema-registry-package-*-development/share/java/schema-registry/*:/opt/IBM/basecamp/basecamp-schema-registry/bin/../share/java/confluent-common/*:/opt/IBM/basecamp/basecamp-schema-registry/bin/../share/java/rest-utils/*:/opt/IBM/basecamp/basecamp-schema-registry/bin/../share/java/schema-registry/* io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain /opt/IBM/basecamp/basecamp-schema-registry/etc/schema-registry/schema-registry.properties
Kill the process and start the Schema Registry again.
I had the same issue; just force-kill the currently running ZooKeeper process:
$ ps -ef | grep zookeeper
$ kill -9 <process number>
then start ZooKeeper and Kafka again.

Apache Ignite Kafka connection issues

I'm trying to do stream processing and CEP on a Kafka message stream. For this I picked Apache Ignite to build a prototype first. However, I cannot connect to the queue.
I use:
kafka_2.11-0.10.1.0
apache-ignite-fabric-1.8.0-bin
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Kafka works properly; I tested it with a consumer.
Then I start Ignite and run the following in a Spring Boot command-line app:
KafkaStreamer<String, String, String> kafkaStreamer = new KafkaStreamer<>();
Ignition.setClientMode(true);
Ignite ignite = Ignition.start();

Properties settings = new Properties();
// Set a few key parameters
settings.put("bootstrap.servers", "localhost:9092");
settings.put("group.id", "test");
settings.put("zookeeper.connect", "localhost:2181");
settings.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
settings.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
settings.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
settings.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

// Create an instance of StreamsConfig from the Properties instance
kafka.consumer.ConsumerConfig config = new ConsumerConfig(settings);

IgniteCache<String, String> cache = ignite.getOrCreateCache("myCache");

try (IgniteDataStreamer<String, String> stmr = ignite.dataStreamer("myCache")) {
    // allow overwriting cache data
    stmr.allowOverwrite(true);

    kafkaStreamer.setIgnite(ignite);
    kafkaStreamer.setStreamer(stmr);

    // set the topic
    kafkaStreamer.setTopic("test");

    // set the number of threads to process Kafka streams
    kafkaStreamer.setThreads(1);

    // set Kafka consumer configurations
    kafkaStreamer.setConsumerConfig(config);

    // set decoders
    StringDecoder keyDecoder = new StringDecoder(null);
    StringDecoder valueDecoder = new StringDecoder(null);
    kafkaStreamer.setKeyDecoder(keyDecoder);
    kafkaStreamer.setValueDecoder(valueDecoder);

    kafkaStreamer.start();
} finally {
    kafkaStreamer.stop();
}
When the application starts I get
2017-02-23 10:25:23.409 WARN 1388 --- [ main] kafka.utils.VerifiableProperties : Property bootstrap.servers is not valid
2017-02-23 10:25:23.410 INFO 1388 --- [ main] kafka.utils.VerifiableProperties : Property group.id is overridden to test
2017-02-23 10:25:23.410 WARN 1388 --- [ main] kafka.utils.VerifiableProperties : Property key.deserializer is not valid
2017-02-23 10:25:23.411 WARN 1388 --- [ main] kafka.utils.VerifiableProperties : Property key.serializer is not valid
2017-02-23 10:25:23.411 WARN 1388 --- [ main] kafka.utils.VerifiableProperties : Property value.deserializer is not valid
2017-02-23 10:25:23.411 WARN 1388 --- [ main] kafka.utils.VerifiableProperties : Property value.serializer is not valid
2017-02-23 10:25:23.411 INFO 1388 --- [ main] kafka.utils.VerifiableProperties : Property zookeeper.connect is overridden to localhost:2181
Then
2017-02-23 10:25:24.057 WARN 1388 --- [r-finder-thread] kafka.client.ClientUtils$ : Fetching topic metadata with correlation id 0 for topics [Set(test)] from broker [BrokerEndPoint(0,user.local,9092)] failed
java.nio.channels.ClosedChannelException: null
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[kafka_2.11-0.10.0.1.jar:na]
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80) ~[kafka_2.11-0.10.0.1.jar:na]
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79) ~[kafka_2.11-0.10.0.1.jar:na]
at kafka.producer.SyncProducer.send(SyncProducer.scala:124) ~[kafka_2.11-0.10.0.1.jar:na]
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59) [kafka_2.11-0.10.0.1.jar:na]
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94) [kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66) [kafka_2.11-0.10.0.1.jar:na]
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63) [kafka_2.11-0.10.0.1.jar:na]
And reading from the queue doesn't work.
Does anyone have an idea how to fix this?
Edit: If I comment out the contents of the finally block, the following error appears:
2017-02-27 16:42:27.780 ERROR 29946 --- [pool-3-thread-1] : Message is ignored due to an error [msg=MessageAndMetadata(test,0,Message(magic = 1, attributes = 0, CreateTime = -1, crc = 2558126716, key = java.nio.HeapByteBuffer[pos=0 lim=1 cap=79], payload = java.nio.HeapByteBuffer[pos=0 lim=74 cap=74]),15941704,kafka.serializer.StringDecoder@74a96647,kafka.serializer.StringDecoder@42849d34,-1,CreateTime)]
java.lang.IllegalStateException: Data streamer has been closed.
at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.enterBusy(DataStreamerImpl.java:401) ~[ignite-core-1.8.0.jar:1.8.0]
at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addDataInternal(DataStreamerImpl.java:613) ~[ignite-core-1.8.0.jar:1.8.0]
at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:667) ~[ignite-core-1.8.0.jar:1.8.0]
at org.apache.ignite.stream.kafka.KafkaStreamer$1.run(KafkaStreamer.java:180) ~[ignite-kafka-1.8.0.jar:1.8.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Thanks!
I think this happens because the KafkaStreamer is closed right after it's started (the kafkaStreamer.stop() call in the finally block). kafkaStreamer.start() is not synchronous; it just spins up threads to consume from Kafka and returns. Remove the stop() call from the finally block (or defer it to application shutdown) so the streamer keeps running.