KeeperErrorCode = NoNode for /brokers/topics/test-topic/partitions - scala

While starting Kafka, I get the following error:
KeeperErrorCode = NoNode for /brokers/topics/test-topic/partitions
Any help will be appreciated.

I found my answer: it was caused by a version mismatch between ZooKeeper and Kafka.
Previously I was using kafka_2.8.0-0.8.0 with ZooKeeper 3.3.5,
but then I installed kafka_2.9.2-0.8.1.1 with ZooKeeper 3.3.5, and now it's working fine.

Most likely this happens because the topic has not been created yet. Topic nodes in ZooKeeper are created when the broker processes the first message for the topic or, alternatively, when AdminUtils.createTopic(...) is called.
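For anyone on the 0.8.x line, here is a minimal, hypothetical sketch of that AdminUtils call in Scala; the ZooKeeper address, timeouts, and partition/replication counts are placeholders, not values from the question:

import java.util.Properties
import org.I0Itec.zkclient.ZkClient
import kafka.admin.AdminUtils
import kafka.utils.ZKStringSerializer

object CreateTopic {
  def main(args: Array[String]): Unit = {
    // Placeholder ZooKeeper connect string; ZKStringSerializer keeps the znode format Kafka expects.
    val zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer)
    try {
      // topic name, number of partitions, replication factor, per-topic config overrides
      AdminUtils.createTopic(zkClient, "test-topic", 1, 1, new Properties())
    } finally {
      zkClient.close()
    }
  }
}

The same thing can be done from the command line with the kafka-topics.sh --create tool shipped with 0.8.1 and later.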

Related

Kafka Snowflake Connector - Stopping after connector error

I've been checking all the Kafka Snowflake connector posts, but none of them covers the issue I'm having.
I installed Kafka locally, with ZooKeeper, and I also want to run a Snowflake connector to copy data from Kafka to Snowflake.
I run ZooKeeper and everything looks right:
zookeeper log
Then I launch the Kafka server, which also looks correct:
server log
However, when I launch the snowflake-kafka-connector:
sh connect-standalone.sh /usr/local/kafka/kafka_2.11-1.1.0/config/connect-standalone.properties /usr/local/kafka/kafka_2.11-1.1.0/config/SF_connect.properties
it breaks like this:
[2022-05-27 10:41:37,380] INFO Finished creating connector TEST_CONNECTOR (org.apache.kafka.connect.runtime.Worker:224)
[2022-05-27 10:41:37,380] INFO Skipping reconfiguration of connector kafkatest since it is not running (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:285)
[2022-05-27 10:41:37,381] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.lang.NullPointerException: Cannot invoke "org.apache.kafka.connect.runtime.rest.entities.ConnectorInfo.name()" because the return value of "org.apache.kafka.connect.runtime.Herder$Created.result()" is null
at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:104)
at org.apache.kafka.connect.cli.ConnectStandalone$1.onCompletion(ConnectStandalone.java:98)
at org.apache.kafka.connect.util.ConvertingFutureCallback.onCompletion(ConvertingFutureCallback.java:44)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:185)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2022-05-27 10:41:37,382] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2022-05-27 10:41:37,382] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:211)
I tried to find information on what the problem is, but I can't find anything. Can you please help me with that?
This is the sf_connector.properties file:
sf_connector.properties
Thanks!
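For context, a minimal sketch of the kind of properties file the Snowflake sink connector expects (every value below is a placeholder, not the poster's actual SF_connect.properties):

name=TEST_CONNECTOR
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
tasks.max=1
topics=test-topic
snowflake.url.name=myaccount.snowflakecomputing.com:443
snowflake.user.name=kafka_connector_user
snowflake.private.key=<private key, single line, no header/footer>
snowflake.database.name=KAFKA_DB
snowflake.schema.name=KAFKA_SCHEMA
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=com.snowflake.kafka.connector.records.SnowflakeJsonConverter

A missing or malformed required property is one common reason connect-standalone stops with a connector error like the one shown above.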

Confluent Start -> Schema Registry Failed to Start

When I start Confluent, Schema-registry fails, preventing the process from completing successfully. This is the response I get:
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
Schema Registry failed to start
schema-registry is [DOWN]
Starting kafka-rest
Kafka Rest failed to start
kafka-rest is [DOWN]
Starting connect
connect is [UP]
When I tried to run the processes individually, zookeeper ran without problems. However, when I launched kafka, zookeeper displayed the following error:
Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
Then, when I attempted to run Schema registry, I was hit with a massive list of errors. I'm sure the errors all point to one small thing. Here are some of the errors (many repeat in the same long message):
1.
WARNING: HK2 service reification failed for [org.glassfish.jersey.message.internal.DataSourceProvider] with an exception:
MultiException stack 1 of 2
java.lang.NoClassDefFoundError: javax/activation/DataSource
2.
MultiException stack 2 of 2
java.lang.IllegalArgumentException: Errors were discovered while reifying SystemDescriptor
3.
java.lang.IllegalArgumentException: While attempting to resolve the dependencies of org.glassfish.jersey.server.validation.internal.ValidationBinder$ConfiguredValidatorProvider errors were found
4.
java.lang.NoClassDefFoundError: javax/xml/bind/ValidationException
Some of the errors vary slightly based on location, but for the most part, these 4 errors are printed out dozens of times.
I did my best to make sure no ports were being used by other processes. I also stopped and destroyed all instances of confluent that I've created before. I've played around with Kafka on this computer before, so I theorize that that could have something to do with it, but I've made sure to close all past zookeeper and kafka instances.
I've tried to run confluent on a different computer and didn't run into any issues. Does anyone know what could be the problem? I can send the entire error message and provide any additional details.
Thanks in advance!
Remove Java 9.
I had both Java 9 and Java 8 on my computer. It turns out Confluent was trying to use Java 9, which it isn't compatible with. Once I deleted everything related to Java 9, Confluent started using Java 8, which solved the problem.
As BluePhantom pointed out, using Java 7 will also do the trick.
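A quick, hedged way to check which Java the shell picks up and to pin it to Java 8 before starting Confluent (the java_home helper is macOS-specific; on Linux, point JAVA_HOME at your JDK 8 install directory):

java -version                                      # should report 1.8.x, not 9
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)  # macOS helper for locating JDK 8
confluent start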

Kafka MirrorMaker does not start

I'm in the process of upgrading our cluster, but I'm having issues trying to make the MirrorMakers run.
These machines run Kafka brokers and Kafka MirrorMakers, with separate init scripts.
The brokers are currently on version 0.10.1.1 and the MirrorMakers on version 0.8.2-beta.
Both have their own config files and locations:
the brokers are installed in /server/kafka/
and the MirrorMakers under /opt/kafka_mirrormaker.
Here are the config lines for the brokers, following what the upgrade process explains:
inter.broker.protocol.version=0.10.1
log.message.format.version=0.8.2
and for the MirrorMakers:
inter.broker.protocol.version=0.8.2
log.message.format.version=0.8.2
I was testing an upgrade to 0.10.2.1 and tried it on one host.
The broker runs fine after the upgrade to 0.10.2.1, but the MirrorMaker dies right away when I try to start it.
I see this exception in the logs:
Exception in thread "main" java.lang.NullPointerException
at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:309)
at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
Exception in thread "MirrorMakerShutdownHook" java.lang.NullPointerException
at kafka.tools.MirrorMaker$.cleanShutdown(MirrorMaker.scala:399)
at kafka.tools.MirrorMaker$$anon$2.run(MirrorMaker.scala:222)
and this one:
[2017-05-18 17:02:27,936] ERROR Exception when starting mirror maker. (kafka.tools.MirrorMaker$)
org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value.
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:436)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:56)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:63)
at org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:340)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:191)
at kafka.tools.MirrorMaker$MirrorMakerProducer.<init>(MirrorMaker.scala:694)
at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:236)
at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
This bootstrap error is odd because that is already configured: server.properties has localhost:9292 set as the bootstrap server.
For this upgrade I did the broker and the MirrorMaker at the same time; I'm not sure whether I should upgrade all the brokers first and then the MirrorMakers.
Any suggestions? Should I follow that procedure: upgrade all the brokers first, then all the MirrorMakers, and once they are upgraded bump the protocol versions in server.properties? The documentation doesn't quite seem to say that: http://kafka.apache.org/documentation.html#upgrade
This has been solved.
The MirrorMakers were not starting because options in the configuration files had changed or were not properly configured.
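For reference, the ConfigException above is thrown while building the producer, so with a 0.10.x MirrorMaker the producer config it is started with has to carry bootstrap.servers. A hedged sketch of the files and launch command (host names, group id, and topic pattern are placeholders):

# producer.properties -- target cluster
bootstrap.servers=target-broker:9092

# consumer.properties -- source cluster (the old consumer reads zookeeper.connect,
# the new consumer reads bootstrap.servers instead)
zookeeper.connect=source-zk:2181
group.id=mirrormaker-group

kafka-mirror-maker.sh --consumer.config consumer.properties --producer.config producer.properties --whitelist '.*'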

Kafka connection error in controller.log

I am using single-node Kafka (v0.10.2) and single-node ZooKeeper (v3.4.8), and my controller.log file is filled with this exception:
java.io.IOException: Connection to 1 was disconnected before the response was read
at kafka.utils.NetworkClientBlockingOps$.$anonfun$blockingSendAndReceive$3(NetworkClientBlockingOps.scala:114)
at kafka.utils.NetworkClientBlockingOps$.$anonfun$blockingSendAndReceive$3$adapted(NetworkClientBlockingOps.scala:112)
at scala.Option.foreach(Option.scala:257)
at kafka.utils.NetworkClientBlockingOps$.$anonfun$blockingSendAndReceive$1(NetworkClientBlockingOps.scala:112)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:136)
at kafka.utils.NetworkClientBlockingOps$.pollContinuously$extension(NetworkClientBlockingOps.scala:142)
at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:192)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:184)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
I googled this exception but was not able to find its root cause. Can someone suggest why this error is happening and how to prevent it?
I also encountered the same issue, in a multi-node cluster scenario. It was caused by the connection between the Kafka node and ZooKeeper being shut down. I would suggest restarting the ZooKeeper server and then the Kafka node to re-establish the connection, after which the broker should handle pub/sub message traffic again.
Hope this helps.
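If you want to try that, a rough sketch with the stock scripts (script locations depend on your install; the config path is a placeholder):

zkServer.sh restart                                 # standalone ZooKeeper 3.4.x
kafka-server-stop.sh
kafka-server-start.sh -daemon config/server.properties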

Kafka: Error from SyncGroup, The request timed out

Recently we have been frequently seeing "Error from SyncGroup: The request timed out" with the Java Kafka APIs.
This issue usually happens with a few topics or consumer groups in the Kafka cluster. Can anyone provide some pointers about this error?
As a workaround, if I change the consumer group name I don't see the error.
Broker Version : 0.9.0
Kafka client version : 0.9.0.1
Exception in thread "main" org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: The request timed out.
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274)
We had the same problem recently. It happens because some Kafka Streams messages have a larger metadata footprint than regular ones (when you don't use Kafka Streams). To fix the issue, go to the __consumer_offsets topic settings and set the max.message.bytes parameter higher than its default. For example, in our case we set max.message.bytes = 20971520. That completely solved the problem.
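A hedged sketch of how that override can be applied with the stock tooling on a 0.9/0.10-era cluster, where topic configs are altered through ZooKeeper (the ZooKeeper address is a placeholder):

kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name __consumer_offsets --add-config max.message.bytes=20971520
kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type topics --entity-name __consumer_offsets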