I am trying to start a Kafka cluster on my local machine, running Ubuntu 18.04, with IntelliJ 2019. I have Kafka 2.3, and I started ZooKeeper beforehand. I am trying to run a shell script containing the command below:
kafka-server-start.sh $KAFKA_HOME/config/server-0.properties
I am getting the error below:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/vagrant/app/apache-hive-3.0.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/vagrant/app/kafka23/libs/slf4j-log4j12-1.7.26.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2020-06-08T13:36:09,329 INFO [main] kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
2020-06-08T13:36:09,548 ERROR [main] kafka.Kafka$ - Exiting Kafka due to fatal exception
java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)[Ljava/lang/Object;
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:43) [kafka_2.12-2.3.0.jar:?]
at kafka.Kafka$.main(Kafka.scala:67) [kafka_2.12-2.3.0.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.12-2.3.0.jar:?]
Can somebody please help me resolve this issue?
The issue turned out to be multiple SLF4J bindings coming from my .bashrc file: the Hive and Kafka SLF4J bindings were conflicting. I commented out the relevant Hive lines in my .bashrc and was then able to start the Kafka cluster.
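For illustration, the edit was along these lines. A minimal sketch of the .bashrc change, assuming variable names like the ones below (the paths are taken from the log above; the variable names are my guess):

# Commented out: Hive's lib directory was putting a second SLF4J binding
# (and other conflicting jars) on the classpath Kafka inherits at startup.
# export HIVE_HOME=/home/vagrant/app/apache-hive-3.0.0-bin
# export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib/*

export KAFKA_HOME=/home/vagrant/app/kafka23
export PATH="$PATH:$KAFKA_HOME/bin"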
I am trying to stream data from one file to another. It was working earlier, but suddenly it started failing with ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130). I have restarted ZooKeeper, the Kafka server, the Schema Registry, and the source and sink connectors, but I am still facing the same issue and am unable to resolve it. Any suggestion would be helpful.
Source connector:
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/home/jimmacaulay/Desktop/ETL/Kafka/confluent-5.5.1/data/data/Jim_Source.csv
topic=Jim
Sink connector:
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=/home/jimmacaulay/Desktop/ETL/Kafka/confluent-5.5.1/data/data/Jim_Sink.csv
topics=Jim
Error:
[2020-08-18 06:25:50,482] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)
org.apache.kafka.connect.errors.ConnectException: Unable to initialize REST server
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:217)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:87)
Caused by: java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8083
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.eclipse.jetty.server.Server.doStart(Server.java:385)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:215)
... 1 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
... 8 more
I resolved the error by starting connect-standalone with the source and sink properties together:
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-source_Topic_Jim.properties ../config/connect-file-sink_Topic_Jim.properties
Earlier I was starting them separately, as below:
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-source_Topic_Jim.properties
sh connect-standalone ../config/connect-avro-standalone.properties ../config/connect-file-sink_Topic_Jim.properties
Cause of the issue:
When starting them separately, connect-standalone starts first for the source properties and binds port 8083. When the sink properties are then started, the second worker tries to use the same port and fails.
Solutions:
Pass both the source and sink properties while starting connect-standalone, so they share the same worker and port.
Or define different port numbers in each worker's properties file and start them separately (see the sketch below).
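If you go with the different-ports option, a minimal sketch of the change, assuming you keep a separate copy of the worker config for the sink (the file name here is hypothetical; rest.port is the classic standalone worker property, replaced by listeners in newer Connect releases):

# connect-avro-standalone-sink.properties (hypothetical second worker config)
rest.port=8084

With that, the source and sink workers can be started separately without both trying to bind port 8083.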
I am learning how to integrate Kafka with Apache Camel and I encountered the following error. Any help will be appreciated. I have a file created inside the C:/inbox folder and want to consume the text in it using a Kafka consumer. I am using version 3.1.0 of Apache Camel. Below is my code:
package com.javainuse;

import org.apache.camel.builder.RouteBuilder;

public class SimpleRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        String topicName = "test123";
        String kafkaServer = "kafka:localhost:9092";
        String zooKeeperHost = "zookeeperHost=localhost&zookeeperPort=2181";
        String serializerClass = "serializerClass=kafka.serializer.StringEncoder";
        String toKafka = "kafka:localhost:9092;kafka:test123?brokers=localhost:9092;zookeeperHost=localhost;zookeeperPort=2181;groupId=group1";
        from("file:C:/inbox?noop=true").to(toKafka);
    }
}
And below is the error I am getting:
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
org.apache.camel.FailedToStartRouteException: Failed to start route route1 because of Route(route1)[From[file:C:/inbox?noop=true] -> [To[kafka:loc...
at org.apache.camel.impl.engine.BaseRouteService.warmUp(BaseRouteService.java:133)
at org.apache.camel.impl.engine.AbstractCamelContext.doWarmUpRoutes(AbstractCamelContext.java:3246)
at org.apache.camel.impl.engine.AbstractCamelContext.safelyStartRouteServices(AbstractCamelContext.java:3139)
at org.apache.camel.impl.engine.AbstractCamelContext.doStartOrResumeRoutes(AbstractCamelContext.java:2925)
at org.apache.camel.impl.engine.AbstractCamelContext.doStartCamel(AbstractCamelContext.java:2725)
at org.apache.camel.impl.engine.AbstractCamelContext.lambda$doStart$2(AbstractCamelContext.java:2527)
at org.apache.camel.impl.engine.AbstractCamelContext.doWithDefinedClassLoader(AbstractCamelContext.java:2544)
at org.apache.camel.impl.engine.AbstractCamelContext.doStart(AbstractCamelContext.java:2525)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.impl.engine.AbstractCamelContext.start(AbstractCamelContext.java:2421)
at com.javainuse.MainApp.main(MainApp.java:12)
Caused by: org.apache.camel.RuntimeCamelException: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.camel.RuntimeCamelException.wrapRuntimeCamelException(RuntimeCamelException.java:52)
at org.apache.camel.support.ChildServiceSupport.start(ChildServiceSupport.java:67)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:87)
at org.apache.camel.processor.channel.DefaultChannel.doStart(DefaultChannel.java:144)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:73)
at org.apache.camel.processor.Pipeline.doStart(Pipeline.java:154)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.processor.DelegateAsyncProcessor.doStart(DelegateAsyncProcessor.java:78)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.impl.engine.BaseRouteService.startChildService(BaseRouteService.java:339)
at org.apache.camel.impl.engine.BaseRouteService.doWarmUp(BaseRouteService.java:189)
at org.apache.camel.impl.engine.BaseRouteService.warmUp(BaseRouteService.java:131)
... 10 more
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:432)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
at org.apache.camel.component.kafka.KafkaProducer.doStart(KafkaProducer.java:119)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.impl.engine.AbstractCamelContext.internalAddService(AbstractCamelContext.java:1455)
at org.apache.camel.impl.engine.AbstractCamelContext.addService(AbstractCamelContext.java:1391)
at org.apache.camel.processor.SendProcessor.doStart(SendProcessor.java:240)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:87)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler.doStart(RedeliveryErrorHandler.java:1454)
at org.apache.camel.support.ChildServiceSupport.start(ChildServiceSupport.java:60)
... 25 more
Caused by: org.apache.kafka.common.config.ConfigException: Invalid url in bootstrap.servers: localhost:9092;zookeeperHost=localhost;zookeeperPort=2181;groupId=group1
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:58)
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:407)
... 37 more
Process finished with exit code 0
The error stacktrace says that your Kafka endpoint URI is invalid (see the bottom of the stacktrace), and it is indeed.
The correct form is kafka:[topicname]?[options] (see the Camel-Kafka docs).
So, looking at your URI, it should probably be:
kafka:test123?brokers=localhost:9092&groupId=group1
Your URI has the following problems (a corrected route sketch follows the list):
It contains kafka:[topicname] twice, which is invalid.
One of the kafka:[topicname] parts is actually kafka:[brokers]; remove it.
It uses semicolons (;) instead of & to delimit options.
It contains ZooKeeper options meant for old versions of camel-kafka; remove them.
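Putting those fixes together, a minimal sketch of the corrected route from the question (same topic, broker, and group id as in your code; untested against your setup):

package com.javainuse;

import org.apache.camel.builder.RouteBuilder;

public class SimpleRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // kafka:[topicname]?[options] - a single topic, options delimited by &
        String toKafka = "kafka:test123?brokers=localhost:9092&groupId=group1";
        from("file:C:/inbox?noop=true").to(toKafka);
    }
}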
By the way: the line SLF4J: Defaulting to no-operation (NOP) logger implementation at the top of your stacktrace says that you are using the SLF4J logging interface but have no implementation added to your project.
If you use Maven, you can add the following dependency to pull in the SLF4J API as well as Logback as the implementation (add a <version> element unless it is managed by a parent POM):
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
</dependency>
Using the following link, I have tried running Kafka on Windows: Setting Up and Running Apache Kafka on Windows OS
I am able to run ZooKeeper without any errors, but when I try to run the command
.\bin\windows\kafka-server-start.bat .\config\server.properties
I get the following error:
ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.IllegalArgumentException: Error creating broker listeners from 'http://localhost:9092': No security protocol defined for listener HTTP
at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:312)
at kafka.server.KafkaConfig.advertisedListeners(KafkaConfig.scala:1334)
at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1396)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1374)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1063)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:1043)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:59)
at kafka.Kafka.main(Kafka.scala)
Please direct me in resolving the above issue.
Thanks
I was able to resolve the stated errors by doing the following things:
I reinstalled Kafka.
I made sure that my environment path for Java is pointing to a 64-bit VM only.
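For what it's worth, the No security protocol defined for listener HTTP part of the error usually means that listeners (or advertised.listeners) in config\server.properties uses an http:// scheme; a minimal plaintext setup would look like this (an assumption based on the quoted error, not on your full config):

# config\server.properties - listener names must map to a security protocol
listeners=PLAINTEXT://localhost:9092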
I am trying to set up Confluent 4.1.1 on Ubuntu 16.04. To start the ZooKeeper server, I ran ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties.txt from the root directory of Confluent, following this tutorial.
The error that comes up is:
log4j:ERROR Could not read configuration file from URL [file:./bin/../config/log4j.properties].
java.io.FileNotFoundException: ./bin/../config/log4j.properties (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.<clinit>(QuorumPeerMain.java:64)
log4j:ERROR Ignoring configuration file [file:./bin/../config/log4j.properties].
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I am new to Kafka, and I have no clue what this means. Any help in resolving this would be appreciated.
The link you're following covers only Apache Kafka, not Confluent, though they should work similarly, at least for starting ZooKeeper.
If you've downloaded the Confluent distribution, though, and want a single-node cluster, you can use the Confluent CLI.
To start ZooKeeper, Kafka, and the rest of the Confluent Platform, run
./bin/confluent start
Otherwise, the ZooKeeper startup script expects a .properties file rather than a .txt file, and it might be unable to detect where you've extracted the tarball, so you can instead install Confluent with apt like a normal software package:
https://docs.confluent.io/current/installation/installing_cp/deb-ubuntu.html
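If you stay on the tarball instead, one quick thing to try, assuming the file is really named zookeeper.properties, is dropping the .txt extension from the command in the question:

./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties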
According to the documentation in the link:
1: Run these commands after replacing path-to-confluent with your path (they make the confluent command recognizable from the terminal):
export CONFLUENT_HOME=<path-to-confluent>
export PATH="${CONFLUENT_HOME}/bin:$PATH"
2: Run the command below (it starts all the services, including ZooKeeper, Kafka, Schema Registry, etc.):
confluent local services start
When I start Confluent, Schema Registry fails, preventing the process from completing successfully. This is the response I get:
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
Schema Registry failed to start
schema-registry is [DOWN]
Starting kafka-rest
Kafka Rest failed to start
kafka-rest is [DOWN]
Starting connect
connect is [UP]
When I tried to run the processes individually, ZooKeeper ran without problems. However, when I launched Kafka, ZooKeeper displayed the following error:
Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
Then, when I attempted to run Schema Registry, I was hit with a massive list of errors. I'm sure the errors all point to one small thing. Here are some of the errors (many repeat within the same long message):
1.
WARNING: HK2 service reification failed for [org.glassfish.jersey.message.internal.DataSourceProvider] with an exception:
MultiException stack 1 of 2
java.lang.NoClassDefFoundError: javax/activation/DataSource
2.
MultiException stack 2 of 2
java.lang.IllegalArgumentException: Errors were discovered while reifying SystemDescriptor
3.
java.lang.IllegalArgumentException: While attempting to resolve the dependencies of org.glassfish.jersey.server.validation.internal.ValidationBinder$ConfiguredValidatorProvider errors were found
4.
java.lang.NoClassDefFoundError: javax/xml/bind/ValidationException
Some of the errors vary slightly based on location, but for the most part these four errors are printed out dozens of times.
I did my best to make sure no ports were being used by other processes. I also stopped and destroyed all instances of Confluent that I had created before. I've played around with Kafka on this computer before, so I theorize that could have something to do with it, but I've made sure to close all past ZooKeeper and Kafka instances.
I've tried to run Confluent on a different computer and didn't run into any issues. Does anyone know what the problem could be? I can send the entire error message and provide any additional details.
Thanks in advance!
Remove Java 9.
I had both Java 9 and Java 8 on my computer. It turns out Confluent was attempting to use Java 9, which isn't compatible with Confluent. When I deleted everything related to Java 9, Confluent started using Java 8, which solved the problem.
As BluePhantom pointed out, using Java 7 will also do the trick.
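In case it helps others, one way to check which Java the shell resolves and to point it at Java 8 on Linux (the JDK path below is illustrative; adjust it to your install):

java -version
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
java -version    # should now report 1.8.x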