Could not create Kafka topic in Windows - apache-kafka

I ran bash as administrator as recommended in Cannot create kafka topic, but it did not help.
From http://logging.apache.org/log4j/1.2/faq.html#noconfig:
Why do I see a warning about "No appenders found for logger" and "Please configure log4j properly"?
This occurs when the default configuration files log4j.properties and log4j.xml can not be found and the application performs no explicit configuration. log4j uses Thread.getContextClassLoader().getResource() to locate the default configuration files and does not directly check the file system. Knowing the appropriate location to place log4j.properties or log4j.xml requires understanding the search strategy of the class loader in use. log4j does not provide a default configuration since output to the console or to the file system may be prohibited in some environments. Also see FAQ: Why can't log4j find my properties in a J2EE or WAR application?.
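For reference, a minimal log4j 1.x properties file that satisfies this (a sketch; it just routes everything to the console, and it must end up on the application classpath):
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n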
In bash:
$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
log4j:ERROR Could not read configuration file from URL [file:/c/portable/kafka_2.13-3.2.0/bin/../config/tools-log4j.properties].
java.io.FileNotFoundException: \c\portable\kafka_2.13-3.2.0\bin\..\config\tools-log4j.properties (The system cannot find the path specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:504)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:119)
at org.slf4j.impl.Reload4jLoggerFactory.<init>(Reload4jLoggerFactory.java:67)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at com.typesafe.scalalogging.Logger$.apply(Logger.scala:31)
at kafka.utils.Log4jControllerRegistration$.<clinit>(Logging.scala:25)
at kafka.admin.TopicCommand$.<clinit>(TopicCommand.scala:44)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
log4j:ERROR Ignoring configuration file [file:/c/portable/kafka_2.13-3.2.0/bin/../config/tools-log4j.properties].
log4j:WARN No appenders could be found for logger (kafka.utils.Log4jControllerRegistration$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Created topic quickstart-events.

The output clearly says Created topic quickstart-events.
The log4j errors are a completely separate issue, and they can be ignored if the command actually worked.
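If you want to double-check, describing the topic should also succeed, for example:
$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092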
Ideally, don't put the Kafka installation on the C drive. Instead, put it in /opt of your WSL2 session, for example, along the lines of the sketch below.
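A rough sketch, assuming a WSL2 Ubuntu session and the 3.2.0 release from the question (the download URL follows the usual Apache archive layout, so verify it before use):
wget https://archive.apache.org/dist/kafka/3.2.0/kafka_2.13-3.2.0.tgz
sudo tar -xzf kafka_2.13-3.2.0.tgz -C /opt
cd /opt/kafka_2.13-3.2.0
bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092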

Related

kafka could not read log4j config file from URL [file:/c/kafka_2.12-2.8.0/bin/../config/tools-log4j.properties]

I started ZooKeeper and Kafka. Then I tried to run the consumer and got this error.
The command I used:
kafka-console-consumer.sh --zookeeper localhost:2181 --topic erjan --from-beginning
The error:
erjan#erjancomputer MINGW64 /c/kafka_2.12-2.8.0/bin/windows
$ kafka-console-consumer.sh --zookeeper localhost:2181 --topic erjan --from-beginning
log4j:ERROR Could not read configuration file from URL [file:/c/kafka_2.12-2.8.0/bin/../config/tools-log4j.properties].
java.io.FileNotFoundException: \c\kafka_2.12-2.8.0\bin\..\config\tools-log4j.properties (The system cannot find the path specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at com.typesafe.scalalogging.Logger$.apply(Logger.scala:48)
at kafka.utils.Log4jControllerRegistration$.<init>(Logging.scala:25)
at kafka.utils.Log4jControllerRegistration$.<clinit>(Logging.scala)
at kafka.utils.Logging.$init$(Logging.scala:47)
at kafka.tools.ConsoleConsumer$.<init>(ConsoleConsumer.scala:44)
at kafka.tools.ConsoleConsumer$.<clinit>(ConsoleConsumer.scala)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
log4j:ERROR Ignoring configuration file [file:/c/kafka_2.12-2.8.0/bin/../config/tools-log4j.properties].
log4j:WARN No appenders could be found for logger (kafka.utils.Log4jControllerRegistration$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
zookeeper is not a recognized option
Why is it not seeing config/log4j.properties?
The log4j file cannot be found because, I'd guess, MinGW and similar shell environments on Windows aren't really tested in the Kafka source code; that's why there are .bat scripts instead. If you want to use a Linux shell, uninstall Git Bash and use WSL2.
Besides that, the log4j noise is irrelevant to the actual error. You need to use --bootstrap-server localhost:9092 instead of the Zookeeper flag in order to consume from Kafka.
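With the topic from your command, that looks like:
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic erjan --from-beginning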
Refer to the official documentation (which is written for Linux); the command arguments are all the same, even if you use the Windows scripts:
http://kafka.apache.org/quickstart
In your case, the two dots in the path are being interpreted incorrectly. I'm assuming you're running the sh scripts on Windows. The two dots come from the kafka-run-class.sh file, in this variable:
base_dir=$(dirname $0)/..
And then this variable is used here:
LOG4J_DIR="$base_dir/config/tools-log4j.properties"
The two dots mean "take the directory above the one where the script lives", i.e. above bin, i.e. the Kafka root directory. In your environment, however, the two dots end up in the path literally. As a workaround, you can use the following instead; when the script is run from the Kafka root, it has the same meaning:
LOG4J_DIR="config/tools-log4j.properties"
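Alternatively, you can normalize the path rather than hard-code it. A sketch, assuming realpath and cygpath are available (they are in Git Bash/MSYS2):
# resolve the .. and convert the /c/... form to C:/... so the JVM can open the file
base_dir=$(dirname "$0")/..
LOG4J_DIR="$(cygpath -m "$(realpath "$base_dir")")/config/tools-log4j.properties"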
Or run the .bat scripts, as written above.

Kafka Connect JDBC sink connector issue

I'm getting the error below while running the JDBC sink connector:
[2020-01-08 15:05:39,271] ERROR Plugin class loader for connector: 'io.confluent.connect.jdbc.JdbcSinkConnector' was not found. Returning: org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@6f2cfcc2 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:165)
[2020-01-08 15:05:39,272] INFO Finished creating connector test-sink (org.apache.kafka.connect.runtime.Worker:273)
[2020-01-08 15:05:39,273] ERROR Plugin class loader for connector: 'io.confluent.connect.jdbc.JdbcSinkConnector' was not found. Returning: org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader@6f2cfcc2 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:165)
[2020-01-08 15:05:39,273] INFO SinkConnectorConfig values:
I have set the plugin path properly, as given in the documentation.
I had the same issue and just solved it. The point here is that you should not copy the connector JAR file into the Kafka libs directory. You should instead set the CLASSPATH when running the command, like this:
env CLASSPATH=./* connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties config/quickstart-couchbase-source.properties
or set the plugin.path in the worker .properties file.
plugin.path=/path_to_the_plugin_jar_file
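For example (a sketch; the directory name is hypothetical, and plugin.path should point at the directory that contains the connector folders or JARs rather than at a single JAR):
# connect-standalone.properties (excerpt)
# hypothetical layout: /opt/connectors/kafka-connect-jdbc/kafka-connect-jdbc-<version>.jar
plugin.path=/opt/connectors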
Hope this helps.

Error starting kafka server in Windows 10

I am facing an issue when starting the Kafka server on my local machine (Windows 10) using the command bin\windows\kafka-server-start \config\server.properties. I am getting the error below. I already have a ZooKeeper server running.
[2018-12-26 12:03:14,124] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-12-26 12:03:14,155] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.nio.file.NoSuchFileException: \config\server.properties
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:560)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:42)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)
I'm currently using Kafka version 2.12-2.1.1.
For me it worked when I initialized the Kafka server with the following command:
kafka-server-start.bat \Tools\kafka_2.12-2.1.1\config\server.properties
Note: the command is run from within kafka_2.12-2.1.1\bin\windows.
Note 2: I created the Tools folder inside C:\ to hold the Kafka and ZooKeeper files.
Running kafka-server-start.bat from within:
C:\Apache\kafka_2.12-2.3.1\bin\windows>
and using a relative path like this:
kafka-server-start.bat ../../config/server.properties
worked for me, for both kafka_2.12-2.3.1 and kafka-2.4.0.
If you are running kafka-server-start from the Kafka home directory, remove the "\" before config. That should do the trick:
bin\windows\kafka-server-start config\server.properties
For those who have added the Kafka binaries to the Windows PATH environment variable and are still stuck: passing a path for server.properties that is relative to the binary location as added in PATH (like ..\..\config\server.properties) will not work. You have to pass the absolute path of the server.properties file.
It is not able to find the server.properties file in the same folder.
Provide the absolute path as below and it runs successfully:
kafka-server-start.bat C:\DEVTools\kafka_2.12-2.3.1\config\server.properties

Error trying to start zookeeper server - Confluent setup

I am trying to set up Confluent 4.1.1 on Ubuntu 16.04. To start the ZooKeeper server, I ran ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties.txt from the root directory of Confluent, following this tutorial.
The error that comes up is:
log4j:ERROR Could not read configuration file from URL [file:./bin/../config/log4j.properties].
java.io.FileNotFoundException: ./bin/../config/log4j.properties (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.<clinit>(QuorumPeerMain.java:64)
log4j:ERROR Ignoring configuration file [file:./bin/../config/log4j.properties].
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I am new to Kafka and have no clue what this means. Any help in resolving this would be appreciated.
The link you're following is for plain Apache Kafka, not Confluent, though they should work similarly, at least for starting Zookeeper.
If you've downloaded the Confluent distribution, though, and want a single-node cluster, you can use the Confluent CLI.
To start Zookeeper, Kafka, and the rest of the Confluent Platform, run:
./bin/confluent start
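If I remember that era's CLI correctly, the matching subcommands to inspect and tear the stack back down were status and stop:
./bin/confluent status
./bin/confluent stop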
Otherwise, note that the Zookeeper startup script doesn't use a .txt file, and it might be unable to detect where you've extracted the tarball. Instead, you can install Confluent with apt like a normal software package:
https://docs.confluent.io/current/installation/installing_cp/deb-ubuntu.html
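From memory, the Debian/Ubuntu route there boils down to roughly the following (a sketch only; take the exact key URL, repository line, and package name from the linked page, since they change between releases):
wget -qO - https://packages.confluent.io/deb/4.1/archive.key | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://packages.confluent.io/deb/4.1 stable main"
sudo apt-get update && sudo apt-get install confluent-platform-oss-2.11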
According to the documentation in the link:
1: Run these commands, replacing <path-to-confluent> with your path (they make the confluent command recognizable from the terminal):
export CONFLUENT_HOME=<path-to-confluent>
export PATH="${CONFLUENT_HOME}/bin:$PATH"
2: Run the command below (it starts all the services, including ZooKeeper, Kafka, Schema Registry, etc.):
confluent local services start

Loading apache server logs to HDFS using Kafka

I want to load Apache server logs into HDFS using Kafka.
Creating the topic:
./kafka-topics.sh --create --zookeeper 10.25.3.207:2181 --replication-factor 1 --partitions 1 --topic lognew
Tailing the Apache access log:
tail -f /var/log/httpd/access_log |./kafka-console-producer.sh --broker-list 10.25.3.207:6667 --topic lognew
In another terminal (in the Kafka bin directory), starting the consumer:
./kafka-console-consumer.sh --zookeeper 10.25.3.207:2181 --topic lognew --from-beginning
The camus.properties file is configured as:
# Needed Camus properties, more cleanup to come
# final top-level data output directory, sub-directory will be dynamically created for each topic pulled
etl.destination.path=/user/root/topics
# HDFS location where you want to keep execution files, i.e. offsets, error logs, and count files
etl.execution.base.path=/user/root/exec
# where completed Camus job output directories are kept, usually a sub-dir in the base.path
etl.execution.history.path=/user/root/camus/exec/history
# Kafka-0.8 handles all zookeeper calls
#zookeeper.hosts=
#zookeeper.broker.topics=/brokers/topics
#zookeeper.broker.nodes=/brokers/ids
# Concrete implementation of the Encoder class to use (used by Kafka Audit, and thus optional for now)
camus.message.encoder.class=com.linkedin.camus.etl.kafka.coders.DummyKafkaMessageEncoder
# Concrete implementation of the Decoder class to use
#camus.message.decoder.class=com.linkedin.camus.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder
# Used by avro-based Decoders to use as their Schema Registry
#kafka.message.coder.schema.registry.class=com.linkedin.camus.example.schemaregistry.DummySchemaRegistry
# Used by the committer to arrange .avro files into a partitioned scheme. This will be the default partitioner for all
# topic that do not have a partitioner specified
#etl.partitioner.class=com.linkedin.camus.etl.kafka.coders.DefaultPartitioner
# Partitioners can also be set on a per-topic basis
#etl.partitioner.class.<topic-name>=com.your.custom.CustomPartitioner
# all files in this dir will be added to the distributed cache and placed on the classpath for hadoop tasks
# hdfs.default.classpath.dir=
# max hadoop tasks to use, each task can pull multiple topic partitions
mapred.map.tasks=30
# max historical time that will be pulled from each partition based on event timestamp
kafka.max.pull.hrs=1
# events with a timestamp older than this will be discarded.
kafka.max.historical.days=3
# Max minutes for each mapper to pull messages (-1 means no limit)
kafka.max.pull.minutes.per.task=-1
# if whitelist has values, only whitelisted topic are pulled. nothing on the blacklist is pulled
#kafka.blacklist.topics=
kafka.whitelist.topics=lognew
log4j.configuration=true
# Name of the client as seen by kafka
kafka.client.name=camus
# Fetch Request Parameters
#kafka.fetch.buffer.size=
#kafka.fetch.request.correlationid=
#kafka.fetch.request.max.wait=
#kafka.fetch.request.min.bytes=
# Connection parameters.
kafka.brokers=10.25.3.207:6667
#kafka.timeout.value=
#Stops the mapper from getting inundated with Decoder exceptions for the same topic
#Default value is set to 10
max.decoder.exceptions.to.print=5
#Controls the submitting of counts to Kafka
#Default value set to true
post.tracking.counts.to.kafka=true
monitoring.event.class=class.that.generates.record.to.submit.counts.to.kafka
# everything below this point can be ignored for the time being, will provide more documentation down the road
##########################
etl.run.tracking.post=false
#kafka.monitor.tier=
#etl.counts.path=
kafka.monitor.time.granularity=10
etl.hourly=hourly
etl.daily=daily
etl.ignore.schema.errors=false
# configure output compression for deflate or snappy. Defaults to deflate
etl.output.codec=deflate
etl.deflate.level=6
#etl.output.codec=snappy
etl.default.timezone=America/Los_Angeles
etl.output.file.time.partition.mins=60
etl.keep.count.files=false
etl.execution.history.max.of.quota=.8
mapred.output.compress=true
mapred.map.max.attempts=1
kafka.client.buffer.size=20971520
kafka.client.so.timeout=60000
#zookeeper.session.timeout=
#zookeeper.connection.timeout=
I get errors when I execute the command below:
hadoop jar camus-example-0.1.0-SNAPSHOT-shaded.jar com.linkedin.camus.etl.kafka.CamusJob -P camus.properties
Below is the error:
[CamusJob] - Fetching metadata from broker 10.25.3.207:6667 with client id camus for 0 topic(s) []
[CamusJob] - failed to create decoder
com.linkedin.camus.coders.MessageDecoderException: com.linkedin.camus.coders.MessageDecoderException: java.lang.NullPointerException
at com.linkedin.camus.etl.kafka.coders.MessageDecoderFactory.createMessageDecoder(MessageDecoderFactory.java:28)
at com.linkedin.camus.etl.kafka.mapred.EtlInputFormat.createMessageDecoder(EtlInputFormat.java:390)
at com.linkedin.camus.etl.kafka.mapred.EtlInputFormat.getSplits(EtlInputFormat.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at com.linkedin.camus.etl.kafka.CamusJob.run(CamusJob.java:280)
at com.linkedin.camus.etl.kafka.CamusJob.run(CamusJob.java:608)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.linkedin.camus.etl.kafka.CamusJob.main(CamusJob.java:572)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: com.linkedin.camus.coders.MessageDecoderException: java.lang.NullPointerException
at com.linkedin.camus.etl.kafka.coders.KafkaAvroMessageDecoder.init(KafkaAvroMessageDecoder.java:40)
at com.linkedin.camus.etl.kafka.coders.MessageDecoderFactory.createMessageDecoder(MessageDecoderFactory.java:24)
... 22 more
Caused by: java.lang.NullPointerException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:195)
at com.linkedin.camus.etl.kafka.coders.KafkaAvroMessageDecoder.init(KafkaAvroMessageDecoder.java:31)
... 23 more
[CamusJob] - Discarding topic (Decoder generation failed) : avrotopic
[CamusJob] - failed to create decoder
Please suggest what can be done to resolve this problem.
I've never used Camus, but I believe this is a Kafka-related error, and it has to do with how you're encoding/decoding the message. I believe the important lines in your stack trace are:
Caused by: com.linkedin.camus.coders.MessageDecoderException: java.lang.NullPointerException
at com.linkedin.camus.etl.kafka.coders.KafkaAvroMessageDecoder.init(KafkaAvroMessageDecoder.java:40)
at com.linkedin.camus.etl.kafka.coders.MessageDecoderFactory.createMessageDecoder(MessageDecoderFactory.java:24)
How are you telling Kafka to use your Avro encoding? You've commented out the following line in your config:
#kafka.message.coder.schema.registry.class=com.linkedin.camus.example.schemaregistry.DummySchemaRegistry
So are you setting that somewhere else in code? If you're not, I would suggest uncommenting that config value and setting it to whatever Avro class you're trying to encode/decode with.
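Concretely, that would mean something like this in camus.properties (a sketch; both class names are copied from the commented-out lines in your own config, and as noted in the edit below, the Dummy* example registry may not be what you ultimately want):
camus.message.decoder.class=com.linkedin.camus.etl.kafka.coders.LatestSchemaKafkaAvroMessageDecoder
kafka.message.coder.schema.registry.class=com.linkedin.camus.example.schemaregistry.DummySchemaRegistry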
It might take you some debugging to use the right classpath and such, but I believe this is an easily solvable problem.
EDIT
In responding to your comments, I have a couple comments of my own.
I have never used Camus, so debugging the errors you get from Camus is not something I'll be able to do very well, or at all. You'll have to spend some time (maybe several hours) researching and trying different things to get it to work.
I doubt DummySchemaRegistry is the correct configuration value that you need. Anything starting with Dummy is probably not a valid configuration option.
A simple Google search about Camus and schema registries revealed some interesting links: SchemaRegistry Classes, KafkaAvroMessageEncoder. Those are more likely to be the correct config values you need. Just my guess, because, again, I've never used Camus.
This could also be of some use to you; I don't know if you've seen it. But if you haven't, googling the specific error you get is something you should do before coming to Stack Overflow.