Error: Could not find or load main class config.zookeeper.properties - apache-kafka

I am trying to execute a sample producer/consumer application using Apache Kafka. I downloaded it from https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.0.0/kafka-0.10.0.0-src.tgz and then started following the steps given in http://www.javaworld.com/article/3060078/big-data/big-data-messaging-with-kafka-part-1.html.
When I tried to run bin/zookeeper-server-start.sh config/zookeeper.properties, I got Error: Could not find or load main class config.zookeeper.properties. I googled the issue but didn't find any useful information. Can anyone help me continue?

You've downloaded the source package. Download the binary package of Kafka instead and test with that.

You have to download the binary version from the official Kafka website.
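A minimal sketch of fetching and starting the binary distribution. The mirror URL and version are examples; the key point is the kafka_&lt;scala-version&gt;-&lt;kafka-version&gt;.tgz name with no -src suffix:

```shell
# Fetch the *binary* distribution -- note the Scala-version prefix
# ("kafka_2.11-") and the absence of a "-src" suffix in the file name.
# URL/version are examples; pick a current release from kafka.apache.org.
wget https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.11-0.10.0.0.tgz
tar -xzf kafka_2.11-0.10.0.0.tgz
cd kafka_2.11-0.10.0.0

# Start ZooKeeper, then the broker, each with its bundled config.
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties
```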

Assuming you have the correct binary version, check that you do not already have CLASSPATH defined in your environment. If you do, and the defined CLASSPATH has a space in it (e.g. C:\Program Files\<>), then neither ZooKeeper nor Kafka will start.
To solve this, either delete your existing CLASSPATH or modify the startup script that builds the ZooKeeper and Kafka CLASSPATH values, putting your CLASSPATH entry in double quotes before the path is built.
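A quick way to check for this, assuming a POSIX shell (on Windows, inspect the CLASSPATH environment variable instead):

```shell
# If CLASSPATH is set and contains a space, the start scripts will build
# a broken classpath. Unsetting it for the current session is the
# quickest fix; quoting it inside the scripts is the durable one.
case "$CLASSPATH" in
  *' '*) echo "CLASSPATH contains a space: $CLASSPATH"; unset CLASSPATH ;;
esac
bin/zookeeper-server-start.sh config/zookeeper.properties
```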


How to read an external config file when starting a bundle (JBoss Fuse / Karaf)

The problem is simple: I want to print all topics from Apache Kafka after installing a Kafka module on Karaf. I need to read properties from a .cfg file located in jbossfuse/etc and create a KafkaConsumer object. I want to implement BundleActivator so I can run my start method at the moment the module is installed.
The question is: how can I get properties from the config file?
I found a suggestion saying "you can use the ConfigAdmin service from the OSGi spec." How can I use it? Code examples are welcome.
Karaf uses Felix-FileInstall to read config files: http://felix.apache.org/documentation/subprojects/apache-felix-file-install.html
So if there is a file named kafka.cfg, it will pick it up and register a config with the ConfigAdmin-Service under the pid 'kafka'.
You can fetch the ConfigAdmin service and read the config from an Activator, but I strongly recommend using Declarative Services or Blueprint instead to interact with the OSGi framework; both support injection of configuration when it is available.
Otherwise you have to deal with the following cases yourself:
there is no ConfigAdmin (yet), maybe because your bundle starts earlier
the ConfigAdmin changes (for example due to a package refresh or update)
the configuration is not yet registered (because Felix has not read it yet)
the configuration gets updated (for example someone changes the file)
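As a sketch of the FileInstall side: dropping a &lt;pid&gt;.cfg file into Karaf's etc/ directory is all it takes for the config to show up in ConfigAdmin under that pid. The path and property names below are illustrative, not taken from the question:

```shell
# Hypothetical Karaf home and property names -- adjust to your setup.
KARAF_HOME=/opt/karaf
cat > "$KARAF_HOME/etc/kafka.cfg" <<'EOF'
bootstrap.servers = localhost:9092
group.id = my-consumer
EOF
# Felix FileInstall watches etc/ and registers this file with
# ConfigAdmin under the pid "kafka" (file name minus ".cfg").
```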

Classpath is empty. Please build the project first e.g. by running './gradlew jar -PscalaVersion=2.11.12'

I am not able to run an Apache Kafka service due to a failure while trying to start a ZooKeeper instance. I have downloaded and tried all 3 available downloads from the official site (binaries and source). When I try to start ZooKeeper with
./bin/zookeeper-server-start.sh config/zookeeper.properties
I always get the same error message:
Classpath is empty. Please build the project first e.g. by running
'./gradlew jar -PscalaVersion=2.11.12'
The same goes for (after starting a separate ZooKeeper instance, not the one built into Kafka)
./bin/kafka-server-start.sh config/server.properties
I have tried it under Ubuntu 17.04 and 18.04. When I try this on a virtual machine running Ubuntu 16.04, it works.
Unfortunately, everything I found regarding this problem was for Windows.
Thank you for any help.
In my case it had nothing to do with binary vs. source, because both of them gave the same "Classpath is empty. Please build the project first" error. It's because there is a space in the path where Kafka resides.
I had the same issue, the problem was I was downloading the source of Kafka. So to make my Kafka server run, I downloaded the Kafka binaries and it worked for me.
Kafka binaries: http://mirror.cc.columbia.edu/pub/software/apache/kafka/1.1.0/
You need to download the Kafka binary, not the source.
Download the binary from a mirror, e.g.:
http://mirrors.estointernet.in/apache/kafka/2.2.0/kafka_2.11-2.2.0.tgz
Go to your terminal and run:
$ ./gradlew jar -PscalaVersion=2.11.12
I had the same issue. I solved it by removing the whitespace from my folder name, e.g. "Kafka binary" -> "Kafka_binary".
I got the same message when I tried bin/kafka-topics.sh.
It's just because you have a space in the full path.
Go to the folder and run "pwd"; if the path contains a space, rename the folder to replace the space with an underscore, or use camel case.
I changed the path:
~/Documents/Formation/Moi/Big Data/Logiciels/kafka_2.12-2.4.1
to
~/Documents/Formation/Moi/Logiciels/kafka_binary
and it works (binary distribution)
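The rename can be scripted; a minimal sketch (the path is an example, the point is replacing every space in the directory name):

```shell
# Example path containing a space -- substitute your own install location.
kafka_dir="$HOME/Formation/Big Data/kafka_2.12-2.4.1"
# Build a space-free name and move the install there.
new_dir="$(printf '%s' "$kafka_dir" | tr ' ' '_')"
mv "$kafka_dir" "$new_dir"
```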
Try echo $CLASSPATH in the terminal and check whether Java is installed on the system.
You may need to install Java.
Check the Scala version installed on your system; it should be scalaVersion=2.11.12.
Otherwise, download the Kafka binary built for your installed Scala version.

libzookeeper_mt.so.2 and libmosquitto.so.1 not found

So here is the problem: I installed ZooKeeper (a prerequisite) for Kafka using apt-get. I also unzipped a tar file onto my image.
When I run the Kafka adapter to connect a Kafka producer to my ESP (SAS Event Stream Processing), I get an error saying the following two files are not found or shared:
libzookeeper_mt.so.2
libmosquitto.so.1
I looked around quite a few questions on SO, but didn't find anything beyond someone saying that these files are located in
/usr/local/lib directory
Unfortunately my directory contains only the following files:
vagrant#packer-virtualbox-iso-1421293493:/usr/local/lib$ ls
librdkafka.a librdkafka++.a librdkafka.so librdkafka++.so
librdkafka.so.1 librdkafka++.so.1 pkgconfig python2.7 python3.4 site_ruby
Can anyone tell me where I can locate these 2 files so that I can share them to run my Kafka adapters? FYI, here is the link to the Kafka adapter documentation in case anyone wants to know more:
http://go.documentation.sas.com/?docsetId=espca&docsetVersion=4.2&docsetTarget=p0sbfix2ql9xpln1l1x4t9017aql.htm&locale=en
@alvits suggested that the libs are not installed, so I will be trying a separate installation again. Any help in the meantime is still appreciated!
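A way to check whether the loader can see the two libraries, and to install them if not. The apt package names libzookeeper-mt2 and libmosquitto1 are my best guess for Ubuntu; verify with apt-cache search:

```shell
# Ask the dynamic loader what it can currently resolve.
ldconfig -p | grep -E 'libzookeeper_mt|libmosquitto'

# Search the filesystem in case the libs exist off the loader path.
find / \( -name 'libzookeeper_mt.so*' -o -name 'libmosquitto.so*' \) 2>/dev/null

# If nothing turns up, install the runtime packages (names assumed for
# Ubuntu -- check with: apt-cache search zookeeper mosquitto).
sudo apt-get install libzookeeper-mt2 libmosquitto1
sudo ldconfig   # refresh the loader cache afterwards
```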

New download of Kafka already contains the old topic

I was working with a Kafka download pack and was following the Kafka getting started guide. I created a sample topic called test.
Then I wanted to try setting some access control lists using the kafka-acls.sh script. For some reason, I did not find that script inside the bin directory of my Kafka pack.
So I downloaded a fresh Kafka pack from the website to check, and the script was available there. (I don't know why or how it was missing from the earlier pack.)
However, when I started Kafka from my new pack and tried to create the same topic test, I got an error saying that the topic already exists.
I am trying to figure out how this is possible even with a freshly downloaded instance. Does Kafka save topics in some common directory or something?
Shabir
Found the reason. I figured that if topics persist across different bundles of Kafka, they must be stored somewhere on disk outside the bundle itself.
A little searching showed that ZooKeeper stores its details in the directory pointed to by dataDir in the zookeeper.properties file, which is by default /tmp/zookeeper.
Once I deleted this folder and started a fresh Kafka pack, all previous topics were gone and it behaved like a new, fresh pack.
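So a clean reset looks roughly like this (defaults from the stock config files; check your own dataDir and log.dirs first, and stop both services before deleting):

```shell
# Where ZooKeeper keeps topic metadata (default: /tmp/zookeeper).
grep dataDir config/zookeeper.properties
# Where the broker keeps the message logs (default: /tmp/kafka-logs).
grep log.dirs config/server.properties

# With Kafka and ZooKeeper stopped, wipe both to get a truly fresh state.
rm -rf /tmp/zookeeper /tmp/kafka-logs
```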
Thanks
Shabir

Explain Maven artifact differences: kafka-client, kafka_2.11-<kafka-server-version>, scalatest-embedded-kafka_2.11.

Please explain the Maven artifact differences and when to use which, for kafka-client, kafka_2.11-, and scalatest-embedded-kafka_2.11. Is any of them specifically meant for writing unit tests?
I want to understand when to use what.
In my repo we have been using kafka_2.9.2-0.8.1.1; currently we are planning to move to Kafka broker 0.9.0.1, so I used kafka_2.11-0.9.0.1 and also tried kafka_2.10-0.9.0.1.
When the unit tests run, kafkaTestServer (KafkaServerStartable) always hangs intermittently with kafka_2.10 and kafka_2.11,
but with kafka_2.9.2-0.8.1.1 I never had the hang issue.
If it proceeds, it fails with a KafkaConfig init error or a ScalaObject not found error.
I'm kind of confused by these artifacts. Can anyone explain this to me?
The names encode the Scala version used as well as the Kafka version. For example, kafka_2.9.2-0.8.1.1 is for Kafka 0.8.1.1 (i.e., the suffix after the - is the Kafka version number) and the binaries were compiled using Scala 2.9.2.
Thus, when you write code, you want to use the same Scala version your artifact was compiled with. I assume the hangs and errors are due to a Scala version mismatch.
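The naming scheme can be pulled apart mechanically; a small sketch of the kafka_&lt;scala-version&gt;-&lt;kafka-version&gt; convention:

```shell
# kafka_<scala-version>-<kafka-version>: split the two versions out.
artifact="kafka_2.9.2-0.8.1.1"
scala_ver="${artifact#kafka_}"   # drop the "kafka_" prefix
scala_ver="${scala_ver%%-*}"     # keep everything before the first "-"
kafka_ver="${artifact#*-}"       # keep everything after the first "-"
echo "Scala $scala_ver, Kafka $kafka_ver"   # Scala 2.9.2, Kafka 0.8.1.1
```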