New download of Kafka already contains the old topic

I was working with a Kafka download pack and was following the Kafka getting started guide. As part of that, I created a sample topic called test.
Then I wanted to try setting some access control lists using the kafka-acls.sh script, but for some reason that script was not inside the bin directory of my Kafka pack.
So I downloaded a fresh Kafka pack from the website to check, and the script was available there. (I don't know why or how it was missing from the earlier pack.)
However, when I started Kafka from my new pack and tried to create the same topic test, I got an error saying that the topic already exists.
How is this possible even with a freshly downloaded instance? Does Kafka save topics in some common directory or something?
Shabir

Found the reason. I figured that if topics persist even across different bundles of Kafka, they must be stored somewhere on disk outside the bundle itself.
A little searching showed that ZooKeeper stores its data in the directory pointed to by dataDir in the zookeeper.properties file, which is /tmp/zookeeper by default.
Once I deleted this folder and started a fresh Kafka pack, all previous topics were gone and it behaved like a brand new pack.
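For anyone who wants the concrete steps, this is roughly what it comes down to, assuming the default properties files shipped with the pack (adjust the paths if you changed them):

# config/zookeeper.properties (default)
dataDir=/tmp/zookeeper

# stop Kafka and ZooKeeper, then wipe the old ZooKeeper state
rm -rf /tmp/zookeeper

Kafka's own message data goes to the directory set by log.dirs in config/server.properties (/tmp/kafka-logs by default), so clearing that folder as well gives a completely clean slate.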
Thanks
Shabir

Related

How to read an external config file when starting a bundle in JBoss Fuse / Karaf

The problem is simple: I want to print all topics from Apache Kafka after installing a Kafka module on Karaf. I need to read properties from a cfg file located in jbossfuse/etc and create a KafkaConsumer object. I want to implement a BundleActivator so that my code runs the moment the module is installed.
The question is: how can I get the properties from the config file?
I found some solution here: some solution, where they said "you can use the ConfigAdmin service from the OSGi spec". How can I use it? Examples with code are welcome.
Karaf uses Felix-FileInstall to read config files: http://felix.apache.org/documentation/subprojects/apache-felix-file-install.html
So if there is a file named kafka.cfg, it will pick it up and register a config with the ConfigAdmin-Service under the pid 'kafka'.
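For example, a file like this (the property names are just placeholders for this illustration, nothing Karaf or Kafka requires):

# etc/kafka.cfg - picked up by FileInstall and registered with ConfigAdmin under the pid 'kafka'
bootstrap.servers=localhost:9092
topic=test

Anything that looks up the pid 'kafka' from ConfigAdmin will then see these properties.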
You can fetch the ConfigAdmin service using an Activator and read the config from there, but I strongly recommend using Declarative Services or Blueprint instead to interact with the OSGi framework; both support injection of the configuration if it is available.
Otherwise you have to deal with the following issues yourself:
there is no ConfigAdmin (yet), maybe because your bundle starts earlier
the ConfigAdmin changes (for example due to a package refresh or update)
the configuration is not yet registered (because Felix has not read it yet)
the configuration gets updated (for example someone changes the file)

updating kafka dependency in camus is causing messages not to be read by EtlRecordReader

In my project, Camus has been used for a long time and has never been updated.
The Camus project uses Kafka version 0.8.2.2. I want to find a workaround to use Kafka 1.0.0.
So I cloned the repository and updated the dependency. When I do that, the Message here requires additional parameters here.
As given in the GitHub link above, the code compiles, but the messages are not read from Kafka due to the condition here.
Is it possible to update the Kafka dependency along with the appropriate constructors of kafka.message.Message and make it work?

libzookeeper_mt.so.2 and libmosquitto.so.1 not found

So here is the problem: I installed ZooKeeper (a prerequisite for Kafka) using apt-get, and also unpacked a tar file onto my image.
When I run the Kafka adapter to connect a Kafka producer to my ESP (SAS Event Stream Processing), I get an error saying the following two shared libraries cannot be found:
libzookeeper_mt.so.2
libmosquitto.so.1
I did look around quite a few questions on SO, but didn't find anything beyond someone saying that these files are located in the /usr/local/lib directory.
Unfortunately my directory contains only the following files:
vagrant#packer-virtualbox-iso-1421293493:/usr/local/lib$ ls
librdkafka.a librdkafka++.a librdkafka.so librdkafka++.so
librdkafka.so.1 librdkafka++.so.1 pkgconfig python2.7 python3.4 site_ruby
Can anyone tell me where I can find these two files so that I can make them available and run my Kafka adapters? FYI, here is the link to the Kafka adapter documentation in case anyone wants to know more:
http://go.documentation.sas.com/?docsetId=espca&docsetVersion=4.2&docsetTarget=p0sbfix2ql9xpln1l1x4t9017aql.htm&locale=en
@alvits suggested that the libs are not installed, so I will try a separate installation again. Any help in the meantime will still be appreciated!
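If they really are not installed anywhere, the usual fix is to install the distro packages that ship those shared objects. On Debian/Ubuntu the package names are typically along these lines (they may differ by release, so treat this as a sketch rather than exact commands):

sudo apt-get install libzookeeper-mt2 libmosquitto1
sudo ldconfig
ldconfig -p | grep -E 'zookeeper_mt|mosquitto'   # verify libzookeeper_mt.so.2 and libmosquitto.so.1 are now visible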

Delete Topic from kafka within c++ code

I am using the C++ Kafka implementation rdkafka (edenhill/librdkafka).
The question is about topic deletion. I create lots of topics with GUID names while my program is running, and at the end of the program (in destructors) I want to clean up all those topics, since there is no need for them any more.
How can I do it from within my C++ code?
thanks ahead
Remove all the folders that have your topic name from the kafka-logs directory on the Kafka server. You can do that through code in C++.

Error: Could not find or load main class config.zookeeper.properties

I am trying to execute a sample producer-consumer application using Apache Kafka. I downloaded it from https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.0.0/kafka-0.10.0.0-src.tgz . Then I started following the steps given in http://www.javaworld.com/article/3060078/big-data/big-data-messaging-with-kafka-part-1.html.
When I try to run bin/zookeeper-server-start.sh config/zookeeper.properties, I get Error: Could not find or load main class config.zookeeper.properties. I googled the issue but didn't find any useful information. Can anyone help me to continue?
You've downloaded the source package. Download the binary package of Kafka and test with that.
You have to download the binary version from the official Kafka web site.
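For example, for the 0.10.0.0 release the binary download is named after the Scala version it was built with (e.g. kafka_2.11-0.10.0.0.tgz) rather than kafka-0.10.0.0-src.tgz. Roughly:

tar -xzf kafka_2.11-0.10.0.0.tgz
cd kafka_2.11-0.10.0.0
bin/zookeeper-server-start.sh config/zookeeper.properties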
Assuming you have the correct binary version, check that you do not already have CLASSPATH defined in your environment. If you do, and the defined CLASSPATH has a space in it (e.g. C:\Program Files\<>), then neither ZooKeeper nor Kafka will start.
To solve this, either delete your existing CLASSPATH or modify the startup script that builds the ZooKeeper and Kafka CLASSPATH values, putting your CLASSPATH entry in double quotes before the path is built.
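On Linux, the quick way to apply the first option is simply to clear CLASSPATH for the current shell before starting the servers (on Windows, set CLASSPATH= in the same console does the equivalent); for example:

unset CLASSPATH
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties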