How to create a durable topic in Kafka

I am new to Kafka and am still learning the basics. I want to create a durable topic which is preserved even after the ZooKeeper and/or Kafka server shuts down.
What I notice is this: I have a ZooKeeper and Kafka server running on my local MacBook. When I shut down the ZooKeeper server and quickly bring it up again, I can see the previously created topics. But if I restart the system and then restart the ZooKeeper server, I don't see the topic that I had created earlier.
I am running kafka_2.9.2-0.8.1.1 on my local system.

This happens because /tmp is cleaned on reboot, resulting in the loss of your data.
To fix this, change the ZooKeeper dataDir property (in config/zookeeper.properties) and the Kafka log.dirs property (in config/server.properties) to point somewhere NOT in /tmp.
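For example, a minimal sketch of the two edits (the /var/lib paths are just placeholders; any directory that survives reboots will do):

# config/zookeeper.properties
dataDir=/var/lib/zookeeper

# config/server.properties
log.dirs=/var/lib/kafka-logs

Restart ZooKeeper and Kafka after the change; topics created from then on will survive a reboot.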

Related

Is a Kafka topic linked with ZooKeeper, and if ZooKeeper is changed will the topic disappear?

I was working with Kafka. I downloaded ZooKeeper, extracted it and started it.
Then I downloaded Kafka, extracted the zipped file and started Kafka. Everything was working well. I created a few topics and I was able to send and receive messages. After that I stopped Kafka and ZooKeeper. Then I read that Kafka itself provides ZooKeeper. So I started the ZooKeeper that was provided with Kafka. However, its data directory was different. I then started Kafka from the same configuration file and the same data directory location, but after starting Kafka I could not find the topics that I had created.
I just want to know: does this mean the metadata about the topics is maintained by ZooKeeper? I searched the Kafka documentation, but I could not find anything in detail.
https://kafka.apache.org/documentation/
Check this documentation provided by Confluent. According to it, Apache Kafka® uses ZooKeeper to store persistent cluster metadata, and ZooKeeper is a critical component of a Confluent Platform deployment. For example, if you lost the Kafka data in ZooKeeper, the mapping of replicas to brokers and the topic configurations would be lost as well, making your Kafka cluster no longer functional and potentially resulting in total data loss.
So, the answer to your question is yes: the purpose of ZooKeeper is to store relevant metadata about the Kafka brokers, topics, etc.
Also, since you have just started working with Kafka and ZooKeeper, I would like to mention this: by default, Kafka stores its data in a temp location which gets deleted on system reboot, so you should change that as well.
The answer to your question is yes.
1) Initially you started the standalone ZooKeeper from the zip file and then stopped it. The topics you created are stored in that standalone ZooKeeper instance's data directory, so the persistent cluster metadata for Kafka stayed behind with it.
2) The second time you started the ZooKeeper that comes packaged with Kafka. This new ZooKeeper instance does not have any of the topic information that you created previously, so you need to create the topics again.
3) In case 1, if you close the terminal and start the standalone ZooKeeper again (with the same data directory), you do not need to create the topics again; they are only lost if you switch to a different ZooKeeper instance or its data directory is wiped.
In short: you created two separate ZooKeeper instances, and topic metadata is not shared between them.
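If you want to see this for yourself, you can list the topic metadata that Kafka keeps in ZooKeeper with the zookeeper-shell tool that ships with Kafka (a sketch; on some versions you may need to type the command inside the interactive shell rather than pass it as arguments):

bin/zookeeper-shell.sh localhost:2181 ls /brokers/topics

Run against the first (standalone) ZooKeeper this lists your topics; run against the freshly started bundled ZooKeeper it comes back empty, which is exactly the behaviour you observed.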

How to add two more Kafka brokers on the local machine if my currently running Kafka broker already has data

I have one broker running on my local machine with Windows OS, which has 2-3 topics with messages stored. I want to scale up my machine by adding two more broker instances. I have followed all the steps to configure 3 brokers on the same machine by creating a different properties file for each.
My broker 0 gets shut down when I start the broker 1 server, with the error below.
[2019-07-11 13:56:33,580] INFO Stopping serving logs in dir C:\kafka_2.12-2.2.1\data\kafka (kafka.log.LogManager)
[2019-07-11 13:56:33,585] ERROR Shutdown broker because all log dirs in C:\kafka_2.12-2.2.1\data\kafka have failed (kafka.log.LogManager)
Is it possible to add more brokers if my existing broker instance has the data?
Or do I need to delete the data directory and start broker 0 fresh? Is there any way to preserve the data without deleting it from the Kafka server?
Yes, you can add brokers to your cluster and migrate/spread data across all your brokers.
The "Expanding your cluster" section in the documentation details the steps to achieve this.
After starting the new brokers, you basically need to use the bin/kafka-reassign-partitions.sh tool (third-party tools also exist) to move data onto them, as sketched below.
Please note, however, that adding brokers on the same machine does not provide much resiliency: if the machine were to go down, all brokers would be affected. But if you just want to play around and learn about Kafka, that may be fine.
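Following that documentation section, the reassignment looks roughly like this (a sketch; "my-topic" is a placeholder, newer Kafka versions take --bootstrap-server instead of --zookeeper, and on Windows use the bin\windows\*.bat equivalents):

# topics-to-move.json lists the topics to spread out, e.g.
# {"topics": [{"topic": "my-topic"}], "version": 1}

# 1. generate a candidate assignment across brokers 0, 1 and 2
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list "0,1,2" --generate

# 2. save the proposed assignment it prints to a file, then execute it
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file expand-cluster-reassignment.json --execute

# 3. check progress
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file expand-cluster-reassignment.json --verify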
To run multiple brokers on the same physical machine, each broker's config must specify a unique broker.id, a different log.dirs and a different port in listeners.
For example, with three config files
config/server{1,2,3}.properties
set a different value in each one:
broker.id=<id>
log.dirs=/data/kafka<id>
listeners=PLAINTEXT://localhost:909<id>
When all three brokers start, new topics will be created evenly across the cluster, but old ones need to be rebalanced.
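To start the three brokers with those files (a sketch; on Windows, as in the question above, use the bin\windows\kafka-server-start.bat equivalents):

# start each broker with its own properties file, e.g. in separate terminals
bin/kafka-server-start.sh config/server1.properties
bin/kafka-server-start.sh config/server2.properties
bin/kafka-server-start.sh config/server3.properties

This separation also explains the shutdown in the question: two brokers cannot share the same log.dirs, because each broker takes a lock on its data directory.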

mapR Kafka cannot start second time round

To date I have either used an existing professional Hadoop installation with components running, or installed Kafka and used the also-supplied ZooKeeper in a native VM.
I am trying to get the mapR Community Edition Sandbox to run now.
There is a Kafka library on mapR, but there is no Kafka process shown when using jps. Seems odd? I managed to get Kafka to start once.
There is a ZooKeeper service on mapR, but it uses port 5181, not 2181.
Kafka uses port 9092.
The log.dirs for Kafka was set to /tmp/kafka-logs; I changed that to /opt/kafka-logs.
The dataDir was also set to /tmp/zookeeper; I changed that to /opt/zookeeper.
I also changed the ZooKeeper port to 5181, as that is what mapR uses.
It ran once, and then I re-started and I still get this type of error:
java.io.FileNotFoundException: /tmp/kafka-logs/.lock (Permission denied)
I have done chmod 777 where required, I think, but I changed the paths from /tmp to /opt/.... So why is it picking up /tmp again?
I have the impression that it keeps pointing to /tmp regardless of the updates to the configuration.
I also see a warning - although I do not think this is an issue:
[2019-01-14 13:26:46,355] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
Maybe it is because of mapR Streams that I cannot influence it so as to run natively?
OK, I could delete the question as I solved it, but for those on mapR, here is what I deduced:
You need to update the port from 2181 to 5181 in server.properties (the zookeeper.connect property) immediately. In this case we integrate with an existing ZooKeeper instance.
Likewise, update the log.dirs for Kafka from /tmp/kafka-logs to /opt/kafka-logs as soon as possible.
Likewise, update the dataDir from /tmp/zookeeper to /opt/zookeeper as soon as possible.
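Concretely, the three edits look like this (a sketch; 5181 is the mapR ZooKeeper port, and the /opt paths are the ones chosen above):

# config/server.properties
zookeeper.connect=localhost:5181
log.dirs=/opt/kafka-logs

# config/zookeeper.properties (only relevant if you run Kafka's bundled ZooKeeper)
dataDir=/opt/zookeeper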
Trying to fix these later otherwise leads to all sorts of issues. I ended up just re-installing and doing it right from scratch.
mapR has a faster offering called mapR Streams which implements the Kafka API. I was not wanting to use that for what I was doing, but the mapR Sandbox has a lot of up-to-date items straight out of the box, certainly compared to Cloudera.

After reboot Kafka topic appears to be lost

Having installed Kafka and having looked at these posts:
kafka loses all topics on reboot
Kafka topic no longer exists after restart
and thus having moved kafka-logs to an /opt... location, I still note that when I reboot:
I can re-create the topic.
The kafka-logs directory contains information on topics, offsets etc., but it gets corrupted.
I am wondering how to rectify this.
Testing of new topics prior to reboot works fine.
There can be two potential problems:
If Kafka is running in Docker, then recreating the container always cleans up the previous state and creates a new cluster, hence all topics are lost (unless the data directory is mounted as a volume).
Check the log.dirs and the ZooKeeper data path (dataDir). If either is set to a /tmp directory, it will be cleaned on each reboot, so you will lose all logs and the topics will be lost with them.
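A quick way to check both paths at once (assuming the stock config file locations in the Kafka install directory):

grep -E '^(log\.dirs|dataDir)' config/server.properties config/zookeeper.properties

If either value still mentions /tmp, that is the directory being wiped at reboot.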
In this VM I noted the ZooKeeper data directory was defined under /tmp. I changed that to /opt (I presume it should be /var though), and the clearing of Kafka data when the instance terminated was corrected. Not sure how to explain this completely.

How to save a Kafka topic at shutdown

I'm configuring my first Kafka network. I can't seem to find any support for saving a configured topic. I know I can create a topic from the quickstart guide here, but how do I save it? I thought I could add the topic info to a .properties file inside the config dir, but I don't see any support for that.
If I shut down my machine, my topic is deleted. How do I save the configuration?
Could the topic be deleted because you are using the default broker config? With the default config, Kafka logs are stored under the /tmp folder, which gets wiped out during a machine reboot. You could change the broker config to pick another location for the Kafka logs.
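Topics do not need to be "saved" separately; creating one is enough once the data directory is persistent. A minimal sketch, assuming a local broker (older Kafka versions use --zookeeper localhost:2181 instead of --bootstrap-server):

# in config/server.properties, point the data somewhere that survives reboots
# log.dirs=/var/lib/kafka-logs

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --replication-factor 1 --partitions 1 --topic test

The topic definition lives in the cluster metadata and its messages under log.dirs, so nothing needs to be added to a .properties file by hand.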