kafka consumer can't get previous unconsumed event - apache-kafka

Step 1: create Topic with only one partition:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Step 2: Produce some messages to topic test.
Step 3: Start a consumer on topic test. It can get all messages which were pushed in Step 2.
It works fine with a topic with 1 partition.
But when I try to use a topic with 2 partitions, the consumer only gets messages that are produced after the consumer is up.
Reproduce:
Step 1: create topic test2 with two partitions:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic test2
Step 2: Produce some messages to topic test2.
Step 3: Start a consumer on topic test2. It can't get the messages produced in Step 2.
Step 4: Keep the consumer running and produce some messages to topic test2; now the consumer can get those messages.
Is this expected behavior, or am I missing something?

The auto.offset.reset option's default value is 'latest'.
If you want to read messages that were sent before the consumer started, set auto.offset.reset=earliest.
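As a minimal Java sketch (the broker address, group id, and topic name are placeholders for illustration), the property can be set like this so that a consumer group with no committed offsets starts from the earliest available offset:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
props.put("group.id", "test-group");                 // hypothetical group id
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// read from the beginning when this group has no committed offset yet
props.put("auto.offset.reset", "earliest");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("test2"));
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
records.forEach(r -> System.out.printf("partition=%d offset=%d value=%s%n",
        r.partition(), r.offset(), r.value()));
consumer.close();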

Related

Kafka configuration min.insync.replicas not working

It's early days in my learning of Kafka, and I am checking out every Kafka property/concept on my local machine.
So I came across this property min.insync.replicas and here is my understanding. Please correct me if I've misunderstood anything.
Once a message is sent to a topic, the message must be written to at least min.insync.replicas number of followers.
min.insync.replicas also includes the leader.
If the number of available live brokers (indirectly, in-sync replicas) is less than the specified min.insync.replicas, then the producer will raise an exception and fail to publish the message.
Following are the steps I followed to create the above scenario
Started 3 brokers locally with broker IDs 0, 1 and 2
created the topic insync and set min.insync.replicas to 2
using the following command
sudo ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic insync --config min.insync.replicas=2
Describe the topic resulted in the following
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 1,2,0
At this point, I made sure the property I provided was picked up by Kafka.
I started sending messages and consuming them from the terminal using the following commands:
Producer: ./kafka-console-producer.sh --broker-list localhost:9092 --topic insync --producer.config ../config/producer.properties
Consumer: ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic insync
At this point, I was able to send and receive messages successfully.
Brought down 2 brokers (0 and 2); describing the topic then resulted in the following:
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 1 Replicas: 2,0,1 Isr: 1
At this point, there is only one in-sync replica (Isr: 1).
Then I tried to produce messages and it worked. I was able to send messages from the console producer and I could see those messages in the console consumer.
My Kafka version: kafka_2.10-0.10.0.0
Following are the producer properties:
bootstrap.servers=localhost:9092
compression.type=none
batch.size=20
acks=all
I expected the producer to fail with NotEnoughReplicasException as mentioned in this.
public class NotEnoughReplicasException
extends RetriableException
Number of insync replicas for the partition is lower than min.insync.replicas
but it worked normally.
Am I missing something? How can I create the scenario?
*************** EDIT **********************
Instead of producing the messages from the console producer, I tried to generate messages from Java code. This time, I got the expected exception in the Kafka broker logs, although I expected it in the producer (Java code). As this experiment is raising more questions, I've posted another question.
Is acks set to "all"? If not, try setting it to all.
I believe that error is for a transactional producer; you may need to add this config:
transactional.id=TID-TEST
If it is still not working, please check your replication factor and min insync replicas for the internal topic __transaction_state.
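For what it's worth, here is a minimal sketch of a plain (non-transactional) Java producer with acks=all against the insync topic from the question; with fewer live in-sync replicas than min.insync.replicas, the send future should fail with NotEnoughReplicasException (the broker address and message contents are just placeholders):
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
props.put("acks", "all");                            // required for min.insync.replicas to be enforced
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (Producer<String, String> producer = new KafkaProducer<>(props)) {
    // get() surfaces the broker-side error; with ISR < min.insync.replicas this
    // should throw an ExecutionException wrapping NotEnoughReplicasException
    producer.send(new ProducerRecord<>("insync", "key", "value")).get();
} catch (ExecutionException e) {
    System.err.println("Send failed: " + e.getCause());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}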

kafka consumer group id does not work as expected

I am new to Apache Kafka. When I go through the quick start instructions at http://kafka.apache.org/quickstart with the latest version, kafka_2.12-2.2.0, I hit a problem that I can't figure out by myself.
The issue is: on my laptop, I created 3 brokers to simulate a cluster.
Each broker has its own server properties file. I made the changes below in each server properties file and left the other values at their defaults.
broker.id=1 (server2: broker.id=2; server3: broker.id=3)
listeners=PLAINTEXT://127.0.0.1:9092 (server2: 127.0.0.1:9023; server3: 127.0.0.1:9004)
log.dirs=/tmp/kafka-logs (server2: /tmp/kafka-logs-2; server3: /tmp/kafka-logs-3)
num.partitions=3 (for all servers)
offsets.topic.replication.factor=3 (for all servers)
After I started ZK and those 3 brokers, I can create a topic 'TestTopic' with 3 partitions on any broker:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3 --topic TestTopic
And then I use below command to start 3 consumers in the same group 'rickygroup'.
//consumer one
bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --from-beginning --topic TestTopic —group.id rickygroup —group.name rickygroup
//consumer two
bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9093 --from-beginning --topic TestTopic —group.id rickygroup —group.name rickygroup
//consumer three
bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9094 --from-beginning --topic TestTopic —group.id rickygroup —group.name rickygroup
Now, I use another terminal to publish some messages on topic 'TestTopic'. The issue is that all of the above 3 consumers get exactly the same messages. My understanding is that the 3 consumers should divide the messages among themselves rather than each receiving all of them; otherwise, the consumer group is consuming duplicates instead of balancing the load.
Am I misunderstanding the consumer group concept, or did I do something wrong here?
The console consumer uses --group (with two dashes), not -group.id and/or -group.name, which are not parsed options.
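As a rough Java illustration of the expected behavior (names and addresses are taken from the question, the rest is a sketch), consumers that share the same group.id are each assigned a subset of the topic's partitions, so messages are split between them rather than duplicated:
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "127.0.0.1:9092");
props.put("group.id", "rickygroup");   // the same group.id in every instance
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("TestTopic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // each instance in the group should print a different subset of the 3 partitions
        System.out.println("Assigned: " + partitions);
    }
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
});
while (true) {
    consumer.poll(Duration.ofSeconds(1)).forEach(r ->
            System.out.printf("p%d@%d: %s%n", r.partition(), r.offset(), r.value()));
}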

Before consumers for new topic are attached, I create new topic and produce message in apache kafka

Before consumers for the new topic are attached, I create a new topic and produce a first message in Apache Kafka.
Then consumers for the new topic are attached, but the first message is not consumed.
Why..?
In this case, the log-end offset is already 1, the committed offset is 1, and the lag is 0.
Doesn't "committed offset=1" mean it has already been consumed?
My question is why it appears to have already been consumed.
Let me know if I've got anything wrong.
This is my test case.
# create new topic
$ kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic NEW_TOPIC_NAME
# produce a first message
$ kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic NEW_TOPIC_NAME
> send a first message
# then execute consumer
$ kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic NEW_TOPIC_NAME
> # the first message is not consumed
But after consumers for the new topic are attached, I produce a second message and it is consumed normally.
By default, the kafka-console-consumer starts from the end of the topic.
If you want to consume messages produced before, you can set --from-beginning for example:
kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
--topic NEW_TOPIC_NAME --from-beginning
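If you're doing the same thing from Java rather than the console tool, a rough equivalent of --from-beginning (broker address and topic name taken from the example above, single partition assumed) is to assign the partition and seek to its beginning:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
TopicPartition tp = new TopicPartition("NEW_TOPIC_NAME", 0);   // the topic has a single partition
consumer.assign(Collections.singletonList(tp));
consumer.seekToBeginning(Collections.singletonList(tp));       // equivalent of --from-beginning
consumer.poll(Duration.ofSeconds(5)).forEach(r ->
        System.out.println("consumed: " + r.value()));
consumer.close();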

Kafka group consumer is not created

I have started a consumer in a consumer group using the following command:
ldnpsr000001131$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic rent_test --property group.id=rent_test auto.commit.enable=true auto.commit.interval.ms=100
As I understand it, the above command will create a consumer group named rent_test and commit offsets every 100 ms. However, when I tried to list all of the consumer groups, the group "rent_test" is not present.
ldnpsr000001131$ bin/kafka-consumer-groups.sh --list --zookeeper localhost:2181
console-consumer-68623
console-consumer-18287
console-consumer-45392
test
console-consumer-9009
KafkaMirror-test
console-consumer-25049
kafka-mirror
console-consumer-61946
console-consumer-940
console-consumer-11318
KafkaMirror
console-consumer-43035
console-consumer-99202
consumer-test
console-consumer-42642
console-consumer-19085
console-consumer-7142
KafkaMirror-test-1
console-consumer-82299
console-consumer-81448
console-consumer-26487
console-consumer-71474
flink
console-consumer-4692
Please advise?
If you are using the old consumer, do not specify group.id as a command-line property. In 0.10.0.1, you have to specify it in a consumer config file and pass that file with --consumer.config:
bin/kafka-console-consumer.sh --zookeeper zkHost:2181 --topic test-topic --consumer.config <config file path>
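For example, the consumer config file (the path is whatever you pass to --consumer.config) could carry the settings from the original command:
group.id=rent_test
auto.commit.enable=true
auto.commit.interval.ms=100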

Is there a way to purge the topic in Kafka?

I pushed a message that was too big into a Kafka message topic on my local machine, and now I'm getting an error:
kafka.common.InvalidMessageSizeException: invalid message size
Increasing the fetch.size is not ideal here, because I don't actually want to accept messages that big.
Temporarily update the retention time on the topic to one second:
kafka-topics.sh \
--zookeeper <zkhost>:2181 \
--alter \
--topic <topic name> \
--config retention.ms=1000
And in newer Kafka releases, you can also do it with kafka-configs --entity-type topics
kafka-configs.sh \
--zookeeper <zkhost>:2181 \
--entity-type topics \
--alter \
--entity-name <topic name> \
--add-config retention.ms=1000
then wait for the purge to take effect (duration depends on size of the topic). Once purged, restore the previous retention.ms value.
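If you prefer to do the same thing programmatically, here is a rough Java AdminClient sketch (Kafka 2.3+; the broker address and topic name are placeholders) that lowers retention.ms and later removes the override to restore the default:
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

void temporarilyShortenRetention() throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
    try (AdminClient admin = AdminClient.create(props)) {
        ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "test"); // hypothetical topic name

        // temporarily set retention.ms=1000 so existing messages become eligible for deletion
        Map<ConfigResource, Collection<AlterConfigOp>> shorten = new HashMap<>();
        shorten.put(topic, Collections.singletonList(
                new AlterConfigOp(new ConfigEntry("retention.ms", "1000"), AlterConfigOp.OpType.SET)));
        admin.incrementalAlterConfigs(shorten).all().get();

        // ... wait for the purge to take effect, then remove the override to restore the default
        Map<ConfigResource, Collection<AlterConfigOp>> restore = new HashMap<>();
        restore.put(topic, Collections.singletonList(
                new AlterConfigOp(new ConfigEntry("retention.ms", ""), AlterConfigOp.OpType.DELETE)));
        admin.incrementalAlterConfigs(restore).all().get();
    }
}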
To purge the queue you can delete the topic:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
then re-create it:
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
--replication-factor 1 --partitions 1 --topic test
While the accepted answer is correct, that method has been deprecated. Topic configuration should now be done via kafka-configs.
kafka-configs --zookeeper localhost:2181 --entity-type topics --alter --add-config retention.ms=1000 --entity-name MyTopic
Configurations set via this method can be displayed with the command
kafka-configs --zookeeper localhost:2181 --entity-type topics --describe --entity-name MyTopic
Here are the steps to follow to delete a topic named MyTopic:
Describe the topic, and take note of the broker ids
Stop the Apache Kafka daemon for each broker ID listed.
Connect to each broker (from step 1), and delete the topic data folder, e.g. rm -rf /tmp/kafka-logs/MyTopic-0. Repeat for other partitions, and all replicas
Delete the topic metadata: zkCli.sh then rmr /brokers/MyTopic
Start the Apache Kafka daemon for each stopped machine
If you skip the topic metadata deletion (the zkCli step above), Apache Kafka will continue to report the topic as present (for example, if you run kafka-list-topic.sh).
Tested with Apache Kafka 0.8.0.
Tested in Kafka 0.8.2, for the quick-start example:
First, add one line to the server.properties file under the config folder:
delete.topic.enable=true
then, you can run this command:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
Then recreate it so that clients can continue operations against an empty topic.
The following command can be used to delete all the existing messages in a Kafka topic:
kafka-delete-records --bootstrap-server <kafka_server:port> --offset-json-file delete.json
The structure of the delete.json file should be as follows:
{
  "partitions": [
    {
      "topic": "foo",
      "partition": 1,
      "offset": -1
    }
  ],
  "version": 1
}
where offset: -1 will delete all the records.
(This command has been tested with Kafka 2.0.1.)
From Kafka 1.1:
Purge a topic:
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name tp_binance_kline --add-config retention.ms=100
Wait at least 1 minute to be sure that Kafka purges the topic.
Then remove the configuration so that it reverts to the default value:
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name tp_binance_kline --delete-config retention.ms
Kafka doesn't have a direct method to purge/clean up a topic (queue), but you can do this by deleting the topic and recreating it.
First of all, make sure the server.properties file has delete.topic.enable=true, and add it if not.
Then delete the topic:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic myTopic
Then create it again:
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic myTopic --partitions 10 --replication-factor 2
Following @steven appleyard's answer, I executed the following commands on Kafka 2.2.0 and they worked for me.
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name <topic-name> --describe
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name <topic-name> --alter --add-config retention.ms=1000
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name <topic-name> --alter --delete-config retention.ms
UPDATE: This answer is relevant for Kafka 0.6. For Kafka 0.8 and later see answer by #Patrick.
Yes, stop kafka and manually delete all files from corresponding subdirectory (it's easy to find it in kafka data directory). After kafka restart the topic will be empty.
A lot of great answers here, but among them I didn't find one about Docker. I spent some time figuring out that pointing at localhost for ZooKeeper from inside the broker container is wrong in this case (obviously!!!):
## this is wrong!
docker exec broker1 kafka-topics --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:258)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:254)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:112)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1826)
at kafka.admin.TopicCommand$ZookeeperTopicService$.apply(TopicCommand.scala:280)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:53)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
I should have used --zookeeper zookeeper:2181 instead of --zookeeper localhost:2181, as per my compose file.
## this might be an option, but as per comment below not all zookeeper images can have this script included
docker exec zookeper1 kafka-topics --zookeeper localhost:2181 --alter --topic mytopic --config retention.ms=1000
The correct command would be:
docker exec broker1 kafka-configs --zookeeper zookeeper:2181 --alter --entity-type topics --entity-name dev_gdn_urls --add-config retention.ms=12800000
Hope it will save someone's time.
Also, be aware that the messages won't be deleted immediately; deletion happens when the log segment is closed.
Sometimes, if you have a saturated cluster (too many partitions, encrypted topic data, SSL in use, the controller on a bad node, or a flaky connection), it will take a long time to purge said topic.
I follow these steps, particularly if you're using TLS.
1: Run with the Kafka tools:
kafka-configs.sh --alter --entity-type topics --zookeeper zookeeper01.kafka.com --add-config retention.ms=1 --entity-name <topic-name>
2: Run:
kafka-console-consumer --consumer-property security.protocol=SSL --consumer-property ssl.truststore.location=/etc/schema-registry/secrets/trust.jks --consumer-property ssl.truststore.password=password --consumer-property ssl.keystore.location=/etc/schema-registry/secrets/identity.jks --consumer-property ssl.keystore.password=password --consumer-property ssl.key.password=password --bootstrap-server broker01.kafka.com:9092 --topic <topic-name> --new-consumer --from-beginning
3: Once the topic is empty, set the topic retention back to the original setting:
kafka-configs.sh --alter --entity-type topics --zookeeper zookeeper01.kafka.com --add-config retention.ms=604800000 --entity-name <topic-name>
Hope this helps someone, as it isn't easily advertised.
The simplest approach is to set the date of the individual log files to be older than the retention period. Then the broker should clean them up and remove them for you within a few seconds. This offers several advantages:
No need to bring down brokers, it's a runtime operation.
Avoids the possibility of invalid offset exceptions (more on that below).
In my experience with Kafka 0.7.x, removing the log files and restarting the broker could lead to invalid offset exceptions for certain consumers. This would happen because the broker restarts the offsets at zero (in the absence of any existing log files), and a consumer that was previously consuming from the topic would reconnect to request a specific [once valid] offset. If this offset happens to fall outside the bounds of the new topic logs, then no harm and the consumer resumes at either the beginning or the end. But, if the offset falls within the bounds of the new topic logs, the broker attempts to fetch the message set but fails because the offset doesn't align to an actual message.
This could be mitigated by also clearing the consumer offsets in zookeeper for that topic. But if you don't need a virgin topic and just want to remove the existing contents, then simply 'touch'-ing a few topic logs is far easier and more reliable, than stopping brokers, deleting topic logs, and clearing certain zookeeper nodes.
Thomas' advice is great, but unfortunately zkCli in old versions of ZooKeeper (for example 3.3.6) does not seem to support rmr. For example, compare the command-line implementation in modern ZooKeeper with version 3.3.
If you are faced with an old version of ZooKeeper, one solution is to use a client library such as zc.zk for Python. For people not familiar with Python: you need to install it using pip or easy_install. Then start a Python shell (python) and you can do:
import zc.zk
zk = zc.zk.ZooKeeper('localhost:2181')
zk.delete_recursive('brokers/MyTopic')
or even
zk.delete_recursive('brokers')
if you want to remove all the topics from Kafka.
Besides updating retention.ms and retention.bytes, I noticed the topic cleanup policy should be "delete" (the default); if it is "compact", the topic will hold on to messages longer, i.e., if it is "compact", you also have to specify delete.retention.ms.
$ ./bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-name test-topic-3-100 --entity-type topics
Configs for topics:test-topic-3-100 are retention.ms=1000,delete.retention.ms=10000,cleanup.policy=delete,retention.bytes=1
I also had to monitor that the earliest/latest offsets were the same to confirm this had happened successfully; you can also check with du -h /tmp/kafka-logs/test-topic-3-100-*.
$ ./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list "BROKER:9095" --topic test-topic-3-100 --time -1 | awk -F ":" '{sum += $3} END {print sum}'
26599762
$ ./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list "BROKER:9095" --topic test-topic-3-100 --time -2 | awk -F ":" '{sum += $3} END {print sum}'
26599762
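A rough Java equivalent of that offset check (broker address and topic name as in the commands above) uses beginningOffsets and endOffsets; once the purge has completed, the two maps should be equal for every partition:
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "BROKER:9095");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    List<TopicPartition> partitions = consumer.partitionsFor("test-topic-3-100").stream()
            .map(p -> new TopicPartition(p.topic(), p.partition()))
            .collect(Collectors.toList());
    Map<TopicPartition, Long> earliest = consumer.beginningOffsets(partitions);
    Map<TopicPartition, Long> latest = consumer.endOffsets(partitions);
    // after a successful purge, earliest == latest for every partition
    System.out.println("earliest=" + earliest + " latest=" + latest);
}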
The other problem is that you have to get the current config first, so you remember what to revert to after the deletion is successful:
./bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-name test-topic-3-100 --entity-type topics
The workaround of temporarily reducing the retention time for a topic, suggested by user644265 in this answer, still works, but recent versions of kafka-configs will warn that the --zookeeper option has been deprecated:
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka
Use --bootstrap-server instead; for example
kafka-configs --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name my_topic --add-config retention.ms=100
and
kafka-configs --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name my_topic --delete-config retention.ms
To clean up all the messages from a particular topic using your application group (the group name should be the same as the application's Kafka group name):
./kafka-path/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topicName --from-beginning --group application-group
Another, rather manual, approach for purging a topic is:
in the brokers:
stop kafka broker
sudo service kafka stop
delete all partition log files (should be done on all brokers)
sudo rm -R /kafka-storage/kafka-logs/<some_topic_name>-*
in zookeeper:
run zookeeper command line interface
sudo /usr/lib/zookeeper/bin/zkCli.sh
use zkCli to remove the topic metadata
rmr /brokers/topics/<some_topic_name>
in the brokers again:
restart broker service
sudo service kafka start
./kafka-topics.sh --describe --zookeeper zkHost:2181 --topic myTopic
This should show the configured retention.ms. Then you can use the alter command above to change it to 1 second (and later revert back to the default).
Topic:myTopic PartitionCount:6 ReplicationFactor:1 Configs:retention.ms=86400000
From Java, using the new AdminZkClient instead of the deprecated AdminUtils:
public void reset() {
    try (KafkaZkClient zkClient = KafkaZkClient.apply("localhost:2181", false, 200_000,
            5000, 10, Time.SYSTEM, "metricGroup", "metricType")) {
        for (Map.Entry<String, List<PartitionInfo>> entry : listTopics().entrySet()) {
            deleteTopic(entry.getKey(), zkClient);
        }
    }
}

private void deleteTopic(String topic, KafkaZkClient zkClient) {
    // skip Kafka internal topic
    if (topic.startsWith("__")) {
        return;
    }
    System.out.println("Resetting Topic: " + topic);
    AdminZkClient adminZkClient = new AdminZkClient(zkClient);
    adminZkClient.deleteTopic(topic);

    // deletions are not instantaneous
    boolean success = false;
    int maxMs = 5_000;
    while (maxMs > 0 && !success) {
        try {
            maxMs -= 100;
            adminZkClient.createTopic(topic, 1, 1, new Properties(), null);
            success = true;
        } catch (TopicExistsException ignored) {
        }
    }
    if (!success) {
        Assert.fail("failed to create " + topic);
    }
}

private Map<String, List<PartitionInfo>> listTopics() {
    Properties props = new Properties();
    props.put("bootstrap.servers", kafkaContainer.getBootstrapServers());
    props.put("group.id", "test-container-consumer-group");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    Map<String, List<PartitionInfo>> topics = consumer.listTopics();
    consumer.close();
    return topics;
}
If you want to do this programmatically within a Java Application you can use the AdminClient's API deleteRecords. Using the AdminClient allows you to delete records on a partition and offset level.
According to the JavaDocs this operation is supported by brokers with version 0.11.0.0 or higher.
Here is a simple example:
String brokers = "localhost:9092";
String topicName = "test";
TopicPartition topicPartition = new TopicPartition(topicName, 0);
RecordsToDelete recordsToDelete = RecordsToDelete.beforeOffset(5L);
Map<TopicPartition, RecordsToDelete> topicPartitionRecordToDelete = new HashMap<>();
topicPartitionRecordToDelete.put(topicPartition, recordsToDelete);
// Create AdminClient
final Properties properties = new Properties();
properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
AdminClient adminClient = AdminClient.create(properties);
try {
    adminClient.deleteRecords(topicPartitionRecordToDelete).all().get();
} catch (InterruptedException e) {
    e.printStackTrace();
} catch (ExecutionException e) {
    e.printStackTrace();
} finally {
    adminClient.close();
}
You have to enable this in the config:
echo "delete.topic.enable=true" >> /opt/kafka/config/server.properties
sudo systemctl stop kafka
sudo systemctl start kafka
Purge (delete) the topic:
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic flows
Create the topic again:
# /opt/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic flows
Read the topic:
# /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic flows --from-beginning
If you are using confluentinc/cp-kafka containers, here is the command to delete the topic:
docker exec -it <kafka-container-id> kafka-topics --zookeeper zookeeper:2181 --delete --topic <topic-name>
Success response:
Topic <topic-name> is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
I'm using the Kafka 2.13 tools. Now --zookeeper is an unrecognized option for kafka-topics.sh. To delete a topic:
bin/kafka-topics.sh --bootstrap-server [kafka broker]:9092 --delete --topic [topic name]
Just take into account that to create the same topic again, you may need to wait a while if you had a lot of data in the deleted topic. When you try to create the same topic, you may get the error:
ERROR org.apache.kafka.common.errors.TopicExistsException: Topic
'[topic name]' is marked for deletion.
Just in case someone is looking for an updated answer (in 2022), I found the following works for Kafka version 3.3.1. This will change the configuration for "your-topic" so that messages are retained for 1000 ms. After the messages are purged, you can set it back to a different value.
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name your-topic --alter --add-config retention.ms=1000
Have you considered having your app simply use a new, renamed topic (i.e. a topic that is named like the original topic but with a "1" appended at the end)?
That would also give your app a fresh clean topic.