For example, say I have a topic with 4 partitions. I send 4k messages to this topic and each partition gets 1k messages. Due to outside factors, 3 of the consumers process all 1k of their respective messages. However, the 4th consumer only gets through 200 messages, leaving 800 left to process. Is there a mechanism that would let me "rebalance" the data in the topic, say by giving partitions 1-3 200 messages each of partition 4's data, leaving all partitions with 200 messages apiece to process?
I am not looking for a way to add additional nodes to the consumer group and have Kafka balance the partitions.
Added output from kafka-reassign-partitions:
Current partition replica assignment
{
"version": 1,
"partitions": [
{
"topic": "MyTopic",
"partition": 0,
"replicas": [
0
],
"log_\ndirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 1,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 4,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 3,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"p\nartition": 2,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 5,
"replicas": [
0
],
"log_dirs": [
"any"
]
}
]
}
Proposed partition reassignment configuration
{
"version": 1,
"partitions": [
{
"topic": "MyTopic",
"partition": 3,
"replicas": [
0
],
"log_ dirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 0,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 5,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 2,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"p artition": 4,
"replicas": [
0
],
"log_dirs": [
"any"
]
},
{
"topic": "MyTopic",
"partition": 1,
"replicas": [
0
],
"log_dirs": [
"any"
]
}
]
}
The partition is assigned when a message is produced; messages are never moved between partitions afterwards. In general, each partition can have multiple consumers (with different consumer group ids) consuming at different paces, so the broker can't move messages between partitions based on the slowness of one consumer (group). There are a few things you can try though:
more partitions, hoping for a fairer distribution of load (you can have more partitions than consumers)
have producers explicitly set the partition on each message, to produce a distribution between partitions that the consumers can better cope with (a sketch follows this list)
have consumers monitor their lag and actively unsubscribe from partitions when they fall behind, so as to let other consumers pick up the load
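For the second option, a minimal sketch using kafka-python (the client library, broker address, topic name, and payloads here are assumptions, not part of the question):

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Bypass the default partitioner (key hash / round robin) and pin each
# record to a partition explicitly, spreading records evenly over the
# 4 partitions from the example.
for i in range(4000):
    producer.send("MyTopic", value=b"message-%d" % i, partition=i % 4)

producer.flush()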
A couple of things you can do to improve performance:
Increase the number of partitions.
Increase the number of consumers in the consumer group consuming the partitions.
The first spreads the load over more partitions and the second increases the parallelism of your consumers so messages are consumed more quickly.
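For the first suggestion, partitions can be added programmatically; a sketch using kafka-python (an assumed client; the topic name and count are illustrative). Note that existing messages are not redistributed; only newly produced messages land on the added partitions:

from kafka.admin import KafkaAdminClient, NewPartitions

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# Grow MyTopic to 8 partitions in total. Existing data stays where it is;
# only new messages are spread over the enlarged partition set.
admin.create_partitions({"MyTopic": NewPartitions(total_count=8)})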
I hope this helps. You can refer to this link for more details:
https://xyu.io/2016/02/29/balancing-kafka-on-jbod/
Kafka consumers are part of consumer groups. A group has one or more consumers in it. Each partition gets assigned to one consumer.
If you have more consumers than partitions, then some of your consumers will be idle. If you have more partitions than consumers, more than one partition may get assigned to a single consumer.
Whenever a new consumer joins, a rebalance gets initiated and the new consumer is assigned some partitions previously assigned to other consumers.
For example, if there are 20 partitions all being consumed by one consumer, and another consumer joins, there'll be a rebalance.
During rebalance, the consumer group "pauses".
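To observe the pause from inside an application, a consumer can register a rebalance listener; a minimal sketch with kafka-python (the client, group id, and topic name are assumptions):

from kafka import KafkaConsumer, ConsumerRebalanceListener

class LoggingListener(ConsumerRebalanceListener):
    # Called first, while the group is paused and partitions are taken away.
    def on_partitions_revoked(self, revoked):
        print("revoked:", revoked)

    # Called once the new assignment has been distributed.
    def on_partitions_assigned(self, assigned):
        print("assigned:", assigned)

consumer = KafkaConsumer(bootstrap_servers="localhost:9092", group_id="my-group")
consumer.subscribe(["MyTopic"], listener=LoggingListener())
for record in consumer:
    print(record.partition, record.offset)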
So I created a Kafka topic and the messages are getting deleted in just 24 hours. I need them to stay as long as the specified retention.ms (28 days). Here is the config:
{
"name": "foo",
"partitions": 1,
"replicas": 3,
"retention": 2419200000,
"cleanupPolicies": [
"delete"
],
"configuration": {
"compression.type": "producer",
"min.insync.replicas": "2",
"message.downconversion.enable": "true",
"segment.jitter.ms": "0",
"cleanup.policy": "delete",
"flush.ms": "9223372036854775800",
"segment.bytes": "1073741824",
"retention.ms": "2419200000",
"flush.messages": "9223372036854775800",
"message.format.version": "2.8-IV1",
"max.compaction.lag.ms": "9223372036854775800",
"file.delete.delay.ms": "60000",
"max.message.bytes": "2000000",
"min.compaction.lag.ms": "0",
"message.timestamp.type": "CreateTime",
"preallocate": "false",
"index.interval.bytes": "4096",
"min.cleanable.dirty.ratio": "0.5",
"unclean.leader.election.enable": "true",
"retention.bytes": "-1",
"delete.retention.ms": "86400000",
"segment.ms": "604800000",
"message.timestamp.difference.max.ms": "9223372036854775800",
"segment.index.bytes": "10485760"
}
}
This is 24 hours:
"delete.retention.ms": "86400000",
When reading a Kafka topic that contains lots of CDC events produced by Kafka Connect using Debezium, where the data source is a MongoDB collection with a TTL, I saw that some of the CDC events are null; they appear in between the deletion events. What does this really mean?
As I understand it, all the CDC events should have the CDC event structure, even the deletion events, so why are there events with a null value?
null,
{
"after": null,
"patch": null,
"source": {
"version": "0.9.3.Final",
"connector": "mongodb",
"name": "test",
"rs": "rs1",
"ns": "testestest",
"sec": 1555060472,
"ord": 297,
"h": 1196279425766381600,
"initsync": false
},
"op": "d",
"ts_ms": 1555060472177
},
null,
{
"after": null,
"patch": null,
"source": {
"version": "0.9.3.Final",
"connector": "mongodb",
"name": "test",
"rs": "rs1",
"ns": "testestest",
"sec": 1555060472,
"ord": 298,
"h": -2199232943406075600,
"initsync": false
},
"op": "d",
"ts_ms": 1555060472177
}
I use https://debezium.io/docs/connectors/mongodb/ without flattening any events, and use the following config:
{
"connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
"mongodb.hosts": "live.xxx.xxx:27019",
"mongodb.name": "testmongodb",
"collection.whitelist": "testest",
"tasks.max": 4,
"snapshot.mode": "never",
"poll.interval.ms": 15000
}
These are so-called tombstone events, used for correct compaction of deleted records; see https://kafka.apache.org/documentation/#compaction
Compaction also allows for deletes. A message with a key and a null payload will be treated as a delete from the log. This delete marker will cause any prior message with that key to be removed (as would any new message with that key), but delete markers are special in that they will themselves be cleaned out of the log after a period of time to free up space. The point in time at which deletes are no longer retained is marked as the "delete retention point" in the above diagram.
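For illustration, a tombstone is just a keyed record with a null value; a sketch with kafka-python (the client, topic name, and key are assumptions):

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# A record with a key and a null value is a tombstone: compaction removes
# every earlier record with the same key, and the tombstone itself is
# cleaned out after delete.retention.ms.
producer.send("some.cdc.topic", key=b"document-id", value=None)
producer.flush()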
I know I can clean a Kafka topic on a broker either by deleting the logs under /data/kafka-logs/topic/* or by setting the retention.ms config to 1000. I want to know how I can clean topics in a multi-node cluster. Should I stop the Kafka process on each broker, delete the logs, and start Kafka again, or would the leader broker alone suffice? If I want to clean by setting retention.ms to 1000, do I need to set it on each broker?
To delete all messages in a specific topic, you can run kafka-delete-records.sh
For example, I have a topic called test, which has 4 partitions.
Create a JSON file, for example j.json:
{
"partitions": [
{
"topic": "test",
"partition": 0,
"offset": -1
}, {
"topic": "test",
"partition": 1,
"offset": -1
}, {
"topic": "test",
"partition": 2,
"offset": -1
}, {
"topic": "test",
"partition": 3,
"offset": -1
}
],
"version": 1
}
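If the topic has many partitions, the file can also be generated instead of written by hand; a sketch using kafka-python (an assumed client; the broker address is illustrative):

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")

# Discover the topic's partitions and request offset -1 (truncate to the
# high watermark, i.e. delete everything) for each of them.
partitions = consumer.partitions_for_topic("test") or set()
spec = {
    "partitions": [
        {"topic": "test", "partition": p, "offset": -1}
        for p in sorted(partitions)
    ],
    "version": 1,
}
with open("j.json", "w") as f:
    json.dump(spec, f, indent=2)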
Now delete all messages with this command:
/opt/kafka/confluent-4.1.1/bin/kafka-delete-records --bootstrap-server 192.168.XX.XX:9092 --offset-json-file j.json
After executing the command, this message will be displayed:
Records delete operation completed:
partition: test-0 low_watermark: 7
partition: test-1 low_watermark: 7
partition: test-2 low_watermark: 7
partition: test-3 low_watermark: 7
If you want to delete an entire topic, you can use kafka-topics.
For example, to delete the test topic:
/opt/kafka/confluent-4.0.0/bin/kafka-topics --zookeeper 109.XXX.XX.XX:2181 --delete --topic test
You do not need to restart Kafka.
I am trying to increase the replication factor of a topic in Apache Kafka. In order to do so I am using the command:
kafka-reassign-partitions --zookeeper ${zookeeperid} --reassignment-json-file ${aFile} --execute
Initially my topic has a replication factor of 1 and 5 partitions; I am trying to increase its replication factor to 3. There are quite a few messages in my topic. When I run the above command I get the error "There is an existing assignment running".
My JSON file looks like this:
{
"version": 1,
"partitions": [
{
"topic": "IncreaseReplicationTopic",
"partition": 0,
"replicas": [2,4,0]
},{
"topic": "IncreaseReplicationTopic",
"partition": 1,
"replicas": [3,2,1]
}, {
"topic": "IncreaseReplicationTopic",
"partition": 2,
"replicas": [4,1,0]
}, {
"topic": "IncreaseReplicationTopic",
"partition": 3,
"replicas": [0,1,3]
}, {
"topic": "IncreaseReplicationTopic",
"partition": 4,
"replicas": [1,4,2]
}
]
}
I am not able to figure out where I am getting wrong. Any pointers will be greatly appreciated.
This message means that another reassignment (of any topic) is already being executed.
Try again after some time, once it has completed; then you won't see this message.
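You can check whether the earlier reassignment has finished by running the same tool with --verify instead of --execute, pointing at the JSON file that started it:
kafka-reassign-partitions --zookeeper ${zookeeperid} --reassignment-json-file ${aFile} --verify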
I am trying to monitor a Celery queue so that if the number of tasks in a queue increases I can choose to spawn more workers.
How can I do this with or without Flower (the Celery monitoring tool)?
For example, I can get a list of all the workers like this:
curl -X GET http://localhost:5555/api/workers
{
"celery#ip-172-0-0-1": {
"status": true,
"queues": [
"tasks"
],
"running_tasks": 0,
"completed_tasks": 0,
"concurrency": 1
},
"celery#ip-172-0-0-2": {
"status": true,
"queues": [
"tasks"
],
"running_tasks": 0,
"completed_tasks": 5,
"concurrency": 1
},
"celery#ip-172-0-0-3": {
"status": true,
"queues": [
"tasks"
],
"running_tasks": 0,
"completed_tasks": 5,
"concurrency": 1
}
}
Similarly, I need a list of pending tasks by queue name so I can start a worker on that queue.
Thanks for not downvoting this question.
Reserved tasks do not make sense here; they only include tasks that have been received but are not yet running.
You could use rabbitmq-management to monitor the queue if you are using RabbitMQ as the broker. The Celery documentation also describes some ways to do the same thing.
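For example, with RabbitMQ as the broker, the management plugin exposes per-queue message counts over HTTP; a sketch in Python (host, credentials, queue name, and threshold are assumptions; the default vhost / is URL-encoded as %2F):

import requests

resp = requests.get(
    "http://localhost:15672/api/queues/%2F/tasks",
    auth=("guest", "guest"),  # default management credentials
)
resp.raise_for_status()
queue = resp.json()

# "messages" counts both ready and unacknowledged tasks in the queue.
if queue["messages"] > 100:
    print("backlog of %d tasks - time to spawn another worker" % queue["messages"])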