For microservices, how do you add nodes for Kafka and tb-rule-engine at runtime, and will it impact availability? (hashing)

For example, how is re-hashing performed after adding nodes? Will there be a period of unavailability?
There are no official documents answering this, so I wonder whether ThingsBoard supports smooth dynamic expansion.

ThingsBoard Rule Engine uses Zookeeper to detect the addition/removal of sibling nodes. The stream of Rule Engine messages is stored in a Kafka topic with a configurable number of partitions. When you add a node, the Rule Engine PartitionService.recalculatePartitions method is executed on each node to determine the list of partitions that node is responsible for. So, the old Rule Engine nodes will stop consuming certain partitions, and the new Rule Engine node will start consuming them. For a short period you may notice a slight degradation in processing speed (due to the warm-up of the new node), but this should not affect the data processing logic itself.
As a side effect, some messages may be processed twice in this case. For example, Rule Engine node A polled 1000 messages from the queue but has not committed the offset yet; the messages are still traveling through the rule chains. Then a second node starts and begins reading the same partition, so it will process the uncommitted messages again.
If you must avoid duplicate processing, configure your queue to process messages one by one (sequentially, with a pack size of 1), although this will cause performance degradation.
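To make the rebalancing idea concrete, here is a minimal sketch (not ThingsBoard's actual PartitionService code) of how a node could recalculate which partitions it owns when the node count changes, assuming a simple modulo assignment; the class and method names below are made up for illustration.

import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of partition recalculation after cluster membership changes.
// Assumption: partition p belongs to the node whose index equals p % nodeCount.
public class PartitionAssignment {

    public static List<Integer> myPartitions(int totalPartitions, int nodeCount, int myNodeIndex) {
        List<Integer> owned = new ArrayList<>();
        for (int p = 0; p < totalPartitions; p++) {
            if (p % nodeCount == myNodeIndex) {
                owned.add(p);
            }
        }
        return owned;
    }

    public static void main(String[] args) {
        // With 12 partitions and 2 nodes, node 0 owns the even partitions.
        System.out.println(myPartitions(12, 2, 0)); // [0, 2, 4, 6, 8, 10]
        // After a 3rd node joins, node 0 keeps only every third partition,
        // and the new node picks up partitions released by the old nodes.
        System.out.println(myPartitions(12, 3, 0)); // [0, 3, 6, 9]
    }
}

The point of the sketch is that most partitions keep their owner after a node joins; only the reassigned ones change hands, which is where the brief warm-up and the possible duplicate processing come from.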

Related

How to scale out microservices when Kafka is used?

I have built a microservice platform on Kubernetes, and Kafka is used as the MQ between services. A confusing problem has arisen: Kubernetes is designed to make scaling microservices easy, but once the number of instances exceeds the number of Kafka partitions, some instances receive nothing to consume. What should I do?
This is a Kafka limitation and has nothing to do with your service scheduler.
Kafka consumer groups simply cannot scale beyond the partition count. So, if you have a single-partition topic because you care about strict event ordering, then only one replica of your service can be actively consuming from the topic, and you'd need to handle failover in specific ways that are outside the scope of Kafka itself.
If your concern is the k8s autoscaler, you can look into the KEDA autoscaler for Kafka-based services.
Kafka, as OneCricketeer notes, bounds the parallelism of consumption by the number of partitions.
If you couple processing with consumption, this limits the number of instances which will be performing work at any given time to the number of partitions to be consumed. Because the Kafka consumer group protocol includes support for reassigning partitions consumed by a crashed (or non-responsive...) consumer to a different consumer in the group, running more instances of the service than there are partitions at least allows for the other instances to be hot spares for fast failover.
It's possible to decouple processing from consumption. The broad outline could be to have every instance of your service join the consumer group; only up to the partition count will actually consume from the topic. They can then make a load-balanced network request to another (or the same) instance, based on the message they consume, to do the processing. If you allow the consumer to have multiple requests in flight, this expands your scaling horizon to max-in-flight-requests * number-of-partitions.
If it happens that the messages in a partition don't need to be processed in order, simple round-robin load-balancing of the requests is sufficient.
Conversely, if there are effectively multiple logical streams of messages multiplexed into a given partition (e.g. messages keyed by equipment ID: the second message for ID A needs to be processed after the first message for ID A, but can be processed in any order relative to messages for ID B), you can still do this, but it needs some care around ensuring ordering. Additionally, given the throughput you should be able to get from a consumer of a single partition, needing more processing instances than partitions suggests you'll want a load-balancing approach where, if request B needs to be processed after request A (presumably because A could affect the result of B), A and B get routed to the same instance so they can leverage local in-memory state rather than do a read-from-db then write-to-db pas de deux.
This sort of architecture can be implemented in any language, though maintaining a reasonable level of availability and consistency is going to be difficult. There are frameworks and toolkits which can deliver a lot of this functionality: Akka (JVM), Akka.Net, and Protoactor all implement useful primitives in this area (disclaimer: I'm employed by Lightbend, which maintains and provides commercial support for one of those, though I'd have (and actually have) made the same recommendations prior to my employment there).
When consuming messages from Kafka in this style of architecture, you will definitely have to make the choice between at-most-once and at-least-once delivery guarantees and that will drive decisions around when you commit offsets. Note particularly that you need to be careful, if doing at-least-once, to not commit until every message up to that offset has been processed (or discarded), lest you end up with "at-least-zero-times", which isn't a useful guarantee. If doing at-least-once, you may also want to try for effectively-once: at-least-once with idempotent processing.
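As a minimal sketch of this decoupled style, assuming the plain Apache Kafka Java client (kafka-clients): one thread consumes, a small worker pool processes, and the offsets for a polled batch are committed only after every record in that batch has finished, giving at-least-once behaviour. The topic and group names are placeholders, and note that handing records to an unordered pool sacrifices per-partition ordering unless you also route work by key.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DecoupledConsumer {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-processors");            // placeholder group
        props.put("enable.auto.commit", "false");               // we commit manually
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService workers = Executors.newFixedThreadPool(8); // processing pool

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic"));        // placeholder topic
            while (true) {
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
                List<Future<?>> inFlight = new ArrayList<>();
                for (ConsumerRecord<String, String> record : batch) {
                    inFlight.add(workers.submit(() -> process(record)));
                }
                for (Future<?> f : inFlight) {
                    f.get(); // never commit past a message that has not been processed
                }
                consumer.commitSync();                           // commit the whole batch
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // Placeholder for the business logic or the load-balanced request to another instance.
        System.out.printf("processed %s-%d@%d%n",
                record.topic(), record.partition(), record.offset());
    }
}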

Kafka consumer design to process a huge volume of data with multiple instances

I am trying to design Kafka consumers, and I have hit a roadblock in how to design the process. I am thinking of two options:
1. Process records directly from Kafka.
2. Staging table write from Kafka and process records.
Approach 1: Process Key messages on the go from Kafka:
• Read messages one at a time from Kafka, and break the loop if there are no records to process (the number of messages to process is configurable).
• Execute business rules.
• Apply changes to consumer database.
• Update the Kafka offset after processing the message.
• Insert into staging table (used for PD guide later on)
Questions with above approach:
• Is it OK to subscribe to a partition and keep the lock open on the Kafka partition until a configurable number of messages is processed, and only then apply the business rules and apply the changes to the database? All of this happens in the same process; are there any performance issues in doing it this way?
• Is it OK to manually commit the offset to Kafka? (Performance issues with manual offset commit).
Approach 2: Staging table write from Kafka and process records
Process 1: Consume events from Kafka and put them in a staging table.
Process 2: Read the staging table (a configurable number of rows), execute business rules, apply the consumer database changes, and update the status of processed records in the staging table. (We may have multiple processes doing this step.)
I see a lot of downside on this approach:
• We miss out on the offset handling provided by Kafka and have to manually update the status of processed records in the staging table.
• Locking and blocking on the staging table with multiple instances, since we insert and then update the same staging table after processing. (Note: I could design separate tables, move the data there, and process it there, but that would introduce multiple processes again.)
How should I design a multi-instance Kafka consumer with a huge volume of data to process? Which design is appropriate: reading data from Kafka and processing the messages on the go, or staging them in a table and writing another job to process those messages?
This is how I think we can get the best throughput without worrying about the loss of messages:
Maximize the number of partitions.
Deploy the consumers (at most as many as the number of partitions, or even fewer if your consumers can operate multi-threaded without any problem).
Read single-threaded from within each consumer (with auto offset commit) and put the messages in a BlockingQueue, which you can size based on the number of actual processing threads in each consumer.
If the processing fails, you can retry until success or else put the message in a dead-letter queue. Don't forget to implement shutdown hooks that finish processing already-consumed messages.
If you want to ensure ordering, like processing events with the same key one after another, or ordering on any other factor within a single partition, you can use a deterministic executor. I have written a basic ExecutorService in Java that can execute multiple messages deterministically without compromising the multi-threading of logically separate events. Link: https://github.com/mukulbansal93/deterministic-threading
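For illustration, here is a stripped-down sketch of that per-key dispatch idea (this is not the linked library, just the general shape): each key is consistently hashed to one single-threaded worker, so messages that share a key are processed in order while different keys run in parallel.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified "deterministic executor": same key -> same single-threaded worker,
// so per-key ordering is preserved while unrelated keys are processed concurrently.
public class KeyedDispatcher {
    private final ExecutorService[] workers;

    public KeyedDispatcher(int threads) {
        workers = new ExecutorService[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void dispatch(String key, Runnable task) {
        // Same key always maps to the same worker index.
        int idx = Math.floorMod(key.hashCode(), workers.length);
        workers[idx].submit(task);
    }

    public static void main(String[] args) {
        KeyedDispatcher dispatcher = new KeyedDispatcher(4);
        dispatcher.dispatch("device-A", () -> System.out.println("A event 1"));
        dispatcher.dispatch("device-A", () -> System.out.println("A event 2")); // runs after A event 1
        dispatcher.dispatch("device-B", () -> System.out.println("B event 1")); // may run concurrently
    }
}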
To answer your questions:
Is it OK to subscribe to a partition and keep the lock open on the Kafka partition until a configurable number of messages is processed, and then apply business rules and apply changes to the database, all in the same process; any performance issues doing it this way? I don't see many performance issues here, as you are processing in bulk. However, it is possible that one of your consumed messages takes a long time while the others get processed; in that case you will not read further messages from Kafka, leading to a performance bottleneck.
Is it OK to manually commit the offset to Kafka? (Performance issues with manual offset commit.) This is definitely going to be the lowest-throughput approach, as offset committing is an expensive operation.
The first approach where you consume the data and update a table accordingly sounds like the right way.
Kafka guarantees:
At least once: you may get the same message twice. That means your processing needs to be idempotent, e.g. set the amount to x rather than add an amount to the previous value.
Order (per partition): Kafka promises that you consume messages in the same order they were produced, per partition. It's like a queue per partition.
If, when you say "execute business rules", you also need to read previous writes, that means you need to process messages one by one.
How to define the partitions:
If you define one partition, you won't have a problem with conflicts, but you will only have one consumer, and that doesn't scale.
If you arbitrarily define multiple partitions, then you may lose the order.
Why is that a problem, and how do you avoid it? You need to define the partitions according to your business model:
For example, let's say that every message updates some user's row in the DB. When you process a message, you want to read the user row, check some fields, and then update (or not) according to those fields.
That means that if you define the partition by user id (user-id % number of partitions),
you guarantee that you won't have a race condition between two updates on the same user, and you can scale to multiple machines/processes/threads. Each consumer is in charge of some set of users, but it's always the same users.
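A small sketch of that keying on the producer side, using the plain Kafka Java client; the topic name, user id, and payload below are placeholders. Because the record key is the user id, the default partitioner hashes the key, so all updates for the same user land in the same partition and are consumed in order by a single consumer.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class UserKeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String userId = "user-42";                 // hypothetical key
            String payload = "{\"balance\": 100}";     // idempotent: set, not add
            // Same key -> same partition -> per-user ordering and no race between updates.
            producer.send(new ProducerRecord<>("user-updates", userId, payload));
        }
    }
}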
The design of your consumer depends on your use case.
If there are other downstream processes that expect the same data but cannot connect to your Kafka cluster, then having a staging table is a good idea.
I think in your case approach 1 with a little alteration is a good way to go.
However, you don't need to break the loop if there are no new messages in the topic.
Also, there is a consumer property (max.poll.records) that configures the number of records you poll from Kafka in a single request (default 500); you might want to lower it if each message takes a long time to process (to avoid timeouts or unwanted rebalancing issues).
Since you mentioned the amount of data is huge, I would recommend having more partitions for concurrency if processing order does not matter to you. Concurrency is achieved by creating a consumer group with an instance count no greater than the number of partitions for the topic. (If the consumer instance count is greater than the number of partitions, the extra instances will be idle.)
If order does matter, the producer should ideally send logically grouped messages with the same message key, so that all messages with the same key land in the same partition.
About offset committing: if you synchronously commit each message to Kafka, you will definitely have a performance impact. Usually the offset is committed once per consumed batch of records, e.g. poll 500 records -> process -> commit the batch of records.
However, if you need to send a commit for each message, you might want to opt for an asynchronous commit.
Additionally, when partitions are assigned to a consumer group instance, it does not lock the partitions; other consumer groups can subscribe to the same topic and consume messages concurrently.
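Putting those tuning points together, a rough sketch with the plain Kafka Java client: max.poll.records caps the batch size, and one asynchronous commit is issued per polled batch rather than per message. The topic and group names are placeholders, and the processing step is left as a comment.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class BatchCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "batch-processors");   // hypothetical
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");        // default is 500
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));                       // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : batch) {
                    // business rules / consumer database update go here
                }
                // One async commit per batch keeps the poll loop from blocking.
                consumer.commitAsync((offsets, exception) -> {
                    if (exception != null) {
                        System.err.println("Commit failed: " + exception.getMessage());
                    }
                });
            }
        }
    }
}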

On-demand horizontal scaling of event-driven architectures

What is the best way to horizontally scale an event driven architecture when load increases?
Many people suggest Kafka as the message queue for an event-driven architecture; however, Kafka only allows one consumer in a consumer group per partition. Repartitioning, especially during heavy-load situations, can be costly and time consuming.
Having many consumers in a consumer group that take work and acknowledge quickly would give some horizontal scaling, but now message order needs to be considered, as well as load completion.
With RabbitMQ, queues can be created and deleted on the fly; however, that would require an additional orchestrator to help manage and distribute load.
Also, none of this addresses the load-balancing problem that comes with the territory.
Any help would be appreciated. Thanks.
A bit late in answering but here goes,
Your reasoning that the scaling should occur at the message bus layer is not entirely correct. If we take an end-to-end scenario, increased load means an increase in incoming requests to the front end (the API layer). See the reference event-driven architecture in the link below.
Assuming some form of auto-scaling is present (Kubernetes replication factor, Amazon autoscaler), the front end will scale out to handle the extra load. After initial pre-processing, the service posts the event to the message queue of the event-driven architecture.
In Kafka specifically, the topic partition is the unit of scale-out, since within a consumer group only one consumer can read from a given partition. Typically you would define the number of partitions in advance, based on the throughput of a single partition.
As the reference article mentions, if the throughput of a single partition is p and you need a throughput of t, then you need t/p partitions.
If t is the throughput of the normal expected load, you can provision in advance for 2x, 3x, or 10x the normal throughput by creating correspondingly more partitions.
Typically throughput on a single partition is in excess of 10 Mb/s.
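As a purely illustrative calculation: if a single partition sustains p = 10 MB/s and the expected load is t = 30 MB/s, you need t/p = 3 partitions; provisioning for 3x headroom means creating 9 partitions up front. Creating the extra partitions in advance is cheap, whereas adding them later changes the key-to-partition mapping for keyed data.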

Open source multi region, consistent at-least-once FIFO solution: Dedicated Queue (e.g. Kafka) vs Database (e.g. Cassandra, RethinkDB)?

I've been searching for a FIFO solution where producers and consumers can be deployed in multiple data centers, in different regions (e.g. >20 ms ping). The price of increased latency is obviously accepted; the main goal is to transparently handle the increased latency, latency spikes, and link failures.
This theoretical use-case is like this:
Super Fast Producer --sticky-load-balancing-with-fail-over--> Multi-Region Processors -->
Queue(FIFO based on order established by the producer) --> Multi-Region Consumers with fail-over
Consumers should not consume from the same "queue" at the same time; however, let's not consider the scaling aspect here. If replication and fail-over work well for one "queue", partitioning can be applied even at the application level with a decent amount of effort.
Thoughts:
In order for fail-over to work correctly, the queue (e.g. messages, consumer offsets) must be active-active and synchronously replicated between data centers. I don't see how an active-standby asynchronous topology can work without losing messages or breaking FIFO in failure scenarios.
Kafka stretch cluster would be perfect, although it can span multiple availability zones (<2ms ping and stable connections), most people advise against multiple regions (>15ms ping, unstable connections).
Confluent Platform 5.4 with the synchronous replication feature is in Preview, we could fail-over consumers at the application level in case the local cluster is down. Since data is replicated synchronously we should not break FIFO or lose messages during fail-over. In order to ensure a more active-active setup, we could rotate the Consumers periodically between data centers (e.g. once or twice a day in off-peak hours).
A DB (like Cassandra) can handle consistency across multiple data-center/regions. However, a queue use-case is an anti-pattern (Using Cassandra as a Queue).
The first point would be the pure insert/delete workload, which will make the DB work really hard to remove tombstones. It is a sub-optimal use of the DB, but if it can handle the workload reliably then it is not a problem, IMHO.
The second point is polling: consumers will generate a large number of quorum reads just by polling the DB, even if there is no data. Again, IMHO Cassandra will handle this reliably, even if it is a poor use of its capabilities.
Using a DB with change notifications, like CouchDB/RethinkDB: CouchDB's replication is asynchronous, so I do not see how consumers can have a consistent view of the queue. For RethinkDB, I am not sure how reliably it works across regions with majority reads and writes.
Have you deployed such "queues" in production, which would you choose?
Kafka supports two patterns: publish-subscribe and message queue. The differences are discussed in several places, e.g. here.
The problem you stated can be solved using Kafka. The FIFO queue can be implemented using topics, partitions, and message keys. All messages with the same key will belong to the same partition, hence we achieve the FIFO property. If you want to increase the consuming throughput, you just need to increase the number of partitions per topic and the number of consumers.
Other queues such as RabbitMQ do not make this as easy: to load-balance the workload, we must use separate queues, which increases the management cost.
You can implement many kinds of delivery semantics such as at-most-once, at-least-once, and exactly-once (literally) on the producer side and the consumer side. Kafka also supports multi-datacenter deployments.
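As a rough sketch of what the stronger producer-side semantics look like with the Kafka Java client: idempotence plus a transaction, with a message key so that all messages for one logical stream stay in one partition (FIFO per key). The topic name, key, and transactional id below are placeholders.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ExactlyOnceProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");      // no duplicates on retry
        props.put(ProducerConfig.ACKS_CONFIG, "all");                     // wait for in-sync replicas
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "orders-tx-1"); // hypothetical id
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            // Keying by an order id keeps that order's messages in one partition (FIFO per key).
            producer.send(new ProducerRecord<>("orders", "order-123", "created"));
            producer.send(new ProducerRecord<>("orders", "order-123", "paid"));
            producer.commitTransaction();
        }
    }
}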
Cassandra is not designed for queue modeling, and as you said, using Cassandra as a queue is an anti-pattern. It can quickly turn into a nightmare.
The main problem with the queue is the deletes (Cassandra doesn't perform well for frequently updated data anyway).
Here is a link that might help you understand the delete/queue problem.
https://lostechies.com/ryansvihla/2014/10/20/domain-modeling-around-deletes-or-using-cassandra-as-a-queue-even-when-you-know-better/

KafkaIO uneven partition consumption after a while

I have a simple Dataflow pipeline (job id 2018-05-15_06_17_40-8591349846083543299) with a minimum of 1 worker and a maximum of 7 workers that does the following:
Consume from 4 Kafka topics using KafkaIO. Each topic is represented differently and is a separate PCollection.
Perform a transformation on each PCollection to output a PCollection in a standard representation.
Merge the 4 PCollections using Flatten.pCollections.
Window into hourly windows with the following trigger:
Repeatedly
    .forever(
        AfterFirst.of(
            AfterPane.elementCountAtLeast(40000),
            AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(5))))
    .orFinally(AfterWatermark.pastEndOfWindow())
Write these events to GCS using AvroIO windowed writes with 14 shards.
When the pipeline is launched, initially everything is fine, but several hours later the system lag increases dramatically in the AvroIO:GroupIntoShards step.
Upon further investigation, one of the topics is lagging behind by many hours (this topic has the highest events per second compared to the other 3). Looking at the logs, I see Closing idle reader for S12-000000000000000a, which is understandable. However, the topic's consumer group offsets for the 36 partitions are in a state where the offset is very low for some partitions but very high for others. The log-end-offset is more or less evenly distributed, and the records we are producing are around the same size.
Questions:
If the System Lag is high in a certain step, does that prevent the Kafka consumers from consuming?
Any possible reason for the uneven distribution in Kafka offsets?
The PCollections that are merged have different traffic patterns, some low and some high. Would adding the AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardMinutes(5)) trigger effectively start writing to GCS for each (window, shard) 5 minutes after an event is first seen in a window?
Updating the pipeline using the same code / configuration brings it back into a normal state where the consumed rate is much higher (due to the lag before the restart) than the produced rate.
Addressing 3 questions raised (I left a comment about the specific job):
No, system lag does not prevent Kafka from consuming.
In general if there is lots of work for downstream stages ready to be processed, that can delay upstream work from starting. But that is not KafkaIO specific.
That does not seem to be the case here. In general, assuming there is no skew among the Kafka partitions themselves, heavy skew in the Beam processing can cause readers to be assigned to workers that are doing more work than others.
I think yes. I think pastFirstElementInPane() applies to elements from any of the sources.
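For reference, a hedged sketch of how the trigger from the question attaches to the hourly windowing in Beam's Java SDK; the events PCollection, the allowed lateness, and the accumulation mode are assumptions rather than details taken from the original job.

import org.apache.beam.sdk.transforms.windowing.AfterFirst;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

class HourlyWindowing {
    // `events` stands in for the flattened PCollection from the pipeline above.
    static PCollection<String> window(PCollection<String> events) {
        return events.apply(
            Window.<String>into(FixedWindows.of(Duration.standardHours(1)))
                .triggering(
                    Repeatedly.forever(
                        AfterFirst.of(
                            AfterPane.elementCountAtLeast(40000),
                            AfterProcessingTime.pastFirstElementInPane()
                                .plusDelayOf(Duration.standardMinutes(5))))
                        .orFinally(AfterWatermark.pastEndOfWindow()))
                .withAllowedLateness(Duration.standardMinutes(10)) // assumption, not from the job
                .discardingFiredPanes());                          // assumption, not from the job
    }
}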