How to scale microservices when Kafka is used as the MQ? (Kubernetes)

I have built a microservice platform on Kubernetes, and the services use Kafka as their MQ. Now a confusing question has come up. Kubernetes is designed to make scaling microservices easy, but when the number of replicas exceeds the number of Kafka partitions, some instances cannot consume anything. What should I do?

This is a Kafka limitation and has nothing to do with your service scheduler.
Kafka consumer groups simply cannot scale beyond the partition count. So, if you have a single-partition topic because you care about strict event ordering, then only one replica of your service can be active and consuming from the topic, and you'd need to handle failover in ways that are outside the scope of Kafka itself.
If your concern is the k8s autoscaler, then you can look into the KEDA autoscaler for Kafka-consuming services.
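To see this behavior from inside a service, here is a minimal sketch (not an official example) of a plain Java consumer that logs the partitions it is assigned; the broker address, group id, and topic name are placeholders. A replica started beyond the partition count simply receives an empty assignment and sits idle until a rebalance hands it work, which is also what makes it a hot spare.

```java
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PartitionAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");          // hypothetical address
        props.put("group.id", "my-service");                   // hypothetical group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"),   // hypothetical topic
                new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("Revoked: " + parts);
                    }
                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // An instance beyond the partition count sees an empty
                        // assignment and acts only as a hot spare until a
                        // rebalance gives it work.
                        System.out.println("Assigned: " + parts);
                    }
                });

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("p%d offset %d: %s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```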

Kafka, as OneCricketeer notes, bounds the parallelism of consumption by the number of partitions.
If you couple processing with consumption, this limits the number of instances which will be performing work at any given time to the number of partitions to be consumed. Because the Kafka consumer group protocol includes support for reassigning partitions consumed by a crashed (or non-responsive...) consumer to a different consumer in the group, running more instances of the service than there are partitions at least allows for the other instances to be hot spares for fast failover.
It's possible to decouple processing from consumption. The broad outline would be to have every instance of your service join the consumer group; at most as many instances as there are partitions will actually consume from the topic. Each consuming instance can then make a load-balanced network request to another (or the same) instance, based on the message it consumed, to do the processing. If you allow the consumer to have multiple requests in flight, this expands your scaling horizon to max-in-flight-requests * number-of-partitions.
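Here is a rough sketch of that decoupling, assuming a plain Java consumer that forwards each message to a load-balanced HTTP endpoint (for instance a Kubernetes Service in front of all instances). The endpoint URL, topic, group id, and in-flight limit are all illustrative assumptions, not part of any framework.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.Semaphore;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DecoupledProcessor {
    // Bound the number of processing requests in flight per consuming instance.
    private static final int MAX_IN_FLIGHT = 32;                  // illustrative value

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");             // hypothetical address
        props.put("group.id", "decoupled-processors");            // hypothetical group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        HttpClient http = HttpClient.newHttpClient();
        Semaphore inFlight = new Semaphore(MAX_IN_FLIGHT);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("work-items"));  // hypothetical topic

            while (true) {
                for (ConsumerRecord<String, String> record :
                        consumer.poll(Duration.ofMillis(200))) {
                    inFlight.acquire();   // back-pressure: wait if too many requests in flight

                    // Hand the message to a load-balanced processing endpoint;
                    // hypothetical URL.
                    HttpRequest request = HttpRequest.newBuilder()
                            .uri(URI.create("http://processor-service/process"))
                            .POST(HttpRequest.BodyPublishers.ofString(record.value()))
                            .build();

                    http.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                        .whenComplete((response, error) -> inFlight.release());
                }
            }
        }
    }
}
```

With this shape, every instance can do processing work even when only a few of them hold partitions, and the in-flight limit keeps the consumers from overrunning the processors.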
If it happens that the messages in a partition don't need to be processed in order, simple round-robin load-balancing of the requests is sufficient.
Conversely, if it's the case that there are effectively multiple logical streams of messages multiplexed into a given partition (e.g. if messages are keyed by equipment ID; the second message for ID A needs to be processed after the first message, but could be processed in any order relative to messages from ID B), you can still do this, but it needs some care around ensuring ordering. Additionally, given the amount of throughput you should be able to get from a consumer of a single partition, needing to scale out to the point where you have more processing instances than partitions suggests that you'll want to investigate load-balancing approaches where if request B needs to be processed after request A (presumably because request A could affect the result of request B), A and B get routed to the same instance so they can leverage local in-memory state rather than do a read-from-db then write-to-db pas de deux.
This sort of architecture can be implemented in any language, though maintaining a reasonable level of availability and consistency is going to be difficult. There are frameworks and toolkits which can deliver a lot of this functionality: Akka (JVM), Akka.Net, and Protoactor all implement useful primitives in this area (disclaimer: I'm employed by Lightbend, which maintains and provides commercial support for one of those, though I'd have (and actually have) made the same recommendations prior to my employment there).
When consuming messages from Kafka in this style of architecture, you will definitely have to make the choice between at-most-once and at-least-once delivery guarantees and that will drive decisions around when you commit offsets. Note particularly that you need to be careful, if doing at-least-once, to not commit until every message up to that offset has been processed (or discarded), lest you end up with "at-least-zero-times", which isn't a useful guarantee. If doing at-least-once, you may also want to try for effectively-once: at-least-once with idempotent processing.
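For the at-least-once case, here is a minimal sketch of the commit discipline described above: auto-commit is disabled and offsets are committed only after the whole batch has been processed. Broker address, group id, and topic are placeholders, and process() stands in for your own (ideally idempotent) logic.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");     // hypothetical address
        props.put("group.id", "at-least-once-group");     // hypothetical group
        props.put("enable.auto.commit", "false");         // we control when offsets move
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));   // hypothetical topic

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));

                for (ConsumerRecord<String, String> record : records) {
                    process(record);   // idempotent processing gives "effectively once"
                }

                // Commit only after every record in the batch has been processed.
                // Committing earlier risks "at-least-zero-times" on a crash.
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("offset %d: %s%n", record.offset(), record.value());
    }
}
```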

Related

Streaming audio streams through MQ (scalability)

My question is rather specific, so I will be OK with a general answer which points me in the right direction.
Description of the problem:
I want to deliver specific task data from multiple producers to a particular consumer working on the task (both are Docker containers run in k8s). The relation is many-to-many: any producer can create a data packet for any consumer. Each consumer is processing ~10 streams of data at any given moment, while each data stream consists of about 100 messages of 160 B per second (from different producers).
Current solution:
In our current solution, each producer has a cache mapping tasks to consumer (IP:PORT) pairs and uses UDP packets to send the data directly. It is nicely scalable but rather messy in deployment.
Question:
Could this be realized in the form of a message queue of sorts (Kafka, Redis, RabbitMQ...)? E.g., having a channel for each task where producers send data while the consumer, well, consumes it? How many streams would be feasible for the MQ to handle (I know it would differ; suggest your best estimate)?
Edit: Would 1000 streams, which equal 100,000 messages per second, be feasible? (Throughput for 1000 streams is 16 MB/s.)
Edit 2: Fixed packet size to 160 B (typo).
Unless you need disk persistence, do not even look in the message broker direction. You are just adding one problem on top of another. Direct network code is the proper way to solve audio broadcast. If your code is messy and you want a simplified programming model, a good alternative to raw sockets is the ZeroMQ library. It will give you all the message-broker functionality you care about: a) discrete messaging instead of streams, b) client discoverability; without going overboard with another software layer.
When it comes to "feasible": 100,000 messages per second with 160kb messages is a lot of data; it comes to 1.6 Gb/sec even without any messaging protocol on top of it. In general, Kafka shines at high message throughput for small messages, as it batches messages on many layers. Because of this, Kafka's sustained performance is usually constrained by disk speed; Kafka is intentionally written this way (the slowest component is the disk). However, your messages are very large and you need to both write and read them at the same time, so I don't see it happening without a large cluster installation, as your problem is actual data throughput, not the number of messages.
Because you are data-limited, even other classic MQ software like ActiveMQ, IBM MQ, etc. is actually able to cope very well with your situation. In general, classic brokers are much more "chatty" than Kafka and cannot hit Kafka's message throughput when handling small messages. But as long as you are using large non-persistent messages (and proper broker configuration), you can expect decent performance in MB/sec from them too. With proper configuration, classic brokers will directly connect a producer's socket to a consumer's socket without hitting disk, whereas Kafka will always persist to disk first. So they even have some latency advantages over Kafka.
However, this direct socket-to-socket "optimisation" just brings us full circle to the start of this answer. Unless you need audio stream persistence, all you are doing with a broker in the middle is finding an indirect way of binding producing sockets to consuming ones and then sending discrete messages over this connection. If that is all you need, ZeroMQ is made for this.
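If you want to see what that looks like, here is a minimal JeroMQ (ZeroMQ for Java) PUB/SUB sketch: the producer tags each 160 B packet with a task id, and the consumer subscribes only to the tasks it currently works on. Hosts, ports, and the task id are made up for illustration, and a real setup would add reconnect and flow-control handling.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// A minimal JeroMQ PUB/SUB sketch: a producer publishes audio frames tagged with
// a task id, and a consumer subscribes only to the tasks it is working on.
// All endpoints, task ids, and payloads here are illustrative.
public class ZeroMqAudioSketch {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            if (args.length > 0 && args[0].equals("producer")) {
                ZMQ.Socket publisher = context.createSocket(SocketType.PUB);
                publisher.bind("tcp://*:5556");              // hypothetical port

                for (int i = 0; i < 1000; i++) {
                    byte[] frame = new byte[160];            // 160 B audio packet
                    publisher.sendMore("task-42");           // task id as the topic frame
                    publisher.send(frame);
                }
            } else {
                ZMQ.Socket subscriber = context.createSocket(SocketType.SUB);
                subscriber.connect("tcp://producer-host:5556");  // hypothetical host
                subscriber.subscribe("task-42".getBytes(ZMQ.CHARSET));

                while (!Thread.currentThread().isInterrupted()) {
                    String task = subscriber.recvStr();      // topic frame
                    byte[] frame = subscriber.recv();        // payload frame
                    System.out.printf("task %s: %d bytes%n", task, frame.length);
                }
            }
        }
    }
}
```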
There is also a messaging protocol called MQTT which may be of interest to you if you choose to pursue a broker solution, as it is meant to be an extremely scalable solution with low overhead.
A basic approach
From a Kafka perspective, each stream in your problem can map to one topic, so there is one producer-consumer pair per topic.
Con: If you have lots of streams, you will end up with a lot of topics, and IMO the solution can get messier here too as the number of topics grows.
An alternative approach
Alternatively, the best way is to map multiple streams to one topic, where each stream is separated by a key (like the IP:Port combination you use), and then have multiple consumers, each subscribing to a specific set of partitions as determined by the key. Partitions are the point of scalability in Kafka.
Con: Though you can increase the no. of partitions, you cannot decrease them.
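A minimal producer sketch of this keyed approach, where the stream identifier (e.g. the IP:Port pair) is used as the message key so that every packet of a stream lands in the same partition. Broker address, topic name, and the key value are illustrative.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class StreamKeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // hypothetical address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            String streamKey = "10.0.0.7:9000";         // hypothetical stream id (IP:Port)
            byte[] packet = new byte[160];              // 160 B data packet

            // All records with the same key hash to the same partition,
            // so each stream keeps its ordering within that partition.
            producer.send(new ProducerRecord<>("streams", streamKey, packet));
        }
    }
}
```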
Type of data matters
If your streams are heterogeneous, in the sense that it would not be apt for all of them to share a common topic, you can create more topics.
Usually, topics are determined by the data they host and/or what their consumers do with the data in the topic. If all of your consumers do the same thing i.e. have the same processing logic, it is reasonable to go for one topic with multiple partitions.
Some points to consider:
Unlike in your current solution (I suppose), a message doesn't get lost once it is received and processed; rather, it continues to stay in the topic until the configured retention period expires.
Take proper care in determining the keying strategy, i.e. which messages land in which partitions. As said earlier, if all of your consumers do the same thing, all of them can be in a consumer group to share the workload.
Consumers belonging to the same group do a common task and will subscribe to a set of partitions determined by the partition assignor. Each consumer will then get a set of keys, in other words a set of streams, or in terms of your current solution, a set of one or more IP:Port pairs.

Open source multi region, consistent at-least-once FIFO solution: Dedicated Queue (e.g. Kafka) vs Database (e.g. Cassandra, RethinkDB)?

I've been searching for a FIFO solution where producers and consumers can be deployed in multiple data centers, in different regions (e.g. >20 ms ping). While obviously paying the price of increased latency, the main goal is to transparently handle the increased latency, spikes in latency, and link failures.
This theoretical use-case is like this:
Super Fast Producer --sticky-load-balancing-with-fail-over--> Multi-Region Processors -->
Queue(FIFO based on order established by the producer) --> Multi-Region Consumers with fail-over
Consumers should not consume from the same "queue" at the same time; however, let's not consider the scaling aspect here. If replication and fail-over work well for one "queue", partitioning can be applied even at the application level with a decent amount of effort.
Thoughts:
In order for fail-over to work correctly, the queue (e.g. messages, consumer offsets) must be synchronously replicated active-active between data centers. I don't see how an active-standby asynchronous topology can work without losing messages or breaking FIFO ordering in failure scenarios.
A Kafka stretch cluster would be perfect; although it can span multiple availability zones (<2 ms ping and stable connections), most people advise against spanning multiple regions (>15 ms ping, unstable connections).
Confluent Platform 5.4 with the synchronous replication feature is in Preview; we could fail over consumers at the application level in case the local cluster is down. Since data is replicated synchronously, we should not break FIFO ordering or lose messages during fail-over. To get a more active-active setup, we could rotate the consumers periodically between data centers (e.g. once or twice a day in off-peak hours).
A DB (like Cassandra) can handle consistency across multiple data-center/regions. However, a queue use-case is an anti-pattern (Using Cassandra as a Queue).
The first point would be about the pure insert/delete workload, which makes the DB work really hard to remove tombstones. It is a sub-optimal use of the DB, but if it can handle the workload reliably then it is not a problem IMHO.
The second point is about polling, consumers will generate a large amount of quorum reads just for polling the DB even if there is no data. Again IMHO Cassandra will handle this reliably even if it is a poor use of its capabilities.
Using a DB with notifications, like CouchDB/RethinkDB: CouchDB's replication is asynchronous, so I do not see how consumers can have a consistent view of the queue. For RethinkDB I am not sure how reliably it works across regions with majority reads and writes.
Have you deployed such "queues" in production? Which would you choose?
Kafka supports two patterns, publish-subscribe and message queue; the differences are discussed in some places, e.g. here.
The problem you stated can be solved using Kafka. The FIFO queue can be implemented using topics, partitions, and message keys. All messages with the same key will belong to the same partition, hence we can achieve the FIFO property. If you want to increase consumption throughput, you just need to increase the number of partitions per topic and the number of consumers.
With other queues such as RabbitMQ this is not as easy: to load-balance the workload, you must use separate queues, which increases the management cost.
You can implement various delivery semantics such as at-most-once, at-least-once, and exactly-once on both the producer side and the consumer side. Kafka also supports multi-data-center deployments.
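As an illustration of the producer-side semantics, here is a hedged sketch of an idempotent, transactional Kafka producer. The transactional id, topic, and broker address are placeholders, and consumers would need read_committed isolation to get the exactly-once effect end to end.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");        // hypothetical address
        props.put("acks", "all");                            // wait for all in-sync replicas
        props.put("enable.idempotence", "true");             // no duplicates on retries
        props.put("transactional.id", "orders-producer-1");  // hypothetical transactional id
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                // Both records become visible to read_committed consumers
                // together, or not at all.
                producer.send(new ProducerRecord<>("orders", "order-1", "created"));
                producer.send(new ProducerRecord<>("orders", "order-1", "paid"));
                producer.commitTransaction();
            } catch (Exception e) {
                producer.abortTransaction();
                throw e;
            }
        }
    }
}
```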
Cassandra is not designed for queue modelling, and as you said, using Cassandra as a queue is an anti-pattern. It can quickly turn into a nightmare.
The main problem with a queue is the deletes (Cassandra doesn't perform well for frequently updated data anyway).
Here is a link that might help you understand deletes/queues:
https://lostechies.com/ryansvihla/2014/10/20/domain-modeling-around-deletes-or-using-cassandra-as-a-queue-even-when-you-know-better/

Kafka Consumer co-location. (Partition-Consumer allocation logic)

The essence of distributed computation is to co-locate execution with data, or in other words, to send your code to your data, not your data to your code. That is the core design of Hadoop, Spark, etc.
Does Kafka / Kafka Streams allow such setup? If yes, how? If no is there something planned, maybe as a subproject e.g. using Kubernetes or similar?
I know that we can define consumer groups for a topic but I don't understand how partitions are allocated to consuming application instances and if this allocation can be made to favour co-located instances.
Please let me know if there is a better term to search for as "kafka consumer co-location" didn't please the google gods :/
The Kafka model is different. The Kafka cluster itself only stores data streams; computation happens outside the Kafka cluster. Thus, there is only a limited notion of co-location, i.e., data will always be sent over the network to the consumer/streams applications that do the processing.
For Kafka Streams, if you do joins for example, data sub-streams (based on Kafka partitions) of both input streams for the join will be co-located within a single Kafka Streams instance to compute the correct result.
Note that data stream processing is a different model, and thus "shipping code to data" is not as important as it is for batch processing.
Why would we like to have that?
To minimize network traffic? Reduce delays?
The wish is to try to give each partition to a local consumer, if possible. Any of the following conditions makes that impossible or undesirable:
The broker's host does not run any consumers
Local consumers don't subscribe to the broker's topics
The local consumer is overloaded compared to some external consumers
Even in the case of a relatively simple StickyAssignor, this problem turns out to be a multi-objective optimization:
Optimize for evenly-distributed consumer load
Optimize for preserving previously-assigned partitions
All of this in a situation where topic distribution and consumer membership change dynamically!
The next step would be to introduce some numerical measure of locality. An ideal assignment would try to connect a broker and a consumer on the same host, rack, data center, or continent. For example, you might wish to use ping time as a measure of distance between processes, or the number of hops.
Another dimension is the variation in host capabilities and load. How many more partitions can the consumer's host handle?
There must be a way to aggregate all the requirements into a single number: how good is the assignment of topic X to consumer Y?
In the end, you might get an n * m matrix of assignment scores: for each consumer-broker pair you might compute an assignment penalty. By solving that assignment problem in O(n^3) time you'll get the best possible assignment, which favors all the aspects important for your application:
closeness to broker
closeness to end-user
consumer's cache state
CPU load and free disk space of consumer nodes
maybe some other criteria like: regulation requirements, scheduled maintenance, cost of running a node
Kafka has a PartitionAssignor interface, which controls the relation between topics and consumers. The default is a very simple algorithm, but there are more sophisticated implementations like StickyAssignor, which tries to preserve consumers' caches. It is a pluggable interface, open for experimentation.
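For completeness, switching a consumer group to the StickyAssignor (or, in principle, a custom assignor that scores locality) is just a consumer configuration change. A minimal sketch, with placeholder broker address and group id:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.StickyAssignor;

public class StickyAssignmentConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // hypothetical
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "stateful-workers");     // hypothetical
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        // Swap in a different assignment strategy; a custom assignor class name
        // could be listed here instead of the built-in StickyAssignor.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  StickyAssignor.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // ... subscribe and poll as usual, then consumer.close()
    }
}
```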
Kafka's philosophy favors robustness and universality. Maybe that's why such fragile and multi-faceted optimizations are not a part of a standard distribution.

Kafka Topology Best Practice

I have 4 machines where a Kafka cluster is configured with a topology in which
each machine runs one ZooKeeper node and two brokers.
With this configuration, what do you advise as the maximum number of topics and partitions for best performance?
Replication factor: 3
Using Kafka 0.10.XX
Thanks!
Each topic is restricted to 100,000 partitions no matter how many nodes (as of July 2017)
As to the number of topics, that depends on how much RAM the smallest machine has. This is due to ZooKeeper keeping everything in memory for quick access (it also doesn't shard the znodes, it just replicates across ZK nodes upon write). This effectively means that once you exhaust one machine's memory, ZK will fail to add more topics. You will most likely run out of file handles before reaching this limit on the Kafka broker nodes.
To quote the KAFKA docs on their site (6.1 Basic Kafka Operations https://kafka.apache.org/documentation/#basic_ops_add_topic):
Each sharded partition log is placed into its own folder under the Kafka log directory. The name of such folders consists of the topic name, appended by a dash (-) and the partition id. Since a typical folder name can not be over 255 characters long, there will be a limitation on the length of topic names. We assume the number of partitions will not ever be above 100,000. Therefore, topic names cannot be longer than 249 characters. This leaves just enough room in the folder name for a dash and a potentially 5 digit long partition id.
To quote the Zookeeper docs (https://zookeeper.apache.org/doc/trunk/zookeeperOver.html):
The replicated database is an in-memory database containing the entire data tree. Updates are logged to disk for recoverability, and writes are serialized to disk before they are applied to the in-memory database.
Performance:
Depending on your publishing and consumption semantics, the appropriate number of topics and partitions will change. The following are a set of questions you should ask yourself to gain insight into a potential solution (your question is very open ended):
Is the data I am publishing mission critical (i.e. cannot lose it, must be sure I published it, must have exactly once consumption)?
Should I make the producer.send() call as synchronous as possible or continue to use the asynchronous method with batching (do I trade-off publishing guarantees for speed)?
Are the messages I am publishing dependent on one another? Does message A have to be consumed before message B (implies A published before B)?
How do I choose which partition to send my message to?
Should I: assign the message to a partition explicitly (extra producer logic), let the cluster decide in a round-robin fashion, or assign a key which will hash to one of the partitions for the topic (you need to come up with an evenly distributed hash to get good load balancing across partitions)? See the sketch after this list.
How many topics should you have? How is this connected to the semantics of your data? Will auto-creating topics for many distinct logical data domains be efficient (think of the effect on Zookeeper and administrative pain to delete stale topics)?
Partitions provide parallelism (more consumers possible) and possibly increased positive load balancing effects (if producer publishes correctly). Would you want to assign parts of your problem domain elements to specific partitions (when publishing send data for client A to partition 1)? What side-effects does this have (think refactorability and maintainability)?
Will you want to make more partitions than you need so you can scale up if needed with more brokers/consumers? How realistic is automatic scaling of a KAFKA cluster given your expertise? Will this be done manually? Is manual scaling viable for your problem domain (are you building KAFKA around a fixed system with well known characteristics or are you required to be able to handle severe spikes in messages)?
How will my consumers subscribe to topics? Will they use pre-configured configurations or use a regex to consume many topics? Are the messages between topics dependent or prioritized (need extra logic on consumer to implement priority)?
Should you use different network interfaces for replication between brokers (i.e. port 9092 for producers/consumers and 9093 for replication traffic)?
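As a side note to the partition-selection question above, here is a small producer sketch contrasting the three options: explicit partition, no key (default partitioner), and key-based hashing. Topic name, key, and broker address are illustrative.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitionChoiceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // hypothetical address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // 1. Explicit partition: extra producer logic, full control over placement.
            producer.send(new ProducerRecord<>("metrics", 1, "client-A", "value"));

            // 2. No key, no partition: the default partitioner spreads records
            //    across partitions (round robin / sticky batching).
            producer.send(new ProducerRecord<>("metrics", "value"));

            // 3. Key only: the key is hashed to pick the partition, so records
            //    for the same client always land in the same partition.
            producer.send(new ProducerRecord<>("metrics", "client-A", "value"));
        }
    }
}
```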
Good Links:
http://cloudurable.com/ppt/4-kafka-detailed-architecture.pdf
https://www.slideshare.net/ToddPalino/putting-kafka-into-overdrive
https://www.slideshare.net/JiangjieQin/no-data-loss-pipeline-with-apache-kafka-49753844
https://kafka.apache.org/documentation/

How does Kafka message processing scale in publish-subscribe mode?

Forgive me, I am a newbie, just a beginner with Kafka. Currently I am reading the Kafka documentation about the difference between a traditional message system like ActiveMQ and Kafka.
As the documentation puts it, traditional message systems cannot scale message processing, since:
Publish-subscribe allows you broadcast data to multiple processes, but
has no way of scaling processing since every message goes to every
subscriber.
I think this makes sense to me.
But for Kafka, the documentation says that Kafka can scale message processing even in publish-subscribe mode. (Please correct me if I am wrong. Thanks.)
The consumer group concept in Kafka generalizes these two concepts. As
with a queue the consumer group allows you to divide up processing
over a collection of processes (the members of the consumer group). As
with publish-subscribe, Kafka allows you to broadcast messages to
multiple consumer groups.
The advantage of Kafka's model is that every topic has both these
properties—it can scale processing and is also multi-subscriber—there
is no need to choose one or the other.
So my question is: how does Kafka do this? I mean, how does it scale the processing in publish-subscribe mode? Thanks.
The main unique features in Kafka that enable scalable pub/sub are:
Partitioning individual topics and spreading the active partitions across multiple brokers in the cluster to take advantage of more machines, disks, and cache memory. Producers and consumers often connect to many or all nodes in the cluster, not just a single master node for a given topic/queue.
Storing all messages in a sequential commit log and not deleting them when consumed. This leads to more sequential reads and writes, and offloads the broker from having to keep track of different copies of messages, delete individual messages, handle fragmentation, or track which consumer has acknowledged consuming which messages.
Enabling smart parallel processing of individual consumers and consumer groups in a way that each parallel message stream can come from the distributed partitions mentioned in #1, while offloading offset management and partition assignment logic onto the clients themselves. Kafka scales with more consumers because the consumers do some of the work (unlike most other pub/sub brokers, where the bulk of the work is done in the broker).
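To make the two properties concrete, here is a small hedged sketch: run it once per group id and each group receives the full stream (broadcast); run several copies with the same group id and they divide the topic's partitions among themselves (scaled processing). The broker address, group names, and topic are placeholders.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch: the same topic is read by two independent consumer groups.
// Every group sees every message (pub/sub broadcast), while inside one
// group the topic's partitions are divided among the group's members
// (queue-style scaling).
public class GroupScalingSketch {
    public static void main(String[] args) {
        String groupId = args.length > 0 ? args[0] : "analytics";
        // Run this program once with "analytics" and once with "billing":
        // both processes receive the full stream. Start two copies with the
        // SAME group id and they split the partitions between them instead.

        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // hypothetical address
        props.put("group.id", groupId);
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("page-views"));  // hypothetical topic
            while (true) {
                for (ConsumerRecord<String, String> record :
                        consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("[%s] partition %d: %s%n",
                            groupId, record.partition(), record.value());
                }
            }
        }
    }
}
```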