Request/reply pattern implementation with Apache Kafka

How can I implement the request/reply pattern with Apache Kafka? The implementation should also keep working when service instances are scaled (e.g. pods in Kubernetes).
In RabbitMQ, I can create a temporary, non-durable, unique queue per instance that receives responses from other services. This queue is removed automatically when the connection is lost (when the service instance goes down).
How can I do this with Kafka? How does this solution scale?
I use Node.js.

Given that your Rabbit example is only talking about the channel for receiving the response (ignoring sending the request), it's most practical (since Kafka doesn't handle dynamic topic creation/deletion particularly well) to have a single topic for responses to that service with however many partitions you need to meet your throughput goal. A requestor instance will choose a partition to consume at random (multiple instances could consume the same partition) and communicate that partition and a unique correlation ID with the request. The response is then produced to the selected partition and keyed with the correlation ID. Requestors track the set of correlation IDs they're waiting for and ignore responses with keys not in that set.
The risk of collisions in correlation IDs can be mitigated by having the requestors coordinate among themselves (possibly using something like etcd/zookeeper/consul).
This isn't a messaging pattern for which Kafka is that well-suited (it's definitely not best of breed for this), but it's workable.
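For concreteness, a minimal sketch of the requestor side of that pattern (in Scala, using the Java Kafka client) could look like the following. The topic names, bootstrap address, and the way the reply partition and correlation ID are carried with the request are illustrative assumptions, not anything prescribed above.

import java.time.Duration
import java.util.{Collections, Properties, UUID}
import scala.jdk.CollectionConverters._
import scala.util.Random
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.TopicPartition

object RequestorSketch extends App {
  val requestTopic = "my-service.requests" // illustrative topic names
  val replyTopic   = "my-service.replies"

  val producerProps = new Properties()
  producerProps.put("bootstrap.servers", "localhost:9092")
  producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  val producer = new KafkaProducer[String, String](producerProps)

  val consumerProps = new Properties()
  consumerProps.put("bootstrap.servers", "localhost:9092")
  consumerProps.put("enable.auto.commit", "false") // no consumer group is used for replies
  consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  val consumer = new KafkaConsumer[String, String](consumerProps)

  // Pick a reply partition at random; several requestor instances may share one.
  val replyPartition = Random.nextInt(consumer.partitionsFor(replyTopic).size())
  consumer.assign(Collections.singletonList(new TopicPartition(replyTopic, replyPartition)))

  // Correlation IDs this instance is still waiting on.
  val pending = scala.collection.mutable.Set.empty[String]

  def sendRequest(payload: String): Unit = {
    val correlationId = UUID.randomUUID().toString
    pending += correlationId
    // Ship the chosen reply partition and correlation ID with the request
    // (encoded naively in the value here; record headers would also work).
    producer.send(new ProducerRecord(requestTopic, correlationId, s"$replyPartition|$correlationId|$payload"))
  }

  sendRequest("hello")

  while (pending.nonEmpty) {
    consumer.poll(Duration.ofMillis(500)).asScala.foreach { record =>
      // Replies are keyed by correlation ID; ignore keys we aren't waiting for.
      if (pending.remove(record.key())) println(s"reply ${record.key()}: ${record.value()}")
    }
  }
}

On the responding side, a service instance would consume the request topic with an ordinary consumer group and produce each reply to the partition named in the request, keyed by the correlation ID.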

Related

How to scale microservices if Kafka is used?

I have built a microservice platform based on Kubernetes, and Kafka is used as the MQ in the services. Now a very confusing question has arisen: Kubernetes is designed to make it easy to scale microservices out, but when the scale-out exceeds the number of Kafka partitions, some microservice instances cannot consume anything. What should I do?
This is a Kafka limitation and has nothing to do with your service scheduler.
Kafka consumer groups simply cannot scale beyond the partition count. So, if you have a single-partition topic because you care about strict event ordering, then only one replica of your service can be actively consuming from the topic, and you'd need to handle failover in ways that are outside the scope of Kafka itself.
If your concern is the k8s autoscaler, then you can look into the KEDA autoscaler for Kafka services.
Kafka, as OneCricketeer notes, bounds the parallelism of consumption by the number of partitions.
If you couple processing with consumption, this limits the number of instances which will be performing work at any given time to the number of partitions to be consumed. Because the Kafka consumer group protocol includes support for reassigning partitions consumed by a crashed (or non-responsive...) consumer to a different consumer in the group, running more instances of the service than there are partitions at least allows for the other instances to be hot spares for fast failover.
It's possible to decouple processing from consumption. The broad outline could be to have every instance of your service join the consumer group; at most as many instances as there are partitions will actually consume from the topic. They can then make a load-balanced network request to another (or the same) instance, based on the message they consume, to do the processing. If you allow the consumer to have multiple requests in flight, this expands your scaling horizon to max-in-flight-requests * number-of-partitions.
If it happens that the messages in a partition don't need to be processed in order, simple round-robin load-balancing of the requests is sufficient.
Conversely, if it's the case that there are effectively multiple logical streams of messages multiplexed into a given partition (e.g. if messages are keyed by equipment ID; the second message for ID A needs to be processed after the first message, but could be processed in any order relative to messages from ID B), you can still do this, but it needs some care around ensuring ordering. Additionally, given the amount of throughput you should be able to get from a consumer of a single partition, needing to scale out to the point where you have more processing instances than partitions suggests that you'll want to investigate load-balancing approaches where if request B needs to be processed after request A (presumably because request A could affect the result of request B), A and B get routed to the same instance so they can leverage local in-memory state rather than do a read-from-db then write-to-db pas de deux.
This sort of architecture can be implemented in any language, though maintaining a reasonable level of availability and consistency is going to be difficult. There are frameworks and toolkits which can deliver a lot of this functionality: Akka (JVM), Akka.Net, and Protoactor all implement useful primitives in this area (disclaimer: I'm employed by Lightbend, which maintains and provides commercial support for one of those, though I'd have (and actually have) made the same recommendations prior to my employment there).
When consuming messages from Kafka in this style of architecture, you will definitely have to make the choice between at-most-once and at-least-once delivery guarantees and that will drive decisions around when you commit offsets. Note particularly that you need to be careful, if doing at-least-once, to not commit until every message up to that offset has been processed (or discarded), lest you end up with "at-least-zero-times", which isn't a useful guarantee. If doing at-least-once, you may also want to try for effectively-once: at-least-once with idempotent processing.
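As a concrete illustration of the at-least-once half of that choice, here is a hedged sketch using the Java Kafka client from Scala: auto-commit is disabled and the offset is committed only after the whole polled batch has been processed. The topic and group names are made up.

import java.time.Duration
import java.util.{Collections, Properties}
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer

object AtLeastOnceSketch extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("group.id", "my-processing-service") // illustrative group id
  props.put("enable.auto.commit", "false")       // commit manually, only after processing
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(Collections.singletonList("events")) // illustrative topic

  // Ideally idempotent, so that redelivery after a crash gives "effectively once".
  def process(value: String): Unit = ()

  while (true) {
    val records = consumer.poll(Duration.ofSeconds(1)).asScala
    records.foreach(r => process(r.value()))
    // Committing here, after every record in the batch has been processed, gives at-least-once.
    // Committing before processing would risk "at-least-zero-times".
    if (records.nonEmpty) consumer.commitSync()
  }
}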

Kafka with multiple instances of microservices and end-users

This is more of a design/architecture question.
We have a microservice A (MSA) with multiple instances (say 2) running behind a LB.
The purpose of this microservice is to get messages from a Kafka topic and send them to end users/clients. Both instances use the same consumer group ID for a particular client/user so that messages are not duplicated. And we have 2 (or = #instances) partitions of the Kafka topic.
End users/clients connect to the LB to fetch messages from MSA. Long polling is used here.
A request from a client can land on any instance. If it lands on MSA1, it will pull the data from Kafka partition 1, and if it lands on MSA2, it will pull the data from partition 2.
Now, a producer is producing messages, and we don't have a high message count. So, let's say the producer produces msg1 and it goes to partition 1. The end user/client will not get this message unless its request lands on MSA1, which might not always happen as there are other requests coming to the LB.
We want to solve this issue: the client should get the message in near real time.
One solution could be a distributed persistent queue (e.g. ActiveMQ) where both MSA1 and MSA2 keep putting the messages they read from Kafka, and the client just fetches messages from that queue. But this would require a separate queue for every end-user/client/group ID.
Is this a good solution? Can we go ahead with it? Is there anything we should change here? We are deploying our system on AWS, so could any AWS managed service help here, e.g. an SNS+SQS combination?
Some statistics:
~1000 users, one group id per user
2-4 instances of microservice
long polling every few seconds (~20s)
average message size ~10KB
Broadly you have three possible approaches:
You can dispense with Kafka's consumer group functionality and allow each instance to consume from all partitions (see the sketch after this list).
You can make the instances of each service aware of each other. For example, an instance which gets a request that can be fulfilled by another instance will forward the request there. This is most effective if the messages can be partitioned by client on the producer end (so that a request from a given client only needs to be routed to one instance). Even then, the consumer group functionality introduces some extra difficulty (rebalances mean that the consumer currently responsible for a given partition might not have seen all the messages in that partition). You may want to implement your own variant of the consumer group coordination protocol, except that on rebalance the instance starts from some suitably early point regardless of where the previous consumer got to.
If you can't reliably partition by client in the producer (e.g. the client is requesting a stream of all messages matching arbitrary criteria) then Kafka is really not going to be a fit and you probably want a database (with all the expense and complexity that implies).
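The sketch referenced in the first approach could look roughly like this (Scala with the Java Kafka client; topic name and bootstrap address are illustrative): every instance assigns itself all partitions instead of joining a consumer group, so any instance can answer any client's long poll.

import java.time.Duration
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

object ConsumeAllPartitionsSketch extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("enable.auto.commit", "false") // no consumer group, so each instance tracks its own position
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

  val topic    = "user-messages" // illustrative
  val consumer = new KafkaConsumer[String, String](props)

  // Every instance does the same assignment, so every instance sees every message.
  val partitions = consumer.partitionsFor(topic).asScala
    .map(info => new TopicPartition(topic, info.partition()))
    .asJava
  consumer.assign(partitions)
  consumer.seekToEnd(partitions) // start from "now"; adjust if history is needed

  while (true) {
    consumer.poll(Duration.ofMillis(500)).asScala.foreach { record =>
      // In the real service: deliver to whichever client (e.g. keyed by record.key) is long-polling this instance.
      println(s"${record.key()} -> ${record.value()}")
    }
  }
}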

How to route requests to correct consumer in consumer group

From an event sourcing/CQRS perspective: Say I have a consumer group of 2 instances, that's subscribed to a topic. On startup/subscription, each instance processes its share of the event stream, and builds a local view of the data.
When an external request comes in with a command to update the data, how would that request be routed to the correct instance in the group? If the data were partitioned by entity ID so that odd-numbered IDs went to consumer 1 and evens to consumer 2, how would that be communicated to the consumers? Or, for that matter, whatever reverse-proxy or service-mesh is responsible for sending that incoming request to the correct instance?
And what happens when the consumer group is rebalanced due to the addition or removal of consumers? Is that somehow automatically communicated to the routing mechanism?
Is there a gap in service while the consumers all rebuild their local model from their new set of events from the given topics?
This seems to apply to both the command and query side of things, if they're both divided between multiple instances with partitioned data...
Am I even thinking about this correctly?
Thank you
Kafka partitioning is great for sharding streams of commands and events by the entity they affect, but not for using this sharding in other means (e.g. for routing requests).
The broad technique for sharding the entity state I'd recommend is to not rely on Kafka partitioning for that (only using the topic partitions to ensure ordering of commands/events for an entity, i.e. by having all commands/events for a given entity be in one partition), but instead using something external to coordinate those shards (candidates would include leases in zookeeper/etcd/consul or cluster sharding from akka (JVM) or akka.net or cloudstate/akka serverless (more polyglot)). From there, there are two broad approaches you can take:
(mostly applicable if the number of entity shards for state and processing happens to equal the number of Kafka partitions) move part of the consumer group protocol into your application and have the instance which owns a particular shard consume the corresponding partition
have the instances ingesting from Kafka resolve the shard for an entity and which instance owns that shard and then route a request to that instance. The same pattern would also allow things like HTTP requests for an entity to be handled by any instance. By doing this you're making a service implemented in a stateful manner present to things like a service mesh/container scheduler/load balancer as a more stateless service would present.
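A rough sketch of the second option (all names here are hypothetical stand-ins): resolve the shard from the entity ID, look up which instance currently owns that shard, and forward the command there; the shard-to-owner map is assumed to be maintained via one of the external coordinators mentioned above.

object ShardRoutingSketch {
  final case class Command(entityId: String, payload: String)

  // Fixed logical shard count, deliberately independent of the Kafka partition count.
  val numShards = 64

  def shardFor(entityId: String): Int =
    math.abs(entityId.hashCode % numShards)

  // Hypothetical: kept up to date from the external coordinator (etcd/zookeeper/consul leases),
  // mapping shard id -> address of the instance currently holding the lease.
  def currentShardOwners: Map[Int, String] = Map.empty

  // Hypothetical transport, e.g. an HTTP or gRPC call to the owning instance.
  def forwardTo(instanceAddress: String, cmd: Command): Unit = ()

  def route(cmd: Command): Unit =
    currentShardOwners.get(shardFor(cmd.entityId)) match {
      case Some(owner) => forwardTo(owner, cmd)
      case None        => () // ownership in flux (rebalance): buffer or retry
    }
}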

Kafka topic filtering vs. ephemeral topics for microservice request/reply pattern

I'm trying to implement a request/reply pattern with Kafka. I am working with named services and unnamed clients that send messages to those services, and clients may expect a reply. Many (10s-100s) of clients may interact with a single service, or consumer group of services.
Strategy one: filtering messages
The first thought was to have two topics per service - the "HelloWorld" service would consume the "HelloWorld" topic, and produce replies back to the "HelloWorld-Reply" topic. Clients would consume that reply topic and filter on unique message IDs to know what replies are relevant to them.
The drawback there is it seems like it might create unnecessary work for clients to filter out a potentially large amount of irrelevant messages when many clients are interacting with one service.
Strategy two: ephemeral topics
The second idea was to create a unique ID per client, and send that ID along with messages. Clients would consume their own unique topic "[ClientID]" and services would send to that topic when they have a reply. Clients would thus not have to filter irrelevant messages.
The drawback there is clients may have a short lifespan, e.g. they may be single use scripts, and they would have to create their topic beforehand and delete it afterward. There might have to be some extra process to purge unused client topics if a client dies during processing.
Which of these seems like a better idea?
We are using Kafka in production as a handler for event-based messages and request/response messages. Our approach to implementing request/response is your first strategy, because when the number of clients grows you would have to create many topics, some of which are completely useless. Another reason for choosing the first strategy was our topic naming guideline that each service should belong to only one topic, for tracking. However, Kafka is not made for request/response messages, but I recommend the first strategy because:
fewer topics
better service tracking
better topic naming
But you have to be careful about your consumer groups, which may otherwise cause data loss.
A better approach is to use the first strategy with many partitions in one topic (per service), where each client sends and receives its messages with a unique key. Kafka guarantees that all messages with the same key will go to a specific partition. This approach doesn't need filtering of irrelevant messages and is perhaps a combination of your two strategies.
Update:
As @ValBonn said, in the suggested approach you always have to make sure that the number of partitions >= the number of clients.
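To make that concrete, here is a hedged sketch of the client side: it recomputes the default partitioner's key-to-partition mapping (murmur2 hash, as exposed by org.apache.kafka.common.utils.Utils) so the client only has to read the single reply partition its own key lands in. The topic name, client key and bootstrap address are illustrative, and this assumes the service produces replies with the default partitioner and that the partition count doesn't change.

import java.nio.charset.StandardCharsets
import java.time.Duration
import java.util.{Collections, Properties}
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.utils.Utils

object KeyedReplyClientSketch extends App {
  val replyTopic = "HelloWorld-Reply" // illustrative, following the naming in the question
  val clientKey  = "client-42"        // this client's unique key (illustrative)

  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("enable.auto.commit", "false")
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  val consumer = new KafkaConsumer[String, String](props)

  // Same computation the default partitioner performs for a keyed record.
  val numPartitions = consumer.partitionsFor(replyTopic).size()
  val keyBytes      = clientKey.getBytes(StandardCharsets.UTF_8)
  val myPartition   = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions

  consumer.assign(Collections.singletonList(new TopicPartition(replyTopic, myPartition)))

  while (true) {
    consumer.poll(Duration.ofMillis(500)).asScala.foreach { record =>
      // Other clients' keys can still hash to this partition, so a cheap key check remains,
      // but only over this partition's traffic rather than the whole topic's.
      if (record.key() == clientKey) println(s"reply: ${record.value()}")
    }
  }
}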

What to do if I have an actor that is supposed to have large throughput?

I have an actor which aggregates some information and processes it. It currently looks like this:
import akka.actor.Actor

class MessageTracerActor extends Actor {
  override def receive: Receive = {
    case MyActor.TracableMessage(msg) => // send processed msg to another place
    case v: Any                       => // corner case, has a special handler
  }
}
The actor is supposed to send the trace of messages which extend TracableMessage. But TracableMessages are sent from quite a large number of actors, and hosting the MessageTracerActor on one machine is not a great option.
I looked at cluster sharding, but this does not seem to fit my case. The docs say:
Cluster sharding is typically used when you have many stateful actors
that together consume more resources (e.g. memory) than fit on one
machine. If you only have a few stateful actors it might be easier to
run them on a Cluster Singleton node.
But Cluster Singleton is hosted strictly on one node which is not scalable.
Maybe there is some configuration option with which I could specify the amount of threads (maybe even nodes) used for processing messages received by the actor?
There are a few options for scaling message processing beyond a single actor.
Spreading the message processing in a single node
If the trace message processing is stateless, you can distribute the work between multiple instances of the trace message processing actors using routing. A router is an actor that establishes a pool of processing actors, and distributes every incoming message among the processing actors.
import akka.actor.{ActorRef, Props}
import akka.routing.RoundRobinPool

// create a round-robin style router in front of a pool of tracer actors
val router: ActorRef = context.actorOf(RoundRobinPool(5).props(Props[MessageTracerActor]), "tracer-router")
In the above example, the round-robin style router is used for distributing the messages evenly among the tracer actors. This means that you will lose ordering guarantees between the messages sent to the router: a message enqueued later might be processed before a message that was enqueued earlier. Because each message processor sees only an arbitrary subset of incoming messages, stateful processing like aggregations can't be done consistently either.
In order to make the ordering consistent, you have to think about which messages need to be in order. If all of the tracable messages need to be processed exactly in the order they enter the router, routers won't help you much. However, some of the tracable messages might need to be processed in order relative to some messages but not others. For example, your tracable messages might contain the source of the message, and ordering must be guaranteed only between messages from the same source.
Identifying the ordering between messages allows you to distribute the messages among message processors in a consistent manner. Akka provides functionality for this using consistent hashing pools. In the consistent hashing pool, the router distributes the incoming messages to message processors based on hash key mechanism. The messages with the same routing hash key will be routed to the same message processor, which means that you can consistently do aggregations on incoming messages for a subset of the tracable messages.
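For example, a consistent hashing variant of the router above might look like the following sketch, which assumes TracableMessage carries something like a source field to hash on (that field is an assumption, not part of the question's code):

import akka.actor.{ActorRef, Props}
import akka.routing.ConsistentHashingPool
import akka.routing.ConsistentHashingRouter.ConsistentHashMapping

// Messages from the same source are always routed to the same tracer actor,
// so per-source ordering and per-source aggregation stay consistent.
val hashMapping: ConsistentHashMapping = {
  case MyActor.TracableMessage(msg) => msg.source // assumes a source field exists
}

val hashedRouter: ActorRef = context.actorOf(
  ConsistentHashingPool(5, hashMapping = hashMapping).props(Props[MessageTracerActor]),
  "tracer-hash-router")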
Spreading the message processing across multiple nodes
If one Akka node is not enough for processing the tracable messages, you can scale the message processing using Akka's clustering features. In the cluster, you have multiple Akka nodes connected to each other, working together by distributing work across the cluster rather than processing everything in a single node.
In the cluster, you have distributed versions of the tools described earlier available. For both stateless and stateful routing of messages, you can use a cluster-aware router. The cluster-aware router creates a pool of message processors across the cluster's member nodes rather than creating them all in a single node.
Besides the cluster-aware router, you can also use cluster sharding. As with the consistent hashing pool, you need to specify a hashing key for the cluster shard in order to consistently route messages to the correct actor. The difference between a cluster-aware router and sharding is that sharding automatically creates an actor for each key, so a message processor actor doesn't need to handle messages from different keys separately.
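A hedged sketch of what the cluster sharding setup could look like for the tracer, assuming an ActorSystem named system with clustering configured and, again, a hypothetical source field on TracableMessage to derive the entity and shard IDs from:

import akka.actor.{ActorRef, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

val numberOfShards = 100

// Entity ID: one tracer actor per message source (the source field is an assumption).
val extractEntityId: ShardRegion.ExtractEntityId = {
  case msg @ MyActor.TracableMessage(m) => (m.source.toString, msg)
}

// Shard ID: stable hash of the source, so the same source always maps to the same shard.
val extractShardId: ShardRegion.ExtractShardId = {
  case MyActor.TracableMessage(m) => math.abs(m.source.hashCode % numberOfShards).toString
}

val tracerRegion: ActorRef = ClusterSharding(system).start(
  typeName        = "messageTracer",
  entityProps     = Props[MessageTracerActor],
  settings        = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId  = extractShardId)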
If all of the tracable messages need to be aggregated under the same state, your final option is to consider Akka's distributed data feature. In this feature, the aggregation work is distributed across multiple nodes and joined at a later stage. Note that distributed data API is still experimental in Akka 2.4.
Other areas to look into
Distributing the message processing across multiple nodes means that there is a higher risk of individual message processors dropping out (e.g. network connection failure, node crash). In order to keep the state persistent and transferable between nodes, you may want to look into Akka's persistence feature.