I am investigating Kafka 0.9 as a hobby project and have completed a few "Hello World" type examples.
I have got to thinking about real-world Kafka applications based on request/response messaging in general, and more specifically how to link a Kafka request message to its response message.
I was thinking along the lines of using a generated UUID as the request message key and employing this request UUID as the associated response message key, much the same type of mechanism that WebSphere MQ has with its message correlation ID.
My end-to-end process would be (a rough code sketch follows the steps):
1) The Kafka client generates a random UUID and sends a single Kafka request message.
2) The server consumes this request message and extracts & stores the request UUID value.
3) The server completes a business process using the message payload.
4) The server responds with a response message that uses the stored UUID value from the request as the response message key.
5) The Kafka client polls the response topic until it either times out or retrieves a message with the original request UUID value.
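A rough sketch of what I have in mind on the client side, using the plain Java producer and consumer (topic names, group id and timeout are just placeholders):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;

public class RequestClient {
    public static void main(String[] args) throws Exception {
        String correlationId = UUID.randomUUID().toString();

        // 1) send a single request message, keyed by the generated UUID
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("requests", correlationId, "request payload")).get();
        }

        // 5) poll the response topic until the matching key shows up or we time out
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "client-" + correlationId);   // a group of its own per client
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("auto.offset.reset", "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("responses"));
            long deadline = System.currentTimeMillis() + 30_000;
            while (System.currentTimeMillis() < deadline) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                    if (correlationId.equals(rec.key())) {
                        System.out.println("Got response: " + rec.value());
                        return;
                    }
                    // not ours: skip it; other clients read the topic with their own group
                }
            }
            System.out.println("Timed out waiting for a response");
        }
    }
}
```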
What I am concerned about is that the Kafka consumer polling will remove other clients' messages from the response topic and increment the offsets, making other clients fail.
Am I trying to apply Kafka in a use case it was never designed for?
Is it possible to implement request/response messaging in Kafka?
Even though Kafka provides convenience methods to persist the committed offsets for a given consumer group, you're not required to use that behavior and can write your own if you feel the need. Even so, the use of Kafka the way you've described it is a bit awkward for the use case as each client needs to repeatedly search the topic for a specific response. That's inefficient at best.
You could break the problem into two parts, continuing to use Kafka to deliver requests to and responses from your server. The only piece you'd need to add would be some sort of API layer that your clients talk to and which hides the Kafka-specific logic from your clients. This layer would need a local DB (relational or NoSQL) that could store responses by UUID, making it very fast and easy for the API to answer whether a response is available for a specific UUID.
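As a very rough sketch of that API layer (hypothetical topic name, and a ConcurrentHashMap standing in for the local DB):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

public class ResponseStore {
    // stands in for the relational/NoSQL store keyed by request uuid
    private final Map<String, String> responsesByUuid = new ConcurrentHashMap<>();

    // background thread: drain the response topic into the store
    public void start() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "api-layer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        new Thread(() -> {
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("responses"));
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        responsesByUuid.put(rec.key(), rec.value());   // key = request uuid
                    }
                }
            }
        }).start();
    }

    // what the client-facing API calls: is a response ready for this uuid?
    public Optional<String> lookup(String uuid) {
        return Optional.ofNullable(responsesByUuid.get(uuid));
    }
}
```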
Easier! You only need to record in ZooKeeper that UUID X should be answered on partition Y, and make the producer that sent that UUID consume partition Y... Does that make sense?
I think you need a well-defined shard key for the service that invokes the request. Your request should contain this shard key and the name of the topic where the response should be posted. You should also create some sort of state machine, and when a message regarding your task comes in, you transition to some state... this would be for a strictly async design.
In theory, you could:
assign an ID to each request and message that is supposed to get a result message;
create a hash function that maps this ID to the identifier of a partition;
when sending the result message, use the same hash function to get the identifier of the partition to send it to;
in the producer of the original request, observe only that given partition.
That would reduce the need to crawl many messages in that topic to filter out the result required by the waiting request handler.
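A minimal sketch of that idea, assuming both sides agree on the same mapping function (a simple floorMod of the ID's hash is used here just as a stand-in for whatever hash function you pick):

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import java.util.Collections;
import java.util.Properties;

public class PartitionRouting {
    // both sides must use the same mapping from request id to partition
    static int partitionFor(String requestId, int numPartitions) {
        return Math.floorMod(requestId.hashCode(), numPartitions);
    }

    // result producer: send the result to the partition derived from the request id
    static void sendResult(KafkaProducer<String, String> producer, String requestId,
                           String result, int numPartitions) {
        int partition = partitionFor(requestId, numPartitions);
        producer.send(new ProducerRecord<>("results", partition, requestId, result));
    }

    // requester side: watch only "its" partition instead of the whole topic
    static KafkaConsumer<String, String> resultConsumerFor(String requestId, int numPartitions,
                                                           Properties consumerProps) {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
        consumer.assign(Collections.singletonList(
                new TopicPartition("results", partitionFor(requestId, numPartitions))));
        return consumer;
    }
}
```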
Let's say you have a POST request with some product as the payload. Traditionally, your HttpRequest lifecycle should end with an HttpResponse carrying the requested action's result; in our case, a response saying "Product created" might be enough.
But with a message broker, things might turn like this:
1. The request handler creates the appropriate message, CreateProduct(...), and produces it to a topic in the message broker.
2. Then what???
3. A consumer retrieves and processes the message by actually creating the product in a persistent database.
4. Then what???
What should happen at step 2?
If we send a response saying "Your product should be created very soon, keep waiting, we'll keep you posted":
How can the client be notified after a response has already been sent?
Are we forced to use WebSocket so we can keep the link open?
What should happen at step 4?
I have my opinion but I would like to know how you handle it in production.
The app that actually created the product can produce a message saying "Product created" to a status topic in the message broker, so the original message's producer can consume it and then notify the client somehow. The only way I see that being possible is through a WebSocket connection.
So I would like to know: is WebSocket the only way to do HTTP request/response involving a message broker? And is it reasonable to use a message broker for HTTP request/response?
You could think of this in a fully asynchronous fashion (no WebSocket needed then).
You do a POST HTTP request, and this creates a unique ID associated with your job. This ID is stored in a database as well, with a status like 'processing'.
The ID is also returned to your client.
Your job ID (and its payload parameters) travels through Kafka and finally reaches a consumer. This consumer processes the job and commits the results to an external DB (or whatever).
When the job is done, you update the job status to 'done' or something like that.
In the meantime, on the client side, you poll an endpoint that asks your job DB whether the job is done or not.
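Roughly, in code (the names and the in-memory map standing in for the job DB are only illustrative):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class JobApi {
    private final Map<String, String> jobStatusDb = new ConcurrentHashMap<>(); // stand-in for a real DB
    private final KafkaProducer<String, String> producer;

    public JobApi(Properties producerProps) {
        this.producer = new KafkaProducer<>(producerProps);
    }

    // POST /jobs -> create the job, push it to Kafka, return the id right away
    public String submitJob(String payload) {
        String jobId = UUID.randomUUID().toString();
        jobStatusDb.put(jobId, "processing");
        producer.send(new ProducerRecord<>("jobs", jobId, payload));
        return jobId;                       // the client keeps this id and polls with it
    }

    // GET /jobs/{id}/status -> the endpoint the client polls
    public String getStatus(String jobId) {
        return jobStatusDb.getOrDefault(jobId, "unknown");
    }

    // called by whatever consumed and finished the job
    public void markDone(String jobId) {
        jobStatusDb.put(jobId, "done");
    }
}
```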
This is a very common way to cover your needs.
Yannick
In a scenario where multiple single-domain event types are produced to a single topic and only a subset of event types is consumed by a consumer, I need a good way to read the event type before taking action.
I see 2 options:
Put the event type (e.g. "ORDER_PUBLISHED") into the message body (payload) itself, which would be a broker-agnostic approach and has other advantages, but would involve parsing every message just to know the event type.
Utilize Kafka message headers, which would allow consuming messages without extra payload parsing.
The context is event-sourcing. Small commands, small payloads. There are no huge bodies to parse. Golang. All messages are protobufs. gRPC.
What is the typical workflow in such a scenario?
I tried to google this topic, but didn't find much on header use cases and good practices.
It would be great to hear when and how to use Kafka message headers and when not to.
Clearly the same topic should be used for different event types that apply to the same entity/aggregate (reference). Example: BookingCreated, BookingConfirmed, BookingCancelled, etc. should all go to the same topic in order to (excuse the pun) guarantee ordering of delivery (in this case the booking ID is the message key).
When the consumer gets one of these events, it needs to identify the event type, parse the payload, and route to the processing logic accordingly. The event type is the piece of message metadata that allows this identification.
Thus, I think a custom Kafka message header is the best place to indicate the type of event. I'm not alone:
Felipe Dutra: "Kafka allow you to put meta-data as header of your message. So use it to put information about the message, version, type, a correlationId. If you have chain of events, you can also add the correlationId of opentracing"
This GE ERP system has a header labeled "event-type" to show "The type of the event that is published" to a kafka topic (e.g., "ProcessOrderEvent").
This other solution mentions that "A header 'event' with the event type is included in each message" in their Kafka integration.
Headers are new in Kafka. Also, as far as I've seen, Kafka books focus on the 17 thousand Kafka configuration options and Kafka topology. Unfortunately, we don't easily find much on how an event-driven architecture can be mapped with the proper semantics onto elements of the Kafka message broker.
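For example, with the plain Java client, using the event type as a header could look roughly like this (the header name, topic and event types follow the conventions mentioned above, not any fixed standard):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;
import java.nio.charset.StandardCharsets;

public class EventTypeHeader {
    // producer side: tag the record with its event type in a custom header
    static void publish(KafkaProducer<String, byte[]> producer, String bookingId, byte[] payload) {
        ProducerRecord<String, byte[]> record = new ProducerRecord<>("bookings", bookingId, payload);
        record.headers().add("event-type", "BookingCreated".getBytes(StandardCharsets.UTF_8));
        producer.send(record);
    }

    // consumer side: route on the header without parsing the payload
    static void route(ConsumerRecord<String, byte[]> record) {
        Header typeHeader = record.headers().lastHeader("event-type");
        if (typeHeader == null) return;                         // no type: skip or dead-letter
        String eventType = new String(typeHeader.value(), StandardCharsets.UTF_8);
        switch (eventType) {
            case "BookingCreated":   /* deserialize and handle creation     */ break;
            case "BookingConfirmed": /* deserialize and handle confirmation */ break;
            default:                 break;                     // event types we don't consume
        }
    }
}
```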
I'm trying to implement an RPC architecture using Kafka as a message broker. The decision of using Kafka instead of another message broker solution is dictated by the current context.
The actual implementation consists of two different types of service:
The receiver: this service consumes messages from a Kafka topic, processes them, and then publishes the response message to a response topic;
The caller: this service receives HTTP requests, publishes messages to the receiver topic, consumes the receiver's response topic for the response message, and then returns it as an HTTP response.
The request/response messages published in the topics are related by the message key.
The receiver implementation was fairly simple: at startup, it creates the "request" and "response" topics, then starts consuming the request topic with the service group id (many instances of the receiver will share the same group id in order to balance requests properly). When a request arrives, the service processes it and then publishes the response to the response topic.
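In code, the receiver loop is essentially this (simplified, with placeholder topic names):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.time.Duration;
import java.util.Collections;

public class Receiver {
    public static void run(KafkaConsumer<String, String> consumer,
                           KafkaProducer<String, String> producer) {
        consumer.subscribe(Collections.singletonList("requests")); // all instances share one group.id
        while (true) {
            for (ConsumerRecord<String, String> request : consumer.poll(Duration.ofMillis(500))) {
                String result = process(request.value());
                // reuse the request key so the caller can correlate the response
                producer.send(new ProducerRecord<>("responses", request.key(), result));
            }
        }
    }

    static String process(String payload) {
        return "processed: " + payload;   // business logic goes here
    }
}
```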
My problem is with the caller implementation, in particular while consuming the response from the response topic.
With the following assumptions:
1. The HTTP requests must be managed concurrently;
2. There could be more than one instance of this caller service;
3. Every single thread/service must receive all the messages in the response topic, in order to find the message with the corresponding request key.
As an example, imagine that two caller services produce two messages with keys 1 and 2 respectively. These messages will be published to the receiver topic and processed, and the responses will then be published to the topic receiver-responses. If the two caller services share the same group id, it could be that response 1 arrives at the service that published message 2 and vice versa, resulting in an HTTP timeout.
To avoid this problem, I've come up with these possible solutions:
Creating a new group for every request (EDIT: but a group cannot be deleted via code, hence another service would be needed to clean these groups out of ZooKeeper);
Creating a new topic for every request, then deleting it afterwards.
Hoping that I made myself sufficiently clear - I must admit I am a beginner with Kafka - my question would be:
Which solution is more costly than the other? Or is there another topic/group configuration that could satisfy assumption 3?
Thanks.
I think I've found a possible solution. A consumer group is automatically cleaned up when its offsets haven't been updated for a period of time, determined by the broker configuration offsets.retention.minutes.
How often that expiry check runs can be configured with offsets.retention.check.interval.ms.
This way, when a consumer connects to the response topic searching for the reply message, the created group can simply be abandoned, and it will be cleaned up later on.
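In other words, each caller can just use a throwaway group id and let the broker expire its offsets (a sketch; the retention settings in the comment are broker-side configuration, listed only for reference):

```java
import java.util.Properties;
import java.util.UUID;

public class ThrowawayGroupConfig {
    static Properties callerConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // a fresh group per caller request/instance; never reused, never explicitly deleted
        props.put("group.id", "caller-" + UUID.randomUUID());
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "latest");   // only care about responses from now on
        return props;
    }
    // Broker side (server.properties), controlling how long abandoned offsets stick around:
    //   offsets.retention.minutes
    //   offsets.retention.check.interval.ms
}
```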
I am considering Kafka to stream updates from the back-end to the front-end applications.
- Data streams are specific to a user's requests, so each request will generate a stream in the back-end.
- Each user will have multiple concurrent requests: a one-to-many relationship between user and streams.
I first thought I would set up a topic "per user request", but learnt that hundreds of thousands of topics is bad for multiple reasons.
Reading online, I came across posts that suggest one topic partitioned on userid. How is that any better than multiple topics?
If partitioning on userid is the way to go, the consumer will receive updates for different requests (from that user) and that will cause issues. I need to be able to not process a stream until I choose to, and if each request had its own topic this would work out great.
Thoughts?
I don't think Kafka will be a good option for your use case, as your use case is somewhat "synchronous" and "dynamic" in nature. A user request is submitted and the client waits for the stream of response events; the client should also know when the response for a particular user request ends. Multiple user requests may end up in the same Kafka partition, as we cannot afford an exclusive partition for each user when the number of users is high.
I guess Redis may be a better fit for this use case. Every request can have a unique id, and response events are added to a Redis list with some reasonable expiry time. The Redis list is given the same key name as the request id.
The Redis list will look like this (the key is the request id):
request id --> response event1, response event2, ......, response end event
The process that relays the events to the client deletes the list after it has successfully sent all the response events to the client and the "last response event" marker is encountered. If the relaying process dies before it can delete the response, Redis will take care of deleting the list after the list's expiry time.
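With a client such as Jedis, for instance, the list handling could look roughly like this (key names, expiry and the end marker are illustrative):

```java
import redis.clients.jedis.Jedis;
import java.util.List;

public class ResponseRelay {
    private static final String END_MARKER = "RESPONSE_END";

    // whoever produces response events appends them to the per-request list with an expiry
    static void appendEvent(Jedis jedis, String requestId, String event) {
        jedis.rpush(requestId, event);
        jedis.expire(requestId, 300);        // safety net if the relay never cleans up
    }

    // relay: push everything to the client, then clean up once the end marker is seen
    static void relay(Jedis jedis, String requestId) {
        List<String> events = jedis.lrange(requestId, 0, -1);
        boolean finished = false;
        for (String event : events) {
            if (END_MARKER.equals(event)) { finished = true; break; }
            sendToClient(event);
        }
        if (finished) {
            jedis.del(requestId);            // otherwise Redis expires the list for us
        }
    }

    static void sendToClient(String event) { /* whatever transport you use towards the client */ }
}
```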
Although it is possible (I guess) to have a Kafka cluster with several thousand topics, I'm not sure it is the way to go in your particular case.
Usually you design your Kafka app around streams of data: click-streams, page-views, etc. Then, if you want some kind of "sticky" processing, you need a partition key. In your case, if you select the user id as the key, Kafka will store all events from a user in the same partition.
A Kafka consumer, on the other side, reads messages from one to all partitions of a topic. That means if, say, you have a topic with 10 partitions, you can start your Kafka consumers in a consumer group so that every consumer has a distinct set of partitions assigned.
It means, for the user-id example, that every user will be processed by exactly one consumer, depending on the key. For example, user id A goes to partition 1, while user id B goes to partition 10.
Again, you can use the message key in order to map your data stream to Kafka partitions. All events with the same key will be stored in the same partition and will be consumed/processed by the same consumer instance.
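For example (a small sketch with hypothetical topic names), keying by user id is enough for the default partitioner to do that, and the request id can travel in the value or a header so the consumer can tell the user's concurrent requests apart:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class UserStreamProducer {
    // all events keyed by the same userId hash to the same partition, so one consumer
    // in the group sees that user's events, in order
    static void publish(KafkaProducer<String, String> producer,
                        String userId, String requestId, String event) {
        // the request id travels in the value here so the consumer can demultiplex
        // the user's concurrent requests (a header would work just as well)
        producer.send(new ProducerRecord<>("user-updates", userId, requestId + "|" + event));
    }
}
```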
I have the following typical scenario:
An order service used to purchase products. Acts as the commander of the distributed transaction.
A product service with the list of products and its stock.
A payment service.
   Orders DB           Products DB
       |                    |
----------------    ------------------    ------------------
| OrderService |    | ProductService |    | PaymentService |
----------------    ------------------    ------------------
       |                    |                     |
       |          ----------------------         |
       -----------| Kafka orders topic |-----------
                  ----------------------
The normal flow would be (a simplified code sketch of the order service follows the steps):
The user orders a product.
Order service creates an order in its DB and publishes a message to the Kafka topic "orders" to reserve a product (PRODUCT_RESERVE_REQUEST).
Product service decreases the product stock by one unit in its DB and publishes a message to "orders" saying PRODUCT_RESERVED.
Order service gets the PRODUCT_RESERVED message and orders the payment by publishing a PAYMENT_REQUESTED message.
Payment service performs the payment and answers with a PAYED message.
Order service reads the PAYED message and marks the order as COMPLETED, finishing the transaction.
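For context, the order service side is essentially this kind of event loop (very simplified, with the event type carried in the message value):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderService {
    private final KafkaProducer<String, String> producer;

    public OrderService(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    // called for every record consumed from the "orders" topic; key = order id
    public void onMessage(ConsumerRecord<String, String> record) {
        String orderId = record.key();
        switch (record.value()) {
            case "PRODUCT_RESERVED":
                producer.send(new ProducerRecord<>("orders", orderId, "PAYMENT_REQUESTED"));
                break;
            case "PAYED":
                markCompleted(orderId);
                break;
            case "PAYMENT_FAILED":
                producer.send(new ProducerRecord<>("orders", orderId, "UNDO_PRODUCT_RESERVATION"));
                break;
            case "PRODUCT_UNRESERVATION_COMPLETED":
                markCancelled(orderId);
                break;
            default:
                break;
        }
    }

    private void markCompleted(String orderId) { /* set order state to COMPLETED */ }
    private void markCancelled(String orderId) { /* set order state to CANCELLED_PAYMENT_FAILED */ }
}
```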
I am having trouble dealing with error cases, e.g. let's assume this:
Payment service fails to charge for the product, so it publishes a PAYMENT_FAILED message.
Order service reacts by publishing an UNDO_PRODUCT_RESERVATION message.
Product service increases the stock in its DB to cancel the reservation and publishes PRODUCT_UNRESERVATION_COMPLETED.
Order service finishes the transaction by saving the final state of the order as CANCELLED_PAYMENT_FAILED.
In this scenario, imagine that for whatever reason the order service publishes an UNDO_PRODUCT_RESERVATION message but doesn't receive the PRODUCT_UNRESERVATION_COMPLETED message, so it retries by publishing another UNDO_PRODUCT_RESERVATION message.
Now, imagine that those two UNDO_PRODUCT_RESERVATION messages for the same order end up arriving at ProductService. If I process both of them, I could end up setting an invalid stock level for the product.
In this scenario how can I implement idempotency?
UPDATE:
Following Artem's instructions, I can now detect duplicated messages (by checking the message header) and ignore them, but there may still be situations like the following where I shouldn't ignore the duplicated messages:
Order Service sends UNDO_PRODUCT_RESERVATION
Product service gets the message and starts processing it but crashes before updating the stock.
Order Service doesn't get a response so it retries sending UNDO_PRODUCT_RESERVATION
Product service knows this is a duplicated message, BUT in this case it should process it again.
Can you help me come up with a way to support this scenario as well? How could I distinguish when I should discard the message or reprocess it?
We used spring-integration-kafka to produce and consume messages with Kafka in our microservices. In our case, we send org.springframework.messaging.Message objects to topics and get the same type back from topics after deserialization from a byte array. In the Message entity there are message-id, sent-time, etc. header values besides the message payload, which is the actual object that you want to transfer from one microservice to the others. We use the unique message-id value to implement idempotency.
On the producer side, you must implement some logic to ensure that the message-id of the Message is the same when it is produced multiple times. This is actually related to your produce logic. In our case, we use Publishing Events Using Local Transactions, which is very well described in the blog https://www.nginx.com/blog/event-driven-data-management-microservices/ by Chris Richardson. With this approach we can recreate the Message object with the same message-id on the producer side.
On the consumer side, we persist all the consumed message-id values to a database and check these ids before processing the received messages. If we see a message whose id is in our persistent store, we simply ignore it.
In your case, to implement idempotency (see the sketch after this list):
you should keep a unique identifier with the messages,
on the producer side, you must generate the same identifier when the message is produced multiple times,
on the consumer side, you must check the received id to detect whether it has been consumed before.
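A bare-bones sketch of the consumer-side check (a synchronized Set standing in for the persistent store of processed ids):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class IdempotentConsumer {
    // stand-in for the table of already-processed message ids
    private final Set<String> processedIds = Collections.synchronizedSet(new HashSet<>());

    public void handle(ConsumerRecord<String, byte[]> record) {
        Header idHeader = record.headers().lastHeader("message-id");
        if (idHeader == null) return;
        String messageId = new String(idHeader.value(), StandardCharsets.UTF_8);

        // the id is generated once on the producer side and re-sent unchanged on retries,
        // so a duplicate delivery shows up here with an id we have already stored
        if (!processedIds.add(messageId)) {
            return;                          // already consumed: ignore
        }
        process(record.value());
        // in a real setup, persist the id and the business change in the same transaction
    }

    private void process(byte[] payload) { /* business logic */ }
}
```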
Regarding the second scenario described in the UPDATE:
I think you should change your mind a little bit. If you want to implement a publish-subscribe mechanism, which is more suitable in a microservices architecture, you shouldn't wait for a response on the producer side. In that scenario, you wait for another message to know whether the consumer consumed the message or not, and if it was not consumed, you send it again.
How about the implementation below:
On the producer side, you send messages to Kafka within a transaction in the producer. You should provide a mechanism here to send messages to Kafka only when the transaction on the producer side is committed. This is an atomicity issue, and I gave a link above that shows how to solve it.
On the consumer side, you poll messages from the Kafka topic one by one, in order, and you get the next message only when the current message has been consumed. If it is not consumed, you shouldn't get the next message, because the next message might be related to the current one, and consuming it could corrupt the consistency of your data. It's not the producer's concern when a message is not consumed. On the consumer side, you should provide retry and replay mechanisms to consume messages.
I think you shouldn't wait for a response on the producer side. Kafka is a very smart tool, and with its offset-commit capability, as a consumer you don't have to accept every message you poll from a topic. If you have a problem while processing a message, you simply don't commit the offset, so you won't move on to the next message.
With the implementation described above, you don't have a problem like "How could I distinguish when I should discard the message or reprocess it?"
Regards...
Actually, because of the complications you mentioned about organizing transactions over multiple microservices over Apache Kafka, I developed another concept and wrote a blog about it.
If you reach a level of complication where a Kafka solution might not be feasible anymore, you might find it an interesting read. It is too long to explain here, but basically it uses a J2EE container together with microservice principles and with full transaction support between the microservices, with the help of Spring Boot + Netflix.
Micro Services Fanout and Transaction Problems and Solutions with Spring Boot and Netflix