Let's say you have a POST request with some product as the payload. Traditionally, your HttpRequest lifecycle should end with an HttpResponse carrying the requested action's result; in our case, a response saying "Product created" might be enough.
But with a message broker, things might turn out like this:
1. The request handler creates the appropriate message, CreateProduct(...), and produces it to a topic in the message broker.
2. Then what?
3. A consumer retrieves and processes the message by actually creating the product in a persistent database.
4. Then what?
What should happen at step 2?
If we send a response saying "Your product should be created very soon, keep waiting, we'll keep you posted":
How can the client be notified after a response has already been sent?
Are we forced to use WebSocket so we can keep the connection open?
What should happen at step 4?
I have my opinion but I would like to know how you handle it in production.
The app that actually created the product can produce a "Product created" message to a status topic in the message broker, so the original message's producer can consume it and then notify the client somehow. The only way I see this being possible is through a WebSocket connection.
So I would like to know: is WebSocket the only way to do HTTP request/response involving a message broker? And is it reasonable to use a message broker for HTTP request/response at all?
You could think of this in a fully asynchronous fashion (no WebSocket needed then).
You do a POST HTTP request, and this creates a unique ID associated with your job. This ID is stored in a database as well, with a status like 'processing'.
The ID is also returned to your client.
Your job ID (and its payload parameters) travels through Kafka and finally reaches a consumer. This consumer processes the job and commits the results to an external DB (or whatever you need).
When the job is done, you update the job status to 'done' or something like this.
In the meantime, client side, you poll an endpoint that checks the job DB to see whether the job is done or not.
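A minimal sketch of this pattern with Spring Boot; the endpoint paths, topic name, and the JobStore DAO are assumptions for illustration, not a prescribed API:

```java
import java.util.UUID;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Assumed DAO over the job-status table.
interface JobStore {
    void save(String jobId, String status);
    String find(String jobId); // returns null for unknown ids
}

@RestController
class JobController {

    private final KafkaTemplate<String, String> kafka;
    private final JobStore jobs;

    JobController(KafkaTemplate<String, String> kafka, JobStore jobs) {
        this.kafka = kafka;
        this.jobs = jobs;
    }

    @PostMapping("/products")
    ResponseEntity<String> create(@RequestBody String payload) {
        String jobId = UUID.randomUUID().toString();
        jobs.save(jobId, "processing");               // status stored before producing
        kafka.send("product-jobs", jobId, payload);   // job ID + payload travel through Kafka
        return ResponseEntity.accepted().body(jobId); // client gets the ID back right away
    }

    @GetMapping("/jobs/{id}")
    ResponseEntity<String> status(@PathVariable String id) {
        String status = jobs.find(id);                // "processing", "done", or null
        return status == null ? ResponseEntity.notFound().build()
                              : ResponseEntity.ok(status);
    }
}
```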
This is a very common way to cover your needs.
Yannick
I have a use case where I want to implement synchronous request/response on top of Kafka. For example, when the user sends an HTTP request, I want to produce a message on a specific Kafka input topic that triggers a dataflow eventually resulting in a response produced on an output topic. I then want to consume the message from the output topic and return the response to the caller.
The workflow is:
HTTP Request -> produce message on input topic -> (consume message from input topic -> app logic -> produce message on output topic) -> consume message from output topic -> HTTP Response.
To implement this case, upon receiving the first HTTP request I want to be able to create a consumer on the fly that will consume from the output topic, before producing a message on the input topic. Otherwise there is a possibility that messages on the output topic are "lost". Consumers in my case have a random group.id and have auto.offset.reset = latest for application reasons.
My question is how I can make sure that the consumer is ready before producing messages. I make sure that I call SubscribeTopics before producing messages, but in my tests so far, when there are no committed offsets and Kafka is resetting offsets to latest, there is a possibility that messages are lost and never read by my consumer, because Kafka sometimes thinks that the consumer registered after the messages were produced.
My workaround so far is to sleep for a bit after I create the consumer, to allow Kafka to proceed with the offset-reset workflow before I produce messages.
I have also tried to implement logic in a rebalance callback (triggered by a consumer subscribing to a topic), in which I call assign with offset = latest for the topic partition, but this doesn't seem to have fixed my issue.
Hopefully there is a better solution out there than sleep.
Most HTTP client libraries have an implicit timeout. There's no guarantee your consumer will ever consume an event or that a downstream producer will send data to the "response topic".
Instead, have your initial request immediately return a 202 Accepted status (or 400, for example, if you do request validation) with some tracking ID. Then require polling GET requests by id for status updates, returning either a 404 status or a 200 + some status field within the response body.
You'll need a database to store intermediate state.
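For completeness, a sketch of the worker side that writes that intermediate state; the topic, group, and the JobStore lambda are assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class JobWorker {

    // Assumed single-method DAO over the job-status table.
    interface JobStore { void update(String jobId, String status); }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "job-workers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        JobStore jobs = (jobId, status) -> { /* UPDATE jobs SET status = ? WHERE id = ? */ };

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("product-jobs"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    createProduct(rec.value());      // the real work: persist the product
                    jobs.update(rec.key(), "done");  // the client's next poll now sees "done"
                }
            }
        }
    }

    static void createProduct(String payload) { /* insert into the products DB */ }
}
```

Nothing here holds an HTTP connection open; the status row is the only contract between the two sides.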
I have an architecture of microservices where each service's producer writes to the same topic. I have two instances of the Kafka REST Proxy, each listening to that topic, but the problem here is this:
Suppose a request comes to instance-1 of the REST proxy and is redirected to the microservice; that service finishes the job and writes the response to the topic, but the response is consumed by the second instance of the REST proxy, say instance-2.
What should I do to solve this? Is there any kind of application_id we can attach to the request, so that when the microservice is done with the job and another instance of the REST proxy consumes the response, we can redirect the response to the instance that received the original request?
Your proxies form a Kafka consumer group, just as any other application. When you request records, you give both the consumer group and the consumer instance name (such as the host of the HTTP client): GET /consumers/(string:group_name)/instances/(string:instance)/records
You should generally not try to strictly control which consumers get which information beyond assigning a unique instance to each request, to allow for parallel consumption (assuming this is what you want).
Also, the REST proxy isn't consuming anything unless you have another application that's requesting that information, e.g. the GET request above.
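For illustration, the group/instance mechanics look roughly like this against the REST Proxy v2 API; the host, group, instance, and topic names are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProxyConsumerExample {
    static final String BASE = "http://localhost:8082"; // assumed REST Proxy address

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Register a named instance inside the consumer group.
        send(http, HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/consumers/proxy-group"))
                .header("Content-Type", "application/vnd.kafka.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"name\":\"instance-1\",\"format\":\"json\"}"))
                .build());

        // 2. Subscribe that instance to the response topic.
        send(http, HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/consumers/proxy-group/instances/instance-1/subscription"))
                .header("Content-Type", "application/vnd.kafka.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"topics\":[\"responses\"]}"))
                .build());

        // 3. Fetch records for that specific instance only.
        send(http, HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/consumers/proxy-group/instances/instance-1/records"))
                .header("Accept", "application/vnd.kafka.json.v2+json")
                .GET()
                .build());
    }

    static void send(HttpClient http, HttpRequest req) throws Exception {
        System.out.println(http.send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```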
I'm trying to implement an RPC architecture using Kafka as a message broker. The decision to use Kafka instead of another message broker solution is dictated by the current context.
The actual implementation consists of two different types of service:
The receiver: this service consumes messages from a Kafka topic, processes them, and then publishes the response messages to a response topic;
The caller: this service receives HTTP requests, publishes messages to the receiver topic, consumes the receiver service's response topic for the response message, and then returns it as an HTTP response.
The request/response messages published in the topics are related by the message key.
The receiver implementation was fairly simple: at startup, it creates the "request" and "response" topics, then starts consuming the request topic with the service group id (many instances of the receiver share the same group id in order to balance requests properly). When a request arrives, the service processes the request and then publishes the response to the response topic.
My problem is with the caller implementation, in particular with consuming the response from the response topic.
With the following assumptions:
1. The HTTP requests must be managed concurrently;
2. There could be more than one instance of this caller service;
3. Every single thread/service must receive all the messages in the response topic, in order to find the message with the corresponding request key.
As an example, imagine that two caller services produce two messages with keys 1 and 2 respectively. These messages will be published in the receiver topic and processed, and the responses will then be published in the topic receiver-responses. If the two caller services share the same group id, it could be that response 1 arrives at the service that published message 2 and vice versa, resulting in an HTTP timeout.
To avoid this problem, I've come up with these possible solutions:
Creating a new group for every request (EDIT: but a group cannot be deleted via code, hence another service would be needed to clean these groups out of ZooKeeper);
Creating a new topic for every request, then deleting it afterwards.
Hoping that I made myself sufficiently clear - I must admit I am a beginner to Kafka - my question would be:
Which solution is more costly than the other? Or is there another topic/group configuration that could satisfy assumption 3?
Thanks.
I think I've found a possible solution. A group is automatically deleted when its offsets haven't been updated for a period of time, determined by the broker configuration offsets.retention.minutes.
The frequency of this check can be adjusted with the configuration offsets.retention.check.interval.ms.
This way, when a consumer connects to the response topic searching for the reply message, the created group can simply be abandoned, and it will be deleted automatically later in time.
I have the following typical scenario:
An order service used to purchase products; it acts as the commander of the distributed transaction.
A product service with the list of products and their stock.
A payment service.
```
   Orders DB           Products DB
       |                    |
+--------------+   +----------------+   +----------------+
| OrderService |   | ProductService |   | PaymentService |
+--------------+   +----------------+   +----------------+
       |                    |                   |
       |        +--------------------+          |
       +--------| Kafka orders topic |----------+
                +--------------------+
```
The normal flow would be:
The user orders a product.
Order service creates an order in its DB and publishes a message in the Kafka topic "orders" to reserve a product (PRODUCT_RESERVE_REQUEST).
Product service decreases the product stock by one unit in its DB and publishes a PRODUCT_RESERVED message in "orders".
Order service gets the PRODUCT_RESERVED message and orders the payment by publishing a PAYMENT_REQUESTED message.
Payment service performs the payment and answers with a PAYED message.
Order service reads the PAYED message and marks the order as COMPLETED, finishing the transaction.
I am having trouble dealing with error cases, e.g. let's assume this:
Payment service fails to charge for the product, so it publishes a PAYMENT_FAILED message.
Order service reacts by publishing an UNDO_PRODUCT_RESERVATION message.
Product service increases the stock in the DB to cancel the reservation and publishes PRODUCT_UNRESERVATION_COMPLETED.
Order service finishes the transaction saving the final state of the order as CANCELLED_PAYMENT_FAILED.
In this scenario, imagine that for whatever reason the order service publishes an UNDO_PRODUCT_RESERVATION message but doesn't receive the PRODUCT_UNRESERVATION_COMPLETED message, so it retries by publishing another UNDO_PRODUCT_RESERVATION message.
Now, imagine that those two UNDO_PRODUCT_RESERVATION messages for the same order end up arriving at ProductService. If I process both of them I could end up setting an invalid stock for the product.
In this scenario how can I implement idempotency?
UPDATE:
Following Artem's instructions I can now detect duplicated messages (by checking the message header) and ignore them, but there may still be situations like the following where I shouldn't ignore them:
Order Service sends UNDO_PRODUCT_RESERVATION
Product service gets the message and starts processing it but crashes before updating the stock.
Order Service doesn't get a response, so it retries sending UNDO_PRODUCT_RESERVATION.
Product service knows this is a duplicated message, BUT in this case it should repeat the processing.
Can you help me come up with a way to support this scenario as well? How could I distinguish when I should discard the message or reprocess it?
We used spring-integration-kafka to produce and consume messages with Kafka in our microservices. In our case, we send org.springframework.messaging.Message objects to topics and get the same type back from topics after deserialization from a byte array. A Message entity carries message-id, sent-time, etc. header values besides the message payload, which is the actual object you want to transfer from one microservice to the others.

We use the unique message-id value to implement idempotency. On the producer side, you must implement some logic to ensure that the message-id of the Message is the same when it is produced multiple times. This is actually related to your produce logic. In our case, we use Publishing Events Using Local Transactions, which is very well described by Chris Richardson in the blog post https://www.nginx.com/blog/event-driven-data-management-microservices/. With this approach we can recreate the Message object with the same message-id on the producer side.

On the consumer side, we persist all the consumed message-id values to a database and check these ids before processing received messages. If we see a message whose id is in our persistent store, we simply ignore it.
In your case, to implement idempotency:
you should keep a unique identifier with the messages,
On the producer side, you must generate the same identifier when the message is produced multiple times,
On the consumer side, you must check the received id to detect whether it was consumed before or not (see the sketch below).
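A rough sketch of that consumer-side check; the Database and Tx interfaces and the "message-id" header name are assumptions, and the essential point is that the processed id is persisted in the same local transaction as the business change:

```java
import java.util.function.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Assumed persistence abstractions.
interface Database { void inTransaction(Consumer<Tx> work); }
interface Tx {
    boolean processedIdExists(String id);
    void updateStock(String payload);
    void insertProcessedId(String id);
}

class IdempotentConsumer {
    private final Database db;

    IdempotentConsumer(Database db) { this.db = db; }

    void onMessage(ConsumerRecord<String, String> record) {
        String messageId = new String(record.headers().lastHeader("message-id").value());
        db.inTransaction(tx -> {
            if (tx.processedIdExists(messageId)) {
                return; // duplicate of a fully processed message: safe to ignore
            }
            tx.updateStock(record.value());  // the actual state change
            tx.insertProcessedId(messageId); // committed atomically with it
        });
    }
}
```

With this shape, the crash-before-update case from the UPDATE resolves itself: if the service dies before the transaction commits, neither the stock change nor the id was saved, so the retried message is simply processed again.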
Regarding the second scenario described in the UPDATE:
I think you should change your mind a little bit. If you want to implement a publish-subscribe mechanism, which is more suitable in a microservices architecture, you shouldn't wait for a response on the producer side. In this scenario, you wait for another message to know whether the consumer consumed the message or not, and if it wasn't consumed, you send it again.
How about the implementation below:
On the producer side, you send messages to Kafka within a transaction in the producer. You should provide a mechanism here to send messages to Kafka only when the transaction on the producer side is committed. This is an atomicity issue, and I gave a link above which shows how to solve it.
On the consumer side, you poll messages from the Kafka topic one by one, in order, and you get the next message only when the current message can be consumed. If it cannot be consumed, you shouldn't get the next message, because the next message might be related to the current one, and consuming it could corrupt the consistency of your data. It's not the producer's concern when a message is not consumed. On the consumer side, you should provide retry and replay mechanisms to consume messages.
I think you shouldn't wait for a response on the producer side. Kafka is a very smart tool, and with its offset-commit capability, as a consumer you aren't forced to treat a message as consumed the moment you poll it from a topic. If you have a problem while processing a message, you simply don't commit the offset, so you won't move on to the next one.
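A sketch of that consume-then-commit discipline with the plain Java consumer; the topic and group names are assumed, and auto-commit is disabled so an unprocessed message is re-delivered after a restart:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-service");
        props.put("enable.auto.commit", "false"); // we decide when a message counts as consumed
        props.put("max.poll.records", "1");       // strict one-by-one consumption, as described above
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    handle(rec);           // may throw: the offset then stays uncommitted
                    consumer.commitSync(); // only now does the message count as consumed
                }
            }
        }
    }

    static void handle(ConsumerRecord<String, String> rec) {
        // business logic for the message goes here
    }
}
```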
With the implementation described above, you don't have a problem like "How could I distinguish when I should discard the message or reprocess it?"
Regards...
Actually, because of the complications you mentioned about organizing transactions over multiple microservices over Apache Kafka, I developed another concept and wrote a blog post about it.
If you reach a state of complication where a Kafka solution might not be feasible anymore, you might find it an interesting read. It is too long to explain here, but basically it uses a J2EE container fully along microservice principles, with full transaction support between the microservices with the help of Spring Boot + Netflix.
Micro Services Fanout and Transaction Problems and Solutions with Spring Boot and Netflix
I am investigating Kafka 0.9 as a hobby project and have completed a few "Hello World" type examples.
I have got to thinking about real-world Kafka applications based on request/response messaging in general, and more specifically how to link a Kafka request message to its response message.
I was thinking along the lines of using a generated UUID as the request message key and employing this request UUID as the associated response message key, much the same mechanism as WebSphere MQ's message correlation id.
My end-to-end process would be:
1) The Kafka client generates a random UUID and sends a single Kafka request message.
2) The server consumes this request message and extracts & stores the request UUID value.
3) The server completes a business process using the message payload.
4) The server responds with a response message that employs the stored UUID value from the request message as the response message key.
5) The Kafka client polls the response topic until it either times out or retrieves a message with the original request UUID value.
What I am concerned about is that Kafka consumer polling will remove other clients' messages from the response topic and increment the offsets, making other clients fail.
Am I trying to apply Kafka in a use case it was never designed for?
Is it possible to implement request/response messaging in Kafka?
Even though Kafka provides convenience methods to persist the committed offsets for a given consumer group, you're not required to use that behavior and can write your own if you feel the need. Even so, the use of Kafka the way you've described it is a bit awkward for the use case as each client needs to repeatedly search the topic for a specific response. That's inefficient at best.
You could break the problem into two parts, continuing to use Kafka to deliver requests to and responses from your server. The only piece you'd need to add would be some sort of API layer that your clients talk to, and which hides the Kafka-specific logic from your clients. This layer would need a local DB (relational or NoSQL) that could store responses by UUID, making it very fast and easy for the API to answer whether a response is available for a specific UUID.
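For illustration, the heart of such an API layer could be as small as this; the topic name is assumed, and an in-memory map stands in for the local DB (a real service would persist it):

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ResponseCache {
    private final Map<String, String> responsesByUuid = new ConcurrentHashMap<>();

    // Background thread: drain the response topic into the store.
    public void drain(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("responses"));
        while (true) {
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                responsesByUuid.put(rec.key(), rec.value()); // response keyed by request UUID
            }
        }
    }

    // What the client-facing API calls: a constant-time lookup, no topic crawling.
    public Optional<String> lookup(String uuid) {
        return Optional.ofNullable(responsesByUuid.get(uuid));
    }
}
```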
Easier! You can simply record in ZooKeeper that UUID X should be answered on partition Y, and make the producer that sent that UUID consume partition Y... Does that make sense?
I think you need a well-defined shard key for the service that invokes the request. Your request should contain this shard key and the name of the topic where the response should be posted. Also, you should create some sort of state machine, and when a message regarding your task comes in, you transition to some state... this would be for a strictly async design.
In theory, you could:
assign an ID to each request and message that is supposed to get a result message;
create a hash function that maps this ID to the identifier of a partition;
when sending the result message, use the same hash function to get the identifier of the partition to send it to;
in the producer, observe only that given partition.
That would reduce the need to crawl many messages in that topic to filter out the result required by the waiting request handler.
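A sketch of that scheme; the topic name, partition count, and the modulo hash are assumptions, and note this bypasses Kafka's default partitioner, so both sides must share the same function:

```java
import java.util.List;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class PartitionPinning {
    static final int NUM_PARTITIONS = 8; // assumed partition count of "responses"

    // Both the responder and the waiting caller must use this same mapping.
    static int partitionFor(String requestId) {
        return (requestId.hashCode() & 0x7fffffff) % NUM_PARTITIONS; // mask keeps it non-negative
    }

    // Responder side: send the result explicitly to the request's partition.
    static void respond(KafkaProducer<String, String> producer, String requestId, String result) {
        int partition = partitionFor(requestId);
        producer.send(new ProducerRecord<>("responses", partition, requestId, result));
    }

    // Caller side: read only the one partition the answer can arrive on.
    static void await(KafkaConsumer<String, String> consumer, String requestId) {
        int partition = partitionFor(requestId);
        consumer.assign(List.of(new TopicPartition("responses", partition)));
        // ...poll until a record with key == requestId shows up, or time out.
    }
}
```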