How to implement communication between consumer and producer with fast and slow workers? - rest

I'm looking for a pattern and existing implementations for my situation:
I have a synchronous SOA that uses REST APIs; in effect I implement remote procedure calls over REST. I also have some slow workers that process requests for a substantial time (around 30 seconds), and due to license constraints some requests can only be processed sequentially (see system setup).
What are recommended ways to implement communication for such case?
How can I mix synchronous and asynchronous communication when the consumer is behind a firewall, so I cannot easily push notifications about completed tasks to it, and I might not be able to let the consumer use my message broker if I have one?
Workers are implemented in Python using Flask and gunicorn. At the moment I'm using synchronous REST interfaces and tolerate the delay, as until now I only had fast workers. I looked at Kafka and RabbitMQ, and they would fit for backend-side communication, but how does the producer communicate with the consumer?
If the consumer fires an API request, my producer can return code 202, but then how should the producer notify the consumer that the result is available? Will the consumer have to poll the producer for results?
Also, if I use message brokers and my gateway acts on behalf of the consumer, it should keep a registry of requests (I already have a GUID for every request) and results. Which approach would you recommend for implementing it?

Producer - the agent that produces the message
Consumer - the agent that handles the message and implements the processing logic
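A minimal sketch of the 202-and-poll pattern the question describes, using Flask (already in the stack): the producer records each request under a GUID in a registry, returns 202 with a status URL, and the consumer polls that URL until the result is ready. A lock serializes the license-constrained work. The routes, registry layout, and the 0.1-second fake workload are all illustrative, not a definitive design.

```python
# Hedged sketch: 202 + polling with a GUID-keyed request registry.
import threading
import time
import uuid

from flask import Flask, jsonify, url_for

app = Flask(__name__)
registry = {}                       # request GUID -> {"status": ..., "result": ...}
registry_lock = threading.Lock()
worker_lock = threading.Lock()      # serializes the license-constrained work

def slow_worker(task_id, payload):
    with worker_lock:               # only one licensed job runs at a time
        time.sleep(0.1)             # stands in for ~30 s of real work
        with registry_lock:
            registry[task_id] = {"status": "done", "result": payload.upper()}

@app.route("/tasks", methods=["POST"])
def submit():
    task_id = str(uuid.uuid4())
    with registry_lock:
        registry[task_id] = {"status": "pending", "result": None}
    threading.Thread(target=slow_worker, args=(task_id, "payload")).start()
    resp = jsonify({"id": task_id})
    resp.status_code = 202          # accepted, not yet processed
    resp.headers["Location"] = url_for("status", task_id=task_id)
    return resp

@app.route("/tasks/<task_id>")
def status(task_id):
    with registry_lock:
        entry = registry.get(task_id)
    if entry is None:
        return jsonify({"error": "unknown id"}), 404
    return jsonify(entry), 200
```

The consumer follows the `Location` header and polls `GET /tasks/<id>` until `status` flips to `done`; since the firewall only blocks inbound traffic to the consumer, outbound polling always works.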

Related

Sending Http Request with Kafka Stream

I am aware that it's not recommended to send http request with kafka stream as the blocking nature of external RPC calls may impact performance.
However what if the use case doesn't allow me to avoid sending http request?
I'm building an application that consumes from an input topic; each message goes through various iterations of filtering, mapping, and joining with a KTable. At the end of these iterations the result is ready to be "delivered".
The "delivery" method is via HTTP: I have to call external REST APIs to send these results to various vendors. I also need to wait for the response to come back; based on the result I mark the delivery as either successful or failed and produce the outcome to an output topic, which is consumed by another service for archiving purposes.
I'm aware that HTTP calls block the calling stream thread, so I configured a timeout well below the consumer's max.poll.interval.ms to avoid a rebalance in case the external API service is down. Timed-out requests are also sent to a low-priority queue for a later delivery re-attempt.
As you can see, I cannot really avoid making external RPC calls within kafka streams. I'm just curious if there is better architecture that's meant for such use case?
If you cannot avoid it, one option is to send data to an outbound "request" topic, write a plain consumer to perform the requests, and produce the results back to a "response" topic with HTTP status codes or success/fail indicators, for example.
Then have Streams also consume this response topic for the joining.
The main reason not to do blocking RPC within Streams is that Streams is very sensitive to time, and increasing its timeouts excessively should generally be avoided when possible.
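A broker-free sketch of the suggested split: a plain consumer loop takes request messages, performs the blocking HTTP call outside of Streams, and emits a success/fail indicator toward a "response" topic. The `http_post` and `emit` callables and the topic name are illustrative stand-ins, not real client APIs.

```python
# Hedged sketch: blocking delivery calls moved out of the Streams topology.
def deliver_batch(messages, http_post, emit, response_topic="delivery-responses"):
    """Perform the blocking delivery call per message and report each outcome."""
    for msg in messages:
        try:
            ok = 200 <= http_post(msg) < 300   # blocking RPC happens here, not in Streams
        except Exception:
            ok = False                          # timeout etc. -> candidate for the retry queue
        emit(response_topic, msg, "success" if ok else "failed")
```

The Streams application then consumes the response topic for its join, so no stream thread ever waits on the network.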

Redis / Kafka - Why stream consumers are blocked?

Are Kafka stream/redis stream good for reactive architecture?
I am asking this mainly because both Redis and Kafka seem to block threads when consuming messages.
Is there a reason for this? I was hoping I could read messages with some callback - executed when the message is delivered, like pub/sub, in a reactive manner, not by blocking a thread.
The Kafka client is relatively low level, which is "good" in the sense that it gives you much flexibility about when (and in which thread) you do the record processing. In the end, to receive a record, someone has to block, since actually reading means sending fetch requests over and over. Whether the blocked thread is a main business thread or a dedicated I/O thread is the developer's choice.
You might take a look at higher-level offerings such as Spring-Kafka, or the Kafka Streams API / Kafka Connect frameworks, which provide "fatter" inversion-of-control containers, effectively addressing the concern above.
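To illustrate the point above: the fetch loop still blocks, but it can run on a dedicated I/O thread that hands records to a callback, so business threads never block. In this sketch `queue.Queue` stands in for the broker's fetch call; the function names are illustrative.

```python
# Hedged sketch: a blocking poll loop isolated on its own thread,
# dispatching records to a callback (the "reactive" feel without a reactive client).
import queue
import threading

def run_consumer(source, on_record, stop):
    """Blocking poll loop; meant to run on a dedicated consumer thread."""
    while not stop.is_set():
        try:
            record = source.get(timeout=0.05)  # the blocking "fetch"
        except queue.Empty:
            continue
        on_record(record)                      # callback, asynchronous from the caller's view
```

This is essentially the loop that frameworks like Spring-Kafka run for you inside their containers.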

Synchronous Messaging Queue

Can there be a scenario in which one would use the synchronous pub-sub model to build a system? Generally, pub-sub is used to achieve asynchronous behavior: the producer publishes messages to the queue and gets an acknowledgment, and the consumer later consumes and processes the message. Do any brokers, such as Kafka or RabbitMQ, support a synchronous use case?
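Synchronous request/response semantics are usually built on top of asynchronous messaging with a reply queue plus a correlation id; RabbitMQ documents this as its RPC tutorial pattern. A broker-free sketch of the idea, with `queue.Queue` standing in for broker queues and the doubling worker as a stand-in for real processing:

```python
# Hedged sketch: the correlation-id request/reply pattern behind "synchronous" pub-sub.
import queue
import threading
import uuid

def rpc_call(request, request_q, reply_q, timeout=2.0):
    """Publish a request tagged with a correlation id and block until its reply."""
    corr_id = str(uuid.uuid4())
    request_q.put((corr_id, request, reply_q))
    while True:
        got_id, result = reply_q.get(timeout=timeout)
        if got_id == corr_id:              # ignore replies meant for other callers
            return result

def worker(request_q):
    """Consume one request and send back a reply tagged with the same id."""
    corr_id, request, reply_q = request_q.get()
    reply_q.put((corr_id, request * 2))    # doubling stands in for real work
```

The caller blocks, so the system feels synchronous, while the broker in between remains fully asynchronous.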

Does Confluent Rest Proxy API (producer) retry event publish on failure

We are planning to use the Confluent REST Proxy as the platform layer for listening to all user events (and publishing them to Kafka). Working in a microservices model with varied types of event generators, we want our APIs/event generators to be decoupled from event listening/handling. Event resilience at the listening layer is also important to us.
From what I understand, if the REST Proxy layer fails to publish to Kafka (once) for any reason, it doesn't retry. This would need to be handled by the caller (the client layer), which would have to make synchronous calls and retry on failure. However, I couldn't find any details on this in the product documentation. Could someone please confirm?
Confluent REST Proxy developers claim that with the right REST Proxy cluster set-up and the right request batching by the client, performance as good as that of native producers can be achieved. Any exceptions or thoughts, positive or negative?
Calls to the REST Proxy producer API are blocking. If the client doesn't need to know the partition and offset details, can these calls be made non-blocking in any way, such that once the request is received, resilience is managed by the REST Proxy layer itself and the client just receives a 200 HTTP status as acknowledgement whenever a produce request arrives?
The REST Proxy is just a normal Kafka Producer and Kafka Consumer and can be configured with or without retries enabled, just as any other Kafka Producer can.
A single producer publishing via a REST Proxy will not achieve the same throughput as a single native Java Producer. However you can scale up many REST proxies and many HTTP producers to get high performance in aggregate. You can also mitigate the performance penalty imposed by HTTP by batching multiple messages together into a consolidated HTTP request to minimize the number of HTTP round trips on the wire.
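The batching idea above can be sketched as a small client-side helper that consolidates many messages into a single produce request. The body shape follows the REST Proxy's v2 JSON produce format; the URL, port, and topic name below are illustrative.

```python
# Hedged sketch: consolidating messages into one REST Proxy produce request.
import json

def build_batch(values):
    """Build one produce-request body carrying many records."""
    return json.dumps({"records": [{"value": v} for v in values]})

# POST the body to http://rest-proxy:8082/topics/my-topic
# with Content-Type: application/vnd.kafka.json.v2+json
```

One HTTP round trip then carries the whole batch, which is what recovers most of the throughput lost to per-message HTTP overhead.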

Is RabbitMQ capable of "pushing" messages from a queue to a consumer?

With RabbitMQ, is there a way to "push" messages from a queue TO a consumer as opposed to having a consumer "poll and pull" messages FROM a queue?
This has been the cause of some debate on a current project I'm on. The argument from one side is that having consumers (e.g. a Windows service) "poll" a queue until a new message arrives is somewhat inefficient and less desirable than having the message "pushed" automatically from the queue to the subscriber(s)/consumer(s).
I can only seem to find information supporting the idea of consumers "polling and pulling" messages off a queue (e.g. using a Windows service to poll the queue for new messages). There isn't much information on the idea of "pushing" messages to a consumer/subscriber...
Having the server push messages to the client is one of the two ways to get messages to the client, and the preferred way for most applications. This is known as consuming messages via a subscription.
The client is connected. (The AMQP/RabbitMQ/most messaging systems model is that the client is always connected - except for network interruptions, of course.)
You use the client API to arrange that your channel consume messages by supplying a callback method. Then whenever a message is available the server sends it to the client over the channel and the client application gets it via an asynchronous callback (typically one thread per channel). You can set the "prefetch count" on the channel which controls the amount of pipelining your client can do over that channel. (For further parallelism an application can have multiple channels running over one connection, which is a common design that serves various purposes.)
The alternative is for the client to poll for messages one at a time, over the channel, via a get method.
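The push (subscription) model described above looks roughly like this, written against pika's channel API names (`basic_qos`, `basic_consume`, `basic_ack`); the queue name and handler body are illustrative, and connection setup is omitted.

```python
# Hedged sketch: subscribing so the broker pushes messages to a callback.
def on_message(channel, method, properties, body):
    """Invoked by the client library whenever the broker pushes a message."""
    print("received:", body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

def start_subscription(channel, queue_name="tasks"):
    channel.basic_qos(prefetch_count=10)   # caps pipelined, unacknowledged messages
    channel.basic_consume(queue=queue_name, on_message_callback=on_message)
    channel.start_consuming()              # blocks; from here on, the broker pushes
```

The alternative pull style is a `basic_get` call per message; with a subscription the client never polls at all.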
You "push" messages from Producer to Exchange.
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
BTW, this fits IoT scenarios very well. Devices produce messages and send them to an exchange. The queue handles persistence, FIFO ordering and other features, as well as delivery of messages to subscribers.
And, by the way, you never "poll" the queue. Instead, you subscribe, similar to the observer pattern. A simple but powerful principle, I would say.
So it is similar to a post box or post office, except it sends you a notification when a message is available.
Quoting from the docs here:
AMQP brokers either deliver messages to consumers subscribed to queues, or consumers fetch/pull messages from queues on demand.
And from here:
Storing messages in queues is useless unless applications can consume them. In the AMQP 0-9-1 Model, there are two ways for applications to do this:
Have messages delivered to them ("push API")
Fetch messages as needed ("pull API")
With the "push API", applications have to indicate interest in consuming messages from a particular queue. When they do so, we say that they register a consumer or, simply put, subscribe to a queue. It is possible to have more than one consumer per queue or to register an exclusive consumer (excludes all other consumers from the queue while it is consuming).
Each consumer (subscription) has an identifier called a consumer tag. It can be used to unsubscribe from messages. Consumer tags are just strings.
The RabbitMQ broker is a server that won't send data to a consumer unless the consumer client has registered itself with the server. But then the question becomes:
Can RabbitMQ keep client consumer details and connect to the client when a packet arrives?
The answer is no. The alternative is to write a plugin yourself that maintains client information in some kind of config; the plugin would pull from the RabbitMQ queue and push to the client.
This plugin might help:
https://www.rabbitmq.com/shovel.html
Frankly speaking, the client would need to implement the AMQP protocol to receive messages and listen for connections on some port, which effectively makes it another server.
Regards,
Vishal