We have multiple services and use the publish/subscribe pattern for sending events from service A to be handled by other services (B & C). The goal is to allow multiple queues to receive messages from a Producer by matching the binding key / topic.
This works fine if services B & C start first. In that case, the Subscribe method creates the Exchanges and Queues to receive the messages when published. However, if service A starts first, then the published messages are lost as the receiving queues are not created.
We are looking for the best-practice way to ensure the queues are created before the publish. The producer has no knowledge of the consumers, and there may be more consumers over time for a given message type, so the producer code cannot take responsibility for queue creation.
Our current implementation uses RabbitMQ as the backplane, but we want to migrate over time to SQS and Azure Service Bus, so the approach needs to be message-broker agnostic.
The simple answer: start your consumer services before you start your publishers.
Alternatively, you could use the DeployTopologyOnly flag with a custom build or command line to deploy the queues/exchanges/bindings without actually starting the consumers, though it will still be the consumer service, with all of its configuration, doing the deployment.
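Outside of that flag, the broker-level idea is simply to declare the topology idempotently before anything publishes. Below is a minimal sketch using pika against RabbitMQ; the exchange, queue, and binding names are hypothetical, and the same script could run as a standalone deploy step ahead of the publisher.

```python
# Declare exchanges/queues/bindings idempotently: re-running these calls with
# the same arguments against an existing topology is a no-op, so this script
# is safe as a pre-publish deploy step. Names below are hypothetical.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)
channel.queue_declare(queue="service-b.order-created", durable=True)
channel.queue_bind(queue="service-b.order-created",
                   exchange="events",
                   routing_key="order.created")

connection.close()
```

Once the queue and binding exist, messages published to the exchange are buffered in the queue even if the consumer has not started yet.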
I need to build a secure REST API through which different client services can post and receive messages from other clients, like a mailbox, but the messages will be in JSON form and should be persistent. I am expecting around 5000 client services, with around 50 messages per service per day.
My questions are:
Can I use Kafka for this? (I think I will need some wrapper over Kafka to manage other tasks.)
If yes, are the outbox and inbox going to be separate topics for each service? (2 topics per service, so 5000*2 topics. My plan is to create them dynamically as new clients join.)
What are the alternative technologies for writing this kind of gateway?
Any help will be appreciated.
You can't use Kafka to implement a REST API, because REST implies request/response while Kafka is just a message queue (Kafka doesn't provide a mechanism to respond to messages). You can use Kafka to produce messages to be consumed by other services. The idea of message queues is to decouple the producer from the consumer and vice versa. When a consumer receives a message it acts on it; that's it. But when you say inbox/outbox, you imply that there is a response for each message, which means the producers' and consumers' pace must be similar; that couples them, which is against the nature of message queues.
It seems like in your case it makes more sense to use HTTP request/response or even WebSockets. If you want to save the request/response data (making it persistent), you can save it in a database or object storage (like S3), log it, or send each message to Kafka so that Kafka stores all of your messages. Writes to Kafka will actually be very fast because Kafka is, roughly speaking, an append-only log. You can then search message values using ksqlDB.
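For the persistence part, a minimal sketch with kafka-python is below; the topic name, message key, and message shape are all hypothetical.

```python
# Persist each request/response as a JSON message in Kafka.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",  # wait for full acknowledgement, for durability
)

# Keying by client id keeps one client's messages ordered on one partition.
producer.send(
    "client-messages",
    key=b"client-42",
    value={"from": "client-42", "to": "client-7", "body": "hello"},
)
producer.flush()
```

Note that this handles storage only; the request/response exchange itself would still go over HTTP or WebSockets as described above.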
I'm using ActiveMQ Artemis 2.18.0 and some Spring Boot clients that communicate with each other via topics. The Spring Boot clients use JMS for all MQTT operations.
I'd like to know if it is possible for a producer with one or more subscribers to find out whether a certain subscriber is actively listening or not. For example, there are 3 clients - SB1, SB2, and SB3. SB1 publishes to test/topic, and SB2 and SB3 are subscribed to test/topic. If SB2 shuts down for any reason would it be possible for SB1 to become aware of this?
I understand that queues would be the way to go for this, but my project is much better suited to the use of topics, and it is set up this way already and works fine. There's just one operation where it must be determined whether the listener is active or not in order to update the listener's online status, a crucial parameter. Right now, the clients and the server continually poll a database so that the online status is periodically updated. I want to avoid doing this and use something that Artemis may provide instead.
Apache ActiveMQ Artemis emits notifications to inform listeners of potentially interesting events, such as a consumer being created or closed; see Management Notifications at http://activemq.apache.org/components/artemis/documentation/latest/management.html.
A listener on the management notification address will receive a message for each consumer created or closed; see the Management Notification Example at http://activemq.apache.org/components/artemis/documentation/latest/examples.html#management-notification
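As a rough illustration, here is a sketch that listens on the notification address over STOMP using stomp.py. It assumes the default notification address activemq.notifications and the _AMQ_NotifType message header; verify both against your broker's configuration before relying on them.

```python
# Watch Artemis management notifications for consumer lifecycle events.
# Uses the stomp.py 8.x listener API (on_message receives a frame).
import time
import stomp

class NotificationListener(stomp.ConnectionListener):
    def on_message(self, frame):
        notif_type = frame.headers.get("_AMQ_NotifType", "")
        if notif_type in ("CONSUMER_CREATED", "CONSUMER_CLOSED"):
            # Update the subscriber's online status here instead of polling a DB.
            print(notif_type, frame.headers)

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("", NotificationListener())
conn.connect("admin", "admin", wait=True)  # credentials are placeholders
conn.subscribe(destination="activemq.notifications", id="1", ack="auto")
time.sleep(60)  # keep the process alive while notifications arrive
conn.disconnect()
```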
Part of the point of pub/sub-based messaging is to decouple the information producer (publisher) from the consumer (subscriber). As a rule, a publisher REALLY shouldn't care if there even are any subscribers.
If you want to know the status of the subscriber, then it's up to the subscriber to update this, not the publisher. Features like MQTT's Last Will & Testament allow the subscriber's status to be updated even when it fails to do so explicitly before going offline.
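To make the Last Will idea concrete, here is a hedged sketch with paho-mqtt; the status topic and client id are hypothetical, and since the question mentions MQTT this assumes the broker has MQTT enabled.

```python
# The subscriber publishes a retained "online" status and registers a Last
# Will, so the broker publishes "offline" for it if the connection dies.
# Written against the paho-mqtt 1.x API; 2.x additionally requires a
# CallbackAPIVersion argument to Client().
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="SB2")
client.will_set("status/SB2", payload="offline", qos=1, retain=True)
client.connect("localhost", 1883)
client.publish("status/SB2", payload="online", qos=1, retain=True)
client.loop_start()
```

The publisher (SB1 in the example) would then subscribe to status/SB2 instead of polling a database.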
I'm looking for a pattern and existing implementations for my situation:
I have a synchronous SOA that uses REST APIs; in fact, I implement remote procedure calls over REST. I have some slow workers which process requests for a substantial time (around 30 seconds), and due to license constraints I can process some requests only sequentially (see system setup).
What are the recommended ways to implement communication for such a case?
How can I mix synchronous and asynchronous communication when the consumer is behind a firewall, so I cannot easily send notifications about completed tasks, and I might not be able to let the consumer use my message broker if I have one?
The workers are implemented in Python using Flask and gunicorn. At the moment I'm using synchronous REST interfaces and allow for the delay, as I previously only had fast workers. I looked at Kafka and RabbitMQ, and they would fit for backend-side communication; however, how does the producer communicate with the consumer?
If the consumer fires an API request, my producer can return code 202, but then how shall the producer notify the consumer that the result is available? Will the consumer have to poll the producer for results?
Also, if I use message brokers and my gateway acts on behalf of the consumer, it should have a registry of requests (I already have a GUID for every request) and results; which approach would you recommend for implementing it?
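Since the consumer is behind a firewall, polling is the natural fit for that last hop. Here is a minimal Flask sketch of the 202-plus-status-URL flow the question describes, with an in-memory dict standing in for the request/result registry:

```python
# Accept work, return 202 with a status URL, and let the consumer poll.
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
results = {}  # request GUID -> result (None while still processing)

@app.route("/tasks", methods=["POST"])
def submit():
    request_id = str(uuid.uuid4())
    results[request_id] = None
    # hand request.get_json() off to the slow-worker queue here
    return jsonify({"status_url": f"/tasks/{request_id}"}), 202

@app.route("/tasks/<request_id>", methods=["GET"])
def poll(request_id):
    if request_id not in results:
        return jsonify({"error": "unknown request"}), 404
    if results[request_id] is None:
        return jsonify({"state": "pending"}), 200
    return jsonify({"state": "done", "result": results[request_id]}), 200
```

In production the registry would live in a shared store rather than process memory, since gunicorn runs multiple worker processes.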
Producer - the agent which produces the message.
Consumer - the agent which can handle the message and implements the logic for processing it.
In our scenario we have a set of microservices which interact with other services by sending event messages. We anticipate millions of messages per day at peak. Every message is targeted at one or more listener types.
Our requirements are as follows:
Zero lost messages.
The ability to dynamically register multiple listeners of a specific type in order to increase throughput.
Listeners are not guaranteed to be alive when messages are dispatched.
We are considering two options:
Send each message to a main JMS queue; listeners on that queue route the messages to specific queues according to message content, and the target services listen to those specific queues.
Send messages to a Kafka topic per message type; the target services subscribe to the relevant topic and consume the messages.
What are the pros and cons of using JMS or Kafka for this purpose?
Your first requirement is "zero lost messages". However, if you want publish-subscribe semantics (i.e. topics in JMS) and listeners are not guaranteed to be alive when messages are dispatched, then non-durable JMS topic subscriptions are a non-starter, as those messages will simply be discarded (i.e. lost); each listener would need a durable subscription to avoid that.
I would suggest going with Kafka, as it has a fault-tolerance mechanism; even if a message is not captured by any listener at the time it is produced, it is retained and can easily be retrieved from the Kafka cluster later.
Along with this, you can easily add new listeners to a consumer group, and Kafka (together with ZooKeeper) will manage the rebalancing for you.
In summary, Kafka is a distributed publish-subscribe messaging system that is designed to be fast, scalable, and durable. Like many publish-subscribe messaging systems, Kafka maintains feeds of messages in topics. Producers write data to topics and consumers read from topics.
It is also very easy to integrate.
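A minimal sketch of that listener-group pattern with kafka-python; the topic, group id, and handler are hypothetical:

```python
# Listeners of one type share a group_id, so adding instances raises
# throughput: Kafka rebalances the topic's partitions among them.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "order-events",                     # one topic per message type
    bootstrap_servers="localhost:9092",
    group_id="billing-service",         # all billing listeners share this id
    auto_offset_reset="earliest",       # a new group reads from the beginning
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    handle(record.value)  # hypothetical per-listener-type handler
```

Because Kafka retains messages for a configured period regardless of consumption, a listener that was down when a message was dispatched simply resumes from its last committed offset.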
With RabbitMQ, is there a way to "push" messages from a queue TO a consumer as opposed to having a consumer "poll and pull" messages FROM a queue?
This has been the cause of some debate on a current project I'm on. The argument from one side is that having consumers (e.g. a Windows service) "poll" a queue until a new message arrives is somewhat inefficient and less desirable than having the message "pushed" automatically from the queue to the subscriber(s)/consumer(s).
I can only seem to find information supporting the idea of consumers "polling and pulling" messages off a queue (e.g. using a Windows service to poll the queue for new messages). There isn't much information on the idea of "pushing" messages to a consumer/subscriber.
Having the server push messages to the client is one of the two ways to get messages to the client, and the preferred way for most applications. This is known as consuming messages via a subscription.
The client is connected. (The AMQP/RabbitMQ/most messaging systems model is that the client is always connected - except for network interruptions, of course.)
You use the client API to arrange that your channel consume messages by supplying a callback method. Then whenever a message is available the server sends it to the client over the channel and the client application gets it via an asynchronous callback (typically one thread per channel). You can set the "prefetch count" on the channel which controls the amount of pipelining your client can do over that channel. (For further parallelism an application can have multiple channels running over one connection, which is a common design that serves various purposes.)
The alternative is for the client to poll for messages one at a time, over the channel, via a get method.
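Both styles, in a minimal pika sketch (the queue name is hypothetical):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work")

# Push (subscription): set the prefetch window, register a callback, then
# block while the broker delivers messages as they arrive.
channel.basic_qos(prefetch_count=10)

def on_message(ch, method, properties, body):
    print("got", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="work", on_message_callback=on_message)
# channel.start_consuming()  # blocks; commented out so the pull example runs

# Pull (get): explicitly fetch a single message; method is None when the
# queue is empty.
method, properties, body = channel.basic_get(queue="work", auto_ack=False)
if method is not None:
    print("pulled", body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection.close()
```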
You "push" messages from Producer to Exchange.
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
By the way, this fits IoT scenarios very well. Devices produce messages and send them to an exchange; the queue handles persistence, FIFO ordering, and other features, as well as delivery of messages to subscribers.
And you never "poll" the queue. Instead, you subscribe, similar to the observer pattern. Generally, I would say it is an ingenious principle.
So it is similar to a post box or post office, except that it sends you a notification when a message is available.
Quoting from the docs here:
AMQP brokers either deliver messages to consumers subscribed to queues, or consumers fetch/pull messages from queues on demand.
And from here:
Storing messages in queues is useless unless applications can consume them. In the AMQP 0-9-1 Model, there are two ways for applications to do this:
Have messages delivered to them ("push API")
Fetch messages as needed ("pull API")
With the "push API", applications have to indicate interest in consuming messages from a particular queue. When they do so, we say that they register a consumer or, simply put, subscribe to a queue. It is possible to have more than one consumer per queue or to register an exclusive consumer (excludes all other consumers from the queue while it is consuming).
Each consumer (subscription) has an identifier called a consumer tag. It can be used to unsubscribe from messages. Consumer tags are just strings.
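In pika, the consumer tag the quote mentions is what basic_consume returns, and basic_cancel with that tag unsubscribes; a small sketch with a hypothetical queue name:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work")

# Registering a consumer ("push API") returns the consumer tag.
tag = channel.basic_consume(
    queue="work",
    on_message_callback=lambda ch, method, props, body: print(body),
    auto_ack=True,
)
print("subscribed with consumer tag:", tag)  # consumer tags are just strings

channel.basic_cancel(tag)  # unsubscribe using the tag
connection.close()
```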
The RabbitMQ broker is like a server that won't send data to a consumer unless the consumer client registers itself with the server. That raises the question below:
Can RabbitMQ keep the consumer client's details and connect to the client when a message arrives?
The answer is no. The alternative is to write a plugin yourself that maintains client information in some kind of configuration; the plugin would pull from the RabbitMQ queue and push to the client.
The Shovel plugin might help here:
https://www.rabbitmq.com/shovel.html
Frankly speaking, the client would need to implement the AMQP protocol to receive messages that way, and would have to listen for connections on some port, which makes it sound like another server.
Regards,
Vishal