Message brokers - "message has been read" acknowledgment solution - server

I am implementing a service that dispatches messages to clients using an arbitrary message broker. A single user may have multiple clients and the message will be dispatched to each one of them. Once the user has read the message on one client, I want the message removed from the user's other clients.
Do message brokers typically implement this functionality, or will I need a custom solution?
For a custom solution, I was thinking that the broker could maintain a separate reply topic to which a client will deliver a message to say that the user has read the message. The service can consume messages on this reply topic, and dispatch another message to the user's other clients that tells them to remove the message.
Is this how such a solution might typically be implemented?
If it helps, I am considering using MQTT as a message protocol.

There is no concept of end-to-end message delivery notification in the MQTT protocol¹, let alone read notification. You are likely to need to implement this yourself.
If I were doing this I would have two topics per user, something like this:
[user id]/msg
and
[user id]/read
I would make the payload of the messages delivered to [user id]/msg contain a message id. When a client has read a message, it would then publish that message id on the [user id]/read topic. All clients would subscribe to both topics, so they could easily mark messages as read (or remove them) as they were consumed on other clients.
¹ Confirmations for the higher QoS levels are between the publisher and the broker, and then between the broker and the subscriber.
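A minimal sketch of the two-topic pattern above with the Eclipse Paho Java client (the broker URL, QoS level, and payload handling are assumptions for illustration, not something MQTT itself prescribes):

import java.nio.charset.StandardCharsets;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class ReadSyncClient {

    private final MqttClient client;
    private final String userId;

    public ReadSyncClient(String brokerUrl, String userId) throws MqttException {
        this.userId = userId;
        this.client = new MqttClient(brokerUrl, MqttClient.generateClientId());
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { /* reconnect logic */ }

            public void messageArrived(String topic, MqttMessage message) {
                String payload = new String(message.getPayload(), StandardCharsets.UTF_8);
                if (topic.endsWith("/msg")) {
                    // New message; the payload carries (at least) a message id.
                    System.out.println("New message: " + payload);
                } else if (topic.endsWith("/read")) {
                    // Another client reports this message id as read: remove it locally.
                    System.out.println("Read elsewhere, remove locally: " + payload);
                }
            }

            public void deliveryComplete(IMqttDeliveryToken token) { }
        });
    }

    public void start() throws MqttException {
        client.connect();
        // Every client subscribes to both topics for its user.
        client.subscribe(userId + "/msg", 1);
        client.subscribe(userId + "/read", 1);
    }

    // Called when the user actually reads a message on this client.
    public void markAsRead(String messageId) throws MqttException {
        client.publish(userId + "/read",
                new MqttMessage(messageId.getBytes(StandardCharsets.UTF_8)));
    }
}

Using QoS 1 with a persistent session (clean session disabled) could additionally let a client that was offline catch up on both topics when it reconnects.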

Related

What is the flow of a request to a server with a queue in the middle?

I'm trying very hard to understand the flow of a web request to a server which has a queue or message broker in the middle, but I can't find information about when and where the reply is given.
Imagine this use case:
Client A:
sends an invoice order request
the invoice is enqueued
the request is processed and dequeued.
at which time will the client receive a response?
right after the message is received by the queue?
right after the message is processed and dequeued? Other?
I'm asking because if the reply only comes after the message has been processed, the client might wait a long time. Imagine the message takes 3 minutes to process: would the client need to keep polling the server to see if it has been processed, or is a connection maintained using something like long polling?
I'm interested in scenarios using RabbitMQ and Kafka.
The advantage of having a messaging system is that the frontend web server and the backend processing are decoupled. Best practice is for the web server to publish the message and wait only for the messaging system to acknowledge that it has received the message; it can then respond to the client while the actual processing happens asynchronously.
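For RabbitMQ specifically, publisher confirms are one way to get exactly that broker-level acknowledgment before answering the HTTP request. A minimal sketch with the RabbitMQ Java client; the host, queue name, and payload are assumptions for illustration:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class InvoicePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // hypothetical broker host

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare("invoices", true, false, false, null); // durable queue
            channel.confirmSelect(); // enable publisher confirms on this channel

            byte[] body = "{\"invoiceId\": 123}".getBytes("UTF-8");
            channel.basicPublish("", "invoices",
                    MessageProperties.PERSISTENT_TEXT_PLAIN, body);

            // Blocks only until the broker confirms it has taken responsibility
            // for the message, not until a consumer has processed it.
            channel.waitForConfirmsOrDie(5_000);

            // At this point the web layer can already answer the HTTP request
            // (e.g. 202 Accepted); the invoice is processed asynchronously later.
        }
    }
}

Kafka producers behave similarly: send() completes (or its callback fires) once the broker has acknowledged the record according to the acks setting, not when a consumer has processed it. If the client needs the final result of a long-running job, that is usually delivered separately, e.g. by polling a status endpoint or via a callback/WebSocket, rather than by holding the original request open.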

Is it considered bad architecture to have a downstream service access write APIs in an upstream service?

I have a boss who is gung-ho about only having downstream services consume data from an upstream service. However, we have situations where it would make sense to have the downstream service send an update to the upstream service, which naturally he is completely against.
So my question is: is it bad architecture to have a downstream service send update data to an upstream service?
It seems like RESTful APIs would be horrible architecture if that is the case, considering an upstream service would never have the need for a PUT, only GETs.
Is he wrong, or what am I missing?
It sounds like a job for messaging. I've built a few of these integrations where systems broadcast changes in their environment to a message broker and anything that's interested in that change consumes the message and acts accordingly. A shared message format is required though.
If the upstream service is A and the downstream service is B, then:
B accepts a new user ingest. B processes the request and creates a new user. B then creates the message, for example:
<user mode="new">
<name></name>
<email></email>
</user>
or
<users>
<user mode="new">
<name></name>
<email></email>
</user>
</users>
and sends it to a topic at the message broker, e.g. Apache ActiveMQ. The topic could be:
/user
A can either be a durable subscriber to the user topic or you can use Apache Camel to route the message to another topic which A would subscribe to. Persistent messages and durable topic consumers ensure messages aren't lost if the broker goes down, in theory.
A sees the user message, sees the mode is new and examines the email to see if it needs to create the user. A then processes the message and does what it needs to do.
At this point, if there are other systems that would need to know about the new user A can broadcast the message to another topic which they listen to. You would do that if A needs to know about a new user first and then decide who else needs to know. If they all need to know, then they can all subscribe to the user topic.
Using messaging, no system needs to know about any other system. Each system just broadcasts events in a known format (in this case a domain specific XML message but JSON is fine).
If you have multiple clients, each can do their own ingest then broadcast an appropriate message or messages, perhaps one per user even, to the topic and all the other systems can act on it.
If you don't want to use XML or JSON you could attach a CSV of all new users to the message. The key is, all other systems should know what the message format is.
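A minimal sketch of B publishing such a message to the user topic with the JMS API and ActiveMQ; the broker URL is a placeholder and the payload fields are left empty as in the example above:

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NewUserPublisher {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL; point this at your ActiveMQ instance.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("user");

            MessageProducer producer = session.createProducer(topic);
            // Persistent delivery so the broker keeps the message for durable subscribers.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);

            TextMessage message = session.createTextMessage(
                    "<user mode=\"new\"><name></name><email></email></user>");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}

On A's side, a durable subscription can be set up with connection.setClientID(...) followed by session.createDurableSubscriber(topic, "subscription-name"), so that messages published while A is down are delivered once it reconnects.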

Is there a way to send messages to every client connected to the same subscriber on google pubsub?

I have a topic with one subscription on pubsub. Each instance of my nodejs server listens to the subscription. Whenever there is a message, if it is delivered to any one of the server instances, the other instances do not receive it. Is there a way to make each instance receive the message?
Or will I have to create a separate subscription for each instance?
If you want every instance of a server to receive the message, then you need to use separate subscriptions. There are two forms of having multiple subscribers:
Single subscription, multiple subscribers: this is load balancing, where messages are delivered to one of the subscribers (though duplicates can happen on occasion as Google Cloud Pub/Sub has at-least-once delivery). Use this mechanism when you need multiple subscribers to handle the throughput of messages.
Multiple subscriptions, one subscriber per subscription: this is fan out, where messages are delivered to all of the subscribers because they each have their own subscription. Use this mechanism when you need each subscriber to receive and process every message.
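The question mentions a Node.js server, but for consistency with the other examples here is a sketch of the fan-out pattern using the Java client; the project id and per-instance subscription id are assumptions, each subscription is presumed to already exist (one per instance), and the exact builder signatures vary a little between client library versions:

import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;

public class FanOutListener {
    public static void main(String[] args) {
        String projectId = "my-project";                    // hypothetical project id
        String subscriptionId = "my-topic-sub-" + args[0];  // unique id per server instance

        ProjectSubscriptionName subscription =
                ProjectSubscriptionName.of(projectId, subscriptionId);

        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            // Because every instance has its own subscription, every instance
            // gets its own copy of each message (fan-out).
            System.out.println("Received: " + message.getData().toStringUtf8());
            consumer.ack();
        };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated(); // block; a real server runs this alongside its other work
    }
}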

HornetQ: guarantee that the message reached the queue

I am using org.hornetq.api.core.client
How can I guarantee that the message I am sending actually reached the queue (not the client, just the queue)?
producer.send("validQueue",clientMessage)
Please note that the queue is a valid queue.
This similar question refers to an invalid queue; other ones, such as this one, are about delivery to the client.
It really depends on how you are sending.
Regarding your first linked question: on JMS you have no way to send to an invalid queue, since the producer will validate the queue's existence. With the HornetQ core API you send to an address (not to a queue), and you may have an unbound queue, so you have to query whether the address has any queues bound to it.
Now, for confirmation that the message was received:
Scenario I, persistent messages, non transactionally
Every message is sent blocking: the client unblocks as soon as the server has acknowledged receiving the message. This is done automatically; you don't have to do anything.
Scenario II, non persistent messages, non transactionally
There are no confirmations by default; the message is sent asynchronously. We assume the message is transient and it's not a big deal if you lose it. You can change that by setting block-on-non-persistent-send on the ServerLocator.
Scenario III, transactionally (either persistent or not).
As soon as you call commit, the message is on the queues.
Scenario IV, Confirmation send
You set a callback and get a method call as soon as the server has acked the message on the queues. Look in the manual for the confirmation callback. The same feature also exists in JMS 2.
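A rough sketch of scenarios I/II and IV using the HornetQ core API; the Netty transport configuration and queue/address name are placeholders, and the exact setter names can differ slightly between HornetQ versions:

import org.hornetq.api.core.Message;
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.ClientMessage;
import org.hornetq.api.core.client.ClientProducer;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.SendAcknowledgementHandler;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class GuaranteedSend {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));

        // Scenarios I/II: block the send until the server has the message.
        locator.setBlockOnDurableSend(true);
        locator.setBlockOnNonDurableSend(true);
        // Required for scenario IV (asynchronous send acknowledgements).
        locator.setConfirmationWindowSize(1024 * 1024);

        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // Scenario IV: callback fired once the server has acked the message on its queues.
        session.setSendAcknowledgementHandler(new SendAcknowledgementHandler() {
            public void sendAcknowledged(Message message) {
                System.out.println("Server acked message " + message.getMessageID());
            }
        });

        ClientProducer producer = session.createProducer("validQueue");
        ClientMessage message = session.createMessage(true); // durable
        message.getBodyBuffer().writeString("hello");
        producer.send(message); // returns once the broker has it (blocking send enabled)

        session.close();
        factory.close();
        locator.close();
    }
}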

Send XMPP message without starting a chat

I am basically writing an XMPP client to automatically reply to "specific" chat messages.
My setup is like this:
I have Pidgin running on my machine, configured to run with an account x@xyz.com.
I have my own Jabber client configured to run with the same account x@xyz.com.
There could be other XMPP clients.
Here is my requirement:
I am trying to automate certain kinds of messages that I receive on gtalk. So whenever I receive a specific message, e.g. "How are you", my own XMPP client should reply automatically with, say, "fine, how are you". All messages sent (before and after my client replies) to x@xyz.com should be received by all clients (my own client does not have a UI and can only respond to specific messages).
Now I have already coded my client to reply automatically. This works fine. But the problem I am facing is that as soon as I reply (I use the Smack library), all subsequent messages that are sent to x@xyz.com are received only by my XMPP client. This is obviously a problem, as my own client is quite dumb and does not have a UI, so I don't get to see the rest of the messages sent to me, thereby making me "lose" messages.
I have observed the same behavior with other XMPP clients as well. Now the question is: is this a requirement of XMPP (I am sorry, but I haven't read the XMPP protocol too well)? Is it possible to code an XMPP client to send a reply to a user and still be able to receive all subsequent messages in all clients currently listening for messages? Making my client a full-fledged XMPP client is a solution, but I don't want to go that route.
I hope my question is clear.
You may have to set a negative presence priority for your bot.
The first thing to know is that in the XMPP protocol every client is supposed to have a full JID. This is the bare JID, in your case x@xyz.com, with a resource at the end, e.g. x@xyz.com/pidgin or x@xyz.com/home (where /pidgin and /home are the resources). This is part of how routing messages to different clients is supposed to be achieved.
Then there are the presence stanzas. When going online, a client usually sends a presence stanza to the server. This informs about, e.g., whether the client is available for chat or away for lunch. Along with this information, a priority can be sent. When more than one client is connected, the one with the highest priority will receive the messages sent to the bare JID (e.g. with ClientA(prio=50) and ClientB(prio=60), ClientB receives the messages sent to x@xyz.com). But there are also negative priorities. A priority of less than 0 states that this client should never receive messages sent to the bare JID. Such a stanza might look like this:
<presence from="x@xyz.com/bot">
<priority>-1</priority>
</presence>
This may fit your case. Please keep in mind it also depends on the XMPP server where your account is located, which may or may not have fully implemented this part of the protocol.
So to summarize: I recommend looking through the Smack API for how to set a presence, and setting the priority to a value below 0 for your bot client right after it connects.
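A minimal sketch using the Smack 3.x-style API (server, port, credentials, and the "bot" resource name are placeholders; newer Smack versions use XMPPTCPConnection and sendStanza instead of sendPacket):

import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.packet.Presence;

public class LowPriorityBot {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust to your server and account.
        XMPPConnection connection = new XMPPConnection(
                new ConnectionConfiguration("xyz.com", 5222));
        connection.connect();
        // Log in with a distinct resource, e.g. "bot", so the server can tell
        // this client apart from Pidgin and your other clients.
        connection.login("x", "secret", "bot");

        // Announce availability with a negative priority so messages addressed
        // to the bare JID (x@xyz.com) are not routed to this client.
        Presence presence = new Presence(Presence.Type.available);
        presence.setPriority(-1);
        connection.sendPacket(presence);

        // ... register a message listener here and reply to the specific
        // messages the bot should handle ...
    }
}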