Feedback message from subscriber in mqtt protocol - sockets

I am using the MQTT protocol to send messages between two computers. I patterned my code after the following example.
publisher:
import paho.mqtt.client as mqtt
from random import randrange, uniform
import time

mqttBroker = "mqtt.eclipse.org"
client = mqtt.Client("Temperature_Inside")
client.connect(mqttBroker)

while True:
    randNumber = randrange(10)
    client.publish("TEMPERATURE", randNumber)
    print("Just published " + str(randNumber) + " to topic TEMPERATURE")
    time.sleep(1)
subscriber:
import paho.mqtt.client as mqtt
import time

def on_message(client, userdata, message):
    print("received message: ", str(message.payload.decode("utf-8")))

mqttBroker = "mqtt.eclipse.org"
client = mqtt.Client("Smartphone")
client.connect(mqttBroker)

client.loop_start()
client.subscribe("TEMPERATURE")
client.on_message = on_message
time.sleep(1)
client.loop_stop()
I want feedback to be sent to the publisher when I receive the message. Is there a way to get message feedback?

There is no end-to-end delivery notification in the MQTT protocol. This is very deliberate.
MQTT is a pub/sub system, designed to completely decouple the producer of information from the consumer. There could be anywhere from zero to an unlimited number of subscribers to a topic when a producer publishes a message. There could also be offline subscribers who will have the message delivered when they reconnect (which could be any time after the message was published).
What MQTT does provide is the QoS levels, but it is important to remember that these only apply to a single leg of the delivery journey. E.g. a message published at QoS 2 ensures it will reach the broker, but makes no guarantees about any subscribers, as their subscription may be at QoS 0.
If your system requires end-to-end delivery notification then you will need to implement a response message yourself. This is normally done by including a unique ID in the initial message and sending a separate response message on a different topic that also contains that ID.
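For illustration, a minimal sketch of such an application-level acknowledgement using paho (the "TEMPERATURE/ack" response topic and the JSON message layout are my own assumptions, not part of the protocol):

# subscriber side: echo the message ID back as a delivery receipt
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    body = json.loads(message.payload.decode("utf-8"))
    print("received message:", body["value"])
    # publish the unique ID back on the response topic
    client.publish("TEMPERATURE/ack", json.dumps({"id": body["id"]}))

client = mqtt.Client("Smartphone")
client.on_message = on_message
client.connect("mqtt.eclipse.org")
client.subscribe("TEMPERATURE")
client.loop_forever()

The publisher would then include a unique "id" field in each message it publishes, subscribe to "TEMPERATURE/ack", and match the IDs it gets back against the IDs it has sent.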

To ensure your message gets delivered you can use QoS. This can be set both when publishing and when subscribing, so for your case you will want either QoS 1 or 2. QoS 2 ensures the message reaches the broker exactly once when publishing, and if the subscription is also at QoS 2 it ensures the subscriber receives the message exactly once. Note, though, that QoS 2 is the slowest form of publishing and subscribing. The most common approach I see is QoS 1, where you decide how to deal with duplicate messages yourself in the subscriber's on_message callback. The paho MQTT client allows you to set the QoS when publishing or subscribing, but it defaults to 0. I've updated your code below.
# publisher
import paho.mqtt.client as mqtt
from random import randrange, uniform
import time
import json

mqttBroker = "mqtt.eclipse.org"
client = mqtt.Client("Temperature_Inside")
client.connect(mqttBroker)

id = 1
while True:
    randNumber = randrange(10)
    message_dict = {'id': id, 'value': randNumber}
    client.publish("TEMPERATURE", json.dumps(message_dict), qos=1)
    print("Just published " + str(randNumber) + " to topic TEMPERATURE")
    id += 1
    time.sleep(1)
# subscriber
import paho.mqtt.client as mqtt
import time
import json
from datetime import datetime

parsed_messages = {}

def on_message(client, userdata, message):
    json_body = json.loads(str(message.payload.decode("utf-8")))
    msg_id = json_body['id']
    if msg_id in parsed_messages:
        print("Message already received at: ", parsed_messages[msg_id].strftime("%H:%M:%S"))
    else:
        print("received message: ", json_body['value'])
        parsed_messages[msg_id] = datetime.now()

mqttBroker = "mqtt.eclipse.org"
client = mqtt.Client("Smartphone")
client.connect(mqttBroker)

client.loop_start()
client.subscribe("TEMPERATURE", qos=1)
client.on_message = on_message
time.sleep(1)
client.loop_stop()
Note it is important that the subscriber also specifies QoS 1 when subscribing, otherwise it will subscribe with QoS 0 (the paho client's default) and delivery will be downgraded to QoS 0, meaning the message will be delivered at most once (and may not be delivered at all if a packet is dropped). As stated, the above only ensures that the message is received by the subscriber. If you want the publisher to be notified when the subscriber has processed the message, you will need the subscriber to publish on a new topic (with some UUID) when it has finished processing, and have the publisher subscribe to that topic. However, when I see this being done I often question why MQTT is being used at all rather than plain HTTP requests. Here is a good link on MQTT QoS if you're interested (although it lacks detail on the subscriber side).

Related

KAFKA client library (confluent-kafka-go): synchronisation between consumer and producer in the case of auto.offset.reset = latest

I have a use case where I want to implement synchronous request / response on top of kafka. For example when the user sends an HTTP request, I want to produce a message on a specific kafka input topic that triggers a dataflow eventually resulting in a response produced on an output topic. I want then to consume the message from the output topic and return the response to the caller.
The workflow is:
HTTP Request -> produce message on input topic -> (consume message from input topic -> app logic -> produce message on output topic) -> consume message from output topic -> HTTP Response.
To implement this case, upon receiving the first HTTP request I want to be able to create a consumer on the fly that will consume from the output topic, before producing a message on the input topic. Otherwise there is a possibility that messages on the output topic are "lost". Consumers in my case have a random group.id and have auto.offset.reset = latest for application reasons.
My question is how I can make sure that the consumer is ready before producing messages. I make sure that I call SubscribeTopics before producing messages, but in my tests so far, when there are no committed offsets and Kafka is resetting offsets to latest, there is a possibility that messages are lost and never read by my consumer, because Kafka sometimes thinks that the consumer registered after the messages were produced.
My workaround so far is to sleep for a bit after I create the consumer, to allow Kafka to proceed with the offset reset workflow before I produce messages.
I have also tried to implement logic in a rebalance callback (triggered by a consumer subscribing to a topic), in which I call assign with offset = latest for the topic partition, but this doesn't seem to have fixed my issue.
Hopefully there is a better solution out there than sleep.
Most HTTP client libraries have an implicit timeout. There's no guarantee your consumer will ever consume an event or that a downstream producer will send data to the "response topic".
Instead, have your initial request immediately return a 202 Accepted status (or 400, for example, if you do request validation) with some tracking ID. Then require polling GET requests by ID for status updates, returning either a 404 status or 200 plus some status field in the response body.
You'll need a database to store intermediate state.
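A minimal sketch of that pattern, assuming Flask and using an in-memory dict as a stand-in for the database (the endpoint paths and field names are made up for illustration):

import uuid
from flask import Flask, jsonify

app = Flask(__name__)
state = {}  # tracking_id -> status record; a real service would use a database

@app.route("/orders", methods=["POST"])
def create_order():
    tracking_id = str(uuid.uuid4())
    state[tracking_id] = {"status": "PENDING", "result": None}
    # ...produce the message (carrying tracking_id) on the Kafka input topic here...
    return jsonify({"id": tracking_id}), 202

@app.route("/orders/<tracking_id>", methods=["GET"])
def get_order(tracking_id):
    record = state.get(tracking_id)
    if record is None:
        return jsonify({"error": "unknown id"}), 404
    return jsonify(record), 200

# A separate consumer of the output topic would update state[tracking_id]
# (e.g. set status to "DONE" and fill in the result) when the response arrives.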

How can I send a message to my akka actor system's event stream without addressing the message to any actor in particular?

I'm interested in implementing:
1. an akka actor A that sends messages to an event stream;
2. an akka actor L that listens to messages of a certain type that have been published on the event stream.
If possible, I would like to reutilize the actor system's event stream.
I know how to do 2. It is explained here: https://doc.akka.io/docs/akka/2.5/event-bus.html#event-stream
But how can I do 1?
I know how to make A send a message addressed to another actor(Ref), but I do not want to address the message to any particular actor(Ref). I just want the message to appear in the event stream and be picked up by whoever is listening to messages of that type. Is this possible somehow?
A side-question: if I implement 2 as described in https://doc.akka.io/docs/akka/2.5/event-bus.html#event-stream, does the listener know who sent the message?
As per the documentation link that you posted you can publish messages to the EventStream:
system.eventStream.publish(Jazz("Sonny Rollins"))
The message will be delivered to all actors that have subscribed to this message type:
system.eventStream.subscribe(jazzListener, classOf[Jazz])
For the subscribers to know the sender, I suggest you define an ActorRef field in your payload and the sending actor can put its self reference in it when publishing the message. NB Defining the sender's ActorRef explicitly in the message type is how the new akka-typed library deals with all actor interactions, so it's a good idea to get used to this pattern.

Spring Cloud Stream Kafka - Eventual consistency - Does Kafka auto retry unacknowledged messages (when using autocommitoffset=false)

Implementing an eventually consistent distributed architecture has turned out to be a pain. There are tons of blog posts telling stories about how to do it, but not showing code for how to actually do it.
One of the aspects I'm struggling with is having to deal with manual retries of messages when they haven't been ack'd.
For instance: my order service sends a pay event to Kafka. Payment Service is subscribed to it and processes it, answering with payment ok or payment failure
Ask for payment: Order Service ----Pay event----> Kafka ----Pay Event ----> Payment Service
Payment OK: -> Payment Service ----Payment ok event ----> Kafka ----Payment ok Event ----> Order Service
Payment Fail -> Payment Service ----Payment failure event ----> Kafka ----Payment failure Event ----> Order Service
The point is:
I know for sure when a message has been delivered to Kafka by using synchronous sends. BUT, the only way I have to know that the payment has been processed by Payment Service is by expecting an answer event (Payment ok | Payment failure).
This forces me to implement a retry mechanism in Order Service. If it hasn't gotten an answer in some time, retry with a new Pay event.
What's more, this also forces me to take care of duplicated messages in Payment Service in case they were actually processed but the answer didn't get to Order Service.
I was wondering if Kafka has a built in mechanism to send retries if the consumer didn't acknowledge the new offset of the messages.
In Spring Cloud Stream we can set an autoCommitOffset property to false and handle the ack of the offset in the consumer:
@StreamListener(Sink.INPUT)
public void process(Message<?> message) {
    Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    if (acknowledgment != null) {
        System.out.println("Acknowledgment provided");
        acknowledgment.acknowledge();
    }
}
What happens if we don't execute acknowledgment.acknowledge()? Will the message be automatically resent by Kafka to this consumer?
If it is possible we wouldn't need to retry manually any more and could do stuff like this:
Payment Service:
@Autowired
private PaymentBusiness paymentBusiness;

@StreamListener(Sink.INPUT)
public void process(Message<Order> message) {
    Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    if (acknowledgment != null) {
        paymentBusiness.process(message.getPayload());
        // If we don't get here because of an exception,
        // Kafka would retry...
        acknowledgment.acknowledge();
    }
}
If this were possible, how is the retry period configured in Kafka?
In the worst case (and most likely) scenario, this isn't supported and we would have to retry manually. Do you know any real example of Spring Cloud Stream apps dealing with eventual consistency using Kafka?
What happens if we don't execute acknowledgment.acknowledge()? Will the message be automatically resent by Kafka to this consumer?
No. A Kafka consumer reads messages sequentially for as long as the client is open. Kafka does not support more sophisticated acknowledgment modes such as individual message acknowledgment; the only mechanism is updating the offset for a given consumer group and topic partition. Spring Cloud Stream supports manual acknowledgment of messages for scenarios where they are processed asynchronously (thus preventing message loss), but the assumption is that once a message is acknowledged manually, its offset is saved, so all previous messages from the same topic partition will be considered 'read'. If you want to single out failed messages, you can use the DLQ support and have a subsequent consumer receive them. Restarting the consumer will resume reading from the last saved offset, so you have the option of not saving offsets for a series of unsuccessfully processed messages.
The Spring Cloud Stream consumers have built-in retry and DLQ support - see enableDlq in http://docs.spring.io/spring-cloud-stream/docs/Brooklyn.SR2/reference/htmlsingle/#_kafka_consumer_properties as well as retry settings provided as part of the default consumer properties: http://docs.spring.io/spring-cloud-stream/docs/Brooklyn.SR2/reference/htmlsingle/#_consumer_properties

Akka DistributedPubSubMediator at-least-once delivery guarantees for publishing to a topic

I need to have at-least-once delivery guarantees for messages published to a DistributedPubSubMediator topic.
I looked into DistributedPubSubMediator.scala and can see the following in TopicLike trait (Akka 2.4.6):
trait TopicLike extends Actor {
  var subscribers = Set.empty[ActorRef]
  def defaultReceive: Receive = {
    case msg ⇒
      subscribers foreach { _ forward msg }
  }
}
However, I couldn't find any way to retrieve the subscribers set from the mediator... It would be great if there were a GetTopicSubscribers request message which exposed this information to mediator clients:
mediator ! GetTopicSubscribers("mytopic")
So after publishing to a topic the publisher could wait for Ack messages from all active subscribers. Is there any other way to accomplish something like that?
It would be great if akka.contrib.pattern.ReliableProxy could somehow be plugged into DistributedPubSubMediator.
You could get your publisher to ask the mediator for the count of subscribers via an akka.cluster.pubsub.DistributedPubSubMediator.CountSubscribers("myTopic") message.
Then it just needs to keep a count of how many Ack messages it gets back from subscribers.
There is no need to track the actual subscribers or which ones have acknowledged: when your Ack count reaches the subscriber count, you know they have all received it (thanks to Akka's at-most-once delivery reliability).
Note however this comment in the source code:
// Only for testing purposes, to poll/await replication
case object Count
final case class CountSubscribers(topic: String)
This suggests that perhaps CountSubscribers is not something to rely on too heavily.

Understanding mqtt subscriber qos

I am new to MQTT and I just learned about the meaning of the QoS level that is decided when a message is published:
0 when we prefer that the message will not arrive at all rather than arrive twice
1 when we want the message to arrive at least once but don't care if it arrives twice (or more)
2 when we want the message to arrive exactly once.
A higher QoS value means a slower transfer.
I noticed that the subscriber side can also set the "Maximum QoS level they will receive".
Quoting from here:
For example, if a message is published at QoS 2 and a client is subscribed with QoS 0, the message will be delivered to that client with QoS 0.
Does this mean that the message might not arrive at the client (QoS 0) despite the fact that the publisher sent it with QoS 2?
This might be a big issue for inexperienced developers - for example, the default QoS of the subscribe function in the npm mqtt package is 0! (The default should have been the maximum value 2 in my opinion, i.e. "let the publisher decide the QoS".)
You are correct, there is no guarantee that a message published at QoS 2 will arrive at a subscriber who is using QoS 0. If it is important for that subscriber to receive the message, they should be using QoS 1 or 2. That is something to decide for any given application.
I would rewrite your definition of QoS 0 as "at most once", i.e. the message will be received or it won't, there isn't a chance of a duplicate.
Regarding the default QoS - I think most clients use QoS 0 as the default. I don't see that setting QoS 1 or 2 as the default would help the inexperienced developer; they still need to understand what they are doing and why, and to consider the implications for their application.
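For example, with the Python paho client the subscription QoS defaults to 0 and has to be requested explicitly (the topic and client ID here are just placeholders):

import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    print("received:", message.payload.decode("utf-8"))

client = mqtt.Client("qos_demo")            # placeholder client ID
client.on_message = on_message
client.connect("mqtt.eclipse.org")
client.subscribe("TEMPERATURE", qos=2)      # request QoS 2; paho defaults to 0
client.loop_forever()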
A publisher really doesn't have a direct notion of which clients are subscribed to the topic it publishes on. A publisher's QoS level determines the quality of service in ensuring that the broker receives the publication. Once the broker receives the publication, it becomes responsible for re-sending the message.
[edit] The broker then resends the message to the subscribers, but at most at the QoS that it received from the publisher. This may even mean downgrading the QoS that a subscriber has specified.
I found this article quite helpful in understanding this concept.
"Does this mean that the message might not arrive to the client (QOS 0) despite the fact that publisher sent it with QOS 2?"
Yes that is true. The publisher will want to publish at QOS 2 to ensure that the record arrives at the state layer only once (without duplicates). A layer of retrys + acks are used to ensure this. There is additional work for the brokers that provide the topic to subscribing clients to ensure that the message is delivered at the requested QOS level.
For example a message is published at QOS 1 and a subscriber to the same topic is subscribed at QOS 2, then the broker handling the delivery of the message to said subscriber will have to ensure that no duplicate is sent to the client.
In your example a publisher is publishing at QOS 2, so the state layer inserted the record once, and there is a subscriber at QOS 0 for this same topic. The subscriber may never receive this message. For example during message send there was a network hiccup and the record never arrived. Since there is no ack mechanism in QOS 0 the broker never attempts to redeliver.
I haven't read the MQTT protocol specification yet; I'll just describe my test with mosquitto 1.5.3.
1. Run the mosquitto server/broker with the default conf.
mosquitto -v
1541075091: mosquitto version 1.5.3 starting
1541075091: Using default config.
2. Publish a test message:
AAA subscribes to topic 'aaa'
BBB subscribes to topic '+'
DDD publishes to topic 'aaa'
3. The server stdout:
1541075322: New connection from 10.1.1.159 on port 1883.
1541075322: New client connected from 10.1.1.159 as DDD (c1, k60).
1541075322: No will message specified.
1541075322: Sending CONNACK to DDD (0, 0)
1541075322: Received PUBLISH from DDD (d0, q1, r1, m1, 'aaa', ... (8 bytes))
1541075322: Sending PUBACK to DDD (Mid: 1)
1541075322: Sending PUBLISH to AAA (d0, q0, r0, m0, 'aaa', ... (8 bytes))
1541075322: Sending PUBLISH to BBB (d0, q0, r0, m0, 'aaa', ... (8 bytes))
1541075322: Received DISCONNECT from DDD
1541075322: Client DDD disconnected.
The server sends PUBACK to DDD before it PUBLISHes the message to the subscribers.
4. So my guess is:
Publish qos=1 only makes sure the broker received the message, and the subscribe QoS likewise only covers the broker-to-subscriber leg:
[ pub ] ---pub_qos---> [ broker ] ---sub_qos---> [ sub ]
// The MQTT clients and broker form a star network topology.
// If I have time, I'll read the protocol specification.