Get all messages after the client has reconnected to the MQTT broker

I'm trying to build an instant-messaging app using MQTT, but I've hit a roadblock: I'm not able to receive all the messages sent by the publisher when the client reconnects after going offline for some time. The client is connected to the broker with these settings:
A client id
clean session - false
receive with QoS 2
While the publisher sends messages with these settings:
QoS 2
retain flag set to true
The problem is that when the client reconnects, it receives only the latest (offline) message sent by the publisher, while all the preceding messages are lost.
I was going through some articles which mention that a persistent connection means the broker persists the topic subscriptions and all the QoS 1 and 2 messages. Here are some of them: HiveMQ persistent connections, another article.
Is there a workaround by which I can get all the messages published on a topic while the client was offline, or am I doing something wrong?
P.S. I've already gone through this Receive offline messages mqtt link and I'm doing the same as answered, but it doesn't solve my issue.

The MQTT retain flag only ensures that the last known message on a topic is stored in the broker; each newly published retained message replaces the previous one, so retained messages will never give you a backlog. Disabling the retain flag while keeping a persistent client session will make the broker queue messages (QoS 1 and 2 only) for the offline client and deliver them when it comes back online.
Please keep in mind to use the same client ID on reconnection to the broker, since the broker maintains the context of the client's session keyed by that client ID.
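For reference, a minimal sketch of the subscriber side using the Eclipse Paho Java client (the broker URL, client ID, and topic are made-up placeholders). The important parts are the clean-session flag, the QoS 2 subscription, and reconnecting with the same client ID:

    import org.eclipse.paho.client.mqttv3.*;

    public class ChatSubscriber {
        public static void main(String[] args) throws MqttException {
            // Broker URL, client ID and topic are placeholders.
            MqttClient client = new MqttClient("tcp://broker.example.com:1883", "chat-client-42");

            MqttConnectOptions options = new MqttConnectOptions();
            options.setCleanSession(false); // persistent session: broker keeps subscriptions
                                            // and queues QoS 1/2 messages while offline

            client.setCallback(new MqttCallback() {
                public void connectionLost(Throwable cause) { }
                public void messageArrived(String topic, MqttMessage message) {
                    System.out.println(topic + ": " + new String(message.getPayload()));
                }
                public void deliveryComplete(IMqttDeliveryToken token) { }
            });

            client.connect(options);
            client.subscribe("chat/room1", 2); // QoS 2, so queued messages survive the outage

            // Publisher side: QoS 2 and retained = false, so messages are queued
            // for offline sessions instead of overwriting each other:
            // client.publish("chat/room1", payload, 2, false);
        }
    }

As long as the client ID and the clean-session flag stay the same across restarts, the broker should replay the queued QoS 1/2 messages on reconnect.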

Related

Dead letter queue for Kafka Connect HTTP sink

I am writing an HTTP sink plugin for Kafka Connect. Its purpose is to send an HTTP request for each message in the configured Kafka topic. I want to send the message to a dead letter queue in case the HTTP request fails. Can I make use of the dead letter queue configuration provided for sink connectors?
The reason for this question is that the Kafka Connect documentation and several blogs mention that only errors in the transform and convert stages are sent to the dead letter queue, not errors raised during put(), and sending the HTTP request is done in put(). So I am wondering: is there a way to send failed HTTP messages to the DLQ? If not, is it possible to send the message to some other Kafka topic for further processing?
According to @Neil, this might be informative:
KIP-610 (implemented in Kafka 2.6) added DLQ support for issues when interacting with the end system. KIP-298 added the DLQ, but only for issues prior to the sink system interaction.
Check the versions of your Connect cluster and your sink connector to see whether they support it.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
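If your cluster is on 2.6 or later, the KIP-610 "errant record reporter" is available to sink tasks and routes failed records to the DLQ configured via errors.deadletterqueue.topic.name. A rough sketch of how a task might use it (the class and helper names here are hypothetical, not from the actual plugin):

    import java.util.Collection;
    import java.util.Map;
    import org.apache.kafka.connect.sink.ErrantRecordReporter;
    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;

    public class HttpSinkTask extends SinkTask { // hypothetical task class
        private ErrantRecordReporter reporter;

        @Override
        public void start(Map<String, String> props) {
            // Returns null when no DLQ is configured; throws NoSuchMethodError
            // on pre-2.6 workers, which production code should catch.
            reporter = context.errantRecordReporter();
        }

        @Override
        public void put(Collection<SinkRecord> records) {
            for (SinkRecord record : records) {
                try {
                    sendHttpRequest(record); // hypothetical helper doing the HTTP call
                } catch (Exception e) {
                    if (reporter != null) {
                        reporter.report(record, e); // record goes to the configured DLQ topic
                    } else {
                        throw new RuntimeException(e); // no DLQ available: fail the task
                    }
                }
            }
        }

        private void sendHttpRequest(SinkRecord record) { /* ... */ }

        @Override
        public void stop() { }

        @Override
        public String version() { return "1.0"; }
    }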

Client Local Queue in Red Hat AMQ

We have a network of Red Hat AMQ 7.2 brokers in a master/slave configuration. The client applications publish/subscribe to topics on the broker cluster.
How do we handle the situation where the network connectivity between the client application and the broker cluster goes down? Does Red Hat AMQ have a native solution, like a client-side local queue and a JMS-to-JMS bridge between the local queue and the remote broker, so that a network connectivity failure will not result in loss of messages?
It would be possible for you to craft a solution where your clients use a local broker and that local broker bridges messages to the remote broker. The local broker will, of course, never lose network connectivity with the local clients since everything is local. However, if the local broker loses connectivity with the remote broker it will act as a buffer and store messages until connectivity with the remote broker is restored. Once connectivity is restored then the local broker will forward the stored messages to the remote broker. This will allow the producers to keep working as if nothing has actually failed. However, you would need to configure all this manually.
That said, even if you don't implement such a solution there is absolutely no need for any message loss even when clients encounter a loss of network connectivity. If you send durable (i.e. persistent) messages then by default the client will wait for a response from the broker telling the client that the broker successfully received and persisted the message to disk. More complex interactions might require local JMS transactions and even more complex interactions may require XA transactions. In any event, there are ways to eliminate the possibility of message loss without implementing some kind of local broker solution.
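To make that last point concrete, here is a minimal JMS sketch, assuming an AMQ 7 (ActiveMQ Artemis) broker and placeholder URL and queue names. Because the message is persistent and the send is synchronous, send() does not return until the broker has received and persisted the message:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class DurableSend {
        public static void main(String[] args) throws JMSException {
            // Placeholder URL; AMQ 7 is based on ActiveMQ Artemis.
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker.example.com:61616");
            try (Connection connection = factory.createConnection()) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("orders"));
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                // A synchronous send() blocks until the broker confirms it has
                // received and persisted the message, so a successful return
                // means the message was not lost in transit.
                producer.send(session.createTextMessage("hello"));
            }
        }
    }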

How can I get messages without missing any in Kafka?

I'm a newbie with Kafka. I've been testing Kafka for sending messages.
This is my situation now:
add.java in my local VM sends messages regularly to Kafka in my local VM.
relay.java on another server polls from Kafka in my local VM and produces to Kafka on that server.
While messages were flowing from Kafka in my local VM to Kafka on the other server, I pulled the LAN cable out of my laptop. A few seconds later, I plugged it back in.
I then found that some messages had been lost while the LAN cable was disconnected.
When the network is reconnected, I want to receive all the messages sent during the disconnection, without missing any.
Are there any suggestions?
Any help would be highly appreciated.
First of all, I suggest you use MirrorMaker (1 or 2), because it supports exactly this use case of consuming from one cluster and producing to another.
Secondly, add.java should not be dropping messages just because your LAN is disconnected.
Whether you end up with dropped messages on the way through relay.java depends on the consumer and producer settings there. For example, you should definitely disable auto offset commits and only commit after you have received an acknowledgement from the corresponding producer send (see the sketch below). This gives you at-least-once delivery.
You can find multiple posts about processing guarantees in Kafka.
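For illustration, a relay loop with manual commits might look like this sketch (broker addresses and topic names are placeholders, and the synchronous send().get() per record is the simplest correct approach, not the fastest):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.clients.producer.*;

    public class Relay {
        public static void main(String[] args) throws Exception {
            Properties cons = new Properties();
            cons.put("bootstrap.servers", "local-vm:9092");     // placeholder address
            cons.put("group.id", "relay");
            cons.put("enable.auto.commit", "false");            // commit offsets manually
            cons.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            cons.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            Properties prod = new Properties();
            prod.put("bootstrap.servers", "other-server:9092"); // placeholder address
            prod.put("acks", "all");                            // wait for full broker acknowledgement
            prod.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            prod.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cons);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(prod)) {
                consumer.subscribe(Collections.singletonList("source-topic"));
                while (true) {
                    for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                        // Block until the remote cluster acknowledges the write; if the
                        // network is down this throws, the offset is never committed, and
                        // the record is re-read from the last committed offset on restart.
                        producer.send(new ProducerRecord<>("dest-topic", r.key(), r.value())).get();
                    }
                    consumer.commitSync(); // commit only after all sends were acknowledged
                }
            }
        }
    }

For higher throughput you would send asynchronously with callbacks and commit once the whole batch is acknowledged, but the ordering of "producer ack first, offset commit second" is what preserves at-least-once delivery.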

High Availability for MQTT-Based C++ Services

I have written a few C++ services which contain an MQTT client. Based on the message received on an MQTT topic, a C++ service will take some action, like sending an MQTT message to another topic or saving the message to the database, etc.
I have set up a few MQTT brokers in Docker and attached those brokers to an HA load balancer. All these MQTT brokers are also clustered.
So client 1, connected to broker 1 (through the load balancer), can send a message to client x connected to broker x, thanks to the clustering of the MQTT brokers.
Now, how can I put my C++ services behind an HA or similar load balancer?
Update:
In the case of HTTP/REST APIs, a request is handled by only one web application at any point in time. But with MQTT, when a message is published, if I run multiple instances of the same C++ service ABC, then all of the instances will process that message. How should I make sure that only one service instance processes each message? I want to establish high availability for the C++ services.
This is not possible under MQTT 3.x. The reason is that, prior to MQTT 5, every message is sent to every subscriber to a topic, which makes it very difficult to load balance correctly. Subscribers would need to receive everything and then decide for themselves which messages to discard, leaving those for the other subscribers. This is one of the limitations of MQTT 3.x.
There are those who have worked around this by connecting their MQTT broker to an Apache Kafka cluster, routing all messages from MQTT to Kafka and then attaching their subscribers (like your C++ services) to Kafka instead of MQTT. Kafka supports the type of load balancing you are asking for.
This may be about to change with MQTT 5.0, although there are still a lot of clients and brokers which don't support it. If both your client and broker support MQTT version 5, then there is a new[1] concept of "Shared Subscriptions":
Shared Subscriptions – If the message rate on a subscription is high, shared subscriptions can be used to load balance the messages across a number of receiving clients
You haven't stated your client library, but your first steps should be:
investigate whether both your broker and subscriber support MQTT 5
check the API for your client to discover how to use shared subscriptions (there is a sketch after the footnote below)
[1] New to MQTT; Kafka already has it.
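As a sketch of what this looks like in code, assuming the Eclipse Paho Java client (the Paho v3 client speaks MQTT 3.1.1, so this relies on the broker offering shared subscriptions to 3.1.1 clients as an extension; with a true MQTT 5 client the filter syntax is the same). Broker URL, group name, and topic are placeholders. Every service instance subscribes with the same $share/<group>/<topic> filter, and the broker hands each message to exactly one member of the group:

    import org.eclipse.paho.client.mqttv3.*;

    public class ServiceInstance {
        public static void main(String[] args) throws MqttException {
            // Each instance needs its own client ID (placeholder values).
            MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                                               "abc-service-" + args[0]);
            client.connect();
            // All instances use the same group name ("abc-workers"), so the broker
            // delivers each message on jobs/incoming to exactly one of them.
            client.subscribe("$share/abc-workers/jobs/incoming", 2, (topic, message) ->
                    System.out.println("processing: " + new String(message.getPayload())));
        }
    }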

Kafka: Killing a consumer's connection

With EMS it is possible to see all connections to a particular EMS server, and kill any unwanted connections.
As far as I can tell, I have an unwanted process somewhere that is subscribing to my Kafka topic with the same consumer group name as my process.
Therefore, my process is not receiving any messages, and I don't know where this "rogue" process is located.
Is there any command I can run to kill such connections?
I am running Kafka 0.9
If you use the Confluent Control Center you can see each consumer group and all the clients in each consumer group. That might help you identify the "rogue" consumer.
Otherwise you might have to just pick a new group id so it won't matter what the other client is subscribing to (because it will be in another consumer group).
It sounds like you should also configure some security and ACLs so that rogue apps can't authenticate and subscribe to topics they are not allowed to access.
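If you go the new-group-id route, it is a one-line change in the consumer configuration. A minimal sketch with a modern Java client (broker address, group id, and topic are placeholders; on a 0.9-era client the poll() signature takes a long instead of a Duration, but the group.id idea is the same):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class FreshGroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9092");
            // A new, unique group id puts this consumer in its own group, so the
            // rogue process no longer competes for the same partitions.
            props.put("group.id", "my-app-group-v2");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    consumer.poll(Duration.ofSeconds(1))
                            .forEach(r -> System.out.println(r.value()));
                }
            }
        }
    }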