Two NodeMCUs unable to communicate with Raspberry Pi using MQTT - raspberry-pi

The Raspberry Pi acts as the local host. I'm trying to send data to the Raspberry Pi using MQTT from two NodeMCUs, each publishing to a different topic.
For example:
if x > 10 I send 1, otherwise 0.
The same logic is used on both NodeMCUs.
If I communicate with only one NodeMCU I get a good response, but when both NodeMCUs are connected, the Raspberry Pi console sometimes does not receive a value.

This often depends on both the client and the broker used, and your configuration of each. The fact that two have problems where one does not suggests a client ID collision: every MQTT client device must have a different client ID. If a broker receives subscriptions from two clients with the same ID, the broker may disconnect one, usually the first. If each client is configured to reconnect, this can cause an endless series of disconnects for both, each of them connected half the time.
Any broker that does not disconnect duplicate clients could still fail to deliver to one, because it uses the client IDs to track which clients a message has been delivered to. The first client that pings for messages on its subscriptions will receive the latest message, and any later ones will miss that message because the message is already marked as delivered to that client ID.
Most clients avoid these problems with random IDs, yet let the developer set one manually. Does your identical logic set a client ID? You can verify what is actually set on each device through the broker's logs.
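The NodeMCU sketches are not shown in the question, so purely as an illustration, here is a minimal Python paho-mqtt publisher showing where an explicit client ID goes; the broker address, topic, and the "sensor-1"/"sensor-2" IDs are assumed names, and on the NodeMCU itself the equivalent parameter lives in whatever MQTT library the firmware uses.

# Hypothetical sketch (paho-mqtt 1.x style): give each publisher its own
# client ID so the broker does not treat the two devices as the same client.
import paho.mqtt.client as mqtt

CLIENT_ID = "sensor-1"      # must differ on the second device, e.g. "sensor-2"
BROKER = "192.168.1.10"     # assumed address of the Raspberry Pi broker

client = mqtt.Client(client_id=CLIENT_ID, clean_session=True)
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

x = 12                      # whatever the sensor reading is
client.publish("sensors/one", 1 if x > 10 else 0)

client.loop_stop()
client.disconnect()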

Related

Watson IOT Out node on Raspberry Pi repeatedly disconnecting

I am using a Watson IoT Output (wiotp out) node in a Node-RED flow on my Raspberry Pi and am having issues with the connection repeatedly disconnecting and then reconnecting.
Here is a screenshot of my Credentials Node and one of my IOT Out Node.
The connection is configured so that I can send messages to the cloud and successfully have them trigger a flow in my cloud Node-RED instance.
The problem I'm having is that when I attempt to send a string array as my payload, very few items in the array actually make it through before the service disconnects. I am limited to around 3-5 strings at a time, which is frustrating because I am losing 195-197 of the 200 items I am trying to send to the IoT platform.
How can I keep a persistent connection and make sure my entire payload makes it through to the IOT service?
If you're seeing very frequent disconnects, it can often mean that you're inadvertently performing clientId stealing (i.e., two MQTT clients are fighting over the same clientId). You can confirm this by looking at the device connection logs in the device drilldown panel: you'll see evidence of log messages such as "The client ID was reused."
Ben

Moscapsule in Swift 3, user disconnected sporadically and reconnected again

I have an iPhone application that uses Moscapsule in order to establish MQTT connection to a Mosquitto broker (mosquitto version 1.4.4).
I create the connection like this:
mqttConfig = MQTTConfig(clientId: "iphone7_UI", host: "x.x.x.x", keepAlive: 9999)
Other than the above I am basically relying on the library to keep the connection up and running and properly sub/publish MQTT messages.
The application mostly publishes messages and so far I have not had any issues with publications. Recently I added a feature where it is required for the application to subscribe to a topic and this is where I am getting issues.
The application successfully subscribes to the topic and receives messages properly at first, but after a while (a random amount of time, usually a couple of minutes) I can see in the mosquitto logs that the iPhone client disconnects and then reconnects, and when that happens it no longer receives any MQTT messages (it can still publish properly).
What could be the reason why the application (probably the Moscapsule library that I am using) decides to discard the old connection and create a new one? Any ideas?

websocket communication between clients in distributed system

I'm trying to build an instant messaging app. Clients will not only send text messages but also, often, audio. I've decided to use a websocket connection to communicate with clients: it is fast and allows sending binary data.
The main idea is to receive a message from client1 and notify client2 about it. But here's the thing: my app will be running on GAE. What if client1's socket is open on server1 and client2's is open on server2? These servers don't know about each other's clients.
I have one idea how to solve it, but I am sure it is a shitty way: use some sort of communication between the servers (for example JMS, or another websocket connection between the servers; it doesn't matter right now).
But it surely will lead to a disaster. I can't even imagine how often those servers will speak to each other. For each message, server1 should notify server2, and server2 should notify client2. Things become even worse when serverN comes into play.
Another way I could see this working is Firebase, but it restricts message size to 4 KB, so I can't send audio through it. As a workaround I could notify the client about new audio and have it fetch the file from my server.
I hope I have explained the problem clearly. Does anyone know how to solve it? Or maybe there are other ways to build such apps?
If you are building a messaging cluster and expect communicating clients to connect to different instances of the server then server-server communication is inevitable. Usually it's not a problem though.
First, if you don't use any load balancing, your clients will connect to the same server 50% of the time on average (in the case of 2 servers).
Second, intra-datacenter links are fast and free in all known public clouds.
Third, you can often do something smart on the frontend to make sure two clients that are likely to communicate connect to the same server. For instance, direct all clients from the same country to the same server using DNS load balancing.
The second part of the question is about passing large media files. It's a common best practice to send them out of band: store the file on the server and only pass a reference to it. Like someone suggested in the comments, save the audio on the server and just send a message like "audio is available, fetch it from here ...". You don't need to poll the server for that; just fetch it once, when the receiving client requests it.
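As a rough sketch of that out-of-band pattern (the field names and URL below are made up purely for illustration), the websocket message would only carry a small reference while the audio itself travels over plain HTTP:

# Hypothetical notification format: the websocket carries a small JSON
# reference; the audio bytes themselves are fetched over HTTP on demand.
import json

notification = json.dumps({
    "type": "audio_available",
    "from": "client1",
    "to": "client2",
    "url": "https://media.example.com/audio/3f2c.ogg",  # assumed media store
    "size_bytes": 284113,
})
# send `notification` over the websocket; the receiver does an ordinary
# HTTP GET on the URL when the user opens the message.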
In general, it seems like you are trying to reinvent the wheel. Just use something off the shelf.
Let all clients connect to one of multiple servers, and let each server keep metadata about the clients connected to it.
A centralized system like ZooKeeper stores the details of the active servers.
When a client c1 sends a message to client c2:
the message is received by a server (say s1; we can add a load balancer to distribute incoming requests)
s1 broadcasts a lookup to all other servers to find out which server client c2 is connected to, OR, better, consistent hashing decides which server each client connects to, in which case no broadcast is required (a sketch follows at the end of this answer)
the corresponding server (say s2) responds to server s1
now s1 forwards the message m to s2, and s2 delivers it to client c2
Cons of the above approach:
Each server has a connection to the other n-1 servers, creating a mesh topology
The centralized system (ZooKeeper) becomes a single point of failure (which is solvable)
Apps like WhatsApp and G-Talk use XMPP over TCP/IP.
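The consistent-hashing variant mentioned in the steps above could look roughly like this; the hash function, ring structure and virtual-node count are illustrative assumptions, not a prescription:

# Minimal consistent-hashing sketch: any node (or the load balancer) can
# compute which server a client ID maps to, so no lookup broadcast is needed.
import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, vnodes=100):
        # each server gets `vnodes` points on the ring for smoother balance
        self._ring = sorted((_hash("%s#%d" % (s, i)), s)
                            for s in servers for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def server_for(self, client_id):
        idx = bisect.bisect(self._keys, _hash(client_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["s1", "s2", "s3"])
print(ring.server_for("c2"))    # every node computes the same server for c2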

MQTT Two way communication

I am interested in exchanging commands between different MQTT clients and performing the corresponding actions on both ends. Is it possible to have two-way communication using MQTT? I am working on a Raspberry Pi.
Yes, it's possible by using different publish and subscribe topics for the same client. You also need a handler on the client side to act on incoming messages.
Not sure what you mean by two-way communication. You have subscribers and publishers in MQTT. You could have a subscriber sitting out there listening to a particular topic and have it react to certain messages.
The way you would interact with that subscriber is through a publisher: have it send a message to the topic the subscriber is listening on.
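A minimal paho-mqtt sketch of that pattern follows; the topic names, client ID and broker address are assumptions. Each side subscribes to its own command topic and publishes to the other's, which gives two-way communication through one broker.

# Sketch of two-way MQTT between two clients via one broker
# (paho-mqtt 1.x style callbacks; topics and addresses are assumptions).
import paho.mqtt.client as mqtt

MY_TOPIC = "devices/pi/commands"      # this client listens here
PEER_TOPIC = "devices/peer/commands"  # and sends commands/replies here

def on_connect(client, userdata, flags, rc):
    client.subscribe(MY_TOPIC)

def on_message(client, userdata, msg):
    print("command received:", msg.payload.decode())
    # act on the command, then answer the other side
    client.publish(PEER_TOPIC, "done")

client = mqtt.Client("pi-client")
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.loop_forever()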
A client that both subscribes and publishes is actually initiating the communication in both cases. Since the broker acknowledges and responds over the TCP connection the client opened, it does not need to know the client's IP address. To an extent this helps with the security of a client behind a firewall, since the client does not need port forwarding.

UDP Server Discovery - Should clients send multicasts to find server or should server send regular beacon?

I have clients that need to all connect to a single server process. I am using UDP discovery for the clients to find the server. I have the client and server exchange IP address and port number, so that a TCP/IP connection can be established after completion of the discovery. This way the packet size is kept small. I see that this could be done in one of two ways using UDP:
Each client sends out its own multicast message in search of the server, which the server then responds to. The client can repeat sending this multicast message in regular intervals (in the case that the server is down) until the server responds.
The server sends out a multicast message beacon at regular intervals. The clients subscribe to the multicast group and in this way receives the server's multicast message and complete the discovery.
In 1., if there are many clients then initially there would be many multicast messages transmitted (one from each client). Only the server would subscribe to and receive the multicast messages from the clients. Once the server has responded to a client, that client ceases to send out the multicast message. Once all clients have completed their discovery of the server, no further multicast messages are transmitted on the network. If, however, the server is down, then each client would keep sending out a multicast beacon at intervals until the server is back up and can respond.
In 2., only the server would send out a multicast beacon at regular intervals. This message would end up getting routed to all clients that are subscribed to the multicast group. Once a client receives the packet, its UDP listening socket gets closed and it is no longer subscribed to the multicast group. However, the server must continue to send the multicast beacon so that new clients can discover it. It would continue sending out the beacon at regular intervals regardless of whether any clients are out there requiring discovery or not.
So, I see pros and cons either way. It seems to me that #1 would result in heavier load initially, but this load eventually reduces down to zero. In #2 the server would continue sending out a beacon forever.
UDP and multicast is a fairly new topic to me, so I am interested in finding out which would be the preferred approach and which would result in less network load.
I've used option #2 in the past several times. It works well for simple network topologies. We did see some throughput problems when UDP datagrams exceeded the Ethernet MTU resulting in a large amount of fragmentation. The largest problem that we have seen is that multicast discovery breaks down in larger topologies since many routers are configured to block multicast traffic.
The issue that Greg alluded to is rather important to consider when you are designing your protocol suite. As soon as you move beyond simple network topologies, you will have to find solutions for address translation, IP spoofing, and a whole host of other issues related to the handoff from your discovery layer to your communications layer. Most of them have to do specifically with how your server identifies itself and ensuring that the identification is something that a client can make use of.
If I could do it over again (how many times have we uttered this phrase), I would look for standards-based discovery mechanisms that fit the bill and start solving the other protocol suite problems. The last thing that you really want to do is come up with a really good discovery scheme that breaks the week after you deploy it because of some unforeseen network topology. Google service discovery for a starting list. I personally tend towards DNS-SD but there are a lot of other options available.
I would recommend method #2, as it is likely (depending on the application) that you will have far more clients than you will servers. By having the server send out a beacon, you only send one packet every so often, rather than one packet for each client.
The other benefit of this method, is that it makes it easier for the clients to determine when a new server becomes available, or when an existing server leaves the network, as they don't have to maintain a connection to each server, or keep polling each server, to find out.
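For concreteness, here is a minimal sketch of approach #2 in Python (the multicast group, port, beacon interval and payload format are all assumptions): the server broadcasts its TCP address as a beacon, and a client joins the group, reads one beacon, leaves, and then connects over TCP.

# Sketch of approach #2: server-side beacon plus client-side discovery.
# Group address, port, interval and payload format are illustrative choices.
import socket
import struct
import time

GROUP, PORT = "239.255.0.1", 5007

def run_beacon(tcp_port):
    """Server: multicast a small beacon advertising its TCP port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    while True:
        sock.sendto(("SERVER %d" % tcp_port).encode(), (GROUP, PORT))
        time.sleep(5)                       # beacon interval

def discover_server(timeout=30.0):
    """Client: join the group, wait for one beacon, return (ip, tcp_port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    data, (server_ip, _) = sock.recvfrom(1024)
    sock.close()                            # leave the group once discovered
    return server_ip, int(data.split()[1])  # connect with TCP from here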
Both are equally viable methods.
The argument for method #1 would be that, in principle, clients initiate requests, and servers listen and respond to them.
The argument for method #2 would be that the point of multicast is so that one host can send a packet and it can be received by many clients (one-to-many), so it's meant to be the reverse of #1.
OK, as I think about this I'm actually drawn to #2, the server-initiated beacon. The problem with #1: let's say clients broadcast beacons and hook up with the server, but then the server either goes offline or changes its IP address.
With #2, when the server is back up and sends its first beacon, all the clients are notified at the same time to reconnect, and your entire system is back up immediately. With #1, all of the clients would have to individually realize that the server is gone, and they would all start multicasting at the same time until connected back to the server. If you had 1000 clients and 1 server, your network load would literally be 1000x greater than with method #2.
I know these messages are most likely small, and 1000 packets at a time is nothing to a UDP network, but just from a design standpoint #2 feels better.
Edit: I feel like I'm developing a split-personality disorder here, but just thought of a powerful point of why #1 would be an advantage... If you ever wanted to implement some sort of natural load balancing or scaling with multiple servers, design #1 works well for this. That way the first "available" server can respond to the client's beacon and connect to it, as opposed to #2 where all the clients jump to the beaconing server.
Your option #2 has a big limitation in that it assumes that the server can communicate more or less directly with every possible client. Depending on the exact network architecture of your operational system, this may not be the case. For example, you may be depending that all routers and VPN software and WANs and NATs and whatever other things people connect networks together with, can actually handle the multicast beacon packets.
With #1, you are assuming that the clients can send a UDP packet to the server. This is an entirely reasonable expectation, especially considering the very next thing the client will do is make a TCP connection to the same server.
If the server goes down and the client wants to find out when it's back up, be sure to use exponential backoff; otherwise you will take the network down with a packet storm someday!
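A small sketch of that backoff (the 1-second start, 60-second cap and jitter are arbitrary choices, and try_discover is a hypothetical probe function):

# Exponential backoff with jitter for the client's rediscovery loop.
# Base delay, cap and full jitter are arbitrary illustrative choices.
import random
import time

def discover_with_backoff(try_discover, base=1.0, cap=60.0):
    delay = base
    while True:
        server = try_discover()             # e.g. send one multicast probe
        if server is not None:
            return server
        time.sleep(random.uniform(0, delay))
        delay = min(cap, delay * 2)         # double the wait, up to the cap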