I have a weird problem with multi-master Modbus TCP/IP. I know that Modbus Serial doesn't support multiple masters, but some documents I've read say that Modbus TCP does.
I set up three TCP clients as Modbus TCP masters and one server as a Modbus TCP slave. Each master polls the slave for data every 2 seconds. For the master devices I use the Modbus TCP stack made by Triangle MicroWorks.
I expected every master to receive data from the slave, but in practice only one master communicated well; the others could not receive data. They only got back return status "3", which means "MBCHNL_RESP_STATUS_CANCELED".
Is this behavior expected with this setup?
I wonder whether the stack simply can't support multiple masters issuing the same request, or whether there is another way to achieve multi-master behavior.
I found an answer to this problem.
In short, the masters were polling too fast and the slave channel was busy. I can't promise that every Modbus stack behaves this way, but this one did.
The return status "MBCHNL_RESP_STATUS_CANCELED" came from the message queue in the TMW stack code: there is code that checks whether a message is a duplicate request. The slave channel couldn't keep up with messages from three masters at once, so each master's messages sat in its own queue.
I asked Triangle MicroWorks the same question and received their opinion last week:
"... You are allowed to have multiple channels(each channel must have a unique ip/port combination).
2 seconds may be too fast for only 1 channel. ... Try changing period to 3 seconds and so on."
I don't think that's a perfect answer, so I improved the request logic as below.
- Send a request every 2 seconds, but only after the previous response has been received.
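For illustration, here's the shape of that logic in Python using the pymodbus library (not the TMW stack I actually used); the host, port, and register addresses are placeholders:

    # Sketch of the improved polling discipline: never issue a new request
    # until the previous response (or failure) has come back, while keeping
    # a roughly 2-second cadence. Host/port/addresses are made up.
    import time
    from pymodbus.client import ModbusTcpClient  # import path varies across pymodbus versions

    POLL_INTERVAL = 2.0  # seconds between request starts

    client = ModbusTcpClient("192.168.0.10", port=502)
    client.connect()

    while True:
        start = time.monotonic()
        # Blocks until this request completes, so the slave channel never
        # sees more than one outstanding request from this master.
        result = client.read_holding_registers(address=0, count=10)
        if result.isError():
            print("request failed; retrying next cycle")
        else:
            print("registers:", result.registers)
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, POLL_INTERVAL - elapsed))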
Communication is smoother than it was and looks more continuous. Sometimes a master receives nothing for a few seconds, but after that it communicates well again.
I know this is not a perfect answer either. If I find a better one, I'll post again.
I would like to know how the packet delivery rates of MQTT and CoAP compare. I know that TCP is more reliable than UDP, so MQTT should have a higher packet delivery rate. I just want to know: if 2000 packets are sent using each protocol separately, what would be the approximate percentage delivered in the two cases?
Please help with an example if possible.
If you dig a little, you will find that both TCP and UDP ultimately send IP packets, and some of those packets may be lost. For TCP, retransmission is handled by the protocol itself, without your involvement; that works reasonably well, at least in many cases. For CoAP, when you use CON (confirmable) messages, CoAP does the retransmission for you, so there is also not too much to lose.
When transmissions suffer more message loss (e.g. bad connectivity), reliability may also depend on the amount of data. If it fits into one IP packet, the probability of it reaching the destination is higher than that of 4 packets all reaching the destination. That is where the difference starts:
TCP requires that all segments reach the destination without gaps (e.g. 1, 2, (drop 3), 4 will not work). CoAP will deliver the messages even with gaps. It depends on your application whether you can benefit from that or not.
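To make the CoAP side concrete, here's a minimal confirmable GET using the Python aiocoap library (the coap.me test server is just an example endpoint); over UDP, requests are confirmable by default, so the library handles retransmission:

    # Minimal CoAP GET with aiocoap; over UDP the request is sent as a
    # CON (confirmable) message, so aiocoap retransmits it if packets
    # are lost. coap.me is a public test server, used only as an example.
    import asyncio
    from aiocoap import Context, Message, GET

    async def main():
        ctx = await Context.create_client_context()
        request = Message(code=GET, uri="coap://coap.me/hello")
        response = await ctx.request(request).response  # waits through retransmissions
        print(response.code, response.payload)

    asyncio.run(main())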
I've been testing CoAP over DTLS 1.2 (with Connection ID) for almost a year now, using an Android phone and just moving around sending requests (about 400 bytes) over Wi-Fi and the mobile network. It works very reliably. Current statistics: 2000 requests, 143 retransmissions, 4 lost. Please note: "4 lost" mainly means "no connection", so you can be sure TCP would have worse results than that, especially when moving around, where new TCP/TLS handshakes are frequently required.
So my conclusion:
If you have a stable network connection, both should work.
If you have a less stable network connection and gaps are not acceptable by your application, both have trouble.
If you have a less stable network connection and gaps are acceptable by your application, then CoAP will do a better job.
I'm looking for recommendations on how to achieve low latency for the following network protocol:
1. Alice sends out a request for information to many peers selected at random from a very large pool.
2. Each peer responds with a small packet (<20 kB).
3. Alice aggregates the responses and selects a peer accordingly.
4. Alice and the selected peer then continue to the second phase of the protocol, in which a sequence of 2 requests and responses is performed.
5. Repeat from 1.
Given that steps 1 and 2 do not need to be reliable (as long as a percentage of responses arrive back, we proceed to step 3), and that step 1 is essentially a multicast, this part of the protocol seems to suit UDP: setting up a TCP connection to these peers would add an additional round trip.
However, step 4 needs to be reliable: we can't tolerate packet loss during the subsequent requests/responses.
The conundrum I'm facing is that UDP suits steps 1 and 2 while TCP suits step 4. Connecting to every peer selected in step 1 is slow, especially since we aim to transmit just 20 kB, yet UDP cannot be tolerated for step 4. Handshaking with the peer selected for step 4 would require an additional round trip, which, compared to the protocol's 3 round trips, is still a considerable increase in total time.
Is there some hybrid scheme whereby you can do a TCP handshake while transmitting a small amount of data? (The handshake could be merged into 1 and 2 and hence doesn't add any additional round trip time.)
Is there a name for such protocols? What should I read to become more acquainted with such problems?
Additional info:
Participants are assumed to be randomly distributed around the globe and connected via the internet.
The pool selected from in step 1 is on the order of 1000 addresses, and the sample from the pool is on the order of 10 to 100.
There's not enough detail here to do a well-informed criticism. If you were hiring me for advice, I'd want to know a lot more about the proposal, but since I'm doing this for free, I'll just answer the question as asked, and try to make it practical rather than ideal.
I'd argue that UDP is not suitable for the early part of your protocol. You can't just multicast a single packet to a large number of hosts on the Internet (although you can do it on typical LANs). A 20KB payload is not the sort of thing you can generally transmit in a single datagram in any case, and the moment messages fail to fit in a single datagram, UDP loses most of its attraction, because you start reinventing TCP (badly).
Probably the simplest thing you can do is base your system on HTTP, and work with implementations which incorporate all the various speed-ups that Google (mostly) has been putting into HTTP development. This includes TCP Fast Open, and things like it. Initiate connections out to your chosen servers; some will respond faster than others: use that to your advantage by going with the quickest ones. Don't underestimate the importance of efficient implementation relative to theoretical round-trip time, by the way.
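As an illustration of the idea, here's a client-side TCP Fast Open sketch in Python. This is Linux-only, needs TFO enabled in the kernel (sysctl net.ipv4.tcp_fastopen) and a cooperating server, and the host and request are placeholders:

    # Client-side TCP Fast Open: sendto() with MSG_FASTOPEN puts the
    # payload in the SYN itself, so the request goes out without waiting
    # for the handshake to complete. Linux-only; host is a placeholder.
    import socket

    payload = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.sendto(payload, socket.MSG_FASTOPEN, ("example.com", 80))
    print(sock.recv(4096))
    sock.close()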
For stage two, continue with HTTP as before. For efficiency, you could hold all the connections open at the end of phase one and then close all the ones except your chosen phase two partner. It's not clear from your description that the phase two exchange lends itself to the HTTP model, though, so I have to hand-wave this a bit.
It's also possible that you can simply hold TCP connections open to all available peers more or less permanently, thus dodging the cost of connection establishment nearly all the time. A thousand simultaneous open connections is large, but not outrageous in most contexts (although you may need to tweak OS settings to allow it). If you do that, you can just talk whatever protocol you like over TCP. If it's a truly peer-to-peer protocol, you only need one TCP connection per pair. Implementing this kind of thing is tricky, though: an average programmer will do a terrible job of it, in my experience.
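For a sense of what that looks like, here's a rough sketch of a long-lived connection pool using Python's selectors module; the peer list is a placeholder, and real code would need reconnect and error handling:

    # Hold many TCP connections open and service them from one event loop,
    # so there is no handshake cost on the hot path. PEERS is illustrative;
    # reconnects and error handling are omitted.
    import selectors
    import socket

    PEERS = [("10.0.0.%d" % i, 9000) for i in range(1, 11)]  # placeholder addresses

    sel = selectors.DefaultSelector()
    for addr in PEERS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setblocking(False)
        s.connect_ex(addr)  # nonblocking connect; don't wait here
        sel.register(s, selectors.EVENT_READ, data=addr)

    while True:
        for key, _ in sel.select(timeout=1.0):
            data = key.fileobj.recv(4096)
            if data:
                print("from", key.data, ":", len(data), "bytes")
            else:
                sel.unregister(key.fileobj)  # peer closed; a real pool would reconnect
                key.fileobj.close()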
I've been doing a lot of reading (out of curiosity) about game servers. For fast-paced games (first-person shooters), UDP seems to be a must.
From what I gathered, it comes down to what happens when packets get lost. You get enormous latency while TCP resends the lost packet, and by then the game state will already have moved way ahead, making the packet you waited so long for useless by the time it arrives.
Provided that you disable Nagle's algorithm, I have some trouble figuring out why something like this wouldn't work instead of dealing with UDP packets:
You open up multiple connections to server
client thread 1 <- packet 1
client thread 2 <- packet 2
client thread 3 <- packet 3
client thread 4 <- packet 4
client thread 5 <- packet 5
So what if something happens to packet 1 and you receive it 500 ms later? You'll have packet 2 by then, so it doesn't matter and the client can discard it. I'm sure I'm missing some key info, but I couldn't find anything on this. Something like what I'm proposing is very trivial to implement with ZeroMQ or another messaging library.
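To pin the idea down, here's a sketch of it in Python: several TCP connections with Nagle disabled, reading whichever delivers first and discarding stale updates by sequence number. The server address and the 4-byte sequence framing are assumptions:

    # The multi-connection scheme: N TCP streams with TCP_NODELAY, take
    # whichever delivers an update first, drop anything older than what
    # we've already applied. Address and framing are placeholders.
    import selectors
    import socket
    import struct

    SERVER = ("127.0.0.1", 7777)  # placeholder
    N_CONNECTIONS = 5

    sel = selectors.DefaultSelector()
    for _ in range(N_CONNECTIONS):
        s = socket.create_connection(SERVER)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)

    latest_seq = -1
    while True:
        for key, _ in sel.select():
            packet = key.fileobj.recv(4096)
            if len(packet) < 4:
                continue
            seq = struct.unpack_from("!I", packet)[0]  # assume a 4-byte sequence prefix
            if seq > latest_seq:
                latest_seq = seq  # apply_update(packet[4:]) would go here
            # else: a late update from a stalled stream; discard it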
The proposed technique would help somewhat (and btw it doesn't really require multiple threads, only multiple TCP connections), but it does assume that whatever was causing packet loss will only affect the packets of some of the TCP connections and not all of them. That might not be a very reliable assumption -- since all of the TCP connections are (presumably) going through the same network path, any fault condition (or bandwidth shortage) that causes TCP stream A to drop a packet might well cause streams B through E to drop packets as well -- and then you are back to the original problem again.
Having multiple TCP streams active does allow you to work around the strict-ordering requirements of a single TCP stream (i.e. if you know the updates in data-stream B are logically independent of the updates in data-stream A, then why force a new B-update to always wait until all the previously-sent A-updates have been successfully transmitted?). I suspect you'll still find situations where UDP works better, though. A hybrid approach (using UDP for the low-latency data, but also a TCP stream for the more latency-tolerant or complex data) might yield a better result.
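A bare skeleton of that hybrid, with made-up host, ports, and messages:

    # Hybrid transport: UDP for loss-tolerant, latency-critical state;
    # TCP (Nagle off) alongside it for data that must arrive. Host,
    # ports, and message formats are placeholders.
    import socket

    SERVER_HOST = "127.0.0.1"  # placeholder

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tcp = socket.create_connection((SERVER_HOST, 7001))
    tcp.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    udp.sendto(b"POS 12.5 3.4 0.9", (SERVER_HOST, 7000))  # fire and forget
    tcp.sendall(b"CHAT hello\n")                          # reliable, ordered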
Ok folks, NOT counting faster interconnects (InfiniBand), kernel bypass, or any other fancy stuff, just plain TCP/IP (TCP/UDP over Ethernet) networking: what is the fastest messaging queue implementation that can deliver a message from host A to host B?
Let's assume 10-Gigabit Ethernet cards connect both machines, with up-to-date architecture and CPUs. What latency in microseconds are we talking about here for a 1472-byte message (MTU minus IP/UDP headers)?
As @Sachin described very well, what I am looking for is the messaging queue and the latency figure for sending a message from A to B, like below:
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
If you do not require a broker in between, 0MQ gave us the best performance (you will need to test the numbers for your platform/use case). If using a broker in between, both ActiveMQ and RabbitMQ performed in the same range. Using Redis as a messaging server did not hold up for us.
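As a starting point for your own numbers, here's a minimal brokerless round-trip test with pyzmq; the address is a placeholder, and host B just needs a REP socket echoing whatever it receives:

    # Brokerless 0MQ round trip: send a 1472-byte message to host B and
    # time until the echo comes back. Address is a placeholder; host B
    # runs: rep = ctx.socket(zmq.REP); rep.bind(...); rep.send(rep.recv())
    import time
    import zmq

    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://hostB:5555")  # placeholder address

    payload = b"x" * 1472  # the message size from the question
    start = time.perf_counter()
    req.send(payload)
    req.recv()  # echoed back by host B
    print("round trip: %.1f us" % ((time.perf_counter() - start) * 1e6))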
If not using a messaging server, options such as Netty, JGroups, etc. might be useful (not sure about your programming language).
You could look into reliable UDP as well if going with straight socket connectivity.
Hope it helps.
The lower bound would be at least 2 TCP connections plus the routing time inside the messaging queue server (meaning the delays associated with these):
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
Of course, if you build in redundancy, fault tolerance, etc., then you are certainly going to be well above this lower bound.
It looks like you are talking about a UDP-based MQ, because you mentioned the MTU. Well, for UDP-based MQs this time is usually measured as the time required to publish a message and see it back on the message bus. So it is a round-trip time, not a one-way time as you described. This can usually be done in less than 6 microseconds, depending of course on your choice of LAN.
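For reference, here's a bare-bones version of that measurement with plain UDP multicast (no MQ library): publish to a group we also subscribe to, and time until our own message comes back off the bus. The group and port are placeholders:

    # Publish-and-see-it-back timing over UDP multicast. Relies on
    # IP_MULTICAST_LOOP (on by default); group/port are placeholders.
    import socket
    import struct
    import time

    GROUP, PORT = "239.1.1.1", 5007  # placeholder multicast group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    start = time.perf_counter()
    sock.sendto(b"x" * 1472, (GROUP, PORT))
    sock.recv(2048)  # our own message, back from the bus
    print("publish-to-receive: %.1f us" % ((time.perf_counter() - start) * 1e6))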
I would be grateful for help understanding how long it takes to establish a TCP connection when I know the ping round-trip time.
According to Wikipedia, a TCP connection is established in three steps:
1. SYN-SENT (=> client to server)
2. SYN/ACK-RECEIVED (=> server to client)
3. ACK-SENT (=> client to server)
My Questions:
Is it correct that the third transmission (ACK-SENT) does not yet carry any payload (my data) but is only used for connection establishment? (This leads to the conclusion that the fourth packet will be the first packet to carry any payload....)
Is it correct to assume that when my ping round-trip time is 20 milliseconds, the TCP connection establishment in the example above would require at least 30 milliseconds before any data can be transmitted between the client and server?
Thank you very much
Tom
Those things are basically correct, though #2 assumes that the round-trip time is symmetric.
To measure this, called the "time to SYN/ACK" (which is NOT the time to establish a connection: the connection is only half-open in that state, and you need the third packet acknowledging the establishment to consider it established), you usually need professional tools that include their own TCP stack to enable that kind of measurement. The most used one is the Spirent Avalanche, but there are also Ixia's IxLoad and the BreakingPoint Systems box (BPS has since been acquired by Ixia, by the way).
Note that, yes, the third packet won't carry any data, and that is also true of the first two: they only have the SYN and SYN+ACK flags set (these are TCP flags) and contain no application data. This initial exchange, called the three-way handshake, therefore adds some overhead, which is why TCP is typically not used in real-time applications (voice, live video, etc.).
Also, as stated, you can't assume that latency = RTT/2. It is in fact very complicated to measure one-way latency above layer 3 (IP), and you are already at layer 4 (TCP) here. This blog post covers the challenges in detail: http://synsynack.wordpress.com/2012/04/09/realistic-latency-measurement-in-the-application-layers/
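If you want to see the connection-setup cost directly, here's a quick Python check (the target host is a placeholder): connect() returns once the SYN/ACK arrives, i.e. after roughly one ping RTT, and the final ACK plus your first data go out immediately afterwards, reaching the server about half an RTT later, which is where the "30 ms for a 20 ms ping" estimate in the question comes from:

    # Time connect(): it returns when the SYN/ACK arrives (~1 RTT).
    # The final ACK and the first data can be sent immediately after,
    # so the server sees data roughly 1.5 RTT after we started.
    import socket
    import time

    HOST = "example.com"  # placeholder target

    start = time.perf_counter()
    sock = socket.create_connection((HOST, 80))
    print("connect() took %.1f ms (~1 ping RTT)" % ((time.perf_counter() - start) * 1000))

    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")  # first payload packet
    sock.close()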