My current project requires I2C communication with a single master and multiple slaves. I know that for each data byte (actual data) sent by the master, the slave responds with a NACK or ACK (1 or 0).
I am confused about how this NACK and ACK are interpreted. I searched the web but didn't get a clear picture. My understanding is something like this:
ACK - I have successfully received the data. Send me more data.
NACK - I haven't received the data. Send it again.
Is it something like this, or am I wrong?
Please clarify and suggest the right answer.
Thanks
Amit kumar
You really should read the I2C specification here, but briefly, there are two different cases to consider for ACK/NACK:
After sending the slave address: when the I2C master sends the address of the slave to talk to (including the read/write bit), a slave which recognizes its address sends an ACK. This tells the master that the slave it is trying to reach is actually on the bus. If no slave devices recognize the address, the result is a NACK. In this case, the master must abort the request as there is no one to talk to. This is not generally something that can be fixed by retrying.
Within a transfer: after the receiving side (the master on a read, or the slave on a write) receives a byte, it must send an ACK. The major exception is when the receiver controls the number of bytes transferred: it must send a NACK after the last byte it wants. For example, on a slave-to-master transfer, the master must send a NACK just before sending the STOP condition that ends the transfer. (This is required by the spec.)
It may also be that the receiver can send a NACK if there is an error; I don't remember if this is allowed by the spec.
But the bottom line is that a NACK either indicates a fatal condition which cannot be retried or is simply an indication of the end of a transfer.
BTW, the case where a receiving device needs more time to process is never indicated by a NACK. Instead, a slave device either does "clock stretching" (or the master simply delays generating the clock) or it uses a higher-layer protocol to request retrying.
Edit 6/8/19: As pointed out by @DavidLedger, there are I2C flash devices that use NACK to indicate that the flash is internally busy (e.g. completing a write operation). I went back to the I2C standard (see above) and found the following:
There are five conditions that lead to the generation of a NACK:
1. No receiver is present on the bus with the transmitted address, so there is no device to respond with an acknowledge.
2. The receiver is unable to receive or transmit because it is performing some real-time function and is not ready to start communication with the master.
3. During the transfer, the receiver gets data or commands that it does not understand.
4. During the transfer, the receiver cannot receive any more data bytes.
5. A master-receiver must signal the end of the transfer to the slave transmitter.
Therefore, these NACK conditions are valid per the standard.
Short delays, particularly within a single operation, will normally use clock stretching, but longer delays, particularly between operations, as well as invalid operations, may well produce a NACK.
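As an illustration of how these NACK cases typically surface at the application level, here is a minimal sketch using the STM32 HAL; the hi2c1 handle, the 7-bit address 0x50, the register layout and the timeouts are assumptions for the example, not something from the question.

```c
#include "stm32f4xx_hal.h"   /* adjust to your device family */

extern I2C_HandleTypeDef hi2c1;

/* Write one register; a NACK anywhere in the transfer comes back as an error. */
HAL_StatusTypeDef write_register(uint8_t reg, uint8_t value)
{
    uint8_t payload[2] = { reg, value };

    /* Address phase: does any slave ACK address 0x50? HAL expects the
     * 7-bit address already shifted left by one bit. */
    if (HAL_I2C_IsDeviceReady(&hi2c1, 0x50 << 1, 3, 10) != HAL_OK) {
        return HAL_ERROR;   /* address NACK: nothing to retry, nobody answered */
    }

    /* Data phase: a NACK on a data byte is likewise reported as an error. */
    return HAL_I2C_Master_Transmit(&hi2c1, 0x50 << 1, payload, sizeof payload, 100);
}
```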
The I2C protocol starts with a START condition followed by the slave address (a 7-bit address plus 1 bit for read/write).
After sending the slave address, the master releases the data bus (the SDA line), putting it into a high-impedance state and leaving it for the slave to drive.
If the address matches its own, the slave pulls the line low for the ACK.
If the line is not pulled low by any slave, the master treats it as a NACK and sends a STOP condition (or a repeated START) on the next clock pulse to terminate or restart the communication.
Besides this, a NACK is also sent whenever the receiver is unable to communicate or does not understand the data.
A NACK is also used by the master (as receiver) to terminate the read flow once it has all the data, followed by a STOP condition.
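To illustrate the address/ACK handshake just described at the pin level, here is a rough bit-banged sketch; the GPIO helpers (scl_high, scl_low, sda_high, sda_low, sda_read, i2c_delay) are hypothetical placeholders for open-drain pin access, not a real driver API.

```c
extern void scl_high(void);
extern void scl_low(void);
extern void sda_high(void);   /* release SDA: open-drain line floats high via pull-up */
extern void sda_low(void);
extern int  sda_read(void);
extern void i2c_delay(void);

/* Clock out the 7-bit address plus the R/W bit, then sample the ACK bit. */
static int i2c_send_address(unsigned char addr7, int read)
{
    unsigned char frame = (unsigned char)((addr7 << 1) | (read ? 1 : 0));

    for (int bit = 7; bit >= 0; bit--) {          /* MSB first */
        if (frame & (1u << bit)) sda_high(); else sda_low();
        i2c_delay();
        scl_high();
        i2c_delay();
        scl_low();
    }

    sda_high();                 /* master releases SDA for the ACK slot   */
    i2c_delay();
    scl_high();                 /* 9th clock: slave pulls SDA low to ACK  */
    int acked = (sda_read() == 0);
    scl_low();

    return acked;               /* 1 = ACK (a slave answered), 0 = NACK   */
}
```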
Suppose I have two STM boards with a full duplex SPI connection (one is master, one is slave), and suppose I use HAL_SPI_Transmit() and HAL_SPI_Receive() on each end for the communication.
Suppose further that I want the communication to consist of a series of single-byte command-and-response transactions: master sends command A, slave receives it and then sends response A; master sends command B, slave receives it and then sends response B, and so on.
When the master calls HAL_SPI_Transmit(), the nature of SPI means that while it clocks out the first byte over the MOSI line, it is simultaneously clocking in a byte over the MISO line. The master would then call HAL_SPI_Receive() to furnish clocks for the slave to transmit its response. My question: What is the result of the master's HAL_SPI_Receive() call? Is it the byte that was simultaneously clocked in during the master's transmit, or is it what the slave transmitted afterwards?
In other words, does the data that is implicitly clocked in during HAL_SPI_Transmit() get "discarded"? I'm thinking it must, because otherwise we should always use the HAL_SPI_TransmitReceive() call and ignore the received part.
(Likewise, when HAL_SPI_Receive() is called, what is clocked OUT, which will be seen on the other end?)
Addendum: Please don't say "Don't use HAL". I'm trying to understand how this works. I can move away from HAL later--for now, I'm a beginner and want to keep it simple. I fully recognize the shortcomings of HAL. Nonetheless, HAL exists and is commonly used.
Yes, if you only use HAL_SPI_Transmit() to send data, the data received at the same clock events gets discarded.
As an alternative, use HAL_SPI_TransmitReceive() to send and receive data at the same clock events. You need to provide two arrays: one containing the data to be sent, and the other to be populated with the bytes received at the same clock events.
E.g., if your STM32 SPI slave wishes to send data to a master that plans to clock 4 bytes out to it (the master sends 0xFF bytes to retrieve bytes from the slave), HAL_SPI_TransmitReceive() lets you supply the data you wish to send in one array and receive all the clocked-in 0xFF bytes in another array.
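A minimal sketch of that exchange from the master's side; the hspi1 handle, the 4-byte length, the 0xFF filler and the timeout are placeholders for the example.

```c
#include "stm32f4xx_hal.h"   /* adjust to your device family */

extern SPI_HandleTypeDef hspi1;

/* Clock in 4 bytes from the slave while sending 0xFF filler bytes. */
HAL_StatusTypeDef read_four_bytes(uint8_t rx[4])
{
    uint8_t tx[4] = { 0xFF, 0xFF, 0xFF, 0xFF };   /* dummy bytes that generate the clocks */

    /* Each byte clocked out of tx[] clocks one byte from the slave into rx[]. */
    return HAL_SPI_TransmitReceive(&hspi1, tx, rx, sizeof tx, HAL_MAX_DELAY);
}
```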
I have never used HAL_SPI_Receive() on its own, but the microcontroller that calls it may clock out any data as long as the clock signals are valid. If you use this function, the other microcontroller should assume that the data it receives during that phase must be ignored. You could also use a logic analyzer to trace the SPI data exchange between the two microcontrollers when using HAL_SPI_Transmit() and HAL_SPI_Receive().
I saw the socket option TCP_NODELAY, which is used to turn the Nagle algorithm on or off.
I checked what the Nagle algorithm is, and it seems similar to 'stop and wait'.
Can someone give me a clear difference between these two concepts?
In a stop and wait protocol, one
1. sends a message to the peer
2. waits for an ack for that message
3. sends the next message
(i.e. one cannot send a new message until the previous one has been acknowledged)
Nagle's algorithm as used in TCP is orthogonal to this concept. When the TCP application sends some data, the protocol buffers the data and waits a little while to see if there is more data to be sent, instead of sending it to the peer immediately.
If the application has more data to send in this small timeframe, the protocol stack merges that data into the current buffer and can send it as one large message.
This concept could very well be applied to a stop and wait protocol as well. (Note that TCP is not a stop and wait protocol.)
The Nagle Algorithm is used to control whether the socket provider sends outgoing data immediately as-is at the cost of less efficient network transmissions (off), or if it buffers outgoing data so it can make more efficient network transmissions at the cost of speed (on).
Stop and Wait is a mechanism used to ensure the integrity of transmitted data, by making the sender send a frame of data and then wait for an acknowledgement from the receiver before sending another frame, thus ensuring frames are received in the same order in which they are sent.
These two features operate independently of each other.
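For completeness, toggling Nagle is just a socket option on the TCP side; a minimal sketch using POSIX sockets (the helper name disable_nagle is made up for the example):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle on an already-created TCP socket; returns 0 on success. */
int disable_nagle(int sockfd)
{
    int flag = 1;   /* TCP_NODELAY = 1 turns Nagle off: small writes go out immediately */
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof flag);
}
```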
I'm working on a game server that communicates with a game client, and I wonder whether the packets the server sends to the client remain in sequence when the client receives them.
For example, the server sends packets A, B, C,
but could the client receive them as B, A, C?
I have read the great blog http://packetlife.net/blog/2010/jun/7/understanding-tcp-sequence-acknowledgment-numbers/
It seems that every packet sent by the server has a corresponding ACK from the client, but it does not say why the packets received by the client arrive in the same sequence in which the server sent them.
It's worth reading TCP's RFC, particularly section 1.5 (Operation), which explains the process. In part, it says:
The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order by the internet communication system. This is achieved by assigning a sequence number to each octet transmitted, and requiring a positive acknowledgment (ACK) from the receiving TCP. If the ACK is not received within a timeout interval, the data is retransmitted. At the receiver, the sequence numbers are used to correctly order segments that may be received out of order and to eliminate duplicates. Damage is handled by adding a checksum to each segment transmitted, checking it at the receiver, and discarding damaged segments.
I don't see where it's ever made explicit, but since the acknowledgement (as described in section 2.6) describes the next expected packet, the receiving TCP implementation is only ever acknowledging consecutive sequences of packets from the beginning. That is, if you never receive the first packet, you never send an acknowledgement, even if you've received all other packets in the message; if you've received 1, 2, 3, 5, and 6, you only acknowledge 1-3.
For completeness, I'd also direct your attention to section 2.6, again, after it describes the above-quoted section in more detail:
An acknowledgment by TCP does not guarantee that the data has been delivered to the end user, but only that the receiving TCP has taken the responsibility to do so.
So, TCP ensures the order of packets, unless the application doesn't receive them. That exception probably wouldn't be common, except for cases where the application is unavailable, but it does mean that an application shouldn't assume that a successful send is equivalent to a successful reception. It probably is, for a variety of reasons, but it's explicitly outside of the protocol's scope.
TCP guarantees sequence and integrity of the byte stream. You will not receive data out of sequence. From RFC 793:
Reliable Communication: A stream of data sent on a TCP connection is delivered reliably and in order at the destination.
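Note that this guarantee applies to the byte stream, not to message boundaries; a minimal sketch with POSIX sockets (the function name and buffer size are made up for the example) of how a receiver sees the data:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read the connection as a stream: the bytes always come out in the order
 * they were sent, but one recv() may return more or less than one send()'s
 * worth of data, so message boundaries are not preserved. */
void drain_in_order(int sockfd)
{
    char buf[512];
    ssize_t n;

    while ((n = recv(sockfd, buf, sizeof buf, 0)) > 0) {
        fwrite(buf, 1, (size_t)n, stdout);   /* always A, then B, then C ... */
    }
}
```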
I am a newbie to networks and in particular TCP (I have been fooling a bit with UDP, but that's it).
I am developing a simple protocol based on exchanging messages between two endpoints. Those messages need to be certified, so I implemented a cryptographic layer that takes care of that. However, while UDP has a sound definition of a packet that constitutes the minimum unit that can get transferred at a time, the TCP protocol (as far as my understanding goes) is completely stream oriented.
Now, this puzzles me a bit. When exchanging messages, how can I tell where one starts and the other one ends? In principle, I can obviously communicate fixed length messages or first communicate the size of each message in some header. However, this can be subject to attacks: while of course it is going to be impossible to distort or determine the content of the communication, the above technique would make it easy to completely disrupt my communication just by adding a single byte in the middle.
Say that I need to transfer a message 1234567 bytes long. First of all, I communicate 4 bytes with an integer representing the size of the message. Okay. Then I start sending out the actual message. That message gets split in several packets, which get separately received. Now, an attacker just sends in an additional packet, faking it as if it was part of the conversation. It can just be one byte long: this completely destroys any synchronization mechanism I have implemented! The message has a spurious byte in the middle, and it doesn't successfully get decoded. Not only that, the last byte of the first message disrupts the alignment of the second message and so on: the connection is destroyed, and with a simple, simple attack! How likely and feasible is this attack anyway?
So I am wondering: what is the maximum data unit that can be transferred at once? I understand that a call to send does not necessarily correspond to a single call to receive: the message can be split into different chunks. How can I group the packets together in some way so that I know they belong together? Is there a way to define a higher-level message that gets reconstructed and aligned as a whole and triggers a single call to a receive-like function? If not, what other solutions can I find to keep my communication re-alignable even in the presence of an attacker?
Basically, it is difficult to control how the OS divides the stream into TCP packets (the RFC defining the TCP protocol states that the TCP stack should allow clients to force it to send buffered data by using the push function, but it does not define how many packets this should generate; and in any case the attacker can modify any of them).
These TCP packets can get divided even further into IP fragments on their way through the network (which can be opted out of with the 'Do not fragment' IP flag -- but that flag can cause your packets not to be delivered at all).
I think that your problem is not about introducing packets into a stream protocol, but about securing it.
IPSec could be very beneficial in your scenario, as it operates on the network layer.
It provides integrity for every packet sent, so any modification on-the-wire gets detected and the invalid packets are dropped. In case of TCP the dropped packets get re-transmitted automatically.
(Almost) everything is done automatically by the OS -- so you do not need to worry about it (and make mistakes doing so).
The confidentiality can be assured as well (with the same advantage of not re-inventing the wheel).
IPSec should provide you a reliable transport protocol on top of which you can use whatever framing format you like.
Another alternative is using SSL/TLS on top of the TCP session, which is less robust (as it closes the whole connection on an integrity error).
Now, an attacker just sends in an additional packet, faking it as if it was part of the conversation. It can just be one byte long: this completely destroys any synchronization mechanism I have implemented!
Thwarting such an injection problem is dealt with by securing the stream. Create an encrypted stream and send your packets through that.
Of course the encrypted stream itself then has this problem; its messages can be corrupted. But those messages have secure integrity checks. The problem is detected, and the connection can be torn down and re-established to resynchronize it.
Also, some fixed-length synchronizing/framing bit sequence can be used between messages: some specific bit pattern. It doesn't matter if that pattern occurs inside messages by accident, because we only ever specifically look for that pattern when things go wrong (a corrupt message is received), otherwise we skip that sequence. If a corrupt message is received, we then receive bytes until we see the synchronizing pattern, and assume that whatever follows it is the start of a message (length followed by payload). If that fails, we repeat the process. When we receive a correct message, we reply to the peer, which will re-transmit anything we didn't get.
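For the "length followed by payload" part, a common approach is a read-exact loop on top of the (secured) stream; a minimal sketch with POSIX sockets, where the 4-byte network-order length prefix and the helper names are assumptions for the example:

```c
#include <arpa/inet.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly len bytes have arrived, or fail. */
static int recv_exact(int fd, void *buf, size_t len)
{
    uint8_t *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return -1;        /* error or peer closed the connection */
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one length-prefixed message into buf (capacity cap); returns the
 * message length, or -1 on error or if the announced length is too large. */
int recv_message(int fd, uint8_t *buf, size_t cap)
{
    uint32_t len_net;
    if (recv_exact(fd, &len_net, sizeof len_net) != 0) return -1;

    uint32_t len = ntohl(len_net);
    if (len > cap) return -1;         /* never trust the peer's length blindly */

    if (recv_exact(fd, buf, len) != 0) return -1;
    return (int)len;
}
```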
How likely and feasible is this attack anyway?
TCP connections are identified by four items: the source and destination IP, and source and destination port number. The attacker has to fake a packet which matches your stream in these four identifiers, and sneak that packet past all the routers and firewalls between that attacker and the receiving machine. The attacker also has to be in the right ballpark with regard to the TCP sequence number.
Basically, this is next to impossible for an attacker C to perpetrate against endpoints A and B which are both distant from C on the network. The fake source IP will be rejected long before C is able to reach its destination. It's more plausible as an inside job (which includes malware): C is close to A and B.
I know that for TCP you can have for example Nagle's Algorithm enabled. However, can you have something similar for UDP?
Practical question (assume a UDP socket):
If I call send() two times in a short period of time with 1 byte of data in each send() call, is it possible that the transport layer decides to send only 1 UDP packet with the 1 byte + 1 byte = 2 bytes of data?
Thanks in advance!
No. UDP datagrams are delivered intact exactly as sent, or not at all.
Not according to the RFC (RFC 768). Above IP facilities themselves, UDP really only provides, as extras, port-based routing and a little bit of extra detection for corruption or misrouting.
That means there is no facility to combine datagrams. In fact, since UDP is meant to be transaction oriented, I would say that combining two transactions into one may well be a bad idea in terms of keeping those transactions disparate.
Otherwise, you would need a layer above UDP which could figure out how to extract these transactions from a datagram. At the moment, that's not necessary since the datagram is the transaction.
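To tie this back to the practical question above, here is a small sketch with POSIX sockets (the loopback address and port 9999 are placeholders): two one-byte send() calls produce two separate datagrams, and each recv() on the other side returns exactly one of them.

```c
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Two 1-byte send() calls -> two 1-byte datagrams on the wire; the receiver's
 * recv() returns them one at a time, never merged into a 2-byte message. */
void send_two_datagrams(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9999);                  /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
    connect(fd, (struct sockaddr *)&dst, sizeof dst);

    send(fd, "A", 1, 0);   /* first datagram  */
    send(fd, "B", 1, 0);   /* second datagram */

    close(fd);
}
```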
As added support (though not, of course, definitive) for this contention, see the UDP wikipedia page:
Datagrams – Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent.
However, the best support for it comes from one of its clients. UDP was specially engineered for TFTP (among other things) and that protocol breaks down if you cannot distinguish a transaction.
Specifically, one of the TFTP transaction types is the data transaction which consists of an opcode, block number and up to 512 bytes of data. Without a length indication at the start or a sentinel value at the end, there is no way to work out where the next transaction would start unless there is a one-to-one mapping between transaction and datagram.
As an aside, the other four TFTP transaction types have either a fixed length or end-of-string sentinel values but the data transaction is the decider here.