HAL SPI DMA check how many bytes received during operation - stm32

I am transferring 10 bytes from master to slave over SPI+DMA with HAL. How can I check how many bytes the receiver has actually received at a given moment, and stop/restart the process if all 10 bytes have not arrived? The master expects an answer from the slave after sending the 10 bytes, but if the slave has not received the full 10 bytes it keeps waiting and the system hangs indefinitely.
Any idea??

"I am transferring 10 bytes from master to slave over SPI+DMA with HAL."
Since you use DMA, you just configure the transfer size in the DMA receive API and enable the DMA interrupt. When the DMA has received 10 bytes, the DMA receive-complete interrupt will fire; it will not fire if the sender transfers fewer than 10 bytes.
"Because the master after sending 10 bytes should get an answer from slave but if the slave has not received full byte it waits and system go in indifinite......."
You can solve this problem by using a timeout mechanism on the slave side.
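A minimal sketch of both ideas on the slave side, assuming an SPI handle hspi1 with its RX DMA channel already set up by CubeMX (the buffer and timeout names are illustrative, not from the question): the DMA channel's remaining-transfer counter tells you how many of the 10 bytes are still missing, and a simple deadline aborts the transfer if the count never reaches zero.

    #include "stm32f4xx_hal.h"   /* adjust to your device family */

    extern SPI_HandleTypeDef hspi1;          /* assumed handle from CubeMX */
    static uint8_t rx_buf[10];
    static volatile uint8_t rx_done = 0;

    void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
    {
        if (hspi == &hspi1)
            rx_done = 1;                     /* all 10 bytes have arrived */
    }

    void slave_receive_with_timeout(void)
    {
        rx_done = 0;
        HAL_SPI_Receive_DMA(&hspi1, rx_buf, sizeof rx_buf);

        uint32_t deadline = HAL_GetTick() + 100;   /* 100 ms, pick what suits you */
        while (!rx_done) {
            /* __HAL_DMA_GET_COUNTER returns how many bytes are still pending */
            uint32_t remaining = __HAL_DMA_GET_COUNTER(hspi1.hdmarx);
            uint32_t received  = sizeof rx_buf - remaining;
            (void)received;                        /* e.g. log it for debugging */

            if (HAL_GetTick() > deadline) {
                HAL_SPI_Abort(&hspi1);             /* give up and restart */
                return;
            }
        }
        /* rx_buf now holds the 10 bytes; prepare and send the answer here */
    }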

Related

Does HAL_SPI_Transmit() discard received data?

Suppose I have two STM boards with a full duplex SPI connection (one is master, one is slave), and suppose I use HAL_SPI_Transmit() and HAL_SPI_Receive() on each end for the communication.
Suppose further that I want the communication to consist of a series of single-byte command-and-response transactions: master sends command A, slave receives it and then sends response A; master sends command B, slave receives it and then sends response B, and so on.
When the master calls HAL_SPI_Transmit(), the nature of SPI means that while it clocks out the first byte over the MOSI line, it is simultaneously clocking in a byte over the MISO line. The master would then call HAL_SPI_Receive() to furnish clocks for the slave to transmit its response. My question: What is the result of the master's HAL_SPI_Receive() call? Is it the byte that was simultaneously clocked in during the master's transmit, or is it what the slave transmitted afterwards?
In other words, does the data that is implicitly clocked in during HAL_SPI_Transmit() get "discarded"? I'm thinking it must, because otherwise we should always use the HAL_SPI_TransmitReceive() call and ignore the received part.
(Likewise, when HAL_SPI_Receive() is called, what is clocked OUT, which will be seen on the other end?)
Addendum: Please don't say "Don't use HAL". I'm trying to understand how this works. I can move away from HAL later--for now, I'm a beginner and want to keep it simple. I fully recognize the shortcomings of HAL. Nonetheless, HAL exists and is commonly used.
Yes, if you only use HAL_SPI_Transmit() to send data, the received data at the same clocked event gets discarded.
As an alternative, use HAL_SPI_TransmitReceive() to send data and receive data at the same clock events. You would need to provide two arrays, one that contains data that will be sent, and the other array will be populated when bytes are received at the same clock events.
E.g. if your STM32 SPI Slave wishes to send data to a master when the master plans to send 4 clock bytes to it (master sends 0xFF byte to retrieve a byte from slave), using HAL_SPI_TransmitReceive() will let you send the data you wish to send on one array, and receive all the clocked bytes 0xFF on another array.
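A minimal sketch of that pattern, assuming a handle hspi1 and a 4-byte exchange (buffer names are illustrative):

    uint8_t tx_buf[4] = {0x01, 0x02, 0x03, 0x04};  /* bytes to clock out */
    uint8_t rx_buf[4] = {0};                       /* filled with what was clocked in */

    /* Each of the 4 clocked bytes shifts one byte out of tx_buf and one byte
     * into rx_buf at the same time; nothing is discarded. */
    if (HAL_SPI_TransmitReceive(&hspi1, tx_buf, rx_buf, sizeof tx_buf, HAL_MAX_DELAY) != HAL_OK) {
        /* handle the error */
    }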
I have never used HAL_SPI_Receive() on its own, but the microcontroller that calls it may clock out arbitrary data as long as the clock signals are valid. If you use this function, the other microcontroller should assume that the data it receives during that phase must be ignored. You could also use a logic analyzer to trace the SPI data exchange between the two microcontrollers when using HAL_SPI_Transmit() and HAL_SPI_Receive().

What happens when a process tries to read more bytes than the other process sent

If two processes communicate via sockets and Process A sends Process B 100 bytes.
Process B tries to read 150 bytes. Later Process A sends 50 bytes.
What is the result of Process B's read?
Will the process B read wait until it receives 150 bytes?
That is dependent on many factors, especially related to the type of socket, but also to the timing.
Generally, however, the receive buffer size is considered a maximum. So, if a process executes a recv with a buffer size of 150, but the operating system has only received 100 bytes so far from the peer socket, usually the available 100 are delivered to the receiving process (and the return value of the system call will reflect that). It is the responsibility of the receiving application to go back and execute recv again if it is expecting more data.
Another related factor (which will not generally be the case with a short transfer like 150 bytes but definitely will if you're sending a megabyte, say) is that the sender's apparently "atomic" send of 1000000 bytes will not all be delivered in one packet to the receiving peer, so if the receiver has a corresponding recv with a 1000000 byte buffer, it's very unlikely that all the data will be received in one call. Again, it's the receiver's responsibility to continue calling recv until it has received all the data sent.
And it's generally the responsibility of the sender and receiver to somehow coordinate what the expected size is. One common way to do so is by including a fixed-length header at the beginning of each logical transmission telling the receiver how many bytes are to be expected.
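A minimal sketch of that receive loop in C, assuming a connected stream socket fd (error handling is abbreviated):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stddef.h>

    /* Keep calling recv() until 'len' bytes have arrived, the peer closes
     * the connection, or an error occurs.  Returns bytes actually read. */
    ssize_t recv_all(int fd, void *buf, size_t len)
    {
        size_t total = 0;
        while (total < len) {
            ssize_t n = recv(fd, (char *)buf + total, len - total, 0);
            if (n == 0)                 /* peer closed the connection */
                break;
            if (n < 0)                  /* error (check errno, maybe retry on EINTR) */
                return -1;
            total += (size_t)n;         /* got a partial chunk; keep going */
        }
        return (ssize_t)total;
    }

In the question's scenario, recv_all(fd, buf, 150) would simply keep waiting after the first 100 bytes until the remaining 50 arrive; a plain single recv() would return the 100 bytes that are already buffered.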
Depends on what kind of socket it is. For a STREAM socket, the read will return either the amount of data currently available or the amount requested (whichever is less) and will only ever block (wait) if there is no data available.
So in this example, assuming the 100 bytes have (all) been transmitted and received into the receive buffer when B reads from the socket and the additional 50 bytes have not yet been transmitted, the read will return those 100 bytes and will not wait.
Note also, the dependency of all the data being transmitted and received -- when process A writes data to a socket it will not necessarily be sent immediately or all at once. Depending on the underlying transport, there's an MTU size and any write larger than that will be broken up. Smaller writes may also be delayed and combined with later writes to make up the MTU. So in your case the send of 100 bytes might be too large (and broken up), or might be too small and not be transmitted immediately.

I2C repeated start

I am trying to use a TC74 (or DS1621) temperature sensor, which comes with an I2C interface. So far my I2C ISR is able to write command and config bytes to the chip. However, I don't know how to instruct the ISR to jump to state 0x10 (repeated start) for a read operation. The read procedure is as follows:
start bit by micro-controller (ATTINY48 in my case)
sending slave address+w (in state 0x8), ACK from slave
sending command byte to slave (in state 0x18), ACK from slave
at this point (state 0x28) ISR must send a repeated start and jump to state 0x10
then sending slave address+R, ACK from slave
then in state 0x40 data will be read from slave, NACK to slave
in state 0x58 data is ready and copied to proper variable, stop bit will be transmitted.
I can set a flag every time I call the TC74 read function and check that flag inside the ISR, so that instead of sending the stop bit after writing the data byte to the TC74, it issues a repeated start bit. However, I am not sure whether this is the correct and standard method. Generally, in many I2C peripheral states, the next state must be decided.
How should I instruct the ISR in each state to jump to the desired next state?
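For illustration, here is a minimal sketch of the flag-based approach described above, assuming the standard AVR TWI status codes and register names (the variables are hypothetical and only the single-byte read transaction is shown); it is not a complete driver, just the state decisions:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define TW_STATUS        (TWSR & 0xF8)   /* mask out the prescaler bits */

    static volatile uint8_t do_read;         /* set by the TC74 read function */
    static volatile uint8_t command_byte;    /* command/register to send first */
    static volatile uint8_t slave_addr;      /* 7-bit address */
    static volatile uint8_t result;

    ISR(TWI_vect)
    {
        switch (TW_STATUS) {
        case 0x08:  /* START transmitted: send SLA+W */
        case 0x10:  /* repeated START transmitted: send SLA+R for the read phase */
            TWDR = (TW_STATUS == 0x10) ? (uint8_t)((slave_addr << 1) | 1)
                                       : (uint8_t)(slave_addr << 1);
            TWCR = (1 << TWINT) | (1 << TWEN) | (1 << TWIE);
            break;

        case 0x18:  /* SLA+W ACKed: send the command byte */
            TWDR = command_byte;
            TWCR = (1 << TWINT) | (1 << TWEN) | (1 << TWIE);
            break;

        case 0x28:  /* command byte ACKed: repeated start if a read was requested */
            if (do_read)
                TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN) | (1 << TWIE);
            else
                TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);   /* STOP */
            break;

        case 0x40:  /* SLA+R ACKed: receive one byte, reply with NACK (TWEA clear) */
            TWCR = (1 << TWINT) | (1 << TWEN) | (1 << TWIE);
            break;

        case 0x58:  /* data received, NACK returned: store it and send STOP */
            result  = TWDR;
            do_read = 0;
            TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);
            break;

        default:    /* unexpected state: release the bus */
            TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);
            break;
        }
    }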

NACK and ACK responses on I2c bus

My recent project requires the use of I2C communication with a single master and multiple slaves. I know that with each data byte (actual data) sent by the master, the slave responds with NACK/ACK (1/0).
I am confused about how this NACK and ACK are interpreted. I searched the web but didn't get a clear picture of this. My understanding is something like this:
ACK - I have successfully received the data. Send me more data.
NACK - I haven't received the data. Send it again.
Is it something like this, or am I wrong?
Please clarify and suggest the right answer.
Thanks
Amit kumar
You really should read the I2C specification here, but briefly, there are two different cases to consider for ACK/NACK:
After sending the slave address: when the I2C master sends the address of the slave to talk to (including the read/write bit), a slave which recognizes its address sends an ACK. This tells the master that the slave it is trying to reach is actually on the bus. If no slave devices recognize the address, the result is a NACK. In this case, the master must abort the request as there is no one to talk to. This is not generally something that can be fixed by retrying.
Within a transfer: after the receiving side (the master on a receive, or the slave on a send) receives a byte, it must send an ACK. The major exception is when the receiver controls the number of bytes sent: it must send a NACK after the last byte. For example, on a slave-to-master transfer, the master must send a NACK just before sending a STOP condition to end the transfer. (This is required by the spec.)
It may also be that the receiver can send a NACK if there is an error; I don't remember if this is allowed by the spec.
But the bottom line is that a NACK either indicates a fatal condition which cannot be retried or is simply an indication of the end of a transfer.
BTW, the case where a receiving device needs more time to process is never indicated by a NACK. Instead, a slave device either does "clock stretching" (or the master simply delays generating the clock) or it uses a higher-layer protocol to request retrying.
Edit 6/8/19: As pointed out by #DavidLedger, there are I2C flash devices that use NACK to indicate that the flash is internally busy (e.g. completing a write operation). I went back to the I2C standard (see above) and found the following:
There are five conditions that lead to the generation of a NACK:
No receiver is present on the bus with the transmitted address so there is no device to respond with an acknowledge.
The receiver is unable to receive or transmit because it is performing some real-time function and is not ready to start communication with the master.
During the transfer, the receiver gets data or commands that it does not understand.
During the transfer, the receiver cannot receive any more data bytes.
A master-receiver must signal the end of the transfer to the slave transmitter.
Therefore, these NACK conditions are valid per the standard.
Short delays, particularly within a single operation, will normally use clock stretching, but longer delays, particularly between operations, as well as invalid operations, may well produce a NACK.
The I2C protocol starts with a start bit followed by the slave address (7-bit address + 1 bit for read/write).
After sending the slave address, the master releases the data bus (SDA line), putting the line in a high-impedance state and leaving it for the slave to drive.
If the address matches its own, the slave pulls the line low for the ACK.
If the line is not pulled low by any slave, the master treats it as a NACK and sends a stop bit or repeated start bit on the next clock pulse to terminate or restart the communication.
Besides this, a NACK is also sent whenever the receiver is not able to communicate or does not understand the data.
A NACK is also used by the master (as receiver) to terminate the read flow once it has all the data, followed by a stop bit.
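A minimal sketch of that ACK check for a bit-banged master, assuming hypothetical open-drain GPIO helpers (sda_release, scl_high, scl_low, sda_read) that you would replace with your own port code:

    #include <stdbool.h>

    /* Hypothetical open-drain GPIO helpers -- replace with your own port code. */
    extern void sda_release(void);   /* stop driving SDA; the pull-up takes it high */
    extern void scl_high(void);
    extern void scl_low(void);
    extern bool sda_read(void);      /* sample the SDA line */

    /* Clock the 9th bit and report whether the addressed device ACKed.
     * ACK  = slave holds SDA low while SCL is high.
     * NACK = nobody drives SDA, so the pull-up keeps it high. */
    bool i2c_check_ack(void)
    {
        sda_release();               /* hand the data line over to the slave */
        scl_high();
        bool ack = (sda_read() == false);
        scl_low();
        return ack;
    }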

What is the difference between "Interrupt coalescing" and the "Nagle algorithm"?

Is the main difference this?
Interrupt coalescing (ethtool -C eth1 rx-usecs 0) - coalesces received packets from different connections, i.e. increases bandwidth but increases receive latency.
Nagle algorithm (socket option TCP_NODELAY) - coalesces sent packets from the same connection, i.e. increases bandwidth but increases send latency.
Interrupt coalescing concerns the network driver: the idea is to avoid invoking the interrupt handler anew every time a network packet shows up. Instead, after receiving a packet, the NIC waits until M packets are received or until N microseconds have passed before generating an interrupt. Then the driver can process many packets at once. (Otherwise, with modern gigabit and 10-gigabit adapters, the processor would need to field hundreds of thousands or millions of interrupts per second, which can prevent the system from being able to accomplish much else.) As your link points out, there is (or at least may be) a cost of additional latency since the OS doesn't start processing a received packet at the earliest possible instant.
Nagle's algorithm is focused on reducing the number of packets sent by coalescing payload data from multiple packets into one. The classic example is a telnet session. Without Nagle, every time you press a key, the system has to create an entire new packet (min 64 bytes on Ethernet) to send one byte.
So the intent of interrupt coalescing is to support greater bandwidth utilization, while the intent of Nagle's algorithm is actually to produce lower bandwidth (by sending fewer packets).
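For reference, disabling Nagle is done per socket with the TCP_NODELAY option the question mentions; a minimal sketch using POSIX sockets (error handling omitted):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm on an already-created TCP socket 'fd':
     * small writes go out immediately instead of being coalesced. */
    int disable_nagle(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
    }

Interrupt coalescing, by contrast, is configured on the NIC/driver side (e.g. the ethtool -C command shown in the question), not per socket.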