Describe how you communicate with an external peripheral device on the I2C bus

I am trying to summarize the general description and can't come up with a good way to phrase it. How do you communicate with an external peripheral device on the I2C bus? Maybe with steps.

There is plenty of material available on the web. For example, you will find good information at https://i2c.info/. The data sheets of microcontrollers such as the ATmega328P also contain very detailed descriptions.
The usual procedure looks like this:
Master sends START condition (HIGH to LOW transition of SDA while SCL is HIGH)
Master sends the I2C device address (usually a 7-bit address + bit 0 = 0 to write)
Slave sends ACK
Master sends the I2C register address that you want to read (8 bits)
Slave sends ACK
Master sends a repeated START (HIGH to LOW transition of SDA while SCL is HIGH)
Master sends the I2C device address (7-bit address + bit 0 = 1 to read)
Slave sends ACK
Slave sends the MSB of the requested register
Master sends ACK
Slave sends the LSB of the requested register (if the register actually contains more than one byte)
Master sends NACK (to inform the slave that it has received all expected data)
Master sends STOP (a LOW to HIGH transition of SDA while SCL is HIGH)
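To make these steps concrete, below is a minimal sketch of the same two-byte register read on an ATmega328P's TWI (I2C) peripheral. The device and register addresses are caller-supplied placeholders, and the TWI status-code checks are omitted for brevity; the datasheet's TWI chapter documents the full state machine.

    /* Minimal sketch of the sequence above on an ATmega328P TWI peripheral.
     * dev_addr is the 7-bit device address, reg_addr the register to read.
     * TWI status-code checks are omitted for brevity. */
    #include <avr/io.h>
    #include <stdint.h>

    static void twi_wait(void) { while (!(TWCR & (1 << TWINT))); } /* wait for TWI flag */

    uint16_t i2c_read_reg16(uint8_t dev_addr, uint8_t reg_addr)
    {
        uint8_t msb, lsb;

        TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);  /* START                  */
        twi_wait();

        TWDR = (dev_addr << 1) | 0;                        /* device address + write */
        TWCR = (1 << TWINT) | (1 << TWEN);
        twi_wait();                                        /* slave ACKs             */

        TWDR = reg_addr;                                   /* register address       */
        TWCR = (1 << TWINT) | (1 << TWEN);
        twi_wait();                                        /* slave ACKs             */

        TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);  /* repeated START         */
        twi_wait();

        TWDR = (dev_addr << 1) | 1;                        /* device address + read  */
        TWCR = (1 << TWINT) | (1 << TWEN);
        twi_wait();                                        /* slave ACKs             */

        TWCR = (1 << TWINT) | (1 << TWEN) | (1 << TWEA);   /* receive MSB, send ACK  */
        twi_wait();
        msb = TWDR;

        TWCR = (1 << TWINT) | (1 << TWEN);                 /* receive LSB, send NACK */
        twi_wait();
        lsb = TWDR;

        TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);  /* STOP                   */
        return ((uint16_t)msb << 8) | lsb;
    }

Most vendor I2C drivers wrap this whole sequence in a single "register read" style call, but the bus traffic underneath is the same list of steps.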

Related

How PCIE Root complex moves DMA transaction from PCIe endpoint to Host memory

I have a very basic doubt: how does the PCIe root complex move a DMA transaction from a PCIe endpoint to host memory?
Suppose a PCIe EP (endpoint) wants to initiate a DMA write transaction to host memory from its local memory.
The DMA read channel present on the PCIe EP will read data from its local memory, then the PCIe module in the EP converts this into a PCIe TLP and directs it to the PCIe root complex.
So my queries are:
How does the PCIe root complex know that it has to redirect this packet to host memory?
What is the hardware connection from the PCIe root complex to host memory? Is there a DMA write engine in the PCIe root complex to write this data to host memory?
The PCIe RC receives the TLP; it has an address translation function which optionally translates the address and sends the packet out on its user-side interface. Usually after the PCIe RC there is IOMMU logic which converts the PCIe address to a host physical address (and checks permissions). For PCIe, the IOMMU keeps address translation tables in memory for each {bus, device, function} tuple, or even per PASID (process address space ID). The packet then carries the new physical address and goes to an interconnect (usually one supporting cache coherency).
The interconnect receives the packet from the IOMMU (the IOMMU acts as a master on the interconnect), and that interface node has a system address map describing where the addressed target is located within the interconnect. The system address map should be set up by the firmware before the OS runs. (Usually there is also an interrupt controller - an Interrupt Translation Service on Arm systems - between the IOMMU and the interconnect to intercept MSIs (message signalled interrupts) and raise interrupts at the main interrupt controller.)
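As a purely conceptual illustration of that translation step: real IOMMUs (Intel VT-d, AMD-Vi, the Arm SMMU) walk multi-level translation tables held in host memory, but the flat table, field names and page size in this sketch are invented just to show the idea of mapping a {bus, device, function} requester ID plus a bus address onto a host physical address.

    /* Conceptual sketch only: models the IOMMU-style step that maps a
     * (requester ID, bus address) pair from an inbound TLP to a host physical
     * address before the write is forwarded to the coherent interconnect. */
    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12
    #define PAGE_MASK  ((1ULL << PAGE_SHIFT) - 1)

    struct iommu_entry {
        uint16_t requester_id;   /* PCI bus/device/function of the endpoint */
        uint64_t iova_page;      /* page-aligned bus (I/O virtual) address  */
        uint64_t host_pa_page;   /* page-aligned host physical address      */
        bool     write_allowed;  /* permission checked before forwarding    */
    };

    /* Translate one inbound address; false means a translation fault. */
    static bool iommu_translate(const struct iommu_entry *tbl, int n,
                                uint16_t rid, uint64_t iova, bool is_write,
                                uint64_t *host_pa)
    {
        for (int i = 0; i < n; i++) {
            if (tbl[i].requester_id == rid &&
                tbl[i].iova_page == (iova & ~PAGE_MASK) &&
                (!is_write || tbl[i].write_allowed)) {
                *host_pa = tbl[i].host_pa_page | (iova & PAGE_MASK);
                return true;
            }
        }
        return false;
    }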

STM32 + Lwip, MCU load due to broadcast packet

Due to a wrong network configuration, it is assumed that broadcast packet looping has occurred. The STM32 MCU continuously receives broadcast packets, and as a result the MCU load increases.
Tested on the STM32F746G-DISCOVERY board, the MCU load increased to 70~80%. In this case, the polling period is broken and our products do not work properly.
Apart from using a serial-to-Ethernet controller with an on-board TCP/IP protocol stack, is there a way to avoid this problem?
If you detect flooding of broadcast packets, you could in theory temporarily disable the reception of broadcast packets in the MAC configuration (the Ethernet hardware inside the STM32). The STM32 MAC can filter packets by broadcast, multicast, receive-all, or by a hash of either the source or destination hardware address.
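A rough sketch of what that could look like on an STM32F7, assuming the register and bit names (ETH->MACFFR, ETH_MACFFR_BFD) from the STM32F7 CMSIS device header (verify against your device and HAL version), with a deliberately simple packets-per-second heuristic that you would normally drive from the LwIP input path:

    /* Sketch: temporarily drop broadcast frames in the STM32 Ethernet MAC.
     * Register/bit names are from the STM32F7 CMSIS device header. */
    #include "stm32f7xx.h"

    void eth_broadcast_filter_set(int drop_broadcast)
    {
        if (drop_broadcast)
            ETH->MACFFR |= ETH_MACFFR_BFD;   /* BFD = broadcast frames disable */
        else
            ETH->MACFFR &= ~ETH_MACFFR_BFD;  /* accept broadcasts again        */
    }

    /* Example flood heuristic: call once per received broadcast frame (e.g.
     * from the LwIP input path). If more than BROADCAST_LIMIT broadcasts
     * arrive within one second, mute them; re-enable later from a timer. */
    #define BROADCAST_LIMIT 500u

    void on_broadcast_frame(uint32_t now_ms)
    {
        static uint32_t window_start_ms;
        static uint32_t count;

        if (now_ms - window_start_ms > 1000u) {
            window_start_ms = now_ms;
            count = 0;
        }
        if (++count > BROADCAST_LIMIT)
            eth_broadcast_filter_set(1);
    }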

NACK and ACK responses on I2c bus

My recent project requires I2C communication using a single master with multiple slaves. I know that with each data byte (actual data) sent by the master, the slave responds with a NACK/ACK (1/0).
I am confused about how this NACK and ACK are interpreted. I searched the web but didn't get a clear picture. My understanding is something like this:
ACK - I have successfully received the data. Send me more data.
NACK - I haven't received the data. Send it again.
Is it something like this, or am I wrong?
Please clarify and suggest the right answer.
Thanks
Amit kumar
You really should read the I2C specification here, but briefly, there are two different cases to consider for ACK/NACK:
After sending the slave address: when the I2C master sends the address of the slave to talk to (including the read/write bit), a slave which recognizes its address sends an ACK. This tells the master that the slave it is trying to reach is actually on the bus. If no slave devices recognize the address, the result is a NACK. In this case, the master must abort the request as there is no one to talk to. This is not generally something that can be fixed by retrying.
Within a transfer: after the receiving side (the master during a read, or the slave during a write) receives a byte, it must send an ACK. The major exception is that when the receiver controls the number of bytes transferred, it must send a NACK after the last byte. For example, on a slave-to-master transfer, the master must send a NACK just before sending the STOP condition that ends the transfer. (This is required by the spec.)
It may also be that the receiver can send a NACK if there is an error; I don't remember if this is allowed by the spec.
But the bottom line is that a NACK either indicates a fatal condition which cannot be retried or is simply an indication of the end of a transfer.
BTW, the case where a receiving device needs more time to process is never indicated by a NACK. Instead, a slave device either does "clock stretching" (or the master simply delays generating the clock) or it uses a higher-layer protocol to request retrying.
Edit 6/8/19: As pointed out by #DavidLedger, there are I2C flash devices that use NACK to indicate that the flash is internally busy (e.g. completing a write operation). I went back to the I2C standard (see above) and found the following:
There are five conditions that lead to the generation of a NACK:
No receiver is present on the bus with the transmitted address so there is no device to respond with an acknowledge.
The receiver is unable to receive or transmit because it is performing some real-time function and is not ready to start communication with the master.
During the transfer, the receiver gets data or commands that it does not understand.
During the transfer, the receiver cannot receive any more data bytes.
A master-receiver must signal the end of the transfer to the slave transmitter.
Therefore, these NACK conditions are valid per the standard.
Short delays, particularly within a single operation, will normally use clock stretching, but longer delays, particularly between operations, as well as invalid operations, may well produce a NACK.
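Following up on the busy/NACK case: a common way to handle a device that NACKs its address while an internal write is in progress is ACK polling, i.e. the master keeps re-addressing the device until the address byte is ACKed again. A minimal sketch, assuming an STM32 HAL project; hi2c1 and the 7-bit device address 0x50 are example placeholders:

    /* ACK polling: keep addressing the device until it ACKs again after an
     * internal write cycle. hi2c1 and the address 0x50 are placeholders. */
    #include "stm32f7xx_hal.h"

    extern I2C_HandleTypeDef hi2c1;

    HAL_StatusTypeDef wait_until_device_ready(void)
    {
        /* HAL_I2C_IsDeviceReady repeatedly sends START + device address and
         * returns HAL_OK as soon as the address byte is ACKed.
         * Arguments: handle, 8-bit address, number of trials, timeout (ms). */
        return HAL_I2C_IsDeviceReady(&hi2c1, 0x50 << 1, 10, 100);
    }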
The I2C protocol starts with a START bit followed by the slave address (7-bit address + 1 bit for read/write).
After sending the slave address, the master releases the data bus (SDA line), putting the line in a high-impedance state and leaving it to the slave to drive.
If the address matches its own, the slave pulls the line low for the ACK.
If the line is not pulled low by any slave, the master treats it as a NACK and sends a STOP bit (or a repeated START in the next clock pulse) to terminate or restart the communication.
Besides this, a NACK is also sent whenever the receiver is unable to communicate or does not understand the data.
A NACK is also used by the master (as receiver) to terminate the read flow once it has all the data, followed by a STOP bit.
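For a register-level view of that address-phase handshake, here is a sketch for the ATmega328P TWI, using the status codes from its datasheet (0x08 START transmitted, 0x18 SLA+W ACKed, 0x20 SLA+W NACKed); dev_addr is an example parameter and the caller decides what to do after a NACK:

    /* Register-level sketch of the I2C address phase on an ATmega328P TWI. */
    #include <avr/io.h>
    #include <stdint.h>
    #include <stdbool.h>

    static void wait_twint(void) { while (!(TWCR & (1 << TWINT))); }

    bool i2c_address_device(uint8_t dev_addr)
    {
        TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);     /* START          */
        wait_twint();
        if ((TWSR & 0xF8) != 0x08)
            return false;

        TWDR = (dev_addr << 1) | 0;                           /* SLA+W          */
        TWCR = (1 << TWINT) | (1 << TWEN);
        wait_twint();

        if ((TWSR & 0xF8) == 0x20) {                          /* address NACKed */
            TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN); /* STOP           */
            return false;
        }
        return (TWSR & 0xF8) == 0x18;                         /* address ACKed  */
    }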

HAL SPI DMA check how many bytes received during operation

I am transferring 10 bytes from master to slave over SPI+DMA with HAL. How can I check how many bytes the receiver has at a given moment, and stop the process if not all 10 bytes have been received? The master, after sending 10 bytes, expects an answer from the slave, but if the slave has not received all the bytes it waits and the system hangs indefinitely.
Any ideas?
"I am transferring 10 bytes from master to slave over SPI+DMA with HAL."
Since you use DMA, you just configure the transfer size in the DMA receive API and enable the DMA interrupt. When the DMA has received 10 bytes, the DMA receive-complete interrupt will fire, unless the sender transferred fewer than 10 bytes.
"Because the master after sending 10 bytes should get an answer from slave but if the slave has not received full byte it waits and system go in indifinite......."
You can solve this problem by using a timeout mechanism on the slave side.
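A minimal sketch of both points, assuming an STM32 HAL project (hspi1, the frame length and the timeout are example placeholders): the receive-complete callback signals a full frame, and on a timeout the number of bytes already received can be read back from the DMA counter before the transfer is aborted.

    /* Slave-side receive with timeout. hspi1, EXPECTED_LEN and RX_TIMEOUT_MS
     * are example placeholders. */
    #include "stm32f7xx_hal.h"

    #define EXPECTED_LEN  10u
    #define RX_TIMEOUT_MS 100u

    extern SPI_HandleTypeDef hspi1;
    static volatile uint8_t rx_done;
    static uint8_t rx_buf[EXPECTED_LEN];

    void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
    {
        if (hspi == &hspi1)
            rx_done = 1;                 /* all EXPECTED_LEN bytes arrived */
    }

    int receive_frame(void)
    {
        uint32_t start = HAL_GetTick();

        rx_done = 0;
        HAL_SPI_Receive_DMA(&hspi1, rx_buf, EXPECTED_LEN);

        while (!rx_done) {
            if (HAL_GetTick() - start > RX_TIMEOUT_MS) {
                /* Bytes received so far = expected - items left in the DMA. */
                uint32_t received = EXPECTED_LEN - __HAL_DMA_GET_COUNTER(hspi1.hdmarx);
                HAL_SPI_DMAStop(&hspi1); /* abort the incomplete transfer */
                return -(int)received;   /* negative = timeout, magnitude = partial count */
            }
        }
        return (int)EXPECTED_LEN;        /* full frame received */
    }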

pcie raw throughput test

I am doing a PCIe throughput test via a kernel module, and the test results are quite strange (write is 210MB/s but read is just 60MB/s for PCIe Gen1 x1). I would like to ask for your suggestions and corrections if there is anything wrong in my test configuration.
My test configuration is as follow:
One board is configured as the Root Port, the other board is configured as the Endpoint. The PCIe link is Gen 1, width x1, MPS 128B. Both boards run Linux.
At the Root Port side, we allocate a 4MB memory buffer and map the inbound PCIe memory transactions to this buffer.
At the Endpoint side, we do DMA reads/writes to the remote buffer and measure throughput. With this test the Endpoint is always the initiator of transactions.
The test result is 214MB/s for the EP Write test and only 60MB/s for the EP Read test. The Write throughput is reasonable for PCIe Gen1 x1, but the EP Read throughput is too low.
For the RP board, I tested it with a PCIe e1000e Ethernet card and got a maximum throughput of ~900Mbps. I just wonder: in the Ethernet TX path, the Ethernet card (playing the Endpoint role) also issues EP Read requests and can reach high throughput (~110MB/s) with even smaller DMA transfers, so there must be something wrong with my DMA EP Read configuration.
The detail of the DMA Read test can be summarized with below pseudo code:
dest_buffer = kmalloc(1MB)
memset(dest_buffer, 0)
dest_phy_addr = dma_map_single(dest_buffer)
source_phy_addr = outbound region of Endpoint
get_time(t1)
Loop 100 times
    Issue DMA read from source_phy_addr to dest_phy_addr
    wait for DMA read completion
get_time(t2)
throughput = (1MB * 100) / (t2 - t1)
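A hedged sketch of how that pseudo code could look in kernel C; issue_dma_read() and wait_dma_done() are placeholders for whatever the platform's DMA engine driver provides (not real kernel APIs), and dev and src_bus_addr (the address of the EP outbound window) are assumptions:

    /* Hedged sketch only: fleshes out the pseudo code above with standard
     * kernel APIs; the two extern functions are placeholders. */
    #include <linux/kernel.h>
    #include <linux/slab.h>
    #include <linux/dma-mapping.h>
    #include <linux/ktime.h>

    #define TEST_SIZE (1024 * 1024)   /* 1MB per DMA read */
    #define LOOPS     100

    extern void issue_dma_read(dma_addr_t src, dma_addr_t dst, size_t len); /* placeholder */
    extern void wait_dma_done(void);                                        /* placeholder */

    int run_ep_read_test(struct device *dev, dma_addr_t src_bus_addr)
    {
        void *dst = kmalloc(TEST_SIZE, GFP_KERNEL);
        dma_addr_t dst_bus;
        ktime_t t1, t2;
        int i;

        if (!dst)
            return -ENOMEM;

        /* The EP's DMA engine writes into this buffer, so map it FROM_DEVICE. */
        dst_bus = dma_map_single(dev, dst, TEST_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, dst_bus)) {
            kfree(dst);
            return -EIO;
        }

        t1 = ktime_get();
        for (i = 0; i < LOOPS; i++) {
            issue_dma_read(src_bus_addr, dst_bus, TEST_SIZE);
            wait_dma_done();
        }
        t2 = ktime_get();

        pr_info("EP read: %d MB in %lld us\n", LOOPS, ktime_us_delta(t2, t1));

        dma_unmap_single(dev, dst_bus, TEST_SIZE, DMA_FROM_DEVICE);
        kfree(dst);
        return 0;
    }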
Any recommendations and suggestions are appreciated. Thanks in advance!