How is the speed of I2C communication decided, i.e. whether it should operate at 100 kbps or 400 kbps? - linux-device-driver

Please clarify the above question, which I was asked in an interview at a company.
Also, how does a slave device communicate with the master if it receives requests from two or more masters at the same time?
If we have an I2C master and slave connected, how is the data transfer speed between them decided, i.e. whether to use the 100 kbps standard mode or the 400 kbps fast mode?

I2C slave devices (from small sensors to mobile camera modules, where I2C is the control interface, not the data interface) specify their supported I2C clock rates in the datasheet. Similarly, master devices specify their maximum supported I2C clock rate. The slower of the two (master and slave) sets the speed of the I2C communication.
Trying to detect the supported frequency at runtime (i.e. without knowing the slave's supported rate) is not recommended for embedded devices, since they are not hot-pluggable in most cases.
So, for example, if the master is configured for 400 kHz but the slave only supports up to 100 kHz, there will be a problem; the master must be configured to 100 kHz or less.
If the slave supports up to 400 kHz and the master is configured for 100 kHz (even though it is capable of 400 kHz), there is no problem. In this case, the speed is decided by your requirements: if you need more throughput (4x speed), configure the master at 400 kHz; if you are satisfied with the performance at 100 kHz, stay at 100 kHz to save power. You can also configure the master at a custom frequency between 100 kHz and 400 kHz.
If multiple devices with different I2C rates are interfaced on the same bus, the bus should run at the lowest of those rates, since changing the I2C clock rate at runtime is difficult in most cases (e.g. in the Linux kernel the clock rate and its settings are provided by the device tree).
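As a rough illustration (a minimal sketch, not tied to any particular controller; the function and parameter names are made up), the selection boils down to taking the minimum of the maximum rates:

#include <stdint.h>

/* Sketch only: pick the bus clock as the lowest maximum supported by the
 * master controller and by every slave sharing the bus. The *_max_hz
 * values would come from the respective datasheets. */
static uint32_t i2c_pick_bus_speed(uint32_t master_max_hz,
                                   const uint32_t *slave_max_hz, int nslaves)
{
    uint32_t speed = master_max_hz;
    for (int i = 0; i < nslaves; i++) {
        if (slave_max_hz[i] < speed)
            speed = slave_max_hz[i];
    }
    return speed; /* e.g. 100000 (100 kHz) if any device tops out there */
}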
For communication between multiple masters and one slave, read this

Related

How multiple-slave, single-master SPI software slave management works

I am using an STM32H7-family microcontroller as an SPI master (transmit-only) that needs to talk to 4 SPI slave devices (receive-only), which are also all STM32H7 MCUs. Both master and slaves are configured for software slave management.
The confusion is: how will a slave know when the master wants to talk to it or transmit data to it, without using the hardware NSS pin?
How will the slave device start receiving in this scenario, and stop receiving when all data has been transmitted?
If you use software slave select (NSS), you must select and deselect the SPI interface in software.
Typically, you would set up an external interrupt on the pin used as NSS/CS and select/deselect the SPI interface when the interrupt is triggered.
On an STM32F1 chip, the SPI interface is selected or deselected by clearing or setting the SSI bit in the SPI_CR1 register (NSS is active low, so SSI = 0 selects the interface). I assume it's very similar on an STM32H7 chip.
Update
I've just checked the STM32H7 and it's exactly the same.
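For illustration, a minimal sketch of that register access using the CMSIS bit macros (assumptions: hspi1 is the CubeMX-generated handle and software slave management is already enabled):

#include "stm32h7xx_hal.h"

extern SPI_HandleTypeDef hspi1;

/* Sketch only: with software slave management, the SSI bit is substituted
 * for the NSS pin. NSS is active low, so clearing SSI selects the slave
 * and setting it deselects the slave. */
static inline void spi_soft_select(void)
{
    CLEAR_BIT(hspi1.Instance->CR1, SPI_CR1_SSI);
}

static inline void spi_soft_deselect(void)
{
    SET_BIT(hspi1.Instance->CR1, SPI_CR1_SSI);
}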
It is very simple. Every slave has one pin called CS. You select a device by driving this pin from a GPIO; then you can transmit or receive data. Remember that the master has to supply the clock even if it only wants to receive data.
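For illustration, a minimal master-side sketch of the approach just described (hspi1 and the cs_port/cs_pin names are placeholders from a CubeMX-style project):

#include "stm32h7xx_hal.h"

extern SPI_HandleTypeDef hspi1;

/* Sketch only: drive the slave's CS line low with a plain GPIO, clock the
 * data out, then release CS. One GPIO per slave. */
static void spi_send_to_slave(GPIO_TypeDef *cs_port, uint16_t cs_pin,
                              uint8_t *data, uint16_t len)
{
    HAL_GPIO_WritePin(cs_port, cs_pin, GPIO_PIN_RESET); /* select (CS low) */
    HAL_SPI_Transmit(&hspi1, data, len, HAL_MAX_DELAY);
    HAL_GPIO_WritePin(cs_port, cs_pin, GPIO_PIN_SET);   /* deselect */
}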
It seems that the code shown below can manage the problem.
__HAL_SPI_ENABLE(&hspi1);   /* select: enable the SPI peripheral when CS goes low   */
__HAL_SPI_DISABLE(&hspi1);  /* deselect: disable the SPI peripheral when CS goes high */
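Putting those pieces together, a hedged slave-side sketch (assumptions: hspi1 is the slave's SPI handle, SLAVE_CS_Pin / SLAVE_CS_GPIO_Port name the pin used as software CS, and that pin is configured as an EXTI source on both edges):

#include "stm32h7xx_hal.h"

extern SPI_HandleTypeDef hspi1;
static uint8_t rx_buf[64];

/* Sketch only: arm a reception when the master pulls CS low, stop
 * listening again when CS goes high. */
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
    if (GPIO_Pin != SLAVE_CS_Pin)
        return;

    if (HAL_GPIO_ReadPin(SLAVE_CS_GPIO_Port, SLAVE_CS_Pin) == GPIO_PIN_RESET) {
        __HAL_SPI_ENABLE(&hspi1);            /* selected: start listening */
        HAL_SPI_Receive_IT(&hspi1, rx_buf, sizeof(rx_buf));
    } else {
        HAL_SPI_Abort_IT(&hspi1);            /* deselected: stop any pending reception */
        __HAL_SPI_DISABLE(&hspi1);
    }
}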

Can we get as high a bandwidth with the Cluster strategy as with the Star strategy in Google Nearby Connections?

We used Nearby Connections and tried both the Star and Cluster strategies, and noticed that the bandwidth with the Star strategy is much higher than with the Cluster strategy.
The network topology used by the Cluster strategy best matches our requirements, but we want the high bandwidth we get with the Star strategy.
If that is possible, how can we achieve it?
tl;dr: You can get high speeds with P2P_CLUSTER, but only if both devices are connected to the same router.
The reason P2P_STAR and P2P_POINT_TO_POINT are more restrictive is so that technologies with the same restrictions can be used. P2P_STAR can do everything P2P_CLUSTER can do, but can additionally use Wifi hotspots (and similar technologies, like Wifi Direct). Likewise, P2P_POINT_TO_POINT can do everything P2P_STAR can do, but also with the inclusion of Wifi Aware. It's a tradeoff between flexibility and bandwidth.
As of today, these are the technologies behind each strategy. (Note: We're constantly adding more, and are open to suggestions).
P2P_CLUSTER:
Bluetooth Low Energy (BLE)
Bluetooth Classic
Wifi Local Area Network (LAN)
P2P_STAR:
Bluetooth Low Energy (BLE)
Bluetooth Classic
Wifi Local Area Network (LAN)
Wifi Hotspot
Wifi Direct
P2P_POINT_TO_POINT:
Bluetooth Low Energy (BLE)
Bluetooth Classic
Wifi Local Area Network (LAN)
Wifi Hotspot
Wifi Direct
Wifi Aware

Hardware for Low-Latency transmission from Microcontroller to PC

First and foremost, I am aware of a "similar" question/answer here: USB: low latency (< 1ms) with interrupt transfer and raw HID
In my case I'm at the start of my project and currently choosing the "right hardware" for the job. I want to transfer raw sensor data from an IMU to a host PC in roughly <1 ms. My idea was to use a Teensy or Arduino microcontroller to handle the interface between the IMU and the PC. The current priority is driving the input latency down as far as possible, using (ideally) the USB protocol. I'm well aware that once on the PC I have to deal with a non-real-time system.
Is there anything hardware-wise that I have to pay attention to when choosing my microcontroller?

i2c master for s35390a rtc slave

Is there sample I2C master code that supports the S35390A RTC hardware clock? I am currently working on an SoC that needs to support the S35390A from Seiko, but currently I am getting the error rtc-s35390a 0-0030: hctosys: unable to read the hardware clock. I cannot read/write data properly. I am implementing the combined form of transmission.
Use an oscilloscope to check whether the I2C SCL/SDA lines show anything.
If you can see a correct waveform for the first (address) byte, you will easily get the register values.
This might not be an RTC chip problem.
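For reference, a generic user-space sketch of a combined (write, repeated START, read) transfer through the Linux i2c-dev interface; the bus device path, slave address and register number are parameters and purely illustrative, not values from the S35390A datasheet (the rtc-s35390a kernel driver performs its own transfers internally):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

/* Sketch only: write one register byte, then read len bytes back in a
 * single combined transaction (no STOP between the two messages). */
int i2c_combined_read(const char *dev, uint8_t addr, uint8_t reg,
                      uint8_t *buf, uint16_t len)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct i2c_msg msgs[2] = {
        { .addr = addr, .flags = 0,        .len = 1,   .buf = &reg }, /* write register number  */
        { .addr = addr, .flags = I2C_M_RD, .len = len, .buf = buf  }, /* repeated START, read   */
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };

    int ret = ioctl(fd, I2C_RDWR, &xfer);
    close(fd);
    return ret < 0 ? -1 : 0;
}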

Direct communication between two PCI devices

I have a NIC card and an HDD, both connected to PCIe slots in a Linux machine. Ideally, I'd like to funnel incoming packets to the HDD without involving the CPU, or involving it minimally. Is it possible to set up direct communication along the PCI bus like that? Does anyone have pointers on what to read up on to get started on a project like this?
Thanks all.
Not sure if you are asking about PCI or PCIe. You used both terms, and the answer is different for each.
If you are talking about a legacy PCI bus: The answer is "yes". Board to board DMA is doable. Video capture boards may DMA video frames directly into your graphics card memory for example.
In your example, the NIC could DMA directly to a storage device. However, the data would be quite "raw": your NIC would have no concept of a filesystem, for example. You also need to make sure you can program the NIC's DMA engine to stay within the confines of your SATA controller's registers. You don't want to walk off the end of the BAR!
If you are talking about a modern PCIe bus: The answer is "typically no, but it depends". Peer-to-peer bus transactions are a funny thing in the PCI Express Spec. Root complex devices are not required to support it.
In my testing, peer-to-peer DMA will work, if your devices are behind a PCIe switch (not directly plugged into the motherboard). However, if your devices are connected directly to the chipset (Root Complex), peer-to-peer DMA will not work, except in some special cases. The most notable special case would be the video capture example I mentioned earlier. The special cases are mentioned in the chipset datasheets.
We have tested the peer-to-peer PCIe DMA with a few different Intel and AMD chipsets and found consistent behavior. Have not tested the most recent generations of chipsets though. (We have discussed the lack of peer-to-peer PCIe DMA support with Intel, not sure if our feedback has had any impact on their Engineering dept.)
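To illustrate the "walk off the end of the BAR" point from the legacy-PCI paragraph, here is a hypothetical kernel-side sketch; nic_dma_program() is an invented placeholder, not a real kernel API:

#include <linux/pci.h>
#include <linux/errno.h>

/* Placeholder for a device-specific DMA setup routine (not a real API). */
int nic_dma_program(struct pci_dev *src, pci_bus_addr_t dst_addr, size_t len);

/* Sketch only: fetch the bus address and size of the target device's BAR
 * and make sure the programmed transfer stays inside it. */
static int setup_p2p_window(struct pci_dev *src, struct pci_dev *dst,
                            int dst_bar, size_t xfer_len)
{
    pci_bus_addr_t dst_addr = pci_bus_address(dst, dst_bar);
    resource_size_t dst_len = pci_resource_len(dst, dst_bar);

    if (!dst_addr || xfer_len > dst_len)
        return -EINVAL; /* would walk off the end of the BAR */

    return nic_dma_program(src, dst_addr, xfer_len);
}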
Assuming that both the NIC card and the HDD are Endpoints (or Legacy Endpoints), you cannot funnel traffic without involving the Root Complex (CPU).
PCIe, unlike PCI or PCI-X, is not a bus but a link, so any transaction from an Endpoint device (say the NIC) would have to travel through the Root Complex (CPU) in order to get to another branch (the HDD).