Do you guys know how to enable clock stretching for an I2C slave?
Is it enough to just put the function I2C_StretchClockCmd(I2C2, ENABLE) in the I2C initialization?
How does clock stretching work exactly?
Seems to me yes, that's enough. Here is some background:
Clock Generation
The SCL clock is always generated by the I2C master. The specification requires minimum periods for the low and high phases of the clock signal. Hence, the actual clock rate may be lower than the nominal clock rate e.g. in I2C buses with large rise times due to high capacitances.
Clock Stretching
I2C devices can slow down communication by stretching SCL: during an SCL low phase, any I2C device on the bus may additionally hold SCL down to prevent it from rising again, which lets that device slow down the SCL clock rate or stop I2C communication for a while. This is also referred to as clock synchronization.
Note: The I2C specification does not specify any timeout conditions for clock stretching, i.e. any device can hold down SCL as long as it likes.
In I2C communication the master device determines the clock speed. Unlike RS-232, the I2C bus provides an explicit clock signal, which relieves master and slave from having to synchronize exactly to a predefined baud rate.
However, there are situations where an I2C slave is not able to co-operate with the clock speed given by the master and needs to slow down a little. This is done by a mechanism referred to as clock stretching.
An I2C slave is allowed to hold down the clock if it needs to reduce the bus speed. The master, on the other hand, is required to read back the clock signal after releasing it to the high state and to wait until the line has actually gone high.
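Back to the original question: a minimal SPL slave init could look like this sketch (the address and peripheral are example values, GPIO/RCC setup is omitted). Note that on these parts clock stretching is the reset default (NOSTRETCH = 0), so the I2C_StretchClockCmd() call mainly documents the intent:

```c
#include "stm32f10x_i2c.h"  /* or the SPL I2C header for your part */

void i2c2_slave_init(void)
{
    I2C_InitTypeDef init;

    I2C_StructInit(&init);
    init.I2C_Mode                = I2C_Mode_I2C;
    init.I2C_OwnAddress1         = 0x30;  /* example address, as in ST's samples */
    init.I2C_Ack                 = I2C_Ack_Enable;
    init.I2C_AcknowledgedAddress = I2C_AcknowledgedAddress_7bit;

    I2C_Init(I2C2, &init);
    I2C_StretchClockCmd(I2C2, ENABLE);  /* keep clock stretching enabled */
    I2C_Cmd(I2C2, ENABLE);
}
```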
How does the I2C clock speed affect the duration of clock stretching introduced by the I2C slave?
Clock stretching is a phenomenon where the I2C slave pulls the SCL line low on the 9th clock of every I2C data transfer (before the ACK stage). The clock is pulled low while the CPU is processing the I2C interrupt: to evaluate the address, to process data received from the master, or to prepare the next data when the master is reading from the slave.
The time the clock is held low depends on how long the CPU takes to process the interrupt, and hence on the CPU speed, not on the I2C clock speed. For example, if the interrupt service takes 5 µs, SCL is stretched by roughly 5 µs whether the bus runs at 100 kHz or 400 kHz.
Why do we need clock stretching? Couldn't the same thing be achieved at the slave's acknowledgement on the 9th clock bit, by holding the data line until the slave's internal processing is over?
Hello,
I'm making a project where I want to bit-bang the JTAG protocol.
According to AN4666 from ST, DMA + GPIO can achieve high speeds when bit-banging synchronous protocols.
I want to:
Generate N PWM pulses (the CLK signal).
On the falling edge of each pulse, I want to set some GPIOs with DMA.
On the rising edge, I want to read the GPIOs using DMA.
What is the best way to achieve these specs using HAL?
Even without DMA you can reach quite high frequencies with bit-banged I/O, I'd say in the range of 2-10 MHz, assuming a fast enough MCU and a high enough GPIO bus clock (48-96 MHz).
The clock just won't be as stable and may suffer "stalls" (idle time when an interrupt occurs) compared to DMA, but it is way simpler.
For the DMA-based approach: if you use 3 bits of one port, one for TCK, one for TDI and one for TDO, then use 2 DMA streams, one that writes and one that reads, on the same timer source (if possible) at double the rate of the TCK signal.
The input data is rebuilt by taking every second bit of the read data, at indices 0, 2, 4, ... or 1, 3, 5, ..., depending on which edge you want and on how the clock array in memory is coded; see the sketch below.
Lastly, if your JTAG chain is a multiple of 8 bits, SPI is even simpler and DMA is easy ;)
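A rough HAL sketch of that two-DMA scheme (all handle names, the GPIOE pin layout and the NBITS length are assumptions; CubeMX would have to configure hdma_wr as memory-to-peripheral on the TIM1_UP request and hdma_rd as peripheral-to-memory on TIM1_CH1, both word-sized with memory increment, and the timer running at twice the TCK rate):

```c
#include "stm32f4xx_hal.h"

extern TIM_HandleTypeDef htim1;            /* runs at 2x the TCK rate     */
extern DMA_HandleTypeDef hdma_wr, hdma_rd; /* TIM1_UP / TIM1_CH1 requests */

#define NBITS 64
/* Two entries per TCK period: one with TCK low + TDI data, one with TCK high. */
static uint32_t wr_pattern[2 * NBITS]; /* precomputed GPIOE->BSRR words */
static uint32_t rd_capture[2 * NBITS]; /* raw GPIOE->IDR snapshots      */

void jtag_shift(void)
{
    /* Writer: each timer update pushes one BSRR word (TCK and TDI bits). */
    HAL_DMA_Start(&hdma_wr, (uint32_t)wr_pattern,
                  (uint32_t)&GPIOE->BSRR, 2 * NBITS);
    /* Reader: each CC1 match (placed mid-period) captures the whole port,
       TDO included. */
    HAL_DMA_Start(&hdma_rd, (uint32_t)&GPIOE->IDR,
                  (uint32_t)rd_capture, 2 * NBITS);

    __HAL_TIM_ENABLE_DMA(&htim1, TIM_DMA_UPDATE | TIM_DMA_CC1);
    HAL_TIM_Base_Start(&htim1);

    /* TDO is rebuilt afterwards by keeping every second rd_capture entry
       (indices 0, 2, 4, ... or 1, 3, 5, ... depending on the sampling edge). */
}
```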
I have an STM32F417IG microcontroller and an external 16-bit DAC (TI DAC81404) that is supposed to generate a signal with a sampling rate of 32 kHz. The communication via SPI should not involve any CPU resources. That is why I want to use a timer-triggered DMA to shift the data at a rate of 32 kHz into the SPI data register in order to send the data to the DAC.
Information about the DAC
Whenever the DAC receives a channel address and the corresponding new 16-bit value, it updates its output voltage to the newly received value. This is achieved by:
Pulling the CS/NSS/SYNC pin low,
Sending the 24-bit / 3-byte message, and
Pulling CS back to a high state.
The first 8 bits of the message contain, among other things, the channel address where the output voltage should be applied; the final 16 bits contain the new value.
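For illustration, packing such a frame could look like this (the exact bit layout of the first byte is in the DAC81404 datasheet; the helper below is hypothetical):

```c
#include <stdint.h>

/* Illustrative only: pack an 8-bit command/address byte plus a 16-bit
 * value into the 3-byte frame described above (check the DAC81404
 * datasheet for the real bit layout of the first byte). */
static void dac_frame_pack(uint8_t cmd, uint16_t value, uint8_t frame[3])
{
    frame[0] = cmd;                     /* address/command byte */
    frame[1] = (uint8_t)(value >> 8);   /* value, MSB first     */
    frame[2] = (uint8_t)(value & 0xFF);
}
```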
Information about STM32
Unfortunately ST's microcontrollers have a hardware quirk with the NSS pin. When communication via SPI starts, the NSS pin is pulled low, and it then stays low for as long as SPI is enabled (reference manual, page 877). That is sadly not the right way to communicate with a device that needs NSS to rise after each message. A "solution" would be to toggle the NSS pin manually, as suggested in the manual ("When a master is communicating with SPI slaves which need to be de-selected between transmissions, the NSS pin must be configured as GPIO or another GPIO must be used and toggled by software.").
Problem
If DMA is used the ordinary way, the CPU is only involved when starting the process. Toggling NSS twice every 1/32000 s, however, leads to correspondingly frequent CPU interactions.
My question is whether I missed something in order to achieve a communication without CPU.
If not, my goal is to reduce the CPU processing time to a minimum. My plan is to trigger the DMA with a timer, so that every 1/32000 s the SPI data register is filled with the 24-bit data for the DAC.
The NSS could be toggled by a timer interrupt.
I'm having problems achieving this because I do not know how to link the timer with the SPI's DMA using HAL functions. Can anyone help me?
This is a tricky one. It might be difficult to avoid having one interrupt per sample with this combination of DAC and microcontroller.
However, one approach I would look at is to have the CS signal created as a timer output-compare (like PWM). You can use multiple channels of the same timer, or link multiple timers, to create a delay between the CS output and the DMA trigger. You should allow some room for jitter because, depending on what else is happening, the DMA might not respond instantly. This won't hurt your DAC output signal, though, because the DAC only latches the value on the rising edge of chip select (called SYNC in the DAC datasheet), which will still come from your first timer.
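A sketch of what that could look like with HAL. Every name here (htim2, htim3, hspi1, hdma_tim3_up) is a CubeMX assumption: TIM3 updates at 96 kHz (3 bytes per 32 kHz sample) with its update request driving a circular memory-to-peripheral DMA into the SPI data register, while TIM2 outputs the CS/SYNC pulse at 32 kHz on a PWM channel, phase-linked to TIM3 via the timers' master/slave triggers:

```c
#include "stm32f4xx_hal.h"

extern TIM_HandleTypeDef htim2, htim3;
extern SPI_HandleTypeDef hspi1;
extern DMA_HandleTypeDef hdma_tim3_up;

static uint8_t dac_frame[3]; /* 8-bit address/command + 16-bit value */

void start_dac_stream(void)
{
    /* Enable SPI but do not use its own TXE DMA request: the bytes are
       pushed by the timer-paced DMA, so the byte rate is exactly 96 kHz. */
    __HAL_SPI_ENABLE(&hspi1);

    /* Circular DMA (configured as such in CubeMX): one byte per TIM3 update. */
    HAL_DMA_Start(&hdma_tim3_up, (uint32_t)dac_frame,
                  (uint32_t)&hspi1.Instance->DR, sizeof(dac_frame));
    __HAL_TIM_ENABLE_DMA(&htim3, TIM_DMA_UPDATE);

    HAL_TIM_PWM_Start(&htim2, TIM_CHANNEL_1); /* CS/SYNC waveform, no CPU  */
    HAL_TIM_Base_Start(&htim3);               /* byte pacing + DMA trigger */
}
```

After start_dac_stream() returns, the CPU is idle apart from updating dac_frame whenever the output value should change.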
I want to generate a clock for a PCA9959 LED driver with my STM32L552. The LED driver needs an external clock at 20 MHz (+/- 15%). I'm trying to generate a 22 MHz clock on port PA8 of the STM32L552. I managed to generate a PWM on port PA8, but I can't reach a frequency of ~22 MHz; I get a maximum of 8 MHz.
Here are the PWM parameters:
I'm not sure I filled in the PWM parameters correctly. Normally, with these settings, I guess I should get a 22 MHz PWM with a 20% duty cycle.
PWM frequency (MHz) = SystemCoreClock (MHz) / Prescaler => 22 MHz = 110 MHz / 5
My clock configuration:
Thanks for your help.
The easiest way to output a high speed clock like this is with the MCO peripheral, rather than a timer. Fortunately for you the MCO pin is PA8. Perhaps the person who designed your board knew this and intended you to use MCO. Read the reference manual to see how.
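For what it's worth, the MCO route can be as small as the sketch below. It assumes you lower SYSCLK to 88 MHz so that SYSCLK / 4 lands at 22 MHz; on the STM32L5 HAL versions I've seen, HAL_RCC_MCOConfig() also takes care of configuring PA8 itself:

```c
#include "stm32l5xx_hal.h"

/* Sketch: route SYSCLK / 4 out on PA8 (MCO). Assumes the PLL has been
 * reconfigured for SYSCLK = 88 MHz, giving 22 MHz at the pin, inside
 * the PCA9959's 20 MHz +/- 15% window. */
void clock_out_pa8(void)
{
    __HAL_RCC_GPIOA_CLK_ENABLE();  /* MCO pin PA8 lives on port A */
    HAL_RCC_MCOConfig(RCC_MCO1, RCC_MCO1SOURCE_SYSCLK, RCC_MCODIV_4);
}
```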
If you do want to use a timer to do 22 MHz, then, as you have correctly identified, you cannot get a 50% duty cycle on your PWM. I would recommend starting with 40% or 60%, i.e. an output-compare value of 2-out-of-5 or 3-out-of-5, not 1 as you have above.
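In HAL terms that could look like the following sketch (hypothetical htim1 handle, assuming the timer is clocked at 110 MHz with prescaler 0; ARR = 4 gives a period of 5 ticks, i.e. 22 MHz):

```c
#include "stm32l5xx_hal.h"

extern TIM_HandleTypeDef htim1;  /* hypothetical, set up by CubeMX */

void pwm_22mhz_40pct(void)
{
    __HAL_TIM_SET_AUTORELOAD(&htim1, 4);             /* period = 5 ticks -> 22 MHz */
    __HAL_TIM_SET_COMPARE(&htim1, TIM_CHANNEL_1, 2); /* high 2 of 5 ticks -> 40%   */
    HAL_TIM_PWM_Start(&htim1, TIM_CHANNEL_1);
}
```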
There is no detail in the PCA9959 datasheet about the required mark-space ratio of the clock, but I guess anything other than 50% could be a problem. You would be better off dividing the clock by an even number: either just divide 110 MHz by 6 and output 18.33 MHz, or slow your core down a bit and divide by 4 (reduce the N parameter of your PLL).
Whether you use MCO or PWM, don't forget to set the GPIO pin mode to the fastest slew rate available. Maybe the 8 MHz you are measuring is the result of aliasing a faster clock that has been through the wrong GPIO mode. You could test this using a scope with at least 100 MHz bandwidth.
I want to implement an I2C slave in an FPGA, just for learning purposes. I read in the I2C specification that for Fast-mode there is a timing parameter tSP = 50 ns (max), which means "pulse width of spikes that must be suppressed by the input filter". Should this be a digital filter inside the slave? If yes, does that mean my slave must run with a clock period of at most 25 ns (or something like that)?
Another question: is there a (robust) way of implementing this slave using the SCL line as the only clock? Or is a faster second clock needed (in which case I would treat the SCL line as "data")? If so, how do I calculate the minimum frequency of this other clock?
Thanks in advance!
I'm testing the SPI capabilities of the STM32H7. For this I'm using the SPI examples provided in STM32CubeH7 on 2 Nucleo-H743ZI boards. I will perhaps not keep this code in my own development; right now the goal is to understand how SPI works and what bandwidth I can get in the different modes (with DMA, with cache enabled or not, etc.).
I'd like to share the figures I've measured, as they don't seem very high. In the example, if I understood correctly, the CPU runs at about 400 MHz and the SPI bus frequency is about 100 MHz.
For polling mode I've measured the number of cycles taken by the call to HAL_SPI_TransmitReceive.
For DMA I've measured between the call to HAL_SPI_TransmitReceive_DMA and the transfer-complete callback.
Cycle measurements were made with SysTick clocked from the internal clock. Since no low-power modes are used, this should be accurate.
I've just modified ST's examples to send a buffer of 1 KB.
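For reference, here is roughly how such a cycle count can be taken with the DWT cycle counter instead of SysTick (hspi1 and the buffers are placeholders for whatever the Cube example declares):

```c
#include "stm32h7xx_hal.h"

#define BUF_SIZE 1024
extern SPI_HandleTypeDef hspi1;
static uint8_t tx_buf[BUF_SIZE], rx_buf[BUF_SIZE];

uint32_t spi_polling_cycles(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable trace unit    */
    DWT->LAR = 0xC5ACCE55;                          /* unlock (Cortex-M7)   */
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;            /* start cycle counter  */

    uint32_t start = DWT->CYCCNT;
    HAL_SPI_TransmitReceive(&hspi1, tx_buf, rx_buf, BUF_SIZE, HAL_MAX_DELAY);
    return DWT->CYCCNT - start;                     /* cycles spent in call */
}
```

As a sanity check on the arithmetic: 200,000 cycles at 400 MHz is 0.5 ms, and 1024 bytes / 0.5 ms ≈ 2 MB/s, which matches the polling figure below.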
I get around 200,000 CPU cycles in polling mode, which means around 2 MB/s,
and around 3 MB/s in DMA mode.
Since the SPI clock runs at 100 MHz I would have expected much more, especially in DMA mode. What do you think? Is there something wrong with my test procedure?