Does anyone know about an STM32 GPIO toggling timing issue? I want to get consistently normal toggle timing, without the occasional abnormal toggle periods I am seeing. As shown in the attached image, the toggle timing determines the bit period of my One-Wire logic interface, so when it changes, the total One-Wire signal timing changes as well.
HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_12);
DWT_Delay_us(2); // about 2us delay time
I am using an STM32F722 MCU with STM32CubeMX and STM32CubeIDE.
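For reference, DWT_Delay_us above is assumed to be the usual cycle-counter busy-wait; a minimal sketch with CMSIS names (not the poster's actual implementation):

#include "stm32f7xx.h"

void DWT_Delay_Init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable the trace block    */
    DWT->LAR = 0xC5ACCE55;                          /* unlock DWT on Cortex-M7   */
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;            /* start the cycle counter   */
}

void DWT_Delay_us(uint32_t us)
{
    uint32_t start  = DWT->CYCCNT;
    uint32_t cycles = us * (SystemCoreClock / 1000000U);
    while ((DWT->CYCCNT - start) < cycles) { }      /* busy-wait                 */
}

Note that any interrupt firing during the busy-wait stretches that iteration of the delay, which is a common cause of the kind of occasional abnormal toggle period described above.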
Here's my configuration:
GPIO_InitTypeDef GPIO_InitStruct = {0};
GPIO_InitStruct.Pin = GPIO_PIN_8;
GPIO_InitStruct.Mode = GPIO_MODE_IT_FALLING;
GPIO_InitStruct.Pull = GPIO_NOPULL;
HAL_GPIO_Init(GPIOJ, &GPIO_InitStruct);
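For completeness, the port clock and the NVIC line also need enabling; these are assumed to be present elsewhere in the original project (a pin 8 interrupt maps to EXTI9_5_IRQn):

__HAL_RCC_GPIOJ_CLK_ENABLE();              /* port clock before HAL_GPIO_Init   */
HAL_NVIC_SetPriority(EXTI9_5_IRQn, 5, 0);  /* priority value is a placeholder   */
HAL_NVIC_EnableIRQ(EXTI9_5_IRQn);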
When I put the signal on the input pin (square wave, 2 Hz, 3.3 Vp-p) I get an interrupt every 250 ms, that is, on every rising and falling edge of the signal. I changed the test signal's duty cycle to check whether that is really what is happening, and it confirmed it: I get the interrupt on both edges.
I even debugged the HAL driver to test if it does what I think it does. And yes, it seems to configure the EXTI correctly, only for the falling edge for my pin.
What may be the cause of such behavior? My device is an STM32H747I-DISCO discovery board with TouchGFX software used for presentation. The software otherwise works correctly; I verified this by measuring the time between other timer interrupts.
I monitor the test signal on the oscilloscope to ensure the input signal on my pin is correct. I tried another pin on the same port, but I observe identical behavior: I get interrupts on both rising and falling edges of the signal, despite the pin being configured to trigger the interrupt only on the falling edge.
I also tested the rising-edge-only case; there too, I get the interrupt on both edges.
The problem turned out to be a hardware error: a voltage spike I had overlooked. The STM32 EXTI input worked correctly the whole time; there was indeed a spurious falling edge.
Simulated problem illustration: the 10 nF capacitor causes voltage spikes and spurious edge detection. In the real circuit, the digital oscilloscope's time base was too long to capture the spike, so the signal looked like a proper square wave; after shortening the time base I noticed the spike. As the illustration shows, this behavior can easily be reproduced in a circuit simulator:
SIMULATION LINK
Removing the capacitor from the circuit solved the problem.
To avoid noise and other spurious signals on the input, shielded wires can be used. The real-world circuit was tested and works properly without the capacitor.
The opto-coupler is just a simplified model of the optical sensor used in the real machine.
I have an STM32F417IG microcontroller and an external 16-bit DAC (TI DAC81404) that is supposed to generate a signal with a sample rate of 32 kHz. The communication via SPI should not consume any CPU resources, which is why I want to use timer-triggered DMA to move the data to the SPI data register at a 32 kHz rate and send it to the DAC.
Information about the DAC
Whenever the DAC receives a channel address and the corresponding new 16-bit value, it updates its output voltage to the received value. This is achieved by:
Pulling the CS/NSS/SYNC pin low
Sending the 24bit/3 byte long message and
Pulling the CS back to a high state
The first 8 bits of the message contain, among other things, the channel address where the output voltage should be applied. The remaining 16 bits contain the new value.
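For illustration, packing one such frame into a byte buffer might look like this (the exact command/address bit layout is an assumption; check the DAC81404 datasheet):

#include <stdint.h>

/* Hypothetical layout: command/address byte, then the 16-bit value MSB-first. */
static void pack_dac_frame(uint8_t out[3], uint8_t addr_cmd, uint16_t value)
{
    out[0] = addr_cmd;              /* first 8 bits: command + channel address */
    out[1] = (uint8_t)(value >> 8); /* last 16 bits: new DAC value, MSB first  */
    out[2] = (uint8_t)value;
}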
Information about STM32
Unfortunately, ST's microcontrollers have a hardware quirk with the NSS pin. When communication via SPI starts, the NSS pin is pulled low, and it stays low as long as SPI is enabled (reference manual page 877). That is sadly not the right behavior for communicating with a device that needs NSS to rise after each message. A "solution" would be to toggle the NSS pin manually, as suggested in the manual: "When a master is communicating with SPI slaves which need to be de-selected between transmissions, the NSS pin must be configured as GPIO or another GPIO must be used and toggled by software."
Problem
If DMA is used the ordinary way, the CPU is only involved when starting the process. Toggling NSS twice every 1/32000 s, however, adds 64,000 CPU interactions per second.
My question is whether I missed something in order to achieve a communication without CPU.
If not, my goal is to reduce the CPU processing time to a minimum. My plan is to trigger the DMA with a timer, so that every 1/32000 s the SPI data register is filled with the 24-bit data for the DAC.
The NSS could be toggled by a timer interrupt.
I have problems achieving it because I do not know how to link the timer with the DMA of the SPI using HAL-functions. Can anyone help me?
This is a tricky one. It might be difficult to avoid having one interrupt per sample with this combination of DAC and microcontroller.
However, one approach I would look at is to have the CS signal created as a timer output-compare (like PWM). You can use multiple channels of the same timer or link multiple timers to create a delay between the CS output and the DMA trigger. You should allow some room for jitter, because depending on what else is happening the DMA might not respond instantly. This won't hurt your DAC output signal though, because it only outputs the value on the rising edge of chip select (called SYNC in the DAC datasheet) which will still be from your first timer.
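A minimal sketch of that linkage with HAL, under several assumptions not in the original posts: CubeMX has generated hspi1 (8-bit master; SPI's own TXDMAEN is not used, because the timer, not TXE, paces the requests), htim8 whose update event is mapped via CubeMX to the DMA stream behind hdma_tim8_up (memory-to-peripheral, circular, byte-wide, memory increment on), TIM8 running at 3 × 32 kHz so three update events shift out one 3-byte frame per sample period, and the CS/SYNC pulse coming from a PWM channel of a second, linked timer at 32 kHz:

extern SPI_HandleTypeDef hspi1;        /* 8-bit master, already initialized     */
extern TIM_HandleTypeDef htim8;        /* paces one byte per update event       */
extern TIM_HandleTypeDef htim1;        /* generates the CS/SYNC pulse as PWM    */
extern DMA_HandleTypeDef hdma_tim8_up; /* stream mapped to the TIM8_UP request  */

static uint8_t dac_frame[3] = { 0x08, 0x00, 0x00 }; /* placeholder cmd + value  */

void start_timer_paced_spi(void)
{
    /* Circular DMA: each TIM8 update event moves one byte into the SPI data
       register; TIM8 at 96 kHz sends the whole frame every 1/32000 s.          */
    HAL_DMA_Start(&hdma_tim8_up, (uint32_t)dac_frame,
                  (uint32_t)&hspi1.Instance->DR, sizeof dac_frame);
    __HAL_TIM_ENABLE_DMA(&htim8, TIM_DMA_UPDATE);   /* route update -> DMA      */

    __HAL_SPI_ENABLE(&hspi1);                       /* shift out whatever DR gets */
    HAL_TIM_PWM_Start(&htim1, TIM_CHANNEL_1);       /* CS/SYNC pulse at 32 kHz    */
    HAL_TIM_Base_Start(&htim8);
}

The phase offset between the PWM edge on CS and the TIM8 update events is where you build in the jitter headroom mentioned above.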
I'm working on firmware development on an STM32L4. I need to sample an analog signal at around 200 Hz, so basically one analog-to-digital conversion every 5 ms.
Up to now, I was starting the ADC in continuous conversion mode, triggered by a timer. However, this prevents putting the STM32 into Stop mode between conversions, which would be very beneficial for power consumption, since more than 99% of the time the product has nothing to do.
So my idea is to use single conversion mode: use a low-power timer to wake the product from Stop mode every 5 ms, launch a single conversion in the LPTIM interrupt handler (waiting for the ADC end of conversion by polling), and go back to Stop mode.
Does this make sense, or do you see problems with proceeding like this? I'm not sure about polling for a single ADC conversion inside a handler; what do you think? A single conversion on one channel should be pretty fast (I run at 80 MHz, and the datasheet mentions a maximum sampling time of 8 µs).
Do I have to disable/enable the ADC (the ADEN bit) between single conversions?
Also, I need to know how long a single conversion lasts to assess whether the solution is worthwhile. I'm confused about the sampling time (the SMP bits). The reference manual states: "This sampling time must be enough for the input voltage source to charge the embedded capacitor to the input voltage level." What is the way to find the right SMP value?
There are no problems with the general idea; LPTIM1 can generate wakeup events through the EXTI controller even in Stop 2 mode.
I'm not sure about polling for a single ADC conversion inside a handler, what do you think ?
You might want to put the MCU in Sleep mode in the timer interrupt, and have the ADC trigger an interrupt when the conversion is complete. So disable SLEEPDEEP in the timer interrupt, and enable it in the ADC interrupt.
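A minimal sketch of that SLEEPDEEP juggling, assuming CubeMX-generated hlptim1/hadc1 handles (names are placeholders):

extern LPTIM_HandleTypeDef hlptim1;
extern ADC_HandleTypeDef hadc1;

void LPTIM1_IRQHandler(void)
{
    HAL_LPTIM_IRQHandler(&hlptim1);          /* clear the LPTIM flags            */
    SCB->SCR &= ~SCB_SCR_SLEEPDEEP_Msk;      /* next WFI enters Sleep, not Stop  */
    HAL_ADC_Start_IT(&hadc1);                /* launch one conversion            */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    uint16_t sample = (uint16_t)HAL_ADC_GetValue(hadc); /* fetch the result      */
    (void)sample;                            /* ...store/process it here         */
    SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;       /* next WFI goes back to Stop mode  */
}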
What is the way to find the right SMP value ?
Empirical method: start with the longest sampling time and decrease it step by step. When the conversion result changes significantly, go back one or two steps.
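In HAL terms, that means re-running the channel configuration with progressively shorter sampling times (handle and channel names are placeholders):

extern ADC_HandleTypeDef hadc1;

void adc_set_sampling_time(void)
{
    ADC_ChannelConfTypeDef cfg = {0};
    cfg.Channel      = ADC_CHANNEL_5;               /* placeholder channel       */
    cfg.Rank         = ADC_REGULAR_RANK_1;
    cfg.SamplingTime = ADC_SAMPLETIME_640CYCLES_5;  /* longest on the L4...      */
    HAL_ADC_ConfigChannel(&hadc1, &cfg);
    /* ...then retry with 247.5, 92.5, 47.5, ... cycles while results stay stable */
}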
I am building a quadcopter using an STM32F407 discovery board. I was finally able to stabilize it. Now I am trying to use the RC receiver so I can control my quadcopter's movements. Is there a way to read the PWM signal of my RC receiver channels?
Also, my RC receiver supports PPM, and as I understand it, it receives a packet of duty cycles, but I still don't know how to receive this.
You can use the SPI interface to sample the PPM (or PWM) signal of your RC receiver.
General approach:
Connect the PPM signal to the MISO pin and simultaneously to a second pin of the controller. The MOSI, CLK, and CS pins are not needed.
Initialize the SPI interface with an appropriate clock; this is the frequency at which the signal will be shifted into the controller. Try 4 kHz.
Depending on the idle state of the signal, enable either a rising- or falling-edge interrupt trigger on the second pin. This will be used to detect incoming frames.
When the interrupt occurs, disable the trigger temporarily and start an SPI reception to get several bytes (the outgoing data is ignored and need not be connected). Depending on the frame length, 8 or 10 bytes should do it; this will catch frames up to 20 ms.
After you get all the bytes, enable the trigger again and repeat for the next frame.
The received data will contain the bit pattern of the PWM/PPM signal.
You should also match the sampling rate and the number of bytes received to your RC receiver; a rough sketch of the approach follows.
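A rough sketch with HAL, assuming the PPM line is wired to MISO and to PA0/EXTI0 (pin choice and handle names are placeholders):

extern SPI_HandleTypeDef hspi1;   /* receive-only master, ~4 kHz clock          */

static uint8_t ppm_frame[10];     /* 80 bits x 250 us/bit = a 20 ms window      */

void HAL_GPIO_EXTI_Callback(uint16_t pin)
{
    if (pin == GPIO_PIN_0) {                       /* start-of-frame edge       */
        HAL_NVIC_DisableIRQ(EXTI0_IRQn);           /* pause the edge trigger    */
        HAL_SPI_Receive_IT(&hspi1, ppm_frame, sizeof ppm_frame); /* shift it in */
    }
}

void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
{
    (void)hspi;
    /* ppm_frame now holds a 1-bit-per-250-us picture of the pulse train:
       decode the channel values by counting runs of 1s and 0s, then re-arm.    */
    __HAL_GPIO_EXTI_CLEAR_IT(GPIO_PIN_0);          /* drop edges seen meanwhile */
    HAL_NVIC_EnableIRQ(EXTI0_IRQn);
}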
Do you guys know how to enable clock stretching for an I2C slave?
Is it enough to just call I2C_StretchClockCmd(I2C2, ENABLE) in the I2C initialization?
How does clock stretching work exactly?
It seems to me that yes, that's enough. Here is some background:
Clock Generation
The SCL clock is always generated by the I2C master. The specification requires minimum periods for the low and high phases of the clock signal. Hence, the actual clock rate may be lower than the nominal clock rate, e.g. in I2C buses with large rise times due to high capacitance.
Clock Stretching
I2C devices can slow down communication by stretching SCL: during an SCL low phase, any I2C device on the bus may additionally hold SCL down to prevent it from rising high again, enabling it to slow down the SCL clock rate or to stop I2C communication for a while. This is also referred to as clock synchronization.
Note: The I2C specification does not specify any timeout conditions for clock stretching, i.e. any device can hold down SCL as long as it likes.
In an I2C communication, the master device determines the clock speed. Unlike RS-232, the I2C bus provides an explicit clock signal, which relieves master and slave from synchronizing exactly to a predefined baud rate.
However, there are situations where an I2C slave is not able to co-operate with the clock speed given by the master and needs to slow down a little. This is done by a mechanism referred to as clock stretching.
An I2C slave is allowed to hold the clock down if it needs to reduce the bus speed. The master, on the other hand, is required to read back the clock signal after releasing it to the high state and wait until the line has actually gone high.
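To make that last point concrete, a bit-banged master would honor stretching like this (scl_release()/scl_read() are hypothetical open-drain GPIO helpers, not from any real library):

static void i2c_scl_high_with_stretch(void)
{
    scl_release();               /* open-drain: stop driving SCL low            */
    while (scl_read() == 0) {    /* the slave may still be holding SCL down...  */
        /* ...wait (ideally with a timeout) until the line actually goes high   */
    }
}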
How does the I2C clock speed affect the duration of clock stretching introduced by the I2C slave?
Clock stretching is a phenomenon where the I2C slave pulls the SCL line low on the 9th clock of every I2C data transfer (before the ACK stage). The clock is pulled low while the CPU is processing the I2C interrupt to evaluate the address, process data received from the master, or prepare the next data when the master is reading from the slave.
The time the clock is held low depends on how long the CPU takes to process the interrupt, and hence depends on the CPU speed, not the I2C clock speed.
Why do we need clock stretching? Couldn't the same thing be achieved at the slave's acknowledgement on the 9th clock bit, by holding the data line until the slave's internal processing is finished?