I've got a strange issue I can't seem to get my head around. I'm using an STM32F411 board and STM32CubeIDE (Eclipse based). I want to use PWM, so I've used CubeMX to configure TIM4 in PWM output mode, with a prescaler and reload value appropriate for the PWM frequency/pulse widths I want. I've also enabled the global interrupt for TIM4, as I want to use the HAL_TIM_PWM_PulseFinishedCallback function later on.
Before the main loop, I initialise TIM4 and all 4 channels like so:
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_1); //Start up PWM
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_2); //Start up PWM
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_3); //Start up PWM
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_4); //Start up PWM
Then, right after, I set the PWM pulse widths manually:
htim4.Instance->CCR1 = 100;
htim4.Instance->CCR2 = 100;
htim4.Instance->CCR3 = 100;
htim4.Instance->CCR4 = 100;
For some reason, however, when I turn compiler optimisation on ('Optimise for speed', -Ofast), the program seems to get stuck while debugging after the final line, where CCR4 gets set, and can't progress.
This only happens when compiler optimisation for speed is enabled. By default it was set to optimise for debug, and it was fine that way.
Optimizing for anything but debug can confuse the debugger.
Things you can try (you didn't specify your toolchain; I'm assuming it's something Eclipse/GCC based):
Enable instruction stepping to step through the assembly instructions one at a time. It should work even when debugging by source lines doesn't.
Set a breakpoint two or three lines further down in the code, and let the debugger run through the critical part.
Hit the pause button just to see where it got stuck. It might not be available if you don't have an active breakpoint somewhere in the code.
Here's my configuration:
GPIO_InitTypeDef GPIO_InitStruct = {0};
GPIO_InitStruct.Pin = GPIO_PIN_8;   // pin bit mask, not a plain pin number
GPIO_InitStruct.Mode = GPIO_MODE_IT_FALLING;
GPIO_InitStruct.Pull = GPIO_NOPULL;
HAL_GPIO_Init(GPIOJ, &GPIO_InitStruct);
When I put the signal on the input pin (square wave, 2 Hz, 3.3 V p-p) I get an interrupt every 250 ms, i.e. on every rising AND falling edge of the signal. I changed the test signal's duty cycle to check whether that is really what is happening, and it confirmed it: I get the interrupt on both edges.
I even stepped through the HAL driver to check that it does what I think it does. And yes, it seems to configure the EXTI correctly, for the falling edge only on my pin.
What may be the cause of such behavior? My device is an STM32H747I-DISCO discovery board with TouchGFX software used for presentation. The software works correctly; I verified it by measuring the time between other timer interrupts.
I monitor the test signal on an oscilloscope to ensure the input signal on my pin is correct. I tried another pin on the same port, but I observe identical behavior: I get interrupts on both the rising and falling edges of the signal, even though the pin is configured to trigger the interrupt only on the falling edge.
I also tested the case with the rising edge only. In that case, too, I get the interrupt on both edges.
The problem turned out to be a hardware error, a voltage spike I overlooked. The STM32 EXTI input worked correctly all the time. There was indeed a spurious falling edge.
Simulated illustration of the problem: the 10 nF capacitor causes voltage spikes and spurious edge detection. In the real circuit, when a digital oscilloscope was used with a time base too long to capture the spike, the signal looked like a proper square wave; after shortening the time base I noticed the spike. As the illustration shows, this behavior can easily be reproduced in a circuit simulator:
SIMULATION LINK
Removing the capacitor from the circuit solved the problem.
To avoid picking up noise and other spurious signals on the input, shielded wires can be used. The real-world circuit was tested, and it works properly without the capacitor.
The opto-coupler is just a simplified model of the optical sensor used in the real machine.
For this specific problem, we are tasked with ensuring that our ADC samples at a specific frequency, around 30 kHz, as we need to sample an input with a frequency of at least 10 kHz. My initial plan was to set up a timer running at 30 kHz, have it generate an interrupt on every timer period, and sample in that interrupt. However, this is obviously a terrible idea, since 30000 interrupts per second would most likely break everything I've done so far.
All the ADC examples we have done so far did not require us to sample at a specific frequency and thus, simply keeping the HAL_ADC_GetValue in the main while loop was sufficient.
We were also told that DMA would not be necessary to solve the problem.
Any tips?
I'm working on firmware for an STM32L4. I need to sample an analog signal at around 200 Hz, so basically one analog-to-digital conversion every 5 ms.
Up to now, I was starting the ADC in continuous conversion mode, triggered by a timer. However, this prevents putting the STM32 in Stop mode in between conversions, which would be very efficient in terms of power consumption, since 99%+ of the time the product has nothing to do.
So my idea is to use single conversion mode: use a low-power timer to wake the product from Stop mode every 5 ms, launch a single conversion in the LPTIM interrupt handler (waiting for the ADC end of conversion in polling), and go back to Stop mode.
Do you think this makes sense, or do you see problems with proceeding like this? I'm not sure about polling for a single ADC conversion inside a handler; what do you think? A single conversion on one channel should be pretty fast (I run at 80 MHz, and the datasheet mentions a maximum sampling time of 8 µs).
Do I have to disable/enable the ADC (the ADEN bit) between single conversions?
Also, I have to know how long a single conversion lasts to assess whether the solution is worthwhile. I'm confused about the sampling time (the SMP bits). The reference manual states: "This sampling time must be enough for the input voltage source to charge the embedded capacitor to the input voltage level." What is the way to find the right SMP value?
There are no problems with the general idea, LPTIM1 can generate wakeup events through the EXTI controller even in Stop2 mode.
I'm not sure about polling for a single ADC conversion inside a handler, what do you think ?
You might want to put the MCU in Sleep mode in the timer interrupt, and have the ADC trigger an interrupt when the conversion is complete. So disable SLEEPDEEP in the timer interrupt, and enable it in the ADC interrupt.
What is the way to find the right SMP value ?
Empirical method: start with the longest sampling time, and start decreasing it. When the conversion result significantly changes, go one or two steps back.
I am trying to read data from a potentiometer using an Arduino microcontroller (I tried both the Arduino UNO and the Arduino FIO) and interface it with Simulink using the serial communication interface (I tried baud rates ranging from 57600 to 921600).
Here is the Arduino source code:
/*
  AnalogReadSerial
  Reads an analog input on pin 0, prints the result to the serial monitor.
*/
#define ana_pin A0

void setup() {
  analogReference(DEFAULT);
  Serial.begin(57600);
}

// the loop routine runs over and over again forever:
void loop() {
  // read the input on analog pin 0:
  int sensorValue = analogRead(ana_pin);
  // print out the value you read (println, so readings are delimited):
  Serial.println(sensorValue);
  // delay(500); // delay in between reads for stability
}
I interfaced it with the Tera Term software, and there the values change instantaneously between readings corresponding to 3 V and 0 V.
However, when I tried to interface it with a Simulink model using the Instrument Control Toolbox, there is a 10 second lag when the value changes from the ASCII representation of 3 V to 0 V.
The block sample time is 0.01 seconds, and the model configuration parameters are adjusted accordingly (I tried 1 second or more and the delay remains). Also, I was able to record data from another sensor and an LPC1768 development board without any delay.
I also tried it with the Arduino support libraries in Simulink:
And it seems that no data is received; as you can see from Scope1 in the PNG file, the status signal is always 0. I have also attached the hardware implementation properties from the Arduino block in Simulink:
Could you help me understand what is going on and how to solve this issue?
@Patrick Trentin -
I get a delay of 4 seconds when I use baud rates of 230400, 460800 and 921600.
For baud rates of 57600 and 115200, I get a delay of 10 seconds.
Thank you for pointing it out; I did not focus on that previously.
However, since I will be using the sensor in an application that needs an accurate reading every 0.01 s, I don't think I can work with a 4 s delay.
I'm using the lightning drivers with windows IoT core to drive a PWM output. I've attached a scope to the GPIO pin and I set the PWM duty cycle. I do this in an infinite loop. If I put a delay in the loop then the output signal looks fine. If I drop the delay however, the duty cycle (as seen on the scope) starts to flicker between 5 and 10%. Code below, can anyone explain this?
var controllers = await PwmController.GetControllersAsync(LightningPwmProvider.GetPwmProvider());
var pwmController = controllers[1];
pwmController.SetDesiredFrequency(50);
var motor1 = pwmController.OpenPin(5);
motor1.Start();
do
{
    motor1.SetActiveDutyCyclePercentage(0.05);
    Task.Delay(1000).Wait();
} while (true);
I'm just guessing here, but could it be that SetActiveDutyCyclePercentage actually resets the PWM counter, messing up the current cycle of the PWM? If you call it repeatedly, it will mess up a lot of the cycles, versus calling it once per delay. Think of PWM as a counter that flips the output when it hits 0: if you reset the counter (with the SetActiveDutyCyclePercentage call), the current cycle's total number of counts, i.e. its length before it flips the output, will be skewed.