Raspberry Pi Lightning DMA PWM flicker

I'm using the Lightning drivers with Windows IoT Core to drive a PWM output. I've attached a scope to the GPIO pin and I set the PWM duty cycle in an infinite loop. If I put a delay in the loop then the output signal looks fine. If I drop the delay, however, the duty cycle (as seen on the scope) starts to flicker between 5 and 10%. Code below; can anyone explain this?
var controllers = await PwmController.GetControllersAsync(LightningPwmProvider.GetPwmProvider());
var pwmController = controllers[1];
pwmController.SetDesiredFrequency(50);
var motor1 = pwmController.OpenPin(5);
motor1.Start();
do
{
    motor1.SetActiveDutyCyclePercentage(0.05);
    Task.Delay(1000).Wait();
} while (true);

I'm just guessing here, but it could be that SetActiveDutyCyclePercentage actually resets the PWM counter, so calling it mid-cycle corrupts the cycle in progress. If you call it repeatedly you corrupt a lot of the cycles, whereas with the delay in place you only touch one cycle per second. Think of PWM as a counter that flips the output when it hits 0: if you reset that counter (with the SetActiveDutyCyclePercentage call), the length of the current cycle, i.e. the total number of counts before the output flips, gets skewed.
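To make that concrete, here is a toy model in C of PWM as a down-counter. This is not the Lightning driver's actual implementation; PERIOD_COUNTS, DUTY_COUNTS and pwm_tick are made-up names used only to illustrate the reasoning.

#include <stdbool.h>

#define PERIOD_COUNTS 20   /* ticks per PWM cycle, e.g. 20 ms at 50 Hz */
#define DUTY_COUNTS    1   /* output stays high for 1 tick = 5% duty   */

static int counter = PERIOD_COUNTS;

/* Called once per tick: high at the start of each cycle, low for the rest. */
static bool pwm_tick(void)
{
    bool output = (PERIOD_COUNTS - counter) < DUTY_COUNTS;
    if (--counter == 0)
        counter = PERIOD_COUNTS;   /* normal reload at the end of a cycle */
    return output;
}

/* If SetActiveDutyCyclePercentage also performs "counter = PERIOD_COUNTS;",
 * calling it in a tight loop restarts the cycle before it completes, so the
 * high portion is emitted more often than once per PERIOD_COUNTS ticks and
 * the measured duty cycle drifts above the requested 5%. */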

Related

HAL_ADC_PollForConversion - what exactly is it for?

I've been tinkering with an STM32 (F103RB) for a couple of weeks now and one thing I don't get is the purpose of the HAL_ADC_PollForConversion function. I mean, I don't see any impact of this function on the ADC readings. Here's my code example:
Version 1 (with PollForConversion on)
while (1)
{
    HAL_ADC_Start(&hadc1);
    HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY);
    uint32_t value = HAL_ADC_GetValue(&hadc1);
    float voltage = 3.3f * value / 4096.0f;
    printf("ADC = %lu (%.3f V)\n", value, voltage); // send the value via UART
    HAL_Delay(250);
}
Version 2 (with PollForConversion off)
while (1)
{
    HAL_ADC_Start(&hadc1);
    //HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY);
    uint32_t value = HAL_ADC_GetValue(&hadc1);
    float voltage = 3.3f * value / 4096.0f;
    printf("ADC = %lu (%.3f V)\n", value, voltage); // send the value via UART
    HAL_Delay(250);
}
As far as my observations go, it doesn't really matter whether I use PollForConversion or not - the readings on the UART monitor look the same (although I am not 100% certain they actually are the same). Continuous mode is disabled. What am I missing here?
After you start the ADC with HAL_ADC_Start(), the hardware starts a conversion. This conversion takes some time; the exact time depends on your ADC configuration in CubeMX - the larger the ADC prescaler and the sampling cycle count, the longer it takes.
After the conversion is completed, the EOC (end of conversion) flag in the ADC hardware is set and the measured value is placed in the data register. You can read that value with the HAL_ADC_GetValue() function. But if you read it before the end of conversion, you will probably get corrupted data or an old value from a previous measurement.
That's why you should always wait until the EOC flag is set - and that is exactly what HAL_ADC_PollForConversion does.
In your example without polling, you probably read the value from the previous measurement before the current one has finished. Or maybe the HAL functions that interact with the hardware are slow enough that the EOC flag happens to be set by the time you read anyway. If you increase the number of ADC sampling cycles in CubeMX to the maximum and try again, you should see that you get the value from the previous measurement.
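A minimal sketch of the intended sequence, using the same HAL calls as in the question and additionally checking the return value of the poll (error handling otherwise trimmed):

HAL_ADC_Start(&hadc1);                                      // trigger a single conversion
if (HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY) == HAL_OK)
{
    // EOC is set, so the data register now holds the fresh sample
    uint32_t value = HAL_ADC_GetValue(&hadc1);
    float voltage = 3.3f * value / 4096.0f;
    printf("ADC = %lu (%.3f V)\n", value, voltage);
}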

STM32F411 stuck updating PWM duty cycle when compiler optimisation enabled

I've got a strange issue I can't seem to get my head around. I'm using an STM32F411 board and STM32CubeIDE (Eclipse based). I want to use PWM, so I've used CubeMX to configure TIM4 in PWM output mode, with a prescaler and load value that are appropriate for the PWM frequency/pulse widths I want. I've also enabled the global interrupt for TIM4 as I want to use the HAL_TIM_PWM_PulseFinishedCallback function later on.
Before the main loop, I initialise TIM4 and all 4 channels like so:
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_1); //Start up PWM
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_2); //Start up PWM
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_3); //Start up PWM
HAL_TIM_PWM_Start_IT(&htim4, TIM_CHANNEL_4); //Start up PWM
Then I just set the PWM pulse widths manually:
htim4.Instance->CCR1 = 100;
htim4.Instance->CCR2 = 100;
htim4.Instance->CCR3 = 100;
htim4.Instance->CCR4 = 100;
For some reason, however, when I turn on compiler optimisation ('Optimise for speed', -Ofast), the program seems to get stuck while debugging after the final line, where CCR4 gets set, and can't progress.
This only happens when compiler optimisation for speed is enabled. By default it was set to optimise for debug and it was fine that way.
Optimizing for anything but debug can confuse the debugger.
Things that you can try (you didn't specify your toolchain; I'm assuming it's something Eclipse/GCC based):
Enable instruction stepping to step through the assembly instructions one at a time. It should work even when debugging by source lines doesn't.
Set a breakpoint two or three lines further down in the code, and let the debugger run through the critical part.
Hit the pause button just to see where it got stuck. It might not be available if you don't have an active breakpoint somewhere in the code.

cortex_m3 disable pwm after n pulses

Using a Cortex-M3 (Arduino Due) - does anyone know if it's possible to get a PWM channel to disable itself after a set number of pulses?
What I want to try out is something like this:
Interrupt 1 fires (timer0),
it sets the delay of the PWM and how many cycles to go for,
the PWM starts and each pulse increments a counter; once the counter reaches its limit the PWM disables itself.
What I'm NOT interested in is any loop outside of the PWM settings doing the counting/disabling.
Just add an interrupt to the timer which does the PWM (either from "update" or from "compare" - the result will vary slightly, so pick the one you prefer) and increment the counter there. Once the counter reaches your target value, disable the timer from the interrupt and that's all.
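A rough sketch of that idea in C; pwm_period_interrupt and pwm_channel_disable are hypothetical names standing in for whatever handler and stop call your timer/PWM library on the Due actually provides:

#include <stdint.h>

#define TARGET_PULSES 50               /* stop after this many PWM periods */

extern void pwm_channel_disable(void); /* hypothetical hardware-specific stop */

static volatile uint32_t pulse_count = 0;

/* Hooked to the timer's "update" (or "compare") event, so it runs once per pulse. */
void pwm_period_interrupt(void)
{
    if (++pulse_count >= TARGET_PULSES)
    {
        pwm_channel_disable();         /* turn the PWM off from inside the ISR */
        pulse_count = 0;               /* ready for the next burst */
    }
}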
There is a better way to handle this if your controller has DMA and a DMA iteration counter.
Configure the DMA channel to transmit a dummy byte upon each pulse, and configure the DMA to raise an interrupt when the iteration counter reaches the threshold. You can stop the PWM counter inside that ISR. Since the DMA does the pulse counting, there is very little load on the CPU.
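Purely as a hypothetical sketch of that layout (every function below is a made-up name; the real calls depend on your controller's DMA and PWM drivers):

#include <stdint.h>

#define TARGET_PULSES 50

extern void dma_setup_dummy_transfer_per_pwm_pulse(void); /* hypothetical */
extern void dma_set_transfer_count(uint32_t count);       /* hypothetical */
extern void pwm_stop(void);                               /* hypothetical */

void setup_pulse_limited_pwm(void)
{
    dma_setup_dummy_transfer_per_pwm_pulse();  /* one dummy transfer per PWM pulse   */
    dma_set_transfer_count(TARGET_PULSES);     /* interrupt after N transfers finish */
}

/* Fired by the DMA controller once TARGET_PULSES transfers have completed. */
void dma_transfer_complete_interrupt(void)
{
    pwm_stop();   /* the CPU only wakes up here, not on every pulse */
}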

Is it possible to vary the ON time of a digital output pin of a PLC via structured text?

I am trying to simulate a PWM signal output from a digital only PLC. So is it possible to define the digital output pin ON and OFF time in each cycle?
Thanks in advance.
Most PLCs with transistor outputs have a pulse generator that you can use. For example, on a Schneider PLC you can use the PTO (pulse train output). If, say, you were using the signal to move a motor, you can define what speed corresponds to the pulse frequency, and then in the code you can command a velocity to move at:
VAR
    MC_MoveVelocity_PTO_0: MC_MoveVelocity_PTO;
    Powerd: MC_Power_PTO;
    mcPositive: MC_DIRECTION;
END_VAR

// enable pulse train output
Powerd(Axis:=PTO_0, Enable:=TRUE, DriveReady:=TRUE);
// command
MC_MoveVelocity_PTO_0(Axis:=PTO_0, Execute:=%IX1.1, ContinuousUpdate:=TRUE, Velocity:=100, Acceleration:=1000, Deceleration:=1000, Direction:=mcPositive);
This code should run every cycle so you don't need to update the ON and OFF time each cycle. You could adjust the Velocity it runs at each cycle if you really wanted to.
Or if you wanted to get really basic you could use a timer to turn ON and OFF your output.
VAR
    PWM_Timer: BLINK;
    DigitalOutput: BOOL;
    offTime: TIME := t#10ms;
    onTime: TIME := t#10ms;
END_VAR

PWM_Timer(ENABLE:=TRUE, TIMELOW:=offTime, TIMEHIGH:=onTime, OUT=>DigitalOutput);
Here the BLINK timer specifies the ON and OFF times, which you can adjust. You do not need to turn the output ON and OFF yourself each cycle; the PLC will take care of that for you.
If you wanted to play around with turning the output ON/OFF each cycle to see what it would do, you could do something like:
IF DigitalOutput THEN
    DigitalOutput := FALSE;
ELSE
    DigitalOutput := TRUE;
END_IF;
So when the PLC goes through its scan it will see the output is off, so it will turn it on. On the next cycle it will see that it is on, so it will turn it off.
Hope this helps.

Delay interfacing Arduino and Simulink

I am trying to read data from a potentiometer using an Arduino microcontroller (I tried both the Arduino UNO and the Arduino FIO) and to interface it to Simulink over a serial connection (I tried baud rates ranging from 57600 to 921600).
Here is the Arduino source code:
/*
  AnalogReadSerial
  Reads an analog input on pin 0, prints the result to the serial monitor.
*/
#define ana_pin A0

void setup() {
  analogReference(DEFAULT);
  Serial.begin(57600);
}

// the loop routine runs over and over again forever:
void loop() {
  // read the input on analog pin 0:
  int sensorValue = analogRead(ana_pin);
  // print out the value you read:
  Serial.print(sensorValue);
  // delay(500); // delay in between reads for stability
}
I interfaced it with the Tera Term terminal software and the values change instantaneously between the readings corresponding to 3 V and 0 V.
However, when I try to interface it with a Simulink model using the Instrument Control Toolbox, there is a 10 second lag when the value changes from the ASCII representation of 3 V to that of 0 V.
The block sample time is 0.01 seconds and the model configuration parameters are adjusted accordingly (I tried it at 1 second or more and the delay still remains). Also, I was able to record data from another sensor and an LPC1768 development board without any delay.
I also tried it with the Arduino support libraries in Simulink, and it seems that no data is received: as you can see from Scope1 in the attached image, the status signal is always 0. I have also attached the hardware implementation properties from the Arduino block in Simulink.
Could you help me understand what is going on and how to solve this issue?
@Patrick Trentin -
I get a delay of 4 seconds when I use baud rates of 230400, 460800 and 921600.
For baud rates of 57600 and 115200 I get a delay of 10 seconds.
Thank you for pointing it out; I had not focused on that previously.
However, since I will be using the sensor in an application which needs an accurate reading every 0.01 s, I don't think I can work with a 4 s delay.