Using a Cortex-M3 (Arduino Due).
Does anyone know if it's possible to get a PWM channel to disable itself after a set number of pulses?
What I want to try out is something like this:
Interrupt 1 fires (timer0),
it sets the PWM delay and how many cycles to run for,
the PWM starts and each pulse increments a counter; once the counter reaches its limit, the PWM disables itself.
What I'm NOT interested in is any loop outside of the PWM configuration doing the counting/disabling.
Just add an interrupt to the timer that generates the PWM (either on "update" or on "compare"; the result will differ slightly, so pick whichever you prefer) and increment the counter there. Once the counter reaches your target value, disable the timer from within the interrupt and that's all.
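For the Due's SAM3X8E, that could look roughly like the sketch below. This is only a sketch assuming PWM channel 0 is already configured elsewhere; the register and macro names come from the Atmel CMSIS headers bundled with the Arduino core, so check them against your setup.

#include <Arduino.h>

volatile uint32_t pulse_count = 0;
volatile uint32_t pulse_limit = 0;

// Start a burst of `pulses` PWM periods on channel 0.
void startBurst(uint32_t pulses) {
  pulse_count = 0;
  pulse_limit = pulses;
  PWM->PWM_IER1 = PWM_IER1_CHID0;    // interrupt at the end of each PWM period
  NVIC_EnableIRQ(PWM_IRQn);
  PWM->PWM_ENA = PWM_ENA_CHID0;      // enable the channel
}

void PWM_Handler(void) {
  uint32_t status = PWM->PWM_ISR1;   // reading ISR1 clears the event flags
  if ((status & PWM_ISR1_CHID0) && (++pulse_count >= pulse_limit)) {
    PWM->PWM_DIS  = PWM_DIS_CHID0;   // the channel disables itself
    PWM->PWM_IDR1 = PWM_IDR1_CHID0;  // no further interrupts until the next burst
  }
}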
There is a better way to handle this if your controller has DMA and a DMA iteration counter.
Configure the DMA channel to transfer a dummy value on each pulse, and configure the DMA to raise an interrupt when the iteration counter reaches the threshold. You can stop the PWM counter inside that ISR. Since the DMA does the pulse counting, there is very little load on the CPU.
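A variant of the same idea on an STM32 with HAL: let the timer's own capture/compare DMA request count the pulses by rewriting the same duty value into CCR once per period, and stop the PWM in the transfer-complete callback. This is a hedged sketch; htim1 and TIM_CHANNEL_1 are placeholders for whatever your CubeMX configuration generates, and the DMA is assumed to be in normal (non-circular) mode.

#include "stm32f4xx_hal.h"   // or the HAL header for your device family

extern TIM_HandleTypeDef htim1;        // TIM1 CH1 configured for PWM with DMA

static uint32_t duty[100];             // one entry per pulse, all the same duty value

void start_n_pulses(uint32_t n, uint32_t compare)
{
  for (uint32_t i = 0; i < n && i < 100; i++)
    duty[i] = compare;
  // With the DMA in normal mode, after n transfers the pulse-finished
  // callback fires and the PWM is stopped there.
  HAL_TIM_PWM_Start_DMA(&htim1, TIM_CHANNEL_1, duty, n);
}

void HAL_TIM_PWM_PulseFinishedCallback(TIM_HandleTypeDef *htim)
{
  if (htim == &htim1)
    HAL_TIM_PWM_Stop_DMA(&htim1, TIM_CHANNEL_1);
}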
I have an STM32F417IG microcontroller and an external 16-bit DAC (TI DAC81404) that is supposed to generate a signal with a sampling rate of 32 kHz. The communication via SPI should not involve any CPU resources, which is why I want to use a timer-triggered DMA to move the data into the SPI data register at 32 kHz and send it to the DAC.
Information about the DAC
Whenever the DAC receives a channel address and the corresponding new 16-bit value, it updates its output voltage to the received value. This is achieved by:
Pulling the CS/NSS/SYNC pin low,
sending the 24-bit (3-byte) message, and
pulling CS back to a high state.
The first 8 bits of the message contain, among other things, the address of the channel the output voltage should be applied to. The remaining 16 bits contain the new value.
Information about STM32
Unfortunately, ST's microcontrollers have a hardware limitation on the NSS pin. When SPI communication starts, the NSS pin is pulled low, and it stays low as long as SPI is enabled (reference manual page 877). That is sadly not the right behaviour for a device that needs NSS to rise after each message. A "solution" would be to toggle the NSS pin manually, as suggested in the manual: "When a master is communicating with SPI slaves which need to be de-selected between transmissions, the NSS pin must be configured as GPIO or another GPIO must be used and toggled by software."
Problem
If DMA is used the ordinary way, the CPU is only involved when starting the process. But toggling NSS twice every 1/32000 s means correspondingly frequent CPU interventions.
My question is whether I have missed something that would allow communication without the CPU.
If not, my goal is to reduce the CPU processing time to a minimum. My plan is to trigger the DMA with a timer, so that every 1/32000 s the SPI data register is filled with the 24-bit frame for the DAC.
The NSS could then be toggled by a timer interrupt.
I am having trouble achieving this because I do not know how to link the timer to the DMA of the SPI using HAL functions. Can anyone help me?
This is a tricky one. It might be difficult to avoid having one interrupt per sample with this combination of DAC and microcontroller.
However, one approach I would look at is to generate the CS signal as a timer output compare output (like PWM). You can use multiple channels of the same timer, or link multiple timers, to create a delay between the CS edge and the DMA trigger. You should allow some room for jitter, because depending on what else is happening the DMA might not respond instantly. This won't hurt your DAC output signal, though, because the DAC only latches the value on the rising edge of chip select (called SYNC in the DAC datasheet), and that edge still comes from your first timer.
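On the specific question of linking the timer to the SPI's DMA with HAL: instead of using the SPI's own TX DMA request, you can route a timer capture/compare DMA request into a DMA stream whose destination is the SPI data register. The hedged sketch below only shows that plumbing; the handle names (htim2, hdma_tim2_ch2, hspi1) are CubeMX-style placeholders, the CS/SYNC waveform is assumed to come from another output-compare channel as described above, and pacing three byte transfers per CS period still needs its own timing, for example via a second linked timer.

#include "stm32f4xx_hal.h"

extern TIM_HandleTypeDef htim2;          // time base plus CC channels
extern DMA_HandleTypeDef hdma_tim2_ch2;  // DMA stream mapped to the TIM2_CH2 request
extern SPI_HandleTypeDef hspi1;          // SPI master, 8-bit frames

static uint8_t dac_frame[3] = { 0x01, 0x00, 0x00 };  // address byte + 16-bit value

void start_timer_paced_spi(void)
{
  // Each CC2 match moves one byte of the frame into the SPI data register.
  // With the DMA stream in circular mode, the 3-byte frame repeats indefinitely.
  HAL_DMA_Start(&hdma_tim2_ch2,
                (uint32_t)dac_frame,
                (uint32_t)&hspi1.Instance->DR,
                sizeof dac_frame);

  __HAL_TIM_ENABLE_DMA(&htim2, TIM_DMA_CC2);   // route CC2 events to the DMA
  __HAL_SPI_ENABLE(&hspi1);                    // SPI clocks out whatever lands in DR

  HAL_TIM_PWM_Start(&htim2, TIM_CHANNEL_1);    // channel 1 drives CS/SYNC
  HAL_TIM_OC_Start(&htim2, TIM_CHANNEL_2);     // channel 2 paces the byte transfers
}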
I'm a little confused about the trigger voltage for rising/falling edge triggered interrupts. My previous understanding was that the trigger thresholds are VIH and VIL, but someone told me VIH and VIL do not apply to edge-triggered interrupts. The thing is, when I observe the waveform with an oscilloscope along with an interrupt counter in Keil, I see the interrupt fire before the voltage reaches VIH (rising edge). I am using a 3.3 V supply; the signal is only around 1 V and the interrupt was still triggered. I checked the STM32 manual and did not find the answer.
Could someone help?
Thanks,
The levels are the same whether you are using an interrupt or polling.
VIH min is the minimum voltage that is guaranteed to be interpreted as a high, and VIL max is the maximum input that is guaranteed to be interpreted as a low.
Any voltage between these two levels could be interpreted either way.
As well as that, there isn't a single changeover voltage, because the inputs have Schmitt triggers with at least 200 mV of hysteresis.
To guarantee that you never trigger a rising edge, you need to stay below VIL max, which at 3.3 V is 1.155 V.
I am building a quadcopter using an STM32F407 Discovery. I was finally able to stabilize it. Now I am trying to use the RC receiver so I can control the quadcopter's movements. Is there a way to read the PWM signal of my RC receiver channels?
My RC receiver also supports PPM, and as I understand it, PPM delivers a packet of duty cycles, but I still don't know how to receive this.
You can use the SPI interface to sample the PPM (or PWM) signal of your RC receiver.
General approach (a rough HAL sketch follows these steps):
Connect the PPM signal to the MISO pin and, simultaneously, to a second pin of the controller. The MOSI, CLK, and CS pins are not needed.
Initialize the SPI interface with an appropriate clock; the signal will be shifted into the controller at this frequency. Try 4 kHz.
Depending on the idle state of the signal, enable either a rising or a falling edge interrupt trigger on the second pin. This will be used to detect incoming frames.
When the interrupt occurs, disable the trigger temporarily and start an SPI reception of several bytes (the outgoing data is ignored and not connected). Depending on the frame length, 8 or 10 bytes should do it; this covers frames up to 20 ms.
After you have received all the bytes, enable the trigger again and repeat for the next frame.
The received data contains the pattern of the PWM/PPM signal.
You should also match the sampling rate and the number of bytes to receive to your RC receiver.
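A hedged HAL-based sketch of that scheme, assuming an STM32F4, SPI1 configured as a receive-only master at the low sampling clock discussed above, and the PPM line wired to both MISO and PA0/EXTI0 (all of these pin and handle names are placeholders for whatever your board actually uses):

#include "stm32f4xx_hal.h"

extern SPI_HandleTypeDef hspi1;

static uint8_t frame[10];                // roughly 20 ms of signal at the chosen clock

void EXTI0_IRQHandler(void)
{
  __HAL_GPIO_EXTI_CLEAR_IT(GPIO_PIN_0);
  HAL_NVIC_DisableIRQ(EXTI0_IRQn);       // stop re-triggering during the capture
  HAL_SPI_Receive_IT(&hspi1, frame, sizeof frame);
}

void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
{
  if (hspi == &hspi1) {
    // frame[] now holds the sampled PPM pattern; decode the pulse widths here.
    HAL_NVIC_EnableIRQ(EXTI0_IRQn);      // arm the trigger for the next frame
  }
}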
I'm using the Lightning drivers with Windows IoT Core to drive a PWM output. I've attached a scope to the GPIO pin and I set the PWM duty cycle in an infinite loop. If I put a delay in the loop, the output signal looks fine. If I drop the delay, however, the duty cycle (as seen on the scope) starts to flicker between 5 and 10%. Code below; can anyone explain this?
var controllers = await PwmController.GetControllersAsync(LightningPwmProvider.GetPwmProvider());
var pwmController = controllers[1];
pwmController.SetDesiredFrequency(50);   // 50 Hz PWM
var motor1 = pwmController.OpenPin(5);
motor1.Start();
do
{
    // repeatedly re-apply the same 5% duty cycle
    motor1.SetActiveDutyCyclePercentage(0.05);
    Task.Delay(1000).Wait();             // removing this delay causes the flicker
} while (true);
I'm just guessing here, but it could be that SetActiveDutyCyclePercentage actually resets the PWM counter, so it corrupts the cycle that is currently in progress. If you call it repeatedly, it corrupts a lot of cycles, compared with calling it once per delay period. Think of PWM as a counter that flips the output when it hits 0: if you reset that counter (with the SetActiveDutyCyclePercentage call), the current cycle's total count, i.e. its length before the output flips, will be skewed.
I have a high-speed 10 MHz clock going to the processor's TIM4 input capture pin (CH3). I would like to verify with input capture that the clock is running at 10 MHz. I coded the input capture module and it works fine for lower frequencies (around 1 kHz or so). Once I push the frequency up into the MHz range, the processor starts to miss interrupts and thus gives me an incorrect frequency. I didn't see anything in the datasheet stating the maximum frequency the input capture can read. I have an external clock of 8 MHz and a core clock of 72 MHz, so I would imagine that I can read a 10 MHz signal. Any ideas?
Take a look at the TIM_ICInitStructure.TIM_ICPrescaler options. Usually it is set to TIM_ICPSC_DIV1 so that an interrupt is generated on every valid transition.
Prescaler values of 1, 2, 4 and 8 are available, which let you effectively reduce the rate of interrupt generation by that factor. For example, for a 10 MHz signal with a prescaler of 8 you'd expect to count a frequency of 10 MHz / 8 = 1.25 MHz.
This is still quite tight for a 72 MHz HCLK, so you'll still need to optimise your IRQ handler carefully.
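A minimal sketch of that prescaler setting, assuming the Standard Peripheral Library (which the TIM_ICInitStructure name suggests) and that TIM4's clock and GPIO are already configured:

TIM_ICInitTypeDef TIM_ICInitStructure;

TIM_ICInitStructure.TIM_Channel     = TIM_Channel_3;
TIM_ICInitStructure.TIM_ICPolarity  = TIM_ICPolarity_Rising;
TIM_ICInitStructure.TIM_ICSelection = TIM_ICSelection_DirectTI;
TIM_ICInitStructure.TIM_ICPrescaler = TIM_ICPSC_DIV8;  // capture every 8th edge
TIM_ICInitStructure.TIM_ICFilter    = 0;
TIM_ICInit(TIM4, &TIM_ICInitStructure);

TIM_ITConfig(TIM4, TIM_IT_CC3, ENABLE);   // one interrupt per 8 input edges
TIM_Cmd(TIM4, ENABLE);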
Looks like you're generating an interrupt request for every rising (or falling) edge of the clock.
If that is indeed the case, then think about this for a second: with a 10 MHz input signal and a 72 MHz core, you're generating an interrupt roughly every 7 CPU cycles. In those 7 cycles you need to save registers to RAM, run the IRQ handler prologue, run the actual code you wrote for the handler, run the epilogue, and restore the registers.
Best case, if you set compiler flags to optimize for speed and you're not doing much processing in the handler, you're looking at tens of cycles to run all these tasks. Since you only have 7 cycles for tens of cycles' worth of work, it's no surprise that you're missing interrupts.
You can't use an interrupt routine at that frequency. You need to feed the 10 MHz signal in as an external clock for the timer, then use the prescaler and the timer period to divide it down to a suitably lower interrupt frequency.
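A hedged SPL sketch of that approach. Note the signal would have to be routed to a pin that can clock the timer directly (TI1/TI2 or ETR; CH3 cannot act as a clock source), so TI2 is assumed here. The counter is clocked by the 10 MHz input and an update interrupt fires once every 10000 edges, i.e. at about 1 kHz:

TIM_TimeBaseInitTypeDef tb;

TIM_TimeBaseStructInit(&tb);
tb.TIM_Period    = 10000 - 1;              // divide the 10 MHz external clock by 10000
tb.TIM_Prescaler = 0;
TIM_TimeBaseInit(TIM4, &tb);

// External clock mode 1: TI2 edges clock the counter directly.
TIM_TIxExternalClockConfig(TIM4, TIM_TIxExternalCLK1Source_TI2,
                           TIM_ICPolarity_Rising, 0);

TIM_ITConfig(TIM4, TIM_IT_Update, ENABLE); // ~1 kHz interrupt instead of 10 MHz
TIM_Cmd(TIM4, ENABLE);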