Formulae to calculate PWM output Timer (Counter settings) for STM32

Need formulae to calculate the Prescaler, Counter Period and the Pulse value of each channel on a timer, given values of
Input clock frequency (APB)
Output Frequency (PWM)
Duty cycle (for each channel)

Can you solve these equations?
Frequency = ClockFreq / ((PSC + 1) * (ARR + 1))
Dutyin% = (CCRx * 100) / ARR for the fast PWM
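As a rough sketch of how those two formulas could be turned into code (the helper and parameter names pwm_settings, timer_clk_hz, pwm_hz and duty_pct are made up for illustration, not a HAL API, and it assumes pwm_hz <= timer_clk_hz):
#include <stdint.h>
/* Sketch only, based on the formulas above:
 *   f_pwm = f_clk / ((PSC + 1) * (ARR + 1))
 *   duty% = CCR * 100 / ARR                  */
static void pwm_settings(uint32_t timer_clk_hz, uint32_t pwm_hz, uint32_t duty_pct,
                         uint16_t *psc, uint16_t *arr, uint16_t *ccr)
{
    uint32_t div   = timer_clk_hz / pwm_hz;   /* total division factor needed */
    uint32_t presc = div / 65536UL + 1;       /* keep ARR + 1 within 16 bits  */
    *psc = (uint16_t)(presc - 1);
    *arr = (uint16_t)(div / presc - 1);
    *ccr = (uint16_t)(duty_pct * (uint32_t)*arr / 100);
}
/* Example: 84 MHz timer clock, 20 kHz PWM, 25 % duty -> PSC = 0, ARR = 4199, CCR = 1049 */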

Related

Frequency Adjusting with STM32 DAC

I used an STM32F407VG to create a 30 kHz sine wave. Timer settings are: Prescaler = 2-1, ARR = 1, and the clock is 84 MHz (the clock which runs the DAC).
I wrote a function called generate_sine():
#include <math.h> // for sin() and M_PI
#define SINE_ARY_SIZE (360)
const int MAX_SINE_DEGERI = 4095;        // max sine value
const double BASLANGIC_NOKTASI = 2047.5; // starting point (DAC mid-scale)
uint32_t sine_ary[SINE_ARY_SIZE];
void generate_sine(){
    for (int i = 0; i < SINE_ARY_SIZE; i++){
        double deger = (sin(i * M_PI * 360 / 180 / SINE_ARY_SIZE) * BASLANGIC_NOKTASI) + BASLANGIC_NOKTASI; // sample value
        sine_ary[i] = (uint32_t)deger;
    }
}
This is the function which creates the sine wave. I used HAL DMA to send the values to the DAC output.
HAL_TIM_Base_Start(&htim2);
generate_sine();
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, sine_ary, SINE_ARY_SIZE, DAC_ALIGN_12B_R);
This is the code I used to do what I want, but I'm having trouble changing the frequency without changing the prescaler or ARR.
So here is my question: can I change the frequency without changing the timer settings? For example, I want to use buttons, and whenever I push a button I want the frequency to change.
The generate_sine function will give you one period of a sine wave with SINE_ARY_SIZE samples.
To increase the frequency you need to make the period shorter (for 2x frequency, you would have half the number of samples per period). So you should calculate the array for smaller SINE_ARY_SIZE (which will fill just part of the original buffer with a shorter sine wave) and also put this smaller value in the HAL_DAC_Start_DMA function.
Decreasing the frequency will require making the array longer.
You should declare the sine_ary with a maximum length that you will need (for lowest frequency). Make sure it fits in RAM.
#include <math.h> // for sin() and M_PI
#define MAXIMUM_ARRAY_LENGTH 360
uint32_t usedArrayLength = 180;
const double amplitude = 2047.5;
uint32_t sine_ary[MAXIMUM_ARRAY_LENGTH];
void generate_sine(){
    for (uint32_t i = 0; i < usedArrayLength; i++){
        double value = (sin(i * M_PI * 2 / usedArrayLength) * amplitude) + amplitude;
        sine_ary[i] = (uint32_t)value;
    }
}
This will have twice the frequency of the original code, because it has only 180 samples per period, compared to 360.
Start it using
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, sine_ary, usedArrayLength, DAC_ALIGN_12B_R);
To change the frequency: stop the DAC, change the value of usedArrayLength (a smaller value means a higher frequency; it must be less than or equal to MAXIMUM_ARRAY_LENGTH), call the generate_sine function again, and start the DAC again with the same HAL_DAC_Start_DMA call (which now uses the new usedArrayLength).
The frequency will be: Clock / ((PSC + 1) * (ARR + 1) * usedArrayLength), consistent with the timer formula above.
Also, you should use uint16_t for the array (values are from 0 to 4095, since the DAC is 12-bit) and the DMA should be set to half-word (2 bytes per value).
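A minimal sketch of such a frequency-change routine (the function name change_sine_frequency and the parameter newLength are made up; hdac, usedArrayLength, generate_sine and sine_ary come from the code above):
void change_sine_frequency(uint32_t newLength)
{
    if (newLength > MAXIMUM_ARRAY_LENGTH)
        newLength = MAXIMUM_ARRAY_LENGTH;       /* clamp to the allocated buffer */

    HAL_DAC_Stop_DMA(&hdac, DAC_CHANNEL_1);     /* stop the running transfer */
    usedArrayLength = newLength;                /* fewer samples -> higher frequency */
    generate_sine();                            /* rebuild one period of the sine */
    HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1,
                      sine_ary, usedArrayLength, DAC_ALIGN_12B_R);
}
This could be called from a button interrupt or from the main loop whenever the button press is detected.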

Calculate acceleration from data points

I have a servo motor, and I would like to make it follow a "motion pattern" as closely as possible, using the same value for acceleration and deceleration.
The attached picture illustrates the "motion pattern" (Y = velocity, X = time):
motion pattern:
accelerates 0m/s to 0.100m/s.
constant velocity 0.100m/s for 4 sec.
decelerates to negative ?m/s.
accelerates to 0m/s, and motor position = 0.
How can I calculate the acceleration and deceleration?
What I have tried so far:
Time = total time - constant-velocity time = 10 - 4 = 6 s.
Distance = total distance - constant-velocity distance = 1 - 0.4 = 0.6 m.
acceleration = 2 * distance / time^2 = 2 * 0.6 / 6^2 = 0.0333 m/s^2.
But with this acceleration it overshoots in the negative direction by 500 mm.
Take a look at the PLCopen motion function blocks, for example the MC_MoveRelative and the MC_MoveContinuousRelative blocks (see the Beckhoff documentation).
As Sergey already stated, you can use those blocks to create a motion profile by entering all the parameters you need and integrating the blocks in a step chain.

STM32 HAL Nucleo F446RE Quadrature Encoder

I have a problem with the quadrature encoder mode on timer TIM3 of my STM32F446RE / NUCLEO-F446RE:
1. TIM3 counts on every rising edge of the first signal. The CNT register counts up, I read the value at 1 Hz and then set the register back to 0. When I look at the oscilloscope, the frequency is half as high as the value read from the CNT register (1 Hz readout). Why?
2. TIM3 counts on both edges of the first signal. The CNT register output (1 Hz readout) is completely wrong.
My configuration is:
GPIO_InitTypeDef sInitEncoderPin1;
sInitEncoderPin1.Pin = pin1Encoder.pin; // A GPIO_PIN_6
sInitEncoderPin1.Mode = GPIO_MODE_AF_PP;
sInitEncoderPin1.Pull = GPIO_PULLUP;
sInitEncoderPin1.Speed = GPIO_SPEED_HIGH;
sInitEncoderPin1.Alternate = altFunctionEncoder; // GPIO_AF2_TIM3
GPIO_InitTypeDef sInitEncoderPin2;
sInitEncoderPin2.Pin = pin2Encoder.pin; // A GPIO_PIN_7
sInitEncoderPin2.Mode = GPIO_MODE_AF_PP;
sInitEncoderPin2.Pull = GPIO_PULLUP;
sInitEncoderPin2.Speed = GPIO_SPEED_HIGH;
sInitEncoderPin2.Alternate = altFunctionEncoder; // GPIO_AF2_TIM3
HAL_GPIO_Init(GPIOA, &sInitEncoderPin1);
HAL_GPIO_Init(GPIOA, &sInitEncoderPin2);
encoderTimer.Init.Period = 0xffff;
encoderTimer.Init.Prescaler = 0;
encoderTimer.Init.CounterMode = TIM_COUNTERMODE_UP;
encoderTimer.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
encoderTimer.Init.RepetitionCounter = 0;
HAL_NVIC_SetPriorityGrouping(NVIC_PRIORITYGROUP_4);
HAL_NVIC_SetPriority(SysTick_IRQn, 0, 1);
encoder.EncoderMode = TIM_ENCODERMODE_TI1;
encoder.IC1Filter = 0x0f;
encoder.IC1Polarity = TIM_INPUTCHANNELPOLARITY_RISING; // TIM_INPUTCHANNELPOLARITY_BOTHEDGE
encoder.IC1Prescaler = TIM_ICPSC_DIV1;
encoder.IC1Selection = TIM_ICSELECTION_DIRECTTI;
encoder.IC2Filter = 0x0f;
encoder.IC2Polarity = TIM_INPUTCHANNELPOLARITY_RISING;
encoder.IC2Prescaler = TIM_ICPSC_DIV1;
encoder.IC2Selection = TIM_ICSELECTION_DIRECTTI;
HAL_TIM_Encoder_Init(&encoderTimer, &encoder);
HAL_TIM_Encoder_Start_IT(&encoderTimer, TIM_CHANNEL_ALL);
The oscilloscope screenshot shows a frequency of about 416 Hz. The values shown in the first shell output are (very roughly!) twice as high (as the question points out already).
This appears (nearly...) correct to me since the shown configuration
encoder.EncoderMode = TIM_ENCODERMODE_TI1;
selects the "X2 resolution encoder mode", which counts 2 CNT increments per signal period.
In an application note on timer overview (Sec. 4.3.4 / Fig. 7) there is an illustrative diagram of how the encoder mode works in detail.
The second screenshot results from an incorrect TIM3 configuration:
The encoder mode (TIM_ENCODERMODE_TI1) assumes that both channels trigger only on edges of one polarity, in an alternating way (see the AN link above). If one of the two channels triggers twice as many events due to the configuration encoder.IC1Polarity = TIM_INPUTCHANNELPOLARITY_BOTHEDGE, the counter will only count up one position and then "recognize" a "reversal" event (= change of direction).
Keeping in mind that 65535u = 0xFFFF = -1 (when interpreted as a signed 16-bit value), the second screenshot only shows the values -1, 0, +1, which fits perfectly with this explanation.
The question remains why the first screenshot shows (reproducible) measurements between 800 and 822.
I assume that
- the physical source of the encoder signal runs at a constant pace,
- the 1 Hz timer that triggers the shell output is independent of TIM3, and
- it has been started before the encoder timer (i.e., above the shown code sample).
This may explain why the first two values look like nonsense (0: TIM3 has not been started yet. 545: TIM3 has been started during the shell output timer period).
Discarding the first two measurement samples, the average and standard deviation, respectively, of the measured signal frequency are
808.9091 +/- 0.5950 [X2 increments per second]
404.4545 +/- 0.2975 [Hz]
which corresponds to a period of
2.4331 +/- 0.003 [ms].
Hence, the measured frequency is too low by about 11 Hz, i.e., measured period too high by nearly 30 µs, and this error is clearly beyond the statistical noise.
The question gives a hint where this error might come from:
The CNT register counts up and I read the value with 1 Hz and then I set the register to 0.
1. Whenever the 1 Hz "polling timer" expires, it triggers an interrupt (or a logical event in polling software).
2. Processing of this interrupt/event may be delayed a little, depending on other software (IRQ: times when interrupts are deactivated elsewhere in the software; polling: loop duration until the event is polled).
3. Software reads the CNT value.
4. Software resets the CNT value to zero, discarding any further increments since the CNT value was read.
5. TIM3 continues counting (from zero).
This hints that software needs 30 µs between (3.) and (4.), which would be quite a lot of time on an STM32F4.
Edit: I just re-checked the oscilloscope screenshot. The error is visible, but I believe it is smaller than I originally assumed (from counting edges in the picture).
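As a side note, one way to avoid losing counts between steps (3.) and (4.) is to never reset CNT at all and instead work with signed differences of successive readings. A minimal sketch (prev_cnt and encoder_delta are made-up names; __HAL_TIM_GET_COUNTER is the standard HAL macro, encoderTimer comes from the question's code):
static uint16_t prev_cnt = 0;

int16_t encoder_delta(void)
{
    uint16_t cnt   = __HAL_TIM_GET_COUNTER(&encoderTimer); /* read, never write CNT */
    int16_t  delta = (int16_t)(cnt - prev_cnt);            /* 0xFFFF wraps to -1, as above */
    prev_cnt = cnt;
    return delta;                                          /* increments since last call */
}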

Unity predict endpoint from current velocity

My rocket's rigidbody velocity is Vector2(0, 100) when I call a function. How can I calculate the world coordinate (endpoint) at which the velocity reaches 0?
Gravity should be included in the formula.
Thanks!
It sounds like you want the integral of the velocity function, which gives the total distance travelled with respect to time.
Your velocity is going to be v = 100 - (t * g), where t is time and g is gravity. Solving for time gives t = (100 - v)/g, so at v = 0 you get t = 100/g. You should therefore reach zero velocity at t = 100/g (assuming consistent units).
The integral of your velocity will give you distance traveled. An integral calculator is here: http://www.integral-calculator.com/
The integral of your velocity function is 100t - (g*t^2)/2.
From zero to a particular time t, you can just plug in values. For example, if for a particular gravity you reach zero velocity at t = 10 seconds, you will have travelled (100 * 10) - ((g * 10^2)/2). So for gravity g = 9, you would get 1000 - (9 * 100)/2 = 550 units.
Edit: To be clear - first you want to calculate how long it takes to get to velocity zero at a particular starting velocity and gravity:
t = vStart/g
Then plug that time value into the integral function above:
distance = (vStart * t) - ((g * t^2)/2)
(or clearly you could turn it into one function by replacing t with vStart/g in the second function, but if I were coding I would definitely calculate them in two steps to provide a sanity check in case my units were wrong)
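As a plain numeric sketch of that two-step calculation (written here in C rather than Unity C#; predict_apex and the parameter names are made up, and drag is ignored):
#include <stdio.h>

/* Start height plus the distance travelled until the velocity reaches 0. */
float predict_apex(float startY, float vStart, float g)
{
    float t = vStart / g;                          /* time to reach zero velocity */
    return startY + vStart * t - 0.5f * g * t * t; /* integral of v from 0 to t   */
}

int main(void)
{
    /* vStart = 100, g = 9: stops after ~11.1 s, ~555.6 units above the start */
    printf("apex y = %f\n", predict_apex(0.0f, 100.0f, 9.0f));
    return 0;
}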

MIDI tick to millisecond?

I realize that there are many questions here concerning converting MIDI ticks to milliseconds (e.g.: How to convert midi timeline into the actual timeline that should be played, Midi Ticks to Actual PlayBack Seconds !!! ( Midi Music), Midi timestamp in seconds) and I have looked at them all and tried to implement the suggestions, but I am still not getting it.
(Did I mention I am a little "math phobic"?)
Can anyone help me work through a practical example? I am using the BASS library from un4seen. I have all the data I need, I just don't trust my calculations.
Bass Methods
Tick
// position of midi stream
uint64_t tick = BASS_ChannelGetPosition(midiFileStream, BASS_POS_MIDI_TICK);
PPQN
//The Pulses Per Quarter Note (or ticks per beat) value of a MIDI stream.
float ppqn;
BASS_ChannelGetAttribute(handle, BASS_ATTRIB_MIDI_PPQN, &ppqn);
Tempo
//tempo in microseconds per quarter note.
uint32_t tempo = BASS_MIDI_StreamGetEvent( midiFileStream, -1, MIDI_EVENT_TEMPO);
My Attempt at Calculating MS value for tick:
float currentMilliseconds = tick * tempo / (ppqn * 1000);
The value I get appears correct but I don't have any confidence in it since I am not quite understanding the formula.
printf("tick %llu\n",tick);
printf("ppqn %f\n",ppqn);
printf("tempo %u\n",tempo);
printf("currentMilliseconds %f \n", currentMilliseconds);
Example output:
tick 479
ppqn 24.000000
tempo 599999
currentMilliseconds 11974.980469
Update
My confusion continues but based on this blog post I think I have the code right – at least the output seems accurate. Conversely, the answer provided by #Strikeskids below yields different results. Maybe I have an order of operations problem in there?
float kMillisecondsPerQuarterNote = tempo / 1000.0f;
float kMillisecondsPerTick = kMillisecondsPerQuarterNote / ppqn;
float deltaTimeInMilliseconds = tick * kMillisecondsPerTick;
printf("deltaTimeInMilliseconds %f \n", deltaTimeInMilliseconds);
versus the formula from the answer below:
float currentMillis = tick * 60000.0f / ppqn / tempo;
printf("currentMillis %f \n", currentMillis);
Output:
deltaTimeInMilliseconds 11049.982422
currentMillis 1.841670
Tempo in this formula is in beats per minute. Because you want to end up with a time, it goes in the denominator of the fraction:
currentTime = currentTick * (beats / tick) * (minutes / beat) * (millis / minute)
millis = tick * (1/ppqn) * (1/tempo) * (1000*60)
To use integer arithmetic efficiently, do
currentMillis = tick * 60000 / ppqn / tempo
Note that the MIDI_EVENT_TEMPO value from BASS is in microseconds per quarter note, not beats per minute (BPM = 60,000,000 / tempo_in_microseconds), so it has to be converted to BPM first; this is why plugging the raw tempo value into this formula gives a result different from the microseconds-based calculation above.
This works:
float kMillisecondsPerQuarterNote = tempo / 1000.0f;
float kMillisecondsPerTick = kMillisecondsPerQuarterNote / ppqn;
float deltaTimeInMilliseconds = tick * kMillisecondsPerTick;
printf("deltaTimeInMilliseconds %f \n", deltaTimeInMilliseconds);