SysTick timer on Cortex-M4: What is its prescaler? - stm32

I am a little bit confused about the Cortex system timer on the Cortex-M4 CPU.
Let's say we have the following configuration:
16 MHz HSI as the clock source;
AHB prescaler set to 1 (i.e. HSI divided by 1).
This means the main system bus (AHB) runs at 16 000 000 ticks per second. As far as I know, the system timer (SysTick) runs at the speed of the main system bus, so it should count up to 16 000 000 each second. That seems obvious, but the clock tree diagram in the STM32F407xx reference manual shows the system timer running at (main system bus speed) / 8.
Is that true? I have configured the system timer to generate an interrupt every 16 000 000 ticks. Based on the configuration above (i.e. HSI as the clock source and AHB prescaler = 1) it generates an interrupt each second, which toggles an LED on and off. I measured the time between "blinks" and it is exactly 1 s. If the /8 prescaler were applied, the LED would toggle every 8 s instead.
Below you can find the code that configures the system clock source and the system timer.
HSI frequency = 16 MHz
SYSTICKS_COUNT = 16 000 000
void system_clock_init(void)
{
    /* enable the 16 MHz internal oscillator and wait until it is stable */
    LL_RCC_HSI_Enable();
    while (LL_RCC_HSI_IsReady() != 1) {
        ;
    }

    /* 16 MHz needs no flash wait states */
    LL_FLASH_SetLatency(LL_FLASH_LATENCY_0);

    /* AHB prescaler = 1, then switch SYSCLK to HSI and wait for the switch */
    LL_RCC_SetAHBPrescaler(LL_RCC_SYSCLK_DIV_1);
    LL_RCC_SetSysClkSource(LL_RCC_SYS_CLKSOURCE_HSI);
    while (LL_RCC_GetSysClkSource() != LL_RCC_SYS_CLKSOURCE_STATUS_HSI) {
        ;
    }

    /* both APB buses run at full AHB speed */
    LL_RCC_SetAPB1Prescaler(LL_RCC_APB1_DIV_1);
    LL_RCC_SetAPB2Prescaler(LL_RCC_APB2_DIV_1);
}

void system_clock_systick_config_init(void)
{
    /* reload value of 16 000 000 -> one interrupt per second at 16 MHz */
    SysTick_Config(SYSTICKS_COUNT);
}
void SysTick_Handler(void)
{
    led_toggle(LED_PIN_BOARD_GREEN);
}

The reference manual states at the end of section "6.2 Clocks":
The RCC feeds the external clock of the Cortex System Timer (SysTick) with the AHB clock
(HCLK) divided by 8. The SysTick can work either with this clock or with the Cortex clock
(HCLK), configurable in the SysTick control and status register.
According to the STM32 Cortex-M4 programming manual, bit 2 of the SysTick control register (STK_CTRL) selects the clock source:
Bit 2 CLKSOURCE: Clock source selection
0: AHB/8
1: Processor clock (AHB)
According to the same manual, the reset value is 0 (AHB/8), so something in your code must be setting this bit to 1.
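That something is SysTick_Config() itself: the CMSIS implementation in core_cm4.h selects the processor clock when it enables the timer. Abridged from CMSIS (check your own header for the exact version):

__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
    if ((ticks - 1UL) > SysTick_LOAD_RELOAD_Msk) {
        return 1UL;                               /* reload value impossible */
    }
    SysTick->LOAD = (uint32_t)(ticks - 1UL);      /* set reload register */
    SysTick->VAL  = 0UL;                          /* reset the current counter value */
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |  /* CLKSOURCE = 1: processor clock (HCLK), not HCLK/8 */
                    SysTick_CTRL_TICKINT_Msk   |
                    SysTick_CTRL_ENABLE_Msk;
    return 0UL;
}

That is why you measure exactly 1 s. If you want the HCLK/8 source instead, clear the bit after the call with SysTick->CTRL &= ~SysTick_CTRL_CLKSOURCE_Msk; and scale the reload value down accordingly.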

Related

STM32 generate 22 MHz clock on GPIO out from SystemCoreClock 110 MHz

I want to generate a clock for a PCA9959 LED driver with my STM32L552. The LED driver needs an external clock at 20 MHz (+/- 15%). I'm trying to generate a 22 MHz clock on port PA8 of the STM32L552. I managed to generate a PWM on PA8, but I can't reach a frequency of ~22 MHz; I get a maximum of 8 MHz.
Here are the PWM parameters:
I'm not sure I filled in the PWM parameters correctly. Normally, with these settings, I would expect a 22 MHz PWM with a 20% duty cycle:
PWM frequency (MHz) = SystemCoreClock (MHz) / Prescaler => 22 MHz = 110 MHz / 5
My clock configuration:
Thanks for your help.
The easiest way to output a high-speed clock like this is with the MCO peripheral rather than a timer. Fortunately for you, the MCO pin is PA8. Perhaps the person who designed your board knew this and intended you to use MCO. Read the reference manual to see how.
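For illustration, a minimal sketch of the MCO route, assuming the STM32L5 HAL (HAL_RCC_MCOConfig configures PA8 itself; verify the macro names against your HAL version):

void mco_clock_out_init(void)
{
    /* MCO dividers are powers of two (1, 2, 4, 8, 16), so an exact 22 MHz
     * cannot be derived from 110 MHz; e.g. retune the PLL for an 88 MHz
     * SYSCLK and divide by 4 to get 22 MHz on PA8. */
    __HAL_RCC_GPIOA_CLK_ENABLE();
    HAL_RCC_MCOConfig(RCC_MCO1, RCC_MCO1SOURCE_SYSCLK, RCC_MCODIV_4);
}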
If you do want to use a timer to do 22 MHz, then as you have correctly identified you cannot get a 50% duty cycle on your PWM. I would recommend starting with 40% or 60%, i.e. an output-compare value of 2-out-of-5 or 3-out-of-5, not 1 as you have above.
There is no detail in the PCA9959 datasheet about the required mark-space ratio of the clock, but I would guess anything other than 50% could be a problem. You would be better off dividing the clock by an even number: either divide 110 MHz by 6 and output 18.33 MHz, or slow your core down a bit and divide by 4 (reduce the N parameter of your PLL), as sketched below.
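For the divide-by-6 option the timer setup boils down to two register values; a register-level sketch (the timer instance and channel are assumptions):

/* timer clocked at 110 MHz: period = ARR + 1 = 6 -> 110 MHz / 6 = 18.33 MHz */
TIM1->PSC  = 0;  /* no prescaling */
TIM1->ARR  = 5;  /* 6-count period */
TIM1->CCR1 = 3;  /* high for 3 of 6 counts -> exact 50% duty (PWM mode 1) */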
Whether you use MCO or PWM, don't forget to set the GPIO pin to the fastest slew rate available. Maybe the 8 MHz you are measuring is the result of aliasing a faster clock that has gone through the wrong GPIO mode. You could test this using a scope with at least 100 MHz bandwidth.

STM32 Toggle PIN when target halted

I am using the STM32F7 series of microcontrollers, and it would be most helpful to have some GPIO change value (toggle, pulse, high-Z, ...) whenever the core is halted by the debugger attached to the JTAG interface. Is anyone aware of such a feature?
There are the DBGMCU registers, which can selectively stop certain peripherals (mostly timers) when the core is stopped.
The idea is to somehow make a timer output a low-level signal while it's running and a high-level signal when it isn't. A single timer can't do that, but it's possible with two timers in a master-slave configuration.
Configure TIM3 to output a PWM signal with a very high duty cycle: low for two cycles, then high for the rest of its 65536-cycle period. Slave it to TIM2, which runs with a period of 2 cycles and resets TIM3 on counter overflow. Thus TIM3 is forced permanently low as long as TIM2 is running, but it outputs a 99.997% high PWM signal when TIM2 is stopped. Finally, TIM2 is configured to stop when the core is halted by the debugger, while TIM3 keeps running.
RCC->AHBENR |= RCC_AHBENR_GPIOBEN;  // enable peripheral clocks, that might be different on your board
RCC->APB1ENR |= RCC_APB1ENR_TIM2EN | RCC_APB1ENR_TIM3EN;

// consult your datasheet for the right AF value
GPIOB->AFR[0] = (GPIOB->AFR[0] & ~GPIO_AFRL_AFRL0) | 2;                    // set PB0 to Alternate Function 2, TIM3
GPIOB->MODER = (GPIOB->MODER & ~GPIO_MODER_MODER0) | GPIO_MODER_MODER0_1;  // set PB0 to Alternate Function mode

DBGMCU->APB1FZ |= DBGMCU_APB1_FZ_DBG_TIM2_STOP;   // stop TIM2 when the core is stopped
DBGMCU->APB1FZ &= ~DBGMCU_APB1_FZ_DBG_TIM3_STOP;  // but don't stop TIM3

TIM2->ARR = 1;              // master timer period: 2 cycles
TIM2->CR2 = TIM_CR2_MMS_1;  // master mode selection MMS=010, update event
TIM2->CR1 = TIM_CR1_CEN;    // enable timer 2

TIM3->ARR = 65535;             // PWM period
TIM3->CCR3 = 2;                // channel 3 compare value: low for the first 2 cycles
TIM3->CCMR2 = TIM_CCMR2_OC3M;  // set channel 3 to PWM mode 2 (OC3M=111)
TIM3->CCER = TIM_CCER_CC3E     // enable channel 3 compare output
    /* | TIM_CCER_CC3P */;     // it's possible to invert output polarity
TIM3->SMCR = TIM_SMCR_TS_0     // trigger selection TS=001: ITR1 = TIM2 is master
    | TIM_SMCR_SMS_2;          // slave mode SMS=100: reset mode
TIM3->CR1 = TIM_CR1_CEN;       // enable timer 3
I don't have an F7; this runs on my STM32L151 board, which happens to have an LED on PB0, i.e. TIM3 channel 3. The LED lights up nicely when I hit the suspend button in the debugger; the low pulse is not noticeable to the naked eye at all. Apply an external low-pass RC filter to make it disappear if it bothers whatever component it's connected to. It might be possible to output a clean signal using the retriggerable one-pulse mode of the advanced timers TIM1 or TIM8, but I don't have any experience with those.

Delay interfacing Arduino and Simulink

I am trying to read data from a potentiometer using an Arduino microcontroller (I tried both an Arduino UNO and an Arduino FIO) and to interface it to Simulink over a serial connection (I tried baud rates ranging from 57600 to 921600).
Here is the Arduino source code:
/*
 * AnalogReadSerial
 * Reads an analog input on pin 0, prints the result to the serial monitor.
 */
#define ana_pin A0

void setup() {
    analogReference(DEFAULT);
    Serial.begin(57600);
}

// the loop routine runs over and over again forever:
void loop() {
    // read the input on analog pin 0:
    int sensorValue = analogRead(ana_pin);
    // print out the value you read:
    Serial.print(sensorValue);
    // delay(500); // delay in between reads for stability
}
I interfaced it with the Tera Term software, and the value changes instantaneously in response to 3 V or 0 V.
However, when I try to interface it with a Simulink model using the Instrument Control Toolbox, there is a 10-second lag when the value changes from the ASCII representation of 3 V to 0 V.
The block sample time is 0.01 seconds, and the model configuration parameters are adjusted accordingly (I tried 1 second or more and the delay still remains). Also, I was able to record data from another sensor and an LPC1768 development board without any delay.
I also tried it with the Arduino support libraries in Simulink, and it seems that no data is received: as you can see from Scope1 in the PNG file, the status signal is always 0. I have also attached the hardware implementation properties from the Arduino block in Simulink.
Could you help me understand what is going on and how to solve this issue?
@Patrick Trentin -
I get a delay of 4 seconds when I use baud rates of 230400, 460800 and 921600.
For baud rates of 57600 and 115200 I get a delay of 10 seconds.
Thank you for pointing that out; I had not focused on it previously.
However, since I will be using the sensor in an application which needs an accurate reading every 0.01 s, I don't think I can work with a 4-second delay.

STM32F427's USART1 sometimes sets the 8th data bit as if it were a parity bit

I'm working with the STM32F427's USART1 via the following class:
void DebugUartOperator::Init() {
    // for USART1 and USART6
    ::RCC_APB2PeriphClockCmd(RCC_APB2Periph_USART1, ENABLE);
    // USART1 via PORTA
    ::RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOA, ENABLE);
    ::GPIO_PinAFConfig(GPIOA, GPIO_PinSource9, GPIO_AF_USART1);
    ::GPIO_PinAFConfig(GPIOA, GPIO_PinSource10, GPIO_AF_USART1);

    GPIO_InitTypeDef GPIO_InitStruct;
    // fills the struct with the default vals:
    // all pins, mode IN, 2MHz, PP, NOPULL
    ::GPIO_StructInit(&GPIO_InitStruct);
    // mission-specific settings:
    GPIO_InitStruct.GPIO_Pin = GPIO_Pin_9 | GPIO_Pin_10;
    GPIO_InitStruct.GPIO_Mode = GPIO_Mode_AF;
    ::GPIO_Init(GPIOA, &GPIO_InitStruct);

    USART_InitTypeDef USART_InitStruct;
    // 9600/8/1/no parity/no HWCtrl/rx+tx
    ::USART_StructInit(&USART_InitStruct);
    USART_InitStruct.USART_BaudRate = 921600;
    USART_InitStruct.USART_WordLength = USART_WordLength_9b;
    USART_InitStruct.USART_StopBits = USART_StopBits_1;
    USART_InitStruct.USART_Parity = USART_Parity_Odd;
    ::USART_Init(USART1, &USART_InitStruct);
    ::USART_Cmd(USART1, ENABLE);
}

void DebugUartOperator::SendChar(char a) {
    // wait for TX register to become empty
    while (::USART_GetFlagStatus(USART1, USART_FLAG_TXE) != SET);
    ::USART_SendData(USART1, static_cast<uint8_t>(a));
}
The problem is that every now and then the USART starts ignoring the actual 8th data bit and setting it as a parity bit (odd parity, to be specific). The strangest thing of all is that it sometimes happens even after a long power-off, without any prior reprogramming or anything. For example, yesterday evening everything was OK; the next morning I come to work, switch the device on, and it starts working the way described. But it's not limited to this; it may randomly appear after some later restart.
That effect is clearly visible with the oscilloscope and with different UART-USB converters used with different programs. Once this effect has appeared, it is even possible to reprogram the microcontroller to transmit test data sets, for example 0x00 to 0xFF in an endless cycle; it does not affect the problem. Changing speeds (down to 9600 bps), bits per word, or parity control does not help; the effect remains intact even after reprogramming (resulting, for example, in a really abnormal 2 parity bits per byte), at least while the USART is initialized and used in the usual order according to my program's workflow.
The only way to fix it is to make the main() function do the following:
int main() {
    DebugUartOperator dua;
    dua.Init();
    while (1) {
        uint8_t i = 0;  // initialize to avoid undefined behavior
        while (++i)
            dua.SendChar(i);
        dua.SendChar(i);
    }
}
With this, after reprogramming and a restart, the first few bytes (up to 5) are transmitted garbled, but then everything works well and continues to work through further restarts and reprogrammings.
This effect is observed on 2 different STM32F427s on 2 physically different boards of the same layout. No regularity is noticeable in its appearance. Signal polarity and levels conform to USART requirements; no noise or bad contacts were detected during investigation. There seems to be no interference with USART1 from other code used in my program (either mine or library code), or it is buried deeply. CMSIS-RTOS (Keil uVision 5.0.5's RTX) is used as the RTOS in the project.
Need help.
In STM32s you can specify the word length for USART/UART transmission, but the word length is the sum of the data bits and the parity bit. So if you would like to have 8 data bits and an even parity bit, you have to specify UART_WORDLENGTH_9B and UART_PARITY_EVEN.
You can also have 9 bits of data with no parity. In the reference manual for the F427, section 30.6.4, Bit 12, we see that it is possible to set 9 data bits, where the term "data bits" also covers the parity bit.
Bit 12 M: Word length
This bit determines the word length. It is set or cleared by software.
0: 1 Start bit, 8 Data bits, n Stop bit
1: 1 Start bit, 9 Data bits, n Stop bit
The final answer is in section 30.6.4, Bit 10 (PCE, parity control enable):
This bit selects the hardware parity control (generation and
detection). When the parity control is enabled, the computed parity is
inserted at the MSB position (9th bit if M=1; 8th bit if M=0) and
parity is checked on the received data. This bit is set and cleared by
software. Once it is set, PCE is active after the current byte (in
reception and in transmission).
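In the SPL terms used in the question's Init(), the two sensible configurations are therefore (a sketch, only the relevant fields shown):

// 8 data bits plus an odd parity bit on the wire (what the question's Init() sets up):
USART_InitStruct.USART_WordLength = USART_WordLength_9b;  // 9th "data" bit carries the parity
USART_InitStruct.USART_Parity     = USART_Parity_Odd;

// plain 8N1: 8 data bits, no parity bit at all:
USART_InitStruct.USART_WordLength = USART_WordLength_8b;
USART_InitStruct.USART_Parity     = USART_Parity_No;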

STM32F103 Input Capture Too Slow

I have a high-speed clock at 10 MHz going to the processor's TIM4 input capture pin (channel 3). I would like to verify that the clock is running at 10 MHz with the processor's input capture. I coded the input capture module, and it works fine for lower frequencies (around 1 kHz or so). Once the frequency climbs up into the MHz range, the processor starts to miss interrupts and thus gives me an incorrect frequency. I didn't see anywhere in the datasheet the maximum frequency that the input capture can read. I have an external clock of 8 MHz and a core clock of 72 MHz, so I would imagine that I can read a 10 MHz signal. Any ideas?
Take a look at the TIM_ICInitStructure.TIM_ICPrescaler options. Usually you'll have it set to TIM_ICPSC_DIV1 so that interrupts are generated on every valid transition.
Prescaler values of 1, 2, 4 and 8 are available, which let you effectively reduce the rate of interrupt generation by that factor. For example, for a 10 MHz signal with a prescaler of 8 you'd expect to count a frequency of 10 MHz / 8 = 1.25 MHz.
This is still quite tight for a 72 MHz HCLK, so you'll still need to optimise your IRQ handler carefully; a minimal setup is sketched below.
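For illustration, the prescaler setting with the F1 Standard Peripheral Library, assuming TIM4 channel 3 as in the question (GPIO and RCC setup omitted):

TIM_ICInitTypeDef TIM_ICInitStructure;

TIM_ICStructInit(&TIM_ICInitStructure);
TIM_ICInitStructure.TIM_Channel     = TIM_Channel_3;
TIM_ICInitStructure.TIM_ICPolarity  = TIM_ICPolarity_Rising;
TIM_ICInitStructure.TIM_ICSelection = TIM_ICSelection_DirectTI;
TIM_ICInitStructure.TIM_ICPrescaler = TIM_ICPSC_DIV8;  // capture every 8th edge -> 1.25 MHz IRQ rate
TIM_ICInitStructure.TIM_ICFilter    = 0;
TIM_ICInit(TIM4, &TIM_ICInitStructure);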
Looks like you're generating an interrupt request on every rising (or falling) edge of the clock.
If that is indeed the case, think about this for a second: with a 10 MHz input signal, you're generating an interrupt about every 7 CPU cycles (72 MHz / 10 MHz = 7.2). In those 7 CPU cycles, you need to budget time to save registers to RAM, run the IRQ handler prologue, run the actual code you wrote for the interrupt handler, run the IRQ handler epilogue, and restore the registers.
Best case, if you set compiler flags to optimize for speed and you're not doing much processing in the interrupt handler, you're looking at tens of cycles to run all these tasks. Since you only have 7 cycles' worth of time for tens of cycles' worth of processing, it's no surprise that you're missing interrupts.
You can't use an interrupt routine at that frequency. You need to feed the 10 MHz in as an external trigger to the timer. Then you can use the prescaler and the timer to divide down to a suitable, lower interrupt frequency, as sketched below.
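One way to do that with the F1 SPL, assuming the signal is rewired to TIM4's ETR pin (the external-clock path does not run through channel 3):

/* clock TIM4 from its ETR pin (external clock mode 2); with ARR = 9999 the
 * counter overflows every 10 000 input edges: 10 MHz / 10 000 = 1 kHz IRQ rate */
TIM_ETRClockMode2Config(TIM4, TIM_ExtTRGPSC_OFF, TIM_ExtTRGPolarity_NonInverted, 0);
TIM_SetAutoreload(TIM4, 9999);
TIM_ITConfig(TIM4, TIM_IT_Update, ENABLE);  // count these 1 kHz updates to verify the input frequency
TIM_Cmd(TIM4, ENABLE);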