Cortex-M0 SysTick interrupt doesn't happen - STM32

I am writing firmware for an STM32F072.
The problem is that the SysTick interrupt never happens.
Here is the simple code that configures SysTick:
SysTick_Config(1000);
This function is taken from CMSIS's core_cm0.h file:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
  if ((ticks - 1) > SysTick_LOAD_RELOAD_Msk) return (1);      /* Reload value impossible */
  SysTick->LOAD = (ticks & SysTick_LOAD_RELOAD_Msk) - 1;      /* set reload register */
  NVIC_SetPriority (SysTick_IRQn, (1<<__NVIC_PRIO_BITS) - 1); /* set Priority for Systick Interrupt */
  SysTick->VAL = 0;                                           /* Load the SysTick Counter Value */
  SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                  SysTick_CTRL_TICKINT_Msk |
                  SysTick_CTRL_ENABLE_Msk;                    /* Enable SysTick IRQ and SysTick Timer */
  return (0);                                                 /* Function successful */
}
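As a side note, SysTick_Config(1000) requests an interrupt every 1000 core clock cycles. For a fixed tick rate, the reload value is usually derived from the core clock instead; a minimal example, assuming the CMSIS SystemCoreClock variable is kept up to date:
SystemCoreClockUpdate();                 /* refresh SystemCoreClock after clock setup */
SysTick_Config(SystemCoreClock / 1000);  /* request a 1 ms tick */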
The system timer counts as expected.
The COUNTFLAG bit in SysTick->CTRL reads 1, but no interrupt happens! The firmware never jumps to SysTick_Handler().
What am I missing? This code is enough for STM32F1 and STM32F4 devices, but it does not work on the STM32F0.

I recommend you take a look at the Code Snippets from ST. These are low-level example programs for the F0 (and L0) families. Some of them use SysTick (e.g. the first two examples from the CLOCK CONTROLLER projects), everything is preconfigured, and hopefully they work on your board too. They were originally written for the STM32F072 Discovery board; I used them on my custom board with some tiny modifications.
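As a quick sanity check, also make sure the interrupt handler has the exact name the startup file's vector table expects; a minimal sketch (the counter variable is just an example, not from the question):
volatile uint32_t tick_count = 0;   /* example variable */
/* must match (and override) the weak SysTick_Handler symbol in the startup file */
void SysTick_Handler(void)
{
    tick_count++;
}
If the handler is compiled as C++ without extern "C", or its name is spelled differently, the weak default handler remains in the vector table and the interrupt appears never to fire.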

Related

LIS3DH sensor does not generate an interrupt on its pin

I configured a LIS3DH sensor to generate an interrupt when the acceleration in any direction exceeds a certain threshold. I read the INT1_SRC register, and when I shake my board the IA bit is set and then reset again, so I know the interrupt is being generated, but I cannot capture it on the pin. I also checked the INT1 pin with an oscilloscope: no signal.
Does anyone know where the problem could be?
My configuration for the sensor:
/* High-pass filter enabled on interrupt activity 1 */
lis3dh_high_pass_int_conf_set(&dev_ctx, LIS3DH_ON_INT1_GEN);
/* Enable HP filter for wake-up event detection.*/
/* Use this setting to remove gravity on data output */
lis3dh_high_pass_on_outputs_set(&dev_ctx, PROPERTY_ENABLE);
/* Enable AOI1 on int1 pin */
lis3dh_pin_int1_config_get(&dev_ctx, &ctrl_reg3);
ctrl_reg3.i1_ia1 = PROPERTY_ENABLE;
lis3dh_pin_int1_config_set(&dev_ctx, &ctrl_reg3);
/* Interrupt 1 pin latched */
lis3dh_int1_pin_notification_mode_set(&dev_ctx, LIS3DH_INT1_LATCHED);
/* Set full scale to 2 g */
lis3dh_full_scale_set(&dev_ctx, LIS3DH_2g);
/* Set interrupt threshold to 0x05 (1 LSB = 16 mg at ±2 g full scale) */
lis3dh_int1_gen_threshold_set(&dev_ctx, 0x05);
/* Set no time duration */
lis3dh_int1_gen_duration_set(&dev_ctx, 0);
/* Dummy read to force the HP filter to current acceleration value. */
lis3dh_filter_reference_get(&dev_ctx, &dummy);
/* Configure wake-up interrupt event on all axis */
lis3dh_int1_gen_conf_get(&dev_ctx, &int1_cfg);
int1_cfg.zhie = PROPERTY_ENABLE;
int1_cfg.yhie = PROPERTY_ENABLE;
int1_cfg.xhie = PROPERTY_ENABLE;
int1_cfg.aoi = PROPERTY_DISABLE;
lis3dh_int1_gen_conf_set(&dev_ctx, &int1_cfg);
/* Set device in HR mode */
lis3dh_operating_mode_set(&dev_ctx, LIS3DH_HR_12bit);
/* Set Output Data Rate to 100 Hz */
lis3dh_data_rate_set(&dev_ctx, LIS3DH_ODR_100Hz);
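Not an answer, but one way to narrow this down: since INT1 is configured as latched, INT1_SRC must be read to release the pin, and polling it also shows whether the event logic fires at all. A small sketch with the same ST driver (assuming the lis3dh_int1_gen_source_get helper from ST's lis3dh_reg driver; the printf is a placeholder):
lis3dh_int1_src_t int1_src;
/* reading INT1_SRC reports the event and clears the latched INT1 pin */
lis3dh_int1_gen_source_get(&dev_ctx, &int1_src);
if (int1_src.ia) {
    /* interrupt active: xh/yh/zh tell which axis crossed the threshold */
    printf("IA set: xh=%d yh=%d zh=%d\r\n", int1_src.xh, int1_src.yh, int1_src.zh);
}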

STM32: Use DMA to generate a bit pattern on a GPIO pin

I am trying to generate a bit pattern on a GPIO pin. I have set up the DMA engine to transfer from an array of GPIO pin states to the GPIO BSRR register.
Here is the code I am using to configure the DMA:
hdma_tim16_ch1_up.Instance = DMA1_Channel3;
hdma_tim16_ch1_up.Init.Direction = DMA_PERIPH_TO_MEMORY;
hdma_tim16_ch1_up.Init.PeriphInc = DMA_PINC_DISABLE;
hdma_tim16_ch1_up.Init.MemInc = DMA_MINC_ENABLE;
hdma_tim16_ch1_up.Init.PeriphDataAlignment = DMA_PDATAALIGN_WORD;
hdma_tim16_ch1_up.Init.MemDataAlignment = DMA_MDATAALIGN_WORD;
hdma_tim16_ch1_up.Init.Mode = DMA_NORMAL;
hdma_tim16_ch1_up.Init.Priority = DMA_PRIORITY_LOW;
if (HAL_DMA_Init(&hdma_tim16_ch1_up) != HAL_OK)
{
    Error_Handler();
}
/* Several peripheral DMA handle pointers point to the same DMA handle.
Be aware that there is only one channel to perform all the requested DMAs. */
__HAL_LINKDMA(tim_baseHandle,hdma[TIM_DMA_ID_CC1],hdma_tim16_ch1_up);
__HAL_LINKDMA(tim_baseHandle,hdma[TIM_DMA_ID_UPDATE],hdma_tim16_ch1_up);
Here is the code I use to set up the transfer:
uint32_t outputbuffer[] = {
    0x00000100, 0x01000000,
    0x00000100, 0x01000000,
    0x00000100, 0x01000000,
    0x00000100, 0x01000000,
    0x00000100, 0x01000000,
    0x00000100, 0x01000000,
    0x00000100, 0x01000000
    /* ... */
};
if (HAL_DMA_Start_IT(htim16.hdma[TIM_DMA_ID_UPDATE], (uint32_t)outputbuffer, (uint32_t)&GPIOG->BSRR, 14) != HAL_OK)
{
    /* Return error status */
    return HAL_ERROR;
}
__HAL_TIM_ENABLE_DMA(&htim16,TIM_DMA_UPDATE);
HAL_TIM_Base_Start_IT(&htim16);
I am expecting that every time the counter overflows, the DMA transfers 32 bits from the array and increments to the next array position, until the DMA CNDTR register reads 0.
I set up a GPIO pin to toggle every time the timer overflows, and I put an alternating bit pattern in the array. I would expect the two GPIO pins to be similar in their output shape, but instead I get one long pulse on the line connected to the DMA. Any tips would be greatly appreciated.
configure TIM2 as input capture direct mode (TIM2_CH1)
configure the TIM2 DMA direction as "memory to peripheral"
configure the TIM2 DMA data width as Half Word / Half Word
configure the GPIO pins as GPIO_OUTPUT, for example the 16 pins GPIOD0..GPIOD15
copy and paste the HAL_TIM_IC_Start_DMA() function from the HAL library and give it a new name, MY_TIM_IC_Start_DMA()
find the HAL_DMA_Start_IT() function call in MY_TIM_IC_Start_DMA()
replace (uint32_t)&htim->Instance->CCR1 with (uint32_t)&GPIOD->ODR:
if (HAL_DMA_Start_IT(htim->hdma[TIM_DMA_ID_CC1], (uint32_t)&GPIOD->ODR, (uint32_t)pData, Length) != HAL_OK)
Now you can start the DMA-to-GPIO transfer by calling
MY_TIM_IC_Start_DMA(&htim2, TIM_CHANNEL_1, (uint32_t*)gpioBuffer, GPIO_BUFFER_SIZE);
The actual transfer must be triggered by pulses on the TIM2_CH1 input pin (for example, from an output-compare pin of another timer channel). Those pulses were originally used to save the Timer2 CCR1 register values to the DMA buffer; the tweaked code transfers the DMA buffer values to the GPIOD ODR register instead.
For a GPIO-to-memory transfer, change the TIM2 DMA direction to "peripheral to memory", configure the GPIO pins as GPIO_INPUT, and use GPIOD->IDR instead of ODR in the HAL_DMA_Start_IT() parameters in the modified MY_TIM_IC_Start_DMA() function.
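If you prefer not to copy the whole HAL function, the same idea can be sketched directly; this is only a sketch, assuming htim2 and its CC1 DMA handle have already been initialized (e.g. by CubeMX) and that gpioBuffer/GPIO_BUFFER_SIZE are defined by the user. Note that HAL_DMA_Start_IT() takes the source address first, then the destination:
/* start the DMA: memory (gpioBuffer) -> peripheral (GPIOD->ODR) */
if (HAL_DMA_Start_IT(htim2.hdma[TIM_DMA_ID_CC1], (uint32_t)gpioBuffer,
                     (uint32_t)&GPIOD->ODR, GPIO_BUFFER_SIZE) != HAL_OK)
{
    Error_Handler();
}
__HAL_TIM_ENABLE_DMA(&htim2, TIM_DMA_CC1);  /* issue a DMA request on each CC1 event */
HAL_TIM_IC_Start(&htim2, TIM_CHANNEL_1);    /* start input capture on channel 1 */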

STM32L4 LSE stops oscillating on Vbat

I'm currently having a problem with the RTC clock (LSE) on an STM32L433.
When Vdd is supplied and the RTC is configured, the clock works as expected.
As soon as I remove Vdd (Vbat is provided by a coin cell), the LSE crystal stops oscillating, hence the RTC problem: the backup registers are preserved, but the time is frozen for the period Vdd is off.
What could be the cause of the problem? Is there a PWR register that I'm missing?
Thanks in advance for any response.
RTC setup code (the register library is libopencm3):
void rtc_setup(time_struct time, date_struct date)
{
    usart_puts("1\n");
    rtc_wkup_set = false;
    /* cause a backup domain reset to select the clock source for RTC */
    RCC_BDCR |= RCC_BDCR_BDRST;
    RCC_BDCR &= ~RCC_BDCR_BDRST;
    pwr_disable_backup_domain_write_protect();
    RCC_BDCR &= ~(RCC_BDCR_RTCSEL_MASK << RCC_BDCR_RTCSEL_SHIFT);
    RCC_BDCR |= RCC_BDCR_RTCEN | (RCC_BDCR_RTCSEL_LSE << RCC_BDCR_RTCSEL_SHIFT) | (RCC_BDCR_LSEDRV_HIGH << RCC_BDCR_LSEDRV_SHIFT) | RCC_BDCR_LSEON;
    //TODO: Add timeout for RTC FAIL
    while ((RCC_BDCR & RCC_BDCR_LSERDY) == 0)
        ;
    usart_puts("2\n");
    /* enable RTC */
    rtc_unlock();
    RTC_ISR |= RTC_ISR_INIT;
    while (!(RTC_ISR & RTC_ISR_INITF))
        ;
    RTC_CR &= ~(RTC_CR_FMT);
    RTC_DR = (uint32_t)date;
    RTC_TR = (uint32_t)time;
    RTC_ISR &= ~(RTC_ISR_INIT);
    usart_puts("3\n");
    rtc_lock();
}
The issue turned out to be a short on one of the legs of the STM32 caused by flux residue.
Issue solved.

Disabling SPI peripheral on STM32H7 between two transmissions?

I'm still doing SPI experiments between two Nucleo STM32H743 boards.
I've configured the SPI in full-duplex mode, with CRC enabled and an SPI frequency of 25 MHz (so the Slave can transmit without issue).
DSIZE is 8 bits and the FIFO threshold is 4.
On the Master side, I send 4 bytes and then wait for 5 bytes from the Slave. I know I could use half-duplex or simplex mode, but I want to understand what's going on in full-duplex mode.
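/* SPI1 register addresses on the STM32H743 (SPI1 base = 0x40013000) */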
volatile unsigned long *CR1 = (unsigned long *)0x40013000;
volatile unsigned long *CR2 = (unsigned long *)0x40013004;
volatile unsigned long *TXDR = (unsigned long *)0x40013020;
volatile unsigned long *RXDR = (unsigned long *)0x40013030;
volatile unsigned long *SR = (unsigned long *)0x40013014;
volatile unsigned long *IFCR = (unsigned long *)0x40013018;
volatile unsigned long *TXCRC = (unsigned long *)0x40013044;
volatile unsigned long *RXCRC = (unsigned long *)0x40013048;
volatile unsigned long *CFG2 = (unsigned long *)0x4001300C;
unsigned long SPI_TransmitCommandFullDuplex(uint32_t Data)
{
    // size of transfer (TSIZE)
    *CR2 = 4;
    /* Enable SPI peripheral */
    *CR1 |= SPI_CR1_SPE;
    /* Master transfer start */
    *CR1 |= SPI_CR1_CSTART;
    *TXDR = Data;
    while ( ((*SR) & SPI_FLAG_EOT) == 0 );
    // clear flags
    *IFCR = 0xFFFFFFFF;
    // disable SPI
    *CR1 &= ~SPI_CR1_SPE;
    return 0;
}
void SPI_ReceiveResponseFullDuplex(uint8_t *pData)
{
    // size of transfer (TSIZE)
    *CR2 = 5;
    /* Enable SPI peripheral */
    *CR1 |= SPI_CR1_SPE;
    /* Master transfer start */
    *CR1 |= SPI_CR1_CSTART;
    *TXDR = 0;
    *((volatile uint8_t *)TXDR) = 0;
    while ( ((*SR) & SPI_FLAG_EOT) == 0 );
    *((uint32_t *)pData) = *RXDR;
    *((uint8_t *)(pData+4)) = *((volatile uint8_t *)RXDR);
    // clear flags
    *IFCR = 0xFFFFFFFF;
    // disable SPI
    *CR1 &= ~SPI_CR1_SPE;
}
This works fine (both functions are simply called in sequence from main).
Then I tried to remove the SPI disabling between the two steps (i.e. I no longer clear and set the SPE bit again), and I got stuck in the while loop of SPI_ReceiveResponseFullDuplex.
Is it necessary to disable the SPI between two transmissions, or did I make a mistake in the configuration?
The behaviour of the SPE bit is not very clear in the reference manual. For example, it is clearly stated that in half-duplex mode the SPI has to be disabled to change the direction of communication, but I found nothing about full-duplex mode (or I missed it).
This errata item might be relevant here.
Master data transfer stall at system clock much faster than SCK
Description
With the system clock (spi_pclk) substantially faster than SCK (spi_ker_ck divided by a prescaler), SPI/I2S master data transfer can stall upon setting the CSTART bit within one SCK cycle after the EOT event (EOT flag raise) signaling the end of the previous transfer.
Workaround
Apply one of the following measures:
• Disable then enable SPI/I2S after each EOT event.
• Upon EOT event, wait for at least one SCK cycle before setting CSTART.
• Prevent EOT events from occurring, by setting transfer size to undefined (TSIZE = 0)
and by triggering transmission exclusively by TXFIFO writes.
Your SCK frequency is 25 MHz; the system clock can be 400 or 480 MHz, i.e. 16 or 19 times SCK. When you remove the lines clearing and setting SPE, only these lines remain in effect after detecting EOT:
*IFCR = 0xFFFFFFFF;
*CR2 = 5;
*CR1 |= SPI_CR1_CSTART;
When this sequence (quite probably) takes less than 16 clock cycles, you get exactly the problem described above. It looks like someone did sloppy work again on the SPI clocking; what you did first, clearing and setting SPE, is one of the recommended workarounds.
I would just set TSIZE = 9 at the start, then write the command and the dummy bytes in one go; it makes no difference in full-duplex mode (see the sketch below).
Keep in mind that in full-duplex mode another 4 bytes are received, which must be read and discarded before getting the real answer. This was not a problem with your original code, because clearing SPE discards data still in the receive FIFO, but it would become one if the modified code worked, e.g. if there were some more delay before setting CSTART again.
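A minimal sketch of that single-transfer variant, reusing the register pointers from the question (untested; it assumes the 9 bytes fit in the FIFOs, which are 16 bytes deep on SPI1, and that cmd/resp are caller-supplied 4- and 5-byte buffers):
void SPI_CommandResponseFullDuplex(const uint8_t *cmd, uint8_t *resp)
{
    *CR2 = 9;                             /* TSIZE = 4 command bytes + 5 response bytes */
    *CR1 |= SPI_CR1_SPE;
    *CR1 |= SPI_CR1_CSTART;
    /* queue all 9 bytes at once: 4 command bytes followed by 5 dummies */
    for (int i = 0; i < 9; i++)
        *((volatile uint8_t *)TXDR) = (i < 4) ? cmd[i] : 0;
    while ( ((*SR) & SPI_FLAG_EOT) == 0 );
    /* read 9 bytes back, discarding the 4 clocked in during the command phase */
    for (int i = 0; i < 9; i++) {
        uint8_t b = *((volatile uint8_t *)RXDR);
        if (i >= 4)
            resp[i - 4] = b;
    }
    *IFCR = 0xFFFFFFFF;
    *CR1 &= ~SPI_CR1_SPE;
}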

Sampling data missing or incorrect in the DMA memory buffer when using the ADC in DMA circular mode

My goal is to sample a signal on an ADC channel, with DMA moving the data, on an STM32Fx board. I feed a square wave into the ADC channel. When using DMA mode, some of the data is out of order, or "messed up". The same thing happens on STM32F207 and STM32F373 boards.
(1) When I collect the converted data using the ADC EOC interrupt, the data array looks like a square wave. This is OK.
(2) I would like to use circular DMA instead of the EOC IRQ, but then the data array is messed up: some data is missing or incorrect. It gets worse as the sampling rate increases. Below are my test results.
The first picture compares the EOC IRQ with DMA circular mode at a 62.5 kHz sampling rate: the waveform in DMA mode became shorter.
The second picture shows a much worse case with DMA at a 200 kHz sampling rate: the data in DMA mode is messed up, while it stays consistent with the ADC EOC IRQ.
<<<< ADC config >>>>
/* ADC Common Init */
ADC_CommonInitStructure.ADC_Mode = ADC_Mode_Independent;
ADC_CommonInitStructure.ADC_Prescaler = ADC_Prescaler_Div8;
ADC_CommonInitStructure.ADC_DMAAccessMode = ADC_DMAAccessMode_Disabled;
ADC_CommonInitStructure.ADC_TwoSamplingDelay = ADC_TwoSamplingDelay_5Cycles;
ADC_CommonInit(&ADC_CommonInitStructure);
/* Fill ADC1 init structure with default values */
ADC_StructInit(&ADC_InitStructure);
/* Configure the ADC1 in continuous mode */
ADC_InitStructure.ADC_Resolution = ADC_Resolution_12b;
ADC_InitStructure.ADC_ScanConvMode = ENABLE;
ADC_InitStructure.ADC_ContinuousConvMode = ENABLE;
ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;
ADC_InitStructure.ADC_NbrOfConversion = 1;
ADC_Init(ADC1, &ADC_InitStructure);
/* ADC1 regular channel 6 configuration */
ADC_RegularChannelConfig(ADC1, ADC_Channel_6, 1, ADC_SampleTime_480Cycles);
ADC_EOCOnEachRegularChannelCmd(ADC1, ENABLE);
#ifdef __DMA_ENABLE__
/* Enable DMA request after last transfer (Single-ADC mode) */
ADC_DMARequestAfterLastTransferCmd(ADC1, ENABLE);
/* Enable ADC1 DMA since ADC1 is the Master*/
ADC_DMACmd(ADC1, ENABLE);
#else
ADC_ITConfig(ADC1, ADC_IT_EOC, ENABLE);
ADC_ITConfig(ADC1, ADC_IT_OVR, ENABLE);
NVIC_InitStructure.NVIC_IRQChannel = ADC_IRQn;
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 1;
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure);
#endif
/* Enable ADC1 */
ADC_Cmd(ADC1, ENABLE);
<<<< DMA config >>>>
/* DMA2 clock enable */
RCC_AHB1PeriphClockCmd( RCC_AHB1Periph_DMA2, ENABLE );
/* DMA2 Stream0 Channel0 config */
DMA_DeInit(DMA2_Stream0);
DMA_DoubleBufferModeConfig(DMA2_Stream0, (uint32_t)ADCValB, DMA_Memory_1);
DMA_DoubleBufferModeCmd(DMA2_Stream0, ENABLE);
DMA_InitStructure.DMA_Channel = DMA_Channel_0;
DMA_InitStructure.DMA_Memory0BaseAddr = (uint32_t)ADCValA;
DMA_InitStructure.DMA_PeripheralBaseAddr = ((uint32_t) ADC1) + 0x4C;
DMA_InitStructure.DMA_DIR = DMA_DIR_PeripheralToMemory;
DMA_InitStructure.DMA_BufferSize = NUM_OF_ADC; // 512 buffer size
DMA_InitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable;
DMA_InitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable;
DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;
DMA_InitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_HalfWord;
DMA_InitStructure.DMA_Mode = DMA_Mode_Circular;
DMA_InitStructure.DMA_Priority = DMA_Priority_Medium;
DMA_InitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
DMA_InitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_HalfFull;
DMA_InitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
DMA_InitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
DMA_Init(DMA2_Stream0, &DMA_InitStructure);
DMA_ITConfig(DMA2_Stream0, DMA_IT_TC, ENABLE);
NVIC_InitStructure.NVIC_IRQChannel = DMA2_Stream0_IRQn;
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure);
/* DMA2 Stream0 enable */
DMA_Cmd(DMA2_Stream0, ENABLE);
Finally, I expect the result to be the same as when using the ADC EOC IRQ.
I have encountered the same problem: as part of a more complicated application, my ADC continuously samples an input PWM signal and is configured with DMA requests in circular mode.
First, I tried storing the converted data in a memory buffer at each End Of Conversion interrupt. This works well, so I am confident that the ADC is converting the data correctly.
Second, I started the ADC with DMA requests in circular mode (for each End Of Conversion, the DMA takes the converted ADC data and stores it in a memory buffer). At this stage, when I checked the memory buffer via the debugger, the data seemed messed up, as if the DMA were skipping values.
Third, I wanted to verify whether it was a DMA problem or a debugger one, so I started the ADC with DMA requests in "normal" mode. All of a sudden, when opening the debugger, the memory buffer data was stored correctly.
To summarize: the main difference between the second and the third method, when opening the debugger, is that in circular mode the DMA is still running, and as a result the debugger cannot show the memory buffer data correctly because of the speed of the ADC conversion / DMA requests (~1 µs). In normal mode, on the other hand, the DMA stops once it has filled the buffer.
To conclude, the DMA works fine, and you can output the square wave with the DAC using your buffer. If you want to inspect the data correctly in the debugger, you need to stop the DMA (after a period of time, for example):
HAL_ADC_Stop_DMA(&hadc);
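Alternatively, if the data must be consumed while the DMA keeps running in circular mode, the usual pattern is to process one half of the buffer while the DMA fills the other half. A sketch using the HAL half/full-transfer callbacks (adc_buf, NUM_OF_ADC and process_samples are placeholders, not from the question):
/* called by HAL when the first half of adc_buf has been filled */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    process_samples(&adc_buf[0], NUM_OF_ADC / 2);              /* first half is stable */
}
/* called by HAL when the second half has been filled (end of buffer) */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    process_samples(&adc_buf[NUM_OF_ADC / 2], NUM_OF_ADC / 2); /* second half is stable */
}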
Based on the tabular view of the data you provided, it appears that the DMA is configured properly, as the data looks nearly identical to the data obtained from the ADC EOC IRQ.
The main variable between relying on DMA and on an IRQ is that there may be unexpected bus "collisions" between the DMA controller and the CPU, since, unlike when using an IRQ, they both run concurrently, potentially resulting in wait states.
From the STM32 Reference Manual Section 13.4:
The DMA controller performs direct memory transfer by sharing the system bus with the Cortex-M4F core. The DMA request may stop the CPU access to the system bus for some bus cycles, when the CPU and DMA are targeting the same destination (memory or peripheral).
Your observed sampling-rate-dependent degradation in performance certainly corroborates this hypothesis, as the bus matrix must arbitrate more frequent accesses between the DMA controller and the CPU.
Without seeing the rest of your code that sets up and reads from the buffer, it is hard to say what aspect of your application code may be causing this issue.