How to use the RTC clock with the STM32 using HSE with PLL - stm32

I am using the STM32F0xx series and am trying to get the RTC to work. I have an external 8 MHz crystal connected and am using the PLL to create a SYSCLK of 48 MHz. Obviously I would like to use this clock with the RTC. I have tried the following:
//(1) Write access for RTC registers
//(2) Enable init phase
//(3) Wait until it is allow to modify RTC register values
//(4) set prescaler,
//(5) New time in TR
//(6) Disable init phase
//(7) Disable write access for RTC registers
RTC->WPR = 0xCA; //(1)
RTC->WPR = 0x53; //(1)
RTC->ISR |= RTC_ISR_INIT; //(2)
while ((RTC->ISR & RTC_ISR_INITF) != RTC_ISR_INITF) //(3)
{
    //add time out here for a robust application
}
RCC->BDCR = RCC_BDCR_RTCSEL_HSE;
RTC->PRER = 0x007C2E7C; //(4)
RTC->TR = RTC_TR_PM | 0x00000001; //(5)
RTC->ISR &=~ RTC_ISR_INIT; //(6)
RTC->WPR = 0xFE; //(7)
RTC->WPR = 0x64; //(7)
In the main loop there is an infinite for loop that turns two LEDs on and off. Without the RTC configuration this works fine, but as soon as I add the code above it stops working.
If I do this then the rest of the code breaks. Can I use the HSE, and if so, am I using the prescaler correctly?

This example is from actual working code that uses the HSE as the RTC clock source on an STM32F429. It uses the ST HAL library, but it may give you a clue to the solution.
Please note that the HSE must already be configured and in use as a clock source before this code runs.
Remark: when reading, you should read not just the time but also the date,
i.e.:
HAL_RTC_GetTime(&RTChandle, &RTCtime, FORMAT_BIN); //first
HAL_RTC_GetDate(&RTChandle, &RTCdate, FORMAT_BIN); //second, even if you don't need the date
otherwise the registers stay frozen (in that case you see the time ticking only under the debugger, not in a real run, because the debugger reads both registers).
// enable access to rtc register
HAL_PWR_EnableBkUpAccess();
// 1. 8 MHz oscillator (the source crystal, not the PLL output) divided by 8 = 1 MHz
__HAL_RCC_RTC_CONFIG(RCC_RTCCLKSOURCE_HSE_DIV8);
RTChandle.Instance = RTC;
RTChandle.Init.HourFormat = RTC_HOURFORMAT_24;
// 2. 1 MHz / 125 = 8000 Hz, then 8000 / 8000 = 1 Hz RTC tick
RTChandle.Init.AsynchPrediv = 125 - 1;
RTChandle.Init.SynchPrediv = 8000 - 1;
RTChandle.Init.OutPut = RTC_OUTPUT_DISABLE;
RTChandle.Init.OutPutPolarity = RTC_OUTPUT_POLARITY_HIGH;
RTChandle.Init.OutPutType = RTC_OUTPUT_TYPE_OPENDRAIN;
// do init
HAL_RTC_Init(&RTChandle);
// enable hardware
__HAL_RCC_RTC_ENABLE();
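As noted above, the HSE must already be running before any of this. A sketch of that prerequisite with HAL (assumptions: an 8 MHz crystal and the F4 HAL; the PLL/SYSCLK setup is left as whatever you already use):
// Sketch only: enable the HSE and the PWR clock before the RTC configuration above.
RCC_OscInitTypeDef osc = {0};
osc.OscillatorType = RCC_OSCILLATORTYPE_HSE;
osc.HSEState       = RCC_HSE_ON;          // crystal on OSC_IN/OSC_OUT
osc.PLL.PLLState   = RCC_PLL_NONE;        // leave PLL/SYSCLK handling as you already have it
if (HAL_RCC_OscConfig(&osc) != HAL_OK)
{
    // handle the error / timeout here
}
__HAL_RCC_PWR_CLK_ENABLE();               // needed before HAL_PWR_EnableBkUpAccess()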

stm32 external interrupt pin mode changing

I am designing an ESC with an STM32F103C8T6. In my design I am using BEMF circuitry (with an LM339 comparator) to detect the phase of the motor. From that circuitry I read 3 interrupt pins, but at run time I need to change the pin mode (for example from rising-edge to falling-edge detection), and I also need to disable the other 2 interrupt pins (which ones depends on the motor phase at that time) so that I do not pick up noise from the circuitry. How can I do that?
Thanks for your help,
Something like this to switch between falling/rising edge:
void isr_hallsensor(void) {
    if (hallsensor_edge_select) {
        //rising edge, magnet has left the detection zone.
        gpio_hall_sensor.Mode = GPIO_MODE_IT_FALLING;
        HW_GPIO_Init(HALLSENSOR_PORT, HALLSENSOR_PIN, &gpio_hall_sensor);
        hallsensor_edge_select = 0;
        __HAL_GPIO_EXTI_CLEAR_IT(HALLSENSOR_PIN);
    } else {
        //falling edge, magnet detected.
        gpio_hall_sensor.Mode = GPIO_MODE_IT_RISING;
        HW_GPIO_Init(HALLSENSOR_PORT, HALLSENSOR_PIN, &gpio_hall_sensor);
        hallsensor_edge_select = 1;
        __HAL_GPIO_EXTI_CLEAR_IT(HALLSENSOR_PIN);
    }
}
Something like this to enable an interrupt:
__HAL_TIM_CLEAR_IT(&htim16, TIM_IT_UPDATE);
HAL_NVIC_SetPriority(TIM1_UP_TIM16_IRQn, 15, 15);
HAL_NVIC_EnableIRQ(TIM1_UP_TIM16_IRQn);
Something like this to disable an interrupt:
HAL_NVIC_DisableIRQ(TIM6_DAC_IRQn);
This will at least get you started, this is for STM32L4.
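One caveat: on the STM32F103 several EXTI lines share one NVIC interrupt (lines 5..9 and 10..15), so disabling the IRQ in the NVIC silences all of them at once. To mute just one of the three sensor pins you can mask its EXTI line instead; a sketch using the CMSIS register names, with lines 6 and 7 standing in for whichever lines your pins actually use:
// Mask EXTI line 6 (stop its interrupt requests) and keep line 7 active.
EXTI->IMR &= ~EXTI_IMR_MR6;   // disable interrupt requests on line 6
EXTI->PR   =  EXTI_PR_PR6;    // clear any pending flag for that line
EXTI->IMR |=  EXTI_IMR_MR7;   // (re-)enable interrupt requests on line 7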

Problem AVR stuck and program counter lost ...?

I am facing a strange behavior. I am working on a project using an Atmel MCU (ATmega328P) with a huge number of strings, so I stored them in flash memory; at run time I read them from flash and send them via UART.
I don't know if this is the problem or not, because I have used the same technique before in other projects, but what is different here is that the number of strings is larger than before.
void PLL_void_UART_SendSrting_F(U8_t* RXBuffer, const char* str, U8_t UART_No)
{
    unsigned int _indx = 0;
    memset(RXBuffer, '\0', A9G_RX_Index); // Initialize the string
    RXBuffer[A9G_RX_Index - 1] = '\0';
    // cli();
    while ((RXBuffer[_indx] = pgm_read_byte(&(*str))))
    {
        str++;
        _indx++;
        _delay_ms(5);
    }
    // sei();
    PLL_void_UART_SendSrting(RXBuffer, 0);
}
But after a while the whole program gets stuck, and even after doing a hard reset it stays stuck; to get it working again I have to unplug and replug the power.
Notes:
- I am sure that the hard reset is working.
- I am using timers in the background as a system tick.
The code is unsafe; you do nothing to prevent a buffer overrun.
Consider this safer and simpler version:
void PLL_void_UART_SendString_F( U8_t* RXBuffer, const char* str, U8_t UART_No )
{
    unsigned index = 0 ;
    RXBuffer[A9G_RX_Index - 1] = '\0' ;
    while( index < A9G_RX_Index - 1 &&
           0 != (RXBuffer[index] = pgm_read_byte( &str[index] )) )
    {
        index++ ;
    }
    PLL_void_UART_SendSrting( RXBuffer, 0 ) ;
}
Even then you have to be certain that RXBuffer is appropriately sized and that str is nul-terminated.
Thank you for the support.
I found the issue: the watchdog timer keeps the MCU in reset even when I press hardware reset. This is because I assumed all registers and flags go back to their default values after a reset (see the WDT block diagram in the datasheet).
I was doing this in code when the MCU starts executing:
U8_t PLL_U8_System_Init()
{
    static U8_t SetFalg = 0;
    PLL_void_TimerInit(); // General Timer Initialize
    PLL_WDT_Init();       // Initialize WDT and clear WDRF
    wdt_enable(WDTO_8S);  // Enable WDT with an 8 s timeout
    ........
}
But once a WDT reset occurred, the CPU found the WDRF flag still set at startup, so it kept resetting forever until I did a power reset.
Solution
I have to clear the watchdog once at program start, before executing any other code, and then it works:
U8_t PLL_U8_System_Init()
{
    static U8_t SetFalg = 0;
    PLL_void_TimerInit(); // General Timer Initialize
    PLL_WDT_Init();       // Initialize WDT and clear WDRF
    wdt_enable(WDTO_8S);  // Enable WDT with an 8 s timeout
    ........
}
This is what is written in the datasheet:
Note: If the Watchdog is accidentally enabled, for example by a runaway pointer or brown-out condition, the device will be reset and the Watchdog Timer will stay enabled. If the code is not set up to handle the Watchdog, this might lead to an eternal loop of time-out resets. To avoid this situation, the application software should always clear the Watchdog System Reset Flag (WDRF) and the WDE control bit in the initialization routine, even if the Watchdog is not in use.
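The avr-libc documentation for <avr/wdt.h> shows the standard way to do this: clear MCUSR and disable the watchdog in an early init section, before main() runs, so a watchdog reset can never turn into a reset loop. A sketch along those lines (the mcusr_mirror variable is optional; it just preserves the reset cause for later inspection):
#include <avr/wdt.h>

// Runs from the .init3 section, i.e. before main(), so the WDT is tamed
// even if a watchdog reset has just occurred.
uint8_t mcusr_mirror __attribute__((section(".noinit")));

void wdt_init(void) __attribute__((naked, section(".init3")));
void wdt_init(void)
{
    mcusr_mirror = MCUSR;  // save the reset cause
    MCUSR = 0;             // clear WDRF (and the other reset flags)
    wdt_disable();         // stop the watchdog until the application re-enables it
}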

stm32f4 DMA does not always start after suspending

So this question is kind of a "sequel" of this one: Stm32f4: DMA + ADC Transfer pausing.
Again, I am trying to implement the following algorithm:
Initialize DMA with the ADCs in triple interleaved mode on one channel
Wait for an external interrupt
Suspend DMA transfer and the ADC
Send the buffered data from memory through USART in the interrupt
Resume the DMA and ADCs
Exit the interrupt, goto 2.
The DMA and ADC suspend and resume, but sometimes (in about 16% of interrupt calls) the resuming fails: the DMA just writes the first measurement from the ADCs and stops until the next interrupt, in which the DMA and ADC are restarted (since they are suspended and resumed again) and everything returns to normal until the next occurrence of the bug.
I've tried suspending DMA just like the Reference manual says:
In order to restart from the point where the transfer was stopped, the
software has to read the DMA_SxNDTR register after disabling the
stream by writing the EN bit in DMA_SxCR register (and then checking
that it is at ‘0’) to know the number of data items already collected.
Then:
– The peripheral and/or memory addresses have to be updated in order to adjust the address pointers
– The SxNDTR register has to be updated with the remaining number of data items to be transferred (the value read when the stream was disabled)
– The stream may then be re-enabled to restart the transfer from the point it was stopped
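In register terms, that RefMan procedure looks roughly like the sketch below (not the poster's code; adc_buffer, BUFFERSIZE, and the half-word element size are assumptions for illustration):
// Pause: clear EN, wait for the stream to actually stop, then note the position.
DMA2_Stream0->CR &= ~DMA_SxCR_EN;
while (DMA2_Stream0->CR & DMA_SxCR_EN) { }
uint32_t remaining = DMA2_Stream0->NDTR;        // data items not yet transferred

// ... read out / send the buffered data here ...

// Resume from the same point: point M0AR past the data already stored,
// reload NDTR with the remaining count, then re-enable the stream.
DMA2_Stream0->M0AR = (uint32_t)adc_buffer + (BUFFERSIZE - remaining) * sizeof(uint16_t);
DMA2_Stream0->NDTR = remaining;
DMA2_Stream0->CR  |= DMA_SxCR_EN;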
The only actual difference is the NDTR value written while resuming the DMA. In my case it is buffer_size; in the RefMan case it is the value read while pausing the DMA. In the RefMan case the DMA never starts again after pausing. In my case, as I said above, it starts, but not always.
How can I prevent this from happening?
The interrupt code looks like this currently:
void EXTI4_IRQHandler(void) {
    uint16_t temp = DMA_GetFlagStatus(DMA2_Stream0, DMA_FLAG_TEIF0);
    if (EXTI_GetITStatus(EXTI_Line4) != RESET) {
        uint16_t fPoint1 = 0;
        uint16_t fPoint2 = 0;
        //Some delay using the TIM2
        TIM_SetCounter(TIM2, 0);
        TIM_Cmd(TIM2, ENABLE);
        //Measure the first point NDTR
        fPoint1 = DMA2_Stream0->NDTR;
        while (TIM_GetITStatus(TIM2, TIM_IT_Update) != SET) {};
        //Measure the second point here.
        fPoint2 = DMA2_Stream0->NDTR;
        if (fPoint1 == fPoint2) {
            //The NDTR does not change!
            //If it does not change, it is stuck at buffer_size - 1
        }
        //Disable the timer
        TIM_ClearITPendingBit(TIM2, TIM_IT_Update);
        TIM_Cmd(TIM2, DISABLE);
        DMA_Cmd(DMA2_Stream0, DISABLE);
        //Wait until the DMA will turn off
        while ((DMA2_Stream0->CR & (uint32_t)DMA_SxCR_EN) != 0x00) {};
        //Turn off all ADCs
        ADC_Cmd(ADC1, DISABLE);
        ADC_Cmd(ADC2, DISABLE);
        ADC_Cmd(ADC3, DISABLE);
        //Send all the data here
        //Turn everything back on
        //Turn the DMA ON again
        DMA_SetCurrDataCounter(DMA2_Stream0, BUFFERSIZE);
        DMA_Cmd(DMA2_Stream0, ENABLE);
        while ((DMA2_Stream0->CR & (uint32_t)DMA_SxCR_EN) == 0x00) {};
        //See note # RefMan (Rev. 12), p. 410
        ADC->CCR &= ~((uint32_t)(0x000000FF));
        ADC->CCR |= ADC_TripleMode_Interl;
        ADC_Cmd(ADC1, ENABLE);
        ADC_Cmd(ADC2, ENABLE);
        ADC_Cmd(ADC3, ENABLE);
        while ((ADC1->CR2 & (uint32_t)ADC_CR2_ADON) == 0) {};
        while ((ADC2->CR2 & (uint32_t)ADC_CR2_ADON) == 0) {};
        while ((ADC3->CR2 & (uint32_t)ADC_CR2_ADON) == 0) {};
        ADC_SoftwareStartConv(ADC1);
    }
    EXTI_ClearITPendingBit(EXTI_Line4);
}
I've found the solution myself. I was thinking it was a DMA problem; however, it turned out to be an ADC problem.
The OVR flag in the ADCx->SR register was always set when the transfer was "stuck". So I added an interrupt handler for the ADC overrun condition and restarted the DMA and ADCs in it. The problem is solved now.
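For reference, a sketch of such an overrun handler with the Standard Peripheral Library (the same library the question's code uses); the restart sequence inside is only illustrative and mirrors the one in the EXTI handler above:
// During setup: enable the ADC overrun interrupt and its NVIC line.
ADC_ITConfig(ADC1, ADC_IT_OVR, ENABLE);
NVIC_EnableIRQ(ADC_IRQn);

void ADC_IRQHandler(void) {
    if (ADC_GetITStatus(ADC1, ADC_IT_OVR) != RESET) {
        ADC_ClearITPendingBit(ADC1, ADC_IT_OVR);
        // Re-arm the transfer: reload NDTR, re-enable the DMA stream,
        // then restart the conversions.
        DMA_Cmd(DMA2_Stream0, DISABLE);
        while (DMA2_Stream0->CR & DMA_SxCR_EN) {};
        DMA_SetCurrDataCounter(DMA2_Stream0, BUFFERSIZE);
        DMA_Cmd(DMA2_Stream0, ENABLE);
        ADC_SoftwareStartConv(ADC1);
    }
}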

STM32F4 PLL Precision

I'm trying to configure the clocks on an STM32F4 Discovery for precise time measurement. I have this configuration:
int main(void)
{
    NVIC_InitTypeDef nvici;
    GPIO_InitTypeDef gpioi;
    TIM_TimeBaseInitTypeDef timtbi;
    SystemInit();
    RCC_HSEConfig(RCC_HSE_ON);
    RCC_PLLConfig(RCC_PLLCFGR_PLLSRC_HSE, 8, 320, 8, 8);
    RCC_PLLCmd(ENABLE);
    RCC_SYSCLKConfig(RCC_SYSCLKSource_PLLCK);
    RCC_HCLKConfig(RCC_SYSCLK_Div1);
    RCC_PCLK1Config(RCC_HCLK_Div1);
    RCC_PCLK2Config(RCC_HCLK_Div1);
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOD, ENABLE);
    nvici.NVIC_IRQChannel = TIM2_IRQn;
    nvici.NVIC_IRQChannelPreemptionPriority = 0;
    nvici.NVIC_IRQChannelSubPriority = 1;
    nvici.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&nvici);
    gpioi.GPIO_Pin = GPIO_Pin_15;
    gpioi.GPIO_Mode = GPIO_Mode_OUT;
    gpioi.GPIO_OType = GPIO_OType_PP;
    gpioi.GPIO_Speed = GPIO_Speed_100MHz;
    gpioi.GPIO_PuPd = GPIO_PuPd_NOPULL;
    GPIO_Init(GPIOD, &gpioi);
    RCC_APB1PeriphClockCmd(RCC_APB1Periph_TIM2, ENABLE);
    timtbi.TIM_Period = 20000000;
    timtbi.TIM_Prescaler = 0;
    timtbi.TIM_ClockDivision = 0;
    timtbi.TIM_CounterMode = TIM_CounterMode_Up;
    timtbi.TIM_RepetitionCounter = 0;
    TIM_TimeBaseInit(TIM2, &timtbi);
    TIM_ClearITPendingBit(TIM2, TIM_IT_Update);
    TIM_ITConfig(TIM2, TIM_IT_Update, ENABLE);
    TIM_Cmd(TIM2, ENABLE);
    GPIO_SetBits(GPIOD, GPIO_Pin_15);
    while (1)
    {
    }
}
void TIM2_IRQHandler()
{
    TIM_ClearITPendingBit(TIM2, TIM_IT_Update);
    GPIO_ToggleBits(GPIOD, GPIO_Pin_15);
}
With this I should have TIM2 sourced with a 20 MHz clock, but it appears to run at a different frequency (about 10-30% off). This problem appears for all other PLL configurations I tried, but when I use the HSE as SYSCLK directly it works just fine. Am I doing something wrong, or is it the PLL that isn't reliable?
Can't say with 100% certainty whether that's the problem, but after enabling the HSE using RCC_HSEConfig(), you should call RCC_WaitForHSEStartUp() since it takes a while for the HSE to start oscillating, and check the return code to make sure the call was successful and the HSE actually initialized.
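A sketch of that startup order with the Standard Peripheral Library (the calls are real SPL/CMSIS functions; the error handling is only illustrative). The point is not to switch SYSCLK to the PLL until both the HSE and the PLL report ready:
RCC_HSEConfig(RCC_HSE_ON);
if (RCC_WaitForHSEStartUp() != SUCCESS)
{
    // HSE failed to start - stay on the HSI or handle the error
    while (1) {}
}
RCC_PLLConfig(RCC_PLLCFGR_PLLSRC_HSE, 8, 320, 8, 8);
RCC_PLLCmd(ENABLE);
while (RCC_GetFlagStatus(RCC_FLAG_PLLRDY) == RESET) {}   // wait for PLL lock
// note: FLASH wait states may also need adjusting for a higher SYSCLK
RCC_SYSCLKConfig(RCC_SYSCLKSource_PLLCK);
while (RCC_GetSYSCLKSource() != 0x08) {}                 // 0x08 = PLL used as SYSCLK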
Also, if you're using the system_stm32f4xx.c file that comes with the Standard Peripheral Library, you can scrap your PLL initialization code and just use the code that's called by SystemInit(). There are a few #defines that control the PLL configurations, near the beginning of the file (#define PLL_M, #define PLL_N and so on; their purpose should be self-evident). I always initialize my clocks using the code there, and they're always precise to within the crystal's accuracy. Note that this code assumes a 25 MHz oscillator by using PLL_M equal to 25, so you should set it to 8 for use with the STM32F4DISCOVERY board -- exactly as you've already done in your code. I'm not suggesting this because I have any prejudices against your code, but the code there has been tested far and wide, and in my experience it can be trusted.
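For example, the relevant block near the top of system_stm32f4xx.c looks roughly like this (values shown for an 8 MHz crystal and a 168 MHz SYSCLK on the F407; check your copy of the file, since the defaults differ between templates):
/* PLL_VCO = (HSE_VALUE / PLL_M) * PLL_N, SYSCLK = PLL_VCO / PLL_P */
#define PLL_M      8     /* 8 MHz crystal on the STM32F4DISCOVERY */
#define PLL_N      336
#define PLL_P      2     /* (8 MHz / 8) * 336 / 2 = 168 MHz */
#define PLL_Q      7     /* 48 MHz clock for USB OTG FS / SDIO / RNG */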

mbed not sleep with RTOS

I want to create a low-power application with mbed (LPC1768) and have been following the tutorial by Jim Hamblen at https://mbed.org/cookbook/Power-Management and also http://mbed.org/users/no2chem/notebook/mbed-power-controlconsumption/
I was able to wake the mbed from Sleep() with a GPIO interrupt, a UART interrupt, and a Ticker. I use the PowerControl library.
Here is my code:
#include "mbed.h"
#include "PowerControl/PowerControl.h"
#include "PowerControl/EthernetPowerControl.h"
// Need PowerControl *.h files from this URL
// http://mbed.org/users/no2chem/notebook/mbed-power-controlconsumption/
// Function to power down magic USB interface chip with new firmware
#define USR_POWERDOWN (0x104)
int semihost_powerdown() {
    uint32_t arg;
    return __semihost(USR_POWERDOWN, &arg);
}
DigitalOut myled1(LED1);
DigitalOut myled2(LED2);
DigitalOut myled3(LED3);
DigitalOut myled4(LED4);
bool rx_uart_irq = false;
Serial device(p28, p27); // tx, rx
InterruptIn button(p5);
// Circular buffers for serial TX and RX data - used by interrupt routines
const int buffer_size = 255;
// might need to increase buffer size for high baud rates
char tx_buffer[buffer_size];
char rx_buffer[buffer_size];
// Circular buffer pointers
// volatile makes read-modify-write atomic
volatile int tx_in=0;
volatile int tx_out=0;
volatile int rx_in=0;
volatile int rx_out=0;
// Line buffers for sprintf and sscanf
char tx_line[80];
char rx_line[80];
void Rx_interrupt();
void blink() {
    myled2 = !myled2;
}
int main() {
    //int result;
    device.baud(9600);
    device.attach(&Rx_interrupt, Serial::RxIrq);
    // Normal mbed power level for this setup is around 690mW
    // assuming 5V used on Vin pin
    // If you don't need networking...
    // Power down Ethernet interface - saves around 175mW
    // Also need to unplug network cable - just a cable sucks power
    PHY_PowerDown();
    myled2 = 0;
    // If you don't need the PC host USB interface....
    // Power down magic USB interface chip - saves around 150mW
    // Needs new firmware (URL below) and USB cable not connected
    // http://mbed.org/users/simon/notebook/interface-powerdown/
    // Supply power to mbed using Vin pin
    //result = semihost_powerdown();
    // Power consumption is now around half
    // Turn off clock enables on unused I/O Peripherals (UARTs, Timers, PWM, SPI, CAN, I2C, A/D...)
    // To save just a tiny bit more power - most are already off by default in this short code example
    // See PowerControl.h for I/O device bit assignments
    // Don't turn off GPIO - it is needed to blink the LEDs
    Peripheral_PowerDown( ~( LPC1768_PCONP_PCUART0 |
                             LPC1768_PCONP_PCUART2 |
                             0));
    // use Ticker interrupt and Sleep instead of a wait for time delay - saves up to 70mW
    // Sleep halts and waits for an interrupt instead of executing instructions
    // power is saved by not constantly fetching and decoding instructions
    // Exact power level reduction depends on the amount of time spent in Sleep mode
    //blinker.attach(&blink, 0.05);
    //button.rise(&blink);
    while (1) {
        myled1 = 0;
        printf("bye\n");
        Sleep();
        if (rx_uart_irq == true) {
            printf("wake from uart irq\n");
        }
        myled1 = 1;
    }
}
// Interrupt Routine to read in data from serial port
void Rx_interrupt() {
    myled2 = !myled2;
    rx_uart_irq = true;
    uint32_t IRR0 = LPC_UART2->IIR;
    while ((device.readable()) && (((rx_in + 1) % buffer_size) != rx_out)) {
        rx_buffer[rx_in] = LPC_UART2->RBR;
        rx_in = (rx_in + 1) % buffer_size;
    }
}
Here is the problem: Sleep() doesn't put the mbed to sleep when the mbed-rtos library is added. Even when I don't use any function calls from the rtos library, Sleep() doesn't work.
My explanation: probably the rtos has a timer running in the background and it generates an interrupt every now and then. (But that doesn't quite make sense, because I haven't used any function or object from the rtos library.)
My question:
Has anyone made the Sleep() function work with the rtos? If yes, please point me in the right direction, or if you have the solution, please share.
I'm not sure if the Sleep() function is designed for RTOS use, but I doubt it. Someone with better knowledge in mbed-rtos could probably tell for sure, but I suspect that IRQ handling in the RTOS could cause the problem. If Sleep() relies on WFE then the MCU will sleep if there is no pending interrupt flag. In a super loop design you (should) have full control over this; with an RTOS you don't.
I suggest using Thread::wait() instead, which should have full knowledge about what the RTOS does. Can't tell if it causes a sleep, but I expect no less.
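A minimal sketch of that suggestion (assuming the mbed-rtos library is linked; Thread::wait() takes milliseconds):
#include "mbed.h"
#include "rtos.h"

DigitalOut myled1(LED1);

int main() {
    while (1) {
        myled1 = !myled1;
        // Instead of calling Sleep() directly, let the RTOS idle between
        // ticks; its idle thread can then enter a low-power wait for us.
        Thread::wait(500);   // milliseconds
    }
}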
I used the following library once and it worked flawlessly. I am not sure if it would work with mbed 5, but it's worth a try.
https://os.mbed.com/users/no2chem/code/PowerControl/