STM32L4 LSE stops oscillating on VBAT

I'm currently having a problem with the RTC clock (LSE) on an STM32L433.
When VDD is supplied and the RTC is configured, the clock works as expected.
As soon as I remove VDD (VBAT is provided by a coin cell), the LSE crystal stops oscillating, hence the RTC problem: the backup registers are preserved, but the time is frozen for the period VDD is off.
What could be the cause of the problem? Is there any PWR register setting that I'm missing?
Thanks in advance for any response.
RTC setup code (the register library is libopencm3):
void rtc_setup(time_struct time, date_struct date)
{
usart_puts("1\n");
rtc_wkup_set = false;
/* cause a backup domain reset to select the clock source for RTC */
RCC_BDCR |= RCC_BDCR_BDRST;
RCC_BDCR &= ~RCC_BDCR_BDRST;
pwr_disable_backup_domain_write_protect();
RCC_BDCR &= ~(RCC_BDCR_RTCSEL_MASK << RCC_BDCR_RTCSEL_SHIFT);
RCC_BDCR |= RCC_BDCR_RTCEN | (RCC_BDCR_RTCSEL_LSE << RCC_BDCR_RTCSEL_SHIFT) | (RCC_BDCR_LSEDRV_HIGH << RCC_BDCR_LSEDRV_SHIFT) | RCC_BDCR_LSEON;
//TODO: Add timeout for RTC FAIL
while ((RCC_BDCR & RCC_BDCR_LSERDY) == 0)
;
usart_puts("2\n");
/* enable RTC */
rtc_unlock();
RTC_ISR |= RTC_ISR_INIT;
while (!(RTC_ISR & RTC_ISR_INITF))
;
RTC_CR &= ~(RTC_CR_FMT);
RTC_DR = (uint32_t)date;
RTC_TR = (uint32_t)time;
RTC_ISR &= ~(RTC_ISR_INIT);
usart_puts("3\n");
rtc_lock();
}

The issue was a short on one of the legs of the STM32, caused by flux residue.
Issue solved.

Related

STM32: USART alternative function not working

I'm using the STM32F767ZI and I'm trying to send test data over the USART peripheral. I've done the same configuration I always do on any device, but this time it does not output anything. I cannot find the mistake; can someone help?
The clock setup
// Enables TIM8 (Delay), USART1 (STDOUT)
RCC->APB2ENR |= (RCC_APB2ENR_TIM8EN
| RCC_APB2ENR_USART1EN);
// Enables GPIOA, GPIOB, GPIOC, GPIOD, GPIOE, GPIOF, DMA1
RCC->AHB1ENR |= (RCC_AHB1ENR_GPIOAEN
| RCC_AHB1ENR_GPIOBEN
| RCC_AHB1ENR_GPIOCEN
| RCC_AHB1ENR_GPIODEN
| RCC_AHB1ENR_GPIOEEN
| RCC_AHB1ENR_GPIOFEN
| RCC_AHB1ENR_DMA1EN);
The initialization code
// Makes A8 (TX) and A9 (RX) Alternative Function
GPIOA->MODER &= ~(GPIO_MODER_MODER8_Msk
| GPIO_MODER_MODER9_Msk);
GPIOA->MODER |= ((0x2 << GPIO_MODER_MODER8_Pos)
| (0x2 << GPIO_MODER_MODER9_Pos));
// Selects AF7 for both A8 (TX) and A9 (RX).
GPIOA->AFR[1] &= ~(GPIO_AFRH_AFRH0_Msk
| GPIO_AFRH_AFRH1_Msk);
GPIOA->AFR[1] |= ((7 << GPIO_AFRH_AFRH0_Pos)
| (7 << GPIO_AFRH_AFRH1_Pos));
// Selects very high speed for A8 (TX) and A9 (RX)
GPIOA->OSPEEDR &= ~(GPIO_OSPEEDR_OSPEEDR8_Msk
| GPIO_OSPEEDR_OSPEEDR9_Msk);
GPIOA->OSPEEDR |= ((0x3 << GPIO_OSPEEDR_OSPEEDR8_Pos)
| (0x3 << GPIO_OSPEEDR_OSPEEDR9_Pos));
// Calculates and sets the baud rate.
m_USART->BRR = (((2 * clk) + baud) / (2 * baud));
// Configures the USART peripheral further.
m_USART->CR1 = USART_CR1_TE // Transmit Enable
| USART_CR1_RE // Receive Enable
| USART_CR1_UE; // USART Enable (EN)
and the write function:
*reinterpret_cast<uint8_t *>(m_USART->TDR) = c;
while (!(m_USART->ISR & USART_ISR_TC));
The problem you have is that you set the wrong pins; it should be PA9 & PA10.
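A rough sketch of the corrected pin setup (assuming the same CMSIS register style as your code; PA9 and PA10 live in AFR[1] as the AFRH1 and AFRH2 fields):
// PA9 (TX) and PA10 (RX) as alternate function
GPIOA->MODER &= ~(GPIO_MODER_MODER9_Msk | GPIO_MODER_MODER10_Msk);
GPIOA->MODER |= ((0x2 << GPIO_MODER_MODER9_Pos) | (0x2 << GPIO_MODER_MODER10_Pos));
// AF7 (USART1) for PA9 and PA10
GPIOA->AFR[1] &= ~(GPIO_AFRH_AFRH1_Msk | GPIO_AFRH_AFRH2_Msk);
GPIOA->AFR[1] |= ((7 << GPIO_AFRH_AFRH1_Pos) | (7 << GPIO_AFRH_AFRH2_Pos));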
Your write is also wrong.
It should be:
*reinterpret_cast<volatile uint8_t *>(&m_USART->TDR) = c;
or C style:
*(volatile uint8_t *)(&m_USART->TDR) = c;
You are also checking the wrong flag.
TC matters if you want to disable the peripheral after the transmission has finished. In normal conditions, use the TXE flag instead:
while (!(m_USART->ISR & USART_ISR_TXE));
Your reinterpret_cast is unnecessary and incorrect.
I assume you actually wrote *reinterpret_cast<uint8_t *>(&m_USART->TDR) = c; but that is still wrong.
The person who wrote the standard device header has taken great care to make sure that USARTx->TDR already has the correct type; I strongly advise you to trust them and not cast it! In this particular case they will have made it volatile, and you have not, so it is possible that the compiler thinks it can optimize by not bothering to perform a write to something that you never read back.
The reason you probably got away with this on other STM32 parts is that their UARTs have a single DR for both transmit and receive, so reading DR for reception prevented the compiler from eliminating the write.
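A minimal sketch of the corrected write, with the cast removed and polling TXE as suggested in the other answer (assuming m_USART points at the CMSIS USART_TypeDef):
while (!(m_USART->ISR & USART_ISR_TXE)); // wait until the transmit data register is empty
m_USART->TDR = c; // plain assignment; TDR is already declared volatile in the device header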
Also, I don't know about this part, but many STM32s need an extra cycle between writing to RCC->xxxENR and using the respective peripheral; this is usually done with a read-back of the same register, e.g.:
RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;
(void)RCC->AHB1ENR;
// now safe to access GPIOA registers

STM32F405 bare metal spi slave - MISO data messed up sometimes

I've set up two STM32 boards, one as SPI master, the other as slave.
I write directly to registers, without any framework.
Master-to-slave communication works perfectly, but the slave sometimes sends garbage.
I first tried interrupts, but the slave would always send garbage and often receive garbage as well.
Now I have implemented DMA. This works much better; the slave now always receives correct data. But sending is still an issue.
If the transmission is 3 to 5 bytes long, the data from the slave is correct in about 95% of cases.
If the transmission is longer than 5 bytes, then after the 4th or 5th byte there is just random byte garbage, although the first 4 bytes are nearly always (95%) correct.
The signals are clean, I checked them with an oscilloscope. The data which the master receives shows up properly on MISO. So I guess the slave somehow writes garbage into the SPI DR, or the data register gets messed up.
I know SPI slaves on non-FPGAs are tricky, but this really is unexpected...
Can anyone point me in a direction? I'm desperate and thankful for any bit of advice.
This is the code
void DMA1_Stream3_IRQHandler( void )
{
if (spi2_slave)
{
while( (spi_spc->SR & (1<<1)) == 0 ); // must wait for TXE to be set!
while( spi_spc->SR & (1<<7) ); // must wait for busy to clear!
DMA1_Stream3->CR &= ~(1<<0); // Disable stream 3
while((DMA1_Stream3->CR & (1<<0)) != 0); // Wait till disabled
DMA1_Stream3->NDTR = 3; // Amount of data to receive
DMA1_Stream3->CR |= (1<<0); // Enable DMA1_Stream3 (TX)
DMA1->LIFCR = (1<<27); // clear Transfer complete in Stream 3
// fire SPI2 finished CBF
if (spi2_xfer_done != 0)
{
if (spi2_xfer_len > 0)
{
spi2_xfer_done(spi2_rx_buffer, spi2_xfer_len);
}
}
}
else
{
while( spi_spc->SR & (1<<7) ); // must wait for busy to clear!
GPIOB->ODR |= (1<<12); // Pull up SS Pin
spi_spc->CR2 &= ~((1<<0) | (1<<1)); // Disable TX and RX DMA request lines
spi_spc->CR1 &= ~(1<<6); // 6:disableSPI
DMA1->LIFCR = (1<<27); // clear Transfer complete in Stream 3
// fire SPI2 finished CBF
if (spi2_xfer_done != 0)
{
spi2_xfer_done(spi2_rx_buffer, spi2_xfer_len);
}
while( (spi_spc->SR & (1<<1)) == 0 ); // must wait for TXE to be set!
}
}
// For Slave TX DMA
void DMA1_Stream4_IRQHandler( void )
{
DMA1_Stream4->CR &= ~(1<<0); // Disable stream 4
while((DMA1_Stream4->CR & (1<<0)) != 0); // Wait till disabled
spi_spc->CR2 &= ~(1<<1); // Disable TX DMA request lines
DMA1->HIFCR = (1<<5); // clear Transfer complete in Stream 4
}
void mcu_spi_spc_init_slave(void (*xfer_done)(uint8_t* data, uint32_t dlen))
{
spi2_slave = 1;
spi2_xfer_done = xfer_done;
for (int c=0;c<SPI2_BUFFER_SIZE;c++)
{
spi2_tx_buffer[c] = 'X';
spi2_rx_buffer[c] = 0;
}
// Enable the SPI2 peripheral clock
RCC->APB1ENR |= RCC_APB1ENR_SPI2EN;
// Enable port B Clock
RCC->AHB1ENR |= (1<<1);
// Enable DMA1 Clock
RCC->AHB1ENR |= RCC_AHB1ENR_DMA1EN;
// Reset the SPI2 peripheral to initial state
RCC->APB1RSTR |= RCC_APB1RSTR_SPI2RST;
RCC->APB1RSTR &= ~RCC_APB1RSTR_SPI2RST;
/*
* SPC SPI2 SS: Pin33 PB12
* SPC SPI2 SCK: Pin34 PB13
* SPC SPI2 MISO: Pin35 PB14
* SPC SPI2 MOSI: Pin36 PB15
*/
// Configure the SPI2 GPIO pins
GPIOB->MODER |= (2<<24) | (2<<26) | (2<<28) | (2<<30);
GPIOB->PUPDR |= (2<<26) | (2<<28) | (2<<30);
GPIOB->OSPEEDR |= (3<<24) | (3<<26) | (3<<28) | (3<<30); // "very High speed"
GPIOB->AFR[1] |= (5<<16) | (5<<20) | (5<<24) | (5<<28); // Alternate function 5 (SPI2)
//-------------------------------------------------------
// Clock Phase and Polarity = 0
// CR1 = LSByte to MSByte, MSBit first
// DFF = 8bit
// 6 MHz Clock (48MHz / 8)
spi_spc->CR1 = (7<<3) | (0<<2) | (0<<1) | (1<<0) // 0:CPHA, 1:CPOL, 2:MASTER, 3:CLOCK_DIVIDER
| (0<<7) | (0<<11); // 7:LSB first, 11:DFF(8Bit)
spi_spc->CR2 = (0<<2) | (1<<1) | (1<<0); // 2:SSOE, 0:Enable RX DMA IRQ, 1:Enable TX DMA IRQ
// DMA config (Stream3: RX p2mem, Stream4: TX mem2p)
// DMA for RX Stream 3 Channel 0
DMA1_Stream3->CR &= ~(1<<0); // EN = 0: disable and reset
while((DMA1_Stream3->CR & (1<<0)) != 0); // Wait
DMA1_Stream4->CR &= ~(1<<0); // EN = 0: disable and reset
while((DMA1_Stream4->CR & (1<<0)) != 0); // Wait
DMA1->LIFCR = (0x3D<<22); // clear all ISRs related to Stream 3
DMA1->HIFCR = (0x3D<< 0); // clear all ISRs related to Stream 4
DMA1_Stream3->PAR = (uint32_t) (&(spi_spc->DR)); // Peripheral address
DMA1_Stream3->M0AR = (uint32_t) spi2_rx_buffer; // Memory address
DMA1_Stream3->NDTR = 3; // Amount of data to receive
DMA1_Stream3->FCR &= ~(1<<2); // ENABLE Direct mode by CLEARING Bit 2
DMA1_Stream3->CR = (0<<25) | // 25:Channel selection(0)
(1<<10) | // 10:increment mem_ptr,
(0<<9) | // 9: Do not increment periph ptr
(0<<6) | // 6: Dir(P -> Mem)
(1<<4); // 4: finish ISR
// DMA for TX Stream 4 Channel 0
DMA1_Stream4->PAR = (uint32_t) (&(spi_spc->DR)); // Peripheral address
DMA1_Stream4->M0AR = (uint32_t) spi2_tx_buffer; // Memory address
DMA1_Stream4->NDTR = 1; // Amount of data to send (dummy)
DMA1_Stream4->FCR &= ~(1<<2); // ENABLE Direct mode by CLEARING Bit 2
DMA1_Stream4->CR = (0<<25) | // 25:Channel selection(0)
(1<<10) | // 10:increment mem_ptr,
(0<<9) | // 9: Do not increment periph ptr
(1<<6) | // 6: Dir(Mem -> P)
(1<<4);
// Setup the NVIC to enable interrupts.
// Use 4 bits for 'priority' and 0 bits for 'subpriority'.
NVIC_SetPriorityGrouping( 0 );
uint32_t pri_encoding = NVIC_EncodePriority( 0, 1, 0 );
NVIC_SetPriority( DMA1_Stream4_IRQn, pri_encoding );
NVIC_EnableIRQ( DMA1_Stream4_IRQn );
NVIC_SetPriority( DMA1_Stream3_IRQn, pri_encoding );
NVIC_EnableIRQ( DMA1_Stream3_IRQn );
DMA1_Stream3->CR |= (1<<1); // Enable DMA1_Stream3 (RX)
spi_spc->CR1 |= (1<<6); // 6:EnableSPI
}
In the future the system has to send and receive roughly 500 bytes.
So, I solved it. It was a whole bunch of things. Also, my assumption in the question was wrong: my slave did not receive/send valid data.
The signals were shown as clean by the oscilloscope, but the scope itself was adding noise to the lines that was not visible on screen. Not measuring the lines helped.
I put 100 ohm series resistors close to the master pins. That did not work; out of desperation I put the resistors close to the slave instead, and suddenly I got valid data. (This had been the main culprit all along.)
Following the comment of Ashley Miller, I implemented a circular buffer where I always send a fixed length every time, so the slave knows exactly what to expect. This mitigated errors that could be produced when switching off or resetting the DMA shortly after the transmission.
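A minimal sketch of that fixed-length idea, in the same register style as the question (FRAME_LEN is a placeholder constant, not the exact value I used):
#define FRAME_LEN 16u // placeholder fixed frame size used for every transfer
DMA1_Stream3->CR &= ~(1<<0); // disable RX stream 3
while ((DMA1_Stream3->CR & (1<<0)) != 0); // wait until it is really disabled
DMA1->LIFCR = (0x3D<<22); // clear all stream 3 flags
DMA1_Stream3->NDTR = FRAME_LEN; // always expect exactly one full frame
DMA1_Stream3->CR |= (1<<0); // enable RX stream 3 again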
The UART tricked me as well. When getting too much data at once (as little as 20 or 30 bytes!), my terminal program glitched and threw the bytes around randomly. So part of the problem was just that... I'm using GtkTerm, for those who are interested.
Clock mode CPOL = 0 and CPHA = 0 doesn't work at all. I set both master and slave to the same setting and the slave just received garbage. If I loop the master back to itself (connect MISO to MOSI, i.e. exclude the slave), it works regardless of clock mode.
This seems to stem from a timing issue, where the slave has to react too fast and can't handle even the slowest possible speed (approx. 100 kHz). I did not go into details on this.
I hope I could help someone with this.

STM32F103 SPI Master Slave Receive problem

I'm trying to make a master and a slave STM32F103 (Blue Pills) communicate, but I am having trouble with receiving. When I connect my master to a logic analyser, I can see MOSI as in the picture.
Logic Analyser
In the picture, MOSI is sending the letter "Y", but not all clock pulses are the same. (I don't know if this is the reason the communication fails.)
Here are my schematic and my code, simplified as much as I can.
Master Code:
int i;
RCC ->APB2ENR |= 0x00001004; //SPI1,GPIOA clock en
GPIOA ->CRL &= 0x00000000;
GPIOA ->CRL |= 0xb0b33333;
SPI1->CR1 = SPI_CR1_SSM| SPI_CR1_SSI| SPI_CR1_MSTR|SPI_CR1_BR_2;
SPI1->CR1 |= SPI_CR1_SPE; // enable SPI
while(1){
SPI1 -> DR = 'A';
for(int i = 0 ;i<400000;i++);
while( !(SPI1->SR & SPI_SR_TXE) ); // wait until transmit buffer empty
}
and Slave
int i;
RCC ->APB2ENR |= 0x0000100c; //SPI1,GPIOA,GPIOB clock en
GPIOB ->CRH &= 0x00000000;
GPIOB ->CRH |= 0x33333333;
GPIOA ->CRL &= 0x00000000;
GPIOA ->CRL |= 0x4b443333;
GPIOA ->CRH &= 0x00000000;
GPIOA ->CRH |= 0x33333333;
SPI1->CR1 = SPI_CR1_SSM| SPI_CR1_SSI| SPI_CR1_BR_2;
SPI1->CR1 |= SPI_CR1_SPE; // enable SPI
SPI1->CR1 &=~SPI_CR1_MSTR; //disable master
for(int c=0;c<5;c++){
LCD_INIT(cmd[c]);
}
while(1){
while( !(SPI1->SR & SPI_SR_RXNE));
char a = SPI1 ->DR;
for (i=0;i<400000;i++);
LCD_DATA(a);
for (i=0;i<400000;i++);
}
}
My Schematic:
Schematic
The problem is that the slave is not receiving any data.
It gets stuck in the loop while( !(SPI1->SR & SPI_SR_RXNE));
First, what are your HCLK and APB2 bus frequencies? If I'm not mistaken, you seem to use (fPCLK / 32) for the SPI clock, and your logic analyzer shows a ~2-3 MHz clock. If your APB2 frequency is higher than the 72 MHz limit, you may experience clock problems.
In the slave, you use SSM (software slave management) and activate SSI (internal slave select). The name of the SSI bit is misleading: It mimics the physical NSS pin. So when SSI = 1, the slave is not selected. This is probably the reason why the slave ignores the incoming bytes.
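As a minimal sketch of the slave-side fix, in the same register style as the question (SSI deliberately left at 0 so the slave sees NSS as low, i.e. selected):
SPI1->CR1 = SPI_CR1_SSM | SPI_CR1_BR_2; // SSM set, SSI left at 0 -> slave selected
SPI1->CR1 &= ~SPI_CR1_MSTR; // slave mode (MSTR = 0)
SPI1->CR1 |= SPI_CR1_SPE; // enable SPI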

stm32l476 ADC not ready

I am trying to bring up one of the ADCs on the STM32L476 Nucleo board. I think I have things configured OK, but I must be missing a step. I know this can be done using the HAL API and CubeMX, but I prefer register-level access when bringing up a new board. Here is my code; I think it's commented enough to be understood. I stripped out the rest of the code to keep it simple.
The problem I don't understand is that when the code starts the while loop, the ADC is not ready; that is, bit 0 of ADC1->ISR (ADRDY) is not set and never gets set. I have confirmed, using Keil, that the bits are set where I think they should be.
Can anyone spot what is missing?
#include <stm32l4xx.h>
#include <stdio.h>
#ifdef __cplusplus
extern "C"
#endif
int main(void)
{
uint32_t adcResult = 0;
/* Configure the clocks - using MSI as SYSCLK #16MHz */
RCC->CR &= 0xFFFFFF07; //Clear ~MSIRANGE bits and MSIRGSEL bit
RCC->CR |= 0x00000089; //Set MSI to 16MHz and MSIRGSEL bit
char *dataPtr = NULL;
//init ADC1
ADC1->CR &= 0xDFFFFFFF; //Take ADC out of deep power down - i break at this point to allow enough time - doesn't help
ADC1->CR |= 0x10000000; //Enable ADC1 votage regulator
RCC->AHB2ENR |= 0x00002001; //Enable the ADC clock, and GPIOA clk
GPIOA->ASCR |= 0x00000001; //Connect analog switch to GPIOA[0]
GPIOA->MODER |= 0x00000003; //Set A0 for analog input mode
ADC1->ISR |= 0x00000001; //Clear the ADRDY bit in the ADCx_ISR register by writing ‘1’.
ADC1->SQR1 |= 0x00000040; //Set for a sequence of 1 conversion on CH0
while (1)
{
ADC1->CR |= 0x00000004; //Convst
while(!(ADC1->ISR & 0x4));
adcResult = ADC1->DR;
sprintf(dataPtr, "%d", adcResult);
}
}
I solved this - finally. In case anyone gets into the same place: I had expected SYSCLK to be the ADC clock source, but this needs to be set up in RCC->CCIPR bits [29:28] (the ADCSEL field).
It's the little things...
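For reference, a minimal sketch of that missing step (assuming the CMSIS device header; ADCSEL = 11 selects the system clock on the L4):
RCC->CCIPR &= ~RCC_CCIPR_ADCSEL; // clear the ADC clock source field (bits 29:28)
RCC->CCIPR |= (0x3UL << RCC_CCIPR_ADCSEL_Pos); // 11: SYSCLK selected as ADC clock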

Cortex-M0 SysTick interrupt doesn't happen

I am writing firmware for an STM32F072.
The problem is that the SysTick interrupt doesn't happen.
Here is the simple code configuring SysTick:
SysTick_Config(1000);
This function is taken from CMSIS's core_cm0.h file:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
if ((ticks - 1) > SysTick_LOAD_RELOAD_Msk) return (1); /* Reload value impossible */
SysTick->LOAD = (ticks & SysTick_LOAD_RELOAD_Msk) - 1; /* set reload register */
NVIC_SetPriority (SysTick_IRQn, (1<<__NVIC_PRIO_BITS) - 1); /* set Priority for Systick Interrupt */
SysTick->VAL = 0; /* Load the SysTick Counter Value */
SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
SysTick_CTRL_TICKINT_Msk |
SysTick_CTRL_ENABLE_Msk; /* Enable SysTick IRQ and SysTick Timer */
return (0); /* Function successful */
}
The system timer counts as expected.
The count flag in SysTick->CTRL gets set to 1, but no interrupt happens! The firmware doesn't jump to SysTick_Handler().
What am I missing? This code is enough for STM32F1 and STM32F4 devices, but it doesn't work for the STM32F0.
I recommend taking a look at the Code Snippets from ST. These are low-level programs for the F0 (and L0) families. Some of them use SysTick (e.g. the first two examples from the CLOCK CONTROLLER projects), and everything is preconfigured, so hopefully it works on your board too. They were originally written for the STM32F072 Discovery board. I used them with my custom board with some tiny modifications.