STM32 HAL Lib: cannot clear TIM SR register using __HAL_TIM_CLEAR_FLAG? - stm32

The TIM SR register value is always 0x1F, and I cannot clear it.
The HAL library keeps re-entering the timer interrupt very quickly, and I cannot clear the SR register.
How can I fix this problem?
CubeMX settings: NVIC
void TIM3_IRQHandler(void)
{
  /* USER CODE BEGIN TIM3_IRQn 0 */

  /* USER CODE END TIM3_IRQn 0 */
  HAL_TIM_IRQHandler(&htim3);
  /* USER CODE BEGIN TIM3_IRQn 1 */

  /* USER CODE END TIM3_IRQn 1 */
}
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  if (htim == &htim3) {
    __HAL_TIM_CLEAR_FLAG(&htim3, TIM_FLAG_UPDATE);
    HAL_GPIO_TogglePin(GPIOC, GPIO_PIN_13);
  }
}

I will base my explanation on the STM32F746's TIM3. Timers that share the same number are usually identical or very similar across STM32 families.
TIM3->SR holds 0x1F? That's 5 flags set! TIM3 is a general-purpose STM32 timer, and these 5 flags mean the update (counter) flag and all four capture/compare status flags are set at the same time. Something odd is going on. Are you sure you're supposed to have those flags set? Then again, if their interrupts are not enabled, it doesn't matter.
You can clear these flags in TIM3->SR by writing zero to the specific position you want to clear and 1 everywhere else. As per the reference manual, writing 1 to this register is ignored: it never sets bits, it only clears them when you write zero. So,
TIM3->SR = 0; //clear all interrupt flags
TIM3->SR = ~TIM_SR_UIF; //clear update interrupt flag only
This works because these bits are marked rc_w0 in the reference manual: read, clear by writing zero. If the bits in your SR register work differently, you may have to clear them differently. For example, sometimes a status register is read-only and you clear its flags via a write to a separate flag-clear register. Check the reference manual of your MCU.
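If you go register-level, a minimal sketch of an ISR doing exactly that (assuming only the update interrupt is enabled and reusing the PC13 toggle from the question; this replaces the call to HAL_TIM_IRQHandler rather than being added to the HAL callback):

void TIM3_IRQHandler(void)
{
    if (TIM3->SR & TIM_SR_UIF)              /* update event pending? */
    {
        TIM3->SR = ~TIM_SR_UIF;             /* rc_w0: write 0 only to UIF, the 1s elsewhere are ignored */
        HAL_GPIO_TogglePin(GPIOC, GPIO_PIN_13);
    }
}

Note that if you stay with the HAL, HAL_TIM_IRQHandler already clears the update flag before calling HAL_TIM_PeriodElapsedCallback, so the extra __HAL_TIM_CLEAR_FLAG in the callback should not be necessary.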

Related

What's the application of sequential transmission of I2C in the HAL library on the STM32F746NG?

I understand that you can use the first-frame option for the first frame and the next-frame options for the others, but since you can also use I2C_FIRST_AND_LAST_FRAME, what is the advantage of the others, and when must we use them?
Findings:
The code uses a while loop to continuously transmit two numbers and reads them back to check whether the module has accepted them; if this works correctly the LED should blink.
In this simple code I've tested every XferOptions value for sequential transmission; every option worked except I2C_LAST_FRAME_NO_STOP and I2C_FIRST_FRAME.
Code:
while (1)
{
    value = 300;
    *(uint16_t*) buffer = (value << 8) | (value >> 8);   // Data prepared for DAC module
    HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), buffer, 2, I2C_LAST_FRAME_NO_STOP);
    HAL_Delay(1);
    HAL_I2C_Master_Receive(&hi2c1, (MCP4725A0_ADDR_A00 << 1), rxbuffer, 3, 1000);
    if ( (uint16_t)(((uint16_t)rxbuffer[1]) << 8 | ((uint16_t)rxbuffer[2])) >> 4 == value ) {
        HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET);
    }
    HAL_Delay(50);

    value = 4000;
    *(uint16_t*) buffer = (value << 8) | (value >> 8);
    HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), buffer, 2, I2C_LAST_FRAME_NO_STOP);
    HAL_Delay(1);
    HAL_I2C_Master_Receive(&hi2c1, (MCP4725A0_ADDR_A00 << 1), rxbuffer, 3, 1000);
    if ( (uint16_t)(((uint16_t)rxbuffer[1]) << 8 | ((uint16_t)rxbuffer[2])) >> 4 == value ) {
        HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET);
    }
    HAL_Delay(50);
}
The HAL sometimes documents these options poorly, so you will need to dive into the reference manual!
Looking at what the #defines are:
https://github.com/STMicroelectronics/STM32CubeF7/blob/f8bda023e34ce9935cb4efb9d1c299860137b6f3/Drivers/STM32F7xx_HAL_Driver/Inc/stm32f7xx_hal_i2c.h#L302-L307
/** @defgroup I2C_XFEROPTIONS  I2C Sequential Transfer Options
  * @{
  */
#define I2C_FIRST_FRAME                 ((uint32_t)I2C_SOFTEND_MODE)
#define I2C_FIRST_AND_NEXT_FRAME        ((uint32_t)(I2C_RELOAD_MODE | I2C_SOFTEND_MODE))
#define I2C_NEXT_FRAME                  ((uint32_t)(I2C_RELOAD_MODE | I2C_SOFTEND_MODE))
#define I2C_FIRST_AND_LAST_FRAME        ((uint32_t)I2C_AUTOEND_MODE)
#define I2C_LAST_FRAME                  ((uint32_t)I2C_AUTOEND_MODE)
#define I2C_LAST_FRAME_NO_STOP          ((uint32_t)I2C_SOFTEND_MODE)
We can see references to RELOAD and AUTOEND and SOFTEND.
Digging into the reference manual
https://www.st.com/resource/en/reference_manual/rm0385-stm32f75xxx-and-stm32f74xxx-advanced-armbased-32bit-mcus-stmicroelectronics.pdf#page=969
Here we can see the references to:
AUTOEND - a way to automatically generate a STOP condition after the set number of bytes has been transferred
SOFTEND - a way to prevent the automatic STOP condition and leave the decision to software.
Relationship to your observed behaviour
The defines using SOFTEND mode are exactly where you saw things not working, and this is to be expected: the I2C transaction was never completed, because nothing in the code generated the STOP condition.
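As a minimal sketch (reusing the hi2c1 handle, buffer and MCP4725A0_ADDR_A00 address from the question, and waiting in a crude busy loop instead of the TxCpltCallback), a SOFTEND-started transfer only gets its STOP when a later AUTOEND-based call closes the sequence:

/* First chunk: SOFTEND-based option, the bus is held, no STOP is generated */
HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), buffer, 2, I2C_FIRST_FRAME);
while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) { }   /* wait for the IT transfer to finish */

/* Last chunk: AUTOEND-based option, the STOP condition is generated after the bytes go out */
HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), buffer, 2, I2C_LAST_FRAME);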
So what does this let you do? An example: a variable-length I2C slave receiver.
I haven't found a shining example from ST for this, but let me illustrate with an example I have implemented in a project for an I2C slave.
Let us look at the callbacks that are called:
https://github.com/STMicroelectronics/STM32CubeF7/blob/master/Drivers/STM32F7xx_HAL_Driver/Src/stm32f7xx_hal_i2c.c#L76-L97
*** Interrupt mode IO operation ***
===================================
[..]
(+) Transmit in master mode an amount of data in non-blocking mode using HAL_I2C_Master_Transmit_IT()
(+) At transmission end of transfer, HAL_I2C_MasterTxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_MasterTxCpltCallback()
(+) Receive in master mode an amount of data in non-blocking mode using HAL_I2C_Master_Receive_IT()
(+) At reception end of transfer, HAL_I2C_MasterRxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_MasterRxCpltCallback()
(+) Transmit in slave mode an amount of data in non-blocking mode using HAL_I2C_Slave_Transmit_IT()
(+) At transmission end of transfer, HAL_I2C_SlaveTxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_SlaveTxCpltCallback()
(+) Receive in slave mode an amount of data in non-blocking mode using HAL_I2C_Slave_Receive_IT()
(+) At reception end of transfer, HAL_I2C_SlaveRxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_SlaveRxCpltCallback()
(+) In case of transfer Error, HAL_I2C_ErrorCallback() function is executed and users can
add their own code by customization of function pointer HAL_I2C_ErrorCallback()
(+) Abort a master I2C process communication with Interrupt using HAL_I2C_Master_Abort_IT()
(+) End of abort process, HAL_I2C_AbortCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_AbortCpltCallback()
(+) Discard a slave I2C process communication using __HAL_I2C_GENERATE_NACK() macro.
This action will inform Master to generate a Stop condition to discard the communication.
Therefore, you could implement an I2C slave that reads a variable/dynamic amount of data:
Receive 1 byte using one of the SOFTEND-based options.
This prevents the STOP condition from being raised, but once this first byte is received it triggers HAL_I2C_SlaveRxCpltCallback().
In HAL_I2C_SlaveRxCpltCallback(), check the value of the first byte and then request the remaining data of whatever length is needed, this time using an AUTOEND-based option; a sketch of this follows below.
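A minimal sketch of that idea, assuming listen mode is used (HAL_I2C_EnableListen_IT() called at startup) and that the first received byte encodes the remaining length; cmd, payload and that length convention are illustrative, not from the original project:

static uint8_t cmd;
static uint8_t payload[32];

/* Address matched: receive the single "length" byte first, SOFTEND-based option */
void HAL_I2C_AddrCallback(I2C_HandleTypeDef *hi2c, uint8_t TransferDirection, uint16_t AddrMatchCode)
{
    if (TransferDirection == I2C_DIRECTION_TRANSMIT)   /* master is writing to us */
    {
        HAL_I2C_Slave_Seq_Receive_IT(hi2c, &cmd, 1, I2C_FIRST_FRAME);
    }
}

/* First byte arrived: decide how much more to read, AUTOEND-based option this time */
void HAL_I2C_SlaveRxCpltCallback(I2C_HandleTypeDef *hi2c)
{
    static uint8_t expecting_payload = 0;

    if (!expecting_payload)
    {
        expecting_payload = 1;
        HAL_I2C_Slave_Seq_Receive_IT(hi2c, payload, cmd, I2C_LAST_FRAME);   /* here: first byte = length */
    }
    else
    {
        expecting_payload = 0;   /* full message received, process it */
    }
}

/* Re-arm listen mode after each transaction */
void HAL_I2C_ListenCpltCallback(I2C_HandleTypeDef *hi2c)
{
    HAL_I2C_EnableListen_IT(hi2c);
}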

I2C transmit with DMA and HAL not working

This seems to be a problem that is somewhat common, but I have been unsuccessful with any of the solutions I have found online. Specifically, I am trying to transmit a 1024-byte buffer (a full 128x64 px image) to an SSD1306 display via I2C/DMA and the HAL generated in CubeIDE. I am using an STM32L432 Nucleo board. I have no problem transmitting the buffer without DMA using HAL_I2C_Mem_Write.
Based on other questions I have seen, the problem lies in the fact that the DMA finishes while the I2C bus is still working on the transmit. I just don't know how to remedy this, and the examples given usually don't use the HAL (unfortunately, despite my efforts, I am not quite competent enough to apply them to the HAL correctly myself). I have tried using the interrupts for I2C and DMA with no luck; only about the first 254 bytes get transferred (just shy of two rows showing on the screen).
Here is my code for sending the buffer:
static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY);
    HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1, (uint8_t*)data, size);
}
and the code for each interrupt handler:
void I2C1_EV_IRQHandler(void)
{
    /* USER CODE BEGIN I2C1_EV_IRQn 0 */
    if (I2C1->ISR & I2C_ISR_TCR) {
        I2C1->CR2 |= (I2C_CR2_STOP);      // stop i2c
        I2C1->ICR |= (I2C_ICR_STOPCF);    // reset the ICR flag
        // stop DMA
        DMA1->IFCR |= DMA_IFCR_CTCIF6;
        // clear flag
        DMA1_Channel6->CCR &= ~DMA_CCR_EN;
    }
    /* USER CODE END I2C1_EV_IRQn 0 */
    //HAL_I2C_EV_IRQHandler(&hi2c1);
    /* USER CODE BEGIN I2C1_EV_IRQn 1 */

    /* USER CODE END I2C1_EV_IRQn 1 */
}

void DMA1_Channel6_IRQHandler(void)
{
    /* USER CODE BEGIN DMA1_Channel6_IRQn 0 */
    // stop DMA
    DMA1->IFCR |= DMA_IFCR_CTCIF6;
    // clear flag
    DMA1_Channel6->CCR &= ~DMA_CCR_EN;
    /* USER CODE END DMA1_Channel6_IRQn 0 */
    HAL_DMA_IRQHandler(&hdma_i2c1_tx);
    /* USER CODE BEGIN DMA1_Channel6_IRQn 1 */

    /* USER CODE END DMA1_Channel6_IRQn 1 */
}
I think that is all the pertinent code; let me know if there is something else I am missing. All of the initialization code for the peripherals was done through CubeMX, but I can post that, or the settings, if need be. I feel like it is something really simple that I'm missing, but this is a bit over my head to be honest, so I don't quite grasp exactly what's going on...
Thanks for any help!
The problem is in your custom DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler. Those functions will be called right after the I2C transfers 255 bytes, which is MAX_NBYTE_SIZE for NBYTES. The HAL already has all the required interrupt routines inside stm32l4xx_hal_i2c.c; when the transfer is started it:
sets the I2C transfer IRQ handler to I2C_Master_ISR_DMA;
checks whether the data size is larger than 255 bytes and, if so, uses reload mode;
sets the I2C DMA complete callback to I2C_DMAMasterTransmitCplt;
starts the DMA using HAL_DMA_Start_IT();
configures the I2C registers using I2C_TransferConfig().
The HAL driver will then handle all I2C+DMA interrupts using I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt:
I2C_DMAMasterTransmitCplt restarts the DMA for each chunk of 255 (MAX_NBYTE_SIZE) or fewer bytes;
I2C_Master_ISR_DMA resets the RELOAD/NBYTES registers using I2C_TransferConfig;
for the last block of data, I2C_AUTOEND_MODE is used.
So all you need to do is:
remove the "user code" from the DMA1_Channel6_IRQHandler and I2C1_EV_IRQHandler functions
enable the I2C1 event interrupt in the STM32 Device Configuration Tool
configure the DMA with data width byte/byte
perform a single call of HAL_I2C_Mem_Write_DMA(...) to start the transfer
check for HAL_I2C_STATE_READY before the next transfer (a short sketch of this usage follows below)
See HAL_I2C_Mem_Write_DMA, I2C_Master_ISR_DMA and I2C_DMAMasterTransmitCplt source code in stm32l4xx_hal_i2c.c to understand how it works.
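As a minimal sketch of that usage (I2C_ADDR, SSD1306_REG_MDAT and hi2c1 are taken from the question; the error handling is only indicated):

static void ssd1306_WriteMData_DMA(const uint8_t *data, uint16_t size)
{
    /* Wait for any previous transfer to finish before starting a new one */
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) { }

    /* Single call; the HAL splits the transfer into 255-byte chunks internally */
    if (HAL_I2C_Mem_Write_DMA(&hi2c1, I2C_ADDR, SSD1306_REG_MDAT, 1,
                              (uint8_t *)data, size) != HAL_OK)
    {
        /* handle the error, e.g. inspect HAL_I2C_GetError(&hi2c1) */
    }
}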
About why the DMA finishes while the I2C is still working: the HAL driver sends the I2C data over DMA in 255-byte chunks; for each chunk it stops the DMA, clears the I2C_CR2 NBYTES/RELOAD fields, and starts/enables the DMA again. The DMA could run continuously using DMA_CIRCULAR mode, but currently that is not implemented in the HAL I2C drivers. Here is an example of using I2C with DMA_CIRCULAR mode:
// DMA enabled single time
hi2c1.hdmatx->XferCpltCallback = MY_I2C_DMAMasterTransmitCplt;
HAL_DMA_Start_IT(hi2c1.hdmatx, (uint32_t)&i2cBuffer, (uint32_t)&hi2c1.Instance->TXDR, I2C_BUFFER_SIZE);
MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_GENERATE_START_WRITE); // in first call using I2C_GENERATE_START_WRITE
uint32_t tmpisr = I2C_IT_TCI;
__HAL_I2C_ENABLE_IT(&hi2c1, tmpisr);
hi2c1.Instance->CR1 |= I2C_CR1_TXDMAEN;
You still need to clear the I2C_CR2 NBYTES/RELOAD fields using MY_I2C_TransferConfig every 254 bytes (I do not use 255, so that the interrupt fires at an even index in the array):
static HAL_StatusTypeDef MY_I2C_Master_ISR_DMA(struct __I2C_HandleTypeDef *hi2c, uint32_t ITFlags, uint32_t ITSources)
{
    if (__HAL_I2C_GET_FLAG(&hi2c1, I2C_FLAG_TCR) == SET)
    {
        MY_I2C_TransferConfig(&hi2c1, (uint16_t)DAC_ADDR, 254, I2C_RELOAD_MODE, I2C_NO_STARTSTOP); // in repeated calls using I2C_NO_STARTSTOP
    }
    return HAL_OK;
}
With this approach DMA circular buffer size is not limited to 255 bytes:
#define I2C_BUFFER_SIZE 1024
uint8_t i2cBuffer[I2C_BUFFER_SIZE];
Main.c should contain a MY_I2C_TransferConfig() function, which is a copy-pasted version of the private I2C_TransferConfig() function from stm32l4xx_hal_i2c.c. On earlier STM32 microcontrollers there are no NBYTES/RELOAD fields, and I2C_CR2 does not need to be updated this way.
Using DMA in circular mode allows you to achieve the highest frame rate; you just need to fill the DMA buffers in time using the XferHalfCpltCallback and XferCpltCallback callbacks. Frames may be copied from a larger buffer using memcpy() or a DMA MEMTOMEM transfer.
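A hedged sketch of that refilling (frameData, frameOffset and the half-buffer split are illustrative assumptions, not part of the original answer; wrap-around handling is omitted):

extern uint8_t  frameData[];            /* larger source buffer (assumed) */
static uint32_t frameOffset = 0;

/* First half of i2cBuffer has gone out: refill it while the second half is being sent.
   Register with: hi2c1.hdmatx->XferHalfCpltCallback = MY_I2C_DMAHalfCplt; */
static void MY_I2C_DMAHalfCplt(DMA_HandleTypeDef *hdma)
{
    memcpy(&i2cBuffer[0], &frameData[frameOffset], I2C_BUFFER_SIZE / 2);
    frameOffset += I2C_BUFFER_SIZE / 2;
}

/* Second half has gone out: refill it. This work could live inside
   MY_I2C_DMAMasterTransmitCplt, which is already registered as XferCpltCallback above. */
static void MY_I2C_DMACplt(DMA_HandleTypeDef *hdma)
{
    memcpy(&i2cBuffer[I2C_BUFFER_SIZE / 2], &frameData[frameOffset], I2C_BUFFER_SIZE / 2);
    frameOffset += I2C_BUFFER_SIZE / 2;
}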
You haven't said which STM32 you are using. They have different bit definitions (because the I2C peripherals in the earlier parts were rubbish), but it looks like you are using one of the later ones.
Basically, you can find what you need in the bit definitions for the I2C registers in the reference manual. If you are setting STOP before the transfer has finished, you need to look for a BUSY bit that gets cleared, or a BTF (byte transfer finished) bit that gets set, to tell you when it is time to send STOP.

Cannot exit sleep mode of bxCAN on STM32F429IGT in loopback mode

In short, the SLAK bit won't reset when the SLEEP bit is manually cleared. In detail:
I am trying to achieve a successful transmission in loopback mode before venturing into making a network. I had it working at one point after a lot of documentation reading, but now I have a new issue. (Sadly I do not remember what I changed; maybe I played with the timings.)
After setting the peripheral to loopback and providing coherent bit-timing values (so I may have played with them, but they are back to being OK), I generate the code with Cube. This implies that the flow should first exit sleep mode, enter init mode, apply the settings, exit init mode, and start normal mode. According to the reference manual:
If software requests entry to initialization mode by setting the INRQ bit while bxCAN is in Sleep mode, it must also clear the SLEEP bit. [...] After the SLEEP bit has been cleared, Sleep mode is exited once bxCAN has synchronized with the CAN bus [...]. The Sleep mode is exited once the SLAK bit has been cleared by hardware.
and
To synchronize, bxCAN waits until the CAN bus is idle, this means 11 consecutive recessive bits have been monitored on CANRX.
According to Wikipedia:
A 0 data bit encodes a dominant state, while a 1 data bit encodes a recessive state.
So, checking the code generated by Cube, this is exactly what is happening. I pasted here the essential part from stm32f4xx_hal_can.c:
HAL_StatusTypeDef HAL_CAN_Init(CAN_HandleTypeDef *hcan)
{
  [...]
  /* Exit from sleep mode */
  CLEAR_BIT(hcan->Instance->MCR, CAN_MCR_SLEEP);

  /* Get tick */
  tickstart = HAL_GetTick();

  /* Check Sleep mode leave acknowledge */
  while ((hcan->Instance->MSR & CAN_MSR_SLAK) != 0U)
  {
    if ((HAL_GetTick() - tickstart) > CAN_TIMEOUT_VALUE)
    {
      [...]
      /* Error */
    }
  }

  /* Request initialisation */
  SET_BIT(hcan->Instance->MCR, CAN_MCR_INRQ);

  /* Get tick */
  tickstart = HAL_GetTick();

  /* Wait initialisation acknowledge */
  while ((hcan->Instance->MSR & CAN_MSR_INAK) == 0U)
  {
    if ((HAL_GetTick() - tickstart) > CAN_TIMEOUT_VALUE)
    {
      [...]
      /* Error */
    }
The SLEEP bit of CAN_MCR is cleared, and the code then waits for the SLAK bit of CAN_MSR to be cleared by hardware. CAN_TIMEOUT_VALUE is set to 10, basically giving time for the 11 recessive bits to settle in.
And this is where I am stuck. SLAK will not reset... I tried removing if ((HAL_GetTick() - tickstart) > CAN_TIMEOUT_VALUE) so that the MCU waits indefinitely for SLAK to reset. It did not help.
Looking at the RX bit of CAN_MSR, which gives the current value on RX, while waiting for SLAK to change, I noticed that it is always 0. So I tried configuring the RX and TX GPIOs with pull-ups and pull-downs, but I think it has no effect since, in loopback mode, the RX of bxCAN is isolated from the GPIOs :) This also means the issue should not be on the hardware side (wiring and other external things, not internal hardware), leading me to believe that something is wrong during the global HAL_Init() or MX_GPIO_Init() or elsewhere. But since it is generated by Cube and I did not change anything, I don't see how it could cause SLAK to stay set.
My idea was maybe to do a software reset on something, but I don't know where this path will lead me, since powering the chip off and on does not resolve the issue...

STM32F072RB does not receive/send data over SPI in slave mode

I am using the STM32F072RB uC to receive and transmit data over SPI2 in slave mode with the following configuration:
CR1 = 0x0078
CR2 = 0x0700
AFRH = 0x55353500
MODER = 0xa2a0556a
The register APB1ENR is also properly configured.
The current program just checks the RXNE flag, reads the received data from DR and sends a random value writing to DR.
The status register when I receive data has the following value:
SR = 0x1403
The master sends data properly and I checked the signals at the slave pins (clock phase and polarity are identical on both sides and the NSS signal is cleared before sending SCK and data over MOSI).
I even configured the pins as inputs and I know I could read any digital signal the master could send.
With the current configuration it seems the slave receives something because the RXNE is set when the master sends data but the read value is always 0x00.
I have tried different configurations (software/hardware NSS, different data sizes, etc.) but I always get 0x00.
Moreover, the random value I send after reading DR is not sent to the outputs.
This is my current function, which is called continuously:
unsigned char spi_rx_slave(unsigned char spiPort, unsigned char *receiveBuffer)
{
    uint8_t temp;
    static unsigned long sr;

    if (!spi_isOpen(spiPort))
    {
        sendDebug("%s() Error: spiPort not in use!\r\n", __func__);
        return false;
    }
    if (spiDescriptor[spiPort]->powerdown == true)
    {
        sendDebug("%s() Error: spiPort in powerdown!\r\n", __func__);
        return false;
    }
    /* wait till spi is not busy anymore */
    while ((spiDescriptor[spiPort]->spiBase->SR) & SPI_SR_BSY)
    {
        sendDebug("SPI is busy(1)\r\n");
        vTaskDelay(2);
    }
    sendDebug("CR1 = 0x%04x, ", spiDescriptor[spiPort]->spiBase->CR1);
    sendDebug("CR2 = 0x%04x, ", spiDescriptor[spiPort]->spiBase->CR2);
    sendDebug("AFRH address = 0x%08x, AFRH value = %08x, ", (unsigned long*)(GPIOB_BASE+0x24), *(unsigned long*)(GPIOB_BASE+0x24));
    sendDebug("MODER address = 0x%08x, MODER value = %08x\r\n", (unsigned long*)(GPIOB_BASE), *(unsigned long*)(GPIOB_BASE));
    sr = spiDescriptor[spiPort]->spiBase->SR;
    while (sr & SPI_SR_RXNE)
    {
        /* get RX byte */
        temp = *(uint8_t *)&(spiDescriptor[spiPort]->spiBase->DR);
        spiDescriptor[spiPort]->spiBase->DR = 0x53;
        sendDebug("-------->DR address = 0x%08x, data received: 0x%02x\r\n", &spiDescriptor[spiPort]->spiBase->DR, temp);
        sendDebug("SR = 0x%04x\r\n", sr);
        vTaskDelay(1);
        sr = spiDescriptor[spiPort]->spiBase->SR;
    }
    while ((spiDescriptor[spiPort]->spiBase->SR) & SPI_SR_BSY)
    {
        sendDebug("SPI is busy(2)\r\n");
        vTaskDelay(2);
    }
    return true;
}
What am I doing wrong?
Is there anything I did not configure properly?
Thanks in advance.
Regards,
Javier
Edit:
I switched to software NSS and copied the register values from a STM32CubeMX example I found online. I cannot use those libraries for this project but I would like to have the same behaviour.
The new values are:
CR1 = 0x0278
which means
fPCLK/256 (the proper one for the communication speed),
SPI enabled and
SSM = 1 (software NSS).
CR2 = 0x1700
which means
8-bit data and
RXNE event is generated if the FIFO level is greater than or equal to 1/4 (8-bit).
AFRH = 0x55303500
MODER = 0xa8a1556a
which means
MISO, MOSI and SCK alternate function 5 (SPI2)
NSS is not configured because now it is in software mode (slave is always selected).
I am still getting the same results, and the eval kit with those libraries works fine using SPI1 instead.
Therefore there must be another issue that has nothing to do with the register values.
Might there be a clock issue, e.g. do the pins need to get some clock?
Thanks!
The question points to a couple of mistakes which may explain why no receive has been observed:
GPIO configuration points to some wrong Alternate Functions / Modes:
The question didn't state it precisely, but I assume that
AFRH = 0x55303500
MODER = 0xa8a1556a
refers to GPIOB (otherwise, it wouldn't make sense with SPI2).
This corresponds to the following pin configuration (see the Reference Manual, sec. 8.4.1 and 8.4.10, and the Datasheet, Table 16):
PB15 - Alternate Function - AF5 = [INVALID]
PB14 - Alternate Function - AF5 = [I2C2_SDA]
PB13 - Alternate Function - AF3 = [TSC_G6_IO3]
PB12 - GP Input (reset state)
PB11 - Alternate Function - AF3 = [TIM_CH4]
PB10 - Alternate Function - AF5 = [SPI2_SCK / I2S2_CK]
PB09 - GP Input (reset state)
PB08 - GP Output
PB07 - Alternate Function - (unknown which, see register AFRL)
PB06 - GP Output
PB05 - Alternate Function - (unknown which, see register AFRL)
PB04 - GP Output
PB03 - GP Output
PB02 - Alternate Function - (unknown which, see register AFRL)
PB01 - Alternate Function - (unknown which, see register AFRL)
PB00 - Alternate Function - (unknown which, see register AFRL)
This is obviously not what the software is required to do.
Solution: Make sure to configure PB15=>AF0 and PB14=>AF0, and either PB13=>AF0 or keep PB10 on AF5 (SPI2_SCK), depending on your hardware.
In order to avoid mistakes in doing so, you should follow the hint of @P__J__ and use speaking macros for the constants assigned to MODER, AFRH etc.
Using the HAL library provided by ST is a truly controversial subject among SO users, but one should really consider using at least a header like stm32f072xb.h with macros like GPIO_AFRH_AFSEL15. If all configuration register values are represented as (bitwise) ORs of such macros, it is easier to re-check the configuration against the datasheets, and the famous rubber duck will directly know what an unhappy developer is talking about.
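A hedged register-level sketch of that style, assuming SPI2 on PB13/PB14/PB15 (all AF0 on the STM32F072) and the CMSIS macro names from stm32f072xb.h; adapt the pins to your board:

/* PB13 = SPI2_SCK, PB14 = SPI2_MISO, PB15 = SPI2_MOSI, alternate function mode, AF0 */
GPIOB->MODER &= ~(GPIO_MODER_MODER13 | GPIO_MODER_MODER14 | GPIO_MODER_MODER15);
GPIOB->MODER |=  (GPIO_MODER_MODER13_1 | GPIO_MODER_MODER14_1 | GPIO_MODER_MODER15_1);   /* 0b10 = AF */
GPIOB->AFR[1] &= ~(GPIO_AFRH_AFSEL13 | GPIO_AFRH_AFSEL14 | GPIO_AFRH_AFSEL15);           /* AF0 = 0000 */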
Other clock activations might be missing:
The question confirms that
The register APB1ENR is also properly configured.
This is correct (as long as bit 14 is set).
Additionally, GPIOB must be powered, i.e., bit 18 of RCC_AHBENR must be set.
See again the Reference Manual, sec. 6.4.8 and 6.4.6.
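In macro form, a small sketch of those two enable bits:

RCC->APB1ENR |= RCC_APB1ENR_SPI2EN;   /* bit 14: SPI2 peripheral clock */
RCC->AHBENR  |= RCC_AHBENR_GPIOBEN;   /* bit 18: GPIOB clock */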
GPIO pins may be in wrong mode during debugging:
I even configured the pins as inputs and I know I could read any digital signal the master could send. With the current configuration it seems the slave receives something because the RXNE is set when the master sends data but the read value is always 0x00.
Please note that for every GPIO pin, a unique mode is selected through the MODER register. If this is set to "Input" (0b00), the Alternate Function is disconnected and won't work with external signals.

MSP430 clock problems after reset

I use the following routine to configure the clock of my MSP430 (msp430g2231) microcontroller:
void configure_clock(void) {
if (CALBC1_1MHZ == 0xFF || CALDCO_1MHZ == 0xFF) { // Checks the clock constants
while(TRUE); // If callibration constants are erased, TRAP!
}
BCSCTL1 |= CALBC1_1MHZ; // Sets DCO range
DCOCTL |= CALDCO_1MHZ; // Set DCO step and modulation
BCSCTL1 &= ~(XTS | XT2OFF); // Disables XT2 and sets low frequency mode
BCSCTL3 |= (LFXT1S_0 | XCAP_3); // Selects LFXT1 crystal with 12,5pF
do {
IFG1 &= ~OFIFG;
__delay_cycles(1000);
} while (IFG1 & OFIFG); // Waits until crystal stabilizes
BCSCTL2 |= (SELM_2 | SELS); // Selects SMCLK and MCLK from LFXT1CLK
}
The problem is that the first time the code runs (just after powering up the microcontroller) everything works as expected and I get a 32.768 kHz clock. But if I press the reset button on the board (MSP430 LaunchPad), the clock does not seem to work correctly and the code executes much more slowly (roughly 10 times slower). Any ideas on the clock configuration?
Thanks!
Pere
First, look at the power supply voltage. If there is a spike during startup, the DCO won't work. In that case, try adding a delay right before assigning the values to BCSCTL1:
__delay_cycles(10000);
BCSCTL1 = CALBC1_1MHZ; // Sets DCO range
This ensures that the startup spike is suppressed.
The next suspect would be the decoupling on your target board, i.e. the capacitor on VCC as well as the one used on the reset line. TI recommends 1 nF-2 nF for the reset line and 0.1 uF for VCC. But if you are using the LaunchPad as your platform, that should not be a problem.
Also, for the calibration value assignments, use plain assignment and not OR-assignment, since the other bits being 0 is the default:
BCSCTL1 = CALBC1_1MHZ; // Set DCO
DCOCTL = CALDCO_1MHZ;
If you are planning to run XT2: it is not available on the G2231; it is LFXT1 directly.
You don't need explicit initialization for the 32.768 kHz crystal to work; it just works when you power up, so the additional initialization step is not needed.
For further help, have a look at slac463a for software examples related to clock setup.
The only things I can suggest about your code are below. Whether or not they fix your issue I don't know, as it seems strange that the first run is OK but after a reset it is not. Do you access the clock configuration anywhere else? What code do you call on reset?
You always use bit manipulation to OR values into the registers. You should start from a known value and then adjust bits from there; otherwise you may be incorporating bits from a previous state. For example, instead of:
BCSCTL1 |= CALBC1_1MHZ;
BCSCTL1 &= ~(XTS | XT2OFF);
You can set it to a definitive value by doing something like this:
BCSCTL1 = XT2OFF | (CALBC1_1MHZ & 0x0F);
The other suggestion is that XT2OFF has to be set in order to turn off XT2. You are clearing the bit, so you are leaving it on. This is in conflict with your comment, so it might be an error.
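Putting both suggestions together, a hedged sketch of the reworked routine (keeping the rest of the original flow, and assuming only LFXT1 with the 32.768 kHz crystal is used):

void configure_clock(void) {
    if (CALBC1_1MHZ == 0xFF || CALDCO_1MHZ == 0xFF) {
        while (TRUE);                        // Calibration constants erased: trap
    }
    DCOCTL  = 0;                             // Start from a known value
    BCSCTL1 = XT2OFF | (CALBC1_1MHZ & 0x0F); // XT2 off (not present on the G2231), DCO range, XTS = 0
    DCOCTL  = CALDCO_1MHZ;                   // DCO step and modulation (plain assignment)
    BCSCTL3 = LFXT1S_0 | XCAP_3;             // LFXT1 32.768 kHz crystal, 12.5 pF
    do {
        IFG1 &= ~OFIFG;
        __delay_cycles(1000);
    } while (IFG1 & OFIFG);                  // Wait until the crystal stabilizes
    BCSCTL2 = SELM_2 | SELS;                 // MCLK and SMCLK from LFXT1CLK
}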