MSP430 clock problems after reset - real-time

I use the following routine to configure the clock of my MSP430 (msp430g2231) microcontroller:
void configure_clock(void) {
    if (CALBC1_1MHZ == 0xFF || CALDCO_1MHZ == 0xFF) { // Check the clock calibration constants
        while(TRUE);                    // If calibration constants are erased, TRAP!
    }
    BCSCTL1 |= CALBC1_1MHZ;             // Set DCO range
    DCOCTL |= CALDCO_1MHZ;              // Set DCO step and modulation
    BCSCTL1 &= ~(XTS | XT2OFF);         // Disable XT2 and select low-frequency mode
    BCSCTL3 |= (LFXT1S_0 | XCAP_3);     // Select LFXT1 crystal with 12.5 pF caps
    do {
        IFG1 &= ~OFIFG;
        __delay_cycles(1000);
    } while (IFG1 & OFIFG);             // Wait until the crystal stabilizes
    BCSCTL2 |= (SELM_2 | SELS);         // Select MCLK and SMCLK from LFXT1CLK
}
The problem is that the first time the code runs (just after powering up the microcontroller) everything works as expected and I get a 32.768 kHz clock. But if I press the reset button on the board (MSP430 LaunchPad), the clock no longer seems to work correctly and the code executes much more slowly (roughly 10 times slower). Any ideas on the clock configuration?
Thanks!
Pere

First, look at the power supply voltage. If there is a spike during startup, the DCO won't work. In that case, try adding a delay right before assigning the calibration values to BCSCTL1.
__delay_cycles(10000);
BCSCTL1 = CALBC1_1MHZ; // Sets DCO range
This ensures that the startup spike has passed.
The next suspect would be the decoupling on your target board, i.e. the capacitor on VCC as well as the one on the reset line. TI recommends 1nF-2nF for the reset line and 0.1uF for VCC. But if you are using the LaunchPad as your platform, that should not be a problem.
Also, for the calibration value assignments use direct assignment rather than OR-ing the bits in, since the other bits are meant to stay at their default of 0.
BCSCTL1 = CALBC1_1MHZ; // Set DCO
DCOCTL = CALDCO_1MHZ;
If you are planning to run XT2: it is not available on the G2231; the crystal pins go to LFXT1 directly.
You don't need any explicit initialization for the 32.768 kHz crystal to work.
It just works when you power up, so the additional initialization step is not needed.
For further help, have a look at slac463a for software examples related to clock configuration.

The only things I can suggest with your code are below. Whether or not they fix your issue I don't know, as it seems strange that the first run is OK but after a reset it is not. Do you access the clock configuration anywhere else? What code do you call on reset?
You always use bit manipulation to OR values into the registers or mask them out. You should start from a known value and then adjust bits from there; otherwise you may be carrying over bits from a previous state. For example, instead of:
BCSCTL1 |= CALBC1_1MHZ;
BCSCTL1 &= ~(XTS | XT2OFF);
you can set it to a definitive value by doing something like this:
BCSCTL1 = XT2OFF | (CALBC1_1MHZ & 0x0F);
The other suggestion is that XT2OFF has to be set in order to turn XT2 off. You are clearing the bit, so you are leaving it on. This conflicts with your comment, so it might be an error.
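Putting both suggestions together, the routine could be shaped roughly like the sketch below (untested; bit names taken from the standard MSP430G2xx1 header, so double-check them against yours):
void configure_clock(void)
{
    if (CALBC1_1MHZ == 0xFF || CALDCO_1MHZ == 0xFF) {
        while (1);                            // calibration constants erased: trap
    }
    DCOCTL  = 0;                              // lowest DCO setting before changing the range
    BCSCTL1 = XT2OFF | (CALBC1_1MHZ & 0x0F);  // XT2 off, DCO range from the calibration value
    DCOCTL  = CALDCO_1MHZ;                    // DCO step and modulation
    BCSCTL3 = LFXT1S_0 | XCAP_3;              // LFXT1 low-frequency crystal, 12.5 pF caps
    do {
        IFG1 &= ~OFIFG;
        __delay_cycles(1000);
    } while (IFG1 & OFIFG);                   // wait until the crystal settles
    BCSCTL2 = SELM_2 | SELS;                  // MCLK and SMCLK from LFXT1CLK (as in your original)
}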

QSPI connection on STM32 microcontrollers with other peripherals instead of Flash memories

I am starting a project which needs the QSPI protocol. The component I will use is a 16-bit ADC which supports QSPI with all combinations of clock phase and polarity. Unfortunately, I couldn't find any source on the internet about using QSPI on STM32 with components other than flash memories. So my question: can I use the STM32's QSPI peripheral to communicate with other devices that support QSPI, or is it only meant to be used with memories?
The ADC component I want to use is: ADS9224R (16-bit, 3MSPS)
Here is the part of the datasheet (page 33) that illustrates that this device supports the full QSPI protocol.
Many thanks
The STM32 QSPI peripheral can work in several modes. The memory-mapped mode is specifically designed for memories. The indirect mode, however, can be used for any peripheral. In this mode you can specify the format of the commands that are exchanged: presence of an instruction, of an address, of data, etc.
See register QUADSPI_CCR.
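For illustration, here is a minimal register-level sketch of an indirect-mode transaction that sends an 8-bit instruction followed by a few data bytes on a single line (classic SPI style). It is only a sketch: the command value, the byte count and the assumption that the QUADSPI peripheral and pins are already initialised all come from me, not from your ADC's datasheet:
#include "stm32f7xx.h"  /* pick the CMSIS header matching your STM32 family */

/* Indirect write: 8-bit instruction `cmd`, then `len` data bytes, all on one line. */
static void qspi_indirect_write(uint8_t cmd, const uint8_t *data, uint32_t len)
{
    while (QUADSPI->SR & QUADSPI_SR_BUSY) { }        /* wait for any previous transfer    */

    QUADSPI->DLR = len - 1U;                         /* number of data bytes - 1          */
    QUADSPI->CCR = (0U << QUADSPI_CCR_FMODE_Pos)     /* 00: indirect write                */
                 | (1U << QUADSPI_CCR_DMODE_Pos)     /* data phase on a single line       */
                 | (1U << QUADSPI_CCR_IMODE_Pos)     /* instruction phase on a single line */
                 | ((uint32_t)cmd << QUADSPI_CCR_INSTRUCTION_Pos);
                                                     /* no address phase, no dummy cycles */
    for (uint32_t i = 0; i < len; i++) {
        while (!(QUADSPI->SR & QUADSPI_SR_FTF)) { }  /* wait for room in the FIFO         */
        *(volatile uint8_t *)&QUADSPI->DR = data[i]; /* byte access into the FIFO         */
    }
    while (QUADSPI->SR & QUADSPI_SR_BUSY) { }        /* wait until the transfer finishes  */
    QUADSPI->FCR = QUADSPI_FCR_CTCF;                 /* clear the transfer-complete flag  */
}
The same structure, with FMODE set to indirect read and a loop that reads DR instead, gives you the receive direction, as shown in the longer example further down.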
QUADSPI supports indirect mode, where for each data transaction you manually specify the command, the number of address bytes, the number of data bytes, the number of lines used for each part of the communication, and so on. I don't know whether HAL supports all of that; it would probably be more efficient to work directly with the QUADSPI registers - there are simply too many levers and controls you need to set up, and if the library is missing something, things may not work as you want, and QUADSPI is pretty unpleasant to debug. Luckily, after the initial setup, you probably won't need to change its settings very much.
In fact, some time ago, when I was learning QUADSPI, I wrote my own indirect read/write routines for a QUADSPI flash, purely as a demo program for myself. With a bit of tweaking it shouldn't be hard to adapt. From my personal experience, QUADSPI is a little hard at first; I spent a couple of weeks debugging it with a logic analyzer until I got it to work. Or maybe that was due to my general inexperience.
Below you can find one of my functions, which can be used after the initial setup of QUADSPI. The other communication functions are around the same length; you only need to set a few registers. Be careful with the order of your register manipulations - there is no "start communication" flag/bit/command. Communication starts automatically when you set certain parameters in specific registers. This is explicitly stated in the reference manual, QUADSPI section, which was the only documentation I used to write my code. There is surprisingly little information on QUADSPI available on the Internet, and even less at the register level.
Here is a piece from my basic example code on registers:
void QSPI_readMemoryBytesQuad(uint32_t address, uint32_t length, uint8_t destination[]) {
    while (QUADSPI->SR & QUADSPI_SR_BUSY); //Make sure no operation is going on
    QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF; // clear all flags
    QUADSPI->DLR = length - 1U; //Set number of bytes to read
    QUADSPI->CR = (QUADSPI->CR & ~(QUADSPI_CR_FTHRES)) | (0x00 << QUADSPI_CR_FTHRES_Pos); //Set FIFO threshold to 1
    /*
     * Set communication configuration register
     * Functional mode:  Indirect read
     * Data mode:        4 lines
     * Instruction mode: 4 lines
     * Address mode:     4 lines
     * Address size:     24 bits
     * Dummy cycles:     6 cycles
     * Instruction:      Quad Output Fast Read
     *
     * Then set the 24-bit address
     */
    QUADSPI->CCR =
            (QSPI_FMODE_INDIRECT_READ << QUADSPI_CCR_FMODE_Pos) |
            (QIO_QUAD << QUADSPI_CCR_DMODE_Pos) |
            (QIO_QUAD << QUADSPI_CCR_IMODE_Pos) |
            (QIO_QUAD << QUADSPI_CCR_ADMODE_Pos) |
            (QSPI_ADSIZE_24 << QUADSPI_CCR_ADSIZE_Pos) |
            (0x06 << QUADSPI_CCR_DCYC_Pos) |
            (MT25QL128ABA1EW9_COMMAND_QUAD_OUTPUT_FAST_READ << QUADSPI_CCR_INSTRUCTION_Pos);
    QUADSPI->AR = (0xFFFFFF) & address;
    /* ---------- Communication Starts Automatically ----------*/
    while (QUADSPI->SR & QUADSPI_SR_BUSY) {
        if (QUADSPI->SR & QUADSPI_SR_FTF) {
            *destination = *((uint8_t*) &(QUADSPI->DR)); //Read a byte from data register, byte access
            destination++;
        }
    }
    QUADSPI->FCR = QUADSPI_FCR_CTOF | QUADSPI_FCR_CSMF | QUADSPI_FCR_CTCF | QUADSPI_FCR_CTEF; //Clear flags
}
It is a little crude, but it may be a good starting point for you, and it's well tested and definitely works. You can find all my functions here (GitHub). Combine it with reading the QUADSPI section of the reference manual and you should start to get a grasp of how to make it work.
Your job will be to determine what kind of commands, and in what format, you need to send to your QSPI slave device. That information is available in the device's datasheet. Make sure you send the command, the address and every other part on the correct number of QUADSPI lines. For example, sometimes you need to have the command on 1 line and the data on all 4, all in the same transaction. Make sure you set dummy cycles if they are required for some operation. Pay special attention to how you read the data that you receive via QUADSPI. You can read it in 32-bit words at once (if the incoming data is a whole number of 32-bit words). In my case - in the function provided here - I read it byte by byte, hence the scary looking *destination = *((uint8_t*) &(QUADSPI->DR));, where I take the address of the data register, cast it to a pointer to uint8_t and dereference it. Otherwise, if you read DR just as QUADSPI->DR, your MCU reads a 32-bit word for every byte that arrives, and QUADSPI goes crazy, hangs, shows various errors, triggers FIFO threshold flags and so on. Just be mindful of how you read that register.

HAL_GetTick() always returns 0

I'm currently working on a project with an existing codebase where HAL_GetTick() works in some places, but when I try to call the function in other files it returns 0.
HAL_Delay() does work for some reason.
Am I missing something obvious?
It sounds to me that your codebase may have an override for that function. See this post about HAL_GetTick():
STM32 and HAL function GetTick()
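For reference, the stock implementation in the HAL is declared weak, roughly like the sketch below, so any non-weak HAL_GetTick() elsewhere in the project (or in a middleware stack) silently replaces it:
/* Simplified sketch of the default implementation found in stm32xxxx_hal.c */
__weak uint32_t HAL_GetTick(void)
{
    return uwTick;  /* global counter incremented by HAL_IncTick(), normally called from SysTick_Handler() */
}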
As an alternative you can do the following. If you know the frequency of a timer, you can use the following code snippet:
const uint32_t freq = 1000000; // Timer frequency in Hz

uint32_t get_ticks(void)
{
    uint32_t ticks = __HAL_TIM_GET_COUNTER(&htim2);
    return ticks;
}

double ticks_to_seconds(uint32_t ticks)
{
    double seconds = (double) ticks / freq;
    return seconds;
}
The internal (global) tick counter variable used by the HAL is incremented in an ISR. If you happened to disable IRQs without re-enabling them, the counter won't be incremented any more. The same applies if you never enabled interrupts at all since startup.
Note: You reported the (reproducible?) result value 0, which hints that the tick counter has never been incremented since power-up. This points to the assumption that you forgot to
enable the underlying interrupt (SysTick, unless you selected some TIM, e.g., in CubeMX),
enable interrupts globally, or
retain the CubeHAL SysTick/TIMx handler code after you replaced it with your own (<-- the less likely option...).
A quick check for the first two points is sketched below.
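A minimal sketch of those checks, assuming the usual Cube-generated names (TICK_INT_PRIORITY comes from your project's *_hal_conf.h):
__enable_irq();                     // make sure interrupts are enabled globally
HAL_InitTick(TICK_INT_PRIORITY);    // (re)configure and start the HAL tick source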
I forgot to mention that I also use LoRaWAN. Apparently the LoRaWAN stack also provides functions like HAL_InitTick() and HAL_Delay(). I've come to the conclusion that LoRaWAN somehow overrides the timer.
I fixed this by going into mlm32I0xx_hal_msp.c and redefining HAL_GetTick():
uint32_t HAL_GetTick(void) {
    return HW_RTC_Tick2ms(HW_RTC_GetTimerValue());
}
Hopefully I can help someone else with this solution.

writing to registers - naming convention in stm32

I've been trying to set up an ADC manually within STM32CubeIDE for an STM32F0103K6. I think I know which registers and flags I need, but I can't seem to write to any of them because all the names are wrong(?). I tried using the names in the SFR view in CubeIDE, like:
ADC->CR |= 1<<ADEN;             // enable ADC
ADC->ADC_SMPR |= 1<<0 | 1<<1;   // speed divider select
ADC->ADC_CHSELR |= 1<<0 | 1<<1; // set sequence to adc0, adc1
ADC->ADC_CFGR1 |= 1<<DISCEN;    // discontinuous mode
ADC->ADC_CR |= 1<<2;            // start conversion
but the compiler doesn't recognise any of them. The names in the reference manual are all the same except that they have an ADC prefix (i.e. ADC_CR), and none of the names of the bits seem to be recognised either.
Where am I going wrong?
That’s easy. Find the CMSIS header file(s) and use the correct identifiers.
Usually register bit definitions look like ADC_CR1_DISCEN.
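For instance, if this is actually an STM32F0-family part, the lines from the question would look something like this with the CMSIS device header; the exact macro names can differ slightly between families, so check your header:
#include "stm32f0xx.h"                                  // CMSIS device header for your part

ADC1->SMPR   |= ADC_SMPR_SMP_0 | ADC_SMPR_SMP_1;        // sampling time selection
ADC1->CHSELR |= ADC_CHSELR_CHSEL0 | ADC_CHSELR_CHSEL1;  // sequence: channels 0 and 1
ADC1->CFGR1  |= ADC_CFGR1_DISCEN;                       // discontinuous mode
ADC1->CR     |= ADC_CR_ADEN;                            // enable the ADC
ADC1->CR     |= ADC_CR_ADSTART;                         // start the conversion
Note that the peripheral instance is ADC1 (not ADC), and the bit macros already contain the shift, so no 1<<... is needed.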

can not read temp from ds18b20

I am using an STM32 to read a DS18B20 with the HAL library.
I think the init is correct but the read and write are not.
Can anyone tell me why it is not right?
For the write, here is the code:
if ((data & (1 << i)) != 0)
{
    MX_GPIO_Set(1);
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
    delay_ms(1);
    MX_GPIO_Set(0);
    delay_ms(60);
}
else
{
    MX_GPIO_Set(1);
    HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
    delay_ms(60);
    MX_GPIO_Set(0);
}
It writes one bit of data.
And here is the read code:
MX_GPIO_Set(1);
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
delay_ms(2);
MX_GPIO_Set(0);
if (HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_1) == GPIO_PIN_SET)
{
    value |= 1 << i;
}
delay_ms(60);
MX_GPIO_Set(1) means configure the GPIO as an output.
Where is this wrong?
Please do not tell me to use a library or code from GitHub. I want to write the code myself so that I can understand the DS18B20.
The DS18B20 uses the One-Wire protocol.
https://en.wikipedia.org/wiki/1-Wire
Each bit takes about 60 microseconds to transmit.
1s are HIGH during most of the bit time and 0s are LOW during most of the bit time. The start of the next bit is indicated by a pulse.
One thing that stands out to me is that you're using delay_ms (milliseconds) when you likely want to be using delay_us (microseconds).
Also, you're relying on the bit timing to be exact (which it probably won't be). Instead, base your timing on the pulse.
It's more complicated than that.
When reading, you need to be continually checking the pin's value and interpreting what it means rather than putting in delays and hoping that the timing matches up.
I have not tested this code and it's incomplete.
This is just to illustrate a technique.
To start off, we're going to set our output to LOW and wait for the sensor to go LOW for at least 200us (ideally 500us; 200us is our minimum requirement).
This is the "RESET" sequence that tells us that new data is about to start.
const int SleepIntervalMicroseconds = 5;

// Start off by setting our output to LOW (aka GPIO_PIN_RESET).
MX_GPIO_Set(1);
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_1, GPIO_PIN_RESET);
// Switch back to reading mode.
MX_GPIO_Set(0);

const int ResetRequiredMicroseconds = 200;
int pinState = GPIO_PIN_SET;
int resetElapsedMicroseconds = 0;
while (pinState != GPIO_PIN_RESET || resetElapsedMicroseconds < ResetRequiredMicroseconds) {
    pinState = HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_1);
    if (pinState != GPIO_PIN_RESET) {
        resetElapsedMicroseconds = 0;
        continue;
    }
    delay_us(SleepIntervalMicroseconds);
    // Note that the elapsed microseconds is really an estimate here.
    // The actual elapsed time will be 5us plus the time it took to execute the code.
    resetElapsedMicroseconds += SleepIntervalMicroseconds;
}
This only gets us started.
After you've received the reset signal, you need to indicate to the other side that you've received it by setting your value HIGH for a certain amount of time.
I'm unable to comment on your code because important parts, like the GPIO setup and the source of the functions called, are missing. However, if the bit timing gives you trouble, you might try this approach:
Using a UART to Implement a 1-Wire Bus Master
Then you don't have to deal with delays and timings other than calculating the UART baud rates.
All STM32 UARTs support one-wire operation with an open-drain GPIO pin. Connect the I/O pin of the device to a UART TX pin. Configure the pin as an open-drain alternate-function output, with the alternate function number of the UART if applicable. Enable the UART and set it to single-wire (half-duplex) operation in the control registers.
Set the UART baud rate to 7407 and send 0xF0; that's the reset pulse. Wait for RXNE and read the UART data register. If it's not 0xF0, then the device is answering with a presence pulse.
Set the UART baud rate to 133333 and you can start communicating. To send a 0 bit, write 0x00 to the UART data register. To send a 1 bit, write 0xFF. To receive a bit, write 0xFF, wait for RXNE, and read the data register. If the byte read back is 0xFF, then it's a 1; otherwise (any other value read) it's a 0.
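A rough register-level sketch of that scheme (my own, not from the linked app note): it assumes the USART is already configured for single-wire half-duplex mode on an open-drain pin, uses the newer ISR/TDR/RDR register names (older families use SR/DR), and relies on a hypothetical helper onewire_set_baud() that reprograms the baud rate register:
extern void onewire_set_baud(uint32_t baud);       // hypothetical helper: reprograms USART1->BRR

static uint8_t onewire_uart_xfer(uint8_t tx)
{
    while (!(USART1->ISR & USART_ISR_TXE)) { }     // wait until the transmit register is empty
    USART1->TDR = tx;
    while (!(USART1->ISR & USART_ISR_RXNE)) { }    // wait for the byte echoed/answered on the wire
    return (uint8_t)USART1->RDR;
}

// Reset: 0xF0 at 7407 baud; anything other than 0xF0 read back means a device answered.
static int onewire_reset(void)
{
    onewire_set_baud(7407);
    return onewire_uart_xfer(0xF0) != 0xF0;
}

// One bit at 133333 baud: 0x00 sends a 0, 0xFF sends a 1; a read is simply a write of 0xFF.
static uint8_t onewire_read_write_bit(uint8_t bit)
{
    onewire_set_baud(133333);
    return onewire_uart_xfer(bit ? 0xFF : 0x00) == 0xFF;
}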

STM32F072RB does not receive/send data over SPI in slave mode

I am using the STM32F072RB uC to receive and transmit data over SPI2 in slave mode with the following configuration:
CR1 = 0x0078
CR2 = 0x0700
AFRH = 0x55353500
MODER = 0xa2a0556a
The register APB1ENR is also properly configured.
The current program just checks the RXNE flag, reads the received data from DR and sends a random value writing to DR.
The status register when I receive data has the following value:
SR = 0x1403
The master sends data properly and I checked the signals at the slave pins (clock phase and polarity are identical on both sides and the NSS signal is cleared before sending SCK and data over MOSI).
I even configured the pins as inputs and I know I could read any digital signal the master could send.
With the current configuration it seems the slave receives something because the RXNE is set when the master sends data but the read value is always 0x00.
I have tried different configurations (software/hardware NSS, different data sizes, etc.) but I always get 0x00.
Moreover, the random value I send after reading DR is not sent to the outputs.
This is my current function, which is called continuously:
unsigned char spi_rx_slave(unsigned char spiPort, unsigned char *receiveBuffer)
{
    uint8_t temp;
    static unsigned long sr;

    if (!spi_isOpen(spiPort))
    {
        sendDebug("%s() Error: spiPort not in use!\r\n", __func__);
        return false;
    }
    if (spiDescriptor[spiPort]->powerdown == true)
    {
        sendDebug("%s() Error: spiPort in powerdown!\r\n", __func__);
        return false;
    }
    /* wait till spi is not busy anymore */
    while ((spiDescriptor[spiPort]->spiBase->SR) & SPI_SR_BSY)
    {
        sendDebug("SPI is busy(1)\r\n");
        vTaskDelay(2);
    }
    sendDebug("CR1 = 0x%04x, ", spiDescriptor[spiPort]->spiBase->CR1);
    sendDebug("CR2 = 0x%04x, ", spiDescriptor[spiPort]->spiBase->CR2);
    sendDebug("AFRH address = 0x%08x, AFRH value = %08x, ", (unsigned long*)(GPIOB_BASE+0x24), *(unsigned long*)(GPIOB_BASE+0x24));
    sendDebug("MODER address = 0x%08x, MODER value = %08x\r\n", (unsigned long*)(GPIOB_BASE), *(unsigned long*)(GPIOB_BASE));

    sr = spiDescriptor[spiPort]->spiBase->SR;
    while (sr & SPI_SR_RXNE)
    {
        /* get RX byte */
        temp = *(uint8_t *)&(spiDescriptor[spiPort]->spiBase->DR);
        spiDescriptor[spiPort]->spiBase->DR = 0x53;
        sendDebug("-------->DR address = 0x%08x, data received: 0x%02x\r\n", &spiDescriptor[spiPort]->spiBase->DR, temp);
        sendDebug("SR = 0x%04x\r\n", sr);
        vTaskDelay(1);
        sr = spiDescriptor[spiPort]->spiBase->SR;
    }
    while ((spiDescriptor[spiPort]->spiBase->SR) & SPI_SR_BSY)
    {
        sendDebug("SPI is busy(2)\r\n");
        vTaskDelay(2);
    }
    return true;
}
What am I doing wrong?
Is there anything I did not configure properly?
Thanks in advance.
Regards,
Javier
Edit:
I switched to software NSS and copied the register values from a STM32CubeMX example I found online. I cannot use those libraries for this project but I would like to have the same behaviour.
The new values are:
CR1 = 0x0278
which means
fPCLK/256 (the proper one for the communication speed),
SPI enabled and
SSM = 1 (software NSS).
CR2 = 0x1700
which means
8-bit data and
RXNE event is generated if the FIFO level is greater than or equal to 1/4 (8-bit).
AFRH = 0x55303500
MODER = 0xa8a1556a
which means
MISO, MOSI and SCK alternate function 5 (SPI2)
NSS is not configured because now it is in software mode (slave is always selected).
I am still getting the same results, while the eval kit with those libraries works fine (using SPI1 instead).
Therefore there must be another issue that has nothing to do with the register values.
Could there be a clock issue, e.g. do the pins need to be clocked?
Thanks!
The question points to a couple of mistakes which may explain why no receive has been observed:
GPIO configuration points to some wrong Alternate Functions / Modes:
The question didn't state it precisely, but I assume that
AFRH = 0x55303500
MODER = 0xa8a1556a
refers to GPIOB (otherwise, it wouldn't make sense with SPI2).
This corresponds to the following pin configuration (see the Reference Manual, sec. 8.4.1, 8.4.10 and the Datasheet, Table 16):
PB15 - Alternate Function - AF5 = [INVALID]
PB14 - Alternate Function - AF5 = [I2C2_SDA]
PB13 - Alternate Function - AF3 = [TSC_G6_IO3]
PB12 - GP Input (reset state)
PB11 - Alternate Function - AF3 = [TIM_CH4]
PB10 - Alternate Function - AF5 = [SPI2_SCK / I2S2_CK]
PB09 - GP Input (reset state)
PB08 - GP Output
PB07 - Alternate Function - (unknown which, see register AFRL)
PB06 - GP Output
PB05 - Alternate Function - (unknown which, see register AFRL)
PB04 - GP Output
PB03 - GP Output
PB02 - Alternate Function - (unknown which, see register AFRL)
PB01 - Alternate Function - (unknown which, see register AFRL)
PB00 - Alternate Function - (unknown which, see register AFRL)
This is obviously not what the software is supposed to do.
Solution: Make sure to configure PB15=>AF0, PB14=>AF0, and either PB13=>AF0 or PB10=>AF5, depending on which pin your hardware uses for SCK.
In order to avoid mistakes in doing so, you should follow the hint of #P__J__ and use descriptive macros for the constants assigned to MODER, AFRH etc.
Using the HAL library provided by ST is a truly controversial subject among SO users, but one should really consider using at least a device header like stm32f072xb.h with macros like GPIO_AFRH_AFSEL15.
If one represents all configuration register values as (bitwise) ORs of such macros, it is easier to re-check the configuration against the datasheets, and the famous rubber duck will immediately know what an unhappy developer is talking about.
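For instance, a minimal sketch of the pin and clock setup with such macros might look like this (assuming PB13/PB14/PB15 are the SPI2 pins at AF0 and names from stm32f072xb.h; verify the AF numbers against your board):
RCC->AHBENR  |= RCC_AHBENR_GPIOBEN;                 /* clock for GPIOB */
RCC->APB1ENR |= RCC_APB1ENR_SPI2EN;                 /* clock for SPI2  */

/* PB13 = SCK, PB14 = MISO, PB15 = MOSI: alternate function mode (0b10) */
GPIOB->MODER = (GPIOB->MODER
                & ~(GPIO_MODER_MODER13 | GPIO_MODER_MODER14 | GPIO_MODER_MODER15))
               |  GPIO_MODER_MODER13_1
               |  GPIO_MODER_MODER14_1
               |  GPIO_MODER_MODER15_1;

/* AF0 = SPI2 on these pins, so simply clear the corresponding AFRH fields */
GPIOB->AFR[1] &= ~(GPIO_AFRH_AFSEL13 | GPIO_AFRH_AFSEL14 | GPIO_AFRH_AFSEL15);
With names like these, a wrong AF number or pin stands out immediately when compared against the datasheet table.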
Other clock activations might be missing:
The question confirms that
The register APB1ENR is also properly configured.
This is correct (as long as bit 14 is set).
Additionally, GPIOB must be powered, i.e., bit 18 of RCC_AHBENR must be set.
See again the Reference Manual, sec. 6.4.8 and 6.4.6.
GPIO pins may be in wrong mode during debugging:
I even configured the pins as inputs and I know I could read any digital signal the master could send. With the current configuration it seems the slave receives something because the RXNE is set when the master sends data but the read value is always 0x00.
Please note that for every GPIO pin, a single mode is selected through the MODER register. If it is set to "Input" (0b00), the alternate function is disconnected, so the SPI peripheral will not see the external signals.