How do I find out the size of the flash memory in the code itself? - stm32

I want to get the limiting address of the flash in the code itself, or at least the size of this flash.
I found only the start address of the flash in the stm32f302xc.h file, but did not find the end address.
/** @addtogroup Peripheral_memory_map
  * @{
  */
#define FLASH_BASE 0x08000000UL /*!< FLASH base address in the alias region */
#define SRAM_BASE 0x20000000UL /*!< SRAM base address in the alias region */
#define PERIPH_BASE 0x40000000UL /*!< Peripheral base address in the alias region */
#define SRAM_BB_BASE 0x22000000UL /*!< SRAM base address in the bit-band region */
#define PERIPH_BB_BASE 0x42000000UL /*!< Peripheral base address in the bit-band region */
Which defines are responsible for this? Thanks.

What you want is described in the reference manual RM0366, section 29.2 "Memory size data register".
ST provides this functionality, but for some reason they don't always give an easy way to access it in the headers.
The address of this register is FLASHSIZE_BASE. You have to read it at run time, e.g.:
uint16_t flash_size_kb = *(const uint16_t*)FLASHSIZE_BASE;
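If you also want the limiting (end) address, here is a minimal sketch, assuming FLASH_BASE and FLASHSIZE_BASE are defined in your device header as shown above:
#include <stdint.h>
#include "stm32f3xx.h"   /* assumed device header providing FLASH_BASE and FLASHSIZE_BASE */

/* The register holds the flash size in kilobytes, so shift by 10 to get bytes. */
static inline uint32_t flash_end_address(void)
{
    uint32_t size_bytes = (uint32_t)(*(const uint16_t *)FLASHSIZE_BASE) << 10;
    return FLASH_BASE + size_bytes - 1U;   /* last valid flash address */
}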

STM32 MCU families come in different flash sizes, which is why the size is not defined in their header files. The STM32CubeMX code generator could, in theory, add it when you select the exact model of your MCU, but then the code might not work if you flash it to a variant with a different flash size.
However, some STM32 MCUs come with a flash size data register that contains the flash size. Its address is often found in a header file, and sometimes a convenience macro is included. Note that it is not a compile-time constant, but a ROM value that needs to be read at runtime.
stm32f3xx_hal_flash_ex.h contains the following register address:
#define FLASH_SIZE_DATA_REGISTER (0x1FFFF7CCU)
You can use it like this:
const size_t FLASH_SIZE = (*((uint16_t*)FLASH_SIZE_DATA_REGISTER)) << 10;
For the stm32g4xx there is a macro defined in stm32g4xx_hal_flash.h:
FLASH_SIZE
Be careful when writing your own macro for this, as the flash size does not always follow directly from the value in the register. In the case of the stm32g4xx, the value 0xFFFF means either 128 KiB or 512 KiB, as seen below:
#define FLASH_SIZE_DATA_REGISTER FLASHSIZE_BASE
#if defined (FLASH_OPTR_DBANK)
#define FLASH_SIZE ((((*((uint16_t *)FLASH_SIZE_DATA_REGISTER)) == 0xFFFFU)) ? (0x200UL << 10U) : \
(((*((uint32_t *)FLASH_SIZE_DATA_REGISTER)) & 0xFFFFUL) << 10U))
#define FLASH_BANK_SIZE (FLASH_SIZE >> 1)
#define FLASH_PAGE_NB 128U
#define FLASH_PAGE_SIZE_128_BITS 0x1000U /* 4 KB */
#else
#define FLASH_SIZE ((((*((uint16_t *)FLASH_SIZE_DATA_REGISTER)) == 0xFFFFU)) ? (0x80UL << 10U) : \
(((*((uint32_t *)FLASH_SIZE_DATA_REGISTER)) & 0xFFFFUL) << 10U))
#define FLASH_BANK_SIZE (FLASH_SIZE)
#define FLASH_PAGE_NB ((FLASH_SIZE == 0x00080000U) ? 256U : 64U)
#endif
The same applies to flash page size. Page size depends on the family and sometimes on the flash size. Sometimes ST provides a macro for this too.

Related

How does STM32 demo USB-DFU boot loader check if user code is loaded?

STM32 HAL demo USB-DFU boot loader contains this code:
/* Test if user code is programmed starting from address 0x0800C000 */
if (((*(__IO uint32_t *) USBD_DFU_APP_DEFAULT_ADD) & 0x2FFC0000) == 0x20000000)
{
/* Jump to user application */
JumpAddress = *(__IO uint32_t *) (USBD_DFU_APP_DEFAULT_ADD + 4);
JumpToApplication = (pFunction) JumpAddress;
/* Initialize user application's Stack Pointer */
__set_MSP(*(__IO uint32_t *) USBD_DFU_APP_DEFAULT_ADD);
JumpToApplication();
}
How does this predicate ((*(__IO uint32_t *) USBD_DFU_APP_DEFAULT_ADD) & 0x2FFC0000) == 0x20000000 determine whether or not user code is loaded on STM32H7A3 MPU?
What is this magic 0x2FFC0000 mask?
It is a very simple, and a very bad, way. It simply checks whether the value at the USBD_DFU_APP_DEFAULT_ADD address (where the initial stack pointer value should be), ANDed with the mask, equals the expected value.
I personally always add the CRC32 at the end of the app to check if the app is there and if the app is valid.
... determine whether or not user code is loaded on STM32H7A3 MPU?
It has nothing to do with the MPU.
This sample code, distributed with the CubeMX STM32Cube_FW_H7_V1.9.0 package, initially verifies that the app start address (stack top) lies in the RAM address space between 0x20000000 and 0x2003FFFF (256k).
For the STM32H7A3ZI (e.g. Nucleo-H7A3ZI-Q) this is incorrect, because the "regular" RAM (not DTCM RAM) starts at address 0x24000000 and is 1024k large. It seems that the correct check for this part should be: if((stackAddr & 0x24E00000) == 0x24000000) ...
Although I do not quite understand why the default stack address configured by CubeMX for this part is 0x24100000, which is the top RAM address + 1.
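A more explicit variant of the same idea, as a sketch only: it assumes the application base is the USBD_DFU_APP_DEFAULT_ADD value from the sample (0x0800C000) and that the usable RAM runs from 0x24000000 to 0x24100000, where the initial stack pointer may legally point one past the end.
#include <stdint.h>

#define APP_BASE_ADDR   0x0800C000UL   /* USBD_DFU_APP_DEFAULT_ADD from the sample */
#define RAM_START       0x24000000UL   /* assumed AXI SRAM start for this part    */
#define RAM_END         0x24100000UL   /* one past the last RAM byte              */

/* Check that the word at the application's base address (the initial
 * stack pointer) points into the expected RAM region before jumping. */
static int app_looks_valid(void)
{
    uint32_t sp = *(volatile uint32_t *)APP_BASE_ADDR;
    return (sp > RAM_START) && (sp <= RAM_END);
}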

STM32 secure boot loader security enable fails consistently with "[SBOOT] Security issue : Execution stopped !"

I have my STM32F405RGT6 secure boot loader running with security flags disabled. So I try to introduce the security flags/options one by one. Independently of which flag I enable in app_sfu.h, the code fails in the first FLOW_CONTROL_CHECK in the SFU_BOOT_SM_VerifyUserFwSignature function in sfu_boot.c
I have added logging that shows exactly what happens:
/* Double security check :
- testing "static protections" twice will avoid basic hardware attack
- flow control reached : dynamic protections checked
- re-execute static then dynamic check
- errors caught by FLOW_CONTROL ==> infinite loop */
TRACE("= [SBOOT] FLOW_CONTROL_CHECK(%x, %x)\n", uFlowProtectValue, FLOW_CTRL_RUNTIME_PROTECT);
FLOW_CONTROL_CHECK(uFlowProtectValue, FLOW_CTRL_RUNTIME_PROTECT);
Output from the trace shows this:
= [SBOOT] FLOW_CONTROL_CHECK(1554b, 30f1)
The FLOW_CONTROL_CHECK macro compares the two values. If they differ, the program fails.
As I understand the code, the uFlowProtectValue contains the run time protection values that are active at the actual execution time, while FLOW_CTRL_RUNTIME_PROTECT is a compile time #define that should be the same as what we're running with.
The core of the problem is that the run time protection value is what I expect it to be, while the compile time #define never differs from 0x30f1.
The #define is built up in ST-provided code that your mother might not approve of, not least because it doesn't seem to work:
/**
 * @brief SFU_BOOT Flow Control : Control values static protections
 */
#define FLOW_CTRL_UBE (FLOW_CTRL_INIT_VALUE ^ FLOW_STEP_UBE)
#define FLOW_CTRL_WRP (FLOW_CTRL_UBE ^ FLOW_STEP_WRP)
#define FLOW_CTRL_PCROP (FLOW_CTRL_WRP ^ FLOW_STEP_PCROP)
#define FLOW_CTRL_SEC_MEM (FLOW_CTRL_PCROP ^ FLOW_STEP_SEC_MEM)
#define FLOW_CTRL_RDP (FLOW_CTRL_SEC_MEM ^ FLOW_STEP_RDP)
#define FLOW_CTRL_STATIC_PROTECT FLOW_CTRL_RDP
/**
 * @brief SFU_BOOT Flow Control : Control values runtime protections
 */
#define FLOW_CTRL_TAMPER (FLOW_CTRL_STATIC_PROTECT ^ FLOW_STEP_TAMPER)
#define FLOW_CTRL_MPU (FLOW_CTRL_TAMPER ^ FLOW_STEP_MPU)
#define FLOW_CTRL_FWALL (FLOW_CTRL_MPU ^ FLOW_STEP_FWALL)
#define FLOW_CTRL_DMA (FLOW_CTRL_FWALL ^ FLOW_STEP_DMA)
#define FLOW_CTRL_IWDG (FLOW_CTRL_DMA ^ FLOW_STEP_IWDG)
#define FLOW_CTRL_DAP (FLOW_CTRL_IWDG ^ FLOW_STEP_DAP)
#define FLOW_CTRL_RUNTIME_PROTECT FLOW_CTRL_DAP
The hex numbers from my trace output above are from when I enable the internal watch dog, IWDG.
The values are XOR'ed from three involved bitmaps:
#define FLOW_CTRL_INIT_VALUE 0x00005776U /*!< Init value definition */
#define FLOW_STEP_UBE 0x00006787U /*!< Step UBE value */
#define FLOW_STEP_IWDG 0x000165baU /*!< Step IWDG value */
The XOR of the first two is 0x30f1, and XOR-ing FLOW_STEP_IWDG into that gives 0x1554b.
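A quick compile-time sanity check of that arithmetic, as a sketch using the values quoted above:
/* Sanity check of the XOR chain using the values quoted above. */
_Static_assert((0x00005776U ^ 0x00006787U) == 0x30F1U, "INIT ^ UBE");
_Static_assert((0x00005776U ^ 0x00006787U ^ 0x000165BAU) == 0x1554BU, "INIT ^ UBE ^ IWDG");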
So the run time value with IWDG enabled is correct, while the compile time value is wrong.
How can that be?
Ok, so this is just too silly: all the involved code is supplied by STMicroelectronics.
The sfu_boot.h file, which uses all the security definitions from app_sfu.h, does not #include app_sfu.h, and there is no built-in check to verify that app_sfu.h has indeed been included somewhere in the include chain. So I added #include "app_sfu.h" to STMicroelectronics' provided sfu_boot.h and the problem went away.
Sorry for the inconvenience :-)

Why does D2 RAM work correctly even when clock is disabled?

TL;DR: documentation states I have to enable a specific memory region in the microcontroller before I can use it. However, I can use it before enabling it, or even after disabling it. How is this possible?
I'm currently developing an application for the STM32H743 microcontroller. I don't understand how the RAM seems to work correctly while the clock is disabled.
This MCU has multiple memories, spread over multiple power domains:
In D1 domain it has ITCMRAM + DTCMRAM + AXI SRAM (64 + 128 + 512 kB)
In D2 domain it has SRAM1 + SRAM2 + SRAM3 (128 + 128 + 32 kB)
In D3 domain it has SRAM4 + Backup SRAM (64 + 4 kB)
I want to use the SRAM1. In the reference manual (RM0433 Rev. 7) it is stated at page 366 that:
If the CPU wants to use memories located into D2 domain (SRAM1, SRAM2 and SRAM3), it has to enable them.
In the register settings at page 452 it is described how to do this:
RCC AHB2 Clock Register (RCC_AHB2ENR):
SRAM1EN: SRAM1 block enable
Set and reset by software.
When set, this bit indicates that the SRAM1 is allocated by the CPU. It causes the D2 domain to
take into account also the CPU operation modes, i.e. keeping D2 domain in DRun when the CPU is
in CRun.
0: SRAM1 interface clock is disabled. (default after reset)
1: SRAM1 interface clock is enabled.
So, the default value (after reset) is 0, which means the SRAM1 interface is disabled.
In this thread on the STM Community forum the question was why the D2 RAM wasn't working correctly and the solution was to enable the D2 RAM clocks. The "correct" way to do this is in SystemInit() (part of the STM32H7 HAL). In system_stm32h7xx.c we can find the following code parts:
/************************* Miscellaneous Configuration ************************/
/*!< Uncomment the following line if you need to use initialized data in D2 domain SRAM (AHB SRAM)
*/
// #define DATA_IN_D2_SRAM
(...)
void SystemInit(void)
{
(...)
#if defined(DATA_IN_D2_SRAM)
/* in case of initialized data in D2 SRAM (AHB SRAM) , enable the D2 SRAM clock (AHB SRAM clock)
*/
# if defined(RCC_AHB2ENR_D2SRAM3EN)
RCC->AHB2ENR |= (RCC_AHB2ENR_D2SRAM1EN | RCC_AHB2ENR_D2SRAM2EN | RCC_AHB2ENR_D2SRAM3EN);
# elif defined(RCC_AHB2ENR_D2SRAM2EN)
RCC->AHB2ENR |= (RCC_AHB2ENR_D2SRAM1EN | RCC_AHB2ENR_D2SRAM2EN);
# else
RCC->AHB2ENR |= (RCC_AHB2ENR_AHBSRAM1EN | RCC_AHB2ENR_AHBSRAM2EN);
# endif /* RCC_AHB2ENR_D2SRAM3EN */
tmpreg = RCC->AHB2ENR;
(void)tmpreg;
#endif /* DATA_IN_D2_SRAM */
(...)
}
So, to use D2 SRAM, the macro DATA_IN_D2_SRAM should be defined (or you must manually enable the clock using __HAL_RCC_D2SRAM1_CLK_ENABLE()).
However, I don't have this macro defined, and even when I manually disable the clocks, the RAM seems to be working perfectly fine.
My main task (I'm running FreeRTOS, and this is the only task right now) is like this:
void main_task(void * argument)
{
__HAL_RCC_D2SRAM1_CLK_DISABLE();
__HAL_RCC_D2SRAM2_CLK_DISABLE();
__HAL_RCC_D2SRAM3_CLK_DISABLE();
mem_test(); // expected to fail, but runs successfully
for (;;) {}
}
The memory test completely fills the D2 SRAM with known data, then calculates a CRC over it. The CRC is correct. I already have verified that the buffer is really placed in the D2 SRAM (memory address 0x30000400 is within the range 0x30000000-0x3001FFFF of SRAM1). The value of RCC->AHB2ENR is confirmed to be 0 (all clocks disabled). I also confirmed that the address of RCC->AHB2ENR is 0x580244DC, as stated in the datasheet.
The data cache is disabled.
What am I missing here? Why is this memory readable and writable when the clocks are disabled?
UPDATE: On request, here is the code of my memory test, from which I conclude that the memory can be written and read successfully:
// NB: The sections are defined in the linker script.
static char test_data_d1[16] __attribute__((section(".RAM_D1_data"))) = "Test data in D1";
static char test_data_d2[16] __attribute__((section(".RAM_D2_data"))) = "Test data in D2";
static char test_data_d3[16] __attribute__((section(".RAM_D3_data"))) = "Test data in D3";
static char buffer_d1[256 * 1024ul] __attribute__((section(".RAM_D1_bss")));
static char buffer_d2[256 * 1024ul] __attribute__((section(".RAM_D2_bss")));
static char buffer_d3[ 32 * 1024ul] __attribute__((section(".RAM_D3_bss")));
static void mem_test(void)
{
// Fill the buffers each with a different test pattern.
fill_buffer_with_test_data(buffer_d1, sizeof(buffer_d1), test_data_d1);
fill_buffer_with_test_data(buffer_d2, sizeof(buffer_d2), test_data_d2);
fill_buffer_with_test_data(buffer_d3, sizeof(buffer_d3), test_data_d3);
uint32_t crc_d1 = crc32b((uint8_t const *)buffer_d1, sizeof(buffer_d1));
uint32_t crc_d2 = crc32b((uint8_t const *)buffer_d2, sizeof(buffer_d2));
uint32_t crc_d3 = crc32b((uint8_t const *)buffer_d3, sizeof(buffer_d3));
printf("CRC buffer_d1 = 0x%08lX\n", crc_d1);
printf("CRC buffer_d2 = 0x%08lX\n", crc_d2);
printf("CRC buffer_d3 = 0x%08lX\n", crc_d3);
assert(0xC29DFAED == crc_d1); // Python: hex(binascii.crc32(16384 * b'Test data in D1\0'))
assert(0x73B70C2A == crc_d2); // Python: hex(binascii.crc32(16384 * b'Test data in D2\0'))
assert(0xC30AE71E == crc_d3); // Python: hex(binascii.crc32(2048 * b'Test data in D3\0'))
}
After lots of testing and investigating I found out that the D2 SRAM was disabled (as documented and expected) in a minimal application using the SysTick and only a few LEDs to make the test results visible. However, when using a timer (TIM1) instead of SysTick, or when enabling a USART, the D2 SRAM was enabled as well, even when I did not enable it in my code. In fact, adding either one of the following lines of code would implicitly enable the D2 SRAM:
__HAL_RCC_TIM1_CLK_ENABLE();
__HAL_RCC_USART3_CLK_ENABLE();
STM support has confirmed this behavior:
D2 SRAM is activated as soon as any peripheral in D2 is activated. It means that if you enable the clock for any peripheral located in the D2 domain (AHB1, AHB2, APB1 and APB2), D2 SRAM is active even if RCC->AHB2ENR is 0.
I'm still looking for a reliable source (reference manual) where this behavior is documented, but it seems to be a plausible explanation.
In practice I think this means that the D2 SRAM will almost always be enabled automagically, so you don't have to care about it, at least for the most common use cases (e.g. when using any D2 peripheral or the DMA controllers). Only when you want to use the D2 SRAM but none of the D2 peripherals do you have to enable the SRAM clocks manually. This would also be the case in the startup code, where (if you choose to place initialized data there) the D2 SRAM is accessed before any of the peripherals are enabled.
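For that last case, a minimal sketch, assuming the same HAL macros used in the question are available for your STM32H7 variant, is simply to enable the D2 SRAM interface clocks before touching any data placed there:
/* Explicitly enable the D2 SRAM interface clocks before using SRAM1/2/3.
 * These are the macros from the question; availability depends on the part. */
__HAL_RCC_D2SRAM1_CLK_ENABLE();
__HAL_RCC_D2SRAM2_CLK_ENABLE();
__HAL_RCC_D2SRAM3_CLK_ENABLE();

mem_test();   /* now safe even if no D2 peripheral clock is enabled */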

STM32 HAL_I2C_Master_Transmit - Why we need to shift address?

After stumbling upon a very strange thing, I would like to find out if anyone could provide a reasonable explanation.
I have an SHT31 humidity sensor running on I2C, and after trying to run it on an STM32F2 it didn't work:
uint8_t __data[5]={0};
__data[0] = SHT31_SOFTRESET >> 8;
__data[1] = SHT31_SOFTRESET & 0xFF;
HAL_I2C_Master_Transmit(&hi2c3,(uint16_t)0x44,__data,2,1000);
I have opened the function and saw:
/**
 * @brief Transmits in master mode an amount of data in blocking mode.
 * @param hi2c Pointer to a I2C_HandleTypeDef structure that contains
 * the configuration information for the specified I2C.
 * @param DevAddress Target device address: The device 7 bits address value
 * in datasheet must be shifted to the left before calling the interface
 * @param pData Pointer to data buffer
 * @param Size Amount of data to be sent
 * @param Timeout Timeout duration
 * @retval HAL status
 */
HAL_StatusTypeDef HAL_I2C_Master_Transmit(I2C_HandleTypeDef *hi2c, uint16_t DevAddress, uint8_t *pData, uint16_t Size, uint32_t Timeout)
{
/* Init tickstart for timeout management*/
uint32_t tickstart = HAL_GetTick();
if (hi2c->State == HAL_I2C_STATE_READY)
....... and it goes ....
So, following the comment and the frustration from my scope (trying to see why my bits were not going out on the wire), I did:
HAL_I2C_Master_Transmit(&hi2c3,((uint16_t)0x44)<<1,__data,2,1000);
Finally my bits are going out and device ACKs me back - voila it works!
But why? What would be the reason behind putting the burden of shifting the address on the programmer?
Because the programmer should probably be made aware of whether they want to read from or write to the I2C slave device.
In common I2C communication the first seven bits of the "address byte" contains the slave address, whereas the last bit is a read/write bit. 0 is write and 1 is read.
In your case, you want to write data to the device (to perform a soft reset) and therefore a simple left shift will do the trick.
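As an illustration (the macro name is my own, not from the HAL), a tiny helper keeps the shift in one place so it cannot be forgotten:
/* Hypothetical convenience macro: takes a plain 7-bit address and shifts it
 * into the left-aligned form the HAL expects. The R/W bit itself is handled
 * by the HAL, so only the shift is needed here. */
#define I2C_7BIT_ADDR(addr7)  ((uint16_t)((addr7) << 1))

/* usage: */
HAL_I2C_Master_Transmit(&hi2c3, I2C_7BIT_ADDR(0x44), __data, 2, 1000);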
It has never been agreed whether an I2C address is to be specified:
such that it needs to be shifted for transmission, or
such that it does not need to be shifted for transmission.
Therefore some device datasheets specify it in variant 1 and some in variant 2. Similarly, some I2C APIs take the address in variant 1 and some in variant 2.
If the device and the API use a different variant, it's the programmer's burden to shift the address.
It creates a lot of confusion and is quite annoying. I doubt it will ever be resolved.
Sorry for the late reply, I just bumped my head against this myself. This should be considered a bug, but ST refuses to acknowledge it as such.
If you look at the I2C section of the reference manual, the OAR1 register stores the address in bits 7:1 for 7-bit mode; bits 0, 8 and 9 are ignored. The HAL routine that sets the address should therefore shift the 7 LSBs so that bits 6:0 of your address get written to bits 7:1 of the OAR1 register. This doesn't happen. Essentially, because the code was released, it is now a "feature" and not a bug.
Another way to look at it is that the address byte you pass to the HAL is left-aligned. This is extremely irritating, as it is not consistent between 7-bit and 10-bit addresses.

How can I change the start address on flash?

I'm using STM32F746ZG and FreeRTOS.
The start address of the flash is 0x08000000, but I want to change it to 0x08040000. I've searched for this on Google but didn't find a solution.
I changed the linker script like the following.
MEMORY
{
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 320K
/* FLASH (rx) : ORIGIN = 0x8000000, LENGTH = 1024K */
FLASH (rx) : ORIGIN = 0x8040000, LENGTH = 768K
}
If I only change that and run under the debugger, it doesn't work.
If I also change VECT_TAB_OFFSET from 0x00 to 0x40000, it works fine.
/* #define VECT_TAB_SRAM */
#define VECT_TAB_OFFSET 0x40000 /* 0x00 */
SCB->VTOR = FLASH_BASE | VECT_TAB_OFFSET;
But if I don't use the debugger, it doesn't work at all.
That means it only works when running via the ST-Link.
Please let me know if you know the solution.
Thank you in advance for your reply.
The boot address can be set in the option bytes.
You can set any address in the flash in 16k increments. There are two 16-bit registers in the option bytes area; one is used when the boot pin is low at reset, the other when the pin is high. Write the desired address shifted right by 14 bits, i.e. divided by 16384.
To boot from 0x08040000, write 0x2010 into the register, as described in the Option bytes programming chapter of the reference manual.
You could also write a bootloader. The bootloader sits at address 0x08000000 and loads your application firmware, meaning it jumps to it.
This is the other way to do it.
You need to place 8 bytes at the original beginning of the flash. The STM32 always boots from address 0x00000000, which is aliased to one of the memories (depending on the boot pins and options).
The first word contains the initial stack pointer, the second one the address of your reset handler. Without those 8 bytes you never get to your code, as the core always boots from the same address.
You will need to modify your linker script and the startup files where the vectors are defined.
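A minimal sketch of the jump itself, assuming the application is linked at 0x08040000 (the same pattern as the DFU boot loader code earlier on this page, using the CMSIS SCB and __set_MSP):
#include <stdint.h>
#include "stm32f7xx.h"   /* assumed device header for SCB and __set_MSP */

/* Sketch of a boot loader jumping to an application linked at 0x08040000.
 * The address is an assumption taken from the question; adapt as needed. */
#define APP_ADDRESS 0x08040000UL

typedef void (*app_entry_t)(void);

static void jump_to_application(void)
{
    uint32_t app_sp    = *(volatile uint32_t *)APP_ADDRESS;        /* initial MSP   */
    uint32_t app_reset = *(volatile uint32_t *)(APP_ADDRESS + 4U); /* reset handler */

    SCB->VTOR = APP_ADDRESS;        /* relocate the vector table */
    __set_MSP(app_sp);              /* set the application stack pointer */
    ((app_entry_t)app_reset)();     /* jump; never returns */
}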