STM32F7x6 - Setting readout protection from code without power cycle - stm32

I am working on a custom battery-powered board with an STM32F746 on which I would like to set the RDP option byte to level 1 from code. Basically, during boot the software checks whether the option bytes are set correctly and, if not, reprograms them to the correct values. This way any unit shipped to customers is guaranteed to have read-out protection set, and we don't have to worry about the assembly house forgetting to set it and thus exposing the firmware.
To do this, I follow the procedure in the reference manual:
1. Unlock the option bytes by writing the correct keys to FLASH_OPTKEYR, clearing OPTLOCK.
2. Set the desired option values in FLASH_OPTCR.
3. Set OPTSTRT in FLASH_OPTCR.
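In code, the sequence looks roughly like this (a minimal sketch using the CMSIS register names from ST's stm32f7xx device header; the unlock keys and the RDP field position are taken from the reference manual, so verify them for your exact part):
```c
/* Sketch: program RDP level 1 from application code (STM32F7).
 * Register/bit names assume ST's CMSIS device header. */
#include "stm32f7xx.h"

static void set_rdp_level1(void)
{
    while (FLASH->SR & FLASH_SR_BSY) { }      /* wait for ongoing flash ops */

    FLASH->OPTKEYR = 0x08192A3B;              /* option byte unlock key 1 */
    FLASH->OPTKEYR = 0x4C5D6E7F;              /* option byte unlock key 2 */

    /* RDP occupies bits 15:8 of FLASH_OPTCR; any value other than
     * 0xAA (level 0) or 0xCC (level 2) selects level 1, e.g. 0x55. */
    FLASH->OPTCR = (FLASH->OPTCR & ~(0xFFu << 8)) | (0x55u << 8);

    FLASH->OPTCR |= FLASH_OPTCR_OPTSTRT;      /* launch option programming */
    while (FLASH->SR & FLASH_SR_BSY) { }

    FLASH->OPTCR |= FLASH_OPTCR_OPTLOCK;      /* re-lock the option bytes */
}
```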
This works fine for all option bytes except RDP, which locks up the MCU after OPTSTRT is set. As far as I can tell this is intended behaviour, and it requires a reset of the device through a power cycle, after which the option byte is set correctly. But since our board is battery powered (soldered connection), this is a bit cumbersome, so my first question is whether there exists another way of doing this.
A bit of googling revealed that some STM32s can also activate the altered option bytes by transitioning out of the Standby (STBY) state; here is a specific example by ST: https://www.youtube.com/watch?v=f7vs0NwZPFo&ab_channel=STMicroelectronics.
So far I have had no luck with this method: when returning from STBY, the RDP is not set. The example deals with an STM32L4; does anyone know if this method is supported on the F7?
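For reference, this is roughly what I tried (reusing the set_rdp_level1() sketch from above; HAL names are from stm32f7xx_hal_pwr.h, and the wakeup source is just a placeholder for whatever your board uses):
```c
/* Sketch: program the option bytes, then enter Standby hoping the
 * option-byte loader runs on wakeup, as in ST's L4 example video.
 * Wakeup-pin setup is board-specific and only illustrative here. */
set_rdp_level1();

__HAL_PWR_CLEAR_FLAG(PWR_FLAG_WU);          /* clear a stale wakeup flag */
HAL_PWR_EnableWakeUpPin(PWR_WAKEUP_PIN1);   /* example wakeup source */
HAL_PWR_EnterSTANDBYMode();                 /* exits Standby via reset */
```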

Related

How do I add a missing peripheral register to a STM32 MCU model in Renode?

I am trying out this MCU / SoC emulator, Renode.
I loaded their existing model template under platforms/cpus/stm32l072.repl, which just includes the repl file for stm32l071 and adds one little thing.
When I then load & run a program binary built with STM32CubeIDE and ST's LL library, the code gets stuck in the initial SystemClock_Config() function, where the Flash:ACR register is polled in a loop to observe an expected change in value, while the Renode Monitor window outputs:
[WARNING] sysbus: Read from an unimplemented register Flash:ACR (0x40022000), returning a value from SVD: 0x0
This seems to be expected; not all existing templates model everything out of the box. I also found that the stm32l071 model is missing some of the USARTs and NVIC channels. I saw how the latter might be added, but there seems to be not a single default model defining that Flash:ACR register that I could use as an example.
How would one add such a missing register for this particular MCU model?
Note1: For this test, I'm using an STM32 firmware binary which works as intended on actual hardware, e.g. a devboard for this MCU.
Note2: The stated advantage of Renode over QEMU (which apparently does not emulate peripherals) is that it also lets you stick together a more complex system out of mocked external devices, e.g. I2C and others (apparently C# modules; I have not looked into that yet).
They say "use the same binary as on the real system".
That is my reason for trying this out - it sounds like a lot of potential for implementing systems where the hardware is not yet fully available, and also for automated testing.
So the obvious workaround, commenting out a lot of parts in the init code to only test some hardware-independent code while sidestepping such issues, would defeat the purpose here.
If you want to just provide the ACR register for the flash to pass your init, use a tag.
You can either provide it via REPL (recommended, like here https://github.com/renode/renode/blob/master/platforms/cpus/stm32l071.repl#L175) or via RESC.
Assuming that your software wants to read the value 0xDEADBEEF, in the repl you'd use:
sysbus:
    init:
        Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
In the resc or in the Monitor it would be just:
sysbus Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
If you want more complex logic, you can use a Python peripheral, as described in the docs (https://renode.readthedocs.io/en/latest/basic/using-python.html#python-peripherals-in-a-platform-description):
```
flash: Python.PythonPeripheral @ sysbus 0x40022000
    size: 0x1000
    initable: false
    filename: "script_with_complex_python_logic.py"
```
If you really need an advanced implementation, then you need to create a complete C# model.
As you correctly mentioned, we do not want you to modify your binary. But we're ok with mocking some parts we're not interested in for a particular use case if the software passes with these mocks.
Disclaimer: I'm one of the Renode developers.

Is there a process to remap/switch CANTX/CANRX pins on the STM32H7 FDCAN module?

Testing out an STM32 with an FDCAN module (updated from the older BxCAN peripheral). CAN Classic at 500kbps.
I am running into an issue where, when using the default pair of pins (D0/D1 in my case), I get expected behavior, but when switching the pins to the secondary option (B8/B9) using GPIO remapping, I get strange output on the bus.
I tried different baud settings and options like protocol exception handling. Nothing seems to explain where this scope output is coming from.
I'm using the HAL to get this working, so I'm certain I'm not missing any registers for the remapping. I've DeInit'ed and ReInit'ed the FDCAN module, started/stopped it, etc. There seems to be no documented "process" for remapping pins - the entire FDCAN section of the reference manual doesn't contain the letters GPIO.
Picture: Yellow is the CANTX 0-3V signal (low is dominant). Purple is the CAN+ signal, which idles at 2.5V and pulls past ~3.5V on a dominant. There is nothing else on this line, so I'm not concerned about the sawtoothing. The large initial CAN "SOF" pulse has the wrong timing. The long recessives are nonsense. The small value-1 bits, though, have the correct 2 µs width for 500 kbps. Changing the data put into the FDCAN FIFO makes no difference; the output is always the same.
Solved.
After sending this message, I found the INIT bits set in the FDCAN->CCCR register and values in the error counters, which indicates an internal error. I was using the HAL as a time saver, but it was overwriting my desired GPIO settings.
I would set pins B8/B9 to AF mode for FDCAN, then call FDCAN_DeInit/Init, which via an MSP_INIT callback also calls GPIO Init - but for the original D0/D1 pins. That meant both the B8/B9 pins I had set and the D0/D1 pins were enabled at the same time.
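For anyone hitting the same thing, the shape of the fix is to put the desired pin setup into the HAL's MSP hook itself, so nothing else configures D0/D1 behind your back. A rough sketch (PB8/PB9 as FDCAN1 RX/TX and GPIO_AF9_FDCAN1 are assumptions from the H7 datasheet and HAL headers - verify for your exact part):
```c
/* Sketch: let the HAL's MSP hook configure only the pins we actually
 * want (PB8 = FDCAN1_RX, PB9 = FDCAN1_TX), so the default D0/D1 setup
 * never gets applied. AF number assumed from the H7 datasheet. */
void HAL_FDCAN_MspInit(FDCAN_HandleTypeDef *hfdcan)
{
    GPIO_InitTypeDef gpio = {0};

    __HAL_RCC_FDCAN_CLK_ENABLE();
    __HAL_RCC_GPIOB_CLK_ENABLE();

    gpio.Pin       = GPIO_PIN_8 | GPIO_PIN_9;
    gpio.Mode      = GPIO_MODE_AF_PP;
    gpio.Pull      = GPIO_NOPULL;
    gpio.Speed     = GPIO_SPEED_FREQ_VERY_HIGH;
    gpio.Alternate = GPIO_AF9_FDCAN1;
    HAL_GPIO_Init(GPIOB, &gpio);
}
```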
This is an obvious problem. The HAL is fine for prototyping, but be careful, because it will try to "help". Undefined behavior at best, and I normally wouldn't even post such a dumb mistake.
However... maybe someone else will find it interesting that whatever the FDCAN state machine is doing here produces the unique output seen in the scope picture. I initially didn't double-check my pin setup because it looked right - I was getting output on the scope, just the wrong output - so I spent much more time going over peripheral settings and the data I was feeding to it.

What is the difference between Program Status Word (PSW) and Program Counter (PC)?

In an Operating Systems course, the instructor introduced PSW and PC when he talked about Interrupt Handling.
His explanation was
PC holds the address of the next instruction to be fetched
PSW contains execution status information
But later I searched online and found that PSW = PC + status register, which confused me.
On the one hand, I am not sure what "execution status information" refers to. On the other hand, if the PSW already covers the function of the PC, why do we still need a separate PC?
Appreciate any explanation.
This isn't really standardized terminology. Most architectures have some register that plays the role of a status word, containing bits to indicate things like whether an add instruction caused a carry. But different architectures give it different names, and what exactly is included can vary widely. I'm not aware of any architecture that includes the program counter as part of their status word, but if they want to do that, well, who's going to stop them?
This is the kind of thing where you just have to look at the definition given by whatever book or article you are reading (or infer it from context), and realize that a different author may use the word differently.
In general, interrupts are hardware-level subroutine calls. They do the same thing as a subroutine call (change the algorithm that the processor is executing), but they do it without warning the "executing code" that they are now operating.
In order not to damage the "executing code", all information that it was using must be stored. This includes the Program Counter (usually saved to the stack by the interrupt hardware, the same way a subroutine call does it) and all of the registers that the interrupt function will alter - these must be saved by pushing them onto the stack. The registers etc. must be restored before the return-from-interrupt (RETI) instruction; the PC is restored by the RETI itself.
The PSW (often called the flag register) is a very important register and must generally be saved first. It contains bits like Zero (the last calculation produced a zero result), Carry (the last calculation produced a carry, i.e. the result is bigger than the register can hold) and several other flags. I suggest reading the datasheet of an 8-bit microcontroller for an idea of what these flags might be. Suffice it to say that these flags are needed to perform conditional jumps, and while they will often be ignored, you can't take that chance.
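To make flags like Carry concrete, here is a tiny C sketch (hypothetical values; any C compiler) showing the condition an 8-bit Carry flag records:
```c
/* Sketch: the condition an 8-bit Carry flag records. The CPU sets
 * Carry when an addition overflows the register width; here we
 * detect the same condition manually. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 200, b = 100;
    uint8_t sum = (uint8_t)(a + b);   /* 300 wraps to 44 in 8 bits */
    int carry = sum < a;              /* a carry out of bit 7 occurred */
    printf("sum=%u carry=%d\n", sum, carry);  /* prints: sum=44 carry=1 */
    return 0;
}
```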
You are probably right that your instructor is using the term PSW to mean all of these registers.
The subject of interrupts involves concepts that are common to subroutine calls in general (e.g. don't leave data that you don't want overwritten in a register before entering a subroutine), and, later on in operating systems, to context switches that occur during multitasking.
Peter

How to read STM32H7 SRAM ECC status?

I want to periodically check the state of the RAM ECC detection in my firmware on an STM32H753.
The ECC mechanism is well described in the Reference Manual for the Flash but not very well (in my opinion) for the RAM.
I've seen that there are dedicated interrupts for RAM ECC, but it is not clear whether they are triggered for single errors or double errors.
Also, why a dedicated interrupt and not a BusFault (as is done for the Flash, if I understood correctly)?
Also, I don't want to get interrupted for a single error (which is properly corrected). Instead, I want to read the status periodically and log the error somewhere. Which register do I read to check whether a single error happened?
I got an answer on the STMicro forum and thought it might be helpful to post it here. The answer is from a member called Berendi:
"RAMECC status and interrupt flags are defined in the stm32h7??xx.h headers, and there are some code examples in stm32h7xx_hal_ramecc.h/.c
Apparently it's possible to independently mask single/double ecc fault interrupts.
There is application note AN5342 as well, but it's not very helpful."
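Building on that answer, a minimal polling sketch at the register level might look like this (the monitor instance and the SEDCF/DEDF bit names are assumptions taken from ST's CMSIS device header and RM0433's RAMECC section - verify them against your header):
```c
/* Sketch: periodically poll one RAMECC monitor instead of enabling its
 * interrupt. Register/bit names (RAMECC1_Monitor1, SEDCF, DEDF) assume
 * ST's stm32h753xx.h header; check them against RM0433. */
#include "stm32h7xx.h"

void ramecc_poll(void)
{
    RAMECC_MonitorTypeDef *mon = RAMECC1_Monitor1;

    if (mon->SR & RAMECC_SR_SEDCF) {     /* single error detected & corrected */
        uint32_t failing = mon->FAR;     /* failing address offset */
        /* ...log 'failing' somewhere... */
        (void)failing;
        mon->SR &= ~RAMECC_SR_SEDCF;     /* clear the flag */
    }
    if (mon->SR & RAMECC_SR_DEDF) {      /* double error: uncorrectable */
        /* ...escalate: log, reset, etc... */
        mon->SR &= ~RAMECC_SR_DEDF;
    }
}
```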

Where is the mode bit?

I just read this in "Operating System Concepts" by Silberschatz, p. 18:
A bit, called the mode bit, is added to the hardware of the computer
to indicate the current mode: kernel(0) or user(1). With the mode bit,
we are able to distinguish between a task that is executed on behalf
of the operating system and one that is executed on behalf of the
user.
Where is the mode bit stored?
(Is it a register in the CPU? Can you read the mode bit? As far as I understand it, the CPU has to be able to read the mode bit. How does it know which program gets mode bit 0? Do programs at a special address get mode bit 0? Who sets the mode bit / how is it set?)
Please note that your question depends highly on the CPU itself; though it's uncommon, you might come across certain processors where this concept of user level/kernel level does not even exist.
The cs register has another important function: it includes a 2-bit
field that specifies the Current Privilege Level (CPL) of the CPU. The
value 0 denotes the highest privilege level, while the value 3 denotes
the lowest one. Linux uses only levels 0 and 3, which are respectively
called Kernel Mode and User Mode.
(Taken from "Understanding the Linux Kernel 3e", section 2.2.1)
Also note that this depends on the CPU, as you can clearly see, and it'll change from one to another, but the concept generally holds.
Who sets it? Typically the kernel/CPU; a user process cannot change it. But let me explain something here.
**This is an over-simplification; do not take it literally.**
Let's assume that the kernel is loaded and the first application has just started (the first shell). The kernel loads everything for this application to start, sets the bit in the cs register (if you are running x86) and then jumps to the code of the shell process.
The shell will continue to execute all of its instructions in this context. If the process contains some privileged instruction, the CPU will fetch it but won't execute it; instead it raises a (hardware) exception that tells the kernel someone tried to execute a privileged instruction, and the kernel code handles the job from there (the CPU sets cs to kernel mode and jumps to some known location that handles this type of error - maybe terminating the process, maybe something else).
So how can a process do something privileged? Talking to a certain device for instance?
Here come system calls: the kernel will do this job for you.
What happens is the following:
You set what you want in certain registers (for instance: you want to access a file, the file location is x, you are accessing it for reading, etc. - the kernel documentation will tell you about this) and then (on x86) you execute the int 0x80 instruction.
This interrupts the CPU, stops your work, sets the mode to kernel mode, and jumps the IP register to a known location holding the code that serves file-I/O requests, and execution continues from there.
Once your data is ready, the kernel puts it in a place you can access (a memory location or a register; it depends on the CPU/kernel/what you requested), sets the cs flag back to user mode and jumps back to the instruction right after the int 0x80.
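As a concrete illustration, here is a minimal sketch of a raw system call through int 0x80 (32-bit x86 Linux only; syscall number 4 = write and the eax/ebx/ecx/edx convention come from the classic i386 ABI, and you'd build it with gcc -m32):
```c
/* Sketch: calling write(2) directly through int 0x80 on 32-bit x86
 * Linux. eax = syscall number (4 = write), ebx/ecx/edx = arguments,
 * return value comes back in eax. Build with: gcc -m32 demo.c */
#include <stddef.h>

static long sys_write(int fd, const void *buf, size_t len)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4), "b"(fd), "c"(buf), "d"(len)
                      : "memory");
    return ret;
}

int main(void)
{
    sys_write(1, "hello via int 0x80\n", 19);
    return 0;
}
```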
Finally, this happens whenever a switch occurs: the kernel gets notified that something happened, so the CPU finishes your current instruction, changes the CPU status and jumps to the code that handles that event. The process explained above is, roughly speaking, how a switch between kernel mode and user mode happens.
It's a CPU register. It's only accessible if you're already in kernel mode.
The details of how it gets set depend on the CPU design. In most common hardware, it gets set automatically when executing a special opcode that's used to perform system calls. However, there are other architectures where certain memory pages may have a flag set that indicates that they are "gateways" to the kernel -- calling a function on these pages sets the kernel mode bit.
These days it's given other names such as Supervisor Mode or a protection ring.