I get an error when setting PCROP on STM32H7 (STM32H743) - STM32CubeIDE

Goal
I'm trying to set a PCROP area on my STM32H743VI microcontroller, but I get the error code HAL_FLASH_ERROR_OB_CHANGE when executing HAL_FLASH_OB_Launch(), and my PCROP area is not set.
The relevant part of the code I'm using is shown in the following sections.
My code
#include "stm32h7xx_hal.h"
FLASH_OBProgramInitTypeDef OBInit;
HAL_FLASHEx_OBGetConfig(&OBInit);
HAL_FLASH_Unlock();
HAL_FLASH_OB_Unlock();
// program OB
OBInit.OptionType = OPTIONBYTE_PCROP;
OBInit.PCROPStartAddr = 0x8030000;
OBInit.PCROPEndAddr = 0x8031000;
OBInit.PCROPConfig = OB_PCROP_RDP_ERASE;
OBInit.Banks = FLASH_BANK_1; // (1, 2 oder BOTH)
HAL_FLASHEx_OBProgram(&OBInit);
/* write Option Bytes */
if (HAL_FLASH_OB_Launch() != HAL_OK) {
// Error handling
while (1)
{
}
}
HAL_FLASH_OB_Lock();
HAL_FLASH_Lock();
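Inside the error branch I read the error code back like this (a small sketch; HAL_FLASH_GetError() is the standard HAL accessor, the while(1) loop above is just my placeholder error handling):

/* Called from the branch where HAL_FLASH_OB_Launch() != HAL_OK.
 * HAL_FLASH_GetError() returns the accumulated flash error flags and,
 * in my case, contains HAL_FLASH_ERROR_OB_CHANGE. */
uint32_t flashError = HAL_FLASH_GetError();
(void)flashError; /* inspected in the debugger */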
The code is mostly inspired by the YouTube video "Security Part3 - STM32 Security features - 07 - PCROP lab" (by STMicroelectronics) and by the working code I use to change the RDP level.
Setup
My secret function is in the expected address range; I did that by adding the memory area in the FLASH.ld file (the linker script)
MEMORY
{
  [...]
  PCROP (x) : ORIGIN = 0x08030000, LENGTH = 16K
}
and by putting the file containing the function into the section accordingly
SECTIONS
{
  [...]
  .PCROPed :
  {
    . = ALIGN(4);
    *led_blinking.o (.text .text*)
    . = ALIGN(4);
  } > PCROP
  [...]
}
I set the flag -mslow-flash-data for the file I'm keeping my secret function in. I did this without really understanding why, following the tutorial in the video (see above).
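As far as I understand it, the flag matters for PCROP because the protected region is execute-only: a literal pool that GCC would normally place next to the code is read as data at run time and would fault, while -mslow-flash-data makes the compiler build large constants with movw/movt instructions instead of literal-pool loads. A minimal illustration (secret_scale is a made-up example, not my actual secret function):

#include <stdint.h>

/* Without -mslow-flash-data, GCC may load the constant with a PC-relative
   ldr from a literal pool stored right after this function. If the function
   is placed in the PCROP (execute-only) region, that data read is blocked;
   with the flag, the constant is built in registers via movw/movt and no
   data read from flash is needed. */
uint32_t secret_scale(uint32_t x)
{
    return x * 0x12345678u;
}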
What I tried
I debugged my program with my J-Trace debugger and it looks like I am performing the option byte modification sequence described in the STM32H743/753 reference manual (p. 159) successfully.
Securing an entire flash sector (start 0x08020000, end 0x0803FFFF) didn't work either, even though I did not expect it to make a difference.
I also tried the other option for PCROPConfig, namely the OB_PCROP_RDP_NOT_ERASE option.
HAL_FLASHEx_OBProgram(&OBInit) works as intended, and the OBInit configuration is written correctly into the FLASH->PRAR_PRG1 register. For my code, the content of the register is 0x80880080.
I disconnected and reconnected the microcontroller from power and from the debugger, in case I did not perform the POR correctly.
I checked the errata sheet, but there is no erratum that applies to my problem.
EDIT
I changed the area that is protected by PCROP in the "My code" section.
I did this because my code is otherwise generally functional, and I found out that it is not a good idea to protect an area in the first flash sector!

Hmm, the code looks good so far.
Are you sure PCROP is disabled before you set it again?
Check whether PRAR_CUR1 is already set from somewhere else: programming PCROP will fail if a protected area is already active.
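For example, a minimal sketch of that check (the field and register names come from the H7 HAL/CMSIS headers; the "active area" test relies on the reference manual's convention that a disabled area has its start above its end):

FLASH_OBProgramInitTypeDef obCur = {0};

obCur.Banks = FLASH_BANK_1;        /* select the bank to query */
HAL_FLASHEx_OBGetConfig(&obCur);   /* fills PCROPStartAddr/PCROPEndAddr from PRAR_CUR1 */

if (obCur.PCROPEndAddr >= obCur.PCROPStartAddr)
{
    /* A PCROP area is already active in bank 1; it has to be removed first
       (see the PCROP removal procedure in the reference manual) before a
       new area can be programmed. */
}

/* Or inspect the raw register directly, e.g. in the debugger: */
uint32_t prarCur1 = FLASH->PRAR_CUR1;
(void)prarCur1;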

Related

OSDev: OS fails when I deinit physical memory regions

I am currently running into a problem. I am making my own OS and currently have no problems except for this one.
I don't know whether it is my linker script or something else, but for some reason, when I attempt to deinit a region of physical memory, QEMU simply doesn't show anything on the screen.
As soon as I comment the function out, it works just fine. I tried packing the binary file full of random junk as well; the OS still works fine and everything displays in QEMU. I tried doing hefty function calls to things the OS doesn't even support yet, and it still runs. But for some reason, when I call my function to de-init a region of used physical memory, the whole thing just crashes and doesn't work.
I am only including the linker script below; I will link my GitHub page for the OS project for anyone to skim through it and possibly help out. This is seriously super annoying.
SECTIONS {
    .text 0x1F00 :
    {
        *(kernel_entry);
        *(.text*);
    }

    .idt BLOCK(.) : ALIGN(.)
    {
        _idt = .;
        . = . * 0x100;
    }

    .rodata :
    {
        *(.rodata*);
    }

    .bss :
    {
        *(.bss*);
    }

    end = .;
}
I rolled my own bootloader as well. The GitHub layout is as follows:
All bootloader content is in the folder Bootloader, and all kernel content (including the linker script) is in the folder Kernel.
GitHub page

STM32 - Cortex M3: STIR register

I'm trying to write code to generate a software-triggered interrupt. The datasheets seem confusing to me and I cannot find any example on the net.
In the "Cortex-M3 Devices Generic User Guide", section 4.2.8, I read that writing to the NVIC->STIR register generates a software interrupt:
[STIR register][1]
But, if I understood correctly, USERSETMPEND must be set to 1 for unprivileged software to use it (I think that's my case).
But if I want to access the SCR, I have to be "privileged":
[SCR register][2]
So, how can I enter privileged mode? Is it really necessary to enter this mode just for interrupt configuration?
What I've done in code is
SCB->CCR |= 0x02;                // USERSETMPEND = 1. But I must be "privileged"
HAL_NVIC_EnableIRQ(EXTI0_IRQn);  // enable interrupt line

// .... some code ....

if (my_software_trigger)
{
    my_software_trigger = 0;
    NVIC->STIR = EXTI0_IRQn;     // triggers interrupt??
}
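A minimal sketch of the alternative I'm considering, assuming the code runs in privileged Thread mode (the reset default on Cortex-M3): the CMSIS helper pends the interrupt through NVIC->ISPR, so neither STIR nor USERSETMPEND is involved (trigger_exti0_sw is just an illustrative name):

/* Assumes a device header for a Cortex-M3 part (e.g. stm32f1xx_hal.h) is
   already included, so EXTI0_IRQn and the CMSIS core functions are known. */
void trigger_exti0_sw(void)
{
    HAL_NVIC_EnableIRQ(EXTI0_IRQn);   /* enable the interrupt line */
    NVIC_SetPendingIRQ(EXTI0_IRQn);   /* set the pending bit; the handler
                                         runs if priority/masking allow */
}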
Is this the correct approach? Does anyone have experience in this matter? Any help is really appreciated.
P.S. Sorry for my English.
Marco.

What is the proper way to change properties/generated code of a custom HID in STM32CubeIDE

I'm trying to create a custom HID device with an STM32F103C8. The IDE I chose is STM32CubeIDE, and the tutorial I was following is on ST's official YouTube channel.
ST offers a great tool, the "Device Configuration Tool", where I can configure the microcontroller, and a lot of code is generated based on my configuration. The generated code has "user code" sections where the user adds his own logic; if he needs to reconfigure the microcontroller, the "Device Configuration Tool" will not remove those parts of the code.
Problem:
To configure a custom USB HID I need to change code generated by the "Device Configuration Tool" in places where there is no room for user code, and those changes will be removed if I run the "Device Configuration Tool" again.
The only fields I can set in the "Device Configuration Tool" are these:
But that is not enough. I also need to change the CUSTOM_HID_EPIN_SIZE and CUSTOM_HID_EPOUT_SIZE defines, which represent the number of bytes the device and host send to each other at once. And if I change the size of the "data pack", I will also need to change the default generated callback function that receives and processes that data. For example, the tool generates code like this:
{
  USBD_CUSTOM_HID_HandleTypeDef *hhid = (USBD_CUSTOM_HID_HandleTypeDef *)pdev->pClassData;

  if (hhid->IsReportAvailable == 1U)
  {
    ((USBD_CUSTOM_HID_ItfTypeDef *)pdev->pUserData)->OutEvent(hhid->Report_buf[0],
                                                              hhid->Report_buf[1]);
    hhid->IsReportAvailable = 0U;
  }

  return USBD_OK;
}
but I need a pointer to Report_buf, not a copy of its first 2 elements; the default generated code passes only a copy of the first 2 bytes, and I can't change this in the "Device Configuration Tool".
My current solution:
I actually solved this issue, and it works, but I don't think I solved it the right way. I changed the template files located in "STM32CubeIDE_1.3.0\STM32CubeIDE\plugins\com.st.stm32cube.common.mx_5.6.0.202002181639\db\templates"
and also changed the files at "STM32CubeIDE_1.3.0\en.stm32cubef1.zip_expanded\STM32Cube_FW_F1_V1.8.0\Middlewares\ST\STM32_USB_Device_Library\Class\HID".
I don't think this is the right way to do it. Does anyone know the right way to do this?
I also found the same question on the ST forum here, but it was not resolved.
What you want to achieve is exactly what the ST trainer explains in this video:
https://www.youtube.com/watch?v=3JGRt3BFYrM
The trainer explains step by step how to change the code to use a pointer to the buffer:
if (hhid->IsReportAvailable == 1U)
{
  ((USBD_CUSTOM_HID_ItfTypeDef *)pdev->pUserData)->OutEvent(hhid->Report_buf);
  hhid->IsReportAvailable = 0U;
}

OpenCASCADE crash when calling Transfer()

I have tested two cases:
I use STEPCAFControl_Reader and then STEPControl_Reader to read my STEP file, but both methods crash when I call STEPCAFControl_Reader::Transfer, respectively STEPControl_Reader::TransferRoots.
Using STEPControl_Reader, I printed a log to my console, and there is a message like this:
1 F:(BOUNDED_SURFACE,B_SPLINE_SURFACE,B_SPLINE_SURFACE_WITH_KNOTS,GEOMETRIC_REPRESENTATION_ITEM,RATIONAL_B_SPLINE_SURFACE,REPRESENTATION_ITEM,SURFACE): Count of Parameters is not 1 for representation_item
EDIT:
There is a null reference inside the TransferRoots() method.
const Handle(Transfer_TransientProcess) &proc = thesession->TransferReader()->TransientProcess();

if (proc->GetProgress().IsNull())
{
    // This check does not exist in the original source code
    std::cout << "GetProgress is null" << std::endl;
    return 0;
}

Message_ProgressSentry PS (proc->GetProgress(), "Root", 0, nb, 1);
Both my app and FreeCAD crash, but if I use CAD Assistant, which is the official OCC viewer, the file loads.
It looks like the comments already provide an answer to the question - or, more precisely, several answers:
STEPCAFControl_Reader::ReadFile() returns a reading status, which should be checked before calling STEPCAFControl_Reader::Transfer().
Normally, it is good practice to put OCCT algorithms into a try/catch block and check for OCCT exceptions (Standard_Failure).
Add OCC_CATCH_SIGNALS at the beginning of try statements (required only on Linux) and call OSD::SetSignal(false) within working thread creation to redirect abnormal cases (access violation, NULL dereference and others) to C++ exceptions (OSD_Signal, which is a subclass of Standard_Failure). This may conflict with other signal handlers in a mixed environment - so also check the documentation of other frameworks used by the application.
If you catch failures like a NULL dereference when calling an OCCT algorithm with valid arguments, this is a bug in OCCT which should be fixed one way or another, even if the input STEP file contains syntax/logical errors that trigger this kind of issue. Report the issue on the OCCT Bugtracker with sufficient information for reproducing the bug, including sample files - just telling developers that OCCT crashes somewhere is not helpful. Consider also contributing to this open source project by debugging the OCCT code and suggesting patches.
Check the STEP file reading log for possible errors in the file itself. Consider reporting an issue to the system producing the broken file, even if the main file content can be loaded by STEP readers.
It is common practice to use OSD::SetSignal() within OCCT-based applications (like CAD Assistant) to improve their robustness against non-fatal errors in application/OCCT code. It is more user friendly to report an internal error message instead of silently crashing.
It should be noted, though, that OSD::SetSignal() guarantees neither that the application will not crash nor that it will keep working properly after catching such a failure - due to the asynchronous nature of some signals, memory may already be corrupted at the moment the C++ exception is raised, leading to all kinds of undesired behavior. For that reason, it is better not to ignore this kind of exception, even if the application seems to work fine with it.
OSD::SetSignal(false); // should be called once at application startup

STEPCAFControl_Reader aReader;
try
{
    OCC_CATCH_SIGNALS // necessary for redirecting signals on Linux
    if (aReader.ReadFile (theFilePath) != IFSelect_RetDone) { return false; }
    if (!aReader.Transfer (myXdeDoc)) { return false; }
}
catch (Standard_Failure const& theFailure)
{
    std::cerr << "STEP import failed: " << theFailure.GetMessageString() << "\n";
    return false;
}
return true;

CLIPS (clear) command fails / throws exception in pyclips

I have a pyclips / CLIPS program for which I wrote some unit tests using pytest.
Each test case involves an initial clips.Clear() followed by the execution of real CLIPS COOL code via clips.Load(rule_file.clp). Running each test individually works fine.
Yet, when telling pytest to run all tests, some fail with ClipsError: S03: environment could not be cleared. In fact, it depends on the order of the tests in the .py file. There seem to be test cases that cause the subsequent test case to throw the exception.
Maybe some CLIPS code is still "in use", so that the clearing fails?
I read here that (clear)
Clears CLIPS. Removes all constructs and all associated data structures (such as facts and instances) from the CLIPS environment. A clear may be performed safely at any time, however, certain constructs will not allow themselves to be deleted while they are in use.
Could this be the case here? What is causing the (clear) command to fail?
EDIT:
I was able to narrow down the problem. It occurs under the following circumstances:
test_case_A comes right before test_case_B.
In test_case_A there is a test such as
(test (eq (type ?f_bio_puts) clips_FUNCTION))
but f_bio_puts has been set to
(slot f_bio_puts (default [nil]))
So testing the type of a slot variable, which has been set to [nil] initially, seems to cause the (clear) command to fail. Any ideas?
EDIT 2
I think I know what is causing the problem. It is the test line. I adapted my code to make it run in the CLIPS Dialog Window, and I got this error when loading via (batch ...):
[INSFUN2] No such instance nil in function type.
[DRIVE1] This error occurred in the join network
Problem resided in associated join
Of pattern #1 in rule part_1
I guess it is a bug in pyclips that this error is masked.
Change the EnvClear function in the CLIPS source file construct.c, adding the following lines of code to reset the error flags:
globle void EnvClear(
  void *theEnv)
{
   struct callFunctionItem *theFunction;

   /*==============================*/
   /* Clear error flags if issued  */
   /* from an embedded controller. */
   /*==============================*/
   if ((EvaluationData(theEnv)->CurrentEvaluationDepth == 0) &&
       (! CommandLineData(theEnv)->EvaluatingTopLevelCommand) &&
       (EvaluationData(theEnv)->CurrentExpression == NULL))
   {
      SetEvaluationError(theEnv,FALSE);
      SetHaltExecution(theEnv,FALSE);
   }

   /* ... the rest of the original EnvClear() body is unchanged ... */
}