I am using a NUCLEO-H755ZI-Q board, which has two cores, a Cortex-M4 and a Cortex-M7. To enable lwIP I have to enable the CPU ICache and the CPU DCache.
But at runtime the program exits through the SCB_EnableDCache() function. Kindly help me if I have missed something.
I was able to resolve this issue. It was specific to CubeIDE 1.7: when I rolled back to 1.6 it worked fine. The issue is with the code generation from CubeMX.
For reference, from CMSIS-Core, the description of __STATIC_FORCEINLINE void SCB_EnableDCache(void):
Before enabling the data cache, you must invalidate the entire data cache (SCB_InvalidateDCache()), because external memory might have changed from when the cache was disabled. After reset, you must invalidate (SCB_InvalidateDCache()) each cache before enabling it.
So I think you can try calling SCB_InvalidateDCache() before SCB_EnableDCache().
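For example, a minimal sketch of that ordering, using only the standard CMSIS-Core cache helpers (enable_cpu_caches is just an illustrative name, and the header assumes a CubeMX-generated STM32H7 project):

    #include "stm32h7xx.h"  /* device header; pulls in the CMSIS-Core SCB_* cache helpers */

    static void enable_cpu_caches(void)
    {
        SCB_EnableICache();      /* CMSIS invalidates the I-cache internally before enabling it */

        SCB_InvalidateDCache();  /* per the note quoted above: discard stale lines first */
        SCB_EnableDCache();
    }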
On the other hand, can you provide the error message or the state at the point of the error?
That would help figure out the root cause.
I'm facing an unexpected problem with an STM32F103C8. I program the chip and, after reset, it starts to run the program, but after a few seconds the microcontroller gets mixed up and stops running. After that, when I try to reprogram it, the IDE (IAR EWARM) reports "target held in reset state".
It's a very unusual issue, because sometimes when I connect the nRST pin directly to VCC (3.3 V), the microcontroller runs the program, but unfortunately the current goes over 120 mA and the chip eventually breaks down.
I'm using STM32CubeMX to generate the code, and my programmer is an ST-LINK V2 (clone); I also tried a J-Link V8.0 (clone), but that didn't change the result.
Could it be because of the clone programmers?
Can anyone help me solve this problem?
Thanks
Never connect nRST directly to VDD/VCC. This is a bi-directional input/output which must only ever be connected to an open-drain/open-collector signal. It can be pulled low externally or from within; it must never be pulled or driven high by anything other than the internal pull-up resistor.
When your debugger or programmer has finished programming the flash and wants to start running the new program, it needs to be able to pull this line low. It can do that externally, if you have wired the line to it, or in software, using the internal reset pulse generator. If it does this while you have tied the line high externally, you are effectively shorting out your power supply, which is the cause of the over-current condition you have observed.
Maybe the original problem is that your counterfeit ST-Link has its reset output configured as push-pull when it should be open-drain.
I would suggest that the easiest way to proceed is to leave the nRST line unconnected and configure your programming tool to use a software reset only.
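For example, if your tool happens to be OpenOCD rather than the IAR flasher, a sketch of that configuration could look like this (the two sourced .cfg paths are assumptions for a typical ST-Link/STM32F1 setup):

    # openocd.cfg sketch: ST-Link driving an STM32F1 target
    source [find interface/stlink.cfg]
    source [find target/stm32f1x.cfg]

    # Declare that no hardware reset line is wired; OpenOCD then falls
    # back to a software reset (SYSRESETREQ via the AIRCR register).
    reset_config none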
I encountered the following issue while using Keil MDK 5 for STM32H743.
I had a communication problem with my SPI code, and after a while I found out that it was due to the Periodic Window Update option.
When it is activated, it seems that the debugger regularly reads the SPI data register, which reads the FIFO (and so changes the FIFO's state). Consequently, when the software reads the FIFO, some bytes have been "lost" (consumed by the debugger).
Is this expected behaviour? Do you know if it is due to Keil or to the STM32?
I don't fully understand how an access from the debugger to a register works: I guess a read command is sent over SWD, but then, internally, does the access to memory go through AHB/APB as it does for code executing on the CPU?
Any register that modifies behaviour by being read (such as clearing status bits) can be problematic when debugging if it is shown in a debug window.
The best bet is to only look at such registers when you stop (and close the DR window for the peripheral otherwise), and to always be aware that you may clear status bits etc.
It is the way the processor works and has nothing to do with the debugger.
It is a very common debugging issue with serial comms etc.
If you have the DR displayed in your watch window (or any other similar window on the debugger screen) and you step through the code, then every time you step (or break, generally) the data is read.
That is the only possible reason.
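To make the mechanism concrete: on the STM32H7, reading SPI_RXDR pops one entry from the RX FIFO, so an extra read issued by the debugger's periodic window update consumes a byte the firmware will never see. A minimal sketch, assuming the STM32H7 CMSIS device header:

    #include "stm32h7xx.h"  /* CMSIS device header with SPI_TypeDef and SPI_SR_RXP */

    static uint8_t spi_read_byte(SPI_TypeDef *spi)
    {
        /* wait until at least one byte sits in the RX FIFO */
        while ((spi->SR & SPI_SR_RXP) == 0U) { }

        /* this volatile read pops the FIFO -- the very same side effect
           a debugger memory access to RXDR has */
        return *(volatile uint8_t *)&spi->RXDR;
    }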
I've been working on code to test the speed of the APIC using the PIT. There are several problems I can't figure out. First, when testing my ISRs for the two timers, I get general protection faults on the iretq instructions. Second, neither timer actually fires any interrupts. Any help on this would be much appreciated.
Link to the relevant file.
The general protection faults were caused by an invalid CS register, due to not reloading CS after loading the GDT. It's not in the code I linked to at all.
I have been having an issue in my project with lwIP. I am using an STM32F4 MCU and running with no OS. The network seems to run fine and the protocols all work, but then (usually a day or two later) the stack just stops running. It seems to happen when trying to make a new connection, but I can't confirm that, because I haven't been able to locate what is causing it in the code.
Has anyone else come across this issue? I think it may be the same as this guy.
Do you call any lwIP functions from any interrupt handlers, like UART etc.?
How do you feed packets in/out of lwIP? Directly via interrupt handlers, or do you push them in from your main loop?
Lock-ups can also be a sign of a double free, or a use-after-free, of pbufs.
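With NO_SYS=1 (the raw API, no OS), the usual safe pattern is to keep every lwIP call in the main loop and let the Ethernet interrupt handler do nothing but set a flag. A minimal sketch; netif_config() is a placeholder for your board-specific init, and ethernetif_input() is the conventional name of the port-level input function from your port's ethernetif.c:

    #include "lwip/init.h"
    #include "lwip/netif.h"
    #include "lwip/timeouts.h"

    extern void ethernetif_input(struct netif *netif);  /* from your port's ethernetif.c */
    extern void netif_config(struct netif *netif);      /* placeholder board/netif setup */

    static struct netif my_netif;
    static volatile int rx_pending;   /* set by the Ethernet RX ISR -- no lwIP calls in the ISR */

    int main(void)
    {
        lwip_init();
        netif_config(&my_netif);

        for (;;) {
            if (rx_pending) {
                rx_pending = 0;
                ethernetif_input(&my_netif);  /* feed received frames from the main loop only */
            }
            sys_check_timeouts();             /* drive lwIP's TCP/ARP/DHCP timers */
        }
    }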
I also experienced one project that was unstable, with weird random lock-ups, when running at the top rated frequency of the STM32. If I clocked my STM32 at 100 MHz instead of 120 MHz, all my problems went away.
I have multiple proxies in a message flow. Is there a way in OSB by which I can monitor the memory utilization of each proxy? I'm getting OOM errors and want to investigate which proxy is eating all (or most of) the memory.
Thanks!
If you're getting OOMEs, then either a proxy is not freeing up all the memory it uses (so it will eventually fail even with one request at a time), or you use too much memory per invocation and it dies over a certain load threshold but is fine under low load. Do you know which it is?
Either way, you will want to generate a heap dump on OOME so you can investigate what's going on. It's annoying but sometimes necessary. A colleague had to do that recently to fix some issues (one problem was an SB-transport platform bug, one was a thread-starvation issue due to a platform work-manager bug, and the last was a Muxer bug triggered when running on Exalogic).
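The heap dump itself can be requested from the JVM directly: these are standard HotSpot options you can add to the managed server's JVM arguments (the dump path here is just an example):

    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/tmp/osb-heap-dumps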
If it just performs poorly under load, then you'll need to do the usual OSB optimisations: use fewer Assign steps (but assign more variables per step), and do a lot more in XQuery rather than in proxy steps, especially loops that don't need a service callout, since they can easily be rolled into a for loop in XQuery; you know, all the standard stuff.