Specman: debugging an OS 11 failure in a gen action

I'm getting OS 11 failure in a gen action. The constraints for this gen action are intensive, and it's too complex to debug.
How can we debug this failure and determine the source of this OS 11?

An OS 11 error (signal 11, i.e. a segmentation fault) usually means you're dereferencing a NULL pointer. Make sure you load your code rather than compile it so you can see where this happens (this applies to all OS 11 errors, not just the ones in constraints). Compiled code strips much of the debug information (to run faster), making it difficult to trace the exact portion of e code that causes the problem.
Specman also provides a good constraint debugger that can assist you further. I don't know the commands by heart, but you have to set a breakpoint when the failing CFS (connected field set) is being generated. Search for "break on gen" in the documentation.
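A minimal sketch of that workflow (the exact command syntax varies by Specman version, so treat these lines as hypothetical and check the "break on gen" section of your docs):

```
break on gen             -- stop whenever generation begins
break on gen of packet   -- hypothetical: stop when a 'packet' struct is generated
```

Once stopped, the generation debugger lets you inspect which constraints participate in the failing CFS.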


TestRealTime: How to test a realtime Operating System with Rational Test RealTime

On an AUTOSAR real-time Operating System (OS), the software architecture is layered (user space, system-call interface, kernel space). Switching between user context and kernel context is handled by hardware-specific infrastructure, and the context-switching handler is typically written in assembly code.
IBM® Rational® Test RealTime v8.0.1 (RTRT) currently treats embedded assembly code as described in the Q&A below.
https://www.ibm.com/support/pages/how-treat-embedded-assembly-code ( ** )
RTRT uses code-insertion technology (technically known as instrumentation) to insert its own code to measure code coverage of the system under test.
In my case, the OS has a fully preemptive design and therefore no termination points. As a result, the OS always runs until it loses power. If there is no work, the OS sleeps (normally an idle state, doing nothing). If any unexpected error or exception occurs, the OS shuts down and runs into an infinite loop. In other words, the OS is always running.
I learnt from ( ** ) and ensured that context switching works correctly.
But I don't know how to teach RTRT to finish its postprocessing (consisting of attolcov and attolpostpro) in the right way. Note that the OS has already run correctly through all my tasks, as confirmed with a debugger. The SHUTDOWN OS procedure executed correctly and the OS ended up in an infinite loop (such as while(1){};).
After RTRT ends all its processes, the coverage report of OS module is still empty.
Based on the IBM guideline for RTRT:
https://www.ibm.com/developerworks/community/forums/atom/download/attachment_14076432_RTRT_User_Guide.pdf?nodeId=de3b0048-968c-4111-897e-b73654af32af
RTRT provides two breakpoints to mark the logging point (priv_writeln) and termination point (priv_close) of its process.
I already tried to drive execution from INFINITE (my OS) to priv_close (RTRT) by manipulating the PC register and all context-switching registers with the Lauterbach debugger, but the RTRT coverage report was still empty even though no errors occurred. No errors meant that the context switch from kernel space to user space worked and that the main() function returned correctly.
Solved the problem.
It definitely came from the context-switching process of the Operating System.
In my case, I did a RAM dump to see what the user-context memory looked like before starting the OS.
After that, I backed up all context areas and restored them, from Sleep or the infinite loop, in the exact order.
This way, RTRT can find the return point and reach its own main() function's _exit to finish the report-generation process.
Finally, the code coverage report was generated.

Nsight 5.0 on Mac Pro, Lion 10.7.4

I'm new to CUDA development and I'm using Nsight 5 on a Mac Pro.
I'm doing a very simple simulation with two particles (ver1 and ver2 here, which are two structs that hold pointers to another type of struct, links).
The code compiles but seems to run into a problem when it reaches the end of this block, and never steps into integrate_functor():
...
thrust::device_vector<Vertex> d_vecGlobalVec(2);
d_vecGlobalVec[0] = ver1;
d_vecGlobalVec[1] = ver2;
thrust::for_each(
d_vecGlobalVec.begin(),
d_vecGlobalVec.end(),
integrate_functor(deltaTime)
);
...
So my questions are:
1. In Nsight, I can see the values of the member variables of ver1 and ver2; but right before the last line of the code in this block, when I expand the hierarchy of d_vecGlobalVec, I can't see any of these values; the corresponding fields (e.g. of the first element in this vector) are just empty. Why is this the case? Obviously, ver1 and ver2 are in host memory while the values in d_vecGlobalVec are on the device.
2. A member of the Nsight team posted this. Following that, in general, does it mean that I should be able to step in and out between host and device code, and see host/device variables as if there were no barrier between them?
System:
NVIDIA GeForce GT 650M 1024 MB
Mac OS X Lion 10.7.4 (11E2620)
Make sure your device code is actually called. Check all return codes and confirm that the device actually produced the output. Sometimes Thrust may run the code on the host if it believes that is more efficient.
I would really recommend updating to 10.8; it has the latest drivers with the best support for the NVIDIA GeForce 6xx series.
Also note that for the optimum experience you need different GPUs for display and for CUDA debugging; otherwise Mac OS X may interfere and kill the debugger.
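As a sketch of the "check all return codes" advice, a small wrapper macro like this (a common community pattern, not an Nsight or Thrust API) surfaces CUDA runtime errors immediately instead of letting a silently failed call look like empty data:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Report any CUDA runtime error with file/line context and bail out.
#define CUDA_CHECK(call)                                                \
    do {                                                                \
        cudaError_t err_ = (call);                                      \
        if (err_ != cudaSuccess) {                                      \
            std::fprintf(stderr, "%s:%d: CUDA error: %s\n",             \
                         __FILE__, __LINE__, cudaGetErrorString(err_)); \
            std::exit(1);                                               \
        }                                                               \
    } while (0)

int main()
{
    int n = 0;
    CUDA_CHECK(cudaGetDeviceCount(&n));   // fails loudly if no usable device
    std::printf("CUDA devices: %d\n", n);
    // After a kernel launch or a thrust::for_each, also catch async errors:
    CUDA_CHECK(cudaDeviceSynchronize());
    return 0;
}
```

Wrapping the calls around your thrust::for_each block the same way would tell you whether the device ever ran the functor.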

Segmentation fault from outside of my code

I have a wxWidgets/GTK-based application that works well, except for one installation on a Debian Squeeze ARM system. There it crashes as soon as the user activates its main window. To find the reason, I added a signal handler to the application and use libunwind from that signal handler to locate the source of the crash. In testing this worked fine: when the software wrote to, e.g., address 0x0, libunwind correctly pointed me to the function where that happened.
But the results for the system where the crash appears unexpectedly are a bit strange; the crashes seem to happen outside of my application. One comes from a function with no name (libunwind returns an empty string), and one is caused by "malloc_usable_size", a system function that should never die this way.
So... what to do next? All ideas, suggestions, or any other hints are welcome, since I'm not sure how to continue with this problem...
Check for buffer overruns or unexpected overwrites of memory in any structures, pointers, or memory locations returned by library functions.
Check for invalid frees in your code of pointers that were allocated by a library.
Using valgrind would probably also help.
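For example, a typical memcheck invocation looks like this (./myapp is a placeholder for your wxWidgets binary):

```shell
# Track the origin of uninitialized values and report invalid
# reads/writes/frees before they corrupt the allocator's metadata.
valgrind --tool=memcheck --leak-check=full --track-origins=yes ./myapp
```

A heap overrun that smashes allocator metadata is exactly the kind of bug that later makes an innocent function like malloc_usable_size appear to crash.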

GDB error: invalid offset, value too big (0x00000400); unable to build app in debug mode

I have an app which was working fine a few days ago. But today I'm getting this error:
{standard input}:1948:invalid offset, value too big (0x00000400)
Command /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1
OK guys, after a lot of troubleshooting I finally found the solution: a big switch/case was the problem. Converting it into if/else statements solved it.
I had a similar issue today while I was writing an assembly routine for an ARM Cortex-M0 processor. In my case, the code that caused the error looked like this:
ldr r7 ,=PRTCNFG_10
This is a pseudo instruction causing the processor to load the value of constant PRTCNFG_10 (defined using the .equ directive) into register r7. The pseudo instruction will be translated into
ldr r7 ,[pc, #immed8]
where #immed8 is an 8-bit immediate value that the hardware scales by 4, giving a reach of 1024 bytes (0x400, which matches the value in the error message). The definition of PRTCNFG_10 must therefore not be placed beyond pc+1024 bytes, otherwise the assembler will throw this error.
I solved the issue by explicitly allocating PRTCNFG_10 in memory:
PRTCNFG_10:
.word 0x606
I just saw the same issue, which also turned out to be caused by a switch case. It wasn't even that big (26 cases), and it had compiled fine in the past, but for some reason it started to fail today. Replacing it with if/else resolved the weird GCC error.
While this question is not strictly about assembly, it pops up in web searches for this specific error often enough that I'd like to add an answer that should help people programming in assembly.
The assembler syntax is LDR REG, =SOMETHING.
If that SOMETHING doesn't fit in an immediate, we have a problem, because Thumb doesn't have 32-bit immediates. To fix this, the assembler remembers the constant and replaces the statement with a PC-relative load from a literal pool that must be less than 0x400 bytes away (more than that doesn't fit in the instruction).
You then say
.ltorg
someplace convenient (e.g. right after the next bx lr or pop {pc}) to direct the assembler to place these constants there.

My code works in Debug mode, but not in Release mode

I have code in Visual Studio 2008 in C++ that works with files using just fopen and fclose.
Everything works perfectly in Debug mode, and I have tested it with several datasets.
But it doesn't work in Release mode; it crashes every time.
I have turned off all the optimizations, there are no extra dependencies (in the linker), and I have also set these:
Optimization: Disabled (/Od)
Keep Unreferenced Data (/OPT:NOREF)
Do Not Remove Redundant COMDATs (/OPT:NOICF)
Optimize for Windows98: No
I still keep wondering why it doesn't work under these circumstances.
What else should I turn off to make it work as it does in Debug mode?
I think if it worked in Release mode but not in Debug mode, it might be a coding fault, but the other way around looks weird, doesn't it?
I appreciate any help.
--Nima
Debug modes often initialize heap data allocations. The program might be dependent on this behavior. Look for variables and buffers that are not getting initialized.
1) Double-check any and all code that depends on preprocessor macros.
2) Use assert() to verify program-state preconditions, but never rely on an assert to affect program flow (i.e. removing the check must still let the code produce the same end result), because assert is a macro that is compiled out of Release builds. Use regular run-time conditionals when an assert won't do.
3) Indeed, never leave a variable in an uninitialized state.
By far the most likely explanation is differing undefined behavior in the two modes caused by uninitialized memory. Lack of thread safety and problems with synchronization code can also exhibit this kind of behavior because of differing timing environments between debug and release, but if your program isn't multi-threaded then obviously this can't be it.
I experienced this too, and in my case it was because one of my arrays of structs was supposed to have only X entries, but my loop that checked the struct read up to index X+1. Interestingly, Debug mode ran fine even so; I was on Visual C++ 2005.
I spent a few hours putting printf into my code line by line to catch the bug. If anyone has a good way to debug this kind of error, please let me know.