How can we improve our compilation flow with Specman?

We are working on a large design with a complex verification environment. It contains 5 internal VIPs (3 of which we own and debug, making minor changes and tweaks), the CDNS UniPro VIP, and a low-level services package we use for all of our environments. Our e compilation flow is long and tedious, and for every change we make to our code base, the turnaround time for a fix is about 10 minutes.
How can we improve our compilation flow to increase our team's effectiveness?

Work in compiled mode.
Compile your code in parallel.
Use the Specman advanced options, which let you save and restore, reseed, and dynamically load.

Use multiple cores for a much faster compilation time (the -mc switch to sn_compile.sh). This requires the Advanced Options license.
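For example, the invocation might look roughly like this (my_tb_top.e is a placeholder for your top file, and the exact argument order and switch syntax may vary between releases):

    % sn_compile.sh my_tb_top.e -mc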

Compile your code using multi-core compilation. It will reduce the compilation time significantly.
You can also use this compiled code for debugging instead of the interpreted mode.
This capability is already included in the latest hotfix of your installed release.

You can compile your code and use parallel compilation. Another thing you can do is use reseed and dynamic load.

Use SAO: use multi-process compilation.
Download the latest fix; as of version 13.1 you don't need a special version.
You can also use compiled code and recompile only the modules you changed (multi-stage compilation).
Starting with version 14.1 you can compile the code to an elib file.

In addition to multi-core compilation, from 14.1 you can use elibs to avoid recompiling modules that were not changed.

What we do for normal development is only compile the code that we will normally never change (base libraries, VIPs from other vendors, code reused from previous projects, etc.). Any code that we develop for that particular project is loaded interpreted on top. This gives a smaller turnaround time when we have to change something (because you just do a quick "reload").
For regression testing, we compile everything up to the testbench top and load the tests on top.
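As a rough sketch of the resulting day-to-day flow (all names here are placeholders; stable_base stands for a Specman executable pre-compiled from the stable code):

    % ./stable_base
    Specman> load my_project_top
    Specman> test
    (edit the project sources, then at the prompt:)
    Specman> reload

Only the interpreted project files are re-read on reload, so the compiled base never has to be rebuilt during day-to-day development.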

Related

How to migrate codebase to strict mode gradually?

Recently I joined a project with a low-quality codebase, and I want to set the analyzer to strong mode and enable a bunch of strict linter rules. But when I did that, I got more than 3K errors.
I can't rewrite the whole codebase at once.
Is there a way to apply the new strict analyzer options only to new code and code that has been edited?
Maybe something like a second analyzer_options_strict.yaml file with an applyOnlyTo/exclude: [filenames] option.
How do I migrate the whole codebase to strict mode gradually in the right way?
If you are picking up an old project that is a monolith, a nice first step is to divide it into multiple packages; that way you can fix each one part by part and evolve the analyzer settings step by step.
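For instance, each package you have already cleaned up can carry its own analysis_options.yaml with the strict settings, while legacy code stays excluded until you get to it. A rough sketch (the lib/legacy/** path and the lint rules are only placeholders):

    analyzer:
      strong-mode:
        implicit-casts: false
        implicit-dynamic: false
      exclude:
        - lib/legacy/**
    linter:
      rules:
        - always_declare_return_types
        - prefer_final_locals

Shrink the exclude list (or tighten the per-package options) as each part of the codebase is migrated.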

JUnit Fork-Mode in Java Classes

There's support for forkMode in Ant and Maven, and occasionally we use it with the value perTest. However, the JUnit tests in Eclipse still fail when we run the tests on a class or on a project (Run As -> JUnit Test). Obviously JUnit uses default settings or behaviour and executes the tests in parallel, causing some red crosses in the JUnit view.
Is there a way to code something into the test class that makes JUnit behave like the forkMode setting? We don't mind if there's an Eclipse-only solution for this.
Or can this be done with a Run Configuration in Eclipse?
EDIT:
I understand that the problems are based on data remaining after tests and further tests will fail due to that. While this makes sense, please understand that this doesn't answer my question. Think of my situation as being part of some sort of a Tiger Team. We have a bunch of issues and fixing that part of existing tests is just one of them. Trust me, we will try to cover everything... (I haven't heard that in a while)
Eclipse runs the JUnit tests serially, in a single thread, in the same JVM. If you have tests that normally operate in parallel, this should not affect the test behavior. However, if you assume that you can change settings in the VM, like system properties or class static variables, and that the next test will not see those changes, that assumption will break your tests.
The rule of thumb is that each test should leave the system (vm, database, filesystem) exactly as it found it so that each test can be run at any time, in any order.
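For example, a test that touches global JVM state can put it back in a tear-down method. A small sketch (the property name "my.app.mode" is made up for illustration):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ConfigTest {

    private String savedValue;

    @Before
    public void saveGlobalState() {
        // remember whatever was there before the test runs
        savedValue = System.getProperty("my.app.mode");
    }

    @Test
    public void runsInSpecialMode() {
        System.setProperty("my.app.mode", "special");
        assertEquals("special", System.getProperty("my.app.mode"));
    }

    @After
    public void restoreGlobalState() {
        // leave the JVM exactly as we found it, so the next test is unaffected
        if (savedValue == null) {
            System.clearProperty("my.app.mode");
        } else {
            System.setProperty("my.app.mode", savedValue);
        }
    }
}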

How do I find all the modules used by a Perl script?

I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been updated as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third related question is, if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear - you cannot generically get a list of modules that the app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the specific command line version with ALL the module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of modules used. But you don't get version #s.
Print %INC at every single possible exit point in the code, as slu's answer said. This should probably be done in an END{} block as well as in a __DIE__ handler to cover all possible exit points, and even then it may not be fully 100% covering in the generic case, e.g. if somewhere within the program your __DIE__ handler gets overwritten (a sketch follows after this list).
Devel::Modlist (also mentioned by slu's answer) - the downside compared to Devel::Cover is that it does NOT seem to be able to aggregate a database across multiple sample runs like Devel::Cover does. On the plus side, it's purpose-built, so has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow you to do runtime analysis based on arbitrary command line arguments (at first glance it seems to only allow you to execute the program with no arguments), and if that's true, it is inferior to all three methods above for any code that may load modules dynamically.
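A rough sketch of the END{} idea - dump every loaded module and, where available, its version, on every normal exit:

END {
    for my $file (sort keys %INC) {
        (my $mod = $file) =~ s{/}{::}g;     # convert the %INC file path to a package name
        $mod =~ s{\.pm\z}{};
        my $version = eval { $mod->VERSION } || 'unknown';
        print STDERR "$mod\t$version\n";
    }
}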
Module::ScanDeps - Recursively scan Perl code for dependencies
Does both static and runtime scanning. Just modules, though - I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
You could look at %INC, see http://www.perlmonks.org/?node_id=681911 which also mentions Devel::Modlist
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
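Using it typically just means running the script once under the module, something like:

    perl -d:TraceUse your_script.pl

(your_script.pl and its arguments stand in for a representative run of your program).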

How to configure lazy or incremental build in general with Ant?

The Java compiler provides incremental builds, and so does the javac Ant task. But most other processes don't.
Considering build processes, they transform some set of files (source) into another set of files (target).
I can distinguish two cases here:
The transformer cannot take a subset of the source files, only the whole set. Here we can only do a lazy build - if no files in the source set were modified, we skip processing.
The transformer can take a subset of the source files and produce a partial result - an incremental build.
What Ant built-in tasks, third-party extensions or other tools are there to implement lazy and incremental builds?
Can you provide some widespread buildfile examples?
I am particularly interested in making this work with the GWT compiler.
The uptodate task is Ant's generic solution to this problem. It's flexible enough to work in most situations where lazy or incremental compilation is desirable.
I had the same problem as you: I have a GWT module as part of my code, and I don't want to pay the (hefty!) cost of recompiling it when I don't need to. The solution in my case looked something like this:
<uptodate property="gwtCompile.mymodule.notRequired"
          targetfile="www/com.example.MyGwtModule/com.example.MyGwtModule.nocache.js">
    <srcfiles dir="src" includes="**"/>
</uptodate>
<target name="compile-mymodule-gwt" unless="gwtCompile.mymodule.notRequired">
    <compile-gwt-module module="com.example.MyGwtModule"/>
</target>
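The <compile-gwt-module> macro referenced above isn't part of the original snippet; a minimal sketch of what it might look like, assuming a gwt.classpath path id is defined elsewhere in the buildfile:

<macrodef name="compile-gwt-module">
    <attribute name="module"/>
    <sequential>
        <!-- fork with extra heap; the GWT compiler is memory hungry -->
        <java classname="com.google.gwt.dev.Compiler" fork="true" failonerror="true">
            <classpath>
                <pathelement location="src"/>
                <path refid="gwt.classpath"/>
            </classpath>
            <jvmarg value="-Xmx512m"/>
            <arg value="-war"/>
            <arg value="www"/>
            <arg value="@{module}"/>
        </java>
    </sequential>
</macrodef>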
Related to GWT: it's not possible to do incremental builds, because the GWT compiler looks at all the source code at once and optimizes and inlines code. This means code that wasn't changed could be evaluated differently. For example, if you start using a method from a class that wasn't changed, the method was left out in the previous compilation step but now needs to be compiled in.

iPhone -mthumb-interlinking

So I figured out how to use the -mthumb and -mno-thumb compiler flags and more or less understand what they're doing.
But what is the -mthumb-interlinking flag doing? When is it needed, and is it set for the whole project if I set 'compile for thumb' in my project settings?
Thanks for the info!
Open a terminal and type man gcc
Do you mean -mthumb-interwork?
-mthumb-interwork
Generate code which supports calling between the ARM and Thumb
instruction sets. Without this option the two instruction sets
cannot be reliably used inside one program. The default is
-mno-thumb-interwork, since slightly larger code is generated when
-mthumb-interwork is specified.
If this is related to a build configuration, you should be able to set it separately for each configuration "such as Release or Debug".
Why do you want to change these settings? I know using thumb instructions save some memory but will it save enough to matter in this case?
My application uses both Thumb and VFP code, but I never specifically set the -mthumb-interwork flag... how is that possible?
According to the man page, without that flag the two instruction sets cannot be reliably used inside one program.
It says "reliably"; so without that option they can still be mixed within a single program, but it might be unreliable. I think mixing both instruction sets normally works: the compiler is smart enough to figure out when it has to switch from one set to the other. However, there might be border cases the compiler just doesn't handle correctly, where it fails to see that it should switch instruction sets, causing the application to fail (most likely it will crash). This option generates special code so that, no matter what your code does, the switching always happens correctly and reliably; the downside is that this extra code is needed for every globally visible function and thus increases the binary size (I don't know whether it also slows down function calls a little; I personally would expect that).
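As a concrete (hypothetical) example, the flag is passed per compilation unit, e.g.:

    gcc -c -mthumb -mthumb-interwork foo.c -o foo.o   # Thumb object, interworking-aware
    gcc -c -marm   -mthumb-interwork bar.c -o bar.o   # ARM object, interworking-aware
    gcc foo.o bar.o -o app                            # the two can now call each other reliably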
Please also note the following two settings:
-mcallee-super-interworking
    Gives all externally visible functions in the file being compiled an ARM instruction set header which switches to Thumb mode before executing the rest of the function. This allows these functions to be called from non-interworking code.
-mcaller-super-interworking
    Allows calls via function pointers (including virtual functions) to execute correctly regardless of whether the target code has been compiled for interworking or not. There is a small overhead in the cost of executing a function pointer if this option is enabled.
Though I think you only need those when building libraries to be used with other projects; I don't know for sure. The GCC Thumb handling is definitely under-documented.