Is it possible to merge coverage reports from Trace32?

Trace32 doesn't support merging coverage reports from multiple executable binaries.
To accomplish this, third parties normally have to develop a tool themselves, one that Trace32 arguably should provide but doesn't.
For on-chip trace, some target dependence is unavoidable: the compiler, the specific architecture, and so on.
Does anyone know of a third-party tool able to merge coverage reports generated by Trace32? I know gcov/lcov can do this, but they don't support my Tasking compiler. Thank you.

Digging through the documents provided by Lauterbach GmbH: trace-based code coverage uses address-based tagging. This means that all the test runs to be combined have to be performed on the same executable.
Therefore, to judge the coverage rate of the whole project, third parties typically develop a merge tool (much like GNU gcov/lcov) to summarize the total coverage rate.
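The core of such a merge tool is small. The sketch below assumes a hypothetical export format (one address with a covered/not-covered flag per entry) rather than Trace32's actual report format; it only illustrates the idea of combining address-based tags by counting an address as covered if any run covered it:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CoverageMerge {

    // Merge several address->covered maps: an address counts as covered
    // if any of the test runs covered it.
    static Map<Long, Boolean> merge(List<Map<Long, Boolean>> runs) {
        Map<Long, Boolean> merged = new LinkedHashMap<>();
        for (Map<Long, Boolean> run : runs) {
            for (Map.Entry<Long, Boolean> e : run.entrySet()) {
                merged.merge(e.getKey(), e.getValue(), Boolean::logicalOr);
            }
        }
        return merged;
    }

    // Coverage rate = covered addresses / total addresses seen.
    static double rate(Map<Long, Boolean> merged) {
        long covered = merged.values().stream().filter(b -> b).count();
        return merged.isEmpty() ? 0.0 : (double) covered / merged.size();
    }
}
```

Note this only works for runs against the same executable, exactly because of the address-based tagging above; to merge across different binaries you would have to map addresses back to source files and lines via debug information, which is essentially what gcov/lcov do.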

Related

Is it possible to specify aggregate code coverage testing when deploying with a list of tests to Salesforce

I am automating deployment and CI to our Salesforce orgs using Ant. In my build xml, I am specifying the complete list of our tests to run. Salesforce is returning code coverage errors, demanding 75% code coverage on a per file basis rather than allowing only 75% based on the total code base. Some of our old files do not have that level of coverage, and I am trying not to have to go back and create a ton of new tests for old software.
It seems like Salesforce is doing the code coverage based on the quickdeploy model rather than the aggregate.
Does anyone know a way I can tell Salesforce not to use the quickdeploy model (if that is what it is doing)? I have checked the Migration Tool docs, but don't see anything.
Thanks...
Have you tried setting the attribute runAllTests="true" on the sf:deploy task, rather than listing each test out?
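For reference, that would look something like this in the build XML (the credential property names and deployRoot are placeholders for whatever your build already uses):

```xml
<sf:deploy username="${sf.username}"
           password="${sf.password}"
           serverurl="${sf.serverurl}"
           deployRoot="src"
           runAllTests="true"/>
```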

Automated testing developer environments

We use gradle as our build tool and use the idea plugin to be able to generate the project/module files. The process for a new developer on the project would look like this:
1. Pull from source control.
2. Run 'gradle idea'.
3. Open IDEA and be able to develop without any further setup.
This all works nicely, but generally only gets exercised when a new developer joins or someone gets a new machine. I would really like to automate the testing of this more frequently in the same way we automate our unit/integration tests as part of our continuous integration process.
Does anyone know if this is possible, and whether there are any libraries for doing this kind of thing?
You can also substitute Eclipse for IDEA, as we have a similar process for those who prefer using Eclipse.
The second step (with or without step one) is easy to smoke test (just execute the task as part of a CI build), the third one less so. However, if you are following best practices and regenerate IDEA files rather than committing them to source control, developers will likely perform both steps more or less regularly (e.g. every time a dependency changes).
As Peter noted, the real challenge is step 3. The first two are solved by your SCM plugin and Gradle task. You could try automating the last step by doing something like this:
identify the proper command-line option, on your platform, that opens a specified IntelliJ project from the command line
find a simple, good-enough scenario that can validate that the generated project works as it should, e.g. a clean followed by a build. Make sure you can reproduce these steps using keyboard shortcuts only. Validation could be done by checking the produced artifacts, test result reports, etc.
use an external library, like Robot, to program the starting of IntelliJ and the replay of your keyboard shortcuts. Here's a simple example with Robot. Use a dynamic language with a built-in console instead of pure Java for that; it will speed up your scripting a lot...
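Robot here is the JDK's own java.awt.Robot. A minimal sketch of the idea follows; Ctrl+F9 is IDEA's default "Build Project" binding on Windows/Linux, but treat the exact keys (and the assumption that IDEA already has focus) as things to verify for your setup:

```java
import java.awt.GraphicsEnvironment;
import java.awt.Robot;
import java.awt.event.KeyEvent;
import java.util.List;

public class IdeDriver {

    // Keycodes for IDEA's default "Build Project" shortcut (Ctrl+F9).
    static List<Integer> buildProjectShortcut() {
        return List.of(KeyEvent.VK_CONTROL, KeyEvent.VK_F9);
    }

    // Press the keys in order, then release them in reverse order,
    // as a human chord would.
    static void press(Robot robot, List<Integer> keys) {
        for (int k : keys) robot.keyPress(k);
        for (int i = keys.size() - 1; i >= 0; i--) robot.keyRelease(keys.get(i));
    }

    public static void main(String[] args) throws Exception {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("No display available - skipping UI automation");
            return;
        }
        // Assumes IDEA has already been started with the generated
        // project and currently has keyboard focus.
        Robot robot = new Robot();
        robot.setAutoDelay(50); // small pause between synthetic events
        press(robot, buildProjectShortcut());
    }
}
```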
Another idea would be to include a daemon plugin in IntelliJ to receive commands from an external CLI. Otherwise, get in touch with the IntelliJ team; they may have something to ease your work here.
Notes:
beware of false negatives: any failure could be caused by external issues, like project instability. Try to make sure you only build from a validated, working project...
beware of false positives: any assumption or unchecked result code could hide issues. Make sure you properly clean the workspace and installation, to have a repeatable state and a standard scenario matching first use.
Final thoughts: while interesting from a theoretical angle, this automation exercise may not bring all the required results, i.e. the validation of the platform. Still, it's an interesting learning experience and could serve as material for a nice short talk, especially if you find out interesting stuff. Make it a beer challenge with your team when you have a few idle hours, to see who can implement a working solution fastest ;) Good luck!

How to keep track of who created a JUnit test and when using Eclipse? How to create statistics based on this information?

In a Java project I'm working on (alongside a team of 8 devs), we have a large backlog of features that lack automated tests. We are covering this backlog, and we need to keep track of who wrote a JUnit test and when; plus, we have to measure how many tests we wrote as a team in a week/month/semester (as you may have figured out already, this information is for management purposes). We figured we'd do this by marking the tests with the information we need (author, creation date) and letting Eclipse do the processing work, showing us the tests we wrote, who wrote them, and how far we were from reaching our goals. How would you smart people go about this? What plugins would you use?
I tried to use Eclipse Custom Tags for this, but it's not the purpose of the feature, and the results I got were kind of brittle. I created a TEST tag that was supposed to mark a test method. It looks like this: (date is mm-dd-yyyy)
//TEST my.name 08-06-2011
Since Eclipse processes tag descriptions by substring matching (contains/doesn't contain), it's, as I said, very brittle. I can timestamp the tag, but it's just a string: Eclipse can't process it as a date, compare dates, filter by date interval, or anything like that.
I searched for plugins, but no dice.
My plan B is to create an annotation with the information we need and process our test packages using the Eclipse Annotation Processing Tool. I haven't tried anything on this front yet, though; it's just an idea. Does anyone know a good plugin for this kind of annotation processing? Or any starter tips for dealing with Eclipse APT?
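A sketch of what that annotation route could look like (all names here are made up). With RetentionPolicy.RUNTIME you can also skip Eclipse APT entirely and collect the statistics with plain reflection from a small main() or a report test; storing the date in ISO form makes it sortable and filterable as a string, which addresses the date-comparison problem above:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class TestAudit {

    // Hypothetical marker carrying the metadata to report on.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface TestInfo {
        String author();
        String created(); // ISO date, e.g. "2011-06-08", sorts as text
    }

    // Example of what a test class in the suite would look like.
    static class SomeTest {
        @TestInfo(author = "my.name", created = "2011-06-08")
        public void testSomething() { }
    }

    // Count annotated tests by a given author in one class;
    // a real report would walk all test classes on the classpath.
    static long countByAuthor(Class<?> testClass, String author) {
        long n = 0;
        for (Method m : testClass.getDeclaredMethods()) {
            TestInfo info = m.getAnnotation(TestInfo.class);
            if (info != null && info.author().equals(author)) n++;
        }
        return n;
    }
}
```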
Thanks a bunch, folks
I would not use Eclipse for this.
Your team should be checking the tests into a version control system such as Subversion, Git, Team Foundation Server, etc. From there it should be a fairly straightforward matter to determine the owner and check-in time. You can and should do this sort of metrics calculation during every build. Better yet, make sure that your build script actually runs your tests and uses a tool like EMMA to instrument the code and determine the actual coverage.
As a fallback for measuring coverage, if you choose a naming convention then you may even be able to correlate the test classes by file name back to the feature under test.
Many modern CI systems, such as CruiseControl, have integrations for doing these sorts of things quite nicely.

Is there a revision control system that allows us to manage multiple parallel versions of the code and switch between them at runtime?

If I want to enable a new piece of functionality for a subset of known users first, is there any automated system or framework that exists to do this?
Perhaps not directly with version control - you might be interested to read how flickr goes about selectively deploying functionality: http://code.flickr.com/blog/page/2/
And this guy talks about implementing something similar in a rails app: http://www.alandelevie.com/2010/05/19/feature-flippers-with-rails/
Most programming languages have if statements.
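Flippant as that sounds, it is the whole idea behind feature flags. A minimal sketch (all names hypothetical) where the subset of known users is just configuration rather than a separate code version:

```java
import java.util.Map;
import java.util.Set;

public class FeatureFlags {

    // Which users see which not-yet-public features. In a real system
    // this would come from config or a database, so the subset can be
    // changed without redeploying any code.
    private final Map<String, Set<String>> enabledFor;

    public FeatureFlags(Map<String, Set<String>> enabledFor) {
        this.enabledFor = enabledFor;
    }

    public boolean isEnabled(String feature, String user) {
        return enabledFor.getOrDefault(feature, Set.of()).contains(user);
    }
}
```

At the call site this really is just the if statement: `if (flags.isEnabled("newCheckout", user)) { /* new path */ } else { /* old path */ }`.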
I don't know what "switching between them at runtime" means. You usually don't check executable code into an SCM system. There's a separate process to check out, build, package, and deploy. That's the province of continuous integration and automated builds in agile techniques.
SCM systems like Subversion allow you to have tags and branches for parallel development. You're always free to build, package, and deploy those as you see fit.
As far as I know, no...
If you want a revision control system with multiple versions that you can switch between, find an SCM you like and look up branching.
But it sounds like you want to be able to switch versions in the SCM programmatically at runtime. The problem with that is, for a revision control system to do that, it would have to be aware of the language and how it's implemented.
It would have to know how to load and run the next version. For example, if it was C code, it would have to dynamically compile and run it on the fly; if it was PHP, it would have to magically load the script into a sandboxed HTTP server with PHP support. Etc... So as stated, it isn't possible.
You can write an app to change the version in the scm by using the command line.
To do it during runtime, that functionality has to be part of the application itself.
The best (only) way I can think of doing it is to have one common piece of code that acts like a 'bootloader', which uses a system call to check out the correct branch based on whatever your requirements are. It then (if necessary) compiles that code and runs it.
It's not technically 'at runtime', but it appears that way if it works.
Your first other option is something that dynamically loads code, but that's very language-dependent, and you'd need to specify.
The other is to permanently have both in the working codebase (which doubles your size if it's a full duplication), and switch at runtime. You can save a good bit of space by using objects that are shared between both branches, and things like conditional compilation to use the same source files for both targets.

Best build process solution to manage build versions

I run a rather complex project with several independent applications. These use however a couple of shared components. So I have a source tree looking something like the below.
My Project
    Application A
    Shared1
    Shared2
    Application B
    Application C
All applications have their own MSBuild script that builds the project and all the shared resources it needs. I also run these builds on a CruiseControl controlled continuous integration build server.
When the applications are deployed they are deployed on several servers to distribute load. This means that it’s extremely important to keep track of what build/revision is deployed on each of the different servers (we need to have the current version in the DLL version, for example “1.0.0.68”).
It’s equally important to be able to recreate a revision/build that been built to be able to roll back if something didn’t work out as intended (o yes, that happends ...). Today we’re using SourceSafe for source control but that possible to change if we could present good reasons for that (SS it’s actually working ok for us so far).
Another principle that we try to follow is that it’s only code that been build and tested by the integration server that we deploy further.
"CruiseControl Build Labels" solution
We had several ideas on solving the above. The first was to have the continuous integration server build, locally deploy, and test the project (it does that now). As you probably know, a successful build in CruiseControl generates a build label, and I guess we could somehow use that to set the DLL version of our executables (so build label 35 would create a DLL version like "1.0.0.35")? The idea was also to use this build label to label the complete source tree. Then we could check out by that label and recreate the build later on.
The reason for labeling the complete tree is to include not only the actual application code (that’s in one place in the source tree) but also all the shared items (that’s in different places in the tree). So a successful build of “Application A” would label to whole tree with label “ApplicationA35” for example.
There might however be an issue when trying to recreate this build and setting the DLL version before deploying, as we then no longer have access to the CruiseControl-generated build label. If all CruiseControl build labels were unique across all the projects, we could use just the number for labeling, but that's not the case (both Application A and B could be on build 35 at the same time), so we have to include the application name in the label - hence the SourceSafe label "ApplicationA35". How can I then recreate build 34 and set 1.0.0.34 as the DLL version number once we've built build 35?
"Revision number" solution
Someone told me that Subversion, for example, creates a revision number for the entire source tree on every check-in - is this the case? Does SourceSafe have something similar? If this is correct, the idea is then to grab that revision number when getting the latest source and building on the CruiseControl server. The revision number could then be used to set the DLL version number (to, for example, "1.0.0.5678"). I guess we could then get this specific revision from Subversion if needed, and that would include the application and all the shared items, making it possible to recreate a specific version from the past. Would that work, and could this also be achieved using SourceSafe?
Summarize
So the two main requirements are:
Be able to track build/revision number of the build and deployed DLL.
Be able to rebuild a past revision/build, set the old build/revision number on the executables of that build (to comply with requirement 1).
So how would you solve this? What would be your preferred approach, and how would you solve it (or do you have a totally different idea)? Please give detailed answers.
Bonus question What are the difference between a revision number and a build number and when would one really need both?
Your scheme is sound and achievable in VSS (although I would suggest you consider an alternative, VSS is really an outdated product).
For your "CI" build - for the versioning, take a look at the MSBuild Community Tasks project, which has a "Version" task. Typically you will have a "Version.txt" in your source tree, and the MSBuild task will increment the "Release" number while the developers control the Major.Minor.Release.Revision numbers (that's how a client of mine wanted it). You can use Revision instead if you prefer.
You then would have a "FileUpdate" task to edit the AssemblyInfo.cs file with that version, and your EXEs and DLLs will have the desired version.
Finally, the VSSLabel task will label all your files appropriately.
For your "Rebuild" build - you would modify your "Get" to fetch files from that label, obviously not execute the "Version" task (as you are SELECTING a version to build), and then the FileUpdate task would use that version number.
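Sketched as an MSBuild fragment - the task and attribute names here are from memory, so check them against the MSBuild Community Tasks documentation before relying on this:

```xml
<Target Name="Version">
  <!-- Increment the build number stored in Version.txt -->
  <Version VersionFile="Version.txt" BuildType="Increment">
    <Output TaskParameter="Major" PropertyName="Major"/>
    <Output TaskParameter="Minor" PropertyName="Minor"/>
    <Output TaskParameter="Build" PropertyName="Build"/>
    <Output TaskParameter="Revision" PropertyName="Revision"/>
  </Version>
  <!-- Stamp the assemblies with the computed version -->
  <FileUpdate Files="AssemblyInfo.cs"
              Regex="AssemblyVersion\(&quot;.*&quot;\)"
              ReplacementText="AssemblyVersion(&quot;$(Major).$(Minor).$(Build).$(Revision)&quot;)"/>
  <!-- Then the VSSLabel task to tag the tree so the build can be recreated -->
</Target>
```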
Bonus question:
These are all "how you want to use them" - I would use build number for, well the build number, that is what I'd increment. If you are using CI you'll have very many builds - the vast majority with no intention of ever deploying anywhere.
The major and minor are self evident - but revision I've always used for a "Hotfix" indicator. I intend to have a "1.3" release - which would in reality be a product with say 1.3.1234.0 version. While working on 1.4 - I find a bug - and need a hot fix as 1.3.2400.1. Then when 1.4 is ready - it would be say 1.4.3500.0
I need more space than responding in comments directly allows...
"Thanks! Good answer! What would be the difference; would it be better to solve this using Subversion, for example?" - Richard Hallgren (15 hours ago)
The problems with VSS have nothing to do with this example (although the "Labeling" feature I believe is implemented inefficiently...)
Here are a few of the issues with VSS
1) Branching is basically impossible
2) Shared checkout is generally not used (I know of a few people who have had success with it)
3) performance is very poor - it is extremely "chatty"
4) unless you have a very small repository, it is completely unreliable, to the point that for most shops it's a ticking time bomb.
For 4 - the problem is that VSS is implemented by the entire repository being represented as "flat files" in the file system. When the repository gets over a certain size (I believe 4GB but I'm not confident in that figure) you get a chance for "corruption". As the size increases the chances of corruption grow until it becomes an almost certainty.
So take a look at your repository size - and if you are getting into the Gigabytes - I'd strongly recommend you begin planning on replacing VSS.
Regardless - a Google search for "VSS sucks" gives 30K hits... I think if you did start using an alternative, you would realize it's well worth the effort.
Have CC.NET label the successful builds.
Have each project in the solution link to a common SolutionInfo.cs file which contains the assembly and file version attributes (remove them from each project's AssemblyInfo.cs).
Before the build, have CruiseControl run an MSBuild regex replace (from MSBuild Community Tasks) to update the version information using the CC.NET build label (passed in as a parameter to the MSBuild task).
Build the solution, run tests, FxCop, etc.
Optionally revert the solution info file.
The result is that all assemblies in the CC.NET published build have the same version numbers, which correspond to a label in the source code repository.
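The shared file is nothing more than the version attributes that every project links to (in Visual Studio: Add Existing Item, then Add As Link), for example:

```csharp
// SolutionInfo.cs - linked into every project; the CC.NET build
// rewrites these numbers before compiling.
using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
```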
UppercuT can do all of this with a custom packaging task to split the applications up. And to get the version number of the source, you might think about Subversion.
It's also insanely easy to get started.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT