Add old analysis to NDepend and specify date

I've inherited a legacy project and have been working on improving the code over the last year. To track my progress, I've bought an NDepend license and started using it with success.
But I would like to see how I've been doing since I started refactoring. So I was wondering whether you can add an analysis to an NDepend project and date it. I can still get the old DLLs, so I can run the analysis against them, but NDepend dates the result to the day I run the analysis, not to when the code was compiled.

NDepend stores the historic analysis result in the directory specified by Project Properties > Analysis > Historic Analysis results.
The date is indicated by a hierarchy of folders.
First level: YYYY_MM
Second level: DayOfMonth_Hour_Minute
For example $HistoricAnalysisResultDir$\2017_09\12_14_20 means that the analysis result is dated 12 September 2017 at 14:20.
You just have to mimic this hierarchy manually and store your .ndar files (NDepend analysis result files) in it.
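If you need to do this for several old versions, a small script can create the dated folders for you. A minimal PowerShell sketch, where every path and the date are placeholders to adapt:
# Back-date an NDepend analysis result by placing the .ndar file into the
# historic-results hierarchy described above. All paths and the date are placeholders.
$historicDir  = "C:\MyProject\NDependOut\HistoricAnalysisResults"   # dir from Project Properties > Analysis
$analysisDate = Get-Date "2016-03-01 10:30"                         # when the old DLLs were built
$ndarFile     = "C:\Temp\OldAnalysis\MyProject.ndar"                # result of analyzing the old DLLs
# First level: YYYY_MM, second level: DayOfMonth_Hour_Minute
$target = Join-Path $historicDir ("{0:yyyy_MM}\{0:dd_HH_mm}" -f $analysisDate)
New-Item -ItemType Directory -Path $target -Force | Out-Null
Copy-Item $ndarFile -Destination $target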
A great alternative would be to write a short program based on NDepend.API to do it for you:
create a project to analyze the assemblies of an older version,
run the analysis,
create the historic analysis result folder hierarchy,
copy the analysis result there.
Edit 10 Oct 2017: Having the historic analysis results available and the baseline set is not enough to update the trends. Have a look at the Power Tools source code that uses the trend feature; it shows how to log trend metrics in the past.


Exporting individual Cognos Reports via command line

I'm trying to work out how I can export individual Cognos Reports via the command line, for the purposes of source versioning in Git at a report-by-report level. I presume XML would be the output format.
I read that the Cognos SDK can help, but you would need to build your own solution, which may be possible; still, this use case feels like something many others would already want, so I'd expect tooling to exist already.
Of course, importing the individual report would also be needed.
Can anyone help here please?
Thanks.
If your end game is version control (Who changed what, when?), you should look into MotioCI. Last time I looked, there was no free version of MotioCI.
You can use tools like the ones provided by companies like http://www.motio.com. With the free version you can export the XML of the reports, but only one at a time.
You can also create a Cognos deployment of the reports, which generates a zip file with the XML of the reports, but all the reports are in the same file and you will have to extract the XML of the individual reports by hand.
I found the SDK to be cumbersome and, when I got it working, slow.
Yes, report specs are XML.
I have created a process that produces output like what you are asking for. Here's what it involves:
A recursive common table expression (CTE) query to get the report specs along with the folder structure as seen in Cognos.
A PowerShell script to run the query and write the results to the file system.
Another PowerShell script to pull the current content from the remote git repo, run the first PowerShell script, then add, commit, and push the results up to the remote git repo.
I also wrote a PowerShell script to perform the operations associated with git push. This involves using a program I found called HTML Tidy (http://tidy.sourceforge.net/) that can be used to make the XML human-readable. This helps with diffs in git. I use TFS, so I get a nice, side-by-side diff if I have tidied the XML. (Otherwise, it tells me the only line of XML has changed.)
I recently added output for dashboards (exploration) and data sets (dataSet2). Dashboards are stored as JSON, so my routine had to tidy that (simple in PowerShell).
I run my routine daily, getting new and modified content from the last 3 days (just in case), and weekly to do an entire dump (to capture the deletes). The weekly process takes about six minutes. The daily process is negligible.
Before you ask: I hesitate to provide actual code because I can't take any responsibility for your system.
Updates:
Hacking away at the Content Store database is not recommended and it is not supported by IBM.
For reference/comparison: I'm running IBM Cognos 11.0.7 on IIS on Windows 2012 R2 with the Content Store database on MS SQL Server 2016. Your system may be different.
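For a very rough idea of the shape such a routine can take, here is an outline in PowerShell. Every table, column, server, and path name in it is a placeholder rather than the real Content Store schema, so treat it as a sketch only:
# Outline only: export report specs from the Content Store to the file system.
# REPORT_OBJECTS, FOLDER_PATH, REPORT_NAME and SPEC are placeholder names; replace
# them with your own recursive CTE against your Content Store version.
$query = @"
WITH folders AS (
    -- the recursive CTE that rebuilds the Cognos folder structure goes here
    SELECT FOLDER_PATH, REPORT_NAME, SPEC FROM REPORT_OBJECTS
)
SELECT FOLDER_PATH, REPORT_NAME, SPEC FROM folders
"@
$outRoot = "C:\cognos-export"
# Invoke-Sqlcmd ships with the SqlServer PowerShell module
$rows = Invoke-Sqlcmd -ServerInstance "MySqlServer" -Database "ContentStore" -Query $query
foreach ($row in $rows) {
    $dir = Join-Path $outRoot $row.FOLDER_PATH
    New-Item -ItemType Directory -Path $dir -Force | Out-Null
    $file = Join-Path $dir ($row.REPORT_NAME + ".xml")
    $row.SPEC | Out-File -FilePath $file -Encoding UTF8
    # HTML Tidy makes the single-line XML human-readable so git diffs stay useful
    & tidy.exe -xml -indent -modify -quiet $file
}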
Additional Resources
https://www.cognoise.com/index.php/topic,28289.msg113869.html#msg113869
https://www.cognoise.com/index.php/topic,17411.msg50409.html#msg50409
https://learn.microsoft.com/en-us/powershell/scripting/overview?view=powershell-6
https://learn.microsoft.com/en-us/sql/t-sql/language-reference?view=sql-server-2017
https://git-scm.com/docs
http://tidy.sourceforge.net/

How do I structure my project folder in Eclipse for a Cucumber project which has sprint-wise delivery

I am trying to create an automation framework using Cucumber and trying to replicate a real-time scenario (sprint-wise delivery).
How do I structure my folders/source folders/packages in eclipse? Below is the structure which I am about to follow but I am not quite convinced if it is right.
I am trying to structure it in such a way that when I run the command
mvn test -Dcucumber.options="src\test\resources\sprint1\features"
it runs all the features under sprint1, and similarly for sprint2 and so on.
Any suggestions or inputs would be helpful.
P.S: Since I am new to Cucumber, a detailed explanation of the folder structure for real-time sprint-wise delivery would be much appreciated.
Thanks :)
I would not consider the file structure you are thinking of.
The reason is that after a while, it doesn't matter when a feature was added to the system. So organizing features based on time is a bad idea.
If you still need to be able to run the features for a specific sprint, consider using tags instead. That would allow you only to run the features connected to the sprint you are interested in.
I would not do that either, because after a while it doesn't matter in which sprint a piece of functionality was added. It should still pass every execution, even if it is 27 sprints old.
If this organization is bad, how should you do it instead?
This is a question where a lot of people have a lot of opinions and the debate can get very heated.
My take is that what matters is making sure the code is easy to use. By that I mean easy to navigate and understand for a new developer. If you like, think of usability in any other product.
Given this, I would organize the features by functional area, in different packages: a package for each area, one for viewing products, one for ordering products, one for paying, and so on.
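For example (folder and file names here are only illustrative), such a layout could look like:
src/test/resources/features/browsing/view_product.feature
src/test/resources/features/ordering/place_order.feature
src/test/resources/features/payment/pay_by_card.feature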
I would also try to take a step further and organize the source code in a similar way.
But I would never organize using a temporal approach as you are thinking of.
You should not organize your tests per sprint, because a particular sprint ends at a particular time. If you want to run some feature files together on a temporary basis (until the sprint is over), you can add tags at the top of the feature files.
For example:
You have following 2 feature files:
src/test/resources/sprint1/file1.feature
src/test/resources/sprint1/file2.feature
Just add the @sprint1 tag on top of each feature as shown below:
//1. file1.feature
@sprint1
Feature: sprint1 : features : file1
Scenario: Some scenario desc..
Given ....
When ....
Then ....
//2. file2.feature
@sprint1
Feature: sprint1 : features : file2
Scenario: Some scenario desc..
Given ....
When ....
Then ....
Now, to run both these files, execute the following command in your command prompt:
cucumber --tags @sprint1
This command will run all the feature files that contain the @sprint1 tag. After the sprint is over, you can delete this extra tag from the feature files.
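Since the question runs the tests through Maven, the same filter can also be passed to Cucumber-JVM on the command line. The exact property depends on your Cucumber version (older versions use cucumber.options, as assumed here; newer ones use cucumber.filter.tags):
mvn test -Dcucumber.options="--tags @sprint1"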

Store tests trend without test results data

I want to see the test trend for, say, one month on a dashboard. But I limit my jobs to a maximum of 10 stored builds because they take a lot of space. Can I somehow store data about passed/failed tests so the charts can be displayed without storing hundreds of builds?
We use the Plot plugin.
It saves values in property files stored in the workspace folder, so the data is not lost when the build folders are gone. Each build appends its results to the property file.
You can configure a plot to include any number of builds.
Here is an example of one of the plots it has been making for us over the last year or so.
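As a rough sketch of how the numbers get into such a plot, a build step can write the latest pass/fail counts into the workspace on every build. The file names below are placeholders, and the YVALUE key is what the Plot plugin's properties format expects (check the plugin help for your version):
# Write test totals to small properties files in the workspace for the Plot plugin.
# The counts here are placeholders; parse them from your real test report instead.
$passed = 123
$failed = 4
"YVALUE=$passed" | Set-Content (Join-Path $env:WORKSPACE "plot-passed.properties")
"YVALUE=$failed" | Set-Content (Join-Path $env:WORKSPACE "plot-failed.properties")
Each configured plot then points at one of these files, and, as described above, the accumulated values survive even when old build folders are removed.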
I recommend separating your build job (the one that constructs libraries and executables and that likely has large artifacts) from your test job (the one that executes tests). This allows you to control aspects of the test job independently of the build job, including how long to keep build results.
Using the Copy Artifacts plugin you can make the build artifacts available to the test job.

Plotting arbitrary data for repository

I'm looking for a way to visualize arbitrary information about my repository over time, which might be some version-dependent number, such as:
lines of code
number of lines in a latex document
time between commits
anything that can be output by a script
What is the best way to visualize this information?
More specifically, I'm using mercurial and would ideally like something with a decent interface, with plot resizing/scrolling/etc... Jenkins' plot plugin is decent but not great, but more importantly it's not possible to visualize past data (say, after adding a new metric).
I would suggest splitting your task up to simplify things a little. It is likely you will need several different tools to collect and visualize all the required information. The historical view seems to be another big challenge.
Lines of code
There are several plugins available for Jenkins, but almost all are highly specialized. The SLOCCount plug-in seems to be the most universal, but it does not provide any graphical output.
NSIQ Collector Plugin
SLOCCount plug-in
JavaNCSS Plugin
There might be other options for your language. For example, CCCC will provide the required information for C and C++ code.
Number of lines in a latex document
I see several options to achieve that:
adapt an existing solution/plugin
use a repository statistics tool (Pepper, for example, can do the trick)
use a simple shell script to count lines and report it (see the sketch below)
For an idea of the charts Pepper will generate, check the Pepper gallery. There are other tools as well, for example hgchart.
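For the simple-script option, counting the lines of a LaTeX document is a one-liner (the path is a placeholder); the resulting number can then be reported through any of the plotting plugins mentioned here:
# Count the lines of a LaTeX document (path is a placeholder)
(Get-Content "paper\main.tex" | Measure-Object -Line).Lines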
Time between commits
The simplest solution is to let each commit trigger a trivial job, so that Jenkins provides all the information as part of the build history (with a timeline, etc.).
Another solution is to use a repository statistics tool once again.
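If you want the raw numbers yourself, a small PowerShell sketch (assuming hg is on the PATH and the script runs inside the repository) can pull the commit timestamps out of Mercurial and print the gap between consecutive commits:
# List the time between consecutive Mercurial commits.
$timestamps = hg log --template "{date|rfc3339date}\n" |
    ForEach-Object { [datetime]$_ } |
    Sort-Object
for ($i = 1; $i -lt $timestamps.Count; $i++) {
    $gap = $timestamps[$i] - $timestamps[$i - 1]
    "{0:u} -> {1:u} : {2}" -f $timestamps[$i - 1], $timestamps[$i], $gap
}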
Anything that can be output by a script
There are several good plug-ins for that.
The Plot plugin can visualize multiple values provided as properties or CSV files.
The Measurement Plots plugin scans build output to find values to be visualized.
Happy continuous integration.

Given a code base hosted on TFS, which command can tell me which files have changed the most?

I want to find the files under a given directory which have been updated the most. Is there any command that can display this info? Or is there any way to get the maximum version count for a given file, so I can write a script to get this info for all files and then sort descending?
Do you mean changed the greatest number of times, or undergone the most code churn?
Either way, looking at the report data might be the easiest option for you. Take a look at the following blog post I wrote explaining how to use Excel to look at TFS data; it uses churn as an example, allowing you to drill down into folders and files, but you should be able to get the data you are looking for.
Getting Started with the TFS Data Warehouse
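If you would rather script it against a local workspace, a rough sketch (assuming tf.exe is on the PATH and the default English 'brief' history format, which prints two header lines before the changeset rows) is to count the history entries per file and sort:
# Rank files under a directory by how many changesets touch them.
# Assumes a mapped TFS workspace; the directory path is a placeholder.
Get-ChildItem -Path "C:\workspace\MyProject" -Recurse -File | ForEach-Object {
    $history = tf history $_.FullName /noprompt /format:brief 2>$null
    [pscustomobject]@{
        File    = $_.FullName
        Changes = [Math]::Max(@($history).Count - 2, 0)   # subtract the header lines
    }
} | Sort-Object Changes -Descending | Select-Object -First 20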