Filtering directories out with Devel::Cover - Perl

I wanted to get coverage of my Perl-based application running under the Apache web server on CentOS, and went with Devel::Cover to get it done. After some initial struggles, I got it installed. Since the PERL5OPT environment variable did not help me collect coverage, I resorted to putting use Devel::Cover inside the code (I know it's a bad idea, but it serves my purpose). The cover_db directory gets its runs/structure entries after I restart the web server, but the data also includes the generic CPAN modules, which drags down the total coverage score.
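For reference, the include currently looks roughly like this in the application code (the database path is made up for this post; -db is just the Devel::Cover import option that points the coverage database at a directory the Apache user can write to):
use Devel::Cover (-db => '/var/www/myapp/cover_db');  # path is hypothetical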
For example: if I use a single method from Net::FTP, the total score drops because every line of that module is counted. Likewise for all the other modules from CPAN.
What I need is the ability to select the files from a specific directory for coverage and ignore all the rest. From the description, it seems the +inc and -inc options are designed for this, but when I try to use them I get the following error:
Unknown option: inc
I would like to know a couple of things.
After cover_db has been updated with the runs, is it possible to filter files out of it using cover options when generating the report?
Is there any other way I could get the coverage of only specific paths?
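What I'm hoping to end up with is something along these lines, with only my own directory instrumented (paths and regexes below are made up for illustration; -select and -ignore are the import options I found in the Devel::Cover documentation, and I'm not sure whether cover really accepts a -select_re switch or something equivalent):
# collect coverage only for my application code, not CPAN modules (regex is illustrative)
use Devel::Cover (-db => '/var/www/myapp/cover_db', -select => '^/var/www/myapp/lib/');
# or, filtering at report time, if cover supports something like this
cover -report html -select_re '^/var/www/myapp/lib/' /var/www/myapp/cover_db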
Appreciate the response.

Related

Can we get the Coverity report specific to only one issue like URL Manipulation Error?

I am using cov-capture and cov-analyze to get the reports in my VM. Can anyone help with the command to run cov-analyze for specific errors only? For example: various XML files are created and the analysis takes time to run, so to save time I would like to get a single report for a single issue such as URL Manipulation or an encryption error.
Note: the tool used is from Synopsys, with REST API code in Python and Flask.
To run the analysis with only a single checker enabled, use the --disable-default and --enable options like this:
$ cov-analyze --disable-default --enable CHECKER_NAME ...
CHECKER_NAME is the all-caps, identifier-like name of the checker that reports issues of a certain type. For URL Manipulation, the checker is called PATH_MANIPULATION. The Checker Reference lists all of the checker names.
However, be aware that doing this repeatedly for each checker will take significantly longer than running all desired checkers at once, because there is substantial overhead just in reading the program into memory for analysis.
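If you want a handful of issue types rather than just one, you can pass --enable more than once (as far as I know), which still costs only a single analysis pass. The second checker name below is a placeholder; look up the real names in the Checker Reference:
$ cov-analyze --disable-default --enable PATH_MANIPULATION --enable SOME_OTHER_CHECKER ...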
If your goal is faster analysis turnaround for changes you are making during development before check-in or push, you may want to look into using the cov-run-desktop command, which is meant for that use case.

Is there a way to find included Specflow scopes at a BeforeTestRun level?

I'm working with multiple features and scenarios and am looking for a way to find out what scopes are included in a test run at the time the test run starts, if that's possible.
There's a large-ish subset (category) of our tests that requires a setup taking 5-10 seconds. Currently we're using a BeforeFeature hook to optimize this setup as much as we can, but we have several features (though not all) under the same scope. We'd like to run this setup only when that category of tests is included in the test run.
In pseudocode it would essentially be:
[BeforeTestRun]
If the test run includes scenarios/features with tag "AdvancedSetup"
    AdvancedSetup();
In SpecFlow this information is not available.
But perhaps your test runner has this information available.
FYI: Tags are translated to TestCategories.
NUnit allows use of a higher-level setup that applies to a namespace. You access this by creating a SetUpFixture. If SpecFlow gives you a way to map features into specific namespaces, you could use this.
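A rough sketch of what that looks like with NUnit 3 attribute names (the namespace and the AdvancedSetup body are placeholders; in NUnit 2.x the setup method would be marked [SetUp] rather than [OneTimeSetUp]):
using NUnit.Framework;

// Runs once before any test in this namespace (and nested namespaces), and once after all of them.
namespace MyProject.Tests.AdvancedSetupFeatures
{
    [SetUpFixture]
    public class AdvancedSetupFixture
    {
        [OneTimeSetUp]
        public void RunBeforeAnyTestInNamespace()
        {
            AdvancedSetup();   // the expensive 5-10 second setup from the question
        }

        [OneTimeTearDown]
        public void RunAfterAllTestsInNamespace()
        {
            // tear down whatever AdvancedSetup created, if anything
        }

        private static void AdvancedSetup()
        {
            // placeholder for the real setup
        }
    }
}
So if SpecFlow lets you steer the generated test classes for the "AdvancedSetup" features into that namespace, the setup would only run when tests from that namespace are part of the run.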

Protractor - how to reuse the same spec file for different tests

In my Protractor conf.js file, I'd like to re-use the same spec files multiple times; however, it seems that's not possible.
Some background:
We are reading test cases from a JSON file, launching reports, then testing grid results and various DOM elements.
All reports have the same format. The primary differences lie in the report titles, data columns, actual data results, etc.
So in my conf.js file, ideally I'd like to re-use the same spec files multiple times - but my understanding is that I cannot do this.
For example, my spec array:
specs: [
'spec/report1-spec.js',
'spec/report-grid-details-spec.js',
'spec/report2-spec.js',
'spec/report-grid-details-spec.js',
'spec/report3-spec.js',
'spec/report-grid-details-spec.js',
]
I've read this post (http://ramt.in/how-to-run-identical-jasmine-specs-multiple-times-with-protractor/) where you can move your spec files into a node module, but 1) I don't want to move all the spec files there, and 2) it doesn't work anyway when I move even one spec file into a module export file.
If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed.
Just wondering if anyone out there has found a solution to this need to re-use spec files multiple times in one conf.js configuration.
Thank you,
Bob
If I can't do it, then I'll just move my report-grid-details-spec.js code into a common page object file and call it whenever it's needed.
This would probably be the easiest way to approach the problem. Though, I like the idea of putting specs into modules - it is a plus to reusability overall.
The thing is, Jasmine does not allow executing the same test twice in a single test run, and from what I understand there is no easy way to change that behavior.
One possible workaround is to completely restart Protractor and hence recreate the Jasmine testing environment, so that the next report-grid-details-spec.js runs in a fresh Jasmine environment. This is what the protractor-flake project does to retry failing tests: it basically restarts Protractor through the command line, passing the failing specs as a comma-separated list to the specs argument (source).
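If you do go the shared-module route mentioned above, a minimal sketch (file and function names are made up here) is to export the grid checks as a plain function and call it from each report spec, instead of listing report-grid-details-spec.js in the specs array:
// report-grid-details.js - shared helper, not listed in conf.js specs
module.exports = function checkReportGridDetails(reportName) {
  describe(reportName + ' grid details', function () {
    it('shows the expected grid columns and data', function () {
      // the grid/DOM assertions that used to live in report-grid-details-spec.js
    });
  });
};

// report1-spec.js
var checkReportGridDetails = require('./report-grid-details.js');
describe('report 1', function () {
  // report-specific expectations (title, columns, ...) go here
});
checkReportGridDetails('report 1');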

Is it possible to run NUnit against a specific (long) list of tests?

I have a list of several thousand NUnit tests that I want to run (generated automatically by another tool). (This is a subset of all of the tests, and changes frequently)
I'd like to be able to run these via NUnit-Console.exe. Unfortunately the /run option only takes the list directly on the command line, which in my case would not fit. I'd like it to pick up the list from a file.
I appreciate that I could use categories, but the list I want to run changes frequently and so I'd prefer not to have to start changing source code.
Does anyone know if there is a clean way to get NUnit to run my specified tests?
(I could break it down into a series of smaller calls to NUnit-console with a full command line, but that's not very elegant)
(If it's not possible, maybe I should add it as an NUnit feature request.)
I had a reply from Charlie Poole (from the NUnit development team) that this is not currently possible, but it has been added as a feature request for NUnit 2.6.
I see what you're saying, but like you say you can run a single fixture from the command line.
nunit-console /fixture:namespace.fixture tests.dll
How about generating all the tests in the same fixture? Or place them all in the same assembly?
nunit-console tests.dll
As mentioned in the NUnit link above, we need to specify the scenario/test case name. It's simple, but there is a bit of a trick to it. Passing just the test case name will not serve the purpose and you will end up with 0 test cases executed; we need to give the exact, fully qualified path to the test.
I don't know how it works for other languages, but using C# I have found a solution. Whenever we create a feature file, a corresponding feature.cs file gets created in Visual Studio. Open featureFileName.feature.cs, look for the namespace, and keep it aside (part 1):
namespace MMBank.Test.Features
Scroll down a bit and you will see the class name. Note that as well and keep it aside (part 2):
public partial class HistoricalTransactionFeature
Keep scrolling down and you will see the attributes that NUnit uses for execution:
[NUnit.Framework.TestAttribute()]
[NUnit.Framework.DescriptionAttribute("TC_1_A B C D")]
[NUnit.Framework.CategoryAttribute("MM_Bank")]
Below those attributes you can see the method name, which will most likely be TC_1_ABCD with certain parameters:
public virtual void TC_1_ABCD(string username, string password, string visit)
You will have multiple such methods, one per scenario in your feature file. Note the method (test case) you want to execute and keep it aside (part 3).
Now join all the parts with dots. You will end up with something like this:
MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
That's it. Similarly, you can build the test case names from multiple feature files and stack them up in a text file, one test case name per line. For the command to execute them from the command prompt, browse through the NUnit link above.
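For completeness, the NUnit 3 console runner can read such a text file directly with its --testlist option (the file and assembly names below are made up; if I remember correctly, the 2.6-era runner gained a similar /runlist switch):
nunit3-console --testlist=testcases.txt MMBank.Test.dll
Here testcases.txt contains one fully qualified name per line, e.g. MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD.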

How do I find all the modules used by a Perl script?

I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been kept up to date as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third, related question: if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start shipping entire local Perl installations with my application?
Just to be clear: you cannot generically get a list of the modules an app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the program with that specific command line for ALL the module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
Print %INC at every single possible exit point in the code, as slu's answer says. This should probably be done in an END {} block as well as in a __DIE__ handler to cover all possible exit points, and even then it may not be fully 100% covering in the general case if somewhere within the program your __DIE__ handler gets overwritten (there's a sketch of this at the end of this answer).
Devel::Modlist (also mentioned in slu's answer) - the downside compared to Devel::Cover is that it does NOT seem to be able to aggregate a database across multiple sample runs the way Devel::Cover does. On the plus side, it's purpose-built, so it has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow runtime analysis based on arbitrary command-line arguments (at first glance it only lets you execute the program with no arguments), and if that's true, it is inferior to all three methods above for any code that may load modules dynamically.
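A minimal sketch of that END/__DIE__ approach, assuming the program can write to a log file (the path here is hypothetical):
# Dump every loaded module (and the file it was loaded from) when the program exits.
END {
    if (open my $fh, '>>', '/tmp/used_modules.log') {   # hypothetical path
        print {$fh} "$_ => $INC{$_}\n" for sort keys %INC;
        close $fh;
    }
}

# Also record %INC on fatal errors, then re-throw so normal error handling still happens.
$SIG{__DIE__} = sub {
    if (open my $fh, '>>', '/tmp/used_modules.log') {
        print {$fh} "$_ => $INC{$_}\n" for sort keys %INC;
        close $fh;
    }
    die @_;
};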
Module::ScanDeps - Recursively scan Perl code for dependencies
It does both static and runtime scanning, but just modules; I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
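For what it's worth, Module::ScanDeps ships with a scandeps.pl command-line tool; as far as I recall, something like the following works (the script name is made up, and -x asks it to actually execute the script rather than only scan it statically):
scandeps.pl -x myscript.pl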
You could look at %INC; see http://www.perlmonks.org/?node_id=681911, which also mentions Devel::Modlist.
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
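Its typical invocation is just the debugger flag (substitute your own script name):
perl -d:TraceUse yourscript.pl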