Wrong coverage report in karma - karma-runner

I've configured my karma.conf.js with preprocessing enabled to get a report on the code coverage of my tests. I've added these lines to the preprocessors section:
preprocessors: {
  'public/js/app.js': ['coverage'],
  'public/js/filters.js': ['coverage'],
  'public/js/directives.js': ['coverage'],
  'public/js/services/*.js': ['coverage'],
  'public/js/controllers/*.js': ['coverage'],
},
What I get is a report that is totally wrong. I know that I've written tests for each module and the functions within, but the coverage report only shows the coverage for the services correctly.
Take the directives, for instance: I know that I've written tests for them and that those tests are executed, but the report claims that only 36% of my code lines are covered.
What could be the reason for this strange behavior?
Update:
I see this output from the spec reporter:
Directives:
  bsTooltip:
    when the element was created:
      PASSED - should call the popup function
  bsSwitchtext:
    when the model isBusy changes to true:
      PASSED - should call the button method with loading
    when the model isBusy changes to false:
      PASSED - should call the button method with loading
So I think that my tests are all being executed.

Looks like there's an issue with TypeScript & Jasmine (which is used by Angular). Enabling source maps for the test build appears to fix this issue.
I enabled source maps in Angular 6.1 as follows:
Go to angular.json and, in the main project, find the test target and add sourceMap: true to enable source maps for the test run.
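For reference, a minimal sketch of the relevant angular.json fragment (the project name my-app is a placeholder; everything else follows the default CLI layout):
"projects": {
  "my-app": {
    "architect": {
      "test": {
        "builder": "@angular-devkit/build-angular:karma",
        "options": {
          "sourceMap": true
        }
      }
    }
  }
}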
To enable that from the CLI, run the command with --source-map or --sm=true.
GitHub issue links:
Code coverage report issue with branch coverage (if path not taken)
ng test --code-coverage in 6.1 improperly detecting branches
I felt obliged to write this answer because I had the same issue, and this was the first question in the Google search results.

Try adding a console.log('testing123'); at one of the points that shows as not covered. If it shows up when you run the tests, you know that something is going wrong with Istanbul.
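For example, a hypothetical directive with the probe added at a line the report claims is uncovered:
// hypothetical directive; the console.log line is the probe
angular.module('app').directive('bsTooltip', function () {
  return {
    link: function (scope, element) {
      console.log('testing123'); // if this prints during the run, the line really executes
      element.tooltip();
    }
  };
});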
However, my guess would be that either there is something wrong with your configuration and those tests are not running at all, or the tests are not executing the code the way you think they are.

Try changing what you have in preprocessors to:
preprocessors: {
  '**/public/js/**/*.js': ['coverage']
},
I was not able to get the report to work unless I followed the specific syntax with **/ before directories in the preprocessors object.
The karma-coverage documentation likewise includes the leading **/ before directories listed in preprocessors.
Based on this SO answer.
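Putting it together, a minimal karma.conf.js sketch (the file list and reporter settings here are assumptions, not taken from the question):
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['public/js/**/*.js', 'test/**/*.js'],
    preprocessors: {
      // leading **/ as described above
      '**/public/js/**/*.js': ['coverage']
    },
    reporters: ['progress', 'coverage'],
    coverageReporter: {
      type: 'html',
      dir: 'coverage/'
    }
  });
};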

Related

protractor plugin to print out filename of specfile

I'm trying to see if there is a way to print out the name of the specfile that has just finished running. I was hoping to do this in the teardown function but I am not sure how to obtain the actual filename. Does anyone have any experience with protractor plugins?
Unfortunately, neither a Jasmine reporter nor a Protractor plugin can give you the name of the currently executing spec file, only the names of the 'describe' and 'it' blocks. Arguably this is more informative than the filename. Check the Jasmine custom reporter docs: https://jasmine.github.io/2.5/custom_reporter.html
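If the block names are enough for you, a minimal custom reporter sketch (names are illustrative) that logs them as each spec finishes:
// logs each finished spec's full describe/it path and status
var specLogger = {
  specDone: function (result) {
    console.log('Finished: ' + result.fullName + ' (' + result.status + ')');
  }
};
jasmine.getEnv().addReporter(specLogger);
In Protractor you would typically register this in the config's onPrepare.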
You can try to build something of your own to get the filename, but maybe you should consider other options.

Cleanup for running spec files in series in Protractor

I am running multiple specs using a Protractor configuration file as follows:
...
specs: ['abc.js', 'xyz.js']
...
After abc.js is finished I want to reset my App to an initial state from where the next spec xyz.js can kick off.
Is there a well defined way of doing so in Protractor? I'm using Jasmine as a test framework.
You can use something like this:
specs: ['*.js']
But I recommend giving the specs a suffix, such as abc-spec.js and xyz-spec.js. Your specs entry then looks like this:
specs: ['*-spec.js']
This prevents the config file from being 'run'/tested if it lives in the same folder as your tests/spec files.
There is also a downside: the tests will run in 0 -> 9 and A -> Z order, e.g. abc-spec.js will run before xyz-spec.js. If you want to define a custom execution order, you can prefix your spec files' names, for instance 00-xyz-spec.js and 01-abc-spec.js.
To restart the app, sadly there is no common way (source), so you need a workaround. Use something like
browser.get('http://localhost:3030/');
browser.waitForAngular();
whenever you need to reload your app. It will force the page to be reloaded. But if your app uses cookies, you will also need to clear them out to make the reset complete.
I used a different approach and it worked for me. Inside my first spec I add a logout test case, which logs out of the app and, on reaching the login page, clears the cookies before logging in again, using the following:
browser.driver.manage().deleteAllCookies();
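Combining the reload and the cookie cleanup, a small helper sketch (the URL is a placeholder):
// hypothetical helper: drop session cookies, then reload the app
function resetApp() {
  return browser.driver.manage().deleteAllCookies().then(function () {
    browser.get('http://localhost:3030/');
    return browser.waitForAngular();
  });
}
You could call it from an afterAll in each spec file so the next spec starts from a known state.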
The flag named restartBrowserBetweenTests can also be specified in a configuration file. However, this comes with a valid warning from the Protractor team:
// If [set to] true, protractor will restart the browser between each test.
// CAUTION: This will cause your tests to slow down drastically.
If the speed penalty is of no concern, this could help.
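In the configuration file that would look like this (a sketch):
exports.config = {
  specs: ['*-spec.js'],
  // CAUTION: restarting the browser between each test slows the run down drastically
  restartBrowserBetweenTests: true
};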
If the above doesn't help and you absolutely want to be sure that the state of the app (and browser!) is clean between specs, you need to roll your own shell script that gathers all your *_spec.js files and calls protractor --specs [currentSpec from a spec list/test suite].
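The script does not have to be shell; a rough Node.js sketch of the same idea (the specs folder and config file name are assumptions):
// run-specs.js: start a fresh protractor process per spec file
var fs = require('fs');
var execSync = require('child_process').execSync;

fs.readdirSync('./specs')
  .filter(function (f) { return /_spec\.js$/.test(f); })
  .forEach(function (f) {
    // each run gets a brand-new browser session and a clean app state
    execSync('protractor protractor.conf.js --specs specs/' + f, { stdio: 'inherit' });
  });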

Karma - Istanbul - console.log being counted as covered?

I noticed that Karma/Istanbul is marking console.log() as a covered test case.
Is there any way to make Istanbul skip console.log()?
I could just remove or rename the console.log, but I am interested in whether Istanbul has options for this.
If you want to exclude some statements from the coverage report, use specially formed comments:
https://github.com/gotwarlost/istanbul/blob/master/ignoring-code-for-coverage.md
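For example, to drop a single console.log statement from the report (a sketch):
function fetchData(callback) {
  /* istanbul ignore next */
  console.log('fetching...'); // excluded from the coverage report
  callback(null, []);
}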

When running multiple tags with NUnit Console Runner and SpecFlow I get incorrect results

This is a follow up to my earlier questions on setting up tags: Can I use tags in SpecFlow to determine the right environment to use? and setting up variables from those tags: How to set up a URL variable to be used in NUnit/SpecFlow framework
I've set up some variables to aid in populating my NUnit tests, but I find that when the NUnit runner finds a test that fits the first tag, it runs it with the settings of the second tag. Since the tags are important to me, not only for knowing which test to run but also which variables to use, this is causing me problems.
So if I have the following tags:
#first
#first #second
#second
If I run #second, everything is fine. If I run #first, any scenario that has only #first is fine, but for scenarios where I have both #first and #second, the scenario runs (because #first is there) yet uses the parameters for #second. Since I am running the DLL through the NUnit console and the tests are written with SpecFlow, I am not sure where the issue may lie.
Does anyone have advice on setting up tests to run like this?
You've not been very specific, but it sounds like you have a feature file like this:
#first
Scenario: A - Something Specific happens under the first settings
    Given ...etc...

#second
Scenario: B - Something Specific happens under the second settings
    Given ...etc...

#first #second
Scenario: C - Something general happens under the first and second settings
    Given ...etc...
It looks like you are selecting tests to run in NUnit by running all the tests in the "first" category.
If you set up event definitions like this:
[BeforeFeature("first")]
public static void FirstSettings()
{ ... }

[BeforeFeature("second")]
public static void SecondSettings()
{ ... }
then, when you execute scenario C, FirstSettings() and SecondSettings() will both be executed before it, regardless of whether you used the #second category to select the test to run under NUnit.
This is almost certainly the reason that you are seeing the second settings applied to your test with both tags - I expect the second settings overwrite the first ones, right?
My only advice for setting up tests like this is that binding events and so on to specific tags can be useful, but should be used as little as possible. Instead, make your individual step definitions reusable, and set up your test environment, where possible, with Given steps.

How do I get the SpecFlow Scenario to be reported when the test runs?

I've managed to tune the output from my SpecFlow tests so that it reads nicely, with just the steps reported plus failures. But it's still pretty unreadable without the Feature and Scenario names also being reported.
Looking at the generated code, it appears that the Feature and Scenario names are encoded as NUnit DescriptionAttributes.
Can I configure SpecFlow or NUnit to also report these to stdout, so I get a nicely flowing "story-like" output?
If you define an extra method in your step definition class as follows, NUnit will report the feature and scenario text.
[BeforeScenario]
public void OutputScenario()
{
    Console.WriteLine("Feature: " + FeatureContext.Current.FeatureInfo.Title);
    Console.WriteLine(FeatureContext.Current.FeatureInfo.Description);
    Console.WriteLine("\r\nScenario: " + ScenarioContext.Current.ScenarioInfo.Title);
}
I hope this helps.