Xcode Swift Unit Test Output is too verbose

Is it possible to limit the console output of unit tests in Xcode? I don't need a list of every passed test case (hundreds) and test suite (dozens). I only want to see the details of what has failed.

Answering my own question...
The best solution I have found is to enter the word "failed" (without the quotes) into the filter text box at the bottom of the console window. Once this is done, the console output will show only one line for each:
- XCTAssert failure,
- Test case failure,
- Test suite failure.
If there are no test failures, nothing will be displayed. However, the "test succeeded" pop-up is only displayed for a couple of seconds, and if you miss it there is nothing to indicate that the tests have completed.
If the filter word is changed to "fail", the console output will additionally include stats for the number of tests passed or failed. These are displayed even if all the tests pass.

Related

Allure report remove console output attachment?

I have Allure reporting set up for my C# Selenium framework, and everything is working fine, but I have noticed something that bothers me that I'd like to change. In every single test there is always an attachment called "console output" that is empty and 0 KB in size. My question is: is there any way to remove or disable this?
I'm guessing this is the confluence of two minor bugs, one in NUnit and one in Allure.
On the NUnit side, the XML that is created for a test result contains an <output> element to hold the text output by the test. It sounds as if an empty element is produced when there is no output. You can check whether this is the case with your version of NUnit by examining the XML output.
On the Allure side, an empty element could be ignored, but apparently it isn't.
Either or both of these should be reported to the respective projects.
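For reference, an empty output element in an NUnit 3 result file would look something like this (the surrounding test-case attributes here are illustrative, not taken from the question):
<test-case id="0-1001" name="SomeTest" result="Passed">
  <!-- an empty output element; presumably this is what Allure turns into a 0 KB "console output" attachment -->
  <output><![CDATA[]]></output>
</test-case>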

Can I write custom task start/end records to VSTS build/release log

I would like to be able to add items to the logs displayed during VSTS builds and releases.
I've looked at this page and written the following test PowerShell script:
$guid = $env:taskGuid
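# taskGuid is assumed to be a build variable supplied via the environment;
# the same id is reused below to update a single timeline record.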
write-host "##vso[task.logdetail id=$guid;name=project1;type=build;order=1]create new timeline record"
# write-host "##vso[task.logdetail id=new guid;parentid=exist timeline record guid;name=project1;type=build;order=1]create new nested timeline record"
write-host "##vso[task.logdetail id=$guid;progress=50;state=InProgress;]update timeline record"
write-host "##vso[task.logdetail id=$guid;state=Completed;result=Succeeded]complete timeline record"
I was hoping to see additional entries in the log, but I see no difference at all; not even the write-host statements appear.
So I have two questions:
1. What should I see from my sample script above?
2. Is it possible to get additional entries in that log without actually adding additional tasks?
The syntax you're using is designed to show in the timeline, which doesn't appear to be used in the new build layout yet (see here to disable the preview of the new build output). If you use the old build output, select the step for the PowerShell script you're executing and then select the timeline; you will then see your step (which in my case is ssloan) being logged by the logger. See here for a better rundown of the various build steps than I could give.
For just writing output to the logs you can use a variety of the writers PowerShell provides; Write-Host should be sufficient so long as you always have a host to write to. These will then appear in your logs.
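If all you want is extra, highlighted lines in the step log rather than timeline records, something like the following should also work (this uses the standard ##vso logging-command syntax; the messages themselves are just examples):
write-host "##[section]Starting my custom section"
write-host "##vso[task.logissue type=warning]Something worth flagging happened"
write-host "##vso[task.logissue type=error]Something went wrong"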

Selenium IDE GoToIf command

I am using the Selenium IDE GotoIf command: if my condition is true, Selenium jumps to the label that I specified. In my case, I would like my test to break immediately if the condition is true, without being redirected to a label. I think what I need is an IF command instead of a GotoIf command. I would like my test to behave like this:
IF CONDITION == TRUE
  TEST BREAK
ELSE
  KEEP EXECUTING
Is there any command to make that happen? I would like execution to continue only when my condition is false, and the test case to break when it is true; of course, since the test case would break, execution would stop at that point.
Also, is there a command to make a test case explicitly break in Selenium IDE? I would like to be able to make my test case break on purpose; is there any command for that?
I answered a similar question before, albeit in a different way.
To force Selenium to end with an error message you have to actually cause an error on purpose. Truthfully, I would be using:
gotoIf / condition is false / Variableiffalse
click / Errormessage of my choice / (leave value field empty)
label / Variableiffalse / (leave value field empty)
(test continues up to the end)
The failed click command will automatically end the test and post the target as your error message. Your error message should appear as something similar to: [error] Element message I put in target not found. I hope this was useful.

Wrong coverage report in karma

I've configured my karma.conf.js with preprocessing enabled to get a report on the code coverage of my tests. I've added these lines to the preprocessors section:
preprocessors: {
  'public/js/app.js': ['coverage'],
  'public/js/filters.js': ['coverage'],
  'public/js/directives.js': ['coverage'],
  'public/js/services/*.js': ['coverage'],
  'public/js/controllers/*.js': ['coverage'],
},
What I get is a report that is totally wrong. I know that I've written tests for each module and the functions within, but the coverage report only shows the services correctly.
Take the tests for directives, for instance: I know that I've written some tests and that they are executed, but the report shows coverage for only 36% of my code lines.
What could be the reason for this strange behavior?
Update:
I see this output from the spec reporter:
Directives:
  bsTooltip:
    when the element was created:
      PASSED - should call the popup function
  bsSwitchtext:
    when the model isBusy changes to true:
      PASSED - should call the button method with loading
    when the model isBusy changes to false changes:
      PASSED - should call the button method with loading
So I think that my tests are all being executed.
It looks like there's an issue with TypeScript and Jasmine as used by Angular. Enabling source maps for the test build appears to fix this issue.
I enabled source maps in Angular 6.1 as follows:
Go to angular.json and, in the main project, find test and add sourceMap: true to enable source maps for the test run.
To enable that from the CLI, run with --source-map or --sm=true.
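For example, the relevant part of angular.json would look something like this (the project name and file paths here are illustrative):
"projects": {
  "my-app": {
    "architect": {
      "test": {
        "builder": "@angular-devkit/build-angular:karma",
        "options": {
          "main": "src/test.ts",
          "karmaConfig": "src/karma.conf.js",
          "sourceMap": true
        }
      }
    }
  }
}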
GitHub issue links:
Code coverage report issue with branch coverage (if path not taken)
ng test --code-coverage in 6.1 improperly detecting branches
I'm obligated to write this answer as I had the same issue, and this was the first question in the Google search results.
Try adding a console.log('testing123'); at one of the points that shows as not covered. If it shows up when you run the tests, you know that something is going wrong with Istanbul.
However, my guess would be that either there is something wrong with your configuration and those tests are not running at all, or the tests are not executing the code as you think they are.
Try changing what you have in preprocessors to:
preprocessors: {
  '**/public/js/**/*.js': ['coverage']
},
I was not able to get the report to work unless I followed the specific syntax with **/ before directories in the preprocessors object.
The karma-coverage documentation includes the preceding **/ before directories listed in preprocessors.
Based on this SO answer.

When running multiple tags with NUnit Console Runner and SpecFlow I get incorrect results

This is a follow-up to my earlier questions on setting up tags (Can I use tags in SpecFlow to determine the right environment to use?) and setting up variables from those tags (How to set up a URL variable to be used in NUnit/SpecFlow framework).
I've set up some variables to aid in populating my NUnit tests, but I find that when the NUnit runner finds a test that fits the first tag, it runs it with the settings of the second tag. Since the tags are important to me not only for knowing which tests to run but also for knowing which variables to use, this is causing me problems.
So if I have the following tags:
#first
#first #second
#second
If I run #second, everything is fine. If I run #first, any scenario that has only #first works fine, but for scenarios tagged with both #first and #second, the scenario is run (because #first is there) yet it uses the parameters for #second. Since I am running the DLL through the NUnit console and the tests are written through SpecFlow, I am not sure where the issue may lie.
Does anyone have advice on setting up tests to run like this?
You've not been very specific, but it sounds like you have a feature file like this:
#first
Scenario: A - Something Specific happens under the first settings
  Given ...etc...

#second
Scenario: B - Something Specific happens under the second settings
  Given ...etc...

#first #second
Scenario: C - Something general happens under the first and second settings
  Given ...etc...
It looks like you are selecting tests to run in NUnit by running all the tests in the "first" category.
If you set up event definitions like this:
// Note: hooks like these must live in a class marked [Binding],
// and hook methods should be static and return void.
[BeforeFeature("first")]
public static void FirstSettings()
{ ... }

[BeforeFeature("second")]
public static void SecondSettings()
{ ... }
If you execute scenario C, then FirstSettings() and SecondSettings() will both be executed before it, regardless of whether you used the #second category to select the test to run under NUnit.
This is almost certainly the reason that you are seeing the second settings applied to your test with both tags - I expect the second settings overwrite the first ones, right?
My only advice for setting up tests like this is that binding events and so on to specific tags can be useful, but should be used as little as possible. Instead, make your individual step definitions reusable and, where possible, set up your test environment with Given steps (a rough sketch of that idea follows).
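As a minimal sketch of that idea (the step wording, the SettingsStore helper, and the BaseUrl key are all hypothetical, and this assumes SpecFlow's older ScenarioContext.Current API):
using TechTalk.SpecFlow;

[Binding]
public class EnvironmentSteps
{
    // Each scenario states its environment explicitly in a Given step
    // instead of relying on tag-bound BeforeFeature hooks.
    [Given(@"I am testing against the ""(.*)"" environment")]
    public void GivenIAmTestingAgainstEnvironment(string environmentName)
    {
        // SettingsStore is a hypothetical helper that maps an environment
        // name to its base URL; store the result for later steps to use.
        ScenarioContext.Current["BaseUrl"] = SettingsStore.UrlFor(environmentName);
    }
}
A scenario would then begin with a line like: Given I am testing against the "first" environment, which makes the intended settings explicit in the scenario itself rather than dependent on which tags were used to select it.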