Continuous update of junit xml while test is running - pytest

Here is how I am invoking pytest
pytest test.py -v -s -x --junitxml=junit.xml
What I see is that junit.xml gets produced at the end of the test. I am running the tests in Jenkins and I want to be able to use plugins like "Test In Progress" that will show the details of the in-progress/completed tests.
Is there a way to make pytest update xml file with the progress of the tests?


How to stop powershell script after failed dotnet command?

I see inconsistent behaviour from the dotnet command when it is executed from a PowerShell script.
Executing in a powershell script:
dotnet build "$slnPath"
ignores any compilation errors and continues executing the script. I have to check $lastexitcode to see if there are any errors.
On the other hand, the command:
dotnet test "$slnPath"
immediately terminates execution of the PowerShell script if there are any failed unit tests.
Is that normal behaviour? Do I need to write different error handling depending on the arguments of the dotnet command?
The dotnet test command launches the test runner console application specified for a project. The test runner executes the tests defined for a unit test framework (for example, MSTest, NUnit, or xUnit) and reports the success or failure of each test. If all tests are successful, the test runner returns 0 as an exit code; otherwise if any test fails, it returns 1.
But dotnet build requires the project.assets.json file, which lists the dependencies of your application. The file is created when dotnet restore is executed. Without the assets file in place, the tooling can't resolve reference assemblies, which results in errors.
You can read what each command does (and find the switches that control it) at this address: tool_description
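For what it's worth, you can make the behaviour uniform by checking $LASTEXITCODE after every dotnet invocation yourself; a minimal sketch, assuming $slnPath is set as in the question:
dotnet build "$slnPath"
if ($LASTEXITCODE -ne 0) {
    # Fail fast on compilation errors, which PowerShell does not do by itself.
    Write-Error "dotnet build failed with exit code $LASTEXITCODE"
    exit $LASTEXITCODE
}
dotnet test "$slnPath"
if ($LASTEXITCODE -ne 0) {
    Write-Error "dotnet test failed with exit code $LASTEXITCODE"
    exit $LASTEXITCODE
}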

py.test gives Coverage.py warning: Module sample.py was never imported

I ran some sample code from this thread.
How to properly use coverage.py in Python?
However, when I executed the command py.test test.py --cov=sample.py, it gave me a warning and therefore no report was created.
platform linux2 -- Python 2.7.12, pytest-3.2.3, py-1.4.34, pluggy-0.4.0
rootdir: /media/sf_Virtual_Drive/ASU/CSE565_testand validation/Assignments/temp, inifile:
plugins: cov-2.5.1
collected 3 items
test.py ...
Coverage.py warning: Module sample.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
Does anyone have an idea why coverage.py does not work?
However, if I run coverage run -m py.test test.py separately, it does not show any warning.
Short answer: you need to run with the module name, not the file name: pytest --cov sample test.py
Long answer:
One comment in the answer you linked (How to properly use coverage.py in Python?) explains that this doesn't seem to work if the file you are trying to get the coverage of is a module imported by the test. I was able to reproduce that:
./sample.py
def add(*args):
    return sum(args)
./test.py
from sample import add
def test_add():
    assert add(1, 2) == 3
And I get the same error:
$ pytest --cov sample.py test.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /path/to/directory, inifile:
plugins: cov-2.6.1
collected 1 item
test.py . [100%]
Coverage.py warning: Module sample.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
/path/to/directory/.venv/lib/python3.7/site-packages/pytest_cov/plugin.py:229: PytestWarning: Failed to generate report: No data to report.
self.cov_controller.finish()
WARNING: Failed to generate report: No data to report.
However, when using the module name instead:
pytest --cov sample test.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /path/to/directory, inifile:
plugins: cov-2.6.1
collected 1 item
test.py . [100%]
---------- coverage: platform darwin, python 3.7.2-final-0 -----------
Name        Stmts   Miss  Cover
-------------------------------
sample.py       2      0   100%
The pytest-cov documentation seems to indicate you can use a PATH, but it might not be working in all cases...
tl;dr
Use coverage to generate the statistics file .coverage and then create a report that scopes to your specific file only.
coverage run -m pytest .\test\test_named_prng.py
coverage html --include=named_prng.py
Situation
Let's suppose you have some python files in your package, and you also have test cases within a single test file (test/test_named_prng.py). You want to measure the code coverage of your test file on one specific file within your package (named_prng.py).
\namedPrng
│ examples.py
│ named_prng.py
│ README.md
│ timeit_meas.py
│ __init__.py
│
└───test
        test_named_prng.py
        __init__.py
Here namedPrng/__init__.py imports examples.py and named_prng.py, while test/__init__.py is empty.
An example with files is available on my GitHub.
Problem
Your problem is that with pytest or with coverage you cannot scope the report to your specific file (named_prng.py), because every other file imported from your package is also included in the report.
Root cause
If there is an __init__.py at the level of the module you want to import, that __init__.py is executed on import and may therefore import more files than necessary. There are options to tell pytest and coverage which modules to restrict the investigation to, but if those modules pull in further modules from your package, they will be analysed too.
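To illustrate, namedPrng/__init__.py presumably contains imports along these lines (a sketch; see the GitHub example for the real file), so importing the package executes both submodules:
# namedPrng/__init__.py (sketch)
from . import examples
from . import named_prng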
Symptom with pytest
The --cov option of the pytest-cov package doesn't work if the (sub)module you want the coverage report on is imported from an __init__.py.
If you run pytest (from namedPrng) with
pytest .\test\test_named_prng.py --cov --cov-report=html
you will get a report on every .py file except timeit_meas.py, because that file is never imported: not by the test, not by its __init__.py, not by the imported named_prng.py, and not by the package's __init__.py.
If you run pytest with
pytest .\test\test_named_prng.py --cov=./ --cov-report=html
then you explicitly tell coverage (invoked through pytest) to include everything at that level, so every .py file will be included in the report.
You'd like to tell coverage to create the report only on the source code of named_prng.py, but if you specify your module to --cov with
pytest .\test\test_named_prng.py --cov=named_prng --cov-report=html
or with --cov=named_prng.py you will get a warning:
Coverage.py warning: Module named_prng.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
WARNING: Failed to generate report: No data to report.
Symptom with coverage
One can run coverage and the reporting step separately, hoping that more detailed options can be passed to coverage.
By issuing
coverage run -m pytest .\test\test_named_prng.py
coverage html
you get the same report on the 5 .py files. If you try to tell coverage to use only named_prng.py by
coverage run --source=named_prng -m pytest .\test\test_named_prng.py
or with --source=named_prng.py, you will get a warning
Coverage.py warning: Module named_prng.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
and no report will be created.
Solution
You need to use the --include switch of coverage, which unfortunately cannot be passed through the pytest CLI.
Use coverage CLI
You can restrict the scope of investigation during code coverage calculation time:
coverage run --include=named_prng.py -m pytest .\test\test_named_prng.py
coverage html
or at reporting time:
coverage run -m pytest .\test\test_named_prng.py
coverage html --include=named_prng.py
Use pytest + settings file
One can call pytest with detailed configuration via a config file. In the directory where you issue pytest, set up a .coveragerc file with the content
[run]
include = named_prng.py
Check coverage's description on the possible options and patterns.
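With the .coveragerc in place, the earlier pytest invocation should now scope the report to named_prng.py, since pytest-cov picks up .coveragerc by default (a sketch, assuming the same layout as above):
pytest .\test\test_named_prng.py --cov --cov-report=html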
This can be solved by running coverage on your test file first and then generating the report, as follows:
coverage run test.py
coverage report -m

Turning on/off and saving update of Bullseye

I want to use Bullseye code coverage from my DOS batch script.
I have written the code below. The test.cov file is created, but no results are written to it.
SET MY_LOCAL_COV_FILE=c:\test.cov
SET COVFILE=%MY_LOCAL_COV_FILE%
SET COVBUILDZONE=%BUILD_NUMBER%
covselect --file "%MY_LOCAL_COV_FILE%" --add c:
cov01 --on
MSBuild ".\my.sln" /t:clean /p:Configuration="Debug"
cov01 --off
I think you have two problems.
1. You are not building the code; you have only run the 'clean' target from MSBuild. Try running 'rebuild', which will clean and then compile the code so that the code coverage instrumentation is inserted.
2. You are not running the built code, so Bullseye can't gather any meaningful coverage information. Before the 'cov01 --off', try running your executable, or unit test, or whatever it is you have built.
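Putting both fixes together, the script might look like this (a sketch; my_tests.exe is a hypothetical placeholder for whatever executable or test runner you actually build):
SET MY_LOCAL_COV_FILE=c:\test.cov
SET COVFILE=%MY_LOCAL_COV_FILE%
SET COVBUILDZONE=%BUILD_NUMBER%
covselect --file "%MY_LOCAL_COV_FILE%" --add c:
cov01 --on
REM 'rebuild' cleans and then compiles, so the coverage instrumentation is inserted.
MSBuild ".\my.sln" /t:rebuild /p:Configuration="Debug"
REM Run the instrumented build so Bullseye can record coverage data.
.\Debug\my_tests.exe
cov01 --off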

How do I configure tox so it will run pytest coverage on a single environment instead of all?

I have a complex tox.ini configuration with multiple environments for different versions of Python.
I would like to know how to tell tox to run coverage only on the default Python interpreter.
One of the problems is that the default Python environment can differ from one platform to another.
I have a wrapper script which calls tox -e py25,py26,docs, where the -e arguments are the detected versions of Python.
[tox]
...
[testenv:docs]
...
[testenv]
commands=py.test --cov-report xml --cov scripts
...
[testenv:py26]
...
[testenv:py25]
...
Desired behaviour: run pytest with coverage for a single environment (this is supposed to run integrated with jenkins).
I think you could add and include the [testenv:py] environment, which uses the Python interpreter that tox itself was invoked with. If you define the coverage run there, you should get what you want.
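A sketch of what that could look like, moving the coverage flags out of the shared [testenv] and into [testenv:py] (the scripts target is taken from the question):
[testenv]
commands=py.test
...
[testenv:py]
commands=py.test --cov-report xml --cov scripts
Your wrapper script would then append py to the environments it passes to -e, e.g. tox -e py25,py26,docs,py.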

CoffeeScript build setup that supports unit testing?

I want to use CoffeeScript for building what will essentially be a JavaScript library.
I would just like to be able to
define some classes, with inheritance
keep my code in several files
write some unit tests (QUnit or whatever works, preferably writing tests in CoffeeScript)
(ideally) have the project watched and built automatically while I work
This seems reasonable, no? My plan is just having the unit tests run against the compiled JavaScript, in a browser, although if I can run them straight in node.js that's even better.
Currently I'm trying to do this with CoffeeToaster and QUnit, using two different CoffeeToaster configurations, one with tests and one without. It is working, but perhaps somebody has a better suggestion? Should I ditch CoffeeToaster and do it with Cake? Or get another unit testing framework? Can anybody point me to a tutorial for this? I'm making a clientside JS lib, so I don't want to involve Rails etc.
I'm currently using:
Mocha as the test runner and should.js for assertions;
Mockery to intercept certain require calls for isolated testing with mocks/stubs of required libraries;
JSCoverage for instrumenting the code for code coverage reports.
My code lives in src/ and I write my tests in CoffeeScript. I use make to build and test the code.
make build compiles the CoffeeScript in src/ to JavaScript in lib/.
make test builds the code and then runs the tests in test/.
make monitor watches and runs the tests as soon as they change. Unfortunately it doesn't recompile the code. I use a Vim keybinding to call make, which also triggers Mocha to re-run the tests.
Edit: If this bothers you, you could run coffee --watch -o lib/ -c src/.
make coverage generates a code coverage report and puts it in lib-cov/report.html.
My Makefile looks somewhat like this (recipe lines are indented with tabs; the @ prefix just silences command echoing):
COFFEE = ./node_modules/.bin/coffee --compile
MOCHA = NODE_ENV=test ./node_modules/.bin/mocha
MOCHA_OPTS = \
	--compilers coffee:coffee-script \
	--require should \
	--colors
REPORTER = spec

build:
	@$(COFFEE) --output lib/ src/

test: build
	@$(MOCHA) --reporter $(REPORTER) $(MOCHA_OPTS)

monitor:
	@$(MOCHA) --reporter min $(MOCHA_OPTS) \
		--watch --growl

coverage: instrument
	@MYLIB_COV=1 $(MOCHA) $(MOCHA_OPTS) \
		--reporter html-cov > lib-cov/report.html

instrument: build
	@rm -rf ./lib-cov
	@jscoverage ./lib ./lib-cov

.PHONY: build test monitor coverage instrument
You could probably use the above with very little modification.
To generate the coverage report with make coverage, the tests must be run against the instrumented code in lib-cov/ instead of the code in lib/. To make this possible, three things are needed:
The Makefile should set an environment variable, like MYLIB_COV (change the name as you like).
Your index.js should look at this environment variable and require either lib/ or lib-cov/ accordingly:
// index.js
module.exports = process.env.MYLIB_COV
? require('./lib-cov/mylib')
: require('./lib/mylib');
If you need exports from multiple source files, you can combine them here (see the sketch after this list). If you have something other than index.js as 'main' in your package.json, don't forget to change it.
Your tests should require '../':
# test/test.user.coffee
describe 'User', ->
  User = {}

  before ->
    {User} = require '../'

  describe '#equals()', ->
    describe 'when users have the same username and host', ->
      it 'should return true', ->
        user1 = new User 'user', 'some.host.foo'
        user2 = new User 'user', 'some.host.foo'
        user1.equals(user2).should.be.true

# etc.
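As mentioned above, a sketch of an index.js that combines exports from multiple source files ('user' and 'group' are hypothetical module names):
// index.js (sketch)
var base = process.env.MYLIB_COV ? './lib-cov/' : './lib/';
module.exports = {
    User: require(base + 'user').User,
    Group: require(base + 'group').Group
};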
I'll leave it as an exercise to the reader to find out whether they need Mockery and how to use it if they do. I will point out, though, that the require call in the test snippet above is done inside before for a reason.
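The short version of that reason: Mockery has to be enabled, and the mocks registered, before the module under test is required, which is only possible if the require happens inside before. A minimal, hypothetical sketch (the 'dns' mock is invented for illustration):
mockery = require 'mockery'

describe 'User', ->
  User = {}

  before ->
    # Enable Mockery before requiring the code under test,
    # so registered mocks intercept its require calls.
    mockery.enable useCleanCache: true, warnOnUnregistered: false
    mockery.registerMock 'dns', lookup: (host, cb) -> cb null, '127.0.0.1'
    {User} = require '../'

  after ->
    mockery.deregisterAll()
    mockery.disable()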
Happy coding!