pytest coverage never runs function body - pytest

I am using VS Code with pytest and pytest-cov in order to generate a coverage report for my tests. However, no matter what I do, the report always indicates that no code was run, even though I know the test functions call the code in question.
My project structure, simplified for clarity, is
Root
├── src
│   └── my_package
│       ├── __init__.py
│       └── my_module.py
├── tests
│   └── my_package_tests
│       ├── __init__.py
│       └── my_module_test.py
└── pytest.ini
pytest.ini contains
[pytest]
addopts = --cov=my_package --cov-report=html:./reports/coverage_report
The tests import and call functions from my_package
When I run the tests and look at the generated report, I can see that lines in my_package are marked as having run when the module is imported, but no code in the function bodies is marked as executed. The tests are passing, so the code is being executed.
I'm at a loss on this one. I've tried running everything from the command line only, and I've tried uninstalling pytest-cov and running coverage manually, but nothing has worked.

Related

pytest.ini doesn't take effect when calling pytest vs. pytest <test_name>

I am working on creating some testing infrastructure and struggling with taking care of all the dependencies correctly.
The directory structure I have looks like:
conftest.py
kernels/
|-kernel_1/
|---<kernel_src1>
|---__init__.py
|---options.json
|---test/
|-----test_func1.py
|-kernel_2/
|---<kernel_src2>
|---__init__.py
|---pytest.ini
|---options.json
|---scripts/
|-----__init__.py
|-----some_module.py
|---test/
|-----test_func2.py
When I call pytest on any of these tests, the test first compiles and simulates the kernel source code (C++) and compares the output against a golden reference generated in Python. Since all the kernels will be compiled individually, I create an output directory in the kernel's directory (e.g. kernel_1) to store compile/simulation logs along with some generated header files.
For example, pytest kernel_1/test/test_func1.py will create a directory in kernel_1/build_test_func1/<compile/sim logs>.
I use a conftest.py that updates the cwd to the test directory, based on the accepted answer here:
Change pytest working directory to test case directory
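For context, here is a minimal sketch of what such a conftest.py fixture could look like (the fixture name is mine; it follows the monkeypatch.chdir pattern from the linked answer):
# conftest.py -- sketch of a fixture that runs each test from its own directory
import pytest

@pytest.fixture(autouse=True)
def change_test_dir(request, monkeypatch):
    # Change the cwd to the directory of the test file currently being run.
    monkeypatch.chdir(request.fspath.dirname)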
I also added a pytest.ini to add kernel_2 to the pythonpath when running test_func2, so we can find the modules in the scripts folder:
[pytest]
pythonpath=.
Tests run correctly when calling pytest from:
cd kernel_2/; pytest
cd kernel_2/test; pytest
cd kernel_2; pytest test/test_func2.py
cd kernel_2/test; pytest test_func2.py
The test also runs correctly when calling it like this: pytest kernel_2/test/test_func2.py
But I start seeing a ModuleNotFoundError when calling pytest from the top level without specifying the test:
pytest
ImportError while importing test module '<FULL_PATH>/kernel_2/test/test_func2.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
<FULL_PATH>miniconda3/envs/pytest/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
kernel_2/test/test_func2.py:8: in <module>
from scripts.some_module import some_func
E ModuleNotFoundError: No module named 'scripts'
It looks like the pytest.ini inside a specific kernel doesn't take effect during collection when pytest is called from the top level, but I haven't been able to find a way to fix this. Any comments or suggestions are appreciated!
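For illustration, one workaround to consider (a sketch, assuming only the scripts package under kernel_2 needs to be importable) is a conftest.py inside kernel_2 that puts the kernel directory on sys.path; conftest.py files are collected even when pytest is invoked from the top level:
# kernel_2/conftest.py -- hypothetical sketch; adjust to your layout
import os
import sys

# Make kernel_2/ importable so `from scripts.some_module import some_func`
# resolves regardless of where pytest is launched from.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))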

import from parent directory python

I am trying to run a pytest test for filea.py using the following directory structure
test_filea.py
from filea import *

def test_one_p_one():
    r = one_p_one()
    assert r == 2
filea.py
def one_p_one():
    return 1 + 1
When I have the following directory structure, everything works fine:
├── filea.py
├── test_filea.py
But when I move my tests into a subdirectory like this:
├── filea.py
└── tests
└── test_filea.py
I get the error:
test_filea.py:1: in <module>
from filea import *
E ModuleNotFoundError: No module named 'filea'
My editor seems to indicate the import in the file in the subdirectory is OK (no red squiggly lines), but when I run it using pytest I get the error indicated above.
As per the pytest documentation about test discovery, try the following (see the layout sketch below):
add an empty __init__.py file in the tests directory;
make sure that, when you run pytest ., the parent directory of filea.py and tests is the current working directory.
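With the first suggestion applied, the layout would look like this:
├── filea.py
└── tests
    ├── __init__.py
    └── test_filea.py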
It depends on where you run the tests from and how you invoke pytest. Calling pytest tests is different from calling python -m pytest tests; the latter adds the current working directory to sys.path, which makes the filea module importable.
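For illustration, a minimal sketch (the path handling is an assumption about the layout above) that makes the parent directory importable from inside the test itself, regardless of how pytest is invoked:
# tests/test_filea.py -- sketch: put the project root on sys.path explicitly
import os
import sys

# The parent of this tests/ directory is where filea.py lives.
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from filea import one_p_one

def test_one_p_one():
    assert one_p_one() == 2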

Running tests against source code or the package

I've written some Python that I'm distributing as a custom package. I have some tests that I run against the source code while I'm developing, but I also want users who install the package to be able to run the same tests against the distributed package.
My package follows this structure:
my_package
├── MyPackage
│   ├── __init__.py
│   └── my_module.py
├── setup.py
└── tests
    └── test_my_package.py
The my_module.py is
def my_function():
    print("here!")
    return True
And test_my_package.py is:
import unittest
import sys

sys.path.insert(0, "../")
from MyPackage.my_module import my_function

class TestMyModule(unittest.TestCase):
    def test_something(self):
        self.assertTrue(my_function())
As I'm manipulating sys.path, I'm always running the tests against the development code. Is there a way to use setuptools so that I can run the tests against the development code while users run them against the installed package?
Thanks!
There is a misconception when you say "tests that I run against the source code while I'm developing".
You always should run your tests against the packaged code because you want to be sure that the packaged code, which your users will run, works.
You could use tox to run your tests; it automatically builds a package from your source code, installs it into a fresh virtual environment, and can even run the tests against different Python versions, e.g. the currently supported Python 3.6, 3.7 and 3.8.
While it would be a rare thing to do, your users could then run the tests as well.
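A minimal tox.ini sketch for such a setup might look like this (the environment list and the unittest invocation are assumptions about your project; because tox builds and installs the package, the tests import the installed MyPackage rather than the local source):
[tox]
envlist = py36, py37, py38

[testenv]
commands = python -m unittest discover -s tests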

py.test gives Coverage.py warning: Module sample.py was never imported

I ran some sample code from this thread:
How to properly use coverage.py in Python?
However, when I executed the command py.test test.py --cov=sample.py,
it gave me a warning and, as a result, no report was created.
platform linux2 -- Python 2.7.12, pytest-3.2.3, py-1.4.34, pluggy-0.4.0
rootdir: /media/sf_Virtual_Drive/ASU/CSE565_testand
validation/Assignments/temp, inifile:
plugins: cov-2.5.1
collected 3 items
test.py ...Coverage.py warning: Module sample.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
Does anyone have an idea why coverage.py does not work?
Strangely, if I run coverage run -m py.test test.py separately, it does not show any warning.
Short answer: you need to run with the module name, not the file name: pytest --cov sample test.py
Long answer:
One comment in the answer you linked (How to properly use coverage.py in Python?) explains that this doesn't seem to work if the file you are trying to get the coverage of is a module imported by the test. I was able to reproduce that:
./sample.py
def add(*args):
    return sum(args)
./test.py
from sample import add

def test_add():
    assert add(1, 2) == 3
And I get the same error:
$ pytest --cov sample.py test.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /path/to/directory, inifile:
plugins: cov-2.6.1
collected 1 item
test.py . [100%]Coverage.py warning: Module sample.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
/path/to/directory/.venv/lib/python3.7/site-packages/pytest_cov/plugin.py:229: PytestWarning: Failed to generate report: No data to report.
self.cov_controller.finish()
WARNING: Failed to generate report: No data to report.
However, when using the module name instead:
pytest --cov sample test.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /path/to/directory, inifile:
plugins: cov-2.6.1
collected 1 item
test.py . [100%]
---------- coverage: platform darwin, python 3.7.2-final-0 -----------
Name        Stmts   Miss  Cover
-------------------------------
sample.py       2      0   100%
The pytest-cov documentation seems to indicate you can use a PATH, but it might not be working in all cases...
tl;dr
Use coverage to generate the statistics file .coverage and then create a report that scopes to your specific file only.
coverage run -m pytest .\test\test_named_prng.py
coverage html --include=named_prng.py
Situation
Let's suppose you have some python files in your package, and you also have test cases within a single test file (test/test_named_prng.py). You want to measure the code coverage of your test file on one specific file within your package (named_prng.py).
\namedPrng
│   examples.py
│   named_prng.py
│   README.md
│   timeit_meas.py
│   __init__.py
│
└───test
        test_named_prng.py
        __init__.py
Here namedPrng/__init__.py imports examples.py and named_prng.py, while the other __init__.py (in test/) is empty.
An example with files is available on my GitHub.
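For reference, a plausible sketch of that package __init__.py (the exact import lines are an assumption based on the description above):
# namedPrng/__init__.py -- assumed contents
from . import examples
from . import named_prng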
Problem
Your problem is that with pytest or with coverage you cannot scope the report to your specific file (named_prng.py), because every other file imported from your package is also included in the report.
root cause
If there is an __init__.py at the level of the module you want to import, that __init__.py may pull in more files than necessary, because it is executed on import. There are options to tell pytest and coverage which modules you want to investigate, but if those modules involve further modules from your package, they will be analysed too.
symptom with pytest
The --cov option of the pytest-cov package doesn't work if the (sub)module you want to measure coverage for is imported from __init__.py.
If you run pytest (from namedPrng) with
pytest .\test\test_named_prng.py --cov --cov-report=html
you will get a report for every .py file except timeit_meas.py, because that file is never imported: not by the test, not by the test's __init__.py, not by the imported named_prng.py, and not by the package's __init__.py.
If you run pytest with
pytest .\test\test_named_prng.py --cov=./ --cov-report=html
then you explicitly tell coverage (invoked via pytest) to include everything at this level, so every .py file will be included in the report.
You'd like to tell coverage to create the report only on the source code of named_prng.py, but if you specify your module to --cov with
pytest .\test\test_named_prng.py --cov=named_prng --cov-report=html
or with --cov=named_prng.py you will get a warning:
Coverage.py warning: Module named_prng.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
WARNING: Failed to generate report: No data to report.
symptom with coverage
One can run the coverage and reporting steps separately and hope that more detailed options can be passed to coverage.
By issuing
coverage run -m pytest .\test\test_named_prng.py
coverage html
you get the same report on the 5 .py files. If you try to tell coverage to use only named_prng.py by
coverage run --source=named_prng -m pytest .\test\test_named_prng.py
or with --source=named_prng.py, you will get a warning
Coverage.py warning: Module named_prng.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
and no report will be created.
Solution
You need to use coverage's --include switch, which unfortunately cannot be passed through pytest on the command line.
Use coverage CLI
You can restrict the scope of investigation during code coverage calculation time:
coverage run --include=named_prng.py -m pytest .\test\test_named_prng.py
coverage html
or at reporting time.
coverage run -m pytest .\test\test_named_prng.py
coverage html --include=named_prng.py
Use pytest + settings file
One can call pytest with detailed configuration via a config file. In the directory where you issue pytest, set up a .coveragerc file with the content:
[run]
include = named_prng.py
Check coverage's description on the possible options and patterns.
This can be solved by running coverage on your test file first and then generating the report, as follows:
coverage run test.py
coverage report -m

CoffeeScript build setup that supports unit testing?

I want to use CoffeeScript for building what will essentially be a JavaScript library.
I would just like to be able to
define some classes, with inheritance
keep my code in several files
write some unit tests (QUnit or whatever works, preferably writing tests in CoffeeScript)
(ideally) have the project watched and built automatically while I work
This seems reasonable, no? My plan is just having the unit tests run against the compiled JavaScript, in a browser, although if I can run them straight in node.js that's even better.
Currently I'm trying to do this with CoffeeToaster and QUnit, using two different CoffeeToaster configurations, one with tests and one without. It is working, but perhaps somebody has a better suggestion? Should I ditch CoffeeToaster and do it with Cake? Or get another unit testing framework? Can anybody point me to a tutorial for this? I'm making a clientside JS lib, so I don't want to involve Rails etc.
I'm currently using:
Mocha as the test runner and should.js for assertions;
Mockery to intercept certain require calls for isolated testing with mocks/stubs of required libraries;
JSCoverage for instrumenting the code for code coverage reports.
My code lives in src/ and I write my tests in CoffeeScript. I use make to build and test the code.
make build compiles the CoffeeScript in src/ to JavaScript in lib/.
make test builds the code and then runs the tests in test/.
make monitor watches and runs the tests as soon as they change. Unfortunately it doesn't recompile the code. I use a Vim keybinding to call make, which also triggers Mocha to re-run the tests.
Edit: If this bothers you, you could run coffee --watch -o lib/ -c src/.
make coverage generates a code coverage report and puts it in lib-cov/report.html.
My Makefile looks somewhat like this:
COFFEE = ./node_modules/.bin/coffee --compile
MOCHA = NODE_ENV=test ./node_modules/.bin/mocha
MOCHA_OPTS = \
	--compilers coffee:coffee-script \
	--require should \
	--colors
REPORTER = spec

build:
	@$(COFFEE) --output lib/ src/

test: build
	@$(MOCHA) --reporter $(REPORTER) $(MOCHA_OPTS)

monitor:
	@$(MOCHA) --reporter min $(MOCHA_OPTS) \
		--watch --growl

coverage: instrument
	@MYLIB_COV=1 $(MOCHA) $(MOCHA_OPTS) \
		--reporter html-cov > lib-cov/report.html

instrument: build
	@rm -rf ./lib-cov
	@jscoverage ./lib ./lib-cov

.PHONY: build test monitor coverage instrument
You could probably use the above with very little modification.
To generate the coverage report with make coverage, the tests must be run against the instrumented code in lib-cov/ instead of the code in lib/. To make this possible, three things are needed:
The Makefile should set an environment variable, like MYLIB_COV (change the name as you like).
Your index.js should look at this environment variable and require either lib/ or lib-cov/ accordingly:
// index.js
module.exports = process.env.MYLIB_COV
  ? require('./lib-cov/mylib')
  : require('./lib/mylib');
If you need exports from multiple source files, you can combine them here. If you have something other than index.js as 'main' in your package.json, don't forget to change it.
Your tests should require '../':
# test/test.user.coffee
describe 'User', ->
  User = {}

  before ->
    {User} = require '../'

  describe '#equals()', ->
    describe 'when users have the same username and host', ->
      it 'should return true', ->
        user1 = new User 'user', 'some.host.foo'
        user2 = new User 'user', 'some.host.foo'
        user1.equals(user2).should.be.true
  # etc.
I'll leave it as an exercise to the reader to find out whether they need Mockery and how to use it if they do. I will point out, though, that the require call in the test snippet above is done inside before for a reason.
Happy coding!