Codecov: error processing coverage reports - pytest

I want to add Codecov to this project. Yet Codecov says that it cannot process the coverage.xml file I created with this command in the Travis CI script: pytest python/tests -v --junitxml=coverage.xml
Everything prior to that, like providing my token, seems to work as suggested in the Travis CI build.
I thought this could be a problem with the paths, but I included a potential fix in codecov.yml and nothing changed.
Therefore, I do not think that codecov.yml, .travis.yml, or utils/travis_runner.py are part of the problem.

The --junitxml option generates test reports in JUnit format, not coverage data. Use the --cov-report option (from pytest-cov) to generate coverage reports. pytest-cov allows passing --cov-report multiple times to generate reports in different formats. Example:
$ pip install pytest pytest-cov
$ pytest --cov=mypkg --cov-report term --cov-report xml:coverage.xml
This prints the coverage table to the terminal and generates a Cobertura-style XML report, which Codecov can process.
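Putting it together in the Travis configuration might look like the sketch below. This is only an illustration: the package name mypkg is a placeholder (reuse your actual package name), and your .travis.yml layout may differ.

```yaml
# Hypothetical .travis.yml fragment -- mypkg and the test path
# are assumptions based on the question's layout.
language: python
install:
  - pip install pytest pytest-cov codecov
script:
  - pytest python/tests -v --cov=mypkg --cov-report xml:coverage.xml
after_success:
  - codecov
```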

Dealing with the No source for code error in Python coverage

I have a new variant of this old "No source for code" issue.
"No source for code" message in Coverage.py
Here is the error I see:
$ coverage report
No source for code: '/home/pauljohn/GIT/projects/ml_grb/grb/packages/grb/tests/C:\Users\G33987\.conda\envs\grb\lib\site-packages\grb\review\review.py'.
Aborting report output, consider using -i.
I'm on Linux, working in a Git repository with a Windows teammate, "G33987". My current working directory is /home/pauljohn/GIT/projects/ml_grb/grb/packages/grb/tests, as you can see, and the virtual environment I'm using is in ~/venv-grb. The super weird thing is that it is looking for a file in my tests folder with my teammate's full installed "grb" package path appended: "C:\Users\G33987\.conda\..."
I can add the -i flag to ignore the problem, but I want to understand it and fix it.
In other posts about this issue with coverage.py, the problem was linked to the presence of an old copy of .coverage in the tests folder, or to stale *.pyc files. I've checked, and our Git repository does not track any .pyc files. It was, by mistake, tracking the original .coverage file, but we don't track that anymore and I've manually deleted it between runs.
So far, I have this workflow:
coverage erase
find . -name "*.pyc" -exec rm {} \;
coverage run --source=grb -m pytest .
coverage report
I can run coverage report -i to ignore the issue, and the output does give line-by-line reports on the files in my own virtual environment. But I'm uneasy about ignoring an error. Where does the reference to my teammate's virtual environment come from? I'm not using a conda virtual environment, but rather a pure Python virtual environment.
Python 3.8.5
coverage 5.5
pytest 6.2.2
py 1.10.0
pluggy 0.13.1
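One way to track down where the stray path was recorded: coverage 5.x stores its data as a SQLite database, so the .coverage file can be inspected directly with the standard library. A minimal sketch, assuming the coverage 5.x schema (a `file` table with a `path` column); the function name is my own, not part of any API:

```python
import sqlite3

def recorded_paths(db_path=".coverage"):
    """List the file paths stored in a coverage 5.x data file.

    Coverage 5.x keeps its data in a SQLite database with a `file`
    table (one row per measured source file). Querying it shows
    exactly which paths -- including a teammate's Windows one --
    ended up in the data file.
    """
    with sqlite3.connect(db_path) as con:
        return sorted(path for (path,) in con.execute("SELECT path FROM file"))
```

If the Windows path shows up here even after `coverage erase`, the data is being reintroduced by something in the run itself (for example combined data files or path remapping in configuration), rather than by a stale .coverage copy.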

Run all files in a directory to measure coverage

I want to run coverage for all files in a directory.
For instance, I have the following directory structure:
root_dir/
    tests/
        test1.py
        test2.py
    code_dir/
There are some Python files in the tests directory. I want to run them together using coverage run and generate a single report.
Individually, I can do it like this:
coverage run tests/test1.py
coverage run tests/test2.py
and generate a report.
How can I do this with a single command?
Thanks.
You should use a test runner to find and run those tests. Either pytest or python -m unittest discover will do that for you.
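The discovery that python -m unittest discover performs can be seen end to end with a small stdlib-only sketch; the directory and file names mirror the question's layout and are otherwise illustrative:

```python
import os
import tempfile
import unittest

# Build a throwaway layout matching the question:
#   root_dir/tests/test1.py, root_dir/tests/test2.py
root_dir = tempfile.mkdtemp()
tests_dir = os.path.join(root_dir, "tests")
os.makedirs(tests_dir)
for name in ("test1.py", "test2.py"):
    with open(os.path.join(tests_dir, name), "w") as f:
        f.write(
            "import unittest\n"
            "class T(unittest.TestCase):\n"
            "    def test_ok(self):\n"
            "        self.assertTrue(True)\n"
        )

# This is what `python -m unittest discover -s tests` does on the CLI:
# walk the directory, collect every module matching test*.py, and run
# the whole suite in one go.
suite = unittest.defaultTestLoader.discover(start_dir=tests_dir, pattern="test*.py")
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

To combine this with coverage measurement in a single command, the runner can be launched under coverage, e.g. coverage run --source=code_dir -m pytest tests/ followed by coverage report.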

How do I get pytest to do discovery based on module name, and not path

I'm looking at moving from unittest to pytest. One thing I like to do is run setup.py install and then run the tests from the installed modules, rather than directly from the source code. This way I pick up any files I've forgotten to include in MANIFEST.in.
With unittest, I can get the test runner to do test discovery by specifying the root test module. e.g. python -m unittest myproj.tests
Is there a way to do this with pytest?
I'm using the following hack, but I wish there was a built in cleaner way.
pytest $(python -c 'import myproj.tests; print(myproj.tests.__path__[0])')
The Tests as part of application section of the pytest good practices says that if your tests are available at myproj.tests, run:
py.test --pyargs myproj.tests
With pytest you can instead specify the path to the root test directory; it will run all the tests that pytest is able to discover there. You can find more detail in the pytest good practices.

Allure report using Newman

I use newman to run API tests on a Jenkins server. It's very easy for me: I write test scripts in Postman and run my collection in newman, but I can't provide good reports for my manager. I found Allure reports and I like them. Is there any way to create an Allure report if I use Newman? Does Allure support Newman?
It looks like it's not possible.
Why are you looking for a reporting tool only now, after the tests are written? That analysis usually happens at the start of an automation effort, when the QA team evaluates which tools to use.
I had a look at https://github.com/postmanlabs/newman; could you try parsing the command-line output into a text file and using that to generate a simple report for your manager?
Yes, you can. We can generate a nice, clean report using the Allure-js framework. Follow the steps below:
1. Installation
$ npm install -g newman-reporter-allure
2. Run the newman CLI command to generate Allure results, specifying allure in Newman's -r or --reporters option.
$ newman run <Collection> -e <Environment> -r allure
3. Allure results will be generated under the allure-results folder in the root location. Use allure-commandline to serve the report locally:
$ allure serve
4. To generate the static report web-application folder using allure-commandline:
$ allure generate --clean
The report will be generated under the allure-report folder in the root location.
Try my repository; it has a solution which covers everything. You need to add a couple more things:
Add 2 files: the collection *.json and the environment *.json
Add +x permissions to the start.sh file with chmod, e.g. chmod +x start.sh
Run the script: ./start.sh your_collection.json your_env.json
Finally, you will get 2 reports:
HTML report
Allure report

Test Coverage in Jenkins for a Perl application

I just implemented an excellent example of test coverage in Perl described at Perl, Code Coverage Example.
But that required Module::Build. Now, what if I have an existing Perl application that does NOT have the Module::Build instrumentation? Is there a way to get test coverage for unit or functional tests?
I looked at:
# Clean up from previous test run (optional)
cover -delete

# Test run with coverage instrumentation
PERL5OPT=-MDevel::Cover prove -r t

# Collect covered and caller information
# Run this _before_ running "cover"
# Don't run with Devel::Cover enabled
covered runs
# - or e.g. -
covered runs --rex_skip_test_file='/your-prove-file.pl$/' \
    --rex_skip_source_file='{app_cpan_deps/}'

# Post-process to generate the covered database
cover -report Html_basic

and also:
% perl -d:Coverage -Iblib/lib test.pl
But these seem to produce code coverage while running the application.
I want to be able to get Clover- or Cobertura-compatible output, so I can integrate it with email-ext in Jenkins.
Task::Jenkins may be of some help. It has instructions on how to publish Devel::Cover HTML reports through Jenkins, as well as information about adapting other Perl tools to Jenkins.
Jira has some instructions about integrating Devel::Cover into Jenkins.
To get code coverage for any Perl process (test, application, server, whatever), you set the PERL5OPT environment variable to -MDevel::Cover, which is like putting use Devel::Cover in the program. If your command to execute tests is perl something_test, then you'd run PERL5OPT=-MDevel::Cover perl something_test.
If you're using prove, use HARNESS_PERL_SWITCHES=-MDevel::Cover prove <normal prove arguments>. This tells prove to load Devel::Cover when running the tests, but avoids gathering coverage for prove itself.
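For the Clover output the asker wants, the CPAN module Devel::Cover::Report::Clover adds a clover report type to the cover command. A hypothetical Jenkinsfile fragment is sketched below; the publisher step name and the report paths are assumptions to be checked against the Jenkins Clover plugin documentation, not a verified configuration:

```groovy
// Hypothetical Jenkinsfile fragment. Assumes Devel::Cover and
// Devel::Cover::Report::Clover are installed on the build node.
stage('Test with coverage') {
    sh 'cover -delete'
    sh 'HARNESS_PERL_SWITCHES=-MDevel::Cover prove -r t'
    sh 'cover -report clover'   // emits a Clover-format XML report
}
// Publisher class and paths are assumptions -- verify against the
// Clover plugin's own documentation before use.
step([$class: 'CloverPublisher',
      cloverReportDir: 'cover_db',
      cloverReportFileName: 'clover.xml'])
```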