Running pytest with coverage in a larger project, the output is strangely cut off (note the truncated datam at the end; many files are still missing here).
I'm not aware of further configuration—no pytest.ini, no pyproject.toml, no related environment variable.
How can I overcome this, given I want the simple terminal output, not an extra report?
Only if needed: how could I print the results written to the .coverage SQLite database to the terminal?
> pytest tests/ --cov
...
---------- coverage: platform win32, python 3.10.4-final-0 -----------
Name Stmts Miss Cover
--------------------------------------------------------------------------------
...
datamodel\model\gis\topology\edge.py 26 3 88%
datamodel\model\gis\version.py 0 0 100%
datam
============================== 45 passed in 6.44s ==============================
This looks like a bug in the coverage package, maybe from too many files (>400 in my case), maybe from interference with pytest-xdist.
In that case, ignore the truncated pytest output and print the results written to the .coverage SQLite database to the terminal, like so:
> pytest tests --cov -n 12
...
> coverage report
...
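If you prefer to stay in Python, the same data can also be printed with coverage.py's own API; a minimal sketch, assuming the default .coverage data file in the working directory:
import coverage
cov = coverage.Coverage()       # picks up the .coverage SQLite file by default
cov.load()                      # read the collected coverage data
cov.report(show_missing=True)   # print the Name/Stmts/Miss/Cover table to the terminal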
TLDR: How can I get better output from pytest?
I'm using Django with regular python3 unittests.
I've just switched to pytest-django for running tests.
pytest throws an error for almost all my tests (149 in total).
Pages and pages with this error.
self = <RegexURLResolver 'project.urls' (None:None) ^/>
@property
def reverse_dict(self):
language_code = get_language()
if language_code not in self._reverse_dict:
self._populate()
> return self._reverse_dict[language_code]
E KeyError: 'en-us'
Which wasn't the actual problem; it led me down the wrong path.
I had a syntax error in one of my views.py files.
./manage.py test resulted in:
snip
File "/home/roland/project/views.py", line 20
code = zip(list1, list2])
SyntaxError: invalid syntax
Notice the trailing ] which was the actual problem.
So: How can I get more useful output on problems when using pytest?
Btw:
After finding this and scrolling back through the pytest output, there was a mention of the syntax error; it was just buried in the output.
You can use the --maxfail=1 option so pytest stops immediately on the first failure.
Also, make sure your pytest.ini is set up properly so that pytest knows it should be using pytest-django.
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
For my workflow, I usually do the following:
run pytest --maxfail=1 myfile.py &> pytest-output.txt
tail, grep, or search the text file for errors (see the example after this list).
Fix and iterate
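For example (a sketch, using the pytest-output.txt file from the command above):
grep -nE "Error|FAILED" pytest-output.txt
tail -n 50 pytest-output.txt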
There are a lot of other configuration options that will help you get more meaningful output from pytest.
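For example, a pytest.ini along these lines (a sketch; pick the options that suit you) makes failures easier to spot:
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
addopts = -ra --maxfail=1 --tb=short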
So I am trying to run a PowerShell script, triggered by TeamCity, that runs specific unit tests based on the names of the files that were changed in each GitHub commit.
Here is how I am running it from the command line:
C:\MyFolder\bin\NUnit.ConsoleRunner.3.4.1\tools\nunit3-console.exe "C:\MyFolder\Bin\UnitTesting.dll" --test="MyFolder.QuickTests.DaoTests.ProductDaoTests.ProductBasicTest"
But I keep getting this; it runs, it just never runs any tests:
NUnit Console Runner 3.4.1
Copyright (C) 2016 Charlie Poole
Runtime Environment
OS Version: Microsoft Windows NT 10.0.14393.0
CLR Version: 4.0.30319.42000
Test Files
MyFolder\Bin\UnitTesting.dll
Test Filters
Test: MyFolder.QuickTests.DaoTests.ProductDaoTests.ProductBasicTest
Run Settings
WorkDirectory: C:\Users\Me
ImageRuntimeVersion: 4.0.30319
ImageTargetFrameworkName: .NETFramework,Version=v4.0
ImageRequiresX86: False
ImageRequiresDefaultAppDomainAssemblyResolver: False
NumberOfTestWorkers: 2
Test Run Summary
Overall result: Passed
Test Count: 0, Passed: 0, Failed: 0, Inconclusive: 0, Skipped: 0
Start time: 2016-10-17 20:28:43Z
End time: 2016-10-17 20:28:43Z
Duration: 0.303 seconds
Results (nunit3) saved as TestResult.xml
Now when I run it without the --test command like this:
C:\MyFolder\bin\NUnit.ConsoleRunner.3.4.1\tools\nunit3-console.exe "C:\MyFolder\Bin\UnitTesting.dll"
It runs every unit test that we have, but I don't want to run them all. I want to run the specific quick ones, and only run the large ones when we go to the staging/production servers, so our developers don't have to wait 15 to 20 minutes every time they commit something.
Some additional info:
- My namespace that I am using for this is
MyFolder.QuickTests.DaoTests.ProductDaoTests
The Class I am calling is:
ProductBasicTest
Some of the names like the folder directories were changed because they are %teamcity% placeholders for file directories.
What am I doing wrong to not be able to run specific tests?
For some reason my nunit-console is not recognizing the /run command or /fixture or --test=.
EDIT:
I upgraded to 3.5.0 and am still getting the same issues, I am not able to use --test.
C:\MyFolder\bin\NUnit.ConsoleRunner.3.5.0\tools\nunit3-console.exe "C:\MyFolder\Bin\UnitTesting.dll" --test="MyFolder.QuickTests.DaoTests.ProductDaoTests.ProductBasicTest"
That is the new location, and getting the same issue.
When I do --where for MyFolder it crashes PowerShell but doesn't actually run anything.
When I do --explore it does the same as --where for MyFolder, and does nothing for MyFolder.QuickTests.
EDIT EDIT:
Thanks to Rob I found the docs here and looked at the --where option with --where "name=ProductBasicTest", which will run all the tests in that test suite!
So, thanks to Rob, one of the issues I was running into was that it was not recognizing my namespace (QuickTests) correctly, so whenever I ran the command it was not running anything.
To fix this you can look at the TestResult.xml output file and see which names it was running the tests under.
To run these individually you can run it by the name with the command:
"nunit3-console.exe C:\PathToDll.dll --where "name = NameOfTest"
I'm initializing spot instances running a derivative of the standard Ubuntu 13.04 AMI by pasting a shell script into the user-data field.
This works. The script runs. But it's difficult to debug because I can't figure out where the output of the script is being logged, if anywhere.
I've looked in /var/log/cloud-init.log, which seems to contain a bunch of stuff that would be relevant to debugging cloud-init, itself, but nothing about my script. I grepped in /var/log and found nothing.
Is there something special I have to do to turn logging on?
The default location for cloud-init user-data output is already /var/log/cloud-init-output.log in AWS, DigitalOcean, and most other cloud providers. You don't need to set up any additional logging to see the output.
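So the quickest check is usually to read that file directly on the instance, for example:
sudo tail -n 100 /var/log/cloud-init-output.log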
You could create a cloud-config file (with "#cloud-config" at the top) for your userdata, use runcmd to call the script, and then enable output logging like this:
output: {all: '| tee -a /var/log/cloud-init-output.log'}
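A minimal sketch of such a user-data file (the script path /root/myscript.sh is just a placeholder):
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
runcmd:
 - [ sh, -c, '/root/myscript.sh' ]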
So I tried to replicate your problem. I usually work with cloud-config, so I just created a simple test user-data script like this:
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
echo "I am out of the output file...somewhere?"
yum search git # just for fun
ls
exit 0
Notice that, with CloudInit shell scripts, the user-data "will be executed at rc.local-like level during first boot. rc.local-like means 'very late in the boot sequence'"
After logging in into my instance (a Scientific Linux machine) I first went to /var/log/boot.log and there I found:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
I am out of the file. Log file somewhere?
Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
126 packages excluded due to repository priority protections
9 packages excluded due to repository protections
epel/pkgtags                                             | 581 kB 00:00
=============================== N/S Matched: git ===============================
GitPython.noarch : Python Git Library
cgit.x86_64 : A fast web interface for git
...
... (more yum search output)
...
bin etc lib lost+found mnt proc sbin srv tmp var
boot dev home lib64 media opt root selinux sys usr
(other unrelated stuff)
So, as you can see, my script ran and was rightly logged.
Also, as expected, I had my forced log 'output.txt' in /root/output.txt with the content:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
So... I am not really sure what is happening in your script.
Make sure you're exiting the script with
exit 0 #or some other code
If it still doesn't work, you should provide more info, like your script, your boot.log, your /etc/rc.local, and your cloudinit.log.
btw: what is your cloudinit version?
I always thought that imperative and declarative usage of xfail/skip in py.test should work in the same way. In the meantime I've noticed that if I write a test that contains an imperative xfail, the result of the test will always be "xfail" even if the test passes.
Here's some code:
import pytest
def test_should_fail():
pytest.xfail("reason")
@pytest.mark.xfail(reason="reason")
def test_should_fail_2():
assert 1
Running these tests will always result in:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5 -- C:\Python27\python.exe
collecting ... collected 2 items
test_xfail.py:3: test_should_fail xfail
test_xfail.py:6: test_should_fail_2 XPASS
===================== 1 xfailed, 1 xpassed in 0.02 seconds =====================
If I understand correctly what is written in the user manual, both tests should be "XPASS'ed".
Is this a bug in py.test or am I getting something wrong?
When using the pytest.xfail() helper function you are effectively raising an exception in the test function, so the rest of the test never runs and the result is always reported as xfail. Only when you use the marker is it possible for py.test to execute the test fully and give you an XPASS.
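A minimal sketch of the difference (nothing project-specific assumed):
import pytest

@pytest.mark.xfail(reason="reason")
def test_marker():
    # the body still runs; if it passes, pytest reports XPASS
    assert 1

def test_imperative():
    pytest.xfail("reason")  # raises immediately, so the result is always xfail
    assert 1                # never executed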
When I run make test using the normal test harness that CPAN modules have, it will just output a brief summary (if all went well).
t/000_basic.t .......................... ok
t/001_db_handle.t ...................... ok
t/002_dr_handle.t ...................... ok
t/003_db_can_connect.t ................. ok
... snip ...
All tests successful.
Files=30, Tests=606, 2 wallclock secs
Result: PASS
If I run the tests individually, they output much more detailed information.
1..7
ok 1 - use DBIx::ProcedureCall::PostgreSQL;
ok 2 - simple call to current_time
ok 3 - call to power() with positional parameters
ok 4 - call to power() using the run() interface
ok 5 - call to setseed with a named parameter
ok 6 - call a table function
ok 7 - call a table function and fetch
How can I run all the tests in this verbose mode? Is there something that I can pass to make test?
The ExtUtils::MakeMaker docs explain this in the make test section:
make test TEST_VERBOSE=1
If the distribution uses Module::Build, it's a bit different:
./Build test verbose=1
You can also use the prove command that comes with Test-Harness:
prove -bv
(or prove --blib --verbose if you prefer long options.) This command is a bit different, because it does not build the module first. The --blib option causes it to look for the built-but-uninstalled module created by make or ./Build, but if you forgot to rebuild the module after changing something, it will run the tests against the previously-built copy. If you haven't built the module at all, it will test the installed version of the module instead.
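For example, a typical ExtUtils::MakeMaker workflow might look like this (a sketch; use ./Build instead of make if the distribution uses Module::Build):
perl Makefile.PL
make
prove -bv t/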
prove also lets you run only a specific test or tests:
prove -bv t/failing.t
You can also use the prove command:
prove --blib --verbose
from the unpacked module's top directory. --blib includes the needed directories for a built but not installed module distribution.