"make test" more verbose in Perl - perl

When I run make test using the normal test harness that CPAN modules have, it just outputs a brief summary (if all went well):
t/000_basic.t .......................... ok
t/001_db_handle.t ...................... ok
t/002_dr_handle.t ...................... ok
t/003_db_can_connect.t ................. ok
... snip ...
All tests successful.
Files=30, Tests=606, 2 wallclock secs
Result: PASS
If I run the tests individually, they output much more detailed information:
1..7
ok 1 - use DBIx::ProcedureCall::PostgreSQL;
ok 2 - simple call to current_time
ok 3 - call to power() with positional parameters
ok 4 - call to power() using the run() interface
ok 5 - call to setseed with a named parameter
ok 6 - call a table function
ok 7 - call a table function and fetch
How can I run all the tests in this verbose mode? Is there something that I can pass to make test?

The ExtUtils::MakeMaker docs explain this in the make test section:
make test TEST_VERBOSE=1
If the distribution uses Module::Build, it's a bit different:
./Build test verbose=1
You can also use the prove command that comes with Test-Harness:
prove -bv
(or prove --blib --verbose if you prefer long options). This command is a bit different because it does not build the module first. The --blib option makes it look for the built-but-uninstalled module created by make or ./Build, so if you forgot to rebuild the module after changing something, the tests will run against the previously built copy. If you haven't built the module at all, it will test the installed version of the module instead.
prove also lets you run only a specific test or tests:
prove -bv t/failing.t
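Putting it together, a typical verbose run against a freshly built tree looks something like this (a sketch for a MakeMaker-based distribution; substitute ./Build for make if the distribution uses Module::Build):
perl Makefile.PL
make
prove -bv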

You can also use the prove command:
prove --blib --verbose
from the unpacked module's top directory. The --blib option adds the directories of a built-but-not-yet-installed module distribution to Perl's module search path.
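Another knob worth knowing about, assuming the distribution's test target goes through Test::Harness/TAP::Harness as usual, is the HARNESS_VERBOSE environment variable:
HARNESS_VERBOSE=1 make test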

Related

pytest coverage output is cut

When running pytest with coverage in a larger project, the output is strangely cut off (note the truncated datam at the end; many files are still missing here).
I'm not aware of any further configuration: no pytest.ini, no pyproject.toml, no related environment variables.
How can I overcome this, given that I want the simple terminal output, not an extra report?
Only if needed: how could I print the results written to the .coverage SQLite database to the terminal?
> pytest tests/ --cov
...
---------- coverage: platform win32, python 3.10.4-final-0 -----------
Name                                     Stmts   Miss  Cover
--------------------------------------------------------------------------------
...
datamodel\model\gis\topology\edge.py        26      3    88%
datamodel\model\gis\version.py               0      0   100%
datam
============================== 45 passed in 6.44s ==============================
This looks like a bug in the coverage package, maybe triggered by too many files (>400 in my case), maybe by interference with pytest-xdist.
In such a case, ignore the pytest output and print the results written to the .coverage SQLite database to the terminal, like this:
> pytest tests --cov -n 12
...
> coverage report
...
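coverage report has a few useful flags here; for example, -m (--show-missing) also lists the line numbers that were not hit, reading from the same .coverage database:
> coverage report -m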

How can I debug my python unit tests within Tox with PUDB?

I'm trying to debug a Python codebase that uses tox for unit tests. One of the failing tests is proving difficult to figure out, and I'd like to use pudb to step through the code.
At first thought, one would think to just pip install pudb and then add import pudb and pudb.set_trace() to the unit test code. But that results in a ModuleNotFoundError:
>       import pudb
E       ModuleNotFoundError: No module named 'pudb'
tests/mytest.py:130: ModuleNotFoundError
ERROR: InvocationError for command '/Users/me/myproject/.tox/py3/bin/pytest tests' (exited with code 1)
Noticing the .tox project folder leads one to realize there's a site-packages folder within tox, which makes sense since the point of tox is to manage testing under different virtualenv scenarios. This also means there's a tox.ini configuration file, with a deps section that may look like this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
commands = pytest tests
Adding pudb to the deps list should solve the ModuleNotFoundError, but it leads to another error:
self = <_pytest.capture.DontReadFromInput object at 0x103bd2b00>

    def fileno(self):
>       raise UnsupportedOperation("redirected stdin is pseudofile, "
                                   "has no fileno()")
E       io.UnsupportedOperation: redirected stdin is pseudofile, has no fileno()

.tox/py3/lib/python3.6/site-packages/_pytest/capture.py:583: UnsupportedOperation
So, I'm stuck at this point. Is it not possible to use pudb instead of pdb within Tox?
There's a package called pytest-pudb which overrides the pudb entry points within an automated test environment like tox to successfully jump into the debugger.
To use it, just make your tox.ini file have both the pudb and pytest-pudb entries in its testenv dependencies, similar to this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
    pudb
    pytest-pudb
commands = pytest tests
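A sketch of how you could then drop into the debugger, assuming pytest-pudb exposes a --pudb option (analogous to pytest's built-in --pdb) and that you add {posargs} to the commands line so extra options can be forwarded through tox:
# tox.ini
commands = pytest {posargs} tests
# command line: everything after -- is passed through to pytest
tox -e py3 -- --pudb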
Using the original PDB (not PUDB) could work too; at least it works with the Django and Nose test runners. Without changing tox.ini, simply add a pdb breakpoint wherever you need it:
import pdb; pdb.set_trace()
Then, when it gets to that breakpoint, you can use the regular PDB commands:
w - print the stack trace
s - step into
n - step over
c - continue
p - print the value of an expression
a - print the arguments of the current function

Testing for LibreSSL in a Perl build script

I released Net::NSCAng::Client a while ago and am getting a lot of test failures on OpenBSD. The reason is that the NSCAng protocol uses OpenSSL in pre-shared-key mode (RFC 4279), something the folks at LibreSSL (now the default on OpenBSD) have ripped out. However, they seem to have been hell-bent on doing this in the most opaque way possible: the include files still declare all the functions; only the shared library is missing the corresponding symbols, so compilation works fine but the tests fail.
There is a compatibility package on OpenBSD called eopenssl, and by testing for it first in Makefile.PL (using ExtUtils::PkgConfig) I can make things work if the compatibility library is installed. If it isn't, things still fail.
I could check for the CPP symbol OPENSSL_NO_PSK, but as the includes always come from LibreSSL, this fails even if linking against eopenssl would work fine. The only idea I have left is to run a test program as part of the compilation phase, the way autoconf does. Is that even possible with ExtUtils::MakeMaker (or something else; I wouldn't mind switching the build system if necessary)?
It's easy to write feature tests with Devel::CheckLib. Something like the following can be used to check for the presence of function your_func (in Makefile.PL):
my $your_func_exists = check_lib(
    header   => 'your_header.h',
    function => 'return your_func ? 1 : 0;',
);
If you simply want to abort compilation if the function is missing:
check_lib(
    ...
) or warn('your_func is missing'), exit;
Exiting with status 0 should avoid a 'FAIL' report from CPAN Testers.
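For this particular case, a minimal sketch of what the check might look like in Makefile.PL, assuming the XS code needs SSL_CTX_set_psk_client_callback from libssl (adjust the library, header, and symbol names to whatever your code actually uses):
use Devel::CheckLib;

# Does the libssl we link against actually export the PSK callback setter,
# or does LibreSSL merely declare it in the headers?
check_lib(
    lib      => 'ssl',
    header   => 'openssl/ssl.h',
    function => 'return SSL_CTX_set_psk_client_callback ? 1 : 0;',
) or do {
    warn "No usable PSK support in libssl (LibreSSL?), cannot build.\n";
    exit 0;    # exit 0 without writing a Makefile so CPAN Testers don't report FAIL
};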

How to pass command-line arguments in CTest at runtime

I'm using CTest and want to pass command-line arguments to the underlying tests at runtime. I know there are ways to hard code command-line arguments into the CMake/CTest script, but I want to specify the command-line arguments at runtime and have those arguments passed through CTest to the underlying test.
Is this even possible?
I've figured out a way to do it (using the Fundamental theorem of software engineering). It's not as simple as I'd like, but here it is.
First, create a file ${CMAKE_SOURCE_DIR}/cmake/RunTests.cmake with the following content:
# Fall back to a default if the caller did not set TESTS_ARGUMENTS
if(NOT DEFINED ENV{TESTS_ARGUMENTS})
    set(ENV{TESTS_ARGUMENTS} "--default-arguments")
endif()
# Run the real test executable with whatever arguments were supplied
execute_process(COMMAND ${TEST_EXECUTABLE} $ENV{TESTS_ARGUMENTS} RESULT_VARIABLE result)
if(NOT "${result}" STREQUAL "0")
    message(FATAL_ERROR "Test failed with return value '${result}'")
endif()
Then, when you add the test, use
add_test(
    NAME MyTest
    COMMAND ${CMAKE_COMMAND}
        -DTEST_EXECUTABLE=$<TARGET_FILE:MyTest>
        -P ${CMAKE_SOURCE_DIR}/cmake/RunTests.cmake
)
Finally, you can run the test with custom arguments using
cmake -E env TESTS_ARGUMENTS="--custom-arguments" ctest
Note that if you use bash, you can simplify this to
TESTS_ARGUMENTS="--custom-arguments" ctest
There are some problems with this approach, e.g. it ignores the WILL_FAIL property of the tests. Of course I wish it could be as simple as calling ctest -- --custom-arguments, but, as the Stones said, You can't always get what you want.
I'm not sure I fully understand what you want, but I still can give you a way to pass arguments to tests in CTest, at runtime.
I'll give you an example, with CTK (the Common Toolkit, https://github.com/commontk/CTK):
In the build dir (e.g. CTK-build/CTK-build; it's a superbuild), if I run ('-V' for verbose and '-N' for view mode only):
ctest -R ctkVTKDataSetArrayComboBoxTest1 -V -N
I get:
UpdateCTestConfiguration from : /CTK-build/CTK-build/DartConfiguration.tcl
Parse Config file:/CTK-build/CTK-build/DartConfiguration.tcl
Add coverage exclude regular expressions.
Add coverage exclude: /CMakeFiles/CMakeTmp/
Add coverage exclude: .*/moc_.*
Add coverage exclude: .*/ui_.*
Add coverage exclude: .*/Testing/.*
Add coverage exclude: .*/CMakeExternals/.*
Add coverage exclude: ./ctkPixmapIconEngine.*
Add coverage exclude: ./ctkIconEngine.*
UpdateCTestConfiguration from :/CTK-build/CTK-build/DartConfiguration.tcl
Parse Config file:/CTK-build/CTK-build/DartConfiguration.tcl
Test project /CTK-build/CTK-build
Constructing a list of tests
Done constructing a list of tests
178: Test command: /CTK-build/CTK-build/bin/CTKVisualizationVTKWidgetsCppTests "ctkVTKDataSetArrayComboBoxTest1"
Labels: CTKVisualizationVTKWidgets
Test #178: ctkVTKDataSetArrayComboBoxTest1
Total Tests: 1
You can copy-paste the "Test command" into your terminal:
/CTK-build/CTK-build/bin/CTKVisualizationVTKWidgetsCppTests "ctkVTKDataSetArrayComboBoxTest1"
And add the arguments, for example "-I" for interactive testing:
/CTK-build/CTK-build/bin/CTKVisualizationVTKWidgetsCppTests "ctkVTKDataSetArrayComboBoxTest1" "-I"
Tell me if it helps.
matthieu's answer gave me the clue to get it to work for me.
For my code I did the following:
Type the command ctest -V -R TestMembraneCellCrypt -N to get the output:
...
488: Test command: path/to/ctest/executable/TestMembraneCellCrypt
Labels: Continuous_project_ChasteMembrane
Test #488: TestMembraneCellCrypt
...
Then I copied the Test command and provided the arguments there:
path/to/ctest/executable/TestMembraneCellCrypt -e 2 -em 5 -ct 10
I'll note that the package I'm using (Chaste) is pretty large, so there might be things going on that I don't know about.

Test with imperative xfail in py.test always reports as xfail even if it passes

I always thought that imperative and declarative usage of xfail/skip in py.test should work in the same way. In the meantime I've noticed that if I write a test that contains an imperative xfail, the result of the test will always be "xfail" even if the test passes.
Here's some code:
import pytest

def test_should_fail():
    pytest.xfail("reason")

@pytest.mark.xfail(reason="reason")
def test_should_fail_2():
    assert 1
Running these tests will always result in:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5 -- C:\Python27\python.exe
collecting ... collected 2 items
test_xfail.py:3: test_should_fail xfail
test_xfail.py:6: test_should_fail_2 XPASS
===================== 1 xfailed, 1 xpassed in 0.02 seconds =====================
If I understand correctly what is written in the user manual, both tests should be "XPASS"ed.
Is this a bug in py.test or am I getting something wrong?
When you use the pytest.xfail() helper function you are effectively raising an exception in the test function, so the rest of the test never runs. Only when you use the marker is it possible for py.test to execute the test fully and give you an XPASS.
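To see the difference in behavior, here is a minimal sketch (hypothetical test names): the imperative call raises internally and aborts the test on the spot, so it can never turn into an XPASS, while the marker lets the whole test body run and only then decides how to report it:
import pytest

def test_imperative():
    pytest.xfail("reason")   # raises immediately; the rest of the test never runs
    assert False             # never reached, so the result is always xfail

@pytest.mark.xfail(reason="reason")
def test_declarative():
    assert True              # runs to completion and is reported as XPASS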