How do I show the exact command being tested when a pytest script runs? - pytest

How is it possible that anyone uses pytest without having it output the exact command that it runs?
I have a set of 5 test scripts with a total of 41 different test combinations. The script functions all basically follow the same template, where at some stage the function does:
subprocess.Popen(cmdline_builder(opt, name, options))
When there's a failure, the output is nearly useless: it doesn't show the exact command that was run.
How does anybody use this? How would you expect to debug a failed test without knowing what to run?
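A common workaround, sketched below on the assumption that cmdline_builder returns the argument list (run_checked is a made-up helper name, not anything from pytest, and subprocess.run is used here instead of a bare Popen so the exit code and output are easy to check): render the command with shlex.join, print it before launching it, and repeat it in the assertion message. pytest captures stdout per test and replays it for failing tests, so the exact command ends up in the failure report.

import shlex
import subprocess
import sys

def run_checked(cmd):
    """Run a command and echo it so a pytest failure shows exactly what was run."""
    printable = shlex.join(cmd)  # shlex.join needs Python 3.8+; gives a copy-pasteable form
    print(f"Running: {printable}")  # pytest captures this and replays it if the test fails
    result = subprocess.run(cmd, capture_output=True, text=True)
    # Repeat the command in the assertion message so it also shows up with -q.
    assert result.returncode == 0, (
        f"Command failed ({result.returncode}): {printable}\n"
        f"stdout:\n{result.stdout}\nstderr:\n{result.stderr}"
    )
    return result

def test_example():
    # Stand-in for cmdline_builder(opt, name, options) from the question.
    run_checked([sys.executable, "-c", "print('hello')"])

Running pytest with -s turns output capture off entirely, which also shows the command for tests that pass.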

Related

How to test a single failing test when building perl

This issue usually comes up when running make test and seeing one test fail. The README says each test can be run individually, but doesn't clearly explain how to do so.
make test uses the script called TEST in the test directory (t). To replicate make test for a single file, use this script as follows:
[.../perl/t]$ ./perl -I../lib TEST op/array.t
t/op/array ... ok
All tests successful.
Elapsed: 0 sec
u=0.01 s=0.00 cu=0.03 cs=0.02 scripts=1 tests=194
If you want to see the raw output of the test script, you can run perl as follows:
[.../perl/t]$ ./perl -I../lib op/array.t
1..194
ok 1
ok 2
ok 3
...
ok 192 - holes passed to sub do not lose their position (multideref, mg)
ok 193 - holes passed to sub do not lose their position (aelem)
ok 194 - holes passed to sub do not lose their position (aelem, mg)
The above information, and more, can be found in perlhack.
This document explains how Perl development works. It includes details about the Perl 5 Porters email list, the Perl repository, the Perlbug bug tracker, patch guidelines, and commentary on Perl development philosophy.
Note that you need to run make test_prep before the above commands work. (If you've run make test, you've effectively run make test_prep already.)
Run ./perl harness ../foo/boo.t in the t directory, where foo/boo is the name of the failing test.
To run a single test script, use perl, or better, prove. Assuming you are in the module's base directory:
prove -lv t/some-test-script.t
This will run the test script against the libraries in ./lib, with fallback to the libraries available to your install of Perl.
If you want to use the build libraries built by make, then this:
prove -bv t/some-test-script.t
Now the test script will be run against the libraries in ./blib, falling back to libraries installed for your Perl.
The test scripts are typically just Perl scripts that live in a t/ or xt/ or some similar path within the distribution's directory structure. So you can also run them just with Perl:
perl -Iblib t/some-test-script.t
But prove produces nicer test summary information and color coding.
That is about as granular as you can get unless tests are written to allow for targeting specific segments within a test script. If you need to target a specific test within a test script, you'll usually have to dig into the test code itself.

If one job failed in bamboo it does not fail the build

I tried to execute two PowerShell scripts. The first one is incorrect and the second one is correct, but Bamboo shows a successful build.
It really depends on why the first script is "incorrect". If it is throwing an error, by default the build will still report success, because the script itself ran successfully even if its results were an error. You might want to look into using $LastExitCode after you call the PowerShell script to get the status of the script itself.

NUnit console - when attempting to redirect output /err seems to have no effect

I've got an NUnit project with some tests and they all run as expected. Now I want to run these tests automatically.
I'm trying to use some of the redirect options so I can separate the test output, but whatever combination I use, all I seem to get is the standard TestResult.xml. I can use /out:AnnotherOut.txt OK, but I'm really interested in capturing the error output using /err:TestErrors.txt.
Command line is:
(NunitConsole App) /nologo /framework:net-4.0 MyTestProject.nunit /include=Integration /err=TestErrors.txt

mimicking make dependency checking in perl

Not sure if I am explaining this well, but here goes...
I have a perl script/flow that runs various steps. Each step is basically dependent on the output of its previous step in order to run.
For example:
myflow -step1...input is file0, produces file1
myflow -step2...input is file1, produces file2
myflow -stepN...input is fileN-1, produces fileN
Right now users can run myflow -step1 -step2...-stepN to go from start to finish. I would like to somehow have the ability for the user to run myflow -stepN, have myflow check to see which steps need to be run prior to it, and then run stepN. Maybe no steps were run, so myflow -stepN would start from step1 and continue until it finishes stepN or an error occurs. Maybe step1 through step3 ran fine previously, so running -stepN would start from step4. Maybe all steps ran fine, but the user modified/deleted/touched an intermediate file, so running -stepN would detect this and rerun from that previous step.
Is there a CPAN module that essentially mimics this make behavior, i.e. given steps, the inputs they require, and the outputs they produce, builds a dependency graph and determines which steps need to be run?
I'm thinking you could use make itself instead of trying to simulate it.
The makefile rules for "building" each fileX "target" from its fileX-1 "source file" would invoke your script for the respective step.
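If you do end up simulating the check rather than delegating to make, the core of what make does is compare modification times along the chain and re-run every step whose output is missing or older than its input. A rough illustration of that logic follows; it is written in Python purely as a compact sketch, and the step names, file names, and the myflow invocation are placeholders taken from the question, not working code for any particular flow.

import os
import subprocess

# Hypothetical chain: each step reads the previous file and produces the next one.
STEPS = [
    ("step1", "file0", "file1"),
    ("step2", "file1", "file2"),
    ("step3", "file2", "file3"),
]

def is_stale(target, source):
    # A target must be rebuilt if it is missing or older than its source.
    if not os.path.exists(target):
        return True
    return os.path.getmtime(target) < os.path.getmtime(source)

def run_through(last_step):
    rerun = False
    for step, source, target in STEPS:
        if rerun or is_stale(target, source):
            rerun = True  # once one step reruns, everything downstream must rerun too
            subprocess.run(["myflow", f"-{step}"], check=True)
        if step == last_step:
            break

In practice, a makefile with one rule per fileX, each invoking myflow for the corresponding step, gives you the same behaviour without having to maintain this logic yourself.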

Output selenium test result as html after running perl script

I am currently looking for a way to output the test result nicely after running selenium perl script.
The htmlSuite command from running the Selenium server outputs a nice HTML-format result page, but I don't know how to do that from a Perl script.
The problem is, I have it set up so that Selenium runs 24/7 on a virtual machine workstation (Windows 7) that anyone can run tests on. Therefore I can't use htmlSuite to run the test, because the server will close after the test is finished.
Is there a command argument or Perl script method to make the Selenium server output results in HTML or another nice format, rather than just printing them on the command line?
Or is there a better way to do this?
If your script outputs TAP (that's what Test::More puts out), then you can use the Test::Harness family of modules to parse that TAP and use it to generate an HTML report.
How nice is nice? Under Hudson/Jenkins this gives graphs and a tabular report of tests run:
prove --timer --formatter=TAP::Formatter::JUnit large_test.t >junit.xml