Run pgTAP with Perl prove instead of pg_prove

I have a test suite laid out as usual for Perl projects, containing a lib and a t directory. The tests in t are organized into subdirectories. So I run them using:
prove -Ilib -r t/
So far nothing special, and AFAIK quite a standard way of testing in Perl.
Since the assumption is that this is the standard way of testing, I'd like to make sure that the following applies:
"If you run prove -r on t, you have tested everything that is there to test".
This is very important, since otherwise you can never be sure that you really called all the tests and everything is fine. Somebody calling the above might then, without knowing it, run only part of the available tests, leaving some behind. Quite annoying... tests that are not run are of no help. It should be as easy and predictable as possible for developers to call all the tests! It is a bad thing when you have to look up how to run the rest of the test suite. You might not know about it, or might not do it anyway.
So here comes my problem: I have to integrate some tests using pgTAP, which kindly provides the tool pg_prove. Now I have to issue two commands to do the testing. In addition to running prove -Ilib -r I also have to run something like e.g. pg_prove -S schema=customerX -U dbuser -d dbname t/pgTAP/*.sql. The problem is not that big if you call the tests automatically from cron or whatever. But it really decreases the chance that we lazy developers run all the tests during our busy days.
So I wonder what would be the best approach to implement the tests in such a way that prove will also include them. Do I have to create some .t files which wrap the whole thing (and how)? Are there any tricks I can do with the Harness stuff on CPAN? Would a simple test_all.sh in the root directory, containing both commands, do the best job, even if it breaks the assumption I made above?
So my question in short is: can I run all tests, including pgTAP, with prove? If not, is there a best practice for solving my problem?
Thanks a lot.

Yes. In fact, pg_prove just passes everything off to prove. Assuming your pgTAP tests end in .sql, you can run all your tests like this:
prove -lr --ext .sql --ext .t \
  --source pgTAP \
  --pgtap-option dbname=dbname \
  --pgtap-option username=dbuser \
  --pgtap-option suffix=.sql \
  --pgtap-option set=schema=customerX
If you use Module::Build, you can have ./Build test run all the tests too, as I've done for circle.
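For reference, here is a minimal Build.PL sketch along those lines. It is only a sketch: the module name, version numbers, and connection settings are placeholders, and option names such as test_file_exts should be verified against the documentation mentioned below for your Module::Build version:

use strict;
use warnings;
use Module::Build;

Module::Build->new(
    module_name      => 'My::App',          # placeholder
    test_file_exts   => [qw(.t .sql)],      # pick up the pgTAP tests as well
    use_tap_harness  => 1,
    tap_harness_args => {
        sources => {
            Perl  => undef,
            pgTAP => {
                dbname   => 'dbname',
                username => 'dbuser',
                suffix   => '.sql',
            },
        },
    },
    build_requires => {
        'Module::Build'                     => '0.30',
        'TAP::Parser::SourceHandler::pgTAP' => 0,
    },
)->create_build_script;

With something like that in place, ./Build test runs the Perl and the pgTAP tests through the same harness.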
See the TAP::Parser::SourceHandler::pgTAP documentation for details.


Perl/Raku succinct webserver one-liner?

Are there any concise one-liners for quickly serving pages, or directory listings when there is no index.html? Something like this:
python3 -m http.server
Couldn't find a Raku one-liner.
Compare Perl ones, taken from https://gist.github.com/willurd/5720255 and https://github.com/imgarylai/awesome-webservers :
plackup -MPlack::App::Directory -e 'Plack::App::Directory->new(root=>".");' -p 8000
perl -MHTTP::Server::Brick -e '$s=HTTP::Server::Brick->new(port=>8000); $s->mount("/"=>{path=>"."}); $s->start'
Install them prior to use (no additional installs with Python):
cpan Plack
cpan HTTP::Server::Brick
Plack pulls in a gajillion dependencies, so I didn't proceed with the installation, and HTTP::Server::Brick doesn't install on my machine as its tests fail.
Both Perl and Raku are generally considered to be good at one-liners, and are meant to deliver DWIM:
"try to do the right thing, depending on the context",
"guess ... the result intended when bogus input was provided"
So I'd expect them - especially modern and rich Raku - to provide a webserver one-liner on par in simplicity with Python.
Or have I missed something?
If the feature is lacking, is it planned?
If it's lacking and not planned to be implemented, why?
I like http_this (https_this is also available).
There's an annoying shortcoming in that it doesn't currently support index.html - but I have a pull request pending to fix that.
On the other hand, these programs rely on Plack, so maybe you'd rather look elsewhere.
Raku Cro needs one line to install:
zef install --/test cro
And then one to setup and run:
cro stub http hello hello && cro run
From https://cro.services/docs/intro/getstarted
Let's say you want to serve all the files in a project subdirectory, e.g. hello/httpd. Then tweak the standard hello/lib/Routes.pm6 file to this:
use Cro::HTTP::Router;

sub routes() is export {
    route {
        get -> *@path {
            static 'httpd', @path;
        }
    }
}
cro run watches for file changes and will automatically restart the server.
index.html works fine.
I suggest a symbolic link (ln -s) if your directory is outside the project tree.
Putting aside the webserver portion of your question, Python and Perl differ in their philosophies. Both of them are perfectly fine ways of doing things, and each appeals to a different sort of crowd.
Python is "batteries included", so it's a heavyweight distribution of many things in its standard library. There's more right out of the box, even if you never use most of it.
Perl tries to distribute just enough for you to install the modules that you decide that you need. That way, you can choose something that is fresher or newer than the thing that Perl chose to distribute.
Now, for the webserver, you may like Mojolicious. It's mostly self-contained (or relies on mostly core modules) so it's an easier install. The links you mentioned have Mojolicious::Lite examples.
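If you go that route, here is a minimal sketch of a Mojolicious::Lite static file server; not a one-liner, but close. The file name and port are arbitrary choices of mine:

# serve.pl
use Mojolicious::Lite;
use Cwd qw(getcwd);

# Serve everything under the current directory as static files
app->static->paths->[0] = getcwd();

# Map / to index.html explicitly
get '/' => sub { shift->reply->static('index.html') };

app->start;

Start it with perl serve.pl daemon -l http://*:8000 and it behaves much like python3 -m http.server.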

How to test a single failing test when building perl

This issue is usually encountered when running make test and seeing one test fail. The README describes that each test can be run individually, but doesn't clearly specify how to do so.
make test uses the script called TEST in the test directory (t). To replicate make test of a single file, one would use this script as follows:
[.../perl/t]$ ./perl -I../lib TEST op/array.t
t/op/array ... ok
All tests successful.
Elapsed: 0 sec
u=0.01 s=0.00 cu=0.03 cs=0.02 scripts=1 tests=194
If you want to see the raw output of the test script, you can run perl as follows:
[.../perl/t]$ ./perl -I../lib op/array.t
1..194
ok 1
ok 2
ok 3
...
ok 192 - holes passed to sub do not lose their position (multideref, mg)
ok 193 - holes passed to sub do not lose their position (aelem)
ok 194 - holes passed to sub do not lose their position (aelem, mg)
The above information and more is found in perlhack.
This document explains how Perl development works. It includes details about the Perl 5 Porters email list, the Perl repository, the Perlbug bug tracker, patch guidelines, and commentary on Perl development philosophy.
Note that you need to run make test_prep before the above commands work. (If you've run make test, you've effectively run make test_prep already.)
Run ./perl harness ../foo/boo.t in the t directory, with foo/boo the name of the failing test.
To run a single test script, use perl, or better, prove. Assuming you are in the module's base directory:
prove -lv t/some-test-script.t
This will run the test script against the libraries in ./lib, with fallback to the libraries available to your install of Perl.
If you want to use the build libraries built by make, then this:
prove -bv t/some-test-script.t
Now the test script will be run against the libraries in ./blib, falling back to libraries installed for your Perl.
The test scripts are typically just Perl scripts that live in a t/ or xt/ or some similar path within the distribution's directory structure. So you can also run them just with Perl:
perl -Iblib t/some-test-script.t
But prove produces nicer test summary information and color coding.
That is about as granular as you can get unless tests are written to allow for targeting specific segments within a test script. If you need to target a specific test within a test script you'll usually have to dig into the test code itself.
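One common workaround, if you control the test script, is to guard the expensive or niche checks behind an environment variable. This is only a hypothetical sketch; the RUN_SLOW name is made up:

use strict;
use warnings;
use Test::More;

ok(1, 'cheap check that always runs');

SKIP: {
    # Only run the slow part when explicitly requested
    skip 'set RUN_SLOW=1 to run the slow checks', 1 unless $ENV{RUN_SLOW};
    ok(1, 'expensive check, run on demand');
}

done_testing;

You would then invoke it as RUN_SLOW=1 prove -lv t/some-test-script.t when you want the guarded block to run.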

Running a test with tox based on a keyword

I am using pytest with tox. I can run some of my tests with a keyword like this:
pytest -k <keyword> path/to/tests
Now it would be really convenient to be able to do this also with tox, as the environments there are clean and different python versions can be tested. However the nearest thing I have found is:
tox -- path/to/tests/test_very_specific_name.py::TestClass::test_func
This is not easy to type, so I'd rather just run tox without arguments and wait 2 minutes for everything to finish.
Is there a way to run single tests based on keywords with tox? I tried:
tox -- -k <keyword>
This results in a huge list of import errors. It doesn't seem to be able to find any of my local includes. Is this supposed to work?
I figured it out thanks to the comment by phd.
Everything on the command line after -- can be used in tox.ini as {posargs}. I was using that wrong. My tox.ini now has a line like this:
commands = py.test {posargs} <test_folder>
Now it works perfectly with:
tox -- -k <keyword>
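For context, a complete tox.ini along those lines might look roughly like this (the environment list and the tests folder are just example values):

[tox]
envlist = py38, py39

[testenv]
deps = pytest
commands = py.test {posargs} tests

Everything after -- on the tox command line is substituted for {posargs}, so tox -- -k <keyword> ends up running py.test -k <keyword> tests in each environment.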

Perl debugger on test modules

I'm running into problems testing a new addition to a module. (Specifically, the ~ operator seems not to be working in Math::Complex for this new feature only.) It's too bizarre to be what it appears, but the ideal scheme would be to add the -d option on the top line of the .t program.
Well, I was quickly disabused of that idea! It does not invoke the debugger.
If I wanted to use the debugger, I'd need to create an edited copy of the .t program that:
Uses (with the use statement) the module directly, not in the form of
BEGIN { use_ok('My::Module') };
Does not "use Test::More;"
A few other edits that cause gluteal pains
The problem with doing that is that any changes I make in the edited test program still need to be transferred back to the true test program used in "make test". Error prone at best.
I am already using "make test TEST_VERBOSE=1" so that my stdio output shows up. But there's GOT to be a simpler way to invoke the debugger on the .t file.
Thanks for ideas here.
-- JS
use_ok tests are great, but you should have them in test files of their own, not test files that also test other things.
I'm not sure why you would need to avoid Test::More or use_ok to run the debugger, though. What happens when you run your test directly with the debugger?
perl -d -Mblib t/yourtestfile.t
If all else fails, you can try using Enbugger in your test script.
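For the record, Enbugger lets you drop into the debugger from inside the test without changing how it is launched. A minimal sketch, assuming Enbugger is installed and the test is run in a terminal (e.g. via prove -v or perl -Mblib) so the debugger prompt is usable:

use strict;
use warnings;
use Test::More;

BEGIN { use_ok('My::Module') }

# Load the debugger at runtime and stop right here
require Enbugger;
Enbugger->stop;

# ... the failing assertions go here ...

done_testing;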

Can py.test support multiple -k options?

Can py.test support multiple -k options?
Each test case belongs to a particular group, such as _eventnotification or _interface.
Is it possible to run test cases that belong to either one or both at the same time?
I.e., run test cases that have _eventnotification or _interface in the name in the same run.
I tried the following, and only the test cases with _interface were executed:
py.test -k "_eventnotification" -k "_interface"
If that is not supported, is there another way to do this?
The bad news: pytest-2.3.3 does not support it.
The good news: I took your question as an opportunity to finally enhance the "-k" behaviour, so that you can use "not", "or", "and", etc.; see the extended -k example in the pytest documentation. It now works like "-m" except that it matches on (substrings of) test names, not markers. You can use this in-development pytest version with "pip install -i http://pypi.testrun.org -U pytest".
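With a pytest version that includes that change, the original goal should then be a single invocation, something like:
py.test -k "_eventnotification or _interface"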