Not able to generate the code coverage result using Devel::Cover - perl

I am trying to generate code coverage for a c4rgr.pl test file with Devel::Cover. A .pm module that the test uses sits in the same directory. I run perl -MDevel::Cover c4rgr.pl, which generates a cover_db directory with a runs subdirectory inside.
Inside the runs subdirectory, a timestamped directory is created every time I run the command, containing a cover.13 file. This cover.13 file (I think) holds the raw code coverage data.
Still, I am unable to get the coverage results in HTML format or on the terminal.
This is what I do:
Run perl -MDevel::Cover c4rgr.pl.
Here, c4rgr.pl uses a .pm module sitting in the same directory.
When I run the above command, it runs the test file, but no coverage report like the Devel::Cover HTML output is produced. Nevertheless, a cover.13 file is created every time, at this path:
cover_db/runs/1401378982.24872.12631/cover.13
Can someone tell me how to get the HTML file, please? I was able to run a dummy test with the same setup and generate an HTML file, but with my actual test it does not work. Or is there a way to convert the cover.13 file to HTML that I am missing?

After running perl -MDevel::Cover c4rgr.pl, run the cover command. For example:
cover -report html -outputdir cover_report
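Put together, a complete session might look like this (a sketch; cover_report is just an example output directory, and cover -delete clears stale data from previous runs):

cover -delete
perl -MDevel::Cover c4rgr.pl
cover -report html -outputdir cover_report

The report's entry page (coverage.html by default) is then written under cover_report and can be opened in a browser.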

Related

How to test a single failing test when building perl

This issue is usually encountered when running make test and seeing one test fail. The README mentions that each test can be run individually, but doesn't clearly specify how to do so.
make test uses the script called TEST in the test directory (t). To replicate make test for a single file, use this script as follows:
[.../perl/t]$ ./perl -I../lib TEST op/array.t
t/op/array ... ok
All tests successful.
Elapsed: 0 sec
u=0.01 s=0.00 cu=0.03 cs=0.02 scripts=1 tests=194
If you want to see the raw output of the test script, you can run perl as follows:
[.../perl/t]$ ./perl -I../lib op/array.t
1..194
ok 1
ok 2
ok 3
...
ok 192 - holes passed to sub do not lose their position (multideref, mg)
ok 193 - holes passed to sub do not lose their position (aelem)
ok 194 - holes passed to sub do not lose their position (aelem, mg)
The above information and more is found in perlhack.
This document explains how Perl development works. It includes details about the Perl 5 Porters email list, the Perl repository, the Perlbug bug tracker, patch guidelines, and commentary on Perl development philosophy.
Note that you need to run make test_prep before the above commands work. (If you've run make test, you've effectively run make test_prep already.)
Run ./perl harness ../foo/boo.t in the t directory, where foo/boo.t is the path of the failing test.
To run a single test script, use perl, or better, prove. Assuming you are in the module's base directory:
prove -lv t/some-test-script.t
This will run the test script against the libraries in ./lib, with fallback to the libraries available to your install of Perl.
If you want to run against the build libraries produced by make, use this:
prove -bv t/some-test-script.t
Now the test script will be run against the libraries in ./blib, falling back to libraries installed for your Perl.
The test scripts are typically just Perl scripts that live in a t/ or xt/ or some similar path within the distribution's directory structure. So you can also run them just with Perl:
perl -Mblib t/some-test-script.t
But prove produces nicer test summary information and color coding.
That is about as granular as you can get unless the tests are written to allow targeting specific segments within a test script. If you need to target a specific test within a script, you'll usually have to dig into the test code itself; one common pattern is sketched below.
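For example, a script can gate its subtests on an environment variable so that a single segment can be run from prove (a hypothetical sketch; the ONLY_SUBTEST variable and the subtest names are made up for illustration):

#!/usr/bin/perl
# t/some-test-script.t -- run only the subtest named in ONLY_SUBTEST, or all of them
use strict;
use warnings;
use Test::More;

my $only = $ENV{ONLY_SUBTEST};   # e.g. ONLY_SUBTEST=parsing prove -lv t/some-test-script.t

for my $name (qw(parsing formatting output)) {
    next if defined $only && $only ne $name;
    subtest $name => sub {
        pass("$name works");     # real assertions would go here
    };
}

done_testing();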

How to run a pytest-bdd test?

I don't understand how to properly run a simple test (feature file and Python file)
with the pytest-bdd library.
From the official documentation, I can't work out what command to issue to run a test.
I tried the pytest command, but it ran NO tests.
Do I need another library, such as behave, to run a feature file?
After two days of trying, I figured out that running a pytest-bdd test has certain requirements, at least in my view:
put both the feature file and the Python file in the same directory (maybe this can be changed with configuration files)
the Python file's name needs to start with test_
the Python file needs to contain a function whose name starts with test_
the function starting with test_ needs to be bound to the scenario with the @scenario decorator (a full sketch follows below)
to run the test, issue the pytest command in the same directory (maybe this is also configurable)
After running it you will only see that the function starting with test_ passed, but all the steps actually ran. To check, you can assert False in any @when- or @then-decorated function and it will throw errors.
My system had pytest-bdd==3.0.2 (copied from pip freeze output).
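Put together, a minimal pair of files might look like this (a sketch; the file names, the feature text, and the step wording are all made up for illustration):

example.feature (placed next to the test file):

Feature: Arithmetic
  Scenario: Adding two numbers
    Given the numbers 2 and 3
    When I add them
    Then the result is 5

test_example.py:

import pytest
from pytest_bdd import scenario, given, when, then

@pytest.fixture
def context():
    # shared mutable state passed between steps
    return {}

# the function starting with test_ is bound to the scenario by @scenario
@scenario("example.feature", "Adding two numbers")
def test_adding_two_numbers():
    pass

@given("the numbers 2 and 3")
def the_numbers(context):
    context["a"], context["b"] = 2, 3

@when("I add them")
def add_them(context):
    context["result"] = context["a"] + context["b"]

@then("the result is 5")
def check_result(context):
    assert context["result"] == 5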
Feature files and Python files can be placed in different folders using the bdd_features_base_dir option provided by pytest-bdd; I think it is better to keep the feature files in separate folders anyway.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest-bdd files (and, if you want, a separate features folder targeted by bdd_features_base_dir) and run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest
I've found out that in the Python file you don't have to follow the requirement that
the function starting with test_ be bound to the scenario with the @scenario decorator.
You can just add scenarios("") to generate the tests from the scenarios that use the steps defined in that specific Python file.
Remember to import scenarios: from pytest_bdd import scenarios
Example:
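A minimal sketch of that approach (the step wording is made up for illustration; scenarios("") collects feature files relative to this test file, as described above):

# test_example.py -- sketch using scenarios() instead of @scenario
from pytest_bdd import scenarios, given, then

# generates a test_ function for every scenario found in the features path
scenarios("")

@given("a precondition")
def precondition():
    pass

@then("the outcome holds")
def outcome():
    assert True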
Command:
pytest -v path_to_test_file.py
Things to note here:
Check that the feature file is named in the format filename.feature
Always add __init__.py files to your modules, otherwise the test runner will not find the test files
Glue the right step definitions to the test function
Add the feature in the features module
If you are using Python 3, execute the test with python3:
python3 -m pytest -v path_to_test_file.py
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#

TexStudio and TexCount User Command

I'm trying to incorporate TeXcount into TeXstudio.
I've gotten the Perl script working on the command line, but I would like TeXstudio to run it automatically.
I have adapted a bat file, but it doesn't work:
@echo off
C:\texlive\2015\texmf-dist\scripts\texcount\texcount.pl -html -inc "?ame" > "%".html
"%".html
However I tinker with the syntax, it doesn't seem to work; at the very most I can get it to produce a blank HTML file, "_count.html".
Can anybody get this .bat file working please?
I've managed to get it working using the TexStudio User Commands.
In the Menus,
Options > Configure TeXStudio
Then,
Build > User Commands
I created my own user command with the name "texcount".
texcount %.tex -inc -total
This runs the texcount script;
on the current file (that's what the % is for),
it includes the files that use \input{},
it provides a total.
I can now run this command from:
Tools > User > (Number of user command: name of user command) e.g. 1:
texcount
Then the results are output in the messages tab.
Hope that helps.
EDIT: To produce an HTML file specifically, I imagine you could use the HTML options to produce your HTML file, as shown in Thomas' answer.
You could then pipe the output to something like wkhtmltopdf to produce a pdf from the html file and show it using the default TeXstudio pdf viewer.
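A rough sketch of that idea as a shell script (untested; the texcount path, the flag combination, and the output names are assumptions):

#!/bin/bash
# hypothetical: write an HTML word count for the given .tex file, then convert it to PDF
perl /CorrectPath/texcount.pl -inc -total -html "$1" > count.html
wkhtmltopdf count.html count.pdf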
I got this to work on my Mac; you may need some adjustments for other OSs and/or other directory structures.
I created a script that calls texcount, as follows:
#!/bin/bash
perl /CorrectPath/texcount.pl -v0 -sum -html "$1*" >tempdoc.html
open tempdoc.html
In TexStudio, my user command looks like this:
"/CorrectPath/count_pdf_words.sh" ?ame"

How can I run a test perl script designed for prove from inside TextMate?

I'm using TextMate 1.5.10 (Mac OSX 10.7.2) to write a perl modulino application. To verify the functionality, I'm using test scripts designed to be run with the prove command line tool.
An example of the directory structure I'm using looks like this:
text_mate_test/MyModule.pm
text_mate_test/t/001_load_test.t
The 001_load_test.t file looks like this:
#!/usr/bin/perl
use Modern::Perl;
use Test::More;
use MyModule;
my $testObj = new_ok("MyModule", undef, "Initial load test.");
done_testing();
When I run prove or prove -v in the "text_mate_test" directory, everything passes as expected.
I'd like to be able to set up a hotkey in TextMate that allows me to run the test file without having to jump over to the terminal. Currently, if I run "001_load_test.t" directly from inside TextMate with Cmd+R, it chokes, saying "Can't locate MyModule.pm in @INC". That's expected, since the test script isn't designed to be run directly. (I'm still pretty new to writing test files, but I believe that's the proper way to set them up.)
Running off the assumption that I don't want to change the test file itself, is there a way to setup a hotkey so I can run the file accurately from inside TextMate?
I've figured out an even better way to do this.
In the TextMate Bundle Editor (Menubar -> Bundles -> Bundle Editor -> Show Bundle Editor), I've updated the default "Perl -> Run Script" bundle to this:
#!/usr/bin/env ruby
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/executor"
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/save_current_document"
TextMate.save_current_document
TextMate::Executor.make_project_master_current_document
### If it's a ".t" test script in a "t" directory, run prove
if ENV["TM_FILEPATH"] =~ /^.*\/(t\/[^\/]+)$/
  ### Grab the relative file path for more legible output
  relative_file_path = $1
  ### Jump up one directory so prove will work
  Dir.chdir("../")
  ### Call prove with args to run only the file you are working on.
  TextMate::Executor.run("prove", :script_args => ["-v", relative_file_path])
### Otherwise, run with perl
else
  TextMate::Executor.run(ENV["TM_PERL"] || "perl", "-I#{ENV["TM_BUNDLE_SUPPORT"]}",
                         "-Mexception_handler", ENV["TM_FILEPATH"],
                         :version_args => ["-e", 'printf "Perl v%vd", $^V;'])
end
The benefit of this is that you can use the same hotkey (Cmd+R by default) to run your normal scripts with perl and your test scripts with prove.
This is what I was looking for.
UPDATED: When I first developed this, I only had one test script in the "t" directory. I didn't notice until I added other test scripts that the code in the original version of this answer would run prove across all the scripts, not just the one being worked on. To get back to the expected behavior, I've updated the bundle code so that prove runs only on the active script.
I've come up with a solution. Create a new command in the Perl bundle called "Run Script with prove" and associate it with Shift-Cmd-R. The code for the command is:
#!/usr/bin/env ruby
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/executor"
require "#{ENV["TM_SUPPORT_PATH"]}/lib/tm/save_current_document"
TextMate.save_current_document
TextMate::Executor.make_project_master_current_document
### If it's a ".t" test script in a "t" directory, run prove
if ENV["TM_FILEPATH"] =~ /^.*\/(t\/[^\/]+)$/
  ### Use the relative file path for more legible output
  relative_file_path = $1
  ### Jump up one directory so prove will work
  Dir.chdir("../")
  ### Call prove with args to run only the file you are working on.
  TextMate::Executor.run("prove", :script_args => ["-v", relative_file_path])
else
  error_string = "This script's filepath doesn't end with /t/.*\.t\n"
  error_string += "That is required for the 'Perl -> Run Script with prove' bundle to work.\n"
  TextMate::Executor.run("echo", :script_args => [error_string])
end
Note: This is the result of a bunch of trial-and-error hacking. I don't know if it's the "right" way to do it, but it works for me. Everything but the last few lines is copied from the original "Run Script" bundle that comes with TextMate; based on that, it seems like this should be pretty safe.
UPDATE: When I first built this, I only had one test file in the "t" directory. When I added more, I discovered that the original version of the bundle was running all the test files. This code is an update with the expected behavior of only running the test script you're working on. Because of the way I ended up doing that, it also became necessary to add a fallback: if you try to run a script that doesn't match the standard test file path format, it prints an error message.
You can enable the program to find your module by adding
use lib '..';
to the top of your code (before the use MyModule line). This adds the text_mate_test directory to @INC and enables Perl to find the module, though you may run into other problems when running the program directly.
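Applied to the test file above, the top would look like this (assuming the script is run with t/ as the working directory, which is what TextMate's Cmd+R does):

#!/usr/bin/perl
use Modern::Perl;
use lib '..';    # adds text_mate_test/ to @INC so MyModule.pm can be found
use Test::More;
use MyModule;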

Output selenium test result as html after running perl script

I am currently looking for a way to output the test results nicely after running a Selenium Perl script.
The htmlSuite command of the Selenium server outputs a nicely formatted HTML result page, but I don't know how to do that from a Perl script.
The problem is, I have set things up so that Selenium runs 24/7 on a virtual machine workstation (Windows 7) that anyone can run tests on. Therefore I can't use htmlSuite to run the tests, because the server would shut down after the test finished.
Is there a command-line argument or Perl script method to make the Selenium server output results in HTML or another nice format, rather than printing them on the command line?
Or is there a better way to do this?
If your script outputs TAP (which is what Test::More produces), then you can use the Test::Harness family of modules to parse that TAP and use it to generate an HTML report.
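For example (a sketch assuming the CPAN module TAP::Formatter::HTML is installed; the test file name is a placeholder):

prove --formatter=TAP::Formatter::HTML t/my_selenium_test.t > report.html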
How nice is nice? Under Hudson/Jenkins, this gives graphs and a tabular report of the tests run:
prove --timer --formatter=TAP::Formatter::JUnit large_test.t >junit.xml