racket cover output gets overwritten if I execute more than one set of tests

I am trying to measure coverage with racket cover (https://docs.racket-lang.org/cover/basics.html), like this:
raco cover hello.rkt
However, if I run it more than once, it overwrites what is in the coverage directory instead of adding to it. If I do something like
raco cover hello.rkt
raco cover bye.rkt
the two HTML reports specific to these .rkt files (coverage/hello.html and coverage/bye.html) remain intact, but coverage/index.html will only list bye.rkt. Also, I guess that if they share a library, I'll only get coverage info for that library from the last run.
Does anybody have a way to combine two coverage reports from two different executions?
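(One hedged workaround: the basics page linked above shows raco cover accepting a list of files, so a single invocation such as
raco cover hello.rkt bye.rkt
would presumably produce one combined coverage/index.html. That only helps when all the tests can run in one execution, though, not when merging reports from genuinely separate runs.)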

How to print an NUnit test run's XML file as a nice-looking report on the command line?

I have a suite of NUnit tests. When they finish, I get a TestResults.xml file. This file is very large and dense; I'd like to be able to peruse a simple human-readable report of the underlying results. I'm led to believe that the provided nunit-console program can do that, but I can't figure out how.
How can I print the information described by TestResults.xml with nunit-console?
Note that running the tests is not the problem; my problem is figuring out how to display the results on the command-line. I can't change how the tests are run, but I can do whatever I want with the results file.
The test result file isn't intended to be human-readable, except when used for debugging. It's the source of all the data about a test run, from which human-readable reports may be generated.
There are two ways to generate a report:
1. As part of the test run itself.
2. After the run is finished, using the test result XML file.
Reports generated as part of the test run require a report-writing extension. You would write such an extension following the documentation, produce the report in whatever format you require, like text or HTML, and give it a unique tag like myreport.
You would install your extension and could then run the console runner with an option like
--result:somefile.html;format=myreport
As you may guess, creating your own extension is a relatively advanced use of NUnit.
Another option, if you are familiar with XSLT processing, is to use the built-in report format user together with an XSLT transform. In that case, the option is something like
--result:somefile.html;transform=mytransform.xslt
You can also produce a report after the run is finished, using the XML output file. The NUnit console program is not used in this case. You must use some standard program that creates reports from NUnit output (or a generic program that applies XSLT transforms), or write your own program.
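As a sketch of that last approach in Python (assuming the NUnit3 result schema, where the test-run root carries the totals and nested test-case elements carry fullname and result attributes; NUnit2-format files use different names):

# nunit_report.py - print a terse console report from an NUnit result file.
# Assumes the NUnit3 XML schema; adjust names for NUnit2-format output.
import sys
import xml.etree.ElementTree as ET

root = ET.parse(sys.argv[1]).getroot()  # e.g. TestResults.xml

# The overall counts are attributes on the test-run root element.
print("Result: %s (total=%s, passed=%s, failed=%s)" % (
    root.get("result"), root.get("total"),
    root.get("passed"), root.get("failed")))

# List each failed test case with the first line of its failure message.
for tc in root.iter("test-case"):
    if tc.get("result") == "Failed":
        msg = (tc.findtext("failure/message") or "").strip()
        print("FAILED: %s - %s" % (tc.get("fullname"),
                                   msg.splitlines()[0] if msg else ""))

Run it as: python nunit_report.py TestResults.xml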

Doxygen \cond statement does not work

I am trying to separate out internal and external documentation using the doxygen \cond construct, but I just can't seem to get it working. I would essentially like to exclude some files completely, not conditionally. Regardless of where I add the tag (before the include, before the header guards, etc.), the files and source both show up.
What I have tried, in vain, is to take the conditional-sections test file from the doxygen repo and add it to the project.
Steps to reproduce [Linux]:
1. Create a new directory.
2. Copy/paste the above file (had to rename it to .h, as .c was passed over?).
3. Generate a dummy config via doxygen -g.
4. Update the Doxyfile: ENABLED_SECTIONS = COND_ENABLED.
5. Run doxygen.
6. Check html/index.html.
The cond_enabled function, however, is still visible in the HTML documentation generated for the project. I have set the ENABLED_SECTIONS variable to other values, but the cond_enabled function still shows up. Running the testing directory of the (doxygen) project passes, so I am lost.
Any suggestions?
Tried with the latest version, 1.8.14.
Thanks!
Regarding the \cond problems (not an answer directly to the real problem you face, I think, but too long for a comment).
The mentioned file is used in the (limited) testing doxygen does, and its first lines contain some instructions on what to do. Furthermore, there is a default Doxyfile in use with the tests. It is hard to run a separate test outside the doxygen build tree.
Regarding the remark "Running the testing directory of the project (doxygen) it passes": this is correct. Here, at the moment, testing is done only against the XML output, and the generated output is compared to a previously recorded version of the XML output. No tests are done at the moment in respect to HTML or PDF/LaTeX. Recently the test framework has been slightly extended, so in the future this should be possible (comparing the xhtml and tex output), but some work still has to be done here.
The current version of the parser sees the \cond in the first line (a normal C comment) as a doxygen command and skips everything till the first \endcond (your friend in these cases is always doxygen -d preprocessor). I think that removing or modifying the first line will already give a better result. There is, however, another hiccup for e.g. HTML output: as the function cond_enabled is not documented and EXTRACT_ALL is not set to YES, the function will not appear in the documentation. So it is best to also add a line of documentation to the function cond_enabled.
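For example, a header laid out along those lines (a sketch; cond_disabled is a made-up name for the conditionally documented part):

/* Plain C comment: keep doxygen commands such as \cond out of here,
   or the parser skips everything up to the first \endcond. */

/// Documented, so it shows up even without EXTRACT_ALL = YES.
int cond_enabled(void);

/** \cond COND_ENABLED */
/// Only documented when the Doxyfile has ENABLED_SECTIONS = COND_ENABLED.
int cond_disabled(void);
/** \endcond */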
Regarding the observed HTML problems, I modified the relevant test in doxygen slightly and pushed a proposed patch to github (pull request 714, https://github.com/doxygen/doxygen/pull/714).
Note: the problem of skipping a \cond in a normal C comment is quite a bit harder to fix (given the logical complexity of the doxygen code in pre.l and commentcnv.l).
EDIT 2018/06/10: The pull request has been integrated into the master version on github.

How to produce a .js file from a haskell source file with haste?

So I noticed, while answering this question, that the asker appears to be a JavaScript developer. And as the code I wrote in Haskell is easy enough, I thought I'd give haste a try and compile it to JavaScript.
So, I downloaded the Windows binary package of haste (why does the .msi require a reboot?!?), added it to my path, issued haste-cabal update and haste-cabal install split, and after a bit of reading the output of hastec --help, I issued:
PS E:\h\stackoverflow> hastec -o hexagon.js --pretty-print hexagon.hs
as my best guess on how to get the output I am looking for.
Contrary to my expectation, haste's output was this:
hastec.exe: user error (shell expression failed in readModule: Data.Binary.Get.runGet at position 8: not enough bytes)
So, my question: what do I have to do to get a JavaScript source file?
Is it possible that you have an old version of Haste lying around, or have intermediate files (.jsmod, for instance) from a different version of the compiler in your source directory? This sounds like the (quite unhelpful) error message Haste produces when it runs into a corrupted intermediate file.
Check that the version of the binary you're calling is what you expect (hastec --version). Then, try getting rid of all intermediate files in the directory as well as any files in %USERPROFILE%\AppData\Roaming\haste, reinstalling split, and recompiling with the -fforce-recomp flag. You should also add a main function, so that Haste has an entry point to your program from which to start linking. If all you want to do is to make some Haskell function available to external JavaScript, you can use the export foreign function interface:
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Haste.Foreign
import Hexagon

-- Expose Hexagon.picture to JavaScript as Haste.picture.
main = export "picture" Hexagon.picture
You will probably also want to compile your program with the --onexec flag, to make sure that main runs and exports picture immediately when loaded, and not on page load which is the default:
> hastec -o hexagon.js --pretty-print --onexec hexagon.hs
After doing this, any code included after hexagon.js will be able to call e.g. Haste.picture(5); in order to produce a picture of size 5.
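For example, in a minimal page (a sketch; the file name and size are placeholders):

<script src="hexagon.js"></script>
<script>
  // --onexec means main has already run and registered the export:
  var pic = Haste.picture(5);
</script>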
(Re: MSI installer requiring a reboot, this is required since it adds the Haste binaries to your %PATH%, which does not take effect immediately. I assume that a re-login would be enough to make it take effect, however.)

Reduce relocatable win32 Perl to as few files and bytes as possible

I'm trying to use a perl program on a Windows HTCondor computing cluster. The way HTCondor on windows works is it copies all dependencies into a temporary directory (used as a chroot of sorts) and then it deletes the directory after the specified outputs are moved to a designated place.
If I take only perl.exe and perl514.dll and make a job like this: perl -e "print qq/hello\n/" and tell the cluster to run it 200 times, then each replication winds up taking about 15 seconds, which is acceptable overhead. That's almost all time spent repeatedly copying the files over the network and then deleting them. echo_hello.bat run 200 times takes more like two seconds per replication.
The problem I have is that when I try to use my full blown perl distribution of 55MB and 2,289 files, a single "hello" rep takes something like four minutes of copying and deleting, which is unacceptable. When I try to do many runs the disks on the machines grind to a halt trying to concurrently handle all the file operations across all the reps, so it doesn't work at all. I don't know how long it might take to eventually finish because I gave up after half an hour and no jobs had finished.
I figured PAR::Packer might fix the issue, but nope. I tried print_hello.exe created like this: pp -o print_hello.exe -e "print qq/hello\n/". It still makes things grind to a halt, apparently by swamping the filesystem. I think a PAR::Packer executable makes a ton of temporary files as it pulls out files it needs from the archive. I think the windows file system totally chokes when there are a bunch of concurrent small file operations.
So how can I go about cutting down the perl I built to something like 6MB and a dozen files? I'm really only using a tiny number of core modules and don't need most of the crap in bin and lib, but I have no idea how to proceed with ripping stuff out in a sane way.
Is there an automated way to strip away unneeded files and modules?
I know TCL has a bunch of facilities for packing files into a single uncompressed archive that can then be accessed through a "virtual filesystem" without expanding the file. Is there some way to do this with perl itself sort of like with PAR? The problem is PAR compresses everything and then has to extract to temporary files, rather than directly work through a virtual filesystem layer. (If I understand correctly.)
My usage of perl is actually as a scripting layer. It's embedded in a simulation. So I'm really running my_simulation.exe which depends on perl514.dll, but you get the idea. I also cannot realistically do anything to the HTCondor cluster other than use it. So there's no need to think outside the box on what I should be using instead of perl and what I could administratively tweak in Windows and HTCondor, thanks.
You can use Module::ScanDeps to get the list of actual dependencies of your perl program. It was terrible that it took a significant amount of time when PAR::Packer unpacked the whole application, so I decided to build the executable by myself.
I have a ready-to-use script which gathers perl dependencies into some directory; it might be useful for you to reduce the number of perl modules, e.g. by manually removing some dependencies after copying.
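That script is not reproduced here, but a minimal sketch of the approach, using Module::ScanDeps's documented scan_deps interface (my_script.pl and the deps/ target are placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use Module::ScanDeps;
use File::Copy qw(copy);
use File::Path qw(make_path);
use File::Basename qw(dirname);

my ($script, $target) = ('my_script.pl', 'deps');

# scan_deps returns a hashref keyed by module path (e.g. 'List/Util.pm');
# each value records the absolute source location in {file}.
my $deps = scan_deps(files => [$script], recurse => 1);

for my $rel (sort keys %$deps) {
    my $dst = "$target/lib/$rel";
    make_path(dirname($dst));
    copy($deps->{$rel}{file}, $dst) or die "copy $rel failed: $!";
}

Running perl with -I deps/lib (plus perl.exe and the core DLL) should then be enough for scripts whose dependencies the static scan catches; dynamic loading tricks can still escape it.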
In theory (I have never tried it), your next step could be to merge all pure-perl dependencies into a single file (like deps.pm), although that might be non-trivial due to perl's autoload magic and some other tricks.
You can list the modules that are needed by your program using the very nice ListDependencies module.
To my knowledge it isn't downloadable anywhere, but it is simple to copy and paste into your own ListDependencies.pm file.
You should read the POD documentation within the module for usage instructions.

Why doesn't "coverage.py run -a" always increase my code coverage percent?

I have a GUI application and I am trying to determine what is being used and what isn't. I have a number of test suites that have to be run manually to test the user interface portions. Sometimes I run the same file a couple of times with "coverage run -a file_name" and do different actions each time to check different interface tools. I would expect that each run with the -a argument could only increase the covered line count reported by coverage.py (at least unless new files are pulled in). However, sometimes it gives lower code coverage after an additional run; what could be causing this?
I am not editing source between runs and no new files are being pulled in as far as I can tell. I am using coverage.py version 3.5.1.
That sounds odd indeed. If you can provide source code and a list of steps to reproduce the problem, I'd like to take a look at it: you can create a ticket for it here: https://bitbucket.org/ned/coveragepy/issues
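In the meantime, one way to narrow it down is to snapshot exactly which lines each run records and diff the snapshots. A sketch against the current coverage.py API (the 3.5-era API spelled these calls differently):

import coverage

# Load the accumulated data file written by "coverage run -a".
cov = coverage.Coverage(data_file=".coverage")
cov.load()
data = cov.get_data()

# Print every measured file with its recorded line numbers; save this
# output after each run and diff the copies to see what disappeared.
for filename in sorted(data.measured_files()):
    print(filename, sorted(data.lines(filename) or []))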