Total executed lines with gcov - iOS 5

I'm starting with code coverage to measure how good my tests are. My whole program is already compiled with gcov support, and I also have my .gcda and .gcno files.
If I execute something like "gcov --branch-probabilities --no-output mySourceFile.m", I get the number of lines executed, branches executed, and so on, but only for that one file. I need the same figures for my whole project. Is there any way of doing that?
Thanks!

The tool I was looking for is called gcovr. It's a Python script that summarizes the total code coverage. You just install it and run it, and it gives you the total number of lines, the total executed lines, and the percentage of executed lines.
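For example, a minimal run (a sketch, assuming gcovr was installed with pip and is launched from the build directory that contains the .gcda/.gcno files) could look like this:
pip install gcovr
gcovr -r .
gcovr -r . --branches
The -r option tells gcovr where the project root is so that it aggregates every source file it finds, and --branches switches the summary from line coverage to branch coverage.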
Hope it helps someone...

Related

racket cover gets overwritten if I execute more than one set of tests

I am trying to measure coverage with racket cover (https://docs.racket-lang.org/cover/basics.html), like this:
raco cover hello.rkt
However, if I run it more than once, it overwrites what is in the coverage directory instead of adding to it. If I do something like
raco cover hello.rkt
raco cover bye.rkt
the two HTML reports specific to these .rkt files (coverage/hello.html and coverage/bye.html) remain intact, but coverage/index.html will only have bye.rkt in it. Also, I guess that if they share a library, I'll only get coverage info for that library from the last run.
Does anybody have a way to combine two coverage reports from two different executions?
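One workaround that may help (an untested sketch, assuming raco cover accepts several modules in one invocation) is to cover both files at once, so that a single combined report is written:
raco cover hello.rkt bye.rkt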

Reduce relocatable win32 Perl to as few files and bytes as possible

I'm trying to use a perl program on a Windows HTCondor computing cluster. The way HTCondor on Windows works is that it copies all dependencies into a temporary directory (used as a chroot of sorts) and then deletes the directory after the specified outputs are moved to a designated place.
If I take only perl.exe and perl514.dll and make a job like this: perl -e "print qq/hello\n/" and tell the cluster to run it 200 times, then each replication winds up taking about 15 seconds, which is acceptable overhead. That's almost all time spent repeatedly copying the files over the network and then deleting them. echo_hello.bat run 200 times takes more like two seconds per replication.
The problem I have is that when I try to use my full blown perl distribution of 55MB and 2,289 files, a single "hello" rep takes something like four minutes of copying and deleting, which is unacceptable. When I try to do many runs the disks on the machines grind to a halt trying to concurrently handle all the file operations across all the reps, so it doesn't work at all. I don't know how long it might take to eventually finish because I gave up after half an hour and no jobs had finished.
I figured PAR::Packer might fix the issue, but nope. I tried print_hello.exe created like this: pp -o print_hello.exe -e "print qq/hello\n/". It still makes things grind to a halt, apparently by swamping the filesystem. I think a PAR::Packer executable makes a ton of temporary files as it pulls out the files it needs from the archive. I think the Windows file system totally chokes when there are a bunch of concurrent small file operations.
So how can I go about cutting down the perl I built to something like 6MB and a dozen files? I'm really only using a tiny number of core modules and don't need most of the crap in bin and lib, but I have no idea how to proceed ripping out stuff in a sane way.
Is there an automated way to strip away unneeded files and modules?
I know TCL has a bunch of facilities for packing files into a single uncompressed archive that can then be accessed through a "virtual filesystem" without expanding the file. Is there some way to do this with perl itself sort of like with PAR? The problem is PAR compresses everything and then has to extract to temporary files, rather than directly work through a virtual filesystem layer. (If I understand correctly.)
My usage of perl is actually as a scripting layer. It's embedded in a simulation, so I'm really running my_simulation.exe, which depends on perl514.dll, but you get the idea. I also cannot realistically do anything to the HTCondor cluster other than use it. So there's no need to think outside the box about what I should be using instead of perl or what I could administratively tweak in Windows and HTCondor, thanks.
You can use Module::ScanDeps to get a list of the actual dependencies of your perl program. It was terrible that PAR::Packer took a significant amount of time to unpack the whole application, so I decided to build the executable by myself.
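For example, a quick way to see that list (a sketch, assuming the scandeps.pl utility that ships with Module::ScanDeps is installed; driver.pl is a hypothetical stand-in for your real entry script) is:
scandeps.pl driver.pl
It prints the modules the script would pull in, which you can then copy alongside perl.exe and perl514.dll.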
Here is my ready-to-use script, which gathers perl dependencies into a directory; it might help you reduce the number of perl modules, e.g. by manually removing some dependencies after copying.
In theory (I have never tried this), your next step could be to merge all pure-perl dependencies into a single file (like deps.pm), although that might be non-trivial due to perl's autoload magic and some other tricks.
You can list the modules that are needed by your program using the very nice ListDependencies module.
To my knowledge it isn't downloadable anywhere, but it is simple to copy and paste into your own ListDependencies.pm file.
You should read the POD documentation within the module for usage instructions.

PowerShell: show progress for a 3-day script

I wrote a simple script that calls a test that takes about 3 days. I redirect the test's output to a log file, so when running the script there's nothing on the screen to indicate progress. It's very simple:
CD C:\Test
test.exe > log.txt
I can check the log file every once in a while, sure, but if the machine freezes (which happens) I wouldn't notice right away.
So I need an idea for a nice way to show progress. Outputting a dot every now and then is not nice, I think, since it takes 3 days! Any other idea? As I'm a beginner in PowerShell, an implementation of any suggested idea would also be nice.
Much appreciated,
Yotam
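One simple idea (a sketch only, not from the original thread; it assumes a second PowerShell window is available on the same machine) is to follow the log as it grows:
Get-Content C:\Test\log.txt -Wait
Get-Content with -Wait keeps printing new lines as they are appended, much like tail -f, so a long silence is an early hint that the test or the machine has stalled.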

Run time error in Xcode for having too many lines of code

I have written a large program (a calculator) with about 220,000 lines of code, and more, in my base implementation file. The program builds and runs well, but whenever execution reaches a specific line number and beyond, the program stops and gives a runtime error. There is no problem with the code itself; I have tried it on a smaller scale (I mean, after deleting some lines), and when it becomes smaller it runs fine.
My question is this: does Xcode have a limit on the size of each file when running?
And if the answer is no, what should I do?

Why doesn't "coverage.py run -a" always increase my code coverage percent?

I have a GUI application, and I am trying to determine what is being used and what isn't. I have a number of test suites that have to be run manually to test the user-interface portions. Sometimes I run the same file a couple of times with "coverage.py run file_name -a" and perform different actions each time to check different interface tools. I would expect that each run with the -a argument could only increase the covered line count reported by coverage.py (at least unless new files are pulled in). However, sometimes it reports lower code coverage after an additional run - what could be causing this?
I am not editing source between runs and no new files are being pulled in as far as I can tell. I am using coverage.py version 3.5.1.
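For reference, the usual accumulate-and-report sequence (a sketch, assuming the coverage script is on the PATH; my_gui.py is a hypothetical stand-in for the real entry point) looks like this:
coverage run -a my_gui.py
coverage run -a my_gui.py
coverage report
Each run with -a appends to the same .coverage data file, so the reported percentage would normally only stay the same or go up.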
That sounds odd indeed. If you can provide source code and a list of steps to reproduce the problem, I'd like to take a look at it. You can create a ticket for it here: https://bitbucket.org/ned/coveragepy/issues