How to count test cases written with pytest? - command-line

My objective is to get the number of test methods in a package/folder. I'm able to do that by executing
py.test <folder> --collect-only | grep collected
This shows the test count as
collected 104 items
However, this counts parameterized tests multiple times; e.g., if a method has two sets of parameters, that single test is counted twice.
Is there any way to tell pytest to count them as a single test?

If your pytest tests use a custom collection method or parametrization, regular grepping won't be helpful: it can only count test functions and test classes. If that number is all you are interested in, the other answers here should work fine for you. If you want the total number of collected tests, read on.
You should run pytest --collect-only on the target test directory and grep the output.
One possible solution is this:
pytest --collect-only | grep "<Function\|<Class" -c
It returns the number of lines containing <Function or <Class in the output of pytest --collect-only. Since every collected test appears with one of these words, this gives the correct number of tests.
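For illustration, the default (non-quiet) collection output is a tree of nodes in angle brackets; for a small hypothetical test file it looks roughly like this (exact formatting varies between pytest versions):
$ pytest --collect-only
<Module test_spam.py>
  <Function test_foo>
  <Function test_bar>
Here the grep -c above would report 2.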
Another hacky way to find the number of tests is to use the -k switch. It searches for an expression and runs the tests that match it.
pytest -k "not test and not Test"
This will give you the number of all tests. It collects all the tests and looks for tests that do not have test in their name. Since every test has test in its name, all tests are deselected, and the reported deselection count is your total number of tests. This method works with parametrization.
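The tail of the output then reports the deselection count, which is your total. The exact wording varies between pytest versions, but it looks roughly like:
$ pytest -k "not test and not Test"
...
collected 104 items / 104 deselected
...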

Loosely based on my other answer.
Count all tests
One-liner:
$ pytest --collect-only -q | head -n -2 | wc -l
Explanation
--collect-only combined with -q outputs one test per line, with a trailing info line. Example:
$ pytest --collect-only -q
test_eggs.py::test_bacon[1]
test_eggs.py::test_bacon[2]
test_eggs.py::test_bacon[3]
test_spam.py::test_foo
test_spam.py::test_bar
no tests ran in 0.00 seconds
The rest is just routine: head -n -2 strips the trailing summary lines, wc -l counts what is left.
Applying further filtering works as usual, e.g.
$ pytest --collect-only -q -k "fizz" | head -n -2 | wc -l
will count only tests containing fizz in the name,
$ pytest --collect-only -q buzz/ fuzz/ | head -n -2 | wc -l
will count only tests inside the buzz and fuzz directories, etc.
Count tests per test module
If you want to get the info about how many tests are in each module, use --collect-only combined with -qq:
$ pytest --collect-only -qq
test_eggs.py: 3
test_spam.py: 2
Count unique tests (test parametrizations counting as single test)
What the OP initially requested. This is a modification of the above command that strips the parametrization from test names and removes duplicate lines before counting:
$ pytest --collect-only -q | head -n -2 | sed 's/\[.*\]$//' | sort | uniq | wc -l
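Applied to the example output above, the sed expression strips the [1]/[2]/[3] parametrization suffixes, so the three test_bacon entries collapse into one before counting:
$ pytest --collect-only -q | head -n -2 | sed 's/\[.*\]$//' | sort | uniq
test_eggs.py::test_bacon
test_spam.py::test_bar
test_spam.py::test_foo
Adding | wc -l yields 3 unique tests instead of the 5 collected items.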

How about
find . -type f -name 'test*.py' -exec grep -e 'def test_' '{}' \; | wc -l
or
ag 'def test_' | wc -l

One more solution:
egrep -e 'def test_' -r ./ | wc -l
Hope it helps.

Related

Why "-n" is commonly used for dry-run?

Well-known commands like make, rsync, and git use the -n option for a dry run.
What does -n stand for in this context?
My guess is that it's because dry-run contains the letter n and because d and r are already used:
$ make --help | grep '^ *-[dr]'
-d Print lots of debugging information.
-r, --no-builtin-rules Disable the built-in implicit rules.
$ rsync --help | grep '^ *-[dr]'
-r, --recursive recurse into directories
-d, --dirs transfer directories without recursing
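For reference, here is the -n convention in action in each of those tools (these are standard flags, but check your versions' man pages):
make -n                # print the recipe commands without executing them
rsync -avn src/ dst/   # show what would be transferred, but transfer nothing
git clean -n           # list the untracked files that would be removed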

follow logfile with tail and exec on event

I wonder if there is a simpler way to run tail -f or -F on a logfile and execute a command each time a special keyword is mentioned.
This is my working solution so far, but I don't like it for the following reasons:
I have to write new lines to the log file for each match to avoid an endless loop
tail does not follow the log exactly; it could miss some lines while the command is executed
I am not sure about the CPU usage caused by the high polling frequency
Example:
#!/sbin/sh
while [ 1 ]
do
    tail -n1 logfile.log | grep "some triggering text" && mount -v $stuff >> logfile.log
done
I tried the following, but grep won't give a return code until the pipe breaks:
#!/sbin/sh
tail -f -n1 logfile.log | grep "some triggering text" && mount $stuff
I am running the script on Android, which is limited to busybox ash.
Edit: the problem is related to grep. grep won't give a return code until the last line; what I need is a return code for each line. Maybe there is some kind of --follow option for grep, or for sed or awk, or a user-defined function that works with tail --follow.
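One way to get a per-line reaction without the polling loop is to pipe tail into a while read loop and match each line with case. This is a sketch, not tested on your device: it is plain POSIX sh, so it should run under busybox ash, but it assumes your busybox build includes tail -F (use -f otherwise):
#!/sbin/sh
# follow the log; read returns once per line, so each new line is handled immediately
tail -F logfile.log | while read -r line; do
    case "$line" in
        *"some triggering text"*)
            mount -v "$stuff"   # no need to write back to the log to move past the match
            ;;
    esac
done
Because read consumes the stream line by line, there is no endless loop and no missed lines while the command runs; lines simply queue up in the pipe.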

How to use the grep function in the Eclipse C IDE (Ubuntu)

I have a file with a couple of lines, and each one contains a process id and its generation.
I need to print how many processes there are in each generation, using only one loop, and I have to do it with the grep and wc commands; but I can't find how to use grep in my IDE rather than as a terminal command.
I saw it should be something like:
grep -o '\<WORD\>' | wc -l
but that's not something I can write in the IDE.
Any ideas?

Using xargs arguments twice

I need to check whether a local file is the same as a file on a remote host.
The file locations are like below:
File1 at Local machine
./remotehostname/home/a/b/scripts/xyz.cpp
File2 at remote machine
remotehostname:/home/a/b/scripts/xyz.cpp
I intend to compare these 2 files, using the command
diff ./remotehostname/home/a/b/scripts/xyz.cpp remotehostname:/home/a/b/scripts/xyz.cpp
find . -type f | grep -v .svn | xargs -I % diff %
I need to transform % to point at the remote host and compare the files.
I am not sure how to apply sed to %. Or is there a better way to compare such files?
One way could be to save the list of files and then apply sed to that list, but I think there should be an even better way. Also, diff doesn't work on remote hosts; maybe I need to use the output of a dry-run rsync?
This can be done with xargs, but I prefer to use while read in bash.
xargs method
find . -type f | grep -v .svn | sed 's/.*/& remotehostname:&/' | xargs -n2 diff
The sed command duplicates the input and makes whatever modifications you need. The xargs then passes the inputs to diff two at a time. This will not work if any filename contains spaces.
bash method
find . -type f | grep -v .svn | while read line; do
    diff "$line" "remotehostname:$line"
done
The bash read command reads a line from stdin, places it in the named variable ($line here), and returns true. You can then put whatever you like inside the loop, so you get total freedom to rewrite the filename however you need. When the input runs out, read returns false, and the loop exits.
Note that piping things into loops has some interesting side effects that are not relevant here, but might bite you one day.
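Note also that diff itself cannot read remotehostname:/path; that syntax only works for tools like rsync and scp. If the second file really is on a remote machine, a hedged variation of the loop above (assuming bash for process substitution, ssh access to remotehostname, and that the local tree mirrors the remote paths under ./remotehostname) is:
find . -type f | grep -v .svn | while read -r line; do
    # strip the leading ./remotehostname to recover the remote path
    remote="${line#./remotehostname}"
    diff "$line" <(ssh remotehostname cat "$remote")
done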
If you are interested in the actual difference (and not just whether they differ - which rsync is brilliant for telling you) then you can do this using GNU Parallel:
find . -type f | grep -v .svn |
parallel diff {} '<(ssh {= s:./::;s:/.*:: =} cat {= s:([^/]+/){2,2}::;$_=::shell_quote_scalar($_) =})'
s:./::;s:/.*:: = hostname from path
s:([^/]+/){2,2}:: = rest of path
::shell_quote_scalar = \-quote special chars as needed by the shell
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process whenever one finishes, keeping the CPUs active and thus saving time.
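A toy illustration of that scheduling behaviour (the job count and sleep durations here are made up):
# 32 one-second jobs, at most 4 at a time; a new job starts as soon as a slot frees up
seq 32 | parallel -j4 'sleep 1; echo job {} done'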
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel

sed with filename from pipe

In a folder I have many files with several parameters in their filenames, e.g. (with just one parameter) file_a1.0.txt, file_a1.2.txt, etc.
These are generated by a C++ program, and I need to take the last one generated (in time). I don't know a priori what the value of this parameter will be when the program terminates. After that, I need to copy the 2nd line of this last file.
To copy the 2nd line of any file, I know that this sed command works:
sed -n 2p filename
I know also how to find the last generated file:
ls -rtl file_a*.txt | tail -1
Question:
How can I combine these two operations? Certainly it is possible to pipe the 2nd operation into the sed operation, but I don't know how to use the filename from the pipe as input to that sed command.
You can use this:
ls -rt1 file_a*.txt | tail -1 | xargs sed -n '2p'
(OR)
sed -n '2p' `ls -rt1 file_a*.txt | tail -1`
sed -n '2p' $(ls -rt1 file_a*.txt | tail -1)
Typically you can put a command in backticks to place its output at a particular point in another command, so:
sed -n 2p `ls -rt name*.txt | tail -1 `
Alternatively (and preferred, because it is easier to nest, etc.):
sed -n 2p $(ls -rt name*.txt | tail -1)
-r in ls is reverse order.
-r, --reverse
reverse order while sorting
But it is not a good idea to combine it with tail -1.
With the change below (head -1, and no -r option in ls), performance is better, since you needn't wait for all files to be listed before piping to the next command:
sed -n 2p $(ls -t1 name*.txt | head -1 )
I was looking for a similar solution: taking the filenames from a pipe of grep results to feed to sed. I've copied my search-and-replace answer here, as perhaps this example can help: it calls sed for each of the names found in the pipe.
This command simply finds all the files:
grep -i -l -r foo ./*
This one excludes this_shell.sh (in case you put the command in a script called this_shell.sh), tees the output to the console so you can see what happened, and then runs sed on each filename found, replacing the text foo with bar:
grep -i -l -r --exclude "this_shell.sh" foo ./* | tee /dev/fd/2 | while read -r x; do sed -b -i 's/foo/bar/gi' "$x"; done
I chose this method because I didn't like having timestamps changed on files that were not modified. Feeding in the grep results means only files containing the target text get touched (which likely improves performance/speed as well).
Be sure to back up your files and test before using; this may not work in some environments for files with embedded spaces.
FWIW, I had some problems using the tail method: it seems the entire dataset was generated before tail was called on just the last item.