Using pytest, how can I assert that an assertion actually happened?

I'd like to be able to make "meta" assertions, a la tools like Jest, with pytest.
For instance, say I have a list of items I'm iterating over, and during this iteration I'm making assertions. This works as expected as long as the list is not empty. However, if the list is empty, the assertions never execute but my test happily passes (i.e. no assertions failed). Of course one could check the length of the list first (assuming we're dealing with a list and not a different iterable like a generator), but that might not fit my use case.
So how can I ensure that assertions actually happen in my tests using pytest?
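To make the failure mode concrete, here is a minimal sketch of the scenario (get_items and the checks are invented for illustration):

def get_items():
    # Imagine a bug or a bad fixture makes this return an empty list.
    return []

def test_items_are_positive():
    for item in get_items():
        assert item > 0  # never reached when the list is empty, so the test passes

def test_items_are_positive_with_guard():
    items = get_items()
    # The length check mentioned above; it only helps when you really have a list.
    assert len(items) > 0
    for item in items:
        assert item > 0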

Related

React Testing Library - existence / assertion best practice

When testing for the existence of an element, is it recommended to always assert as in:
expect(await screen.findByTestId('spinner')).toBeVisible();
Or is it sufficient (recommended) to just wait for the element:
await screen.findByTestId('spinner');
Note: the spinner is added using React Hooks, and that is why I am awaiting them.
I thought a previous version of RTL recommended not specifically asserting when not required, but I can't find any references to that now.
There are certain Testing Library queries, such as getBy or findBy, that have an implicit "expect". They throw an error if the element is not found, so as you say, you can either wrap the query in an expect or just run the query on its own; if the test passes, that means the query didn't throw.
You can see the explicit assertion as a signal that you are checking something on that specific line. That way you get more clearly differentiated sections (Arrange, Act, Assert, or "render, do things, expect things") that look almost the same in every test, no matter what they are expecting.
Testing Library tends to recommend best practices for queries, as each query exists for a reason and some give you better coverage (for example, you can test more by using queryByRole with a name option than with a simpler query). Beyond that, this question is a matter of style in your code, and the only recommendation would be to go with the "always write an expect" option if you want all your tests to be more uniform.

is there a way to handle assertion passed in pytest

I am trying to adapt the pytest tool so that it can be used in my testing environment, which requires that precise test reports are produced and stored. The test reports are in xml format.
So far I have succeeded in creating a new plugin which produces the xml I want, with one exception:
I need to record passed assertions in my xml report, with the associated code if possible. I couldn't find a way to do so.
The closest possibility is to overload pytest_assertrepr_compare in a pytest plugin, but it is called only on assertion failures, not on passed assertions.
Any idea how to do this?
Thanks for the help!
etienne
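For reference, the hook mentioned above looks like this in a conftest.py (a minimal sketch; as noted, pytest only calls it when a comparison fails):

def pytest_assertrepr_compare(config, op, left, right):
    # Only invoked for failing comparisons; returning a list of lines
    # replaces the default explanation in the failure report.
    if op == "==" and isinstance(left, str) and isinstance(right, str):
        return [
            "string comparison failed:",
            "  left:  {!r}".format(left),
            "  right: {!r}".format(right),
        ]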
I think this is basically impossible without changing the assertion re-writing itself. py.test does not see things at an assert level of detail; it just executes the test function, which either returns a value (ignored) or raises an exception. In the case where it raises an exception, it then inspects the exception information in order to provide a nice failure message.
The assertion re-writing logic simply replaces the assert statement with an if not <assert_expr>: create_detailed_assertion_info. I guess in theory it would be possible to extend the assertion rewriting so that it calls hooks on both passing and failing of the <assert_expr>, but that would be a new feature.
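Conceptually (a simplified sketch, not the code pytest actually generates), a rewritten assert behaves something like:

def _rewritten_assert(left, right):
    # Original test line:
    #     assert left == right
    if not (left == right):
        # On failure, pytest introspects the sub-expressions to build a
        # detailed message before raising.
        raise AssertionError("assert {!r} == {!r}".format(left, right))
    # On success nothing extra runs, which is why no hook currently fires
    # for passing assertions.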
Not sure if I understand your requirements exactly, but another approach is to produce a text/xml file with the expected results of your processing: the first time you run the test, you inspect the file manually to ensure it is correct and store it with the test. Further test runs will then produce a similar file and compare it with the former, failing if they don't match (optionally producing a diff for easier diagnosing).
The pytest-regtest plugin uses a similar approach by capturing the output from test functions and comparing that with former runs.
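A minimal hand-rolled version of that expected-results idea might look like the following sketch (the file name, JSON format, and run_processing are placeholders, not part of pytest or pytest-regtest):

import json
from pathlib import Path

EXPECTED = Path(__file__).parent / "expected_output.json"  # stored next to the test

def run_processing():
    # Placeholder for the real processing under test.
    return {"users": ["alice", "bob"], "total": 2}

def test_output_matches_recorded_result():
    actual = run_processing()
    if not EXPECTED.exists():
        # First run: record the output, inspect it manually, then commit it.
        EXPECTED.write_text(json.dumps(actual, indent=2, sort_keys=True))
        raise AssertionError("Recorded new expected output; verify it and rerun.")
    expected = json.loads(EXPECTED.read_text())
    assert actual == expected  # on mismatch pytest shows a diff of the two structures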

Running a suite of pytest tests on multiple objects

As a small part of a much larger set of tests, I have a suite of test functions I want to run on each of a list of objects. Basically, I have a set of plugins, and a set of "plugin tests".
Naively, I can just make a list of test functions that take a plugin argument, and a list of plugins, and have a test where I call all of the former on all of the latter. But ideally, each test/plugin combo would appear as an individual test in the results.
Is there already a nicer/standardized way of doing something like this in pytest?
Check out pytest's documentation on parametrization (https://pytest.org/latest/parametrize.html).
It's a mechanism for running the same test a number of times with different parameters -- it sounds like just what you want. It generates tests that run individually, and they have nice output and reporting.
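For example, with a couple of stand-in plugin classes (real code would pull these from wherever the plugins are loaded), a parametrized sketch could look like:

import pytest

class FooPlugin:
    name = "foo"

class BarPlugin:
    name = "bar"

# Stand-in plugin list; in practice this would come from your plugin registry.
PLUGINS = [FooPlugin(), BarPlugin()]

@pytest.mark.parametrize("plugin", PLUGINS, ids=lambda p: type(p).__name__)
def test_plugin_has_name(plugin):
    assert plugin.name

@pytest.mark.parametrize("plugin", PLUGINS, ids=lambda p: type(p).__name__)
def test_plugin_name_is_lowercase(plugin):
    assert plugin.name == plugin.name.lower()

# Each test/plugin combination is collected and reported individually, e.g.
# test_plugin_has_name[FooPlugin], test_plugin_name_is_lowercase[BarPlugin], ...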

Using NUnit, how do I assert that a web element is not visible?

I'm using NUnit to run Selenium tests written in C#. I can easily assert that elements exist after a test has run, but how do I use assertions to verify that an element does not exist?
For example, one test deletes a user and I need to verify that the user's name does not appear on the list afterwards.
Any suggestions are very welcome.
John
Wouldn't the assertTrue(bool b) method work for this purpose?
For example assertTrue(!list.Contains(item))

How to run some but not all tests in a Perl test suite in parallel?

I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most-but-not-all of my tests are ready to run in parallel.
While I can work on making the remaining tests "parallel friendly", I expect there will always be some tests which are not. What's a good way to manage this? I would like for it to be easy to run the whole set of tests efficiently, and easy to mark tests as "not-parallel-ready" if I need to.
Here are some options I see:
prove could be patched to support marking some tests as not-parallel-ready
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for aggregate_tests() for TAP::Harness. It includes a code sample for how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9,
    }
);
my $aggregator = TAP::Parser::Aggregator->new;
$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together, I can see how I might create a patch for prove that would add an option like --exclude-from-parallel FILE, which would let you specify a file containing a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I have a patch for prove now, and have submitted it for review. You can view and comment on the Pull Request. No summary is produced after the run output. It's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support since 2008 for marking some tests to run in sequence instead of in parallel. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution which advertised this feature, but I could only get trivial cases to work. It's to use Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I currently have to patch the code to fix a basic bug that has remained unfixed for multiple years.
Additional code is needed to generate the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test... both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel; have a look at the sample at https://metacpan.org/pod/Test::Parallel
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
  # tests that are not parallel-ready (will run in isolation)
  - seq:
    - t/test1.t
    - t/test2.t
  # tests that can run in parallel
  - par:
    # wildcard for everything else
    - **
In my case, I was using this with code that directly called App::Prove, rather than command-line prove. But I think it would work with prove too?