When testing for the existence of an element, is it recommended to always assert as in:
expect(await screen.findByTestId('spinner')).toBeVisible();
Or is it sufficient (recommended) to just wait for the element:
await screen.findByTestId('spinner');
Note: the spinner is added via React Hooks (it appears asynchronously), which is why I am awaiting it.
I thought a previous version of RTL recommended not specifically asserting when not required, but I can't find any references to that now.
Certain Testing Library queries, such as getBy* and findBy*, carry an implicit "expect": they throw an error if the element is not found. So, as you say, you can either wrap the query in an expect or just run the query on its own; if the test passes, it means the query didn't throw.
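For instance, both of the forms from the question fail the test when the spinner never appears. A minimal sketch, assuming jest-dom for toBeVisible and a hypothetical Dashboard component that shows the spinner while loading:

import { render, screen } from '@testing-library/react';
import '@testing-library/jest-dom';
import { Dashboard } from './Dashboard'; // hypothetical component under test

test('shows a spinner while loading', async () => {
  render(<Dashboard />);

  // Option 1: findByTestId alone rejects (and fails the test) if no element
  // with data-testid="spinner" appears before the timeout.
  await screen.findByTestId('spinner');

  // Option 2: the explicit assertion reads as a check and also verifies
  // visibility, not just presence in the DOM.
  expect(await screen.findByTestId('spinner')).toBeVisible();
});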
The explicit assertion makes it clear that that specific line is checking something. It also gives you more clearly differentiated sections (Arrange, Act, Assert, or "render, do things, expect things") that look roughly the same in every test, no matter what is being verified.
Testing Library does recommend best practices for the queries themselves, since each query exists for a reason and some give you stronger guarantees (for example, queryByRole with a name option verifies more than a simpler query does). This particular question, however, is mostly a matter of style, and the only guidance would be to go with the "always write an expect" option if you want all your tests to look more uniform.
Related
I'd like to be able to make "meta" assertions, a la tools like Jest, with pytest.
For instance, say I have a list of items I'm iterating over, and during this iteration I'm making assertions. This works as expected as long as the list is not empty. However, if the list is empty, the loop body never executes and my test happily passes (i.e. no assertions failed). Of course one could check the length of the list first (assuming we're dealing with a list and not a different iterable like a generator), but that might not fit my use case.
So how can I ensure that assertions actually happen in my tests using pytest?
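For concreteness, a minimal illustration of the situation; load_items is a made-up stand-in for the real data source:

def load_items():
    # imagine this hits a database or API and unexpectedly returns nothing
    return []

def test_all_items_are_positive():
    for item in load_items():
        assert item > 0  # never executes for an empty list, so the test passes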
I am writing an integration test in Scala. The test starts by searching for a configuration file to get access information for another system.
If it finds the file, the test should run as usual. However, if it does not find the file, I don't want to fail the test; I would rather mark it inconclusive to indicate that it could not run because of the missing configuration.
In C# I know there is Assert.Inconclusive, which is exactly what I want. Is there anything similar in Scala?
I think what you need here is assume / cancel (from the "Assumptions" section of the ScalaTest Assertions documentation):
Trait Assertions also provides methods that allow you to cancel a test. You would cancel a test if a resource required by the test was unavailable. For example, if a test requires an external database to be online, and it isn't, the test could be canceled to indicate it was unable to run because of the missing database.
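A minimal sketch of how that could look for the integration test described above, assuming a recent ScalaTest version; the suite name, config path, and test name are placeholders:

import java.io.File

import org.scalatest.funsuite.AnyFunSuite

class ExternalSystemSpec extends AnyFunSuite {

  test("reads data from the external system") {
    val config = new File("conf/external-system.conf") // placeholder path
    // assume() cancels the test (rather than failing it) when the condition is false;
    // cancel("message") can be used for the same effect with explicit control flow
    assume(config.exists(), "configuration file not found, cancelling test")

    // the real integration test goes here and only runs when the file exists
  }
}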
I am trying to adapt the pytest tool so that it can be used in my testing environment, which requires that precise test reports are produced and stored. The test reports are in XML format.
So far I have succeeded in creating a new plugin which produces the XML I want, with one exception:
I need to record passed assertions in my XML report, with the associated code if possible. I couldn't find a way to do so.
The closest approach I have found is to overload pytest_assertrepr_compare in a pytest plugin, but it is called only on assertion failures, not on passed assertions.
Any idea how to do this?
Thanks for the help!
etienne
I think this is basically impossible without changing the assertion rewriting itself. py.test does not see things at an assert-level of detail; it just executes the test function, which either returns a value (ignored) or raises an exception. When an exception is raised, py.test inspects the exception information in order to provide a nice failure message.
The assertion rewriting logic essentially replaces the assert statement with if not <assert_expr>: create_detailed_assertion_info. I guess in theory it would be possible to extend the rewriting so that it calls hooks on both passing and failing of <assert_expr>, but that would be a new feature.
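Conceptually (heavily simplified, this is not the code pytest actually generates), the rewriting amounts to something like the following, which also shows why a passing assertion leaves nothing behind for a hook to report:

def compute_total(values):
    # stand-in for whatever the real test exercises
    return sum(values)

# original test
def test_total():
    assert compute_total([1, 2]) == 3

# roughly what the rewritten version amounts to
def test_total_rewritten():
    left = compute_total([1, 2])
    if not (left == 3):
        # only this branch produces detailed information;
        # when the expression is truthy, execution just continues
        raise AssertionError(f"assert {left!r} == 3")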
Not sure if I understand your requirements exactly, but another approach is to produce a text/XML file with the expected results of your processing: the first time you run the test, you inspect the file manually to ensure it is correct and store it with the test. Further test runs then produce a similar file and compare it with the stored one, failing if they don't match (optionally producing a diff for easier diagnosis).
The pytest-regtest plugin uses a similar approach by capturing the output from test functions and comparing that with former runs.
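A minimal sketch of that expected-results-file idea; render_report and the file name are made up:

import pathlib

def render_report() -> str:
    # stand-in for the real processing whose output you want to pin down
    return "<report><total>3</total></report>"

def test_report_matches_stored_expectation():
    expected_path = pathlib.Path(__file__).with_name("expected_report.xml")
    actual = render_report()
    if not expected_path.exists():
        # first run: write the file so it can be reviewed by hand, then fail
        expected_path.write_text(actual)
        raise AssertionError("expected_report.xml written; review it and re-run")
    assert actual == expected_path.read_text()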
I am in the process of implementing the Page Object Model and have one question about it, please see below:
I have created page files containing the locators and methods for each page, and I have a spec file in which I do the assertions by calling those methods. My question: for one page I have over 100 test cases. Should I create a single assertion file for all of the tests, or 100 assertion files for the 100 tests?
Please let me know the best way to manage this.
Regards,
Manan
I think it makes the most sense to group tests into files by functionality. It's hard to run only some tests from a file, so split out any groups of tests you think you might want to run independently. Are some of them suitable for a quick smoke test suite? Maybe those should be in a separate file.
You shouldn't need to create a new file for every assertion or every test case. I am a bit confused by the question because, in my understanding, the assertion is part of the test case: the test and its assertion live in the same function (the assertion being the end goal of the test).
Regarding the Page Object Model: The important part of the pattern is ensuring the separation of page/DOM detail from test flow (i.e. tests should possess no knowledge of the DOM, but instead rely on page objects to act on actual pages).
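For example, a minimal sketch assuming Selenium WebDriver with Python and pytest; the page, locators, and driver fixture are all made up:

from selenium.webdriver.common.by import By

class LoginPage:
    # all locators and DOM knowledge live in the page object
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")
    ERROR = (By.CSS_SELECTOR, ".error-message")

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

    def error_text(self):
        return self.driver.find_element(*self.ERROR).text

# the test expresses only flow and assertions; it knows nothing about the DOM
def test_login_with_wrong_password_shows_error(driver):
    page = LoginPage(driver)
    page.login("manan", "wrong-password")
    assert "Invalid credentials" in page.error_text()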
I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most-but-not-all of my tests are ready to run in parallel.
While I can work on making the remaining tests "parallel friendly", I expect there will always be some tests that are not. What's a good way to manage this? I would like it to be easy to run the whole test suite efficiently, and easy to mark tests as "not-parallel-ready" when I need to.
Here are some options I see:
prove could be patched to support marking some tests as not-parallel-ready
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for aggregate_tests() for TAP::Harness. It includes a code sample for how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should run in parallel but others are unsuitable for parallel execution.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9
    }
);
my $aggregator = TAP::Parser::Aggregator->new;

$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together, I can see how I might create a patch for prove that adds an option like --exclude-from-parallel FILE, which would allow you to specify a file containing a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I have a patch for prove now and have submitted it for review. You can view and comment on the Pull Request. One remaining issue: no summary is produced after the run output, and it's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support since 2008 for marking some tests to be run in sequence rather than in parallel. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution that advertises this feature, but I could only get trivial cases to work: Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I currently have to patch the code to fix a basic bug that has remained unpatched for multiple years.
Additional code is needed to generate the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test... both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel. Have a look at the sample at https://metacpan.org/pod/Test::Parallel.
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
    # tests that are not parallel-ready (will run in isolation)
    - seq:
        - t/test1.t
        - t/test2.t
    # tests that can run in parallel
    - par:
        # wildcard for everything else
        - **
In my case, I was using this with code that directly called App::Prove, rather than command-line prove. But I think it would work with prove too?
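For reference, driving App::Prove from code rather than from the prove command looks roughly like this; a sketch based on the App::Prove synopsis, with an illustrative argument list:

use App::Prove;

# roughly the programmatic equivalent of running: prove -j9 t/
my $app = App::Prove->new;
$app->process_args( '-j9', 't/' );
exit( $app->run ? 0 : 1 );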