Is there a way to handle passed assertions in pytest?

I am trying to adapt the pytest tool for use in my testing environment, which requires that precise test reports be produced and stored. The reports are in XML format.
So far I have succeeded in creating a new plugin which produces the XML I want, with one exception:
I need to record passed assertions in my XML report, with the associated code if possible. I couldn't find a way to do so.
The closest thing I found is overriding the pytest_assertrepr_compare hook in a pytest plugin, but it is called only on assertion failures, not on passed assertions.
Any ideas?
Thanks for the help!
etienne

I think this is basically impossible without changing the assertion re-writing itself. py.test does not see what happens at the level of individual assert statements; it just executes the test function, which either returns a value (ignored) or raises an exception. When an exception is raised, pytest inspects the exception information in order to provide a nice failure message.
The assertion re-writing logic simply replaces the assert statement with, in effect, if not <assert_expr>: create_detailed_assertion_info. In theory it should be possible to extend the assertion rewriting so that it calls hooks on both passing and failure of the <assert_expr>, but that would be a new feature.
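
To make that concrete, here is a rough sketch of the transformation in plain Python (heavily simplified; the real rewriter generates far more elaborate introspection code, and build_detailed_message is a hypothetical stand-in):

    # What you write:
    def test_answer():
        assert compute() == 42

    # Roughly what runs after rewriting (simplified sketch):
    def test_answer():
        left = compute()
        if not (left == 42):
            # build_detailed_message stands in for the machinery that
            # introspects the operands to build the failure text
            raise AssertionError(build_detailed_message(left, 42))

Note that on the passing path nothing is recorded at all, which is why no existing hook fires for passed assertions.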

Not sure if I understand your requirements exactly, but another approach is to produce a text/XML file with the expected results of your processing: the first time you run the test, you inspect the file manually to ensure it is correct and store it with the test. Further test runs then produce a similar file and compare it with the stored one, failing if they don't match (and optionally producing a diff for easier diagnosis).
The pytest-regtest plugin uses a similar approach, capturing the output from test functions and comparing it with former runs.
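
A minimal sketch of the golden-file idea, where generate_report and the golden path are assumptions standing in for your own reporting code:

    from pathlib import Path

    GOLDEN = Path("tests/golden/report.xml")  # hypothetical location

    def test_report_matches_golden():
        actual = generate_report()  # your code that produces the XML report
        if not GOLDEN.exists():
            # First run: record the output, then inspect it manually
            GOLDEN.write_text(actual)
        assert actual == GOLDEN.read_text()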

Related

React Testing Library - existence / assertion best practice

When testing for the existence of an element, is it recommended to always assert as in:
expect(await screen.findByTestId('spinner')).toBeVisible();
Or is it sufficient (recommended) to just wait for the element:
await screen.findByTestId('spinner');
Note: the spinner is added using React Hooks, which is why I am await'ing it.
I thought a previous version of RTL recommended not specifically asserting when not required, but I can't find any references to that now.
Certain Testing Library queries, such as getBy* or findBy*, have an implicit "expect": they throw an error if the element is not found. So, as you say, you can either wrap the query in an expect or just run the query on its own; if the test passes, it means the query didn't throw any errors.
You can see the explicit assertion as signaling that you are checking something on that specific line. That way you get more clearly differentiated sections (Arrange, Act, Assert, or "render, do things, expect things") that look almost the same in every test, no matter what they are expecting.
Testing Library does recommend best practices for choosing queries, since each query is built for a reason and some give you better results (for example, you can test more with a queryByRole that has a name option than with a simpler query). This question, however, is a matter of style in your code, and the only guidance would be to go with the "always write an expect" option if you want all your tests to be more uniform.
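
To illustrate both styles side by side, here is a sketch that assumes the spinner exposes a progressbar role with an accessible name (adjust to your component):

    import '@testing-library/jest-dom';
    import { screen } from '@testing-library/react';

    test('spinner appears while loading', async () => {
      // Style 1: findBy* throws if nothing matches, so this line
      // alone can fail the test
      await screen.findByTestId('spinner');

      // Style 2: the explicit expect marks the Assert step, and a
      // role-based query with a name option also checks accessibility
      expect(
        await screen.findByRole('progressbar', { name: /loading/i })
      ).toBeVisible();
    });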

Scala.js stacktraces

I'm frustrated by unintelligible stack traces when my Scala.js code throws an exception. I thought I had a solution using a JavaScript library (see Getting a scala stacktrace) but it breaks too often.
How do you extract meaning (where the program broke and how it got there, in terms of the Scala code) from a stacktrace like the following? Or am I doing something wrong to even get an untranslated stacktrace?
Take a look at this code I wrote a while back in my youi framework: https://github.com/outr/youi/tree/e66dc36a12780fa8941152d07de9c3a52d28fc10/app/js/src/main/scala/io/youi/app/sourceMap
It is used to map JS stack traces back to Scala stack traces. In youi I send the errors to the server so I can monitor browser errors that occur, with the complete traceback.
Brief Overview
source-map.js
You need source-map.js to parse the js.map file that Scala.js generated when it compiled your code. See: https://github.com/mozilla/source-map
Load the js.map file via Ajax
The SourceMapConsumer needs a js.Object (JSON) of the js.map file. See https://github.com/outr/youi/blob/e66dc36a12780fa8941152d07de9c3a52d28fc10/app/js/src/main/scala/io/youi/app/sourceMap/ErrorTrace.scala#L58 for an example of loading via youi's Ajax features.
Process the Throwable
The trace holds lines and columns in the JS file, and you can pass that information to SourceMapConsumer to get the original Scala line numbers back (see SourceMapConsumer.originalPositionFor, sketched below). See ErrorTrace.toCause (https://github.com/outr/youi/blob/e66dc36a12780fa8941152d07de9c3a52d28fc10/app/js/src/main/scala/io/youi/app/sourceMap/ErrorTrace.scala#L98) for an example of iterating over the Throwable's trace elements.
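
For a flavor of that call, here is a minimal Scala.js sketch; the facade and the global name sourceMap are my assumptions (older builds of source-map.js expose a synchronous constructor, newer ones are promise-based):

    import scala.scalajs.js
    import scala.scalajs.js.annotation.JSGlobal

    // Hypothetical facade for the one call we need; assumes the library
    // is loaded on the page and exposed as the global `sourceMap`.
    @js.native
    @JSGlobal("sourceMap.SourceMapConsumer")
    class SourceMapConsumer(rawSourceMap: js.Object) extends js.Object {
      def originalPositionFor(position: js.Object): js.Dynamic = js.native
    }

    object TraceMapping {
      // Map a JS line/column back to the original Scala position.
      def scalaPosition(consumer: SourceMapConsumer, jsLine: Int, jsCol: Int): String = {
        val pos = consumer.originalPositionFor(
          js.Dynamic.literal(line = jsLine, column = jsCol))
        s"${pos.source}:${pos.line}"
      }
    }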
Handling Errors
Now that you have the capacity to process JavaScript errors and convert them back to Scala traces, you need to actually receive the errors. If you want to handle uncaught errors globally, set a function on window.onerror to capture them. As of this writing, the function signature in Scala.js isn't ideal for handling all the information, so in youi I use js.Dynamic to set it to what I need (see: https://github.com/outr/youi/blob/e66dc36a12780fa8941152d07de9c3a52d28fc10/app/js/src/main/scala/io/youi/app/ClientApplication.scala#L35, and the sketch below).
Also, notice that ErrorTrace supports multiple incoming types of errors (ErrorEvent, Throwable, and a more generic scenario). This is because in JavaScript the errors arrive in different forms depending on what's happening. This is a fairly complex topic, which is why I created this functionality in youi to simplify things.
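
A minimal sketch of that js.Dynamic trick, simplified from what youi does (the handler body is a placeholder):

    import scala.scalajs.js
    import org.scalajs.dom

    val handler: js.Function5[String, String, Int, Int, js.Any, Boolean] =
      (message, source, line, column, error) => {
        // hand off to your trace-mapping code here
        println(s"uncaught: $message at $source:$line:$column")
        false // returning false keeps the browser's default handling
      }
    dom.window.asInstanceOf[js.Dynamic].onerror = handler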
Not nearly as brief an overview as I would have liked, but this isn't a simple problem to solve. The source-map GitHub project (https://github.com/mozilla/source-map) has decent documentation and is what I originally used to write my solution (with some added trial and error). If the information I've provided is incomplete, I'd recommend reading more there; it should provide the majority of the information, probably better explained.

Scala inconclusive Assertion

I am writing an integration test in Scala. The test starts by searching for a configuration file that holds access information for another system.
If it finds the file, the test should run as usual; if it does not, I don't want the test to fail. I would rather mark it inconclusive, to indicate that it could not run because of the missing configuration.
In C# I know there is Assert.Inconclusive, which is exactly what I want. Is there anything similar in Scala?
I think what you need here is assume / cancel, from the "Assumptions" section of ScalaTest's Assertions documentation:
Trait Assertions also provides methods that allow you to cancel a test. You would cancel a test if a resource required by the test was unavailable. For example, if a test requires an external database to be online, and it isn't, the test could be canceled to indicate it was unable to run because of the missing database.
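
A minimal ScalaTest sketch (the configuration path is a placeholder):

    import java.nio.file.{Files, Paths}
    import org.scalatest.funsuite.AnyFunSuite

    class ExternalSystemSpec extends AnyFunSuite {
      test("reads data from the external system") {
        val config = Paths.get("external-system.conf") // hypothetical path
        // assume() cancels the test (reported as canceled, not failed)
        // when the condition does not hold
        assume(Files.exists(config), "configuration file not found")
        // ... the real integration test goes here
      }
    }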

Need Help on Page Object Model Implementation

I am in the process of implementing the Page Object Model and have one question about it:
I have created page files containing the locators and methods for each page, and a spec file in which I do the assertions by calling these methods. My question: for one page I have over 100 test cases, so should I create a single assertion file for all the tests, or 100 assertion files for 100 tests?
Please let me know the best way to manage this.
Regards,
Manan
I think it makes the most sense to group tests into files by functionality. It's hard to run only some tests from a file, so split out any groups of tests you think you might want to run independently. Are some of them suitable for a quick smoke test suite? Maybe those should be in a separate file.
You shouldn't need to create a new file for every assertion or every test case. I am confused by the question because, in my understanding, the assertion is part of the test case: test and assertion belong to the same function, with the assertion being the end goal of the test.
Regarding the Page Object Model: The important part of the pattern is ensuring the separation of page/DOM detail from test flow (i.e. tests should possess no knowledge of the DOM, but instead rely on page objects to act on actual pages).
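
As a minimal sketch of that separation (the page, the locators, and the selenium-webdriver API are chosen just for illustration):

    import { By, WebDriver } from 'selenium-webdriver';

    // login.page.ts -- locators and actions live here, never in the specs
    export class LoginPage {
      private username = By.id('username');             // hypothetical locators
      private submit = By.css('button[type="submit"]');

      constructor(private driver: WebDriver) {}

      async logIn(user: string): Promise<void> {
        await this.driver.findElement(this.username).sendKeys(user);
        await this.driver.findElement(this.submit).click();
      }
    }

    // login.spec.ts -- one file per functional area; each test calls the
    // page object's methods and then asserts the outcome in the same function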

Not Enough Arguments exception thrown in FF extension

The documentation clearly states to use self.postMessage(message) from content scripts if you want to communicate with the add-on script. I'm doing just that, and passing in a string for testing purposes, but I get the exception detailed in the title. Why is this?
Here's a working example of how the message-passing works:
https://builder.addons.mozilla.org/package/60173/latest/
As you can see from the example, using self.port.emit and self.port.on for message passing results in more readable code.
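
A rough sketch of that style, with the Add-on SDK module names written from memory (adjust to your setup):

    // main.js (add-on script)
    var tabs = require("sdk/tabs");
    var self = require("sdk/self");

    tabs.on("ready", function (tab) {
      var worker = tab.attach({
        contentScriptFile: self.data.url("content.js")
      });
      worker.port.on("ping", function (payload) {
        console.log("content script said: " + payload);
      });
    });

    // content.js (content script)
    self.port.emit("ping", "hello from the page");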