I am writing an integration test in Scala. The test starts by searching for a configuration file to get access information for another system.
If it finds the file, the test should run as usual. However, if it does not find the file, I don't want to fail the test; I would rather mark it inconclusive to indicate that it could not run solely because of the missing configuration.
In C# I know there is Assert.Inconclusive, which is exactly what I want; is there anything similar in Scala?
I think what you need here is assume / cancel (from the "Assumptions" section of the ScalaTest Assertions documentation):
Trait Assertions also provides methods that allow you to cancel a test. You would cancel a test if a resource required by the test was unavailable. For example, if a test requires an external database to be online, and it isn't, the test could be canceled to indicate it was unable to run because of the missing database.
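As a rough sketch of how that could look with ScalaTest (the config file path and the loadConfig helper are assumptions about your setup):

    import java.io.File
    import org.scalatest.funsuite.AnyFunSuite

    class ExternalSystemSpec extends AnyFunSuite {

      // Hypothetical helper: looks for the file holding the access information.
      private def loadConfig(): Option[File] = {
        val f = new File("src/test/resources/integration.conf") // assumed location
        if (f.exists()) Some(f) else None
      }

      test("integration with the external system") {
        val config = loadConfig()
        // assume() throws TestCanceledException when the condition is false,
        // so the test is reported as canceled (inconclusive) rather than failed.
        assume(config.isDefined, "integration.conf not found - cannot run this test")

        // ... the actual test against the external system goes here ...
      }
    }

You can also call cancel("some reason") directly anywhere in the test body if you prefer an unconditional early exit.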
I'm looking at the Jenkins API for Java and I see there's a JobApi to start/stop jobs and a BuildInfo interface with information about a build of that job. I can't find anything to get the test results of a build, though; is it not (yet) implemented, or did I miss it?
I mean the results that you would get by calling the endpoints:
http://<server>/job/<job_name>/<build_number>/testReport/api/json?pretty=true --> returns a hudson.tasks.junit.TestResult
http://<server>/job/<job_name>/<build_number>/testReport/<package>/BugReportsTest/<test_class>/api/json?pretty=true --> returns a <hudson.tasks.junit.CaseResult>
What you receive is a Java object; call its isPassed() method to find out whether this particular result passed.
Refer to the TestResult Javadoc here:
https://javadoc.jenkins.io/plugin/junit/hudson/tasks/junit/TestResult.html
CaseResult is similar: again, isPassed() tells you whether that particular test case passed.
Refer to the CaseResult Javadoc here:
https://javadoc.jenkins.io/plugin/junit/hudson/tasks/junit/CaseResult.html
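If you are on the Jenkins/JVM side and actually hold these objects (for example inside a plugin - obtaining them via TestResultAction below is an assumption about your setup), the calls look roughly like this:

    import hudson.model.Run;
    import hudson.tasks.junit.CaseResult;
    import hudson.tasks.junit.TestResult;
    import hudson.tasks.junit.TestResultAction;

    public class TestResultInspector {

        // Sketch: prints whether the recorded JUnit results of a build passed.
        public void inspect(Run<?, ?> build) {
            TestResultAction action = build.getAction(TestResultAction.class);
            if (action == null) {
                System.out.println("no test results recorded for this build");
                return;
            }
            TestResult result = action.getResult();
            System.out.println("all tests passed: " + result.isPassed());

            for (CaseResult c : result.getFailedTests()) {
                // CaseResult exposes isPassed(), the duration, error details, etc.
                System.out.println("failed: " + c.getFullName());
            }
        }
    }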
You can check whether the entire build passed by using the following URL:
http://<server>/job/<job_name>/<build_number>/api/json?pretty=true
In that response, the "result" field (e.g. "SUCCESS", "UNSTABLE", "FAILURE", or null while the build is still running) tells you whether the build as a whole passed.
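If you are going over REST instead of the Java objects, a minimal sketch of checking that build-level JSON (server, job name and build number are placeholders; add authentication if your Jenkins requires it):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class BuildStatusCheck {
        public static void main(String[] args) throws Exception {
            String url = "http://jenkins.example.com/job/my-job/42/api/json";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // A real client would parse the JSON properly; a substring check
            // keeps this sketch dependency-free.
            boolean passed = body.contains("\"result\":\"SUCCESS\"");
            System.out.println("Build passed: " + passed);
        }
    }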
When testing for the existence of an element, is it recommended to always assert as in:
expect(await screen.findByTestId('spinner')).toBeVisible();
Or is it sufficient (recommended) to just wait for the element:
await screen.findByTestId('spinner');
Note: the spinner is added using React Hooks, which is why I am await'ing it.
I thought a previous version of RTL recommended not specifically asserting when not required, but I can't find any references to that now.
Certain Testing Library queries, such as the getBy* and findBy* variants, have an implicit "expect": they throw an error if the element is not found. So, as you say, you can either wrap the query in an expect or just run the query on its own; if the test passes, it means the query didn't throw.
You can see the explicit assertion as a way of signalling that this specific line is checking something. It also gives you more clearly differentiated sections (Arrange, Act, Assert, or "render, do things, expect things") that look roughly the same in every test, regardless of what is being expected.
Testing Library tends to recommend best practices for choosing queries, since each query exists for a reason and some give you better coverage (for example, queryByRole with a name option lets you test more than a simpler query would). This question, however, is a matter of style in your code; the main argument for the "always write an expect" option is that it keeps all your tests more uniform.
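To see the two styles side by side (Spinner is a hypothetical component rendering an element with data-testid="spinner"; toBeVisible comes from @testing-library/jest-dom):

    import { render, screen } from '@testing-library/react';
    import { Spinner } from './Spinner'; // hypothetical component under test

    test('shows the spinner (bare query, implicit assertion)', async () => {
      render(<Spinner />);
      // findByTestId rejects if the element never appears,
      // which on its own is enough to fail the test.
      await screen.findByTestId('spinner');
    });

    test('shows the spinner (explicit assertion)', async () => {
      render(<Spinner />);
      // Wrapping it in expect makes the check read as an assertion and
      // additionally verifies visibility, which the bare query does not.
      expect(await screen.findByTestId('spinner')).toBeVisible();
    });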
When I write a test in FitNesse I usually write several tables in wiki format first and then write the fixture code afterwards. I do that by executing the test on the wiki server and then creating the fixture classes, using the names copied from the error messages of the failed test page execution.
This is an annoying process that could be handled by an automatic stub generator that creates the fixture classes with the appropriate class and method names.
Is there already such a generator available?
Not as far as I know. It sounds like you are using Fit, correct?
It sounds like an interesting feature; maybe you could create one as a plugin?
I am trying to adapt the pytest tool so that it can be used in my testing environment, which requires that precise test reports are produced and stored. The test reports are in XML format.
So far I have succeeded in creating a new plugin which produces the XML I want, with one exception:
I need to record passed assertions in my XML report, with the associated code if possible. I couldn't find a way to do so.
The closest I have found is to override pytest_assertrepr_compare in a pytest plugin, but it is only called on assertion failures, not on passed assertions.
Any ideas on how to do this?
Thanks for the help!
etienne
I think this is basically impossible without changing the assertion re-writing itself. py.test does not see things happening at an assert-level of detail; it just executes the test function, which either returns a value (ignored) or raises an exception. When an exception is raised, it inspects the exception information in order to provide a nice failure message.
The assertion re-writing logic simply replaces the assert statement with an if not <assert_expr>: create_detailed_assertion_info. I guess in theory it is possible to extend the assertion rewriting so that it would call hooks on both passing and failure of the <assert_expr>, but that would be a new feature.
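Conceptually - this is a simplification, not the exact code pytest generates - the rewrite behaves like the following, which is why there is currently no hook point for a passing assert:

    def compute():
        # Stand-in for the code under test.
        return 42

    # What the test author writes:
    def test_answer():
        assert compute() == 42

    # Roughly what the rewritten test behaves like:
    def test_answer_rewritten():
        left = compute()
        right = 42
        if not (left == right):
            # Only on failure is the detailed explanation built and raised;
            # on success the code simply falls through, so nothing is
            # recorded for passing assertions.
            raise AssertionError("assert %r == %r" % (left, right))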
Not sure if I understand your requirements exactly, but another approach is to produce a text/XML file with the expected results of your processing: the first time you run the test, you inspect the file manually to ensure it is correct and store it with the test. Further test runs then produce a similar file and compare it with the stored one, failing if they don't match (optionally producing a diff for easier diagnosis).
The pytest-regtest plugin uses a similar approach by capturing the output from test functions and comparing that with former runs.
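A minimal sketch of that golden-file idea (the report string and the file location are made up for illustration):

    import pathlib

    import pytest


    def check_against_golden(produced: str, golden_path: pathlib.Path) -> None:
        """Compare produced output with a stored reference ("golden") file."""
        if not golden_path.exists():
            # First run: store the output and let a human review it.
            golden_path.parent.mkdir(parents=True, exist_ok=True)
            golden_path.write_text(produced)
            pytest.skip(f"wrote new golden file {golden_path}; please review it")
        assert produced == golden_path.read_text()


    def test_report():
        # Hypothetical stand-in for whatever produces your XML report.
        produced = "<report><case name='x' status='passed'/></report>"
        check_against_golden(produced, pathlib.Path("tests/golden/report.xml"))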
I'm trying to provide my QA team with a list of the available sentences in JBehave, based on methods annotated with Given, When, Then, and Alias, such as:
Then $userName is logged in.
Then user should be taken to the "$pageTitle"
I recently wrote a simple script to do this. Before I put more work into it I wanted to be sure there wasn't something better out there.
For one, there is the Eclipse integration for JBehave, which offers code completion and thus provides all steps directly from the code ( http://jbehave.org/eclipse-integration.html ). Note that it doesn't go through dependent .jars though - only what it can find in the source tree.
I.e., enter "Given", hit Ctrl+Space, and get all the available Given steps.
There has also been some work on parsing the run results with a "Story Navigator" ( http://paulhammant.com/blog/introducing-story-navigator.html ), which offers a listing of the steps. I'm not sure whether it can list unused steps, though; furthermore, this one seems more like a proof of concept to me (I wasn't able to make proper use of it).
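If neither of those fits, the "simple script" route from the question can stay very small - essentially a reflection scan over your step classes (the class name used in main below is hypothetical):

    import java.lang.reflect.Method;

    import org.jbehave.core.annotations.Alias;
    import org.jbehave.core.annotations.Given;
    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;

    public class StepSentenceLister {

        // Prints every step sentence declared on the given steps class.
        public static void listSteps(Class<?> stepsClass) {
            for (Method m : stepsClass.getDeclaredMethods()) {
                Given given = m.getAnnotation(Given.class);
                if (given != null) System.out.println("Given " + given.value());

                When when = m.getAnnotation(When.class);
                if (when != null) System.out.println("When " + when.value());

                Then then = m.getAnnotation(Then.class);
                if (then != null) System.out.println("Then " + then.value());

                Alias alias = m.getAnnotation(Alias.class);
                if (alias != null) System.out.println("  (alias) " + alias.value());
            }
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical steps class; pass in your own step classes here.
            listSteps(Class.forName("com.example.steps.LoginSteps"));
        }
    }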