How to run setup/cleanup only in a pytest script, skipping all testcases in it? - pytest

I have a pytest script that contains setup/cleanup code and various test cases. At one time I only need to run the setup/cleanup part of it, while at another time I need to run some of its test cases.
So my plan is something like:
PYTEST_ADDOPTS="-k 'not test_abc and not test_def'" SKIP_CLEANUP_ON_SOMETHING=1 pytest test1.py
pytest test2.py
pytest test1.py
The problems are:
It doesn't work at all, because if no test case can be collected the entire test script is skipped, so the setup/cleanup part never gets a chance to run.
The PYTEST_ADDOPTS string would become unbearably long as more and more test functions are added to test1.py.
Is there a way to skip all the test cases in test1.py, so that only the setup/cleanup parts run?
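For reference, a minimal sketch of the kind of layout in question (file contents and names here are assumed, with the setup/cleanup written as an autouse module-level fixture):

# test1.py (sketch; the test names and setup/cleanup bodies are placeholders)
import pytest

def do_setup():        # stands in for the real setup part
    print("setup")

def do_cleanup():      # stands in for the real cleanup part
    print("cleanup")

@pytest.fixture(scope="module", autouse=True)
def environment():
    do_setup()
    yield
    do_cleanup()

def test_abc():
    assert True

def test_def():
    assert True

# Planned invocation, deselecting every test so that only setup/cleanup would run:
#   PYTEST_ADDOPTS="-k 'not test_abc and not test_def'" pytest test1.py
# With nothing collected, the module-level fixture never executes, which is the first problem above.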

Related

Is it possible to reorder the NUnit tests at run-time before the execution begins?

We have quite a lot of tests that need to bypass the load balancer in order to talk directly to a specific web server.
Each test is decorated with a TestCaseSource attribute specifying a function that determines, at run time, the list of web servers to hit (a rough sketch of this setup follows the ordering example below).
So, if we have n tests T1, T2, ..., Tn and m Web Servers W1, W2, ..., Wm (discovered at run-time), the tests run in the following order:
T1W1
T1W2
...
T1Wm
T2W1
T2W2
...
T2Wm
...
TnW1
TnW2
...
TnWm
Now, I need them to run in a different order, namely:
T1W1
T2W1
...
TnW1
T1W2
T2W2
...
TnW2
...
T1Wm
T2Wm
...
TnWm
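To make the current setup concrete, it is presumably something along these lines (a sketch only; class, method, and discovery names are assumed, the server names are taken from the question):

// Current approach (sketch): one test method whose cases are generated per web server at run time.
using System.Collections.Generic;
using NUnit.Framework;

public class ShiftTests
{
    [TestCaseSource(nameof(GetWebServers))]
    public void RegisterAndStartShiftAndEnsureInvalidBadge(string webServer)
    {
        // Talks directly to 'webServer', bypassing the load balancer.
        // NUnit runs every case of this one method together: T1W1, T1W2, ..., T1Wm.
    }

    private static IEnumerable<TestCaseData> GetWebServers()
    {
        // Placeholder for the run-time discovery described above.
        yield return new TestCaseData("nan4dfc1app01");
        yield return new TestCaseData("nan4dfc1app02");
    }
}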
I understand that I can modify the test name using the TestCaseData.TestName property. But doing so would still run the child test cases together. For example, see below:
The tests nan4dfc1app01_RegisterAndStartShiftAndEnsureInvalidBadge and nan4dfc1app02_RegisterAndStartShiftAndEnsureInvalidBadge run one after another rather than:
nan4dfc1app01_RegisterAndStartShiftAndEnsureInvalidBadge running with all other tests starting with nan4dfc1app01_
nan4dfc1app02_RegisterAndStartShiftAndEnsureInvalidBadge running with all other tests starting with nan4dfc1app02_
So essentially, renaming the test cases does not split the child test cases. Not good for me.
So, is there a way to change the order at run-time the way I need it?
It's not possible to do this with a TestCaseSourceAttribute. All the test cases generated for a single test method are run together.
The other mechanism for grouping tests is by fixture. If you made your class a parameterized fixture and passed it the web servers using TestFixtureSourceAttribute, then you could control the order of the tests within each fixture.
You would save the passed in parameter for the fixture as an instance member and use it within every test. This is probably simpler and easier to read than what you are doing anyway, because there is only one reference to the source rather than many.
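A minimal sketch of that suggestion (the fixture name is assumed; the server names and test name are taken from the question):

// Suggested approach (sketch): a parameterized fixture per web server, so each
// server's tests are grouped and run together.
using System.Collections.Generic;
using NUnit.Framework;

[TestFixtureSource(nameof(WebServerTests.WebServers))]
public class WebServerTests
{
    // Discovered at run time in the real code; NUnit creates one fixture instance per entry.
    public static IEnumerable<string> WebServers => new[] { "nan4dfc1app01", "nan4dfc1app02" };

    private readonly string _webServer;

    public WebServerTests(string webServer)
    {
        _webServer = webServer;   // the single reference to the source, reused by every test
    }

    [Test]
    public void RegisterAndStartShiftAndEnsureInvalidBadge()
    {
        // Use _webServer to hit the specific server directly, bypassing the load balancer.
    }
}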

How to call certain steps even if a test case fails in SOAP UI to clean up before proceeding?

I use SOAP UI for testing a REST API. I have a few test cases which are independent of each other and can be executed in random order.
I know that one can stop the whole run from aborting by disabling the Fail on error option, as shown in this answer on SO. However, it can happen that TestCase1 has prepared certain data for its tests and then breaks in the middle of its run because an assertion fails or for some other reason. TestCase2 then starts running after it and tests some other things, but because TestCase1 never got to execute all of its steps (including the ones that clean up), TestCase2 may fail.
I would like to run all of the tests even if a certain test fails; however, should a test fail, I also want to execute a number of test-case-specific steps. In programming terms, I would like a finally: each test case should have a number of steps that are executed regardless of whether the test failed or passed.
Is there any way to achieve this?
You can use a TearDown Script at the test case level.
Even when a test step fails, the teardown script still runs, so it behaves much like a finally block.
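A minimal sketch of such a teardown script (the clean-up step name is assumed):

// Test case TearDown Script (Groovy). Runs after the test case, whether it passed or failed.
// testRunner, testCase, context and log are provided by SoapUI.
log.info "Test case finished with status: " + testRunner.status

def cleanupStep = testCase.getTestStepByName("Cleanup Data")   // assumed step name
if (cleanupStep != null) {
    cleanupStep.run(testRunner, context)                       // always execute the clean-up step
}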
Alternatively, you can create your own soft assertions, which do not stop the test case when a single check fails. For example, declare a list:
def err = []
Then, whenever there is an error, record it:
err.add("Values did not match")
At the end, report and assert on the collected errors:
log.info err
assert err.size() == 0, "There were errors: " + err
This way you can capture errors along the way and do the actual assertions at the end, or alternatively you can use the teardown script provided by SoapUI.

VSTS Test fails but vstest.console passes; the assert executes before the code for some reason?

Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The thing being executed is a test assembly in the form of a .dll. A lot of these tests call an API.
In the problematic method there are two API calls with an await on them: one to write a record to an external interface, and another to extract all records and read back the last one, both via the API. The test simply checks whether writing the last record succeeded in an end-to-end context, which is why there is both a write and then a read.
If we execute the test in Visual Studio, everything works as expected. I also tested it manually via command lining vstest.console.exe, and the expected results always come out as well.
However, when it comes to the VS Test task in VSTS, it fails for some reason. We've been trying to figure it out, and eventually we got to the point of printing the list returned by the 'read' part. It turns out the record we just inserted isn't in the data we pulled, yet if we check the external interface through a different method, we can confirm the write actually happened. What gives? Why is VSTest reading what looks like an outdated set of records?
We also noticed two things:
1.) For the tests that passed, none of the Console.WriteLine outputs appear in the logs; they only show up for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE the lines are printed! And even then, the printing should happen after reading the list of records, yet when the prints do appear we're still missing the record we just wrote.
Is there some bottom-to-top behaviour we're missing here? It really seems as if VSTS's vstest is executing the assert before the actual code. The TestMethods do run in the right order, though (the 4th test written top-to-bottom in the code is executed 4th rather than 4th from last), and we need them to run in order because some of the later tests depend on the earlier ones succeeding.
Anything we're missing here? I'd post the source code, but there are a bunch of things I'd need to scrub first.
It turns out we were sorely misunderstanding what 'await' does. We're now using .Wait() instead for the culprit call, and we'll also go back through the other tests to check them for the same issue.
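A rough sketch of the change described (the client, record, and method names are assumed, not taken from the question):

// Blocking on the tasks instead of relying on await for the problematic calls.
[TestMethod]
public void WriteRecord_ThenReadItBack()
{
    var record = BuildTestRecord();                          // assumed helper
    apiClient.WriteRecordAsync(record).Wait();               // block until the write has completed
    var records = apiClient.ReadAllRecordsAsync().Result;    // block until the read-back has completed
    records.Last().Should().Be(record);                      // FluentAssertions-style check, as in the question
}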

How to stop a Serenity JBehave test and set the test outcome

I'm using Serenity Core and JBehave. I would like to stop the test and set its outcome to PASS based on some condition; if the condition fails, the test continues.
How can I stop a JBehave test during suite execution, mark it as PASS, and then have the suite continue with the next test?
To the best of my knowledge, JBehave is designed to optionally stop a test upon failure only. By default it will fail an entire scenario and continue with any other scenarios in the story as applicable.
In situations like this, I believe your only solution is in the step methods themselves: check your "pass" condition in the step method and continue the code only if that condition fails. It's also possible that your story's scenario definition is not concise enough; keep it to a simple Given, When, Then instead of breaking the steps up in too detailed a manner.
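A minimal sketch of that idea (the step text and helper names are assumed):

import org.jbehave.core.annotations.When;

public class ShiftSteps {

    @When("the shift is started")
    public void startShift() {
        if (conditionAlreadySatisfied()) {   // assumed helper for the "pass" condition
            return;                          // nothing more to verify; the step, and the scenario, passes
        }
        runRemainingChecks();                // assumed helper for the rest of the checks
    }

    private boolean conditionAlreadySatisfied() { return false; }  // placeholder
    private void runRemainingChecks() { }                          // placeholder
}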

PowerShell wait for function call to complete

I am calling a series of PowerShell functions from a master script (each function is a test).
I specify the tests in an XML file and I want them to run in order.
The functions to call are organized in PowerShell module files (.psm1). The master script calls Import-Module as needed and then calls the function via something like this...
$newResults = & "$runFunction" @ARGS
or this...
$newResults = Invoke-Expression $runFunctionWithArgs
I have gotten both to work just fine and the XML file parsing invokes these commands in the correct order.
Problem: The tests are apparently launched asynchronously so that the first test I launch does not necessarily get invoked and complete before the second test is invoked.
Note, the tests are functions in a PowerShell module and not commands so I do not think that Start-Process will work (but please tell me if you know how to make that work).
More Details:
It would take too much to add all the code, but essentially what each function call does is create a hashtable with one or more "TestResult" objects. "TestResult" has things like Success codes and a TimeStamp. Each test does things that take different amounts of time, but all synchronous. I would expect the timestamps to be the same order that I called each test, especially since the first thing each test does is get the timestamp so it should not depend on what the test does. When I run in the ISE, everything goes in order. When I run in the command window, the timestamps do not match my expected order.
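For illustration, each test function returns something along these lines (a sketch; the function and property names are assumed):

# Sketch of a single test function (names assumed). Everything in it is synchronous.
function Test-Example {
    $timeStamp = Get-Date                  # taken first, so the timestamps should follow invocation order
    # ... the actual test work goes here ...
    return @{
        "Example" = [pscustomobject]@{
            Success   = $true
            TimeStamp = $timeStamp
        }
    }
}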
Workaround:
My working theory is still that PowerShell is somehow parallelizing the calls. I can get consistent results by making each invocation depend on the result of the previous call. It is a dummy check, because I know the condition I test will always be true, but PowerShell doesn't know that:
if ($newResults.Count -ne [Long]::MaxValue) { $newResults = & "$runFunction" @ARGS }
As far as PowerShell can tell, it needs the previous call's result count before it can evaluate the condition, so it has to wait for that call to complete.
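A sketch of how that workaround might look in the master script's loop (the variable names here are assumed, not from the question):

# Master-script loop (sketch). The dummy check ties each invocation to the previous
# call's result, which forces the previous test to finish before the next one starts.
$allResults = @{}                                     # assumed: collected results, keyed by test name
$newResults = @{}
foreach ($test in $testsFromXml) {                    # assumed: tests parsed from the XML file, in order
    Import-Module $test.Module -Force
    if ($newResults.Count -ne [Long]::MaxValue) {     # always true, but makes this call depend on the last one
        $newResults = & $test.Function
    }
    $allResults[$test.Name] = $newResults
}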