Python Pytest: skip the rest / part of code in the function with @skip decorator - pytest

Does pytest allow skipping not the whole test (function), but only a part of the code inside the function?
What I want (usage example):
def test_fill(my_dict: dict):
    assert all(v is None for v in my_dict.values())
    my_dict.fill()
    # Temporary check for "foo" values
    assert all(v is not None for v in my_dict.values())
    # Should skip the code below
    pytest.mark.skip(reason='Need values setup')
    # The real checks with exact values are here (skipped for now)
    assert my_dict['key_1'] == 1    # Part of future test
    assert my_dict['key_2'] == 10   # Part of future test
    assert my_dict['key_3'] == 100  # Part of future test
How pytest.mark.skip is supposed to work here:
It could raise an exception and quietly catch it.
Then I would see it in the final test results output, just like a regular skip.
Of course I can easily comment the code out, put it in an if branch, or skip the whole test with the @pytest.mark.skip decorator,
but none of that is reflected in the test output, and it is easy to forget about this weak test.

Skipping a test from within the test makes sense if the information needed to decide whether to skip it is only available inside the test. This can easily be done using pytest.skip:
def test_something():
    if not some_condition():
        pytest.skip("Condition not fulfilled")
    # do the test
This will skip the test the same way a pytest.mark.skipif decorator does, i.e. mark the test as skipped in the output and display the given skip reason.
In most cases (e.g. when the skip condition can be defined outside of the test) the decorator version can be used. From the documentation:
It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies.
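For example, a minimal sketch of the decorator version (the platform condition and reason are only illustrative):
import sys
import pytest

@pytest.mark.skipif(sys.platform == 'win32', reason='does not run on Windows')
def test_something_unix_only():
    # On Windows the test is reported as skipped instead of being run.
    ...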
For the sake of completeness: in unittest this is also possible by using TestCase.skipTest.
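Applied to the test from the question, calling pytest.skip() right where the unfinished checks begin would stop execution there and report the whole test as skipped with the given reason (a sketch; my_dict and fill() are taken from the question):
import pytest

def test_fill(my_dict: dict):
    assert all(v is None for v in my_dict.values())
    my_dict.fill()
    # Temporary check for "foo" values
    assert all(v is not None for v in my_dict.values())
    # Everything below this call is not executed; the test shows up as skipped.
    pytest.skip('Need values setup')
    assert my_dict['key_1'] == 1
    assert my_dict['key_2'] == 10
    assert my_dict['key_3'] == 100
Note that the whole test is reported as skipped, even though the assertions above the pytest.skip() call did run.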

Do we have something similar to Pass, Continue and Break statement in Karate? [duplicate]

Find the example here.
def a = condition ? " karate match statement " : "karate match statement"
Is it possible to do something like this??
This is not recommended practice for tests because tests should be deterministic.
The right thing to do is:
craft your request so that the response is 100% predictable; do not worry about code duplication, this is sometimes necessary for tests
ignore the dynamic data if it is not relevant to the Scenario
use conditional logic to set "expected value" variables instead of complicating your match logic
use self-validation expressions or schema-validation expressions for specific parts of the JSON
use the if keyword and call a second feature file - or you can even set the name of the file to call dynamically via a variable
in some cases karate.abort() can be used to conditionally skip / exit early
That said, if you really insist on doing this in the same flow, Karate allows you to do a match via JS from 0.9.6.RC4 onwards.
See this thread for details: https://github.com/intuit/karate/issues/1202#issuecomment-653632397
karate.match() will return JSON in the form { pass: '#boolean', message: '#string' }
If none of the above options work, that means you are doing something really complicated, so write Java interop / code to handle it.

What is the NUnit test name template for the test fixture arguments?

So {a} refers to the test case arguments, but in the full name of the test case we can see the test fixture arguments. For example:
C:\DFDeploymentSmokeTests\LocalTestProfiles> $xml = [xml](cat ..\TestResults\CSTests.xml)
C:\DFDeploymentSmokeTests\LocalTestProfiles> $TestCase = $xml.SelectSingleNode('//test-case')
C:\DFDeploymentSmokeTests\LocalTestProfiles> $TestCase.name
SiteCheck
C:\DFDeploymentSmokeTests\LocalTestProfiles> $TestCase.fullname
Web.ForEachWebServer(nan4dfc1app01_10.192.78.221_smoketest.dayforce.com).SiteCheck
C:\DFDeploymentSmokeTests\LocalTestProfiles>
The nan4dfc1app01_10.192.78.221_smoketest.dayforce.com is the ToString() result of the Test Fixture argument and NUnit includes it in the fullname of a test case.
However, there does not seem to be a way to provide it in the --test-name-format command line parameter.
Or am I wrong and there is a way?
Clarification
I do not want to change the full name of a test, but just its name. My problem is with the test names under a fixture using TestFixtureSource. Indeed, suppose the fixture name is F, the tests under it are T1 and T2 and the fixture is invoked twice with arguments A1 and A2. The default test name pattern is {m}{a}, but {a} does not include the fixture parameters. So, the test report shows these test names (not full names):
T1
T2
T1
T2
This is how it shows in the Azure DevOps Tests (the Publish Tests plugin uses the test names when publishing the results)
I want to change the name to be equal to the full name, because the full names are:
F(A1).T1
F(A1).T2
F(A2).T1
F(A2).T2
I realize that if the name were F(A1).T1, then the full name would be F(A1).F(A1).T1, but since the UI does not show the full names, I can live with that.
The full name of a test case is always the name (default or set by you) appended to the full name of the containing class. There is no way to change this.
UPDATE: Based on your clarification, you want the test case name to include the parameters passed to the particular fixture instance. This is also impossible using the current "static" design.
[Using "static" and "dynamic" in a special NUnit-y way here. In a sense, all of this is dynamic, since it happens when you execute the runner. But we use it to mean "predetermined when the test is loaded (created, discovered)" as opposed to "determined at each test execution."]
At the time your tests are discovered (and named) no fixtures have been instantiated yet. The code that runs your TestCaseSource method is generating test names to be used for each instance of the test fixture. We could have done it differently, but... well, we didn't because nobody thought of this use case.
Sorry!
PS: There is a long-standing NUnit issue calling for the creation of (what we call) "dynamic" test cases, which could easily include the feature you are asking for.

pytest fixture with parametrization from another fixture

I am using pytest and would like to invoke a test function for a number of objects returned by a server, and for a number of servers.
The servers are defined in a YAML file and those definitions are provided as parametrization to a fixture "server_connection" that returns a Connection object for a single server. Due to the parametrization, it causes the test function to be invoked once for each server.
I am able to do this with a loop in the test function: There is a second fixture "server_objects" that takes a "server_connection" fixture as input and returns a list of server objects. The pytest test function then takes that second fixture and executes the actual test in a loop through the server objects.
Here is that code:
import pytest

SD_LIST = ...  # read list of server definitions from YAML file

@pytest.fixture(
    params=SD_LIST,
    scope='module'
)
def server_connection(request):
    server_definition = request.param
    return Connection(server_definition.url, ...)

@pytest.fixture(
    scope='module'
)
def server_objects(request, server_connection):
    return server_connection.get_objects()

def test_object_foo(server_objects):
    for server_object in server_objects:
        # Perform test for a single server object:
        assert server_object == 'foo'
However, the disadvantage is of course that a test failure causes the entire test function to end.
What I want to happen instead is that the test function is invoked for each single server object, so that a test failure for one object does not prevent the tests on the other objects. Ideally, I'd like to have a fixture that provides a single server object, that I can pass to the test function:
...

@pytest.fixture(
    scope='module'
)
def server_object(request, server_connection):
    server_objects = server_connection.get_objects()
    # TBD: Some magic to parametrize this fixture with server_objects

def test_object_foo(server_object):
    # Perform test for a single server object:
    assert server_object == 'foo'
I have read through all pytest docs regarding fixtures but did not find a way to do this.
I know about pytest hooks and have used e.g. pytest_generate_tests() before, but I did not find a way how pytest_generate_tests() can access the values of other fixtures.
Any ideas?
Update: Let me add that I also searched SO for this, but did not find an answer. I specifically looked at:
pytest fixture of fixtures
How to parametrize a Pytest fixture
py.test: Pass a parameter to a fixture function
initing a pytest fixture with a parameter

how to use forAll in scalatest to generate only one object of a generator?

I'm working with ScalaTest and ScalaCheck, also working with FeatureSpec.
I have a generator class that generates objects for me and looks something like this:
object InvoiceGen {
  def myObj = for {
    country <- Gen.oneOf(Seq("France", "Germany", "United Kingdom", "Austria"))
    type <- Gen.oneOf(Seq("Communication", "Restaurants", "Parking"))
    amount <- Gen.choose(100, 4999)
    number <- Gen.choose(1, 10000)
    valid <- Arbitrary.arbitrary[Boolean]
  } yield SomeObject(country, type, "1/1/2014", amount, number.toString, 35, "something", documentTypeValid, valid, "")
}
Now, I have the testing class which works with FeatureSpec and everything that I need to run the tests.
In this class I have scenarios, and in each scenario I want to generate a different object.
The thing is, from what I understand, that to generate an object it is better to use the forAll function, but forAll is not guaranteed to give you an object, so you can add minSuccessful(1) to make sure you get at least 1 object...
I did it like this and it works:
scenario("some scenario") {
  forAll(MyGen.myObj, minSuccessful(1)) { someObject =>
    Given("A connection to the system")
    loginActions shouldBe 'Connected
    When("something")
    //blabla
    Then("something should happened")
    //blabla
  }
}
But I'm not sure exactly what it means.
What I want is to generate an invoice for each scenario and do some actions on it...
I'm not sure why I should care whether the generation worked or didn't work... I just want a generated object to work with.
TL;DR: To get one object, and only one, use myObj.sample.get. Unless your generator is doing something fancy, that's perfectly safe and won't blow up.
I presume that your intention is to run some kind of integration/acceptance test with some randomly generated domain object—in other words (ab-)use scalacheck as a simple data generator—and you hope that minSuccessful(1) would ensure that the test only runs once.
Be aware that this is not the case! ScalaCheck will run your test multiple times if it fails, to try and shrink the input data to a minimal counterexample.
If you'd like to ensure that your test runs only once you must use sample.
However, if running the test multiple times is fine, prefer minSuccessful(1) to "succeed fast" but still profit from minimized counterexamples in case the test fails.
Gen.sample returns an option because generators can fail:
ScalaCheck generators can fail, for instance if you're adding a filter (listingGen.suchThat(...)), and that failure is modeled with the Option type.
But:
[…] if you're sure that your generator never will fail, you can simply call Option.get like you do in your example above. Or you can use Option.getOrElse to replace None with a default value.
Generally if your generator is simple, i.e. does not use generators that could fail and does not use any filters on its own, it's perfectly safe to just call .get on the option returned by .sample. I've been doing that in the past and never had problems with it. If your generators frequently return None from .sample they'd likely make scalacheck fail to successfully generate values as well.
If all that you want is a single object use Gen.sample.get.
minSuccessful has a very different meaning: It's the minimal number of successful tests that scalacheck runs—which by no means implies
that scalacheck takes only a single value out of the generator, or
that the test runs only once.
With minSuccessful(1) scalacheck wants one successful test. It'll take samples out of the generator until the test runs at least once—i.e. if you filter the generated values with whenever in your test body scalacheck will take samples as long as whenever discards them.
If the test passes scalacheck is happy and won't run the test a second time.
However, if the test fails, scalacheck will try to produce a minimal example that fails the test. It'll shrink the input data and re-run the test as long as it fails, and then provide you with the minimized counterexample rather than the actual input that triggered the initial failure.
That's an important property of property testing as it helps you to discover bugs: The original data is frequently too large to lend itself for debugging. Minimizing it helps you discover the piece of input data that actually triggers the failure, i.e. corner cases like empty strings that you didn't think of.
I think the way you want to use Scalacheck (generate only one object and execute the test for it) defeats the purpose of property-based testing. Let me explain a bit in detail:
In classical unit-testing, you would generate your system under test, be it an object or a system of dependent objects, with some fixed data. This could e.g. be strings like "foo" and "bar" or, if you needed a name, you would use something like "John Doe". For integers and other data, you can also randomly choose some values.
The main advantage is that these are "plain" values—you can directly see them in the code and correlate them with the output of a failed test. The big disadvantage is that the tests will only ever run with the values you specified, which in turn means that your code is also only tested with these values.
In contrast, property-based testing allows you to just describe what the data should look like (e.g. "a positive integer", "a string of maximum 20 characters"). The testing framework will then—with the help of generators—generate a number of matching objects and execute the test for all of them. This way, you can be more confident that your code will actually be correct for different inputs, which after all is the purpose of testing: to check if your code does what it should for the possible inputs.
I never really worked with Scalacheck, but a colleague explained it to me that it also tries to cover edge-cases, e.g. putting in a 0 and MAX_INT for a positive integer, or an empty string for the aforementioned string with max. 20 characters.
So, to sum it up: Running a property-based test only once for one generic object is the wrong thing to do. Instead, once you have the generator infrastructure in place, embrace the advantage you then have and let your code be checked a lot more times!

py.test mixing fixtures and asyncio coroutines

I am building some tests for Python 3 code using py.test. The code accesses a PostgreSQL database using aiopg (an asyncio-based interface to Postgres).
My main expectations:
Every test case should have access to a new asyncio event loop.
A test that runs too long will stop with a timeout exception.
Every test case should have access to a database connection.
I don't want to repeat myself when writing the test cases.
Using py.test fixtures I can get pretty close to what I want, but I still have to repeat myself a bit in every asynchronous test case.
This is what my code looks like:
@pytest.fixture(scope='function')
def tloop(request):
    # This fixture is responsible for getting a new event loop
    # for every test, and close it when the test ends.
    ...

def run_timeout(cor, loop, timeout=ASYNC_TEST_TIMEOUT):
    """
    Run a given coroutine with timeout.
    """
    task_with_timeout = asyncio.wait_for(cor, timeout)
    try:
        loop.run_until_complete(task_with_timeout)
    except futures.TimeoutError:
        # Timeout:
        raise ExceptAsyncTestTimeout()

@pytest.fixture(scope='module')
def clean_test_db(request):
    # Empty the test database.
    ...

@pytest.fixture(scope='function')
def udb(request, clean_test_db, tloop):
    # Obtain a connection to the database using aiopg
    # (That's why we need tloop here).
    ...

# An example for a test:
def test_insert_user(tloop, udb):
    @asyncio.coroutine
    def insert_user():
        # Do user insertion here ...
        yield from udb.insert_new_user(...
        ...

    run_timeout(insert_user(), tloop)
I can live with the solution that I have so far, but it can get cumbersome to define an inner coroutine and add the run_timeout line for every asynchronous test that I write.
I want my tests to look somewhat like this:
@some_magic_decorator
def test_insert_user(udb):
    # Do user insertion here ...
    yield from udb.insert_new_user(...
    ...
I attempted to create such a decorator in some elegant way, but failed. More generally, if my test looks like:
@some_magic_decorator
def my_test(arg1, arg2, ..., arg_n):
    ...
Then the produced function (After the decorator is applied) should be:
def my_test_wrapper(tloop, arg1, arg2, ..., arg_n):
    run_timeout(my_test(), tloop)
Note that some of my tests use other fixtures (besides udb for example), and those fixtures must show up as arguments to the produced function, or else py.test will not invoke them.
I tried using both wrapt and decorator python modules to create such a magic decorator, however it seems like both of those modules help me create a function with a signature identical to my_test, which is not a good solution in this case.
This can probably be solved using eval or a similar hack, but I was wondering if there is something elegant that I'm missing here.
I’m currently trying to solve a similar problem. Here’s what I’ve come up with so far. It seems to work but needs some clean-up:
# tests/test_foo.py
import asyncio

@asyncio.coroutine
def test_coro(loop):
    yield from asyncio.sleep(0.1)
    assert 0

# tests/conftest.py
import asyncio
import pytest

@pytest.yield_fixture
def loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    yield loop
    loop.close()

def pytest_pycollect_makeitem(collector, name, obj):
    """Collect asyncio coroutines as normal functions, not as generators."""
    if asyncio.iscoroutinefunction(obj):
        return list(collector._genfunctions(name, obj))

def pytest_pyfunc_call(pyfuncitem):
    """If ``pyfuncitem.obj`` is an asyncio coroutinefunction, execute it via
    the event loop instead of calling it directly."""
    testfunction = pyfuncitem.obj
    if not asyncio.iscoroutinefunction(testfunction):
        return
    # Copied from _pytest/python.py:pytest_pyfunc_call()
    funcargs = pyfuncitem.funcargs
    testargs = {}
    for arg in pyfuncitem._fixtureinfo.argnames:
        testargs[arg] = funcargs[arg]
    coro = testfunction(**testargs)  # Will not execute the test yet!
    # Run the coro in the event loop
    loop = testargs.get('loop', asyncio.get_event_loop())
    loop.run_until_complete(coro)
    return True  # TODO: What to return here?
So I basically let pytest collect asyncio coroutines like normal functions. I also intercept test execution for functions. If the to-be-tested function is a coroutine, I execute it in the event loop. It works with or without a fixture creating a new event loop instance per test.
Edit: According to Ronny Pfannschmidt, something like this will be added to pytest after the 2.7 release. :-)
Every test case should have access to a new asyncio event loop.
The test suite of asyncio uses unittest.TestCase. It uses the setUp() method to create a new event loop, and addCleanup(loop.close) closes the event loop automatically, even on error.
Sorry, I don't know how to write this with py.test if you don't want to use TestCase. But if I remember correctly, py.test supports unittest.TestCase.
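A minimal sketch of that TestCase-based approach (the coroutine under test is only a placeholder):
import asyncio
import unittest

class InsertUserTest(unittest.TestCase):
    def setUp(self):
        # A fresh event loop for every test; closed automatically, even on error.
        self.loop = asyncio.new_event_loop()
        self.addCleanup(self.loop.close)

    def test_something(self):
        # Placeholder coroutine; run it to completion on the per-test loop.
        self.loop.run_until_complete(asyncio.sleep(0.01))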
A test that runs too long will stop with a timeout exception.
You can use loop.call_later() with a function that raises a BaseException as a watchdog.
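A rough sketch of that watchdog idea (the timeout value and exception class are arbitrary, and it assumes the loop lets a BaseException raised from a callback propagate out of run_until_complete):
import asyncio

class TestTimeout(BaseException):
    """Raised by the watchdog when a test runs too long."""

def run_with_watchdog(loop, coro, timeout=5.0):
    def bail_out():
        raise TestTimeout('test exceeded %s seconds' % timeout)

    # Schedule the watchdog, run the coroutine, and always cancel the watchdog.
    handle = loop.call_later(timeout, bail_out)
    try:
        return loop.run_until_complete(coro)
    finally:
        handle.cancel()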