For some time I ran a test suite with the following fixtures in the top-level conftest.py. This file contains fixtures that have to be available to every test in the suite. They build on each other: one fixture requires the next, so the execution order is implicit. The fixtures basically yield objects required for connections.
@pytest.fixture(scope="session")
def A():
    yield A

@pytest.fixture(scope="session")
def B(A):
    yield B

@pytest.fixture(scope="session")
def C(B):
    yield C
Then at some point the setup stopped working: only the first fixture was executed.
@pytest.fixture(scope="session")
def A():
    yield A
We are currently trying to find out what exactly changed to cause this behaviour. We experimented with a pytest version change and with changes to the pytest.ini and __init__.py files, but so far we have not found the reason.
Does anyone have a hint?
Adding the autouse=True argument made the fixtures work again.
@pytest.fixture(scope="session", autouse=True)
def A():
    yield A

@pytest.fixture(scope="session", autouse=True)
def B(A):
    yield B

@pytest.fixture(scope="session", autouse=True)
def C(B):
    yield C
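Note that without autouse, session fixtures only run when some collected test requests them, directly or through another fixture. A minimal sketch of such a test (the test name is made up):
def test_connection(C):
    # requesting C pulls in B, which pulls in A, so the whole chain runs in order
    assert C is not None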
Why does this hang?
import cats.effect.IO
import cats.effect.unsafe.implicits.global
import com.typesafe.config.ConfigFactory
import io.circe.config.parser
import org.typelevel.log4cats._
import org.typelevel.log4cats.slf4j._
object ItsElenApp extends App:
  private val logger: SelfAwareStructuredLogger[IO] = LoggerFactory[IO].getLogger

  val ops = for
    _ <- logger.info("aa")
    _ <- IO(println("bcc"))
    _ <- logger.error("bb")
  yield ()

  ops.unsafeRunSync()
  println("end")
It prints:
INFO 2022-06-19 11:56:25,303 ItsElenApp - aa
bcc
and then keeps running. Is it the log4cats library, or am I using the App object in the wrong way? Or do I have to close an execution context?
The recommended way of running cats.effect.IO-based apps is to use cats.effect.IOApp (or cats.effect.ResourceApp):
object MyApp extends IOApp:
  // notice it takes List[String] rather than Array[String]
  def run(args: List[String]): IO[ExitCode] = ...
This runs the application, handles setting the exit code, etc. It shuts the app down when run finishes, which may be necessary if the default thread pool is non-daemon. If you don't want to use IOApp, you may need to shut the JVM down manually, also taking into account that an exception might have been thrown by .unsafeRunSync().
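For illustration, the question's program on top of IOApp.Simple might look roughly like this (a sketch; Slf4jLogger.getLogger is one way log4cats-slf4j can construct the logger and is an assumption here, not taken from the question):
import cats.effect.{IO, IOApp}
import org.typelevel.log4cats.SelfAwareStructuredLogger
import org.typelevel.log4cats.slf4j.Slf4jLogger

object ItsElenApp extends IOApp.Simple:
  private val logger: SelfAwareStructuredLogger[IO] = Slf4jLogger.getLogger[IO]

  // IOApp runs this and shuts the runtime down afterwards, so nothing hangs
  def run: IO[Unit] =
    for
      _ <- logger.info("aa")
      _ <- IO(println("bcc"))
      _ <- logger.error("bb")
    yield ()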
Extending App, on the other hand, is not recommended in general. Why? It uses the special DelayedInit mechanism, where the whole body of the class (its constructor) becomes lazy. That makes initialization harder to reason about, which is why the mechanism became deprecated/discouraged. If you are not using IOApp, it is better to implement something like:
object MyProgram:
  // Java requires this signature
  def main(args: Array[String]): Unit = ...
which in your case could look like
object ItsElenApp:
  private val logger: SelfAwareStructuredLogger[IO] = LoggerFactory[IO].getLogger

  def main(args: Array[String]): Unit =
    val ops = for
      _ <- logger.info("aa")
      _ <- IO(println("bcc"))
      _ <- logger.error("bb")
    yield ()

    // IOApp could spare you this
    try
      ops.unsafeRunSync()
    finally
      println("end")
      sys.exit(0)
It seems log4cats' logging functions are asynchronous, meaning they may run on a different thread rather than in sequence with the other operations.
That means the main thread can finish while the other threads stay open and never close.
If I use ops.unsafeRunTimed(2.seconds), everything executes and closes, but the later log lines only appear at the end of the timeout. It looks like the logging is somehow lazy and only completes when it is forced to; I'm not sure.
The documentation says you shouldn't use unsafeRunTimed in production code.
Of course, if you use IOApp, everything executes normally again.
But how would you write this cleanly in production code without an IOApp? I think there should be a clean way of telling log4cats to finish its operations, return the results of their async fibers, and close everything, all manually.
I have a test suite with an expensive fixture (it spins up a bunch of containers in a cluster), so I'd like to use a session-scoped fixture for it. However, it's configurable on several axes, and different subsets of tests need to test different subsets of the configuration space.
Here's a minimal demonstration of what I'm trying to do. By default, tests need to cover the combinations x=1,y=10 and x=2,y=10, but the tests in test_foo.py need x=3,y=10, so they override the x fixture:
conftest.py:
import pytest

@pytest.fixture(scope="session", params=[1, 2])
def x(request):
    return request.param

@pytest.fixture(scope="session", params=[10])
def y(request):
    return request.param

@pytest.fixture(scope="session")
def expensive(x, y):
    return f"expensive[{x}, {y}]"
test_bar.py:
def test_bar(expensive):
    assert expensive in {"expensive[1, 10]", "expensive[2, 10]"}
test_foo.py:
import pytest

@pytest.fixture(scope="session", params=[3])
def x(request):
    return request.param

def test_foo(expensive):
    assert expensive in {"expensive[3, 10]"}
When I run this, I get the following:
test_bar.py::test_bar[1-10] PASSED [ 33%]
test_foo.py::test_foo[3-10] FAILED [ 66%]
test_bar.py::test_bar[2-10] PASSED [100%]
=================================== FAILURES ===================================
________________________________ test_foo[3-10] ________________________________
expensive = 'expensive[1, 10]'
def test_foo(expensive):
> assert expensive in {"expensive[3, 10]"}
E AssertionError: assert 'expensive[1, 10]' in {'expensive[3, 10]'}
It appears to have reused the 1-10 fixture from test_bar for the 3-10 test in test_foo. Is that expected (some sort of matching by position in the parameter list rather than value), or a bug in pytest? Is there some way I can get it to do what I'm aiming for?
Incidentally, if I make x in test_foo.py non-parametric (just returning a hard-coded 3) it also fails, but in a slightly different way: it runs both test_bar tests first, then reuses the second fixture for the test_foo test.
The problem here is that the expensive fixture has session scope, and its parameters are only read once.
A workaround would be to move expensive to module scope, so it is evaluated for each module. That would work, but it would evaluate the fixture for every test module, even if several of them use the same parameters. If you have several test modules that use the same parameters, you could instead use two separate fixtures with different scopes and move the common code out, e.g.:
import pytest
...

def do_expensive_stuff(a, b):  # a and b are normal arguments, not fixtures
    # set up containers with parameters a and b...
    return f"expensive[{a}, {b}]"

@pytest.fixture(scope="session")
def expensive(x, y):
    yield do_expensive_stuff(x, y)

@pytest.fixture(scope="module")
def expensive_module(x, y):
    yield do_expensive_stuff(x, y)
You would use expensive_module in test_foo. This assumes, of course, that the expensive setup has to be done separately for each parameter set. The downside of this approach is having to use differently named fixtures.
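A sketch of what test_foo.py could then look like, keeping the x override from the question and only swapping the fixture name:
import pytest

@pytest.fixture(scope="session", params=[3])
def x(request):
    return request.param

def test_foo(expensive_module):
    assert expensive_module in {"expensive[3, 10]"}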
It would be nice if someone would come up with a cleaner approach...
I am testing an app that has user profiles.
Normally, I tear down the profile after each test,
but that is very slow, so I wanted the option to run the tests faster by
keeping the profile and only tearing down changes after each test.
This is what I have now, and it works fine:
@pytest.fixture(scope="session")
def session_scope_app():
    with empty_app_started() as app:
        yield app

@pytest.fixture(scope="session")
def session_scope_app_with_profile_loaded(session_scope_app):
    with profile_loaded(session_scope_app):
        yield session_scope_app

if TEAR_DOWN_PROFILE_AFTER_EACH_TEST:
    @pytest.fixture
    def setup(session_scope_app):
        with profile_loaded(session_scope_app):
            yield session_scope_app
else:
    @pytest.fixture
    def setup(session_scope_app_with_profile_loaded):
        with profile_state_preserved(session_scope_app_with_profile_loaded):
            yield session_scope_app_with_profile_loaded
This produces a fixture setup that, as far as other tests are concerned,
behaves the same way regardless of whether the profile is torn down after each test.
Now, I want to turn TEAR_DOWN_PROFILE_AFTER_EACH_TEST into a command line
option. How can I do this? Command line options are not yet available at the test collection stage,
and I can't just put the if inside the fixture function body, because the two variants of setup depend on different fixtures.
There are two ways of doing that, but first, let's add the command line option itself.
def pytest_addoption(parser):
    parser.addoption("--tear-down-profile-after-each-test",
                     action="store_true",
                     default=True)
    parser.addoption("--no-tear-down-profile-after-each-test", "-T",
                     action="store_false",
                     dest="tear_down_profile_after_each_test")
Now, we can either invoke fixtures dynamically, or create a tiny plugin that shuffles our fixtures.
Invoke the fixture dynamically
This is very simple. Instead of depending on a fixture via function arguments,
we can call request.getfixturevalue(name) from inside the fixture (so the fixture needs to take request as an argument).
@pytest.fixture
def setup(request, session_scope_app):
    if request.config.option.tear_down_profile_after_each_test:
        with profile_loaded(session_scope_app):
            yield session_scope_app
    else:
        session = request.getfixturevalue(
            session_scope_app_with_profile_loaded.__name__
        )
        with profile_state_preserved(session):
            yield session
(It's ok to depend on session_scope_app since session_scope_app_with_profile_loaded depends on it anyway.)
Pros: PyCharm is happy. Cons: you won't be seeing session_scope_app_with_profile_loaded in --setup-plan.
Make a simple plugin
Plugins have the benefit of having access to the configuration.
def pytest_configure(config):
    class Plugin:
        if config.option.tear_down_profile_after_each_test:
            @pytest.fixture
            def setup(self, session_scope_app):
                with profile_loaded(session_scope_app):
                    yield session_scope_app
        else:
            @pytest.fixture
            def setup(self, session_scope_app_with_profile_loaded):
                with profile_state_preserved(session_scope_app_with_profile_loaded):
                    yield session_scope_app_with_profile_loaded

    config.pluginmanager.register(Plugin())
Pros: you get an excellent --setup-plan. Cons: PyCharm won't recognize that setup is a fixture.
Having a test class and test cases like below:
class TestSomething:
    ...

    @pytest.fixture(autouse=True)
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    def test_abc_1(self):
        ...

    def test_abc_2(self):
        ...

    def test_def_1(self):
        ...

    def test_def_2(self):
        ...
The problem is that before_and_after_testcases() runs for every test case in the class. Is it possible to apply the fixture only to test cases with the abc pattern in the function name? The fixture is not supposed to run for test_def_xxx, but I don't know how to exclude those test cases.
An autouse=True fixture is automatically applied to all of the tests; to remove that auto-application, remove autouse=True,
but now the fixture isn't applied to any test!
To manually apply the fixture to the tests that need it, you can either (see the sketch after this list):
add the fixture's name as a parameter (if you need the value the fixture provides), or
decorate the tests that need the fixture with @pytest.mark.usefixtures('fixture_name_here')
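A minimal sketch of the question's class with autouse removed and the fixture applied only to the abc tests (setup() and cleanup() are the question's own helpers):
class TestSomething:

    @pytest.fixture
    def before_and_after_testcases(self):
        setup()
        yield
        cleanup()

    @pytest.mark.usefixtures("before_and_after_testcases")
    def test_abc_1(self):
        ...

    @pytest.mark.usefixtures("before_and_after_testcases")
    def test_abc_2(self):
        ...

    def test_def_1(self):
        ...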
Another approach is to split the one test class into multiple test classes, grouping the tests which need the particular auto-used fixture.
Disclaimer: I'm a pytest developer, though I don't think that's entirely relevant to this answer; SO just requires disclosure of affiliation.
I am minimally using pytest as a generic test runner for large automated integration tests against various API products at work, and I've been trying to find an equally generic example of a teardown function that runs on completion of any test, regardless of success or failure.
My typical use pattern is super linear and usually goes something like this:
def test_1():
    <logic>
    assert something

def test_2():
    <logic>
    assert something

def test_3():
    <logic>
    assert something
Occasionally, when it makes sense to do so, at the top of my script I toss in a setup fixture with autouse set to True that runs at the start of every script:
@pytest.fixture(scope="session", autouse=True)
def setup_something():
    testhelper = TestHelper
    testhelper.create_something(host="somehost", channel="somechannel")

def test_1():
    <logic>
    assert something

def test_2():
    <logic>
    assert something

def test_3():
    <logic>
    assert something
Up until recently, disposable Docker environments allowed me to get away with skipping the entire teardown process, but I'm in a bit of a pinch where one of those is not available right now. Ideally, without diverting from the linear pattern I've been using, how would I implement another pytest fixture that does something like:
@pytest.fixture
def teardown():
    testhelper = TestHelper
    testhelper.delete_something(thing=something)
when the run is completed?
Every fixture may have a teardown part:
@pytest.fixture
def something(request):
    # setup code

    def finalize():
        # teardown code
        pass

    request.addfinalizer(finalize)
    return fixture_result
Or as I usually use it:
@pytest.fixture
def something():
    # setup code
    yield fixture_result
    # teardown code
Note that in pytest pre-3.0, the decorator required for the latter idiom was @pytest.yield_fixture. Since 3.0, however, one can just use the regular @pytest.fixture decorator, and @pytest.yield_fixture is deprecated.
See the pytest documentation on fixture finalization for more.
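Applied to the question's session-scoped fixture, a minimal sketch using the yield style (TestHelper, create_something and delete_something are the question's own helpers, and it is assumed here that create_something returns the object to delete):
@pytest.fixture(scope="session", autouse=True)
def setup_something():
    testhelper = TestHelper
    # runs once, before the first test of the session
    something = testhelper.create_something(host="somehost", channel="somechannel")
    yield something
    # runs once, after the last test of the session, whether tests passed or failed
    testhelper.delete_something(thing=something)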
You can also use these hook functions in your conftest.py:
def pytest_runtest_setup(item):
    pass

def pytest_runtest_teardown(item):
    pass
See the pytest hook reference for the docs.
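For example, a minimal sketch of a conftest.py teardown hook that runs after every test, pass or fail (the print is a placeholder for real cleanup):
def pytest_runtest_teardown(item, nextitem):
    # called after each test; item.name identifies the test that just ran
    print(f"cleaning up after {item.name}")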