I have several unit tests in my pytest suite with the exact same boilerplate:
@use_cache
def test_something(self):
    _check_cache_connection()
    ...
    cache.clear()
Instead of copying this boilerplate into dozens of tests, I'd love to create a custom pytest marker called cache to do this for me. So the test would look like this:
@pytest.mark.cache
def test_something(self):
    ...
So far, I've come up with the following:
import pytest
from contextlib import ExitStack

@pytest.fixture(autouse=True)
def _use_cache(request):
    marker = request.node.get_closest_marker("cache")
    with ExitStack() as stack:
        if marker:
            stack.enter_context(use_cache())
            _check_cache_connection()
        yield
        if marker:
            cache.clear()
This seems to be working correctly but I feel like there may be a better way to do it. Does this look okay or does someone have a better implementation?
You can avoid ExitStack and the repeated if marker checks by separating an (auto-use) _use_cache_marker fixture from the actual _use_cache fixture, calling request.getfixturevalue in the former to run the latter:
import pytest

@pytest.fixture(autouse=True)
def _use_cache_marker(request):
    marker = request.node.get_closest_marker("cache")
    if marker:
        request.getfixturevalue("_use_cache")

@pytest.fixture()
def _use_cache():
    with use_cache():
        _check_cache_connection()
        yield
        cache.clear()
pytest-django does this for the django_db marker: https://github.com/pytest-dev/pytest-django/blob/1ad013e4bc612d89dcade9e0427c198566f05006/pytest_django/plugin.py#L460-L465
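One detail the snippets above do not show: recent pytest versions warn about unknown marks, so it is worth registering the custom cache marker, for example in conftest.py (a small sketch; the description text is just an example):

# conftest.py -- register the custom "cache" marker so pytest
# does not warn about an unknown mark
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "cache: run the test with the cache enabled"
    )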
Put the boilerplate code in a separate function and parametrize it with function arguments.
Then you can simply call this function from each test with suitable parameters.
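One way to read that suggestion, as a rough sketch (the helper name run_cached and its parameters are purely illustrative):

def run_cached(test_body, *args, **kwargs):
    # shared boilerplate: enable the cache, check the connection,
    # run the test-specific logic, then always clear the cache
    with use_cache():
        _check_cache_connection()
        try:
            return test_body(*args, **kwargs)
        finally:
            cache.clear()

def test_something():
    def body():
        ...  # the actual test logic and assertions
    run_cached(body)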
Works perfectly.
I am testing an app that has user profiles. Normally, I tear down the profile after each test, but that is very slow, so I wanted the option to run the tests faster by keeping the profile and only tearing down the changes after each test.
This is what I have now, and it works fine:
#pytest.fixture(scope="session")
def session_scope_app():
with empty_app_started() as app:
yield app
#pytest.fixture(scope="session")
def session_scope_app_with_profile_loaded(session_scope_app):
with profile_loaded(session_scope_app):
yield session_scope_app
if TEAR_DOWN_PROFILE_AFTER_EACH_TEST:
#pytest.fixture
def setup(session_scope_app):
with profile_loaded(session_scope_app):
yield session_scope_app
else:
#pytest.fixture
def setup(session_scope_app_with_profile_loaded):
with profile_state_preserved(session_scope_app_with_profile_loaded):
yield session_scope_app_with_profile_loaded
This produces a fixture setup that, as far as other tests are concerned,
behaves the same way regardless of whether the profile is torn down after each test.
Now, I want to turn TEAR_DOWN_PROFILE_AFTER_EACH_TEST into a command line
option. How can I do this? Command line options are not yet available at the test collection stage, and I can't just put the if into the fixture function body, because the two variants of setup depend on different fixtures.
There are two ways of doing that, but first, let's add the command line option itself.
def pytest_addoption(parser):
    parser.addoption("--tear-down-profile-after-each-test",
                     action="store_true",
                     default=True)
    parser.addoption("--no-tear-down-profile-after-each-test", "-T",
                     action="store_false",
                     dest="tear_down_profile_after_each_test")
Now, we can either invoke fixtures dynamically, or create a tiny plugin that shuffles our fixtures.
Invoke the fixture dynamically
This is very simple. Instead of depending on a fixture via function arguments,
we can call request.getfixturevalue(name) from inside the fixture.
@pytest.fixture
def setup(request, session_scope_app):
    if request.config.option.tear_down_profile_after_each_test:
        with profile_loaded(session_scope_app):
            yield session_scope_app
    else:
        session = request.getfixturevalue(
            session_scope_app_with_profile_loaded.__name__
        )
        with profile_state_preserved(session):
            yield session
(It's ok to depend on session_scope_app since session_scope_app_with_profile_loaded depends on it anyway.)
Pros: PyCharm is happy. Cons: you won't be seeing session_scope_app_with_profile_loaded in --setup-plan.
Make a simple plugin
Plugins have the benefit of having access to the configuration.
def pytest_configure(config):
    class Plugin:
        if config.option.tear_down_profile_after_each_test:

            @pytest.fixture
            def setup(self, session_scope_app):
                with profile_loaded(session_scope_app):
                    yield session_scope_app

        else:

            @pytest.fixture
            def setup(self, session_scope_app_with_profile_loaded):
                with profile_state_preserved(session_scope_app_with_profile_loaded):
                    yield session_scope_app_with_profile_loaded

    config.pluginmanager.register(Plugin())
Pros: You get an excellent --setup-plan. Cons: PyCharm won't recognize that setup is a fixture.
I'm a beginner with pytest. I just learned about fixtures and tried to do this:
My tests call functions I wrote, and they get their test data from a code-practice website.
Each test is for a particular page and has several sets of test data.
So, I want to use @pytest.mark.parametrize to parametrize my single test function.
Also, since the tests all perform similar operations, I want to turn the page object instantiation and the steps that get the test data from the page into a fixture.
# content of conftest.py
import pytest

from page_objects import problem_page


@pytest.fixture
def get_testdata_from_problem_page():
    def _get_testdata_from_problem_page(problem_name):
        page = problem_page.ProblemPage(problem_name)
        return page.get_sample_data()
    return _get_testdata_from_problem_page
# content of test_problem_a.py
import pytest

from page_objects import problem_page
from problem_func import problem_a


@pytest.mark.parametrize('input,expected', test_data)
def test_problem_a(get_testdata_from_problem_page):
    input, expected = get_testdata_from_problem_page("problem_a")
    assert problem_a.problem_a(input) == expected
Then I realized that, as written above, I can't parametrize the test with pytest.mark, because the test_data has to be available outside the test function.
Are there solutions for this? Thanks very much!
If I understand you correctly, you want to write one parameterized test per page. In this case you just have to write a function instead of a fixture and use that for parametrization:
import pytest

from page_objects import problem_page
from problem_func import problem_a


def get_testdata_from_problem_page(problem_name):
    page = problem_page.ProblemPage(problem_name)
    # returns a list of (input, expected) tuples
    return page.get_sample_data()


@pytest.mark.parametrize('input,expected',
                         get_testdata_from_problem_page("problem_a"))
def test_problem_a(input, expected):
    assert problem_a.problem_a(input) == expected
As you wrote, a fixture can only be used as a parameter to a test function or to another fixture, not in a decorator.
If you want to use the function to get test data elsewhere, just move it to a common module and import it. This can be some custom utility module, or you could put it into conftest.py, though you would still have to import it.
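For instance, a rough sketch of that layout (the module name testdata_helpers.py is purely an illustrative choice):

# testdata_helpers.py -- a plain module shared by the test files
from page_objects import problem_page

def get_testdata_from_problem_page(problem_name):
    page = problem_page.ProblemPage(problem_name)
    return page.get_sample_data()

In test_problem_a.py you would then write from testdata_helpers import get_testdata_from_problem_page and use it in @pytest.mark.parametrize exactly as shown above.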
Note also that the fixture you wrote does not do anything - it defines a local function that is not called and returns.
How do I detect if a method is called by unit test in ScalaTest?
Edit: sorry, I expressed what I wanted incorrectly. I have a code block in a method that takes very long to finish (I cannot mock it) and does not affect any logic. I want to skip that code block in the unit tests, so I need to know whether the method is being called by a unit test or by a normal run. If it is called by a unit test, I skip the block; otherwise, I let it run normally.
I have a simple workaround by adding a trait like this:
trait AppConfig {
  val isDebug: Boolean
}
and use it in the places where I need to check whether the code is running in debug mode:
class MyLogicClass {
  _: AppConfig =>

  def myMethod() = {
    if (isDebug) { ... }
  }
}
Use a code coverage library, such as scoverage. It will generate reports that indicate which parts of your code are exercised by unit tests.
I am minimally using pytest as a generic test runner for large automated integration tests against various API products at work, and I've been trying to find an equally generic example of a teardown function that runs on completion of any test, regardless of success or failure.
My typical use pattern is super linear and usually goes something like this:
def test_1():
    <logic>
    assert something

def test_2():
    <logic>
    assert something

def test_3():
    <logic>
    assert something
Occasionally, when it makes sense to do so, at the top of my script I toss in a setup fixture with autouse=True that runs at the start of every script:
#pytest.fixture(scope="session", autouse=True)
def setup_something():
testhelper = TestHelper
testhelper.create_something(host="somehost", channel="somechannel")
def test_1():
<logic>
assert something
def test_2():
<logic>
assert something
def test_3():
<logic>
assert something
Up until recently, disposable Docker environments have allowed me to get away with skipping the entire teardown process, but I'm in a bit of a pinch where one of those is not available right now. Ideally, without diverting from the linear pattern I've already been using, how would I implement another pytest fixture that does something like:
@pytest.fixture
def teardown():
    testhelper = TestHelper
    testhelper.delete_something(thing=something)
when the run is completed?
Every fixture may have a teardown part:
@pytest.fixture
def something(request):
    # setup code
    def finalize():
        # teardown code
        pass
    request.addfinalizer(finalize)
    return fixture_result
Or as I usually use it:
@pytest.fixture
def something():
    # setup code
    yield fixture_result
    # teardown code
Note that in pytest pre-3.0, the decorator required for the latter idiom was @pytest.yield_fixture. Since 3.0, however, one can just use the regular @pytest.fixture decorator, and @pytest.yield_fixture is deprecated.
See more here
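Applied to the question's example, setup and teardown can live in a single session-scoped, autouse fixture. Here is a sketch reusing the TestHelper calls from the question (it assumes create_something returns the object to delete later; adjust to the real API):

import pytest

@pytest.fixture(scope="session", autouse=True)
def setup_something():
    # setup: runs once, before the first test of the session
    testhelper = TestHelper
    something = testhelper.create_something(host="somehost", channel="somechannel")
    yield
    # teardown: runs once after the last test, even if tests failed
    testhelper.delete_something(thing=something)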
You can use these hook functions in your conftest.py:
def pytest_runtest_setup(item):
    # runs before each test item
    pass

def pytest_runtest_teardown(item):
    # runs after each test item, whether it passed or failed
    pass
See here for the docs.
I am trying to analyze Scala code written by someone else, and in doing so, I would like to be able to write Unit Tests (that were not written before the code was written, unfortunately).
Being a relative newbie to Scala, especially in the area of Futures, I am trying to understand the following line of code.
val niceAnalysis:Option[(niceReport) => Future[niceReport]] = None
Update:
The above line of code should be:
val niceAnalysis:Option[(NiceReport) => Future[NiceReport]] = None
- Where NiceReport is a case class
-----------Update ends here----------------
Since I am trying to mock up an Actor, I created this new Actor where I introduce my niceAnalysis val as a field.
The first problem I see with this "niceAnalysis" thing is that it looks like an anonymous function.
How do I "initialize" this val, or to give it an initial value.
My goal is to create a test in my test class, where I am going to pass in this initialized val value into my test actor's receive method.
My naive approach to accomplish this looked like:
val myActorUnderTestRef = TestActorRef(new MyActorUnderTest("None"))
IntelliJ does not like it either, and my SBT compile and test fail.
So, I need to understand the "niceAnalysis" declaration first and then understand how to give it an initial value. Please advise.
You are correct that this is a value that might contain a function from niceReport to Future[niceReport]. You can pass an anonymous function or just a function pointer. The easiest to understand might be the pointer, so I will provide that first; in the long run, however, the anonymous function is most likely the easier option, which I will show second:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def strToFuture(x: String) = Future { x } // merely wrap the string in a future
val foo = Option(strToFuture _)           // the trailing underscore eta-expands the method (needed on older Scala versions)
Alternatively, the one-liner is as follows:
val foo = Option((x: String) => Future { x })