Is it possible to detect if a mark has been excluded?
I use pytest to run some tests against an embedded target. For some of the test setups, I can control the supply power via an epdu.
For the setups with an epdu, I want to power down the test equipment when the test is finished.
For the setups without an epdu, the tests are run with -m "not power", but here it is also crucial that the power_on fixture does not try to communicate with the epdu:
@pytest.fixture(scope='session', autouse=True)
def power_on():
    # TODO: just return if called with `-m "not power"`
    power_on_test_equipment()
    yield
    power_off_test_equipment()

@pytest.mark.power_control
def test_something():
    power_something_off()
What I found is that request.keywords['power'] will be true if I run pytest with -m power, but it will not exist if I run without the mark or with -m "not power", which is not really helpful for my scenario.
I can solve the problem using two marks, like `-m "no_power and not power"`, but it does not seem very elegant.
One possibility is just to check for the command line argument. If you know that you are always passing it as -m "not power", you could do something like this:
@pytest.fixture(scope='session', autouse=True)
def power_on(request):
    power = 'not power' not in request.config.getoption('-m')
    if power:
        power_on_test_equipment()
    yield
    if power:
        power_off_test_equipment()
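With this in place, a plain pytest run powers the test equipment on and off as before, while running pytest -m "not power" deselects the power-marked tests and makes the autouse fixture skip both power calls, so the epdu is never touched.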
I have been racking my brain trying to figure out how --reuse-db works. I have a super-simple Django project with one model, Student, and the following test:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_1():
    Student.objects.create(name=1)
    assert Student.objects.all().count() == 1
When I run it for the first time with the command pytest --reuse-db, the test passes - and I am not surprised.
But when I run pytest --reuse-db a second time, I expect the db not to be destroyed and the test to fail, because I expect that Student.objects.all().count() == 2.
Am I misunderstanding the --reuse-db flag?
--reuse-db means to reuse the database between N tests within the same test run.
This flag has no bearing on running pytest twice.
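To illustrate (a sketch based on pytest-django's default behaviour, where every test marked with django_db runs inside a transaction that is rolled back afterwards), the created row never survives the test, with or without --reuse-db:
import pytest
from main.models import Student

@pytest.mark.django_db
def test_first():
    Student.objects.create(name=1)
    assert Student.objects.count() == 1

@pytest.mark.django_db
def test_second():
    # The row created in test_first was rolled back at the end of that test,
    # so the table is empty again - and the same holds across separate runs.
    assert Student.objects.count() == 0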
This is the real code from MLflow: https://github.com/mlflow/mlflow/blob/8a7659ee961c2a0d3a2f14c67140493a76d1e51d/tests/conftest.py#L42
@pytest.fixture
def test_mode_on():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
There are also multiple places where test_mode_on is used like this:
@pytest.mark.usefixtures(test_mode_on.__name__)
def test_safe_patch_propagates_exceptions_raised_outside_of_original_function_in_test_mode(
When I try to run any tests I get the following:
tests/test_version.py::test_is_release_version ERROR [100%]
==================================== ERRORS ====================================
Fixture "test_mode_on" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I want to understand what the original code was doing with yield from test_mode_on() and how to fix it.
Update:
I've tried to change the code to request the fixture, but got an error that test_mode_on has function scope while enable_test_mode_by_default_for_autologging_integrations has session scope.
@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations(test_mode_on):
    """
    Run all MLflow tests in autologging test mode, ensuring that errors in autologging patch code
    are raised and detected. For more information about autologging test mode, see the docstring
    for :py:func:`mlflow.utils.autologging_utils._is_testing()`.
    """
    yield from test_mode_on()
The intention obviously was to reuse a function-scoped fixture in a session-scoped fixture. Apparently, this was something that worked in old pytest versions.
In any recent pytest version, this is not possible (as you have noticed). If you cannot fix the MLflow tests, your only option is to use an old pytest version that still supports that - MLflow has pinned pytest to 3.2.1 (probably for that same reason).
Be aware that any pytest plugin you have installed will likely not work with that pytest version either, so you have to downgrade or remove the plugins, too.
This recent issue is probably related to the outdated pytest version, so there is a chance that this will be addressed in MLflow.
UPDATE:
Just realized that it would help to show how to fix this for a current pytest version. In current pytest you are not allowed to derive (or yield) from a fixture with a narrower scope, as this would often not work as expected. You can, however, move the fixture code into a generator function, and yield from that. So a working version could be something like:
def test_mode_on_gen():
    try:
        prev_env_var_value = os.environ.pop(_AUTOLOGGING_TEST_MODE_ENV_VAR, None)
        os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = "true"
        assert is_testing()
        yield
    finally:
        if prev_env_var_value:
            os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR] = prev_env_var_value
        else:
            del os.environ[_AUTOLOGGING_TEST_MODE_ENV_VAR]

@pytest.fixture
def test_mode_on():
    yield from test_mode_on_gen()

@pytest.fixture(autouse=True, scope="session")
def enable_test_mode_by_default_for_autologging_integrations():
    yield from test_mode_on_gen()
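As a usage sketch (assuming the fixtures above live in conftest.py), tests can still request the function-scoped test_mode_on explicitly, just as MLflow does with usefixtures, while the session-scoped autouse fixture covers everything else:
@pytest.mark.usefixtures("test_mode_on")
def test_runs_in_autologging_test_mode():
    # is_testing() reports True while the environment variable is set by the fixture.
    assert is_testing()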
I'm trying to run fixture tests separately from the classic unit tests.
To that end, I've marked all the fixture tests with the @pytest.mark.fixtures decorator, for example:
conftest.py
#pytest.fixture(scope="session")
def fs():
pass
test_something.py
@pytest.mark.fixtures
def test_xxx(fs):
    pass

@pytest.mark.fixtures
def test_yyy():
    pass
and then ran two pytest commands (within tox):
pytest -v -m fixtures --junitxml={toxinidir}/tests/output/pytest-fixtures.xml
pytest -v -m "not fixtures" --junitxml={toxinidir}/tests/output/pytest.xml
The problem is that the second pytest run still creates my session fixture, even though I will not use it, because I'm skipping the tests marked with the fixtures mark above.
How can I disable the fixture on the second "not fixtures" run (or skip the session-scoped fixture)?
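One possible approach (a sketch of my own, mirroring the command-line check from the first answer above; the fixture name fs and the marker name fixtures are taken from the question, and the setup/teardown calls are placeholders) is to let the fixture inspect the -m expression and skip its expensive setup on the "not fixtures" run:
@pytest.fixture(scope="session")
def fs(request):
    markexpr = request.config.getoption("-m") or ""
    if "not fixtures" in markexpr:
        # Second run: the marked tests are deselected, so skip the setup entirely.
        yield None
        return
    resource = create_expensive_resource()  # placeholder for the real setup
    yield resource
    resource.close()  # placeholder for the real teardown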
In pytest you can mark a test case with a tag.
@pytest.mark.windows
def test_will_fail():
    assert False
Now the above test case is marked with the tag 'windows'. Running pytest with pytest -m windows will execute only test cases which are marked with the tag 'windows'.
But what if I want to apply more than one tag? E.g. I would like to tag the above test case with both 'windows' and 'smoke'. How would I do that? (I haven't seen an example of that in the pytest docs.)
Those are simply Python decorators which you can stack:
@pytest.mark.smoke
@pytest.mark.windows
def test_will_fail():
    assert False
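You can then select on either mark, or combine them in a marker expression, e.g. pytest -m "smoke and windows" to run tests carrying both marks, or pytest -m "smoke or windows" for either.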
I know that a single test can be run by running, in sbt,
testOnly *class -- -n Tag
Is there a way of telling sbt/scalatest to run a single test without tags? For example:
testOnly *class -- -X 2
it would mean "run the second test in the class. Whatever it is". We have a bunch of tests and no one bothered to tag them, so is there a way to run a single test without it having a tag?
This is now supported (since ScalaTest 2.1.3) within interactive mode:
testOnly *MySuite -- -z foo
to run only the tests whose name includes the substring "foo".
For exact match rather than substring, use -t instead of -z.
If you run it from the command line, it should be passed as a single argument to sbt:
sbt 'testOnly *MySuite -- -z foo'
I wanted to add a concrete example to accompany the other answers.
You need to specify the name of the class that you want to test. For example, in a Play project with a LoginServiceSpec class, you can test just the Login tests by running the following command from the SBT console:
test:testOnly *LoginServiceSpec
If you are running the command from outside the SBT console, you would do the following:
sbt "test:testOnly *LoginServiceSpec"
I don't see a way to run a single untagged test within a test class, but I am providing my workflow, since it seems useful for anyone who runs into this question.
From within an sbt session:
test:testOnly *YourTestClass
(The asterisk is a wildcard; you could also specify the full path, com.example.specs.YourTestClass.)
All tests within that test class will be executed. Presumably you're most concerned with failing tests, so correct any failing implementations and then run:
test:testQuick
... which will only execute tests that failed. (Repeating the most recently executed test:testOnly command will be the same as test:testQuick in this case, but if you break up your test methods into appropriate test classes you can use a wildcard to make test:testQuick a more efficient way to re-run failing tests.)
Note that the unit of selection for testOnly in ScalaTest is the test class, not a specific test method, so all untagged methods in the class are executed.
If you have too many test methods in a test class, break them up into separate classes or tag them appropriately. (This could be a signal that the class under test violates the single responsibility principle and could use refactoring.)
Just to simplify Tyler's example: the test: prefix is not needed.
So according to his example:
In the sbt-console:
testOnly *LoginServiceSpec
And in the terminal:
sbt "testOnly *LoginServiceSpec"
Here's the ScalaTest page on using the runner, and the extended discussion of the -t and -z options.
This post shows what commands work for a test file that uses FunSpec.
Here's the test file:
package com.github.mrpowers.scalatest.example

import org.scalatest.FunSpec

class CardiBSpec extends FunSpec {

  describe("realName") {
    it("returns her birth name") {
      assert(CardiB.realName() === "Belcalis Almanzar")
    }
  }

  describe("iLike") {
    it("works with a single argument") {
      assert(CardiB.iLike("dollars") === "I like dollars")
    }

    it("works with multiple arguments") {
      assert(CardiB.iLike("dollars", "diamonds") === "I like dollars, diamonds")
    }

    it("throws an error if an integer argument is supplied") {
      assertThrows[java.lang.IllegalArgumentException] {
        CardiB.iLike()
      }
    }

    it("does not compile with integer arguments") {
      assertDoesNotCompile("""CardiB.iLike(1, 2, 3)""")
    }
  }
}
This command runs the four tests in the iLike describe block (from the SBT command line):
testOnly *CardiBSpec -- -z iLike
You can also use quotation marks, so this will also work:
testOnly *CardiBSpec -- -z "iLike"
This will run a single test:
testOnly *CardiBSpec -- -z "works with multiple arguments"
This will run the two tests that start with "works with":
testOnly *CardiBSpec -- -z "works with"
I can't get the -t option to run any tests in the CardiBSpec file. This command doesn't run any tests:
testOnly *CardiBSpec -- -t "works with multiple arguments"
Looks like the -t option works when tests aren't nested in describe blocks. Let's take a look at another test file:
class CalculatorSpec extends FunSpec {
  it("adds two numbers") {
    assert(Calculator.addNumbers(3, 4) === 7)
  }
}
-t can be used to run the single test:
testOnly *CalculatorSpec -- -t "adds two numbers"
-z can also be used to run the single test:
testOnly *CalculatorSpec -- -z "adds two numbers"
See this repo if you'd like to run these examples. You can find more info on running tests here.