How to get a reference to a fixture within the test function when using pytest usefixtures

The pytest usefixtures marker is a great feature.
I would like to apply it at module level and have all tests in my test module use that fixture.
However, I also need a reference to the fixture in order to retrieve some data, and that reference is not available within the test function.
Is usefixtures only meant for fixtures with side effects?
If so, can this be changed so that a test function can reference the data provided by the fixture?
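For illustration, a minimal sketch of the behaviour in question (the fixture name my_data is hypothetical): usefixtures only triggers the fixture, while naming the fixture as a test parameter is what makes its return value available.

    import pytest

    # Applied module-wide: every test below runs the my_data fixture.
    pytestmark = pytest.mark.usefixtures("my_data")

    @pytest.fixture
    def my_data():
        return {"answer": 42}

    def test_side_effect_only():
        # The fixture ran, but its return value is not reachable here.
        ...

    def test_with_reference(my_data):
        # Requesting the fixture by name exposes its value.
        assert my_data["answer"] == 42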

Related

Why Brownie Test Class has no access to imported variables?

I'm writing a Brownie test like the one below:

    from brownie import accounts

    class Test1:
        my_account = accounts[0]

        def test_fn(self):
            ...

The test result says "my_account = accounts[0], list index out of range".
But if I put my_account = accounts[0] inside test_fn like below, then the test runs fine:

    from brownie import accounts

    class Test1:
        def test_fn(self):
            my_account = accounts[0]
            ...
Why is that? What is the pytest scope for imported variables?
I tried searching for anything related to pytest variable scope, but nothing matched my question.
I cannot reproduce your example because I do not have any accounts in either case.
However, I think your issue is happening because this accounts variable must be filled with values, as described on the account management page in Brownie's docs.
Class attribute definitions are executed during the collection stage, and tests are executed after the collection stage has finished. So if accounts is filled somewhere else in the code (e.g. in an autouse fixture or another test), it will not be accessible during pytest's collection stage.
Maybe you can provide some more details about your case: how are you generating these accounts in your successful case?
UPD:
According to the eth-brownie package source code, running brownie test executes pytest with the pytest-brownie plugin enabled, like this:

    pytest.main(pytest_args, ["pytest-brownie"])
So this plugin declares some hooks.
One of them is pytest_collection_finish, which is declared in the PytestBrownieRunner class; this class is used to construct the plugin for single-threaded test execution. According to the pytest docs, this hook is called after all tests have been collected.
This hook executes the following code:

    if not outcome.get_result() and session.items and not brownie.network.is_connected():
        brownie.network.connect(CONFIG.argv["network"])
I believe this is where it adds the information about your configured network, including accounts.
So here is the difference:
When you try to reach accounts during tests, the code above has already been executed.
However, when you try to reach accounts during class definition, no hooks have been executed yet, so there is no information about your network.
Maybe I am wrong, but I assume your issue is related to the order of pytest's execution stages.
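A minimal sketch of the workaround this suggests (the fixture name my_account is my own): defer the accounts[0] lookup to a fixture, which runs at test time, after pytest-brownie's pytest_collection_finish hook has connected the network.

    import pytest
    from brownie import accounts

    class Test1:
        @pytest.fixture
        def my_account(self):
            # Evaluated at test time, after the network is connected,
            # so accounts is populated by now.
            return accounts[0]

        def test_fn(self, my_account):
            ...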

Marking a Pytest fixture instead of all the tests using the fixture

Is there a way to define a mark in a pytest fixture?
I am trying to disable slow tests when I specify -m "not slow" in pytest.
I have been able to disable individual tests, but not a fixture that I use for multiple tests.
My fixture code looks like this:

    @pytest.fixture()
    @pytest.mark.slow
    def postgres():
        conn = ...  # get a postgres connection (or something else that uses a slow resource)
        yield conn
and several tests have this general form:

    def test_run_my_query(postgres):
        # Use my postgres connection to insert test data, then run a test
        assert ...
I found the following comment in https://docs.pytest.org/en/latest/mark.html (updated link):
"Marks can only be applied to tests, having no effect on fixtures." Is the reason for this comment that fixtures are essentially function calls and marks can only be specified at compile time?
Is there a way to specify that all tests using a specific fixture (postgres in this case) are marked as slow without putting @pytest.mark.slow on each test?
It seems you already found the answer in the docs. Subscribe to https://github.com/pytest-dev/pytest/issues/1368 for watching this feature, it might be added in a later pytest version.
For now, you can sort of hack around it:

    # in conftest.py
    def pytest_collection_modifyitems(items):
        for item in items:
            if 'postgres' in getattr(item, 'fixturenames', ()):
                item.add_marker("slow")
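With that hook in place, every collected test that requests the postgres fixture carries the slow mark, so running pytest -m "not slow" deselects them, as the question intended.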

How to run test fixtures in order across multiple classes?

I have three classes, each containing multiple tests. Each class has the TestFixture attribute. I need each class to run its tests in order (e.g. TestFixture1 runs all of its tests, then TestFixture2 runs all of its tests, and finally TestFixture3 runs all of its tests). How can I accomplish this?
Use the OrderAttribute on each fixture, specifying the order in which you want the fixtures to run. Use the same attribute on each test method, specifying the order the test should run within the fixture.
See OrderAttribute in the docs for more info.
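A minimal sketch of the arrangement (class and method names are hypothetical, and fixture-level ordering needs a reasonably recent NUnit 3.x):

    using NUnit.Framework;

    [TestFixture, Order(1)]
    public class TestFixture1
    {
        [Test, Order(1)]
        public void FirstTest() { /* ... */ }

        [Test, Order(2)]
        public void SecondTest() { /* ... */ }
    }

    [TestFixture, Order(2)]
    public class TestFixture2
    {
        [Test]
        public void AnotherTest() { /* ... */ }
    }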

Delay-loading TestCaseSource in NUnit

I have some NUnit tests which use a TestCaseSource function. Unfortunately, the TestCaseSource function that I need takes a long time to initialize, because it scans a folder tree recursively to find all of the test images that will be passed into the test function. (Alternatively, it could load the file list from an XML file each time it runs, but automatic discovery of new image files is still a requirement.)
Is it possible to specify an NUnit attribute together with TestCaseSource such that NUnit does not enumerate the test cases (does not call the TestCaseSource function) until either the user clicks on the node, or until the test suite is being run?
The need to get all test images stored in a folder is a project requirement because other people who do not have access to the test project will need to add new test images to the folder, without having to modify the test project's source code. They would then be able to view the test result.
Some dogmatic unit-testers may counter that I am using NUnit to do something it's not supposed to do. I have to admit that I have to meet a requirement, and NUnit is such a great tool with a great GUI that satisfies most of my requirements, such that I do not care about whether it is proper unit testing or not.
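For illustration, a sketch of the kind of source being described (the folder path and names are hypothetical); NUnit invokes the source method while loading tests, which is why the recursive scan slows down discovery rather than execution:

    using System.Collections.Generic;
    using System.IO;
    using NUnit.Framework;

    public class ImageTests
    {
        // NUnit enumerates this at load time, before any test runs.
        public static IEnumerable<TestCaseData> ImageFiles()
        {
            foreach (var path in Directory.EnumerateFiles(
                         @"C:\TestImages", "*.png", SearchOption.AllDirectories))
                yield return new TestCaseData(path);
        }

        [TestCaseSource(nameof(ImageFiles))]
        public void ProcessImage(string path)
        {
            Assert.That(File.Exists(path));
        }
    }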
Additional info (from NUnit documentation)
Note on Object Construction
NUnit locates the test cases at the time the tests are loaded, creates instances of each class with non-static sources and builds a list of tests to be executed. Each source object is only created once at this time and is destroyed after all tests are loaded.
If the data source is in the test fixture itself, the object is created using the appropriate constructor for the fixture parameters provided on the TestFixtureAttribute or the default constructor if no parameters were specified. Since this object is destroyed before the tests are run, no communication is possible between these two phases - or between different runs - except through the parameters themselves.
It seems the purpose of loading the test cases up front is to avoid communication (or side effects) between TestCaseSource and the execution of the tests. Is this true? Is it the only reason test cases must be loaded up front?
Note:
A modification of NUnit was needed, as documented in http://blog.sponholtz.com/2012/02/late-binded-parameterized-tests-in.html
There are plans to introduce this option in later versions of NUnit.
I don't know of a way to delay-load test names in the GUI. My recommendation would be to move those tests to a separate assembly. That way, you can quickly run all of your other tests, and load the slower exhaustive tests only when needed.

NUnit SetUpFixture attribute equivalent in xUnit?

In NUnit, SetUpFixture allowed me to run some code before any tests. Is there anything like that when using xUnit?
From the NUnit documentation:
This is the attribute that marks a class that contains the one-time setup or teardown methods for all the test fixtures under a given namespace.
xUnit's comparison table shows that where you would use [TestFixtureSetUp] in NUnit, you make your test fixture class implement IUseFixture<T>.
If [TestFixtureSetUp] isn't the attribute you're looking for, then the header at the beginning of the compatibility table indicates that there is no equivalent:
Note: any testing framework attributes that are not in this list have no corresponding attribute in xUnit.net.
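A minimal sketch of that pattern (class names are hypothetical; this is the classic xUnit.net 1.x interface, which later versions replaced with IClassFixture<T> and ICollectionFixture<T>):

    using System;
    using Xunit;

    // Created once per test class and shared by its tests; put one-time
    // setup in the constructor and teardown in Dispose.
    public class DatabaseFixture : IDisposable
    {
        public DatabaseFixture() { /* one-time setup */ }
        public void Dispose() { /* one-time teardown */ }
    }

    public class MyTests : IUseFixture<DatabaseFixture>
    {
        DatabaseFixture fixture;

        // xUnit.net calls this with the shared instance before each test.
        public void SetFixture(DatabaseFixture data)
        {
            fixture = data;
        }

        [Fact]
        public void SomeTest()
        {
            Assert.NotNull(fixture);
        }
    }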