Why does a Brownie test class have no access to imported variables? - pytest

I'm writing a Brownie test like the one below:

from brownie import accounts

class Test1:
    my_account = accounts[0]

    def test_fn(self):
        ...

The test result says "my_account = accounts[0], list index out of range".
But if I put my_account = accounts[0] inside test_fn, like below, the test runs fine.

from brownie import accounts

class Test1:
    def test_fn(self):
        my_account = accounts[0]
        ...
Why is that? What is the pytest scope for imported variables?
I tried searching for anything related to pytest variable scope, but none of the results matched my question.

I cannot reproduce your example because I do not have any accounts in either case.
However, I think your issue is happening because the accounts variable must be filled with values, as described on the account management page in Brownie's docs.
Class attribute definitions are executed during pytest's collection stage, while tests are executed only after collection has finished. So if accounts is populated somewhere else in your code (e.g. in an autouse fixture or another test), it will not yet be populated during the collection stage.
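Here is a minimal sketch, independent of Brownie, that shows this timing difference (the file name collection_demo.py is just for illustration):

# collection_demo.py -- run with: pytest collection_demo.py -s
print("module body: runs at import time, during collection")

class TestTiming:
    print("class body: also runs during collection")

    def test_fn(self):
        print("test body: runs later, during the test phase")

The two print calls outside the test fire before pytest reports any results, which is exactly the point at which your accounts[0] lookup fails.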
Maybe you can provide some more details about your case. How are you generating these accounts in your successful case?
UPD:
According to the eth-brownie package source code, running brownie test executes pytest with the pytest-brownie plugin enabled, like this:
pytest.main(pytest_args, ["pytest-brownie"])
So this plugin declares some hooks.
One of them is pytest_collection_finish, which is declared in the PytestBrownieRunner class. This class is used to construct the plugin for single-threaded test execution. According to the pytest docs, this hook is called after all tests have been collected.
This hook executes the following code:

if not outcome.get_result() and session.items and not brownie.network.is_connected():
    brownie.network.connect(CONFIG.argv["network"])
I believe this is where the information about your configured network, including accounts, gets loaded.
So here is the difference:
When you access accounts inside a test, the code above has already been executed.
However, when you access accounts during class definition, no hooks have been executed yet, so there is no information about your network.
Maybe I am wrong, but I assume your issue is related to the order of pytest's execution stages.
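If that is the cause, one workaround sketch (untested, assuming a standard Brownie test setup; the fixture name my_account is just an illustration) is to defer the lookup to test time with a fixture:

import pytest
from brownie import accounts

@pytest.fixture
def my_account():
    # Evaluated at test time, after the plugin's collection hooks have
    # connected the network, so accounts should be populated by now.
    return accounts[0]

class Test1:
    def test_fn(self, my_account):
        ...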

Related

Marking a Pytest fixture instead of all the tests using the fixture

Is there a way to define a mark in a PyTest fixture?
I am trying to disable slow tests when I specify -m "not slow" in pytest.
I have been able to disable individual tests, but not a fixture that I use for multiple tests.
My fixture code looks like this:
import pytest

@pytest.fixture()
@pytest.mark.slow
def postgres():
    # get a postgres connection (or something else that uses a slow resource)
    conn = ...
    yield conn
and several tests have this general form:

def test_run_my_query(postgres):
    # Use my postgres connection to insert test data, then run a test
    assert ...
I found the following comment in https://docs.pytest.org/en/latest/mark.html (updated link):
"Marks can only be applied to tests, having no effect on fixtures." Is the reason for this that fixtures are essentially function calls, and marks can only be specified at compile time?
Is there a way to specify that all tests using a specific fixture (postgres in this case) are marked as slow, without specifying @pytest.mark.slow on each test?
It seems you already found the answer in the docs. Subscribe to https://github.com/pytest-dev/pytest/issues/1368 to watch this feature; it might be added in a later pytest version.
For now, you can work around it with a hack:

# in conftest.py
def pytest_collection_modifyitems(items):
    for item in items:
        if 'postgres' in getattr(item, 'fixturenames', ()):
            item.add_marker("slow")
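With this hook in place, every collected test that requests the postgres fixture gets the slow marker, so pytest -m "not slow" will deselect them all. To avoid pytest's unknown-marker warnings, you can also register the marker (a small sketch, also in conftest.py; the description text is just an example):

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: test uses a slow resource")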

How to run test fixtures in order across multiple classes?

I have three classes, each containing multiple tests, and each marked with the TestFixture attribute. I need the fixtures to run in order (e.g. TestFixture1 runs all of its tests, then TestFixture2 runs all of its tests, and finally TestFixture3 runs all of its tests). How can I accomplish this?
Use the OrderAttribute on each fixture, specifying the order in which you want the fixtures to run. Use the same attribute on each test method, specifying the order the test should run within the fixture.
See OrderAttribute in the docs for more info.

Scala inconclusive Assertion

I am writing an integration test in Scala. The test starts by searching for a configuration file to get access information for another system.
If it finds the file, the test should run as usual. However, if it does not find the file, I don't want to fail the test; I would rather mark it inconclusive to indicate that it could not run because of the missing configuration.
In C# I know there is Assert.Inconclusive, which is exactly what I want. Is there anything similar in Scala?
I think what you need here is assume / cancel, from the "Assumptions" section of ScalaTest's Assertions documentation:
Trait Assertions also provides methods that allow you to cancel a test. You would cancel a test if a resource required by the test was unavailable. For example, if a test requires an external database to be online, and it isn't, the test could be canceled to indicate it was unable to run because of the missing database.

FitNesse: automatic fixture stub generation

When I write a test in FitNesse, I usually write several tables in wiki format first and then write the fixture code afterwards. I do that by executing the test on the wiki server and then creating the fixture classes, with names copied from the error messages of the failed test page run.
This is an annoying process that could be handled by an automatic stub generator, which would create the fixture classes with the appropriate class and method names.
Is there already such a generator available?
Not as far as I know. It sounds like you are using Fit, correct?
It would be an interesting feature; maybe you can create one as a plugin?

Delay-loading TestCaseSource in NUnit

I have some NUnit tests which use a TestCaseSource function. Unfortunately, the TestCaseSource function that I need takes a long time to initialize, because it scans a folder tree recursively to find all of the test images to pass into the test function. (Alternatively, it could load from an XML file list every time it runs, but automatic discovery of new image files is still a requirement.)
Is it possible to specify an NUnit attribute together with TestCaseSource such that NUnit does not enumerate the test cases (does not call the TestCaseSource function) until either the user clicks on the node or the test suite is run?
The need to get all test images stored in a folder is a project requirement because other people who do not have access to the test project will need to add new test images to the folder, without having to modify the test project's source code. They would then be able to view the test result.
Some dogmatic unit-testers may counter that I am using NUnit to do something it's not supposed to do. I have to admit that I have to meet a requirement, and NUnit is such a great tool with a great GUI that satisfies most of my requirements, such that I do not care about whether it is proper unit testing or not.
Additional info (from NUnit documentation)
Note on Object Construction
NUnit locates the test cases at the time the tests are loaded, creates instances of each class with non-static sources and builds a list of tests to be executed. Each source object is only created once at this time and is destroyed after all tests are loaded.
If the data source is in the test fixture itself, the object is created using the appropriate constructor for the fixture parameters provided on the TestFixtureAttribute or the default constructor if no parameters were specified. Since this object is destroyed before the tests are run, no communication is possible between these two phases - or between different runs - except through the parameters themselves.
It seems the purpose of loading the test cases up front is to avoid communication (or side effects) between TestCaseSource and the execution of the tests. Is this true? Is it the only reason to require test cases to be loaded up front?
Note:
A modification of NUnit was needed, as documented in http://blog.sponholtz.com/2012/02/late-binded-parameterized-tests-in.html
There are plans to introduce this option in later versions of NUnit.
I don't know of a way to delay-load test names in the GUI. My recommendation would be to move those tests to a separate assembly. That way, you can quickly run all of your other tests, and load the slower exhaustive tests only when needed.