I am using pytest-bdd. Here is my feature file:
```gherkin
# recon_test.feature
Feature: This is used to run recon
  Scenario: Run Recon
```
And here is my test file:
```python
# recon_test.py
from pytest_bdd import scenario

class Recon_Tests():
    @scenario('recon_test.feature', 'Run Recon')
    def test_run_recon(self):
        # do something
        pass
```
When I run this using the `pytest` command, I get the error **fixture 'self' not found.**
Maybe, because of the scenario decorator, it treats this function as a fixture and expects **'self'** to be another fixture.
I want to use `@scenario` on test functions inside test classes. Is there any way to do this?
Also, I have found a workaround for this: I created a fixture
```python
@pytest.fixture
def self():
    pass
```
to avoid this, and that error is gone.
But it gives another error saying that 'Recon_Tests' does not have an attribute `config`,
as pytest-bdd tries to read the fixture's config object for its pre-test hooks.
Please suggest a solution.
This is because pytest has no way of knowing whether `self` is a class instance argument or a fixture.
This is fixed when you inherit your class from `unittest.TestCase`.
Meaning, instead of `class Recon_Tests()` you specify
`class ReconTests(unittest.TestCase)`.
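For illustration, a minimal sketch of that suggested change (assuming `scenario` is imported from pytest-bdd and the feature file sits next to the test module):

```python
import unittest

from pytest_bdd import scenario

class ReconTests(unittest.TestCase):
    @scenario('recon_test.feature', 'Run Recon')
    def test_run_recon(self):
        # do something
        pass
```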
Related
I would like to create a Python package which contains a fixture for pytest. That fixture should mock the behavior of an identification web service. The service takes some parameters from the clients, e.g. a username, a password, and other non-credential settings. I want the plugin users to set those globally once so that they can test all the behavior they want.
I've seen that I can parametrize fixtures and use pytest.mark.parametrize to pass the values.
How can I add a global setting for all tests for my fixture?
If I understand your requirement, you have a couple of issues: having a global fixture value shared between tests and having the fixture as a plugin.
First, you can develop your plugin as usual inside your code, along with your tests. A basic version of a shared value can be achieved using a global variable.
Consider this sample code in conftest.py:
```python
import pytest

class MyClientClass:
    def __init__(self, auth):
        self.auth = auth

default_client = None

@pytest.fixture(scope="session")
def web_client(request):
    global default_client
    param = getattr(request, "param", None)
    if param:
        default_client = MyClientClass(request.param)
    return default_client
```
Then your tests can look like this (only the first test does the initialization; the others can benefit from the stored auth):
```python
import pytest

@pytest.mark.parametrize("web_client", [{"user": "user", "pass": "pass"}],
                         indirect=True)
def test_with_init_creds(web_client):
    print(web_client.auth)

def test_some(web_client):
    print(web_client.auth)

def test_another(web_client):
    print(web_client.auth)
```
Now, once you are happy with the local fixture, you can move its code into an installable library. Check out these two links from the official documentation: https://docs.pytest.org/en/7.1.x/how-to/writing_plugins.html#writing-your-own-plugin and https://docs.pytest.org/en/7.1.x/how-to/writing_plugins.html#making-your-plugin-installable-by-others. The important thing is to have the pytest11 entry point so the plugin is discoverable.
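For example, with a setuptools-based package the entry point can be declared roughly like this (the package and module names below are placeholders, not taken from the question):

```python
# setup.py of the plugin package (illustrative names)
from setuptools import setup

setup(
    name="pytest-web-client",
    packages=["pytest_web_client"],
    # the "pytest11" entry point is what makes pytest discover and load the plugin
    entry_points={"pytest11": ["web_client = pytest_web_client.plugin"]},
)
```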
I created a very small plugin using poetry, which you can also reference: https://github.com/pksol/pytest-fastapi-deps
I'm a pytest beginner. I just learned about fixtures and tried to do this:
My tests call functions I wrote, and get test data from a code practicing website.
Each test is from a particular page and has several sets of test data.
So, I want to use @pytest.mark.parametrize to parametrize my single test function.
Also, since the operations of the tests are all similar, I want to make the page-object instantiation and the steps to get test data from the page into a fixture.
```python
# content of conftest.py
import pytest
from page_objects import problem_page

@pytest.fixture
def get_testdata_from_problem_page():
    def _get_testdata_from_problem_page(problem_name):
        page = problem_page.ProblemPage(problem_name)
        return page.get_sample_data()
    return _get_testdata_from_problem_page
```
```python
# content of test_problem_a.py
import pytest
from page_objects import problem_page
from problem_func import problem_a

@pytest.mark.parametrize('input,expected', test_data)
def test_problem_a(get_testdata_from_problem_page):
    input, expected = get_testdata_from_problem_page("problem_a")
    assert problem_a.problem_a(input) == expected
```
Then I realized that, as written above, I can't parametrize the test using pytest.mark, because test_data would have to be available outside the test function.
Are there solutions for this? Thanks very much!
If I understand you correctly, you want to write one parameterized test per page. In this case you just have to write a function instead of a fixture and use that for parametrization:
```python
import pytest
from page_objects import problem_page
from problem_func import problem_a

def get_testdata_from_problem_page(problem_name):
    page = problem_page.ProblemPage(problem_name)
    # returns a list of (input, expected) tuples
    return page.get_sample_data()

@pytest.mark.parametrize('input,expected',
                         get_testdata_from_problem_page("problem_a"))
def test_problem_a(input, expected):
    assert problem_a.problem_a(input) == expected
```
As you wrote, a fixture can only be used as a parameter to a test function or to another fixture, not in a decorator.
If you want to use the function to get test data elsewhere, just move it to a common module and import it. This can be just some custom utility module, or you could put it into conftest.py, though you would still have to import it.
Note also that the fixture you wrote does not do anything useful by itself - it just defines a local function and returns it without ever calling it.
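As a rough sketch of the "common module" idea from above (the module name testdata_utils is just illustrative), the helper and its import could look like:

```python
# testdata_utils.py - hypothetical shared module
from page_objects import problem_page

def get_testdata_from_problem_page(problem_name):
    # returns a list of (input, expected) tuples for the given problem page
    page = problem_page.ProblemPage(problem_name)
    return page.get_sample_data()
```

```python
# test_problem_a.py
from testdata_utils import get_testdata_from_problem_page
```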
I am using pytest and would like to invoke a test function for a number of objects returned by a server, and for a number of servers.
The servers are defined in a YAML file and those definitions are provided as parametrization to a fixture "server_connection" that returns a Connection object for a single server. Due to the parametrization, it causes the test function to be invoked once for each server.
I am able to do this with a loop in the test function: There is a second fixture "server_objects" that takes a "server_connection" fixture as input and returns a list of server objects. The pytest test function then takes that second fixture and executes the actual test in a loop through the server objects.
Here is that code:
```python
import pytest

SD_LIST = ...  # read list of server definitions from YAML file

@pytest.fixture(
    params=SD_LIST,
    scope='module'
)
def server_connection(request):
    server_definition = request.param
    return Connection(server_definition.url, ...)

@pytest.fixture(
    scope='module'
)
def server_objects(request, server_connection):
    return server_connection.get_objects()

def test_object_foo(server_objects):
    for server_object in server_objects:
        # Perform test for a single server object:
        assert server_object == 'foo'
```
However, the disadvantage is of course that a test failure causes the entire test function to end.
What I want to happen instead is that the test function is invoked for each single server object, so that a test failure for one object does not prevent the tests on the other objects. Ideally, I'd like to have a fixture that provides a single server object, that I can pass to the test function:
```python
...

@pytest.fixture(
    scope='module'
)
def server_object(request, server_connection):
    server_objects = server_connection.get_objects()
    # TBD: Some magic to parametrize this fixture with server_objects

def test_object_foo(server_object):
    # Perform test for a single server object:
    assert server_object == 'foo'
```
I have read through all pytest docs regarding fixtures but did not find a way to do this.
I know about pytest hooks and have used e.g. pytest_generate_tests() before, but I did not find a way for pytest_generate_tests() to access the values of other fixtures.
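(For context, a generate-tests hook has roughly the shape sketched below, with illustrative names; it only receives metafunc, so there is no handle to other fixtures' values at that point, which is exactly the limitation I mean.)

```python
# Rough shape of the hook (illustrative names, not a working solution for this case).
def pytest_generate_tests(metafunc):
    if "server_object" in metafunc.fixturenames:
        # The parameter list must be known statically here; it cannot be
        # obtained from the server_connection fixture at collection time.
        metafunc.parametrize("server_object", ["obj1", "obj2"])
```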
Any ideas?
Update: Let me add that I also searched SO for this, but did not find an answer. I specifically looked at:
pytest fixture of fixtures
How to parametrize a Pytest fixture
py.test: Pass a parameter to a fixture function
initing a pytest fixture with a parameter
I have a package object defined in both the main and the test code tree, as shown below. When I execute the program with sbt run, the one in the main code tree takes effect, whereas when I run the test cases (sbt test), the package object defined in the test code tree takes effect. For example:
src/main/scala/com/example/package.scala
```scala
package object core {
  val foo = "Hello World"
}
```
src/test/scala/com/example/package.scala
```scala
package object core {
  val foo = "Goodbye World"
}
```
On sbt run, the value of com.example.core.foo is "Hello World"; on sbt test it is "Goodbye World".
Is this just a quirk of sbt, or is it well-defined Scala/sbt behaviour? I currently use this behaviour for dependency injection by defining my module bindings for production and test in their corresponding package objects. Is this an advisable approach?
Scala looks for package objects on your current classpath, so this is well-defined behaviour. Since your test and main code reside in different places, it finds different `foo` vals.
The way you are using this mechanism is very similar to using implicits. General advice with implicits and implicit resolution is not to abuse it. I think in this case it's not the best way of providing dependencies.
You always have to consider what scope you are in: if you are using a class defined in main from test scope, how do you get foo from main, and how do you get foo from test, whenever you need one or the other? You have to think ahead about how it will work and consider various scenarios. What if your test class is in a different package - which foo would you get, and does it depend on where the tested class is declared?
Make dependency injection more explicit and don't spend mental cycles on it, or risk confusing someone.
I am trying to use a pytest fixture (scope=module) in a class skipif decorator, but I am getting an error saying the fixture is not defined. Is this possible?
conftest.py has a fixture with module scope called 'target' that returns a CurrentTarget object.
The CurrentTarget object has a function isCommandSupported.
test_mytest.py has a class Test_MyTestClass that contains a dozen test functions.
I want to skip all the tests in Test_MyTestClass based on whether the fixture's target.isCommandSupported() returns True, so I decorate Test_MyTestClass with skipif like this:
```python
@pytest.mark.skipif(not target.isCommandSupprted('commandA'), reason='command not supported')
class Test_MyTestClass:
    ...
```
I get this error: NameError: name 'target' is not defined
If I try:
```python
@pytest.mark.skipif(not pytest.config.getvalue('tgt').isCommandSupprted('commandA'), reason='command not supported')
class Test_MyTestClass:
    ...
```
I get this error: AttributeError: 'function' object has no attribute 'isCommandSupprted'
The reason you get an error in the first case is that pytest injects fixtures, so they become available in your test functions via function parameters. They are never imported into higher scope.
The reason you get the AttributeError is that fixtures are functions and are evaluated at first (or each) use. So, when you get it through pytest.config, it's still a function. This is the same reason the other answer will fail - if you import it, you're importing the fixture function, not its result.
There is no direct way of doing what you want, but you can work around it with an extra fixture:
```python
import pytest

@pytest.fixture(scope='module')
def check_unsupported(target):
    if not target.isCommandSupported('commandA'):
        pytest.skip('command not supported')

@pytest.mark.usefixtures('check_unsupported')
def test_one():
    pass

def test_two(check_unsupported):
    pass
```
You can import target from conftest like so:

```python
from conftest import target
```

Then, you can use it in pytest.mark.skipif as you were intending in your example:

```python
@pytest.mark.skipif(not target.isCommandSupported('commandA'), reason='command not supported')
class Test_MyTestClass:
    ...
```
If you needed to repeat the same pytest.mark.skipif logic across several tests and wanted to avoid copy-pasting, a simple decorator will help:
```python
check_unsupported = pytest.mark.skipif(not target.isCommandSupported('commandA'),
                                       reason='command not supported')

@check_unsupported
def test_one():
    pass

@check_unsupported
def test_two():
    pass
```