I am using pytest asserts to validate data and data conditions. I wanted to know whether this is the same as the Python assert, and whether it is good practice to use pytest assert to validate any test condition.
I could not find any guidance anywhere saying NOT to use pytest assert.
Thank you for your help
From the pytest docs, the assert is just the standard Python assert. Other packages may have their own assert methods with custom functionality, though.
Note that if you are using Python's assert, you may need to check the implementation of the __eq__ method for some objects, since the default for many objects is just to check whether the two references point to the same object in memory.
For example, say you have a class TestClass where name is a string and id is an int:
class TestClass:
    def __init__(self, name, id):
        self._name = name
        self._id = id
Now if you instantiate two instances of TestClass like so:
test1 = TestClass("test", 1)
test2 = TestClass("test", 1)
Then assert test1 == test2 will fail by default, as they are two separate objects.
However, you can override the __eq__ method in a class like this:
class TestClass2:
    def __init__(self, name, id):
        self._name = name
        self._id = id

    def __eq__(self, other_test_class):
        return (self._name == other_test_class._name) and (self._id == other_test_class._id)
Now if you define:
test1 = TestClass2("test", 1)
test2 = TestClass2("test", 1)
Then assert test1 == test2 gives True.
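As an aside (not part of the original answer, just a common shortcut): if the class is a plain data holder, a Python dataclass generates a field-by-field __eq__ for you, so a sketch like this behaves the same way without writing __eq__ by hand:
from dataclasses import dataclass

@dataclass
class TestClass3:
    name: str
    id: int

assert TestClass3("test", 1) == TestClass3("test", 1)  # passes: compares field by field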
Some other packages may have their own assert function and set of objects where this functionality is explicitly handled by the assert.
As for which to use, that is mostly a matter of preference and of who else you are working with. Best practice is to use the same methods your coworkers use. Outside of that, I would prefer base Python functions, because most people will know how to use them, which may not be true for other packages.
I'm using pytest for testing, with the mixer library for generating model data. Now I'm trying to set up my tests once before they run. I grouped them into test classes and set my fixtures to 'class' scope, but this doesn't work for me.
import pytest

@pytest.mark.django_db
class TestCreateTagModel:
    @classmethod
    @pytest.fixture(autouse=True, scope='class')
    def _set_up(cls, create_model_instance, tag_model, create_fake_instance):
        cls.model = tag_model
        cls.tag = create_model_instance(cls.model)
        cls.fake_instance = create_fake_instance(cls.model)
        print('setup')

    def test_create_tag(self, tag_model, create_model_instance, check_instance_exist):
        tag = create_model_instance(tag_model)
        assert check_instance_exist(tag_model, tag.id)
conftest.py
import pytest
from mixer.backend.django import mixer

@pytest.fixture(scope='class')
@pytest.mark.django_db(transaction=True)
def create_model_instance():
    instance = None

    def wrapper(model, **fields):
        nonlocal instance
        if not fields:
            instance = mixer.blend(model)
        else:
            instance = mixer.blend(model, **fields)
        return instance

    yield wrapper
    if instance:
        instance.delete()
@pytest.fixture(scope='class')
@pytest.mark.django_db(transaction=True)
def create_fake_instance(create_related_fields):
    """
    Create a fake instance of a model (fake means the instance doesn't exist in the DB).

    Args:
        related (bool, optional): Flag which indicates whether to create related objects. Defaults to False.
    """
    instance = None

    def wrapper(model, related=False, **fields):
        with mixer.ctx(commit=False):
            instance = mixer.blend(model, **fields)
            if related:
                create_related_fields(instance, **fields)
            return instance

    yield wrapper
    if instance:
        instance.delete()
@pytest.fixture(scope='class')
@pytest.mark.django_db(transaction=True)
def create_related_fields():
    django_rel_types = ['ForeignKey']

    def wrapper(instance, **fields):
        for f in instance._meta.get_fields():
            if type(f).__name__ in django_rel_types:
                rel_instance = mixer.blend(f.related_model)
                setattr(instance, f.name, rel_instance)

    return wrapper
But I'm catching an exception in mixer's gen_value method: "Database access not allowed, use the django_db mark" (which I'm already using). Do you have any ideas how this can be implemented?
You can set things up once before a run by returning the results of the setup, rather than modifying the test class directly. From my own attempts, it seems any changes to the class made within class-scoped fixtures are lost when the individual tests are run. So here's how you should be able to do this. Replace your _set_up fixture with these:
@pytest.fixture(scope='class')
def model_instance(self, tag_model, create_model_instance):
    return create_model_instance(tag_model)

@pytest.fixture(scope='class')
def fake_instance(self, tag_model, create_fake_instance):
    return create_fake_instance(tag_model)
And then these can be accessed through:
def test_something(self, model_instance, fake_instance):
    # Check that model_instance and fake_instance are as expected
    ...
I'm not familiar with Django myself though, so there might be something else with it going on. This should at least help you solve one half of the problem, if not the other.
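One more avenue worth checking (my assumption, not something verified against your project): the django_db mark typically grants database access to the tests themselves, so a class-scoped fixture can still hit the "Database access not allowed" blocker. pytest-django provides a django_db_blocker fixture whose unblock() context manager can open access inside such a fixture, roughly like this:
import pytest

@pytest.fixture(scope='class')
def model_instance(django_db_blocker, tag_model, create_model_instance):
    # unblock() allows DB access inside this class-scoped fixture,
    # which the django_db mark alone may not cover.
    with django_db_blocker.unblock():
        yield create_model_instance(tag_model)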
I use testinfra with the Ansible transport. It provides a host fixture which has ansible, so I can do host.ansible.get_variables().
Now I need to create a parametrization of test based on value from this inventory.
Inventory:
foo:
  hosts:
    foo1:
      somedata:
        - data1
        - data2
I want to write a test which checks each entry of somedata for each host in the inventory. The 'each host' part is handled by testinfra, but I'm struggling with the parametrization of the test:
@pytest.fixture
def somedata(host):
    return host.ansible.get_variables()["somedata"]

@pytest.fixture(params=somedata)
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
I've tried both ways:
@pytest.fixture(params=somedata) -> TypeError: 'function' object is not iterable
@pytest.fixture(params=somedata()) -> Fixture "somedata" called directly. Fixtures are not meant to be called directly...
How can I do this? I understand that I can't change the number of tests at test time, but I'm pretty sure I have the same inventory at collection time, so, theoretically, it should be doable...
After reading a lot of source code I have come to the conclusion that it's impossible to call fixtures at collection time. There are no fixtures at collection time, and any parametrization must happen before any tests are run. Moreover, it's impossible to change the number of tests at test time (so no fixture could change that).
Answering my own question on using the Ansible inventory to parametrize a test function: it's possible, but it requires manually reading the inventory, hosts, etc. There is a special hook for that: pytest_generate_tests (it's a function, not a fixture).
My current code to get any test parametrized by host_interface fixture is:
import testinfra.utils.ansible_runner

def cartesian(hosts, ar):
    for host in hosts:
        for interface in ar.get_variables(host).get("interfaces", []):
            yield (host, interface)

def pytest_generate_tests(metafunc):
    if 'host_interface' in metafunc.fixturenames:
        inventory_file = metafunc.config.getoption('ansible_inventory')
        ansible_config = testinfra.utils.ansible_runner.get_ansible_config()
        inventory = testinfra.utils.ansible_runner.get_ansible_inventory(ansible_config, inventory_file)
        ar = testinfra.utils.ansible_runner.AnsibleRunner(inventory_file)
        hosts = ar.get_hosts(metafunc.config.option.hosts)
        metafunc.parametrize("host_interface", cartesian(hosts, ar))
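A test that consumes this parametrization then just takes host_interface as an argument and unpacks it; a minimal sketch (the assertion is left trivial, since the real checks depend on your inventory):
def test_interface_present(host_interface):
    host, interface = host_interface  # one test instance per (host, interface) pair
    assert interface is not None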
You should use a helper function instead of a fixture to parametrize another fixture. Fixtures cannot be used as decorator parameters in pytest.
def somedata(host):
    return host.ansible.get_variables()["somedata"]

@pytest.fixture(params=somedata())
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
This assumes that the host is not a fixture.
If host is a fixture, there is a hacky way to get around the problem: write the parameters to a tmp file or an environment variable, and read them back with a helper function.
import os

@pytest.fixture(autouse=True)
def somedata(host):
    os.environ["host_param"] = host.ansible.get_variables()["somedata"]

def get_params():
    return os.environ["host_param"]  # do some clean-up to return a list instead of a string

@pytest.fixture(params=get_params())
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
I would like to use some common data in all my py.test class methods, and only in that class, e.g.
n_files = 1000
n_classes = 10
n_file_per_class = int(n_files / n_classes)
I found out that I can use fixtures, e.g.:
import pytest

class TestDatasplit:
    @pytest.fixture()
    def n_files(self):
        return 1000

    @pytest.fixture()
    def n_classes(self):
        return 10

    @pytest.fixture()
    def n_files_per_class(self, n_files, n_classes):
        return int(n_files / n_classes)

    def test_datasplit_1(self, n_files):
        assert n_files == 1000

    def test_datasplit(self, n_files_per_class):
        assert n_files_per_class == 100
but that way I need to create a fixture for each of my variables, which seems quite verbose (I have many more than 3 variables)...
What is the best way to create a bunch of shared variables in a py.test class?
Your tests don't seem to be mutating these values, so you can use module-level or class-level constants. Pytest fixtures are there to provide each test with a separate copy of a value, so that tests don't begin to depend on each other (or inadvertently make each other fail) when one or more tests mutate the values.
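For example, a minimal sketch of the class-level-constant approach (reusing the numbers from the question):
class TestDatasplit:
    N_FILES = 1000
    N_CLASSES = 10
    N_FILES_PER_CLASS = N_FILES // N_CLASSES

    def test_datasplit(self):
        assert self.N_FILES_PER_CLASS == 100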
I agree with what @das-g said, but if you wanted to use fixtures, then you could have a single fixture which returns an object based on a custom class, or e.g. a namedtuple.
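A minimal sketch of that idea (the fixture and type names are just illustrative):
from collections import namedtuple
import pytest

DatasplitConfig = namedtuple('DatasplitConfig', 'n_files n_classes n_files_per_class')

class TestDatasplit:
    @pytest.fixture()
    def config(self):
        return DatasplitConfig(n_files=1000, n_classes=10, n_files_per_class=100)

    def test_datasplit(self, config):
        assert config.n_files_per_class == config.n_files // config.n_classes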
I have some code that requires an Environment Variable to run correctly. But when I run my unit tests, it bombs out once it reaches that point unless I specifically export the variable in the terminal. I am using Scala and sbt. My code does something like this:
class something() {
  val envVar = sys.env("ENVIRONMENT_VARIABLE")
  println(envVar)
}
How can I mock this in my unit tests so that whenever sys.env("ENVIRONMENT_VARIABLE") is called, it returns a string or something like that?
If you can't wrap the existing code, you can modify the (normally unmodifiable) map behind System.getenv() for your tests via reflection.
def setEnv(key: String, value: String) = {
  val field = System.getenv().getClass.getDeclaredField("m")
  field.setAccessible(true)
  val map = field.get(System.getenv()).asInstanceOf[java.util.Map[java.lang.String, java.lang.String]]
  map.put(key, value)
}

setEnv("ENVIRONMENT_VARIABLE", "TEST_VALUE1")
If you need to test console output, you may use a separate PrintStream. You can also implement your own PrintStream.
import java.nio.charset.StandardCharsets

val baos = new java.io.ByteArrayOutputStream
val ps = new java.io.PrintStream(baos)

Console.withOut(ps)(
  // your test code
  println(sys.env("ENVIRONMENT_VARIABLE"))
)

// Get output and verify
val output: String = baos.toString(StandardCharsets.UTF_8.toString)
println("Test Output: [%s]".format(output))
assert(output.contains("TEST_VALUE1"))
Ideally, environment access should be rewritten to retrieve the data in a safe manner. Either with a default value ...
scala> scala.util.Properties.envOrElse("SESSION", "unknown")
res70: String = Lubuntu
scala> scala.util.Properties.envOrElse("SECTION", "unknown")
res71: String = unknown
... or as an option ...
scala> scala.util.Properties.envOrNone("SESSION")
res72: Option[String] = Some(Lubuntu)
scala> scala.util.Properties.envOrNone("SECTION")
res73: Option[String] = None
... or both [see envOrSome()].
I don't know of any way to make it look like any/all random env vars are set without actually setting them before running your tests.
You shouldn't test it in a unit test. Just extract it out:
class Foo(val param: String) {
  ...
}
In your prod code you do
new Foo(sys.env("ENVIRONMENT_VARIABLE"))
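and in a unit test you can construct it directly with whatever value you need (a small sketch):
val foo = new Foo("TEST_VALUE")
assert(foo.param == "TEST_VALUE")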
I would encapsulate the configuration in a contraption which does not expose the implementation, maybe a class ConfigValue
I would put the implementation in a class ConfigValueInEnvVar extends ConfigValue
This allows me to test the code that relies on the ConfigValue without having to set or clear environment variables.
It also allows me to test the base implementation of storing a value in an environment variable as a separate feature.
It also allows me to store the configuration in a database, a file or anything else, without changing my business logic.
I select implementation in the application layer.
I put the environment variable logic in a supporting domain.
I put the business logic and the traits/interfaces in the core domain.
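A minimal sketch of what that separation could look like (the trait and class names follow the answer above; the exact members are my assumption):
trait ConfigValue {
  def value: String
}

// Supporting domain: the implementation that reads from the environment.
class ConfigValueInEnvVar(name: String) extends ConfigValue {
  def value: String = sys.env(name)
}

// A trivial implementation that is handy in unit tests.
class FixedConfigValue(val value: String) extends ConfigValue

// Core domain: business logic depends only on the trait.
class Something(config: ConfigValue) {
  def run(): Unit = println(config.value)
}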
I have test modules of this style:
# test_mammals.py:
PETS = ['cats', 'dogs']

def test_mammals_1(pet):
    assert 0, pet

def test_mammals_2(pet):
    assert 0, pet
And here is another one:
# test_birds.py:
PETS = ['budgie', 'parrot']

def test_birds_1(pet):
    assert 0, pet

def test_birds_2(pet):
    assert 0, pet
And I would like to define the fixture "pet" only once:
# conftest.py:
import pytest

@pytest.fixture(scope='module', autouse=True)
def getpets(request):
    return getattr(request.module, 'PETS', [])

@pytest.fixture(scope='module', params=getpets, autouse=True)
def pet(request):
    return request.param
Unfortunately this doesn't work, because "pet" expects a list for "params". And if I put "getpets" into a list, the fixture just returns a reference to the "getpets" function rather than the values of "PETS" from the corresponding module.
This is a bit hard to answer because your code doesn't make a lot of sense as it stands - if your 'PETS' really are just a list of strings, you should just use pytest.mark.parametrize and you don't need anything special in conftest, or any fixture, in fact.
However, if something more complicated is happening, probably the easiest thing to do is to have a generic fixture in conftest, and then in each test module define a lightweight fixture holding that module's specific data, which makes use of your generic pet fixture in whatever way it needs to.
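For instance, a minimal sketch of both suggestions, assuming PETS really are just strings (all names besides PETS are illustrative):
import pytest

# First suggestion: plain parametrization, no conftest or fixture needed.
@pytest.mark.parametrize('pet', ['cats', 'dogs'])
def test_mammals_simple(pet):
    assert pet in ('cats', 'dogs')

# Second suggestion: a generic fixture (this part would live in conftest.py) ...
@pytest.fixture
def pet(pet_name):
    return pet_name.title()  # whatever shared processing every module needs

# ... plus a lightweight module-specific fixture (this part lives in each test module).
PETS = ['cats', 'dogs']

@pytest.fixture(params=PETS)
def pet_name(request):
    return request.param

def test_mammals_shared(pet):
    assert pet in ('Cats', 'Dogs')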