Dynamically add CLI arguments in pytest tests - pytest

I'd like to run specific tests in pytest with dynamically added CLI arguments, e.g.:

import ast
import sys

class TestHyperML:
    def some_test(self):
        # Set up some CLI arguments such as --some_arg 3 --some_other_arg 12
        my_class = SomeClass()

class SomeClass:
    def parse_cli_arguments(self):
        # Here I want to fetch my arguments from sys.argv.
        parameters = {}
        name = None
        for x in sys.argv[1:]:
            if name:
                parameters[name] = {'default': ast.literal_eval(x)}
                name = None
            elif x.startswith('-'):
                name = x.lstrip('-')
        return parameters
I understand there is a way to pass these arguments from the command line by running pytest test_something.py --somearg, but I would like to do it programmatically from inside the test.
Is it possible? Thanks!

Thanks to the answers posted above and to similar SO questions, here is the solution that I used:

import mock  # or: from unittest import mock

def test_parsing_cli_arguments(self):
    args = 'main.py --my_param 1e-07 --my_other_param 2'.split()
    with mock.patch('sys.argv', args):
        parser = ConfigParser("config.yaml")
        # Inside parser, sys.argv will contain the arguments set here.
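
An equivalent approach without the mock package is pytest's built-in monkeypatch fixture, which undoes the patch automatically when the test ends. A minimal sketch (the argument names are just the ones from the snippet above):

import sys

def test_parsing_cli_arguments(monkeypatch):
    args = 'main.py --my_param 1e-07 --my_other_param 2'.split()
    # monkeypatch restores the original sys.argv after the test finishes
    monkeypatch.setattr(sys, 'argv', args)
    # Anything called from here on (e.g. the ConfigParser above) sees the patched argv
    assert sys.argv[1:] == ['--my_param', '1e-07', '--my_other_param', '2']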

Related

Pytest Assert vs Python Assert

I am using pytest asserts to validate data and data conditions. I want to know whether this is the same as Python's assert, and whether it is good practice to use pytest assert to validate any test condition.
I could not find anything saying NOT to use pytest assert.
Thank you for your help.
From the pytest docs, the assert is just the standard Python assert. Other packages may have their own assert methods with custom functionality, though.
Note that if you are using Python's assert you may need to check the implementation of the __eq__ method for the objects you compare, as the default for user-defined classes is simply to check whether both operands point to the same object in memory.
For example, say you have a class TestClass where name is a string and id is an int:

class TestClass:
    def __init__(self, name, id):
        self._name = name
        self._id = id
Now if you instantiate two instances of TestClass like so:
test1 = TestClass("test", 1)
test2 = TestClass("test", 1)
Then assert test1 == test2 will fail by default, because they are two separate objects.
However, you can override the __eq__ method in a class like this:
class TestClass2:
    def __init__(self, name, id):
        self._name = name
        self._id = id

    def __eq__(self, other_test_class):
        return (self._name == other_test_class._name) and (self._id == other_test_class._id)
Now if you define:
test1 = TestClass2("test", 1)
test2 = TestClass2("test", 1)
Then assert test1 == test2 gives True.
Some other packages may have their own assert function and set of objects where this functionality is explicitly handled by the assert.
As for which to use, that is mostly a matter of preference or of who else you are working with. Best practice is to use the same methods your coworkers use. Outside of that, I would prefer base Python functions, because most people know how to use them, which may not be true for other packages.
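
For context, the main thing pytest adds on top of the plain assert statement is assertion rewriting: on failure it reports both sides of the comparison. A small illustration:

# Run under pytest: the failure message expands to "assert 3 == 4",
# showing both operands without any extra helper calls.
def test_sum():
    assert sum([1, 2]) == 4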

Using fixtures at collect time in pytest

I use testinfra with the ansible transport. It provides a host fixture which has ansible, so I can call host.ansible.get_variables().
Now I need to parametrize a test based on a value from this inventory.
Inventory:
foo:
  hosts:
    foo1:
      somedata:
        - data1
        - data2
I want to write a test which tests each of the 'data' entries from somedata for each host in the inventory. The 'each host' part is handled by testinfra, but I'm struggling with the parametrization of the test:
@pytest.fixture
def somedata(host):
    return host.ansible.get_variables()["somedata"]

@pytest.fixture(params=somedata)
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
I've tried both ways:
@pytest.fixture(params=somedata) -> TypeError: 'function' object is not iterable
@pytest.fixture(params=somedata()) -> Fixture "somedata" called directly. Fixtures are not meant to be called directly...
How can I do this? I understand that I can't change the number of tests at test time, but I'm pretty sure I have the same inventory at collection time, so, theoretically, it should be doable...
After reading a lot of source code I have come to the conclusion that it's impossible to call fixtures at collection time. There are no fixtures at collection time, and any parametrization has to happen before any tests are run. Moreover, it's impossible to change the number of tests at test time (so no fixture could change that).
Answering my own question on using Ansible inventory to parametrize a test function: It's possible, but it requires manually reading inventory, hosts, etc. There is a special hook for that: pytest_generate_tests (it's a function, not a fixture).
My current code to get any test parametrized by host_interface fixture is:
def cartesian(hosts, ar):
    for host in hosts:
        for interface in ar.get_variables(host).get("interfaces", []):
            yield (host, interface)

def pytest_generate_tests(metafunc):
    if 'host_interface' in metafunc.fixturenames:
        inventory_file = metafunc.config.getoption('ansible_inventory')
        ansible_config = testinfra.utils.ansible_runner.get_ansible_config()
        inventory = testinfra.utils.ansible_runner.get_ansible_inventory(ansible_config, inventory_file)
        ar = testinfra.utils.ansible_runner.AnsibleRunner(inventory_file)
        hosts = ar.get_hosts(metafunc.config.option.hosts)
        metafunc.parametrize("host_interface", cartesian(hosts, ar))
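
A test that consumes this parametrization might then look like the following sketch (the name host_interface matches the hook above; the body is hypothetical):

# Each (host, interface) pair produced by cartesian() becomes its own test case.
def test_host_interface(host_interface):
    host, interface = host_interface
    assert interface is not None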
You should use a helper function instead of a fixture to parametrize another fixture. Fixtures cannot be used as decorator arguments in pytest.
def somedata(host):
    return host.ansible.get_variables()["somedata"]

# host must be a plain object available at import time here, not a fixture
@pytest.fixture(params=somedata(host))
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
This assumes that the host is not a fixture.
If the host is a fixture, there is a hacky way to get around the problem: write the parameters to a tmp file or to an environment variable and read them back with a helper function.
import os

@pytest.fixture(autouse=True)
def somedata(host):
    os.environ["host_param"] = host.ansible.get_variables()["somedata"]

def get_params():
    return os.environ["host_param"]  # do some clean-up to return a list instead of a string

@pytest.fixture(params=get_params())
def data(request):
    return request.param

def test_data(host, data):
    assert 'data' in data
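
One way to do the clean-up mentioned in the comment above is to serialize the list as JSON when writing the environment variable and decode it when reading; a small sketch (the host_param key is the one from the snippet above):

import json
import os

def set_params(values):
    # Environment variables can only hold strings, so encode the list as JSON.
    os.environ["host_param"] = json.dumps(values)

def get_params():
    # Decode back to a list; fall back to an empty list if nothing was set.
    return json.loads(os.environ.get("host_param", "[]"))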

How to pass fixture when parametrizing a test

I am trying to parametrize my test.
In the setup method, which returns a list, I am calling a fixture (app_config).
Now I want to call the setup so that the list can be used as parameter values inside the test.
The problem I am running into is that I cannot pass the app_config fixture when calling setup in the parametrize decorator.
def setup(app_config):
    member = app_config.membership
    output = app_config.plan_data
    ls = list(zip(member, output))
    return ls

@pytest.mark.parametrize('member, output', setup(app_config))
def test_concentric(app_config, member, output):
    ....
    ....
Is there an elegant way to pass the setup method to the parametrize decorator, or any other way to approach this?
Unfortunately, starting with pytest version 4, it has become impossible to call fixtures like regular functions.
https://docs.pytest.org/en/latest/deprecations.html#calling-fixtures-directly
https://github.com/pytest-dev/pytest/issues/3950
In your case I can recommend not using fixtures and switching to normal functions.
For example, it might look like this:
import pytest

def app_config():
    membership = ['a', 'b', 'c']
    plan_data = [1, 2, 3]
    return {'membership': membership,
            'plan_data': plan_data}

def setup_func(config_func):
    data = config_func()
    member = data['membership']
    output = data['plan_data']
    ls = list(zip(member, output))
    return ls

@pytest.mark.parametrize('member, output', setup_func(app_config))
def test_concentric(member, output):
    print(member, output)
    ....
NB! Avoid the setup() function/fixture name because it will conflict with pytest.runner's internals.
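
If you do want the values to flow through a fixture, another option (not part of the answer above) is pytest's indirect parametrization, where each parametrize value is handed to a fixture via request.param. A rough sketch with made-up values:

import pytest

@pytest.fixture
def member_output(request):
    # request.param is one (member, output) pair from the parametrize list below
    return request.param

@pytest.mark.parametrize('member_output', [('a', 1), ('b', 2)], indirect=True)
def test_concentric(member_output):
    member, output = member_output
    print(member, output)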

Pytest yield fixture usage

I have a use case where I may use a fixture multiple times inside a test in a "context manager" way. See the example code below:
in conftest.py
import logging
import uuid
from datetime import datetime

import pytest

log = logging.getLogger(__name__)

class SomeYield(object):
    def __enter__(self):
        log.info("SomeYield.__enter__")

    def __exit__(self, exc_type, exc_val, exc_tb):
        log.info("SomeYield.__exit__")

def generate_name():
    name = "{current_time}-{uuid}".format(
        current_time=datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
        uuid=str(uuid.uuid4())[:4]
    )
    return name

@pytest.yield_fixture
def some_yield():
    name = generate_name()
    log.info("Start: {}".format(name))
    yield SomeYield()
    log.info("End: {}".format(name))
in test_some_yield.py
def test_some_yield(some_yield):
    with some_yield:
        pass
    with some_yield:
        pass
Console output:
INFO:conftest:Start: 2017-12-06-01-50-32-5213
INFO:conftest:SomeYield.__enter__
INFO:conftest:SomeYield.__exit__
INFO:conftest:SomeYield.__enter__
INFO:conftest:SomeYield.__exit__
INFO:conftest:End: 2017-12-06-01-50-32-5213
Questions:
1. If I have some setup code in SomeYield.__enter__ and cleanup code in SomeYield.__exit__, is this the right way to do it, using a fixture for multiple calls in my test?
2. Why didn't I see three occurrences of __enter__ and __exit__? Is this expected?

How do I test code that requires an Environment Variable?

I have some code that requires an Environment Variable to run correctly. But when I run my unit tests, it bombs out once it reaches that point unless I specifically export the variable in the terminal. I am using Scala and sbt. My code does something like this:
class something() {
  val envVar = sys.env("ENVIRONMENT_VARIABLE")
  println(envVar)
}
How can I mock this in my unit tests so that whenever sys.env("ENVIRONMENT_VARIABLE") is called, it returns a string or something like that?
If you can't wrap the existing code, you can modify the unmodifiable map behind System.getenv() for tests (via reflection).
def setEnv(key: String, value: String) = {
  val field = System.getenv().getClass.getDeclaredField("m")
  field.setAccessible(true)
  val map = field.get(System.getenv()).asInstanceOf[java.util.Map[java.lang.String, java.lang.String]]
  map.put(key, value)
}

setEnv("ENVIRONMENT_VARIABLE", "TEST_VALUE1")
If you need to test console output, you may use a separate PrintStream.
You can also implement your own PrintStream.
import java.nio.charset.StandardCharsets

val baos = new java.io.ByteArrayOutputStream
val ps = new java.io.PrintStream(baos)

Console.withOut(ps)(
  // your test code
  println(sys.env("ENVIRONMENT_VARIABLE"))
)

// Get output and verify
val output: String = baos.toString(StandardCharsets.UTF_8.toString)
println("Test Output: [%s]".format(output))
assert(output.contains("TEST_VALUE1"))
Ideally, environment access should be rewritten to retrieve the data in a safe manner. Either with a default value ...
scala> scala.util.Properties.envOrElse("SESSION", "unknown")
res70: String = Lubuntu
scala> scala.util.Properties.envOrElse("SECTION", "unknown")
res71: String = unknown
... or as an option ...
scala> scala.util.Properties.envOrNone("SESSION")
res72: Option[String] = Some(Lubuntu)
scala> scala.util.Properties.envOrNone("SECTION")
res73: Option[String] = None
... or both [see envOrSome()].
I don't know of any way to make it look like any/all random env vars are set without actually setting them before running your tests.
You shouldn't test it in a unit test.
Just extract it out:

class F(val param: String) {
  ...
}

In your prod code you do:

new F(sys.env("ENVIRONMENT_VARIABLE"))
I would encapsulate the configuration in an abstraction which does not expose the implementation, maybe a class ConfigValue.
I would put the implementation in a class ConfigValueInEnvVar extends ConfigValue
This allows me to test the code that relies on the ConfigValue without having to set or clear environment variables.
It also allows me to test the base implementation of storing a value in an environment variable as a separate feature.
It also allows me to store the configuration in a database, a file or anything else, without changing my business logic.
I select implementation in the application layer.
I put the environment variable logic in a supporting domain.
I put the business logic and the traits/interfaces in the core domain.