Why does pytest.main return an int or an exit code - pytest

pytest.main supposedly returns an integer or an ExitCode, according to the type hints in its source code.
I don't understand in what situation an integer would be returned; I only ever get ExitCodes (ExitCode.OK, etc.).

If we take a look at Pytest's source code, we can see that the integers come from this line:
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(config=config)
This is the return code from Pytest plugins: Pytest will try running that hook for all the loaded plugins and return the first non-None result. You can see an example in the help/version hook.
I'm guessing that an integer was the expected type for this hook in the earliest versions of Pytest, since the ExitCode class dates from Pytest 5. Integers also allow Pytest plugins to return various exit codes without being constrained by Pytest's own ExitCode.
You can read more on Pytest plugins and the Pytest hook reference.
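For illustration, here is a minimal sketch of how a plugin (or a root conftest.py) could implement pytest_cmdline_main and return a plain int, which pytest.main would then return unchanged; the option name and the exit value are made up for this example:
import pytest

def pytest_addoption(parser):
    # made-up flag, only for this illustration
    parser.addoption("--short-circuit", action="store_true", default=False)

@pytest.hookimpl(tryfirst=True)
def pytest_cmdline_main(config):
    if config.getoption("--short-circuit"):
        return 7  # a plain int, not an ExitCode; pytest.main() passes it through
    return None   # fall through to the default implementation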

Related

Do I need to declare None for mypy in pytest functions?

So I basically just read in the documentation that:
If a function does not explicitly return a value, give it a return type of None. Using a None result in a statically typed context results in a type check error.
Does that also include pytest functions? Do I have to annotate every pytest function with None?
Yes, or run mypy with --check-untyped-defs, though of course that may hide other functions you've forgotten to annotate.
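For illustration, a minimal sketch of what those annotations look like on a fixture and a test (the names are made up):
import pytest

@pytest.fixture
def sample_value() -> int:
    return 3

def test_sample_value(sample_value: int) -> None:
    assert sample_value == 3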

How to load multiple modules implementing the same behaviour

I do not understand how one is supposed to use multiple modules that each implement the same behaviour, since I get this error at compile time:
Function is already imported from
In my case I have two modules implementing the gen_event behaviour, and I am trying to import them in a third module.
I get the error message whenever I try to compile this code:
-module(mgr).
-import(h1,[init/1]). % implements gen_event
-import(h2,[init/1]). % implements gen_event
You can't do that. Import is a simple trick to avoid writing the fully qualified name of a function. It does nothing but tell the compiler: when you see init(P) in this module, replace it with h1:init(P).
Thus it is not possible to import several functions with the same name/arity.
For short names, I do not see any benefit in using import.
If you are using module:function with long names, and you want to shorten the lines in your code, it is possible to use macros instead, and there is no limitation (but also little chance that the function names are the same :o):
-define(Func1(Var1,...,VarN), module1:func(Var1,...,VarN)).
-define(Func2(Var1,...,VarN), module2:func(Var1,...,VarN)).
...
?Func1(A1,...,AN);
...
?Func2(B1,...,BN);
Edit
The next example illustrates how it works. First, I create the module mod1 as follows:
-module(mod1).
-export([test/1]).

test(P) ->
    case P of
        1 -> ok;
        2 -> mod2:test()
    end.
and I test it in the shell:
1> c(mod1).
{ok,mod1}
2> mod1:test(1).
ok
3> mod1:test(2).
** exception error: undefined function mod2:test/0
4> % this call failed because mod2 was not defined.
4> % let's define it and compile it.
mod2 is created as:
-module(mod2).
-export([test/0]).

test() ->
    io:format("now it works~n").
continue in the shell:
4> c(mod2).
{ok,mod2}
5> mod1:test(1).
ok
6> mod1:test(2).
now it works
ok
7>
As you can see, it is not necessary to modify mod1; you only need to create and compile mod2 (note that it would be the same if mod2 already existed but the function test/0 was not exported).
If you want to verify that your code is not calling undefined functions, you can use external tools. As I am using rebar3 to manage my projects, I use the command rebar3 xref to perform this check. Note that calling an undefined function is only a warning; it is meaningful in the context of application upgrading. This verification is not bulletproof: it is done at build time, so it does not guarantee that the modules you need will be present, with the right version, on a production system. That opens a lot more interesting questions about versioning, code loading...

Breakdown of fixture setup time in py.test

I have some py.test tests with multiple dependent and parameterized fixtures, and I want to measure the time taken by each fixture. However, with --durations the log only shows the setup time for the actual tests; it doesn't give me a breakdown of how long each individual fixture took.
Here is a concrete example of how to do this:
import logging
import time

import pytest

logger = logging.getLogger(__name__)


@pytest.hookimpl(hookwrapper=True)
def pytest_fixture_setup(fixturedef, request):
    start = time.time()
    yield
    end = time.time()
    logger.info(
        'pytest_fixture_setup'
        f', request={request}'
        f', time={end - start}'
    )
With output similar to:
2018-10-29 20:43:18,783 - INFO pytest_fixture_setup, request=<SubRequest 'some_data_source' for <Function 'test_ruleset_customer_to_campaign'>>, time=3.4723987579345703
The magic is hookwrapper:
pytest plugins can implement hook wrappers which wrap the execution of other hook implementations. A hook wrapper is a generator function which yields exactly once. When pytest invokes hooks it first executes hook wrappers and passes the same arguments as to the regular hooks.
One fairly important gotcha that I ran into is that the conftest.py has to be in your project's root folder in order to pick up the pytest_fixture_setup hook.
There isn't anything built in for that, but you can easily implement it yourself by using the new pytest_fixture_setup hook in a conftest.py file.
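For instance, a minimal sketch of such a conftest.py (the dictionary and the terminal summary are my own additions; it assumes the pytest_fixture_setup hook and the FixtureDef.argname attribute, and keeps only the most recent setup time per fixture name):
import time

import pytest

FIXTURE_DURATIONS = {}

@pytest.hookimpl(hookwrapper=True)
def pytest_fixture_setup(fixturedef, request):
    # time the fixture setup wrapped by this hook
    start = time.time()
    yield
    FIXTURE_DURATIONS[fixturedef.argname] = time.time() - start

def pytest_terminal_summary(terminalreporter):
    # print a per-fixture breakdown, slowest first
    terminalreporter.write_line("fixture setup durations:")
    for name, duration in sorted(FIXTURE_DURATIONS.items(), key=lambda kv: -kv[1]):
        terminalreporter.write_line(f"  {name}: {duration:.3f}s")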

Pytest Finalizers - order of execution

I am writing a py.test program. Consider the following py.test fixture code:
import pytest

@pytest.fixture(scope="class")
def my_fixture(request):
    def fin1():
        print("fin1")
    request.addfinalizer(fin1)

    def fin2():
        print("fin2")
    request.addfinalizer(fin2)
What is the execution order? I didn't find any mention in the documentation regarding the execution order of finalizers.
Thanks in advance.
I guess the easiest way would be to just try running your code with -s and see in which order the prints happen.
What I'd recommend is to use yield fixtures instead, so you can explicitly control the teardown order easily:
import pytest

@pytest.yield_fixture(scope="class")
def my_fixture():
    # do setup
    yield
    fin1()
    fin2()
Starting with pytest 3.0 (which will be released soon), this will also work by just using yield with the normal @pytest.fixture decorator, and it will be the recommended way of doing teardown.
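For reference, a minimal sketch of that modern form, assuming pytest >= 3.0: the plain @pytest.fixture decorator with a yield, where the teardown lines run in exactly the order they are written:
import pytest

@pytest.fixture(scope="class")
def my_fixture():
    # do setup
    yield           # tests using the fixture run here
    print("fin1")   # first teardown step
    print("fin2")   # second teardown step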

When is @pytest.hookimpl executed?

I am new to pytest. When is @pytest.hookimpl executed? And what is the complete usage of it? I have tried with logs. For hookwrapper=True, it prints three sets of before and after yield for a single test.
pytest uses @pytest.hookimpl just to label hook methods. (So @pytest.hookimpl is executed when pytest collects the hook method.)
If you read the source code of pytest, you can find these codes:
def normalize_hookimpl_opts(opts):
    opts.setdefault("tryfirst", False)
    opts.setdefault("trylast", False)
    opts.setdefault("hookwrapper", False)
    opts.setdefault("optionalhook", False)
It means pytest will label the hook method with @pytest.hookimpl(tryfirst=False, trylast=False, hookwrapper=False, optionalhook=False) by default. Pytest will treat these hook methods in different ways, according to this label (decorator), when executing them; see the sketch below for the ordering options.
Take the hookwrapper parameter, for example. If the hook method is labeled hookwrapper=True, pytest will execute the part before the yield first and then execute the other hook implementations of the same type. After those have executed, the part after the yield will be executed. (This feature is just like pytest fixtures.)
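For instance, a minimal sketch (the hook choices are my own, purely for illustration) of how tryfirst and trylast influence the ordering of plain, non-wrapper implementations of a hook:
import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_collection_modifyitems(items):
    # asked to run as early as possible among implementations of this hook
    print("modifyitems: runs early")

@pytest.hookimpl(trylast=True)
def pytest_runtest_setup(item):
    # asked to run as late as possible among implementations of this hook
    print("runtest_setup: runs late")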
One usage of @pytest.hookimpl(hookwrapper=True) is that you can calculate the total time spent in some hook methods.
(Here, the example code measures the test collection time.)
import time

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_collection(session):
    collect_timeout = 5
    collect_begin_time = time.time()
    yield
    collect_end_time = time.time()
    c_time = collect_end_time - collect_begin_time
    if c_time > collect_timeout:
        raise Exception('Collection timeout.')