How to write tests for micropython

I would like to write tests for the MicroPython code I am writing for the micro:bit. The examples here use doctest, but I am open to workarounds for any testing system.
Working python example called testing_python.py:
def sum(a, b):
    '''
    >>> sum(3, 0)
    3
    '''
    return a + b

print(sum(2, 2))
When I test using:
python -m doctest -v testing_python.py
I get:
4
Trying:
    sum(3, 0)
Expecting:
    3
ok
Failing example using micropython for the micro:bit called testing_micropython.py:
from microbit import *

def sum(a, b):
    '''
    >>> sum(3, 0)
    3
    '''
    return a + b

print(sum(2, 2))
When I test using:
python -m doctest -v testing_micropython.py
I get:
Traceback (most recent call last):
...
ModuleNotFoundError: No module named 'microbit'
I tried wrapping the 'import microbit' statement in a try/except clause. That makes this simple example work; however, as soon as I use any of the other non-Python names from the micro:bit library, such as Image or utime, the doctest fails again.
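For reference, the guard looks roughly like this (a sketch; the fallback stand-ins for display and Image are my own placeholders, and every extra name used from microbit needs one, which quickly becomes unmanageable):
try:
    from microbit import *
except ImportError:
    # Running off-device (e.g. under doctest): substitute no-op stand-ins.
    class Image:
        def __init__(self, *args, **kwargs):
            pass

    class _FakeDisplay:
        def show(self, *args, **kwargs):
            pass

    display = _FakeDisplay()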

unittest.mock is a library for testing in Python. It allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.
This can be used to write and test embedded software, like MicroPython code, without the hardware.
You can go as sophisticated as you want, but a simple way to avoid an error when importing the microbit module is to mock the microbit module, e.g. with the following files:
|- testing_micropython.py
|- microbit
|   |- __init__.py
My testing_micropython.py has:
from microbit import *

def sum(a, b):
    """
    >>> sum(3, 0)
    3
    """
    return a + b

display.show(sum(2, 2))
The microbit/__init__.py has:
from unittest.mock import MagicMock
display = MagicMock()
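The same trick extends to the other names the question mentions (a sketch; which names you stub depends on what your code imports, and a mocked utime would need its own top-level utime.py stub file next to the microbit directory):
# microbit/__init__.py -- stub out whatever the code under test uses
from unittest.mock import MagicMock

display = MagicMock()
Image = MagicMock()
button_a = MagicMock()
button_b = MagicMock()

def sleep(ms):
    # No-op off-device; the real microbit.sleep blocks for ms milliseconds.
    pass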
Which gives the following output:
python -m doctest -v testing_micropython.py
Trying:
    sum(3, 0)
Expecting:
    3
ok
1 items had no tests:
    testing_micropython
1 items passed all tests:
    1 tests in testing_micropython.sum
1 tests in 2 items.
1 passed and 0 failed.
Test passed.
As the micro:bit hardware is very memory constrained, I would avoid putting anything unnecessary in the file that will be loaded onto the micro:bit, so I would suggest avoiding doctest.
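Following that advice, the tests could instead live in a separate file that never gets flashed to the board, e.g. a test_testing_micropython.py using unittest (a sketch; it relies on the mocked microbit package above so the import succeeds off-device):
import unittest

from testing_micropython import sum  # importing runs display.show(...), which the mock absorbs

class TestSum(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum(3, 0), 3)

if __name__ == '__main__':
    unittest.main()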

Related

How to pass extra argument to python unittest?

I want to pass the library location while running the unittest command, since I have to import the library in order to use it. Suppose the library name is some_lib. The same library will be executed on Linux as well as Windows.
Using python version 3.7.11
Used command : python3 -m unittest test_file.py lib_location
Details of the test file:
import sys
sys.path.append(sys.argv[1])  # Hard-coding the path instead works fine.
import some_lib
import unittest

class TestCasesForSerializePy(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.archive_handler = some_lib.open('./test.archive')

    def test_existing_archive(self):
        self.assertTrue(self.archive_handler.isOpen())

if __name__ == '__main__':
    # unittest.main(argv=[sys.argv[0]])
    # sys.argv.pop()
    unittest.main()
Error: ModuleNotFoundError: No module named 'C:/REC_158/build/lib'
I tried different approaches found via Google.
Approach 1:
sys.argv.pop()
unittest.main()
Approach 2:
del sys.argv[1:]
unittest.main()
Approach 3:
unittest.main(argv=[sys.argv[0]])
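One common workaround (a sketch, not from the original thread; SOME_LIB_PATH is a hypothetical variable name) is to pass the location through an environment variable instead, since unittest.main() tries to interpret extra command-line arguments as test names or modules:
import os
import sys
import unittest

# Read the library location from the environment instead of argv, so
# unittest never sees it as a command-line argument.
lib_location = os.environ.get("SOME_LIB_PATH")
if lib_location:
    sys.path.append(lib_location)

import some_lib

class TestCasesForSerializePy(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.archive_handler = some_lib.open('./test.archive')

    def test_existing_archive(self):
        self.assertTrue(self.archive_handler.isOpen())

if __name__ == '__main__':
    unittest.main()
This also sidesteps the Linux/Windows argv difference: run SOME_LIB_PATH=/path/to/lib python3 -m unittest test_file.py on Linux, or set SOME_LIB_PATH=C:\path\to\lib first on Windows.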

Pytest skipping a test class that inherits from a builtin

TL;DR:
The question is simple: please look at the base case in the code below. Pytest just ignores this class. How should I run tests on such a class?
I just started switching from simple Python tests (with plain assert) to testing with pytest and came across this problem. Most of my tests are classes that extend real classes with test methods. One of my classes inherits from collections.UserDict, and pytest just ignores it.
# Inheriting from object is OK; inheriting from dict is not. But I need dict :(
class TestFoo(dict):
    def test_foo(self):
        assert 1
output:
/home/david/PycharmProjects/proj/venv/bin/python /snap/pycharm-professional/302/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 44145 --file /snap/pycharm-professional/302/plugins/python/helpers/pycharm/_jb_pytest_runner.py --path /home/david/PycharmProjects/proj/tests/unit_tests_2.py
Testing started at 11:07 ...
Launching pytest with arguments /home/david/PycharmProjects/proj/tests/unit_tests_2.py --no-header --no-summary -q in /home/david/PycharmProjects/proj/tests
============================= test session starts ==============================
collecting ... collected 0 items
============================= 2 warnings in 0.03s ==============================
Process finished with exit code 5
Empty suite
UPD: Thanks to @Teejay Bruno. Running the tests from PyCharm was hiding a warning from me:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
The warning tells you the problem:
PytestCollectionWarning: cannot collect test class 'TestFoo' because it has a __init__ constructor
If I understand what you're trying to do, why not just pass the object as a fixture?
import pytest

@pytest.fixture
def my_dict():
    return dict()

class TestFoo:
    def test_foo(self, my_dict):
        assert len(my_dict) == 0
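The same pattern applies to the UserDict subclass from the question (a sketch; MyDict and its double method are made-up stand-ins for the real class under test):
from collections import UserDict

import pytest

class MyDict(UserDict):
    # Stand-in for the real UserDict subclass under test.
    def double(self, key):
        return self.data[key] * 2

@pytest.fixture
def my_dict():
    return MyDict(a=1)

class TestMyDict:
    def test_double(self, my_dict):
        assert my_dict.double('a') == 2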

how to pytest an app that can use ipython embed as arg parameter?

I have a Python application with an option -y that ends its procedure in an IPython terminal, with all created objects ready for interactive manipulation.
I'm trying to work out how to design a pytest test that would let me, somehow, interact with this terminal: check that the objects exist in the Python session, exit, and then capture the results for asserts (I know how to use capsys, for example).
During my attempts (all failed so far) I got a suggestion to use the pytest -s option, which, obviously, does not cover my case.
So I have this example:
go_to_python.py
import argparse
import random

parser = argparse.ArgumentParser()
parser.add_argument(
    "-y",
    "--ipython",
    action="store_true",
    dest="ipython",
    help="start iPython interpreter")
args = parser.parse_args()

if __name__ == "__main__":
    randomlist = []
    for i in range(0, 5):
        n = random.randint(1, 30)
        randomlist.append(n)
    if args.ipython:
        import IPython
        IPython.embed(colors="neutral")
How could I create a test that asserts that randomlist exists inside the IPython session?
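One possible approach (my own sketch, not an answer from the thread): since IPython.embed() wants a real terminal, drive the script through a pseudo-terminal with pexpect and interrogate the session from the test. The prompt patterns below are assumptions and may need adjusting, because IPython's prompt rendering can emit terminal control sequences:
import pexpect

def test_randomlist_exists_in_ipython_session():
    # Spawn the script under a pty so IPython believes it has a terminal.
    child = pexpect.spawn('python go_to_python.py -y',
                          encoding='utf-8', timeout=10)
    child.expect(r'In \[1\]')           # wait for the first IPython prompt
    child.sendline('len(randomlist)')   # query the object built before embed()
    child.expect(r'Out\[1\]: 5')        # the list was filled with 5 ints
    child.sendline('exit')              # leave the embedded session
    child.expect(pexpect.EOF)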

How can I debug my python unit tests within Tox with PUDB?

I'm trying to debug a Python codebase that uses tox for unit tests. One of the failing tests is proving difficult to figure out, and I'd like to use pudb to step through the code.
At first thought, one would think to just pip install pudb, then in the unit test code add import pudb and pudb.set_trace(). But that results in a ModuleNotFoundError:
>       import pudb
E       ModuleNotFoundError: No module named 'pudb'

tests/mytest.py:130: ModuleNotFoundError
ERROR: InvocationError for command '/Users/me/myproject/.tox/py3/bin/pytest tests' (exited with code 1)
Noticing the .tox project folder leads one to realize there's a site-packages folder within it, which makes sense since the point of tox is to manage testing under different virtualenv scenarios. This also means there's a tox.ini configuration file, with a deps section that may look like this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
commands = pytest tests
Adding pudb to the deps list should solve the ModuleNotFoundError, but it leads to another error:
self = <_pytest.capture.DontReadFromInput object at 0x103bd2b00>

    def fileno(self):
>       raise UnsupportedOperation("redirected stdin is pseudofile, "
                                   "has no fileno()")
E       io.UnsupportedOperation: redirected stdin is pseudofile, has no fileno()

.tox/py3/lib/python3.6/site-packages/_pytest/capture.py:583: UnsupportedOperation
So, I'm stuck at this point. Is it not possible to use pudb instead of pdb within Tox?
There's a package called pytest-pudb which overrides the pudb entry points within an automated test environment like tox to successfully jump into the debugger.
To use it, just make your tox.ini file have both the pudb and pytest-pudb entries in its testenv dependencies, similar to this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
    pudb
    pytest-pudb
commands = pytest tests
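With pytest-pudb installed you can also drop into the debugger on a test failure by running pytest with its --pudb flag, analogous to pytest's built-in --pdb (check the pytest-pudb documentation for the options your version supports).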
Using the original PDB (not PUDB) could work too. At least it works with the Django and Nose test runners. Without changing tox.ini, simply add a pdb breakpoint wherever you need it, with:
import pdb; pdb.set_trace()
Then, when execution reaches that breakpoint, you can use the regular PDB commands:
w - print stacktrace
s - step into
n - step over
c - continue
p - print an argument value
a - print arguments of current function

how to rename a test name in pytest based on fixture param

I need to run the same tests on different devices. A fixture supplies the devices' IP addresses, and all tests run for each IP the fixture provides. At the same time, I need the IP address appended to the test name so I can analyze results quickly. In the pytest results the test name is the same for all params; only in the log or output can I see which parameter was used. Is there any way to change the test name by appending the param from the fixture?
class TestClass:
    def test1(self):
        pass

    def test2(self):
        pass
We need to run the whole test class for every device, all test methods in sequence for each device. We cannot run each test in its own parameter cycle; we need to run the whole test class in a parameter cycle. We achieved this with a fixture implementation, but we couldn't rename the tests.
You can read my answer: How to customize the pytest name
I could change the pytest name by creating a hook in a conftest.py file.
However, I had to use pytest private variables, so my solution could stop working when you upgrade pytest.
You don't need to change the test name. The use case you're describing is exactly what parametrized fixtures are for.
Per the pytest docs, here's output from an example test run. Notice how the fixture values are included in the failure output right after the name of the test. This makes it obvious which test cases are failing.
$ pytest
======= test session starts ========
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
test_expectation.py ..F
======= FAILURES ========
_______ test_eval[6*9-42] ________

test_input = '6*9', expected = 42

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
>       assert eval(test_input) == expected
E       AssertionError: assert 54 == 42
E        +  where 54 = eval('6*9')

test_expectation.py:8: AssertionError
======= 1 failed, 2 passed in 0.12 seconds ========
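Applied to the device use case, a parametrized class-scoped fixture gives you this for free (a sketch; the device_ip fixture name and the IP values are made up). pytest appends each param to every test name, and it groups tests so the whole class runs for one device before moving on to the next param:
import pytest

# The IPs would normally come from configuration; these are placeholders.
@pytest.fixture(scope='class', params=['192.168.0.1', '192.168.0.2'])
def device_ip(request):
    return request.param

class TestClass:
    def test1(self, device_ip):
        # connect to the device at device_ip and exercise it here
        assert device_ip

    def test2(self, device_ip):
        assert device_ip
Running pytest -v then reports names like TestClass::test1[192.168.0.1], so no renaming hook is needed.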