TLDR: How can I get better output from pytest?
I'm using Django with regular Python 3 unittest test cases.
I've just switched to pytest-django for running tests.
pytest throws an error for almost all of my 149 tests: pages and pages of the same error.
self = <RegexURLResolver 'project.urls' (None:None) ^/>

    @property
    def reverse_dict(self):
        language_code = get_language()
        if language_code not in self._reverse_dict:
            self._populate()
>       return self._reverse_dict[language_code]
E       KeyError: 'en-us'
Which wasn't the problem at all; it led me down the wrong path. The actual issue was a syntax error in one of my views.py files.
./manage.py test resulted in:
snip
File "/home/roland/project/views.py", line 20
code = zip(list1, list2])
SyntaxError: invalid syntax
Notice the stray ] at the end, which was the real problem.
So: How can I get more useful output on problems when using pytest?
Btw:
After finding this and scrolling back through the pytest output, the syntax error was mentioned there after all. It was just buried in the output.
You can use the --maxfail=1 option so pytest stops immediately on the first failure.
Also, make sure your pytest.ini is set up properly so that pytest knows it should be using pytest-django:
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
For my workflow, I usually do the following:
run pytest --maxfail=1 myfile.py &> pytest-output.txt
tail, grep, or search the text file for errors (see the example after this list)
Fix and iterate
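For example, assuming the pytest-output.txt file from the first step, something like this surfaces the real failure quickly:
grep -n -E "SyntaxError|ERROR|FAILED" pytest-output.txt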
There are a lot of other configuration options that will help you get more meaningful output from pytest.
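For instance, a pytest.ini along these lines bakes the useful flags in (a sketch: -ra prints a short summary of every non-passing test, and --tb controls traceback verbosity):
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
addopts = -ra --maxfail=1 --tb=long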
Related
I am trying to run a performance test on the scenario tagged as perf from the feature file below:
#tag1 #tag2 #tag3
**background:**
user login
#tag4 #perf
**scenario1:**
#tag4
**scenario2:**
Below is my .scala file setup:
class PerfTest extends Simulation {
  val protocol = karateProtocol()
  val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath"))
  setUp(
    getTags.inject(
      atOnceUsers(1)
    ).protocols(protocol)
  )
}
I have tried passing the tags from the command line, as well as passing the tag as an argument to the exec method in the Scala setup.
Terminal command:
mvn clean test-compile gatling:test "-Dkarate.env={env}" "-Dkarate.options= --tags #perf"
.scala update: I have also tried passing the tag as an argument in the karateFeature call.
val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath", "#perf"))
Both scenarios are executed with either approach. Any pointers on how I can force only the test tagged perf to run?
I wanted to share my findings here. I realized it works fine when I pass the tag info in the .scala file.
My scenario with the perf tag was a combination of a GET and a POST call, since I needed some data from the GET call to pass into the POST call. That's why I was seeing both calls when running the performance test.
I did not find any reference in the karate-gatling documentation to passing tags in the terminal command, so I am assuming that might not be a supported case.
I am working through https://www.kubeflow.org/docs/distributions/gke/deploy/deploy-cli/ and at the stage bash ./pull-upstream.sh there is a problem, which I have isolated to a single command inside the script:
kpt pkg get https://github.com/zijianjoy/pipelines.git/manifests/kustomize/#upgradekpt upstream
When I run this command alone, I get the same error as when it runs in the script:
Package "upstream":
Fetching https://github.com/zijianjoy/pipelines#upgradekpt
From https://github.com/zijianjoy/pipelines
* branch upgradekpt -> FETCH_HEAD
Adding package "manifests/kustomize".
Fetched 1 package(s).
Error: /home/tester_user/gcp-blueprints/kubeflow/apps/pipelines/upstream/third-party/argo/upstream/manifests/namespace-install/overlays/argo-server-deployment.yaml: wrong Node Kind for expected: MappingNode was SequenceNode: value: {- op: add
path: /spec/template/spec/containers/0/args/-
value: --namespaced}
I made some mistakes following the script during the setup (which I think I corrected), so it could be something I did. Even so, it would be good to know why this error is happening, for my general understanding.
I am on Google Cloud Platform, using the command-line prompt built in to the web UI.
I'm trying to debug a Python codebase that uses tox for unit tests. One of the failing tests is proving difficult to figure out, and I'd like to use pudb to step through the code.
At first thought, one would think to just pip install pudb, then in the unit test code add import pudb and pudb.set_trace(). But that results in a ModuleNotFoundError:
>       import pudb
E       ModuleNotFoundError: No module named 'pudb'

tests/mytest.py:130: ModuleNotFoundError
ERROR: InvocationError for command '/Users/me/myproject/.tox/py3/bin/pytest tests' (exited with code 1)
Noticing the .tox project folder leads one to realize there's a site-packages folder within it, which makes sense, since the point of tox is to manage testing under different virtualenv scenarios. This also means there's a tox.ini configuration file, with a deps section that may look like this:
[tox]
envlist = lint, py3
[testenv]
deps =
pytest
commands = pytest tests
Adding pudb to the deps list should solve the ModuleNotFoundError, but it leads to another error:
self = <_pytest.capture.DontReadFromInput object at 0x103bd2b00>

    def fileno(self):
>       raise UnsupportedOperation("redirected stdin is pseudofile, "
                                   "has no fileno()")
E       io.UnsupportedOperation: redirected stdin is pseudofile, has no fileno()

.tox/py3/lib/python3.6/site-packages/_pytest/capture.py:583: UnsupportedOperation
So, I'm stuck at this point. Is it not possible to use pudb instead of pdb within Tox?
There's a package called pytest-pudb which overrides the pudb entry points within an automated test environment like tox to successfully jump into the debugger.
To use it, just make your tox.ini file have both the pudb and pytest-pudb entries in its testenv dependencies, similar to this:
[tox]
envlist = lint, py3
[testenv]
deps =
pytest
pudb
pytest-pudb
commands = pytest tests
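If you still hit the stdin capture error, pytest's -s flag (short for --capture=no) stops pytest from replacing stdin, and tox can forward extra flags to pytest if you add {posargs} to the test command. A sketch of that variant:
[testenv]
deps =
    pytest
    pudb
    pytest-pudb
commands = pytest {posargs} tests
Then run, for example: tox -e py3 -- -s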
Using the original pdb (not pudb) can work too; at least it works with the Django and Nose test runners. Without changing tox.ini, simply add a pdb breakpoint wherever you need it, with:
import pdb; pdb.set_trace()
Then, when it gets to that breakpoint, you can use the regular pdb commands:
w - print stacktrace
s - step into
n - step over
c - continue
p - print an argument value
a - print arguments of current function
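Other handy ones are l (list source around the current line) and q (quit the debugger).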
I use Anonymous Python Functions in BitBake recipes to set variables during parsing.
Now I wonder if I can check whether a specific variable is set or not. If it isn't, I want to generate a BitBake error that stops the build process.
Pseudo code, that I want to create:
python __anonymous () {
    if d.getVar('MY_VARIABLE', True) == "":
        <BITBAKE ERROR with custom message "MY_VARIABLE not found">
}
You can call bb.fatal("MY_VARIABLE not set"), which will print that error and abort the build by throwing an exception.
Beware that d.getVar() returns None when the variable is unset; you only get the empty string if that is your default value.
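Putting both points together, the anonymous function could look like this (a sketch, reusing the variable name from the question; the not check catches both None and the empty string):
python __anonymous () {
    if not d.getVar('MY_VARIABLE', True):
        bb.fatal("MY_VARIABLE not set")
}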
Output is possible at different log levels, from Python as well as from shell script code.
For use in Python there are:
bb.fatal
bb.error
bb.warn
bb.note
bb.plain
bb.debug
For use in shell scripts there are:
bbfatal
bberror
bbwarn
bbnote
bbplain
bbdebug
For example, if you want to throw an error in your recipe's do_install_append function:
bbfatal "something went terribly wrong!"
How can I drop into pdb inside a funcarg function? And how can I see output from print statements in funcarg functions?
My original question included the following, but it turns out I was simply instrumenting the wrong funcarg. Sigh.
I tried:
print "hi from inside funcargs"
invoking with and without -s.
I tried:
import pytest
pytest.set_trace()
And:
import pdb
pdb.set_trace()
And:
raise "hi from inside funcargs"
None produced any output or caused a test failure.
The first thing that comes to mind is py.test -s.
But by default funcargs give you tracebacks and output/errors. What plugins are you using? Something is clearly hiding it.
For example, for this program:
def pytest_funcarg__foo(request):
    print 'hi'
    raise IOError

def test_fun(foo):
    pass
a py.test call gives me both: a traceback in the funcarg function and the printed text.
To debug a funcarg:
def pytest_funcarg__myfuncarg(request):
    import pytest
    pytest.set_trace()
    ...

def test_function(myfuncarg):
    ...
Then:
python -m pytest test_function.py
As Ronny answered, to see output from a funcarg, pytest -s works.
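For reference, on newer pytest versions the pytest_funcarg__ prefix is replaced by the @pytest.fixture decorator; the same trick would look roughly like this (a sketch, reusing the hypothetical names from above):
import pytest

@pytest.fixture
def myfuncarg():
    pytest.set_trace()  # drops into the debugger while the fixture is being set up
    ...

def test_function(myfuncarg):
    ...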