Pytest not running conftest.py at all

I'm running pytest 6.0.1. In the root of my project I have a tests/x directory:
% ls tests/x
__init__.py __pycache__ conftest.py test_x.py
conftest.py contains:
print('IN CONFTEST')

def pytest_sessionstart(session):
    print('SESSIONSTART')
test_x.py contains:
def test_x():
    print("X")
It appears nothing in conftest.py runs when I run pytest:
% pytest tests/x -s
=================================================================================================== test session starts ===================================================================================================
platform darwin -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: OMITTED (It's the parent of "tests")
collected 1 item
tests/x/test_x.py X
.
==================================================================================================== 1 passed in 0.01s ====================================================================================================
I'm completely stumped. I haven't found anything about anyone with an issue like this. Help!

Update: Reinstalling pytest seems to have worked.

Python 3.8.14: ModuleNotFoundError: No module named 'commerce'

I'm building a Django project where project 1 is the core, with Django project 2 inside it as a feature. Project 2 is added as an app called mycommerce.
The objective is to have a common settings.py, urls.py, wsgi.py and manage.py for ease of use, just like in a typical Django project. The necessary code from those four files in project 2 has been added to project 1, keeping other aspects as they are.
However, I'm getting an error when building my Docker container, which during the build executes a script called setup.py on Ubuntu 22.04. This is where the error occurs.
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/app/setup.py", line 18, in <module>
from commerce import get_version # noqa isort:skip
ModuleNotFoundError: No module named 'commerce'
[end of output]
The lines of setup.py that throw the error:
#!/usr/bin/env python
"""
Installation script:
To release a new version to PyPi:
- Ensure the version is correctly set in oscar.__init__.py
- Run: make release
"""
import os
import re
import sys
from setuptools import find_packages, setup
PROJECT_DIR = os.path.dirname(__file__)
sys.path.append(os.path.join(PROJECT_DIR, 'src'))
from commerce import get_version # noqa isort:skip -----> Line 18 in the error trace
My project structure:
myapp
|- __init__.py
|- manage.py
|- .docker
| |-commerce
| |-docker
| |-setup.py
|- docker-compose.yml
|- docker-compose.env
|- auth
|- posts
|- mycommerce
| |-src
| |-commerce
| |- __init__.py
| |- config.py
| |- defaults.py
| |-sandbox
| |- __init__.py
| |-manage.py
|-__init__.py
|- static
|- templates
|- .env
The __init__.py inside the commerce folder (see the project structure) is what setup.py is trying to import while building the Docker container. I understand this has to do with appending the right path for setup.py to execute successfully, but it's not working.
The __init__.py file in which get_version() is defined:
# Use 'alpha', 'beta', 'rc' or 'final' as the 4th element to indicate release type.
VERSION = (3, 2, 0, 'alpha', 2)

def get_short_version():
    return '%s.%s' % (VERSION[0], VERSION[1])

def get_version():
    version = '%s.%s' % (VERSION[0], VERSION[1])
    # Append 3rd digit if > 0
    if VERSION[2]:
        version = '%s.%s' % (version, VERSION[2])
    elif VERSION[3] != 'final':
        mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'c'}
        version = '%s%s' % (version, mapping[VERSION[3]])
        if len(VERSION) == 5:
            version = '%s%s' % (version, VERSION[4])
    return version
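As a quick sanity check, independent of the import failure, the version logic above can be exercised on its own (this is a copy of the function with the same VERSION tuple, not part of the project):

```python
# Standalone copy of get_version() with the VERSION tuple shown above.
VERSION = (3, 2, 0, 'alpha', 2)

def get_version():
    version = '%s.%s' % (VERSION[0], VERSION[1])
    # Append 3rd digit if > 0
    if VERSION[2]:
        version = '%s.%s' % (version, VERSION[2])
    elif VERSION[3] != 'final':
        mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'c'}
        version = '%s%s' % (version, mapping[VERSION[3]])
        if len(VERSION) == 5:
            version = '%s%s' % (version, VERSION[4])
    return version

print(get_version())  # -> 3.2a2
```

With the third digit 0 and 'alpha' as release type, the elif branch runs and the pre-release counter is appended, giving '3.2a2'.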
The Dockerfile:
FROM python:3.8.14
ENV PYTHONUNBUFFERED 1
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
COPY requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt
RUN groupadd -r django && useradd -r -g django django
COPY . /opt/myapp/mycommerce
RUN chown -R django /opt/myapp/mycommerce
WORKDIR /opt/myapp/mycommerce
RUN make install
USER django
RUN make build_sandbox
RUN cp --remove-destination ./mycommerce/src/commerce/static/commerce/img/image_not_found.jpg ./mycommerce/sandbox/public/media/
VOLUME ["/opt/myapp/mycommerce"]
WORKDIR /opt/myapp/mycommerce/sandbox
CMD ["python", "manage.py", "runserver", "0.0.0.0:85","uwsgi --ini uwsgi.ini"]
EXPOSE 85
I'm using the following repo for the commerce aspect of my Django project:
https://github.com/django-oscar/django-oscar
However, I have moved the Dockerfile and other files like setup.py, the Makefile and the manifest file into my .docker folder (see project structure), which also holds other containers, so that Docker-related files are in one place.
The core issue is that I have two manage.py files: one in the root folder (myapp) and one inside the commerce folder, which is a django-oscar project in itself. I have copied the contents of django-oscar's settings.py into the core settings.py and done the same for the URLs. However, other files are interlinked, and I don't wish to move them right away. I just need Docker to find the manage.py command to execute the script. I tried pointing to the root manage.py, but it still doesn't work. I'm missing something I can't figure out.
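One likely culprit (an assumption inferred from the structure shown, not a confirmed fix): the sys.path.append in setup.py resolves 'src' relative to setup.py's own directory, and after the move into .docker that directory no longer contains a src folder. A sketch of the path arithmetic, using hypothetical string paths named after the structure above:

```python
import posixpath

# Hypothetical locations from the project structure: setup.py now lives
# in myapp/.docker/, while the commerce package lives in
# myapp/mycommerce/src/commerce/.
setup_dir = 'myapp/.docker'

# What the original line computes: sys.path.append(PROJECT_DIR + '/src')
broken = posixpath.join(setup_dir, 'src')  # 'myapp/.docker/src' -- does not exist

# Adjusted guess: go up one level, then into mycommerce/src.
fixed = posixpath.normpath(
    posixpath.join(setup_dir, '..', 'mycommerce', 'src')
)  # 'myapp/mycommerce/src' -- where the commerce package actually is
```

In the real setup.py this would mean appending os.path.join(PROJECT_DIR, '..', 'mycommerce', 'src') (or copying the src tree into the Docker build context next to setup.py) so that `from commerce import get_version` can resolve.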

Pytest-cov does not show term or html reports

I am writing unit tests in Python and trying to generate code coverage, but I am not seeing the terminal or HTML reports.
My Python component structure is as follows:
.
|-- README.md
|-- bin
| |-- do_something.py
| `-- do_something.sh
|-- junit.xml
|-- lib
| `-- __init__.py
|-- pytest.ini
|-- requirements-dev.txt
|-- requirements.txt
|-- setup.py
`-- tests
|-- __init__.py
|-- env.py
`-- test_do_something.py
Source file (bin/do_something.py):
import sys

class DoSomething:
    def __init__(self, var=""):
        # define some initial variables
        self.var1 = var
        self.var2 = "Done"

    def do_something(self):
        var1 = self.var1 if self.var1 else "nothing"
        res = var1 + self.var2
        return res

def main(args):
    do_something_obj = DoSomething("Something")
    print(do_something_obj.do_something())

if __name__ == "__main__":
    main(sys.argv[1:])
Test class and test cases:
from unittest import TestCase

import bin.do_something as do_something
from bin.do_something import DoSomething

class TestDoSomething(TestCase):
    def setUp(self):
        self.a = "test"

    def test_do_something(self):
        do_something_test_obj = DoSomething(self.a)
        self.assertEquals("test Done", do_something_test_obj.do_something())
CONFIGURATIONS:
.coveragerc
[run]
include =
    bin/*.py,lib/*.py
omit =
    setup.py,tests/*.py

[report]
exclude_lines =
    if __name__ == .__main__.:
pytest.ini
[pytest]
testpaths = tests
The test cases execute and pass by executing the following command:
python3 -m py.test --cov='.' --cov-report=xml --cov-report=term --junitxml=junit.xml -o junit_family=xunit2
However, no reports are generated, and I can see the following output.
================================================================================= test session starts ==================================================================================
platform linux -- Python 3.6.8, pytest-7.0.1, pluggy-1.0.0
rootdir: /home/my_component, configfile: pytest.ini, testpaths: tests
plugins: cov-4.0.0
collected 5 items
tests/test_do_something.py ..... [100%]
============================================================================= 1 passed, 1 warning in 0.09s =============================================================================
ENVIRONMENT:
linux:
CentOS Linux release 7.9.2009 (Core)
5.4.201-1.el7.elrepo.x86_64
x86_64 x86_64 x86_64 GNU/Linux
Python:
Python 3.6.8
pip packages and versions:
astroid (2.11.7)
attrs (22.1.0)
coverage (6.2)
dill (0.3.4)
importlib-metadata (4.8.3)
iniconfig (1.1.1)
isort (5.10.1)
lazy-object-proxy (1.7.1)
mccabe (0.7.0)
packaging (21.3)
pip (9.0.3)
platformdirs (2.4.0)
pluggy (1.0.0)
py (1.11.0)
pylint (2.13.9)
pyparsing (3.0.9)
pytest (7.0.1)
pytest-cov (4.0.0)
setuptools (39.2.0)
tomli (1.2.3)
typed-ast (1.5.4)
typing-extensions (4.1.1)
wrapt (1.14.1)
zipp (3.6.0)
Any idea why I don't see the terminal or HTML reports (I also tried a separate --cov-report=html option, but I don't see the HTML directory either)? Am I missing something?
NOTE: I have tried deleting my virtual env and recreating it several times, but the outcome doesn't change.
I don't know why it's not showing reports, but remember that you can always create the reports after the test run:
python3 -m pytest --cov='.' --junitxml=junit.xml -o junit_family=xunit2
python3 -m coverage report
python3 -m coverage xml
python3 -m coverage html
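One thing worth double-checking as well (an assumption, since the root cause was not identified above): coverage.py's `source` option is often more predictable with pytest-cov than `include` glob patterns. A sketch of an alternative .coveragerc:

```ini
[run]
# Measure only these directories; 'source' tends to interact with
# pytest-cov more robustly than 'include' patterns.
source =
    bin
    lib
omit =
    setup.py
    tests/*

[report]
exclude_lines =
    if __name__ == .__main__.:
```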

'no module named setuptools' but it is contained in the DEPENDS variable

This problem concerns OpenEmbedded/Yocto.
I have source code which needs to be compiled by a custom Python 3 script, which means that a Python 3 script has to run during the do_compile() task.
The script imports setuptools, so I added DEPENDS += "python3-setuptools-native" to the recipe. As far as I understand the documentation, this should make the setuptools module available to the build process (native).
But when BitBake executes do_compile(), I get this error: no module named 'setuptools'.
Let me break it down to a minimal (non-)working example:
FILE: test.bb
LICENSE = "BSD"
LIC_FILES_CHKSUM = "file://test/LICENSE;md5=d41d8cd98f00b204e9800998ecf8427e"
DEPENDS += "python3-setuptools-native"
SRC_URI = "file://test.py \
file://LICENSE"
do_compile() {
    python3 ${S}/../test.py
}
FILE: test.py
import setuptools
print("HELLO")
bitbaking:
$ bitbake test
ERROR: test-1.0-r0 do_compile: Function failed: do_compile (log file is located at /path/to/test/1.0-r0/temp/log.do_compile.8532)
ERROR: Logfile of failure stored in: /path/to/test/1.0-r0/temp/log.do_compile.8532
Log data follows:
| DEBUG: Executing shell function do_compile
| Traceback (most recent call last):
| File "/path/to/test-1.0/../test.py", line 1, in <module>
| import setuptools
| ImportError: No module named 'setuptools'
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_compile (log file is located at /path/to/test/1.0-r0/temp/log.do_compile.8532)
ERROR: Task (/path/to/test.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 400 tasks of which 398 didn't need to be rerun and 1 failed.
NOTE: Writing buildhistory
Summary: 1 task failed:
/path/to/test.bb:do_compile
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
Is my expectation wrong that DEPENDS += "python3-setuptools-native" makes the Python 3 module setuptools available to the Python 3 script in do_compile()? How can I accomplish this?
Under the hood quite a bit more is needed to get working setuptools support. Luckily there's a class to handle that:
inherit setuptools3
This should be all that's needed to package a setuptools-based project with OE-Core. As long as your project has a standard setup.py, you don't need to write any do_compile() or do_install() functions.
If you do need to look at the details, meta/classes/setuptools3.bbclass and meta/classes/distutils3.bbclass contain what you need (including the rather unobvious way to call the native Python from a recipe).
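For illustration, a hedged sketch of what the minimal recipe above might become with the class (the checksum is carried over from the question's example, and a real project would ship a proper setup.py-based source tree):

```
LICENSE = "BSD"
LIC_FILES_CHKSUM = "file://LICENSE;md5=d41d8cd98f00b204e9800998ecf8427e"

SRC_URI = "file://setup.py \
           file://test.py \
           file://LICENSE"

S = "${WORKDIR}"

# setuptools3 supplies do_compile()/do_install() implementations that
# drive setup.py with the native Python toolchain, so setuptools is
# available during the build and no custom do_compile() is needed.
inherit setuptools3
```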

PyTest Suppress Results Debug Statement

I am using PyTest with the following options: -s, -v, and --resultlog=results.txt. This suppresses print statements from my test, but prints the test names and results as they are run and logs the results to results.txt.
However, if any tests fail, I also get a spew of information containing traceback, debug, etc. Since I am logging this to a file anyway, I don't want it printed to the screen, cluttering up my output.
Is there any way to disable the printing of just these debug statements, but still have it logged to my results file?
Visual example:
Currently, I see something like this:
$ py.test -sv --resultlog=results.txt test.py
=============================== test session starts =========================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- /...
cachedir: .cache
rootdir: /Users/jdinkel/Documents, inifile:
plugins: profiling-1.1.1, session2file-0.1.9
collected 3 items
test.py::TestClass::test1 PASSED
test.py::TestClass::test2 PASSED
test.py::TestClass::test3 FAILED
===================================== FAILURES ==============================
__________________________________ TestClass.test3 __________________________
self = <test.TestClass instance at 0x10beb5320>
def test3(self):
> assert 0
E assert 0
test.py:7: AssertionError
========================== 1 failed, 2 passed in 0.01 seconds ===============
But I would like to see this:
$ py.test -sv --resultlog=results.txt test.py
=============================== test session starts =========================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- /...
cachedir: .cache
rootdir: /Users/jdinkel/Documents, inifile:
plugins: profiling-1.1.1, session2file-0.1.9
collected 3 items
test.py::TestClass::test1 PASSED
test.py::TestClass::test2 PASSED
test.py::TestClass::test3 FAILED
========================== 1 failed, 2 passed in 0.01 seconds ===============
With no change to the results.txt file.
You should use the --tb switch to control traceback output.
e.g.
pytest tests/ -sv --tb=no --disable-warnings
--disable-warnings disables occasional pytest warnings, which I assume you don't want either.
From pytest help:
--tb=style traceback print mode (auto/long/short/line/native/no).
In addition to SilentGuy's answer: -rN suppresses the short summary of failed test cases.
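If you always want this behaviour, the flags from the answers above can be made the default via addopts in pytest.ini:

```ini
[pytest]
# --tb=no hides tracebacks; -rN disables the short summary lines.
addopts = --tb=no -rN
```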

unregistered task type import errors in celery

I'm having headaches getting Celery to work with my folder structure. Note I am using virtualenv, but it should not matter.
cive/
|-- celery_app.py
|-- __init__.py
|-- venv
`-- framework/
    |-- tasks.py
    |-- __init__.py
    `-- civeAPI/
        `-- (files tasks.py needs)
cive is my root project folder.
celery_app.py:
from __future__ import absolute_import
from celery import Celery

app = Celery('cive',
             broker='amqp://',
             backend='amqp://',
             include=['cive.framework.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
)

if __name__ == '__main__':
    app.start()
tasks.py (simplified):
from __future__ import absolute_import
import sys
# import other things
# append syspaths
from cive.celery_app import app

@app.task(ignore_result=False)
def start(X):
    # do things

def output(X):
    # output files

def main():
    for d in Ds:
        m = []
        m.append(start.delay(X))
        output([n.get() for n in m])

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
I then start workers via (outside root cive dir)
celery -A cive worker --app=cive.celery_app:app -l info
which seems to work fine, loading the workers and showing
[tasks]
. cive.framework.tasks.start_sessions
But when I try to run my tasks.py via another terminal:
python tasks.py
I get the error:
Traceback (most recent call last):
File "tasks.py", line 29, in <module>
from cive.celery_app import app
ImportError: No module named cive.celery_app
If I rename the import to:
from celery_app import app #without the cive.celery_app
I can eventually start the script but celery returns error:
Received unregistered task of type 'cive.start_sessions'
I think there's something wrong with my imports or config but I can't say what.
So this was a Python package problem, not particularly a Celery issue. I found the solution by looking at "How to fix 'Attempted relative import in non-package' even with __init__.py".
I've never even thought about this before, but I wasn't running Python in package mode. The solution is cd'ing out of your root project directory, then running the module as part of the package (note there is no .py after tasks):
python -m cive.framework.tasks
Now when I run the celery task everything works.
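The earlier failures make sense under Celery's default task naming: a task's registered name is its function's module path plus the function name, so the same file imported two different ways produces two different names. When the module runs as a script (module name __main__), Celery substitutes the app's main name ('cive' here), which would explain the unregistered 'cive.start_sessions'. A minimal sketch of that naming rule, without Celery itself:

```python
def default_task_name(app_main, module_name, func_name):
    # Sketch of Celery's default naming: "<module>.<function>", with the
    # app's main name substituted when the module runs as a script.
    if module_name == '__main__':
        module_name = app_main
    return '%s.%s' % (module_name, func_name)

# Worker side: tasks imported as part of the package (python -m ...).
assert default_task_name('cive', 'cive.framework.tasks', 'start') \
    == 'cive.framework.tasks.start'

# 'python tasks.py' side: module is __main__, so the task is sent as
# 'cive.start' -- a name the worker never registered.
assert default_task_name('cive', '__main__', 'start') == 'cive.start'
```

Running the module with `python -m cive.framework.tasks` keeps the module path identical on both sides, so the names line up.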