This is my first time using tox to create a Python package. I didn't underestimate the task: I read up a bit on how setuptools and PyScaffold work, what does what and why, and watched some YouTube videos to get going.
But, guess what, it doesn't work, and I have absolutely no clue why.
This is what I did:
putup --tox river
Then I placed my sources under src/ and some tests under tests/.
So this is my folder structure so far:
river/
    src/
        dispatcher/
            __init__.py
            updater.py
        river/
            __init__.py
            dispatcher_config.py
            dispatcher.py
            flow.py
            logger.py
            log_config.yml
    tests/
        __init__.py
        test_cases.py
    AUTHORS.rst
    CHANGELOG.rst
    LICENSE.txt
    README.rst
    setup.cfg
    setup.py
    tox.ini
    log_config.yml
All I want to achieve for now is getting my tests running properly.
tox.ini:

[tox]
minversion = 2.4
envlist = default

[testenv]
setenv = TOXINIDIR = {toxinidir}
sitepackages = True
commands =
    python --version
    pytest
deps = pytest
setup.cfg (almost unchanged):
# This file is used to configure your project.
# Read more about the various options under:
# http://setuptools.readthedocs.io/en/latest/setuptools.html#configuring-setup-using-setup-cfg-files

[metadata]
name = river
description = Add a short description here!
author = Kristian Jülfs
author-email = kristian.juelfs#...
license = mit
long-description = file: README.rst
long-description-content-type = text/x-rst; charset=UTF-8
url = https://github.com/pyscaffold/pyscaffold/
project-urls =
    Documentation = https://pyscaffold.org/
# Change if running only on Windows, Mac or Linux (comma-separated)
platforms = any
# Add here all kinds of additional classifiers as defined under
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers =
    Development Status :: 4 - Beta
    Programming Language :: Python

[options]
zip_safe = False
packages = find:
include_package_data = True
package_dir =
    =src
# DON'T CHANGE THE FOLLOWING LINE! IT WILL BE UPDATED BY PYSCAFFOLD!
setup_requires = pyscaffold>=3.2a0,<3.3a0
# Add here dependencies of your project (semicolon/line-separated), e.g.
# install_requires = numpy; scipy
# The usage of test_requires is discouraged, see `Dependency Management` docs
# tests_require = pytest; pytest-cov
# Require a specific Python version, e.g. Python 2.7 or >= 3.4
# python_requires = >=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*

[options.packages.find]
where = src
exclude =
    tests

[options.extras_require]
# Add here additional requirements for extra features, to install with:
# `pip install river[PDF]` like:
# PDF = ReportLab; RXP
# Add here test requirements (semicolon/line-separated)
testing =
    pytest
    pytest-cov
# flake8

[options.entry_points]
# Add here console scripts like:
# console_scripts =
#     script_name = river.module:function
# For example:
# console_scripts =
#     fibonacci = river.skeleton:run
# And any other entry points, for example:
# pyscaffold.cli =
#     awesome = pyscaffoldext.awesome.extension:AwesomeExtension

[test]
# py.test options when running `python setup.py test`
#addopts = --verbose
extras = True

[aliases]
dists = bdist_wheel

[bdist_wheel]
# Use this option if your package is pure-python
universal = 1

[build_sphinx]
source_dir = docs
build_dir = build/sphinx

[devpi:upload]
# Options for the devpi: PyPI server and packaging tool
# VCS export must be deactivated since we are using setuptools-scm
no-vcs = 1
formats = bdist_wheel

[tool:pytest]
addopts = --verbose
norecursedirs =
    dist
    build
    .tox

[flake8]
# Some sane defaults for the code style checker flake8
exclude =
    .tox
    build
    dist
    .eggs
    docs/conf.py

[pyscaffold]
# PyScaffold's parameters when the project was created.
# This will be used when updating. Do not change!
version = 3.2.1
package = river
extensions =
    tox
Alright so far. Whatever I'm missing, I don't get it.
When I start tox in verbose mode,
tox -r -vvv
I get this (cut):
...
creating '/tmp/pip-wheel-rb7acy7z/river-0.0.post0.dev1+gc371b6b.dirty-py2.py3-none-any.whl' and adding '.' to it
adding 'dispatcher/__init__.py'
adding 'dispatcher/updater.py'
adding 'river/__init__.py'
adding 'river/dispatcher.py'
adding 'river/dispatcher_config.py'
adding 'river/flow.py'
adding 'river/logger.py'
adding 'river-0.0.post0.dev1+gc371b6b.dirty.dist-info/top_level.txt'
adding 'river-0.0.post0.dev1+gc371b6b.dirty.dist-info/WHEEL'
adding 'river-0.0.post0.dev1+gc371b6b.dirty.dist-info/METADATA'
adding 'river-0.0.post0.dev1+gc371b6b.dirty.dist-info/RECORD'
removing build/bdist.freebsd-12.0-RELEASE-p7-amd64/wheel
done
Stored in directory: /home/kjuelf/.cache/pip/wheels/f7/c7/76/923a5b579b9178351cdbe053f020f660101c03b78a4085d281
Removing source in /tmp/pip-req-build-jhc29i8z
Successfully built river
Installing collected packages: river
Successfully installed river-0.0.post0.dev1+gc371b6b.dirty
Cleaning up...
...
default start: run-test-pre
default run-test-pre: PYTHONHASHSEED='4259061550'
default finish: run-test-pre after 0.00 seconds
default start: run-test
default run-test: commands[0] | python --version
setting PATH=/usr/home/kjuelf/infra/river/.tox/default/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/kjuelf/bin
[11307] /usr/home/kjuelf/infra/river$ /usr/home/kjuelf/infra/river/.tox/default/bin/python --version
Python 3.6.9
default run-test: commands[1] | pytest
setting PATH=/usr/home/kjuelf/infra/river/.tox/default/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/kjuelf/bin
WARNING: test command found but not installed in testenv
cmd: /usr/local/bin/pytest
env: /usr/home/kjuelf/infra/river/.tox/default
Maybe you forgot to specify a dependency? See also the whitelist_externals envconfig setting.
DEPRECATION WARNING: this will be an error in tox 4 and above!
[11308] /usr/home/kjuelf/infra/river$ /usr/local/bin/pytest
============================================================ test session starts ============================================================
platform freebsd12 -- Python 3.6.9, pytest-4.5.0, py-1.8.0, pluggy-0.12.0 -- /usr/local/bin/python3.6
cachedir: .tox/default/.pytest_cache
rootdir: /usr/home/kjuelf/infra/river, inifile: setup.cfg
plugins: cov-2.7.1, flake8-1.0.4
collected 0 items / 1 errors
================================================================== ERRORS ===================================================================
___________________________________________________ ERROR collecting tests/test_cases.py ____________________________________________________
/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py:359: in get_provider
    module = sys.modules[moduleOrReq]
E   KeyError: 'river'

During handling of the above exception, another exception occurred:
src/river/logger.py:18: in <module>
    config = resource_string("river", "log_config.yml")
/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py:1156: in resource_string
    return get_provider(package_or_requirement).get_resource_string(
/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py:361: in get_provider
    __import__(moduleOrReq)
E   ModuleNotFoundError: No module named 'river'
...
I mean, first
Successfully built river
then
E ModuleNotFoundError: No module named 'river'
and, in pkg_resources/__init__.py, a KeyError: 'river'.
I can't figure out what's wrong with pkg_resources here. Where did my stuff suddenly go?
I also don't get what that warning means:
WARNING: test command found but not installed in testenv
pytest is there, it works, and the dependency is set in tox.ini.
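From the verbose output it looks like commands[1] ran /usr/local/bin/pytest, i.e. the system-wide pytest, instead of .tox/default/bin/pytest. My guess (and it is only a guess) is that sitepackages = True lets the testenv see the globally installed pytest, so pip never installs its own copy into the env, which would also explain the warning. A stripped-down testenv without that flag would presumably look like this:

[testenv]
deps = pytest
commands =
    python --version
    pytest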
Lost in weirdness.
Related
I have the following tox.ini file:
[tox]
envlist = py310, flake8
isolated_build = True

[testenv]
skip_install = True
deps = -rtest_requirements.txt
passenv = *
commands =
    pytest {posargs} --teamcity

[testenv:flake8]
deps = flake8
skip_install = True
commands = flake8 tests/
On TeamCity, I run my Python tests through tox from within a script build step, where I call the following shell script:
#! /bin/sh
python -m tox .
Now, there is one red test that I want to mute. When I mute it, however, TeamCity still makes my build red even though it marks the test as muted.
The problem has been well known for 11 years, as reported here.
How can I modify the command in my tox.ini file to make the build green again? I don't want to mark the Python test with the skip tag, and I don't want to change the tox command from
commands =
    pytest {posargs} --teamcity
to
commands =
    - pytest {posargs} --teamcity
because that will just ignore any error that might happen during the pytest run (like "Internal error happened while executing tests" or "No tests were collected").
Ideally, I would like to call
commands =
    pytest {posargs} --teamcity || [ $? = 1 ]
but apparently tox does not understand the || symbol.
What can I do?
You can call a custom shell script from your commands section, and inside it you can do whatever you want, including using ||, e.g.:
commands = my_custom_script.sh
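A minimal sketch of such a script (the name my_custom_script.sh is just the placeholder from above; treating only exit code 1 as "ok" mirrors the || [ $? = 1 ] idea from the question):

#!/bin/sh
# Run pytest with the TeamCity reporter; treat exit code 1 ("some tests failed",
# which covers the muted test) as success, but keep failing on any other exit
# code (internal error, no tests collected, ...).
pytest "$@" --teamcity
status=$?
if [ "$status" -eq 1 ]; then
    exit 0
fi
exit "$status"

Since the script is an external command, tox will also want it listed in allowlist_externals (whitelist_externals on older tox versions), and {posargs} can still be forwarded to it from commands so the script passes them on to pytest via "$@".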
I tried this approach on hardknott but I couldn't get it to work.
The recipe also produces -native output that needs packaging: it is a Rust recipe that generates an x86_64 app which I would like to package the right way in the SDK, so that it can be used.
I can separate the main package into -native-bin, and I see it in the recipe-sysroot, but I can't get it to populate the recipe-sysroot of the workdir when building the -native-helper recipe. I suspect the reason is the error I get, saying that the main recipe for x86_64 can't be found:
ERROR: Manifest xxxxxx.populate_sysroot not found in vs_imx8mp cortexa53 armv8a-crc armv8a aarch64 allarch x86_64_x86_64-nativesdk (variant '')?
So any helpful information would be appreciated!
Hacked like this:
Recipe.bb:
do_install_append() {
    # Set permission without run flag so that it doesn't fail on checks
    chmod 644 ${D}/usr/bin/#RECIPE#-compiler
}

# #RECIPE# generates a compiler during the target generation step.
# Separate this into the -native-bin package and skip the ARCH checks;
# also, in the image file for stations_sdk, move the app to the right dir and add the execute flag.
PACKAGES_prepend = "${PN}-native-bin "
PROVIDES_prepend = "${PN}-native-bin "
INSANE_SKIP_${PN}-native-bin = "arch"
FILES_${PN}-native-bin = "/usr/bin/#RECIPE#-compiler"
SYSROOT_DIRS += "/"
Image.bb:
# #RECIPE# produces a compiler as part of the target generation step,
# so we use the recipe and hack it to supply the -compiler as part of the host binaries.
TOOLCHAIN_TARGET_TASK_append = " #RECIPE#-native-bin"

do_fix_#RECIPE#() {
    mv ${SDK_OUTPUT}/${SDKTARGETSYSROOT}/usr/bin/#RECIPE#-compiler ${SDK_OUTPUT}/${SDKPATHNATIVE}/usr/bin/#RECIPE#-compiler
    chmod 755 ${SDK_OUTPUT}/${SDKPATHNATIVE}/usr/bin/#RECIPE#-compiler
}

SDK_POSTPROCESS_COMMAND_prepend = "do_fix_#RECIPE#; "
In the end, this produces the binary in the right directory.
I'm trying to install lapack on my 64-bit ARMv8 board with Yocto. I have a lapack-3.9 BitBake recipe and it builds successfully, creating libblas.so and liblapack.so inside the image/usr/lib64 folder.
I added lapack to my local.conf. The problem is that when I do
bitbake core-image-weston
the .so files don't end up in my rootfs, that is, in /usr/lib64.
What am I missing here?
Below is my lapack_3.9.0.bb recipe:
SUMMARY = "Linear Algebra PACKage"
URL = "http://www.netlib.org/lapack"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://LICENSE;md5=930f8aa500a47c7dab0f8efb5a1c9a40"
# Recipe needs FORTRAN support (copied from conf/local.conf.sample.extended)
# Enabling FORTRAN
# Note this is not officially supported and is just illustrated here to
# show an example of how it can be done
# You'll also need your fortran recipe to depend on libgfortran
#FORTRAN_forcevariable = ",fortran"
#RUNTIMETARGET_append_pn-gcc-runtime = " libquadmath"
DEPENDS = "libgfortran"
SRC_URI = "https://github.com/Reference-LAPACK/lapack/archive/v${PV}.tar.gz"
SRC_URI[md5sum] = "0b251e2a8d5f949f99b50dd5e2200ee2"
SRC_URI[sha256sum] = "106087f1bb5f46afdfba7f569d0cbe23dacb9a07cd24733765a0e89dbe1ad573"
EXTRA_OECMAKE = " -DBUILD_SHARED_LIBS=ON "
OECMAKE_GENERATOR = "Unix Makefiles"
inherit cmake pkgconfig
EXCLUDE_FROM_WORLD = "1"
Also, when I try to add the lapack-dev and lapack-dbg IPKs to my local.conf, it only accepts lapack-dbg but gives an error for lapack-dev:
ERROR:
Collected errors:
* Solver encountered 1 problem(s):
* Problem 1/1:
* - nothing provides lapack = 3.9.0-r0 needed by lapack-dev-3.9.0-r0.aarch64
*
* Solution 1:
* - do not ask to install a package providing lapack-dev
I'm trying to debug a Python codebase that uses tox for unit tests. One of the failing tests is proving difficult to figure out, and I'd like to use pudb to step through the code.
At first thought, one would think to just pip install pudb, then in the unit test code add import pudb and pudb.set_trace(). But that results in a ModuleNotFoundError:
>       import pudb
E       ModuleNotFoundError: No module named 'pudb'

tests/mytest.py:130: ModuleNotFoundError
ERROR: InvocationError for command '/Users/me/myproject/.tox/py3/bin/pytest tests' (exited with code 1)
Noticing the .tox project folder leads one to realize there's a site-packages folder within it, which makes sense since the point of tox is to manage testing under different virtualenv scenarios. This also means there's a tox.ini configuration file, with a deps section that may look like this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
commands = pytest tests
Adding pudb to the deps list should solve the ModuleNotFoundError, but it leads to another error:
self = <_pytest.capture.DontReadFromInput object at 0x103bd2b00>

    def fileno(self):
>       raise UnsupportedOperation("redirected stdin is pseudofile, "
                                   "has no fileno()")
E       io.UnsupportedOperation: redirected stdin is pseudofile, has no fileno()

.tox/py3/lib/python3.6/site-packages/_pytest/capture.py:583: UnsupportedOperation
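(For reference, at that point the [testenv] section simply had pudb appended to the existing deps, roughly:)

[testenv]
deps =
    pytest
    pudb
commands = pytest tests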
So, I'm stuck at this point. Is it not possible to use pudb instead of pdb within Tox?
There's a package called pytest-pudb which overrides the pudb entry points within an automated test environment like tox to successfully jump into the debugger.
To use it, just make your tox.ini file have both the pudb and pytest-pudb entries in its testenv dependencies, similar to this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
    pudb
    pytest-pudb
commands = pytest tests
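With those dependencies in place, a breakpoint can then be set inside a test using pudb's standard API (the test below is purely illustrative):

# tests/mytest.py (illustrative)
import pudb

def test_something():
    value = 1 + 1
    pudb.set_trace()  # drops into the pudb TUI at this point
    assert value == 2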
Using the original pdb (not pudb) could work too. At least it works with the Django and nose test runners. Without changing tox.ini, simply add a pdb breakpoint wherever you need it, with:
import pdb; pdb.set_trace()
Then, when it gets to that breakpoint, you can use the regular pdb commands:
w - print stacktrace
s - step into
n - step over
c - continue
p - print an argument value
a - print arguments of current function
So this seems to be a really common problem with this setup, but I can't find any solutions on SO that work. I've set up a brand-new Ubuntu 15.04 server, then installed nginx, virtualenv (and virtualenvwrapper), and uWSGI (via apt-get, so globally, not inside the virtualenv).
My virtualenv is located at /root/Env/example. Inside the virtualenv I installed Django, then at /srv/www/example/app I ran Django's startproject command with the project name example, so I have roughly this structure:
-root
    -Env
        -example
            -bin
            -lib
-srv
    -www
        -example
            -app
                -example
                    manage.py
                    -example
                        wsgi.py
                        ...
My example.ini file for uWSGI looks like this:
[uwsgi]
project = example
plugin = python
chdir = /srv/www/example/app/example
home = /root/Env/example
module = example.wsgi:application
master = true
processes = 5
socket = /run/uwsgi/app/example/example.socket
chmod-socket = 664
uid = www-data
gid = www-data
vacuum = true
But no matter whether I run this via uwsgi --ini /etc/uwsgi/apps-enabled/example.ini or as a daemon, I get the exact same error:
Python version: 2.7.9 (default, Apr 2 2015, 15:37:21) [GCC 4.9.2]
Set PythonHome to /root/Env/example
ImportError: No module named site
I should note that the Django project works via the built-in development server ./manage.py runserver, and that when I remove home = /root/Env/example the thing works (but is obviously using the global Python and Django rather than the virtualenv versions, which means it's useless for a proper virtualenv setup).
Can anyone see some obvious path error that I'm not seeing? As far as I can tell, home is entirely correct based on my directory structure, and everything else in the ini too, so why is it not working with this ImportError?
In my case, I was seeing this issue because the Django app I was trying to run was written in Python 3 whereas uWSGI was configured for Python 2. I fixed the problem by:
- recompiling uwsgi to support both Python 2 and Python 3 apps (I followed this guide)
- adding this to my mydjangoproject_uwsgi.ini:
plugins = python35  # or whatever you specified while compiling uwsgi
For other folks using Django, you should also make sure you are correctly specifying the following:
# Django dir that contains manage.py
chdir = /var/www/project/myprojectname
# Django wsgi (myprojectname is the name of your top-level project)
module = myprojectname.wsgi:application
# the virtualenv you are using (full path)
home = /home/ubuntu/Env/mydjangovenv
plugins = python35
As @Freek said, site refers to a Python module.
The error means that Python cannot find that module, which is because you have pointed the Python home (the home directive) at the wrong location.
I've encountered the same problem, and my uwsgi.ini looked like this:
[uwsgi]
# variable
base = /home/xx/
# project settings
chdir = %(base)/
module = botservice.uwsgi:application
home = %(base)/env/bin
With this configuration uWSGI can find the Python executable in env/bin, but no packages can be found under that folder. So I changed home to
home = %(base)/env/
and it worked for me.
In your case, I suggest digging into the home directive and pointing it at the virtualenv root, which contains both the Python executable (under bin/) and the installed packages (under lib/).
The site module is part of Python itself, not of Django; it is loaded automatically at interpreter startup, so this error means uWSGI's Python cannot locate its environment.
The first check is to activate the virtualenv manually (source /root/Env/example/bin/activate, start python, import site, then import django). If the django import fails, pip install django.
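As a compact form of that check (paths taken from the question; this just runs the manual test described above in one go):

. /root/Env/example/bin/activate
python -c "import site; import django; print(site.__file__); print(django.get_version())"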
Assuming that django is correctly installed in the virtualenv, make sure that uWSGI activates the virtualenv. Relevant uWSGI configuration directives:
plugins = python
virtualenv = /root/Env/example
and in case you have error importing example.wsgi:
pythonpath = /srv/www/example/app/example