I've written some Python that I'm distributing as a custom package. I have some tests that I run against the source code while I'm developing, but I also want users who install the package to be able to run the same tests against the distributed package.
My package follows this structure:
my_package
├── MyPackage
│   ├── __init__.py
│   └── my_module.py
├── setup.py
└── tests
    └── test_my_package.py
The my_module.py is:
def my_function():
    print("here!")
    return True
And test_my_package.py is:
import unittest
import sys
sys.path.insert(0, "../")
from MyPackage.my_module import my_function

class TestMyModule(unittest.TestCase):
    def test_something(self):
        self.assertTrue(my_function())
As I'm manipulating sys.path, I'm always running the tests against the development code. Is there a way to use setuptools so that I can run the tests against the development code while users run them against the installed package?
Thanks!
There is a misconception when you say "tests that I run against the source code while I'm developing".
You should always run your tests against the packaged code, because you want to be sure that the packaged code, which your users will run, works.
You could use tox to run your tests: it automatically creates a package from your source code, installs it into a clean environment, and can even run the tests for different Python versions, e.g. the currently supported Python 3.6, 3.7 and 3.8 (a minimal sketch follows below).
While it would be a very rare thing, your users could then run the tests as well.
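A minimal tox.ini sketch for this layout (the Python versions and the unittest invocation are assumptions; adjust them to your project):

[tox]
envlist = py36, py37, py38

[testenv]
commands = python -m unittest discover -s tests

Because tox installs the package into each test environment, the sys.path manipulation at the top of test_my_package.py becomes unnecessary.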
Problem statement
When building a Python package I want the build tool to automatically execute the steps to generate the necessary Python files and include them in the package.
Here are some details about the project:
- the project repository contains only the hand-written Python and YAML files
- to have a fully functional package, the YAML files must be compiled into Python scripts
- once the Python files are generated from the YAMLs, the program needed to compile them is no longer necessary (it is only a build dependency)
- the hand-written and generated Python files are then packaged together
The package would then be uploaded to PyPI.
I want to achieve the following:
- When the user installs the package from PyPI, all files required for the package to function are included and no compile steps are necessary.
- When the user checks out the repository and builds the package with python -m build . --wheel, the YAML files are automatically compiled into Python and included in the package. The compiler is required.
- When the user checks out the repository and installs the package from source, the YAML files are automatically compiled into Python and installed. The compiler is required.
- (nice to have) When the user checks out the repository and installs in editable mode, the YAML files are compiled into Python, and the user is free to make modifications to both generated and hand-written Python files. The compiler is required.
I have a repository with the following layout:
├── <project>
│   └── <project>
│       ├── __init__.py
│       ├── hand_written.py
│       └── specs
│           └── file.ksc (YAML file)
└── pyproject.toml
And the functional package should look something like this:
├── <project>
│   └── <project>
│       ├── __init__.py
│       ├── hand_written.py
│       └── generated
│           └── file.py
├── pyproject.toml
└── <other package metadata>
How can I achieve those goals?
What I have so far
As I am quite new to Python packaging, I have been struggling to understand the relations between pyproject.toml, setup.cfg and setup.py, and how I can use them to achieve the goals outlined above. So far I have a pyproject.toml with the following content:
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "<package>"
version = "xyz"
description = "<description>"
authors = [ <authors> ]
dependencies = [
    "kaitaistruct",
]
From reading the setuptools documentation, I understand that there are the build commands, such as:
build_py -- simply copies Python files into the package (no compiling; works differently in editable mode)
build_ext -- builds C/C++ modules (not relevant here?)
I suppose adding the compile step for the YAML files will involve writing a setup.py file and overriding a command, but I don't know whether this is the right approach, whether it will even work, or whether there are better methods, such as using a different build backend.
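A rough sketch of that approach, for illustration only: the <project> placeholders match the layout above, and the kaitai-struct-compiler invocation is an assumption about how your YAML compiler is run.

# setup.py -- sketch: compile the YAML specs before the normal build_py copy step
import subprocess
from pathlib import Path
from setuptools import setup
from setuptools.command.build_py import build_py

class BuildPyWithCompile(build_py):
    def run(self):
        out = Path("<project>/<project>/generated")  # placeholder paths from the layout above
        out.mkdir(parents=True, exist_ok=True)
        for spec in Path("<project>/<project>/specs").glob("*.ksc"):
            # assumption: the compiler writes the generated .py files into --outdir
            subprocess.check_call(["kaitai-struct-compiler", "-t", "python",
                                   "--outdir", str(out), str(spec)])
        super().run()  # then the normal copying of Python files

setup(cmdclass={"build_py": BuildPyWithCompile})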
Alternative approaches
A possible alternative approach would be to compile the YAML files manually before starting the build or installation of the package.
I know there's a similar question out there, but after trying its solution, it still doesn't work for me.
Project Structure:
README.md
LICENSE
setup.py
rolimons/
├── __init__.py
├── users.py
├── items.py
└── client.py
My setup file contains the following:
from setuptools import find_packages, setup

with open("README.md", encoding="utf-8") as f:
    readme = f.read()

setup(
    name="rolimons",
    version="1.2.5",
    author="walker",
    description="Rolimons API Wrapper",
    long_description=readme,
    long_description_content_type="text/markdown",
    packages=['rolimons'],
    url="https://github.com/wa1ker38552/Rolimons-PY",
    install_requires=["requests", "bs4", "requests_html"],
    python_requires=">=3.7",
)
I have published the package after following these steps:
1. Change the version in setup.py
2. python setup.py sdist
3. twine upload --skip-existing dist/*
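(For reference, the equivalent steps with the modern build frontend, assuming the build package is installed, would be the following; python -m build produces both an sdist and a wheel in dist/.)

python -m build
twine upload --skip-existing dist/*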
After publishing, I go to a different project and run pip install rolimons --upgrade; when I then import the package, I get an import error stating that it can't find the module.
What am I doing wrong?
I am trying to run a pytest test for filea.py, using the following files and directory structures.
test_filea.py
from filea import *

def test_one_p_one():
    r = one_p_one()
    assert r == 2
filea.py
def one_p_one():
    return 1 + 1
When I have the following directory structure, everything works fine:
├── filea.py
└── test_filea.py
but when I move my tests into a subdirectory like this:
├── filea.py
└── tests
    └── test_filea.py
I get the error:
test_filea.py:1: in <module>
    from filea import *
E   ModuleNotFoundError: No module named 'filea'
My editor seems to indicate that the import in the file in the subdirectory is fine (no red squiggly lines), but when I run this using pytest, I get the error indicated above.
As per the pytest documentation on test discovery, try this:
add an empty __init__.py file in the tests directory;
make sure that, when you run pytest ., the parent directory of filea.py and tests is the current working directory.
It depends on where you run the tests from and how you invoke pytest. Calling pytest tests is different from calling python -m pytest tests; the latter adds the current working directory to sys.path, which makes the filea module importable.
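Concretely, with the layout above and the project root as the current working directory (an illustrative session):

pytest tests             # fails: the directory containing filea.py is not put on sys.path
python -m pytest tests   # works: python -m prepends the current directory to sys.path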
What am I doing wrong here???
My structure:
├── tst
│   ├── setup.py
│   └── tst
│       ├── __init__.py
│       ├── mre.py
│       └── start.py
contents of start.py
from mre import mre

def proc1():
    mre.more()
    return ('ran proc1')

if __name__ == "__main__":
    print('test')
    print(proc1())
contents of mre.py
class mre(object):
    def more():
        print('this is some more')
contents of setup.py
from setuptools import setup

setup(name='tst',
      version='0.1',
      description='just a test',
      author='Mr Test',
      author_email='test@example.com',
      entry_points={'console_scripts': ['tst=tst.start:proc1']},
      license='MIT',
      packages=['tst'],
      zip_safe=False)
nothing in __init__.py
When I run this from the command line all is fine, runs as expected.
However, when I package this up and install it with pip, then run tst, I get:
Traceback (most recent call last):
  File "/home/simon/.local/bin/tst", line 5, in <module>
    from tst.start import proc1
  File "/home/simon/.local/lib/python3.8/site-packages/tst/start.py", line 1, in <module>
    from mre import mre
ModuleNotFoundError: No module named 'mre'
I've read numerous posts and I just can't seem to figure this out. If I go into the installed code and change the line
from mre import mre
to
from tst.mre import mre
then it works, but that version doesn't work when I run it from the directory during development... I'm obviously missing something obvious :) Is it a path issue, or am I missing a command in setup.py?
If someone could point me in the right direction?
edit: do I need to do something different while developing a module that's going to be packaged, perhaps invoke the code in some different way?
cheers
From my point of view, the absolute import from tst.mre import mre is the right thing. You could alternatively use the relative import from .mre import mre, but the absolute import is safer.
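Applied to the code above, start.py becomes:

# tst/start.py -- using the absolute import
from tst.mre import mre

def proc1():
    mre.more()
    return ('ran proc1')

if __name__ == "__main__":
    print('test')
    print(proc1())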
For development purposes:
Use pip's editable mode:
path/to/pythonX.Y -m pip install --editable .
This is similar to setuptools' develop mode (path/to/pythonX.Y setup.py develop), which is slowly heading towards deprecation.
And run the console script, or the executable module:
tst
path/to/pythonX.Y -m tst.start
Without installation, it is often still possible to run the executable module:
path/to/pythonX.Y -m tst.start
Suppose I have the following directory structure:
src/
└── python/
    └── generated/
        ├── __init__.py
        ├── a.py
        └── lib/
            ├── __init__.py
            └── b.py
What does my setup.py need to look like in order to create a dist with a directory layout like:
src/
└── python/
    ├── __init__.py
    ├── a.py
    └── lib/
        ├── __init__.py
        └── b.py
The goal is to simply eliminate the generated folder. I've tried endless variations with package_dir and can't get anything produced other than the original directory structure.
Your setup.py should be placed in your src directory and should look like this:
#!/usr/bin/env python3
import setuptools

setuptools.setup(
    name='Thing',
    version='1.2.3',
    packages=[
        'python',
        'python.lib',
    ],
    package_dir={
        'python': 'python/generated',
    },
)
Note the package_dir setting. It instructs setuptools to take the code for the python package from the directory python/generated (the sub-package python.lib is located under it automatically). In the built distributions you will then find the desired directory structure.
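As an illustrative check (the exact wheel filename depends on your tooling), you can list the built wheel with the standard library's zipfile module and confirm the generated directory is gone:

python -m zipfile -l dist/Thing-1.2.3-py3-none-any.whl
# expected entries include python/__init__.py, python/a.py, python/lib/__init__.py, python/lib/b.py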
First, here is my solution:
#!/usr/bin/env python
import os, shutil
from setuptools import setup
from setuptools.command.build_py import build_py

class BuildPyCommand(build_py):
    """Custom build command."""
    def run(self):
        # Copy the generated sources into a temporary, correctly laid-out tree
        shutil.rmtree('src.tmp', ignore_errors=True)
        os.mkdir('src.tmp')
        shutil.copytree('src/python/generated', 'src.tmp/python')
        # Then run the standard build against that temporary tree
        build_py.run(self)

setup(cmdclass={'build_py': BuildPyCommand},
      name='Blabla',
      version='1.0',
      description='best desc ever',
      author='Me',
      packages=['python', 'python.lib'],
      package_dir={'': 'src.tmp'},
      setup_requires=['wheel'])
And you can generate your distribution with:
python setup.py build bdist_wheel
The idea is to perform a two-step build:
I generate a valid source structure,
then I build this temporary structure.
I deliver it as a wheel because a wheel doesn't require future users to understand the trick. If you give it a try with a source distribution, you will notice that you need to publish the generated files as data (not difficult, but troublesome, and I guess you will want to hide the trick from your users).
But I think there is a design flaw in your process. The file src/python/generated/__init__.py, presumably the module <something>.generated, eventually becomes your <something>.python, which is troublesome. It would be much simpler and more robust to generate a valid Python structure directly: src/generated/python/__init__.py. The setup.py would become trivial and your generator wouldn't be any more complex.
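With that src/generated/python layout, setup.py shrinks to a sketch along these lines (same metadata as above):

# setup.py -- sketch for the suggested src/generated/python layout
from setuptools import setup

setup(
    name='Blabla',
    version='1.0',
    description='best desc ever',
    author='Me',
    packages=['python', 'python.lib'],
    package_dir={'': 'src/generated'},
)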