Can't Import After Publishing Package - setuptools

I know there's a similar question out here, but after trying its solution, it still hasn't worked for me.
Project structure:

README.md
LICENSE
setup.py
rolimons/
├── __init__.py
├── users.py
├── items.py
└── client.py
My setup file contains the following:
from setuptools import find_packages, setup

with open("README.md", encoding="utf-8") as f:
    readme = f.read()

setup(
    name="rolimons",
    version="1.2.5",
    author="walker",
    description="Rolimons API Wrapper",
    long_description=readme,
    long_description_content_type="text/markdown",
    packages=['rolimons'],
    url="https://github.com/wa1ker38552/Rolimons-PY",
    install_requires=["requests", "bs4", "requests_html"],
    python_requires=">=3.7",
)
I have published the package after following these steps:

1. Change the version in setup.py
2. python setup.py sdist
3. twine upload --skip-existing dist/*
After this, I go to a different project, run pip install rolimons --upgrade, and when I try to import the package, it fails with an error stating that it can't find the module.
What am I doing wrong?
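For reference, a common variant of that publishing flow rebuilds from a clean dist/ directory and uploads both an sdist and a wheel (a sketch; not necessarily the cause of the error above):

rm -rf build/ dist/
python -m build        # needs "pip install build"; produces sdist and wheel
twine upload dist/*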

Building a package with generated Python files

Problem statement
When building a Python package I want the build tool to automatically execute the steps to generate the necessary Python files and include them in the package.
Here are some details about the project:
- the project repository contains only the hand-written Python and YAML files
- to have a fully functional package, the YAML files must be compiled into Python scripts
- once the Python files are generated from the YAMLs, the program needed to compile them is no longer necessary (it is only a build dependency)
- the hand-written and generated Python files are then packaged together
The package would then be uploaded to PyPI.
I want to achieve the following:
- When the user installs the package from PyPI, all files required for the package to function are included, and no compile steps are necessary.
- When the user checks out the repository and builds the package with python -m build . --wheel, the YAML files are automatically compiled into Python and included in the package. The compiler is required.
- When the user checks out the repository and installs the package from source, the YAML files are automatically compiled into Python and installed. The compiler is required.
- (nice to have) When the user checks out the repository and installs in editable mode, the YAML files are compiled into Python. The user is free to modify both generated and hand-written Python files. The compiler is required.
I have a repository with the following layout:
├── <project>
│   └── <project>
│       ├── __init__.py
│       ├── hand_written.py
│       └── specs
│           └── file.ksc (YAML file)
└── pyproject.toml
And the functional package should look something like this:

├── <project>
│   └── <project>
│       ├── __init__.py
│       ├── hand_written.py
│       └── generated
│           └── file.py
├── pyproject.toml
└── <other package metadata>
How can I achieve those goals?
What I have so far
As I am very fresh to Python packaging, I have been struggling to understand the relationships between pyproject.toml, setup.cfg and setup.py, and how I can use them to achieve the goals outlined above. So far I have a pyproject.toml with the following content:
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "<package>"
version = "xyz"
description = "<description>"
authors = [ <authors> ]
dependencies = [
    "kaitaistruct",
]
From reading the setuptools documentation, I understand that there are build commands such as:

- build_py -- simply copies Python files into the package (no compiling; works differently in editable mode)
- build_ext -- builds C/C++ modules (not relevant here?)
I suppose adding the compile steps for the YAML files will involve writing a setup.py file and overriding a command, but I don't know whether this is the right approach, whether it will even work, or whether there are better methods, such as using a different build backend.
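For what it's worth, a sketch of that approach: a setup.py that subclasses build_py and compiles the specs before the files are copied. The kaitai-struct-compiler invocation is an assumption based on the .ksc specs and the kaitaistruct dependency above; substitute the real compiler call.

import subprocess
from pathlib import Path

from setuptools import setup
from setuptools.command.build_py import build_py


class BuildWithCompile(build_py):
    """Compile the YAML specs before the normal build_py copy step."""

    def run(self):
        out = Path("<project>/generated")
        out.mkdir(parents=True, exist_ok=True)
        for spec in Path("<project>/specs").glob("*.ksc"):
            # Hypothetical compiler invocation -- replace with the real
            # command that turns a YAML spec into a Python module.
            subprocess.check_call(
                ["kaitai-struct-compiler", "-t", "python",
                 "--outdir", str(out), str(spec)])
        super().run()


# Name, version, etc. stay in pyproject.toml; setup.py only adds the hook.
setup(cmdclass={"build_py": BuildWithCompile})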
Alternative approaches
A possible alternative approach would be to manually compile the YAML files prior to starting the installation or build of the package.

Running tests against source code or the package

I've written some Python that I'm distributing as a custom package. I have some tests that I run against the source code while I'm developing, but I also want users who install the package to be able to run the same tests against the distributed package.
My package follows this structure:
my_package
├── MyPackage
│   ├── __init__.py
│   └── my_module.py
├── setup.py
└── tests
    └── test_my_package.py
The file my_module.py is:
def my_function():
    print("here!")
    return True
And test_my_package.py is:
import unittest
import sys

sys.path.insert(0, "../")

from MyPackage.my_module import my_function


class TestMyModule(unittest.TestCase):
    def test_something(self):
        self.assertTrue(my_function())
As I'm manipulating sys.path, I'm always running the tests against the development code. Is there a way to use setuptools so that I can run the tests against the development code, while users run them against the installed package?
Thanks!
There is a misconception when you say "tests that I run against the source code while I'm developing".
You should always run your tests against the packaged code, because you want to be sure that the packaged code, which your users will run, works.
You could use tox to run your tests; it automatically creates a package from your source code and can even run the tests for different Python versions, e.g. the currently supported Python 3.6, 3.7 and 3.8.
And, while it would be a very rare thing, your users could then run the tests as well.
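For illustration, a minimal tox.ini for this project might look like this (the Python versions and the unittest-based test layout are taken from above; details will vary):

[tox]
envlist = py36, py37, py38

[testenv]
commands = python -m unittest discover -s tests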

learning python packaging, the old ModuleNotFoundError

What am I doing wrong here?
My structure:

├── tst
│   ├── setup.py
│   └── tst
│       ├── __init__.py
│       ├── mre.py
│       └── start.py
Contents of start.py:

from mre import mre

def proc1():
    mre.more()
    return ('ran proc1')

if __name__ == "__main__":
    print('test')
    print(proc1())
Contents of mre.py:

class mre(object):
    def more():
        print('this is some more')
Contents of setup.py:

from setuptools import setup

setup(name='tst',
      version='0.1',
      description='just a test',
      author='Mr Test',
      author_email='test@example.com',
      entry_points={'console_scripts': ['tst=tst.start:proc1']},
      license='MIT',
      packages=['tst'],
      zip_safe=False)
There is nothing in __init__.py.
When I run this from the command line, all is fine; it runs as expected. However, when I package this up and install it with pip, then run tst, I get:
Traceback (most recent call last):
  File "/home/simon/.local/bin/tst", line 5, in <module>
    from tst.start import proc1
  File "/home/simon/.local/lib/python3.8/site-packages/tst/start.py", line 1, in <module>
    from mre import mre
ModuleNotFoundError: No module named 'mre'
I've read numerous posts and I just can't seem to figure this out. If I go into the installed code and change the line

from mre import mre

to

from tst.mre import mre

then it works, but that doesn't work when running it from the directory for development purposes... I'm obviously missing something obvious :) Is it a path issue, or am I missing a command in the setup.py?
Could someone point me in the right direction?
edit: do I need to do something different while developing a module that's going to be packaged, perhaps call the code in some different way?
cheers
From my point of view, the absolute import from tst.mre import mre is the right thing. You could alternatively use the relative import from .mre import mre, but the absolute import is safer.
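For illustration, start.py with the absolute import would read:

# start.py -- absolute import, as recommended above
from tst.mre import mre

def proc1():
    mre.more()
    return ('ran proc1')

if __name__ == "__main__":
    print('test')
    print(proc1())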
For development purposes, use pip's editable mode:

path/to/pythonX.Y -m pip install --editable .

This is similar to setuptools' develop mode (path/to/pythonX.Y setup.py develop), which is slowly heading toward deprecation.
Then run the console script, or the executable module:

tst
path/to/pythonX.Y -m tst.start

Without installation, it is often still possible to run the executable module:

path/to/pythonX.Y -m tst.start

Customizing python package directory layout with setup.py

Suppose I have the following directory structure:
src/
└── python/
    └── generated/
        ├── __init__.py
        ├── a.py
        └── lib/
            ├── __init__.py
            └── b.py
What does my setup.py need to look like in order to create a dist with a directory layout like:
src/
└── python/
    ├── __init__.py
    ├── a.py
    └── lib/
        ├── __init__.py
        └── b.py
The goal is to simply eliminate the generated folder. I've tried endless variations with package_dir and can't get anything produced other than the original directory structure.
Your setup.py should be placed in your src directory and should look like this:
#!/usr/bin/env python3
import setuptools

setuptools.setup(
    name='Thing',
    version='1.2.3',
    packages=[
        'python',
        'python.lib',
    ],
    package_dir={
        'python': 'python/generated',
    },
)
Note the package_dir setting. It instructs setuptools to get the code for the python package from the directory python/generated. In the built distributions you will then find the right directory structure.
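To check the result, you can build the wheel and list its contents (a sketch; the exact filename depends on your metadata and Python tags):

python setup.py bdist_wheel
unzip -l dist/Thing-1.2.3-py3-none-any.whl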
First, here is my solution:
#!/usr/bin/env python
import os, shutil

from setuptools import setup
from setuptools.command.build_py import build_py


class BuildPyCommand(build_py):
    """Custom build command."""

    def run(self):
        shutil.rmtree('src.tmp', ignore_errors=True)
        os.mkdir('src.tmp')
        shutil.copytree('src/python/generated', 'src.tmp/python')
        build_py.run(self)


setup(cmdclass={'build_py': BuildPyCommand},
      name='Blabla',
      version='1.0',
      description='best desc ever',
      author='Me',
      packages=['python', 'python.lib'],
      package_dir={'': 'src.tmp'},
      setup_requires=['wheel'])
And you can generate your distribution with:
python setup.py build bdist_wheel
The idea is to perform a two-step build:

- generate a valid source structure
- build this temporary structure

I deliver it as a wheel because that doesn't require future users to understand the trick. If you try it with a source distribution, you will notice that you need to publish the generated files as data files (not difficult, but troublesome; and I guess you will want to hide the trick from your users).
But I think there is a design flaw in your process. The file src/python/generated/__init__.py, which by its location should be the module <something>.generated, eventually becomes <something>.python, which is troublesome. It would be much simpler and more robust to generate a valid Python structure directly: src/generated/python/__init__.py. The setup.py would then become trivial and your generator wouldn't be any more complex.
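With that layout, the setup.py could be reduced to something like this (a sketch reusing the metadata from my solution above):

#!/usr/bin/env python
from setuptools import setup

setup(
    name='Blabla',
    version='1.0',
    description='best desc ever',
    author='Me',
    packages=['python', 'python.lib'],
    package_dir={'': 'src/generated'},
)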

setuptools sdist ignores data_files

According to the docs (https://packaging.python.org/en/latest/distributing/#data-files), setuptools will honor data_files configured in setup.py, but I can't make it work. This is my setup.py:
setup(
    name='booking_order',
    version=version,
    packages=find_packages(),
    package_data={
        'booking_order': ['fake_backends/static/*',
                          'scripts/*',
                          '*.sample'],
    },
    data_files=[
        ('/etc/booking', ['etc/booking.conf'])
    ],
)
This is the project's file tree:
.
├── booking_order
│   ├── __init__.py
│   └── tests
│       └── __init__.py
├── etc
│   └── booking.conf
├── README.md
└── setup.py
The behavior is: if I run python setup.py install, the file etc/booking.conf gets installed to /etc/booking. But if I first run python setup.py sdist upload and then pip install booking_order, I get the error "error: can't copy 'etc/booking.conf': doesn't exist or not a regular file".
I checked: python setup.py sdist doesn't include the files under etc at all.
EDIT:
it seems this is the answer: https://github.com/pypa/setuptools/issues/521
Answering it myself.
According to PyPA (see the issue linked above, on non-package data files): "Setuptools doesn't support installing data files to some arbitrary location on a user's machine; this is a feature, not a bug."
If you need to install files to locations like /etc or /usr/share, you may use the data_files flag from distutils, whose functionality is not totally cleaned up from setuptools. "Not totally cleaned up" means you need to add those files to MANIFEST.in manually, which is different from distutils.
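For the layout in this question, that means a MANIFEST.in next to setup.py containing a line like:

include etc/booking.conf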
Of course, it would be better to manage such configuration files with the rpm or deb package system; for me, using pip here is just a temporary solution.