Why does python setup.py bdist_wheel create a build folder? - setuptools

I just learned to upload my own Python packages to PyPI thanks to this amazing tutorial. I am now trying to better understand how wheels work, and I found this article helpful.
However, I still do not understand why python setup.py bdist_wheel creates an almost empty directory named build, containing two subfolders: bdist.win-amd64 (empty) and lib (a copy of my package), in addition to the .whl file in the dist directory that developers later upload to PyPI with python -m twine upload dist/*.
Why is this build directory necessary? I mean, would the dist directory not be enough? Moreover, why is the .whl called a binary distribution if the code is not actually compiled?

python setup.py bdist_wheel internally runs the install command, which in turn runs the build command. build compiles/copies the project into a temporary location inside the build/ directory (build/lib), and install then copies the built project into a second temporary location inside build/ (the bdist.win-amd64 subfolder you saw). The wheel is created from the files in that second temporary location.
As for the compilation: Python modules can be written in C/C++, and often are, so python setup.py build may need to compile. If there is nothing to compile, the compilation step is skipped, but the build step still runs.
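For illustration, here is a minimal sketch of a setup.py whose build step actually compiles something; the mypkg package and src/speedups.c file are hypothetical names, not from the question:

# setup.py, a hypothetical example; the package name and C source are placeholders
from setuptools import setup, Extension

setup(
    name="mypkg",
    version="0.1.0",
    packages=["mypkg"],
    # A C extension: "python setup.py build" compiles it into build/,
    # and the resulting wheel gets a platform tag (a true binary).
    ext_modules=[Extension("mypkg._speedups", sources=["src/speedups.c"])],
)

Without ext_modules, the same command just copies the pure-Python files into build/lib and the wheel is tagged py3-none-any; "binary distribution" really means "built distribution", i.e. something pip can install without running a build step.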

Related

Building a Python package with data files and using them

I'd like to build a Python package from the following directory:
my_package/
    main_funcs.py
    extra_funcs.py
    data/
        data_file.dat
so that extra_funcs.py is used in main_funcs.py, which also reads data from data_file.dat.
I'd like to eventually have a .whl and a .tar.gz file for exporting the package.
How can I do that?
I didn't understand how to create them with setuptools, or how to add non-code files to the build.
Also, I'm not sure how to access the data files from inside the code after packaging.
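One common approach, sketched under the assumption that my_package/ contains an __init__.py and that you build with setuptools (depending on your setup you may also need include_package_data=True or a MANIFEST.in):

# setup.py, a sketch; the version and metadata are placeholders
from setuptools import setup

setup(
    name="my_package",
    version="0.1.0",
    packages=["my_package"],
    # ship the non-code files that live inside the package directory
    package_data={"my_package": ["data/*.dat"]},
)

# inside main_funcs.py, one way to locate packaged data at runtime (Python 3.9+)
from importlib import resources

def load_data() -> bytes:
    data_file = resources.files("my_package") / "data" / "data_file.dat"
    return data_file.read_bytes()

Running python -m build in the project root then produces both the .tar.gz (sdist) and the .whl in dist/.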

How to build a pex or shiv package from a pyproject-compliant project?

I have a Python project which I would like to distribute as a pex or shiv self-contained Python executable, in the spirit of the Python Packaging Guide's "Depending on a pre-installed Python" section. My project is structured in the spirit of PEP 518 and has a pyproject.toml file. It also depends on a few libraries outside the Python standard library, so I use pipenv to manage those.
How do I build the pex package using a backend which I can specify in the build-backend entry of my pyproject.toml file?
The documentation for pex and shiv shows how to build self-contained packages from the command line or via setup.py, but not using the PEP 518 structure and pyproject.toml. At least, not as far as I have been able to discover. (And by "self-contained" I mean all Python-language packages; I am happy to use an existing Python 3 interpreter on the destination system.)
Note that of the three executable-package approaches listed in the Packaging Guide, zipapp does not seem like a fit for me: it doesn't give me a way to manage my external libraries.
Update: some specific invocations, per request.
I currently use build as my build frontend. I use setuptools as my build backend. My pyproject.toml file currently reads,
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
I currently build a wheel via this shell command:
(MyPipenvVenv) % python -m build
…[many lines of output elided]…
Successfully built MyProject-0.0.6a0.tar.gz and MyProject-0.0.6a0-py3-none-any.whl
I can build a self-contained app (which relies on the system's Python interpreter) using these pipenv and shiv commands:
(MyPipenvVenv) % pipenv requirements > requirements.txt
(MyPipenvVenv) % shiv --console-script myapp -o app/myappfile.pyz -r requirements.txt .
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting click==8.1.3
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting pip==22.1.2
Using cached pip-22.1.2-py3-none-any.whl (2.1 MB)
Collecting setuptools==62.5.0
Using cached setuptools-62.5.0-py3-none-any.whl (1.2 MB)
Collecting shiv==1.0.1
Downloading shiv-1.0.1-py2.py3-none-any.whl (19 kB)
Building wheels for collected packages: MyProject
Building wheel for MyProject (pyproject.toml): started
Building wheel for MyProject (pyproject.toml): finished with status 'done'
Created wheel for MyProject: filename=MyProject-0.0.6a0-py3-none-any.whl size=5317 sha256=bbcc…cf
Stored in directory: /private/var/folders/…/pip-ephem-wheel-cache-eak1xqjp/wheels/…cc1d
Successfully built MyProject
Installing collected packages: MyProject, setuptools, pip, click, shiv
Successfully installed MyProject-0.0.6a0 click-8.1.3 pip-22.1.2 setuptools-62.5.0 shiv-1.0.1
What I want is to give the command to the PEP 517 front-end, have pyproject.toml specify that the resulting build work be done by shiv, and point to whatever configuration shiv needs. I want the result to be a self-contained app file, app/myappfile.pyz, e.g.
(MyPipenvVenv) % python -m build
…[many lines of output elided]…
Successfully built MyProject
Installing collected packages: MyProject, setuptools, pip, click, shiv
Successfully installed MyProject-0.0.6a0 click-8.1.3 pip-22.1.2 setuptools-62.5.0 shiv-1.0.1
My pyproject.toml file would be something like,
[build-system]
requires = ["shiv"]
build-backend = "shiv.build_something_something"
As far as I know, shiv is not a "PEP 517 build back-end" (neither is pex), so it is not possible to write something like the following in pyproject.toml:
[build-system]
requires = ["shiv"]
build-backend = "shiv.build_something_something"
As discussed in PEP 517 itself, the PEP 517 interface targets the generation of source distributions (sdists) and wheels only.
From my point of view, tools like shiv and pex that generate zipapps sit (at least) one layer above that. When working at this level, it does not matter whether the sdists and/or wheels are generated via the PEP 517 interface; in other words, it does not matter whether pyproject.toml files are involved. I assume that shiv and pex either consume wheels and sdists that are already available (maybe downloaded from PyPI) or delegate the build step to a third-party tool (maybe pip, maybe build); I do not know, and it does not matter.
From my point of view, the input that makes the most sense for producing a zipapp is some kind of "lock file", not a (PEP 517) pyproject.toml file. A zipapp is basically a whole virtual environment in a single file: the Python interpreter is fixed, and every dependency (direct or indirect) is fixed. That is best described by a lock file.
requirements.txt files, while not strictly lock files, are probably the closest thing with broad availability and support in the Python packaging ecosystem. And as far as I know, requirements.txt files are the only "lock file"-ish format that tools like shiv and pex accept as input.
So my recommendation would be to focus on requirements.txt files as the input to pex or shiv, as you are already doing.
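For completeness, a hypothetical pex invocation mirroring the shiv command in the question (untested here; check pex's own documentation for the exact flags):

(MyPipenvVenv) % pex . -r requirements.txt --console-script myapp -o app/myappfile.pex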
In the Python packaging ecosystem...
It looks like PDM has a real lock-file format and already supports generating zipapps via the pdm-packer plugin.
Poetry also has a lock-file format, and they are somewhat looking into supporting zipapps as well.
There are discussions and ongoing work towards a standardized lock-file format, but it is difficult work and will probably take some time to reach a conclusion.

When is the postinstall script in package.json executed while installing a VS Code extension?

I have created an extension and specified a postinstall script that runs an installer file, which downloads a zip, extracts the binary, and places it in another folder.
But it is not working.
Please tell me what the postinstall script in package.json is for when installing an extension, and when it is executed.
Thanks,
Akhil
postinstall: run AFTER the package is installed (per the npm scripts documentation).
That means postinstall applies only to npm packages; it is not run when a VS Code extension is installed. (This is the downside of other ecosystems abusing package.json.)
If you want to run some logic (download, extract, etc.), you can use the extension's activation event to run a check (for example, whether a file exists) and, if the check fails, run the installation process.

How to submit a package to PyPI under a different user than my ~/.pypirc

As far as I can tell from the docs, unlike with, say, git and .gitignore files, setuptools will only look in your $HOME directory for a .pypirc file.
Mostly I am submitting as 'myself', but now I want to submit a specific project via my employer's dev team account.
setup.py register --help doesn't seem to indicate any way to supply a username/password other than the one from my ~/.pypirc
There's the setup.cfg file, which could appear in my project root, but it seems that only allows specifying args accepted by the command, so same as above.
Same for .pydistutils.cfg (?)
Surely I can't be the only one - what's the usual way to do this?
I found a workaround, which is to use https://pypi.python.org/pypi/twine
After installing twine I was able to create a project-specific .pypirc file in the project root, containing the company username/password.
Before using twine you have to generate the package using setup.py though, so the procedure is (from your project root):
$ python setup.py sdist
$ twine register --config-file=./.pypirc dist/*
$ twine upload --config-file=./.pypirc dist/*
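For reference, the project-local .pypirc uses the same format as the one in $HOME. A minimal sketch, with placeholder account values:

[distutils]
index-servers =
    pypi

[pypi]
username = company-team-account
password = <company-password>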

How to migrate a virtualenv

I have a relatively big project with many dependencies that I would like to distribute, but installing those dependencies is a bit of a pain and takes a very long time (pip install takes quite a while). So I was wondering if it is possible to migrate a whole virtualenv to another machine and have it run there.
I tried copying the whole virtualenv, but whenever I try running something, the virtualenv still uses the paths from my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path?
I tried sed -i 's/sshum/dev1/g' * in the bin directory, and it solved that issue. However, I'm now getting a different issue; my guess is that the sed changed something it shouldn't have.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend, but no go.
Export the virtual environment
From within the virtual environment:
pip freeze > requirements.txt
Once on the new machine, copy requirements.txt into the new project folder and run this terminal command:
sudo pip install -r requirements.txt
Then you should have all the packages that were previously available in the old virtual environment.
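Putting the steps together, a minimal sketch of the whole flow on the new machine (the paths are placeholders):

python3 -m venv /home/dev1/backend        # create a fresh virtualenv on the new machine
source /home/dev1/backend/bin/activate
pip install -r requirements.txt           # reinstall everything captured by pip freeze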
When you create a new virtualenv, it is configured for the computer it is running on; I even think it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your old virtualenv directory into the new one, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
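For example, one way to reuse downloaded packages with pip's standard flags (the wheelhouse directory name is arbitrary):

pip download -r requirements.txt -d ./wheelhouse
pip install --no-index --find-links=./wheelhouse -r requirements.txt

The first command fetches everything once; the second installs offline from the local copies.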
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you do so, you must also edit the shebang line (#!) of bin/pserve, which points at the interpreter.
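For example, a rough sketch of those two manual edits, reusing the paths from the question (adjust to your own layout; sed -i edits files in place):

# rewrite the old virtualenv path inside bin/activate (it sets VIRTUAL_ENV)
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' bin/activate
# point the pserve script's shebang at the new interpreter
sed -i '1s|^#!.*|#!/home/dev1/backend/bin/python|' bin/pserve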