How to build pex or shiv package from pyproject-compliant project?

I have a Python project which I would like to distribute as a Pex or shiv self-contained Python-executable package, in the spirit of the Python Packaging Guide, "Depending on a pre-installed Python" section. My project is structured in the spirit of PEP 518, and it has a pyproject.toml file. My project also depends on a few libraries not in the Python Standard Library, so I use pipenv to manage those.
How do I build the pex package using a backend which I can specify as the build-backend in the [build-system] table of my pyproject.toml file?
The documentation for pex and shiv shows how to build self-contained packages from the command line, or via setup.py, but not using the PEP 518 structure and pyproject.toml. At least, not as far as I have been able to discover. (And by "self-contained", I mean all Python-language packages are included; I am happy to use an existing Python 3 interpreter on the destination system.)
Note that of the three executable packages listed in the Packaging Guide, zipapp does not seem like a fit for me: it doesn't give me a way to manage my external libraries.
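For illustration, the stdlib zipapp flow would look roughly like this (hypothetical names; the dependencies have to be vendored by hand with pip, which is exactly the management step I don't want to do manually):
% pip install --target myapp_src -r requirements.txt
% python -m zipapp myapp_src -m "myapp:main" -o myappfile.pyz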
Update: some specific invocations, per request.
I currently use build as my build frontend. I use setuptools as my build backend. My pyproject.toml file currently reads,
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
I currently build a wheel via this shell command:
(MyPipenvVenv) % python -m build
…[many lines of output elided]…
Successfully built MyProject-0.0.6a0.tar.gz and MyProject-0.0.6a0-py3-none-any.whl
I can build a self-contained app (which relies on the system's Python interpreter) using these pipenv and shiv commands:
(MyPipenvVenv) % pipenv requirements > requirements.txt
(MyPipenvVenv) % shiv --console-script myapp -o app/myappfile.pyz -r requirements.txt .
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting click==8.1.3
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting pip==22.1.2
Using cached pip-22.1.2-py3-none-any.whl (2.1 MB)
Collecting setuptools==62.5.0
Using cached setuptools-62.5.0-py3-none-any.whl (1.2 MB)
Collecting shiv==1.0.1
Downloading shiv-1.0.1-py2.py3-none-any.whl (19 kB)
Building wheels for collected packages: MyProject
Building wheel for MyProject (pyproject.toml): started
Building wheel for MyProject (pyproject.toml): finished with status 'done'
Created wheel for MyProject: filename=MyProject-0.0.6a0-py3-none-any.whl size=5317 sha256=bbcc…cf
Stored in directory: /private/var/folders/…/pip-ephem-wheel-cache-eak1xqjp/wheels/…cc1d
Successfully built MyProject
Installing collected packages: MyProject, setuptools, pip, click, shiv
Successfully installed MyProject-0.0.6a0 click-8.1.3 pip-22.1.2 setuptools-62.5.0 shiv-1.0.1
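This works: the resulting archive is directly executable, since shiv writes an interpreter shebang into it, e.g.
(MyPipenvVenv) % ./app/myappfile.pyz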
What I want is to give the command to the PEP 517 front-end, have pyproject.toml specify that the resulting build work be done by shiv, and point to whatever configuration shiv needs. I want the result to be a self-contained app file, app/myappfile.pyz, e.g.
(MyPipenvVenv) % python -m build
…[many lines of output elided]…
Successfully built MyProject
Installing collected packages: MyProject, setuptools, pip, click, shiv
Successfully installed MyProject-0.0.6a0 click-8.1.3 pip-22.1.2 setuptools-62.5.0 shiv-1.0.1
My pyproject.toml file would be something like,
[build-system]
requires = ["shiv"]
build-backend = "shiv.build_something_something"

As far as I know, shiv is not a "PEP 517 build back-end" (neither is pex), so it is not possible to write something like the following in pyproject.toml:
[build-system]
requires = ["shiv"]
build-backend = "shiv.build_something_something"
As discussed in PEP 517 itself, the interface is targeted at the generation of source distributions (sdists) and wheels only.
From my point of view, I consider tools like shiv and pex that generate zipapps to be (at least) one layer above. When working at this level, it does not matter whether or not sdists and/or wheels are generated via the PEP 517 interface; in other words, it does not matter whether or not pyproject.toml files are involved. I assume that shiv and pex either consume wheels and sdists that are already available (maybe downloaded from PyPI) or delegate the "build" step to a third-party tool (maybe pip, maybe build); I do not know, and it does not matter.
From my point of view, the input that makes the most sense to get a zipapp as output is some kind of "lock file", and not a (PEP 517) pyproject.toml file. Zipapps are basically one whole "virtual environment" in a single file. It means that the Python interpreter is fixed, and each dependency (direct or indirect) is fixed. This is best described with a lock file.
The requirements.txt files, while not strictly lock files, are probably the closest thing with enough availability and support in the Python packaging ecosystem. And as far as I know, requirements.txt files are the only "lock file"-ish format that tools like shiv and pex accept as input.
So my recommendation would be to focus on requirements.txt files as the input you provide to pex or shiv, as you are already doing.
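For completeness, a pex invocation equivalent to the shiv command shown in the question would look something like this (an untested sketch; the flag spellings are from pex's CLI, and the project and app names are taken from the question):
(MyPipenvVenv) % pex . -r requirements.txt -c myapp -o app/myappfile.pex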
In the Python packaging ecosystem...
It looks like PDM has a real lock file format and already has support for generating zipapps via a plugin, pdm-packer.
Poetry also has a lock file format, and they are somewhat looking into supporting zipapps as well.
There are discussions and work going on towards a standardized lock file format. But it is difficult work, and it will probably still take some time to reach a conclusion.

Related

Why is package included in Yocto rootfs?

I'm in the process of upgrading from Yocto Sumo to Yocto Dunfell. In this process there are quite a few packages getting added to the rootfs that weren't there before and which I have no use for. I would like to know why they are added. Which dependency triggers them to get added?
In previous versions of Yocto there was a pn-depends.dot file which provided this information. This has now been removed. All that is left is a task-depends.dot which I guess I should use, however it is harder to read as it lists dependencies between individual tasks and doesn't show why a certain package is added to the rootfs. The command bitbake -g <image-name> -u taskexp makes it slightly easier to read the file but it is still hard to understand as package names are not always the same as task names.
What is the preferred solution to get an answer to "why is <package> included in my rootfs?"
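For concreteness, the workflow described above looks like this (the image and package names are placeholders):
$ bitbake -g core-image-minimal       # writes task-depends.dot to the current directory
$ grep libfoo task-depends.dot        # find which tasks pull the package's recipe in
$ bitbake -g core-image-minimal -u taskexp   # or browse the same graph interactively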

recipe also produces -native output that needs packaging

I have a recipe which successfully invokes a legacy build command to cross-compile a target.
As a side effect it produces some custom native tools that are used in the build.
I want to reap those tools into a -tools-native package, so that other recipes can depend on the main package to access the artifacts, and use the -tools-native package to further process those artifacts.
I can build such a native package as simply as adding:
# also provide a native-tools variant of this recipe's output
PROVIDES = "${PN} ${PN}-tools-native"
# stage everything, including /native-bin, into dependents' sysroots
SYSROOT_DIRS += "/"
PACKAGES += "${PN}-tools-native"
FILES_${PN}-tools-native += "/native-bin/*"
and having the install section install the native tools to /native-bin/
but yet it somehow isn't a real native package, and when DEPENDS'd on by an additional recipe, the native-bin artifacts are installed in recipe-sysroot instead of recipe-sysroot-native.
I also have to install the tools with mode 0644 or bitbake tries to strip them (and fails, as they are native binaries).
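As an aside, stripping can also be disabled outright, which would avoid the 0644 workaround; the variables below are standard Yocto ones, though untested in this recipe:
# stop do_package from trying to strip these host-architecture binaries
INHIBIT_PACKAGE_STRIP = "1"
# skip the architecture QA check for the native tools package
INSANE_SKIP_${PN}-tools-native += "arch"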
Because the native tools are already generated by the legacy build commands, I don't need to actually invoke the build as a -native recipe variant.
It's a long process; I don't want to run it twice, either.
Currently I work around it by having the other recipes DEPEND on recipe-native-tools and fix up the permissions and PATH.
But what's the proper way to do this?
This is generally handled by separate recipes. There is no mechanism to share native binaries from target recipes as their task hashes have the wrong kinds of information in them (they change depending on the target architecture).
Target recipes don't install their bindir/sbindir into the sysroot, since we can't run them and, as you mention, they're the wrong architecture, so they confuse strip and so on.
You could try having a native recipe which depends upon this target recipe and which installs the binaries saved by the target recipe somewhere into its ${D} at do_install. That may well give some warnings, since in general native recipes shouldn't depend on target recipes, but it is probably your best option if you can't build twice.
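A rough sketch of such a recipe, under the assumptions above (all names are hypothetical, and it assumes the target recipe stages its tools under /native-bin in the dependent's recipe-sysroot, as described in the question):
# mylegacy-tools-native_1.0.bb -- harvests prebuilt host tools from the target build
SUMMARY = "Native tools produced as a side effect of the mylegacy target build"
LICENSE = "CLOSED"
inherit native
# unusual: a native recipe depending on a target recipe (expect QA warnings)
DEPENDS = "mylegacy"
do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${RECIPE_SYSROOT}/native-bin/* ${D}${bindir}
}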

How to create and compile a custom module in MongooseIM

System Info:
MongooseIM version: 3.0.0
Installed from: pkg
Erlang/OTP version: 18
Ubuntu 16.04
I am having trouble creating a standard base for a custom module. I want to create a simple hello world program as outlined in the documentation for ejabberd.
However, I cannot get it to work for MongooseIM. Are there any instructions for how to do this? As a beginner, I am just looking for building blocks for creating my own modules, and everything I look at is a little too complex for what I am trying to achieve at the moment.
Here is the code for my module: (taken from ejabberd) https://docs.ejabberd.im/developer/extending-ejabberd/modules/#mod-hello-world
And here is my log error:
I have added the following line in my config file with all other running modules:
{mod_hello_world, []}
I am assuming it has something to do with the compilation and there being no .beam file created for the module, as well as some syntax errors specific to MongooseIM. I am also unfamiliar with documentation for compiling modules when using a pre-built pkg as opposed to installing from source.
DISCLAIMER: I'm a MongooseIM developer working for Erlang Solutions.
The link you posted hints at the answer to the immediate question:
If you compiled ejabberd from source code, you can copy that source code file with all the other ejabberd source code files, so it will be compiled and installed with them. If you installed some compiled ejabberd package, you can create your own module dir, see Managing Your Own Modules.
MongooseIM (a.k.a. MIM) does not support the latter method of managing modules, i.e. it's not possible to drop source code into some predefined location when MIM is installed from a package and let it just compile and run the module. If we want to write a custom module, we have to build MongooseIM from source.
To be precise, we don't have to build the whole server from source and package it ourselves. We have to, however, clone the repository, place the new module's source there (due to build-time requirements like header files), and build it there. Once we get a .beam file for the new module, we can just drop it into an installed MongooseIM's code path.
To be even more precise, let's say we have installed MIM from mongooseim_3.0.0-1~ubuntu~artful_amd64.deb available from the Downloads page at erlang-solutions.com, therefore we want to build a module compatible with 3.0.0:
Clone MIM: git clone https://github.com/esl/mongooseim
cd mongooseim
git checkout 3.0.0
Place mod_hello_world.erl under ./src/
rebar3 compile
Once rebar3 finishes, get ./_build/default/lib/mongooseim/ebin/mod_hello_world.beam and copy it to the target host where we installed MIM from a package.
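As a concrete copy step, it would look something like this (the installed code path on the target host is a guess; adjust it to wherever your package put MongooseIM's ebin directories):
$ scp _build/default/lib/mongooseim/ebin/mod_hello_world.beam target-host:/usr/lib/mongooseim/lib/mongooseim-3.0.0/ebin/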
Please note, though, that an example taken straight from ejabberd documentation might not work "as is" in MongooseIM. In this simple module, for example, we'll not be able to include logger.hrl as MongooseIM doesn't have such a header file - we would have to -include("mongoose_logger.hrl"). instead.

Programming in Swift on Linux

I would like to prepare the environment for working with Swift on Ubuntu 16.04.
I installed Swift and Atom editor.
I installed the Script package, which allows me to run code from the Atom editor.
Generally it works nicely when I compile and run a single file (Ctrl+Shift+B shortcut).
The problem is when I would like to build a project composed of several files.
Classes defined in the other files (not the one I compile) are not visible (compilation error).
Is it possible to configure the editor to compile and run the entire project?
How do I import an external library, e.g. ObjectMapper?
You can use the Atom package build. It allows you to create custom build commands by using common build providers. You can build with a Makefile or JSON or CSON or YAML or even JavaScript. It provides enough flexibility that you can build just about anything. Just write your build file so that it points to all the files to build with the right compiler (probably swiftc in your case). With a JavaScript build file, you can even specify a command to run before and after the build, say, to run your newly built program.
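Whatever build provider you choose, the underlying command only needs to hand every source file to swiftc at once, which is also what fixes the "classes defined in the other files are not visible" error (file names below are placeholders):
$ swiftc main.swift OtherClass.swift -o myapp
$ ./myapp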
There's a great open source project I have been watching called Marathon. It's a package manager, and they have been working on a deployment on Linux. I'm not sure how much success they have had, but you can follow along here and maybe help out.
https://github.com/JohnSundell/Marathon/issues/37
Edit: It looks like it does work on Linux!
$ git clone https://github.com/JohnSundell/Marathon.git
$ cd Marathon
$ swift build -c release
$ cp -f .build/release/Marathon /usr/local/bin/marathon
For dependencies, you should use Swift Package Manager.
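A minimal sketch of that route (the dependency itself, e.g. ObjectMapper, is declared in the generated Package.swift):
$ swift package init --type executable   # generates Package.swift and Sources/
# edit Package.swift to add the dependency, then:
$ swift build
$ swift run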
You can check how Vapor is built; it is prepared for building apps for Ubuntu too.
Also, the Vapor toolbox can help you with other projects:
https://docs.vapor.codes/2.0/getting-started/install-on-ubuntu/
You can build a Swift project using VS Code + the Swift Development Environment extension.
If the steps at the link above are not clear enough, I've put more details in a blog post.

Building a bitbake component locally

I am writing a component that goes into the Yocto build, but during development I don't want to build the entire image. I want to check out my component (in its own Git repo), build it using the cross-compiler used for building the entire tree, and test it before checking in (devtest) and building the entire filesystem for system test. I have not found a way to do that.
If I understand your question correctly, what you want to do is build the SDK?
Run
bitbake -c populate_sdk <image-name>
That'll give you a nice SDK as a self-extracting installer script. Execute that installer to put the SDK in your desired location.
In the shell where you're developing your application, you source the environment-setup-… file in the installed location. Now everything is configured to cross-compile, as long as you're using e.g. $CC instead of directly calling gcc.
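End to end it looks like this (image name, installer name, and install path are placeholders that vary with your distro and machine settings):
$ bitbake -c populate_sdk core-image-minimal
$ ./tmp/deploy/sdk/poky-glibc-x86_64-core-image-minimal-toolchain-*.sh
$ . /opt/poky/*/environment-setup-*
$ $CC -o mytool mytool.c    # $CC now points at the cross-compiler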