Is there a simple way to convert setup.py to pyproject.toml - setuptools

We have multiple Python projects, and are considering converting them to use pyproject.toml instead of setup.py.
Is there a simple way to automate this?

While dealing with some pyproject.toml issues, I encountered this project:
https://pypi.org/project/ini2toml/
The project's description:
This project is experimental and under active development. Issue reports and contributions are very welcome.
The original purpose of this project is to help migrate setup.cfg files to PEP 621, but by extension it can also be used to convert any compatible .ini/.cfg file to .toml.
While this only helps turn .cfg/.ini files into PEP 621 .toml, there is another project that turns setup.py files into .cfg files.
https://github.com/gvalkov/setuptools-py2cfg
This script helps convert existing setup.py files to setup.cfg in the format expected by setuptools.
By chaining these two tools in a script, you could potentially come up with an automated way to transform the files. I have not tried this yet, but I would be interested to hear if you can make this idea work :)
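For illustration, a minimal sketch of such a pipeline, assuming both tools print their result to stdout (which their documentation suggests is the default behavior; verify with each tool's --help and inspect the output before committing it):

pip install setuptools-py2cfg 'ini2toml[full]'
setuptools-py2cfg setup.py > setup.cfg    # setup.py  -> setup.cfg
ini2toml setup.cfg > pyproject.toml       # setup.cfg -> PEP 621 pyproject.toml

Afterwards, sanity-check the result by building the project (e.g. with python -m build) and comparing the produced metadata.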

pdm supports importing metadata from various existing and older metadata files, including setup.py.
You can either:
run pdm init on your project root (with the old metadata files) and follow the instructions, or
run pdm import setup.py explicitly.
See Import project metadata from existing project files for more details.
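For illustration, after the import a minimal pyproject.toml might look roughly like this (the field values are placeholders that depend on what your setup.py declares, and the exact backend names depend on your PDM version):

[project]
name = "myproject"
version = "0.1.0"
dependencies = ["requests>=2.0"]

[build-system]
requires = ["pdm-backend"]
build-backend = "pdm.backend"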

Related

How to build pex or shiv package from pyproject-compliant project?

I have a Python project which I would like to distribute as a Pex or shiv self-contained Python-executable package, in the spirit of the Python Packaging Guide, "Depending on a pre-installed Python" section. My project is structured in the spirit of PEP 518, and it has a pyproject.toml file. My project also includes a few libraries not in the Python Standard Library, so I use pipenv to manage those.
How do I build the pex package using a backend which I can specify in the build-backend entry of my pyproject.toml file?
The documentation for pex and shiv shows how to build self-contained packages from the command line, or via setup.py, but not using the PEP 518 structure and pyproject.toml. At least, not as far as I have been able to discover. (And by "self-contained" I mean all Python-language packages; I am happy to use an existing Python 3 interpreter on the destination system.)
Note that of the three executable-package approaches listed in the Packaging Guide, zipapp does not seem like a fit for me. It doesn't give me a way to manage my external libraries.
Update: some specific invocations, per request.
I currently use build as my build frontend. I use setuptools as my build backend. My pyproject.toml file currently reads,
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
I currently build a wheel via this shell command:
(MyPipenvVenv) % python -m build
…[many lines of output elided]…
Successfully built MyProject-0.0.6a0.tar.gz and MyProject-0.0.6a0-py3-none-any.whl
I can build a self-contained app (which relies on the system's Python interpreter) using these pipenv and shiv commands:
(MyPipenvVenv) % pipenv requirements > requirements.txt
(MyPipenvVenv) % shiv --console-script myapp -o app/myappfile.pyz -r requirements.txt .
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting click==8.1.3
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting pip==22.1.2
Using cached pip-22.1.2-py3-none-any.whl (2.1 MB)
Collecting setuptools==62.5.0
Using cached setuptools-62.5.0-py3-none-any.whl (1.2 MB)
Collecting shiv==1.0.1
Downloading shiv-1.0.1-py2.py3-none-any.whl (19 kB)
Building wheels for collected packages: MyProject
Building wheel for MyProject (pyproject.toml): started
Building wheel for MyProject (pyproject.toml): finished with status 'done'
Created wheel for MyProject: filename=MyProject-0.0.6a0-py3-none-any.whl size=5317 sha256=bbcc…cf
Stored in directory: /private/var/folders/…/pip-ephem-wheel-cache-eak1xqjp/wheels/…cc1d
Successfully built MyProject
Installing collected packages: MyProject, setuptools, pip, click, shiv
Successfully installed MyProject-0.0.6a0 click-8.1.3 pip-22.1.2 setuptools-62.5.0 shiv-1.0.1
What I want is to give the command to the PEP 517 front-end, have the pyproject.toml specify that the resulting build work be done by shiv, and point to whatever configuration shiv needs. I want the result to be a self-contained app file, app/myappfile.pyz. e.g.
(MyPipenvVenv) % python -m build
…[many lines of output elided]…
Successfully built MyProject
Installing collected packages: MyProject, setuptools, pip, click, shiv
Successfully installed MyProject-0.0.6a0 click-8.1.3 pip-22.1.2 setuptools-62.5.0 shiv-1.0.1
My pyproject.toml file would be something like,
[build-system]
requires = ["shiv"]
build-backend = "shiv.build_something_something"
As far as I know, shiv is not a "PEP 517 build back-end" (neither is pex), so it is not possible to write something like the following in pyproject.toml:
[build-system]
requires = ["shiv"]
build-backend = "shiv.build_something_something"
By design, the PEP 517 interface is targeted at the generation of source distributions (sdists) and wheels only.
From my point of view, tools like shiv and pex that generate zipapps sit (at least) one layer above. And when working at that level, it does not matter whether sdists and/or wheels are generated via the PEP 517 interface; in other words, it does not matter whether pyproject.toml files are involved. I assume that shiv and pex either consume wheels and sdists that are already available (perhaps downloaded from PyPI) or delegate the "build" step to a third-party tool (perhaps pip, perhaps build); I do not know, and it does not matter.
From my point of view, the input that makes the most sense for producing a zipapp is some kind of "lock file", not a (PEP 517) pyproject.toml file. A zipapp is basically a whole "virtual environment" in a single file: the Python interpreter is fixed, and every dependency (direct or indirect) is fixed. That is best described by a lock file.
requirements.txt files, while not strictly lock files, are probably the closest thing with enough availability and support in the Python packaging ecosystem. And as far as I know, requirements.txt files are the only "lock file"-ish format that tools like shiv and pex accept as input.
So my recommendation would be to focus on requirements.txt files as the input you provide to pex or shiv, as you are already doing.
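For completeness, a rough pex analogue of your shiv invocation might look like this (a sketch only, untested against your project; myapp is assumed to be the console-script name declared in your project metadata):

(MyPipenvVenv) % pex . -r requirements.txt -c myapp -o app/myappfile.pex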
In the Python packaging ecosystem...
It looks like PDM has a real lock-file format and already supports generating zipapps via the pdm-packer plugin.
Poetry also has a lock-file format, and they are somewhat looking into supporting zipapps as well.
There are discussions and work going on towards a standardized lock-file format, but it is difficult work and will probably still take some time to reach a conclusion.

How to create and compile a custom module in MongooseIM

System Info:
MongooseIM version: 3.0.0
Installed from: pkg
Erlang/OTP version: 18
Ubuntu 16.04
I am having trouble creating a standard base for a custom module. I want to create a simple hello world program as outlined in the documentation for ejabberd.
However, I cannot get it to work for MongooseIM. Are there any instructions for how to do this? As a beginner, I am just looking for building blocks for creating my own modules, and everything I look at is a little too complex for what I am trying to achieve at the moment.
Here is the code for my module: (taken from ejabberd) https://docs.ejabberd.im/developer/extending-ejabberd/modules/#mod-hello-world
And here is my log error:
I have added the following line in my config file with all other running modules:
{mod_hello_world, []}
I am assuming it has something to do with the compilation (there being no .beam file created for the module), as well as some syntax errors specific to MongooseIM. I am also unfamiliar with the documentation for compiling modules when using a pre-built pkg as opposed to installing from source.
DISCLAIMER: I'm a MongooseIM developer working for Erlang Solutions.
The link you posted hints at the answer to the immediate question:
If you compiled ejabberd from source code, you can copy that source code file with all the other ejabberd source code files, so it will be compiled and installed with them. If you installed some compiled ejabberd package, you can create your own module dir, see Managing Your Own Modules.
MongooseIM (a.k.a. MIM) does not support the latter method of managing modules, i.e. it's not possible to drop source code into some predefined location when MIM is installed from a package and let it just compile and run the module. If we want to write a custom module, we have to build MongooseIM from source.
To be precise, we don't have to build the whole server from source and package it ourselves. We do, however, have to clone the repository, place the new module's source there (due to build-time requirements such as header files), and build it there. Once we get a .beam file for the new module, we can just drop it into an installed MongooseIM's code path.
To be even more precise, let's say we have installed MIM from mongooseim_3.0.0-1~ubuntu~artful_amd64.deb, available from the Downloads page at erlang-solutions.com, so we want to build a module compatible with 3.0.0:
Clone MIM: git clone https://github.com/esl/mongooseim
cd mongooseim
git checkout 3.0.0
Place mod_hello_world.erl under ./src/
rebar3 compile
Once rebar3 finishes, get ./_build/default/lib/mongooseim/ebin/mod_hello_world.beam and copy it to the target host where we installed MIM from a package.
Please note, though, that an example taken straight from the ejabberd documentation might not work as-is in MongooseIM. In this simple module, for example, we will not be able to include logger.hrl, as MongooseIM doesn't have such a header file; we would have to -include("mongoose_logger.hrl"). instead.
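For reference, the adapted module might look roughly like this (a sketch based on the linked ejabberd example, untested against MongooseIM 3.0.0; only the include line differs from the original):

-module(mod_hello_world).
-behaviour(gen_mod).

%% MongooseIM's logging header, instead of ejabberd's logger.hrl:
-include("mongoose_logger.hrl").

-export([start/2, stop/1]).

%% Called when the module starts for a given host.
start(_Host, _Opts) ->
    ?INFO_MSG("Hello, MongooseIM world!", []),
    ok.

%% Called when the module is stopped.
stop(_Host) ->
    ?INFO_MSG("Bye bye, MongooseIM world!", []),
    ok.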

How can I make a FindMyPackage.cmake module fall back to downloading?

I have a simple CMake find module I've written, for a library of mine used by other projects. It's pretty simplistic, with its full text available here. Mainly there's one find_path() and one find_library(), and then some variables are set.
Now, I want CMake, when trying to find my package, to fall back on:
git-cloning or downloading the package/library from its GitHub repository,
Unpacking the archive, if it was a download
Building the package, either by using the running CMake itself somehow (the package has its own CMakeLists.txt), or by running an arbitrary shell command in the directory into which the package was downloaded/cloned
The specifics of what happens post-download are less important to me than actually having a download fall-back.
How can I / how should I make this happen?
Notes:
Of course, if the download/git clone fails, then finding the package has failed.
No need to worry about specific versions at the repo, although you can if you want to.
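One possible sketch, using CMake's built-in FetchContent module as the fallback (the package name, repository URL, and tag below are placeholders; FetchContent_MakeAvailable needs CMake 3.14 or later):

# In the consuming project's CMakeLists.txt:
find_package(MyPackage QUIET)    # tries FindMyPackage.cmake first (must be on CMAKE_MODULE_PATH)

if(NOT MyPackage_FOUND)
    include(FetchContent)
    FetchContent_Declare(mypackage
        GIT_REPOSITORY https://github.com/example/mypackage.git    # placeholder URL
        GIT_TAG        main)                                       # or pin a tag/commit
    # Clones the repository and runs add_subdirectory() on its CMakeLists.txt,
    # building it as part of the consuming project:
    FetchContent_MakeAvailable(mypackage)
endif()

If the clone fails, FetchContent stops with a fatal error, which matches the first note above.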

Programming in Swift on Linux

I would like to prepare the environment for working with Swift on Ubuntu 16.04.
I installed Swift and Atom editor.
I installed the Script package, which allows me to run code from the Atom editor.
Generally it works nicely when I compile and run a single file (the Ctrl+Shift+B shortcut).
The problem is when I would like to build a project composed of several files.
Classes defined in the other files (not the one I compile) are not visible (compilation error).
Is it possible to configure the editor to compile and run the entire project?
How do I import an external library, e.g. ObjectMapper?
You can use the Atom package build. It allows you to create custom build commands by using common build providers. You can describe the build with a Makefile, JSON, CSON, YAML, or even JavaScript. It provides enough flexibility to build just about anything. Just make your build file point to all the files to build with the right compiler (probably swiftc in your case). With a JavaScript build file, you can even specify commands to run before and after the build, say, to run your newly built program.
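For example, a minimal .atom-build.yml for a two-file project might look roughly like this (the file and output names are placeholders; check the build package's README for the exact schema):

cmd: swiftc
name: Build Swift project
args:
  - -o
  - myapp
  - Sources/main.swift
  - Sources/OtherClass.swift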
There's a great open-source project I have been watching called Marathon. It's a package manager, and they have been working on Linux deployment. I'm not sure how much success they have had, but you can follow along here and maybe help out.
https://github.com/JohnSundell/Marathon/issues/37
Edit: It looks like it does work on Linux!
$ git clone https://github.com/JohnSundell/Marathon.git
$ cd Marathon
$ swift build -c release
$ cp -f .build/release/Marathon /usr/local/bin/marathon
For dependencies, you should use the Swift Package Manager (see the sketch below).
You can check how Vapor is built - it is prepared to build apps for Ubuntu too.
Also, the Vapor toolbox can help you with other projects:
https://docs.vapor.codes/2.0/getting-started/install-on-ubuntu/
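For illustration, a minimal Package.swift that pulls in ObjectMapper might look like this (the repository URL, version, and target name are assumptions to verify against the library's README; the tools version is an assumption as well):

// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        // The library mentioned in the question:
        .package(url: "https://github.com/tristanhimmelman/ObjectMapper.git", from: "3.0.0"),
    ],
    targets: [
        // Compiles Sources/MyApp and links the dependency:
        .target(name: "MyApp", dependencies: ["ObjectMapper"]),
    ]
)

After that, swift build resolves and compiles the dependency along with your own sources.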
You can build a Swift project using VS Code + Swift Development Environment extension
If the steps at the link above are not clear enough, I've put more details in a blog post.

Should I check in *.mo files?

Should I check in *.mo translation files into my version control system?
This is a general question. But in particular I'm working on Django projects with git repositories.
The general answer is:
if you do need those files to compile or to deploy (in short: to "work" with) your component (the set of files queried from your VCS), then yes, they should be stored in it (here: in Git).
This is the same for other kinds of files (project files, for instance).
.mo files are particular: they are produced by the django-admin.py compilemessages utility.
This tool runs over all available .po files and creates .mo files, which are binary files optimized for use by gettext.
Meaning:
you should be able to rebuild them every time you need them (guaranteeing, in effect, that they stay in sync with their .po counterparts)
Git does not handle binary storage well, and skipping them avoids storing a full new version for every change
So the specific answer is not so clear-cut:
if your .po files are stable and will not evolve too often, you could definitely store the .mo files
you should absolutely store a big README file explaining how to generate the .mo files from the .po files.
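For reference, the standard regeneration workflow uses the stock Django commands (run from the project root; "de" is just an example locale):

django-admin.py makemessages -l de    # (re)extracts strings into locale/de/LC_MESSAGES/django.po
django-admin.py compilemessages       # compiles every .po file under locale/ into a .mo file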
The general answer is to not store generated contents in version control.
You can include them in a tarball, if regenerating them requires rare tools, or even keep a separate repository or a disconnected branch with only those generated files (like the 'html' and 'man' branches in the git.git repository).
For the question as asked, Jakub's answer is pretty neat.
But one could ask:
So where should I store such files? Should I generate them every time I deploy my code?
And for that... it depends. You could ship them in a tarball (as Jakub suggested) or, even better, create a pip or system package (an RPM for Fedora, a DEB for Debian, etc.).