What's the difference between [tool.poetry] and [project] in pyproject.toml?

Context
So, I'm trying to create a new Python package following this guide: https://packaging.python.org/en/latest/tutorials/packaging-projects/
As the guide says, my pyproject.toml should have this structure:
[project]
name = "example_package_YOUR_USERNAME_HERE"
version = "0.0.1"
authors = [
{ name="Example Author", email="author#example.com" },
]
description = "A small example package"
but when I create this file with poetry init, I get this structure:
[tool.poetry]
name = "example_package_YOUR_USERNAME_HERE"
version = "0.0.1"
authors = [
{ name="Example Author", email="author#example.com" },
]
description = "A small example package"
The main difference between the two is the section header: [project] versus [tool.poetry].
I also see that Poetry can't do anything with [project] when there is no [tool.poetry] section in pyproject.toml.
So my questions are:
What is the difference between the two?
Should I have only one, or both at the same time, in my pyproject.toml? If I should keep both, what should each contain?
If there should be only [tool.poetry], do I need to follow the same naming rules as for the [project] sections? For example, does [project.urls] become [tool.poetry.urls]?
Which is better for future publishing on PyPI, or is there no difference?
Is changing the [build-system] from poetry-core to setuptools a good idea, or should I keep poetry-core?

1. What is the difference between the two?
The [project] section is standardized (also known as PEP 621). But Poetry predates this standard, so it started with its own [tool.poetry] section. Poetry is planning to add support for the standardized [project] (see python-poetry/poetry/issues/3332 and python-poetry/roadmap/issues/3), but it takes time.
The differences between the two are quite small; they are basically different notations for the same package metadata.
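For illustration, here is the same metadata in both notations; note that, at the time of writing, Poetry uses its own syntax for authors and dependency constraints (the package name and versions below are placeholders):
# Standardized (PEP 621) notation:
[project]
name = "example-package"
version = "0.0.1"
authors = [
{ name="Example Author", email="author@example.com" },
]
dependencies = [
"requests>=2.28,<3",
]

# Poetry's own notation:
[tool.poetry]
name = "example-package"
version = "0.0.1"
authors = ["Example Author <author@example.com>"]

[tool.poetry.dependencies]
requests = "^2.28"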
2. Should I have only one, or both at the same time, in my pyproject.toml? If I should keep both, what should each contain?
You should have only one. You have to choose a build back-end. If your build back-end is poetry-core, then you need the [tool.poetry] section. If you choose a build back-end that requires [project] (which is the case for setuptools), then that is what you should have.
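Concretely, it is the [build-system] table that selects the back-end. The two typical configurations look something like this (version pins omitted):
# Poetry as the build back-end (reads [tool.poetry]):
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

# setuptools as the build back-end (reads [project]):
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"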
3. If there should be only [tool.poetry], do I need to follow the same naming rules as for the [project] sections? For example, does [project.urls] become [tool.poetry.urls]?
The two are not exactly one-to-one equivalents; there are some differences. Follow Poetry's documentation if you use Poetry, or the [project] specification if you use something else (setuptools, etc.).
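URLs are a good example of the mismatch: [project] uses a single [project.urls] table, while Poetry has dedicated top-level keys (homepage, repository, documentation) plus a [tool.poetry.urls] table for custom entries. The URLs below are placeholders:
# Standardized notation:
[project.urls]
Homepage = "https://example.com"
"Bug Tracker" = "https://example.com/issues"

# Poetry's notation:
[tool.poetry]
homepage = "https://example.com"

[tool.poetry.urls]
"Bug Tracker" = "https://example.com/issues"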
4. Which is better for future publishing on PyPI, or is there no difference?
There is not much difference. You could argue that choosing a build back-end that follows the [project] standard is better, but that alone should not drive your choice. There are many other criteria to weigh, for example:
https://sinoroc.gitlab.io/kb/python/packaging_tools_comparisons.html#development-workflow-tools
https://sinoroc.gitlab.io/kb/python/packaging_tools_comparisons.html#build-back-ends
5. Is changing the [build-system] from poetry-core to setuptools a good idea, or should I keep poetry-core?
Poetry, the development workflow tool, does not allow using any build back-end other than poetry-core. So if you want to keep using Poetry for your project, you have no choice but to keep poetry-core as your build back-end.

The [project] table is mandatory in pyproject.toml in the sense that, if it is missing, the build tool (defined in the [build-system] section) has to provide all of its keys dynamically. I guess that's exactly what Poetry does.
From the documentation:
The keys defined in this specification MUST be in a table named [project] in pyproject.toml. No tools may add keys to this table which are not defined by this specification. For tools wishing to store their own settings in pyproject.toml, they may use the [tool] table as defined in the build dependency declaration specification. The lack of a [project] table implicitly means the build back-end will dynamically provide all keys.
So you don't need the [project] table while you are using Poetry. If you change the build system, you must convert your pyproject.toml to be PEP 621 compliant.
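To make the "dynamic" mechanism concrete with setuptools as the back-end: any key listed in dynamic is filled in by the back-end at build time. A minimal sketch (the package and attribute names are made up):
[project]
name = "example-package"
dynamic = ["version"]

[tool.setuptools.dynamic]
version = {attr = "example_package.__version__"}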

Related

Specifying build requirements as a file in a setuptools pyproject.toml

Setuptools supports dynamic metadata for project properties in pyproject.toml, and as a PEP 517 backend it also has the option to specify build requirements by implementing get_requires_for_build_wheel. But I cannot figure out whether it takes advantage of this and implements a way to specify build requirements based on configuration options, and if so, how to express that in pyproject.toml.
I naively tried
[build-system]
requires = {file = "requirements-build.txt"}
but that understandably leads to pip complaining “This package has an invalid build-system.requires key in pyproject.toml. It is not a list of strings.” And adding
[project]
dynamic = ["build-system.requires"]
also doesn't work, because the possible values of dynamic are explicitly enumerated. I would be somewhat surprised if there wasn't an option for this, given that all the infrastructure elements are available, but how do I specify it?
As far as I know, it is not possible.
If it is really necessary for your use case, and you think it is worth the cost, maybe it is possible to add some dynamic behavior here by (mis-)using the "in-tree build backends" feature.
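If you do go down that road, a minimal sketch could wrap setuptools' own back-end and override its get_requires_for_build_wheel hook. The file names local_backend.py and requirements-build.txt below are assumptions, not an established convention:
# pyproject.toml
[build-system]
requires = ["setuptools"]
build-backend = "local_backend"
backend-path = ["."]

# local_backend.py (at the project root)
from setuptools import build_meta as _orig
from setuptools.build_meta import *  # re-export the standard PEP 517 hooks

def get_requires_for_build_wheel(config_settings=None):
    # Append requirements read from a file to setuptools' own build requirements.
    with open("requirements-build.txt") as f:
        extra = [line.strip() for line in f
                 if line.strip() and not line.startswith("#")]
    return _orig.get_requires_for_build_wheel(config_settings) + extra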

Yocto - Why do runtime variables (RDEPENDS, RPROVIDES, etc), require package name overrides?

Essentially, I don't understand why variables like RDEPENDS require a package-name conditional override such as RDEPENDS_${PN}, while other variables, including DEPENDS, do not. Isn't putting the package name as a conditional after the variable pointless? I feel like my confusion may stem from some fundamental misunderstanding of the way BitBake works.
When a recipe is built, that single recipe can generate multiple packages. For example, debugging information is in ${PN}-dbg, docs in ${PN}-doc and development headers/files in ${PN}-dev. The "main" files for a recipe would go to ${PN} but many recipes split other pieces into other separate packages by adding entries to PACKAGES (which defaults to the above values).
Since there are multiple output "runtime" packages, runtime variables such as RDEPENDS have to be applied to a specific output package, hence the RDEPENDS:${PN} (or, in older releases, RDEPENDS_${PN}) variable name format; otherwise it would be unclear which package they applied to.
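A hypothetical recipe fragment showing the distinction (the dependency names are made up):
# Build-time dependencies apply to the recipe as a whole:
DEPENDS = "zlib"

# Runtime dependencies must name the output package they belong to:
RDEPENDS:${PN} = "bash"
RDEPENDS:${PN}-dev = "perl"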

Adding additional libraries to MODELICAPATH in JModelica

In JModelica I want to create models using components from multiple existing libraries.
This means that it would be very useful to add the multiple libraries to the MODELICAPATH so components can be referenced without changing their existing paths. Something similar seems possible in Dymola.
In JModelica 1.13 it seems that this was once possible using:
c_opts = {'extra_lib_dirs': [r'c:\MyLibs1', r'c:\MyLibs2']}
compile_fmu(class_path, compiler_options=c_opts)
I have read through the JModelica 2.1 document and there seems to be no mention of this argument. I have also tried running the script above and the compiler is not able to locate the path of the model contained within a library listed in the options.
Adding libraries to the Third Party MSL Folder inside the JModelica installation is not an option, as the multiple libraries I'll be working with are GitHub repos.
Is it possible to add these multiple libraries to the MODELICAPATH via startup script or IPython code?
The option "extra_lib_dirs" has been removed in favour of the simpler interface:
from pymodelica import compile_fmu
name = compile_fmu("MyModel", ["MyModelicaFile.mo", r"C:\My\Modelica\Lib", ...])
The list after the model name can take any number of Modelica files or directories where Modelica libraries are located.
Yes, JModelica.org looks at the environment variable MODELICAPATH for additional locations of Modelica libraries (as per the Modelica language specification, section 13.2.4).
Either you modify the variable in batch before starting JModelica.org, or you modify the environment inside Python:
import os
os.environ['MODELICAPATH'] = "C:/somePath/;" + os.environ['JMODELICA_HOME'] + "/ThirdParty/MSL"
from pymodelica import compile_fmu
compile_fmu("SomeLibrary.SomeModel")
Note, if you're going to compile models from MSL or models using parts of MSL, then you have to add the MSL folder from the JModelica.org installation to the MODELICAPATH as well. The reason for this is that we are overriding the default MODELICAPATH and JModelica.org uses MODELICAPATH to find MSL.
I might add that it is more efficient to add the library folders to MODELICAPATH than to list them in the compile_fmu call. If you list them in the compile_fmu call, all the libraries are parsed up front, whereas if you add them (or rather their parent folder) to MODELICAPATH, they are loaded as needed.

How do Atom's 'spec' files work?

I'm making a package for Atom, and Travis CI keeps telling me my build failed.
Update: I created a blank spec file and now my builds are passing.
You can see my package here: https://travis-ci.org/frayment/language-jazz
The console is telling me:
sh: line 105: ./spec: No such file or directory
Missing spec folder! Please consider adding a test suite in
I went looking around at Atom packages on GitHub for 'spec' files and they seem to be CoffeeScript based, but I can't understand what on earth they contain. There isn't much documentation on the subject, so:
What is a 'spec' file, and what do I put in it?
Help is very appreciated.
The ./spec directory should contain one or more Jasmine Specifications for the Atom Package you are developing, for example, this spec is taken from the Atom documentation:
describe "when a test is written", ->
it "has some expectations that should pass", ->
expect("apples").toEqual("apples")
expect("oranges").not.toEqual("apples")
One of the biggest challenges with open-source software is maintaining quality when a large number of individual contributors provide code. One solution is a high level of test coverage:
Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing.
In Atom's case, all of the specifications are added to the ./spec folder and must end with -spec.coffee. For example, if you were creating a package named awesome and your code sat within /awesome.coffee, then your spec would be ./spec/awesome-spec.coffee. Your spec should exercise the key areas of your code to give you confidence when committing pull requests to your master branch.
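For instance, a hypothetical ./spec/awesome-spec.coffee could start by checking that the package activates at all:
describe "awesome package", ->
  beforeEach ->
    waitsForPromise ->
      atom.packages.activatePackage("awesome")

  it "activates without throwing", ->
    expect(atom.packages.isPackageActive("awesome")).toBe true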
I have a couple of packages on Atom.io and both of these have tests included with them, you are welcome to use these as concrete examples of how Jasmine 1.3 tests can be written to support the functionality of your packages. Equally the majority of packages on Atom.io also have a set of tests that you can draw upon to build your own test suite.

String constants for NetBeans project types

I am developing a plugin for NetBeans 8.0 and I created a LookupProvider which is registered like that:
@LookupProvider.Registration(projectType = {
"org-netbeans-modules-ant-freeform",
"org-netbeans-modules-j2ee-archiveproject",
"org-netbeans-modules-j2ee-clientproject",
"org-netbeans-modules-j2ee-earproject",
"org-netbeans-modules-j2ee-ejbjarproject",
"org-netbeans-modules-java-j2seproject",
"org-netbeans-modules-maven",
"org-netbeans-modules-web-clientproject",
"org-netbeans-modules-web-project"
})
I would like to know whether there is a possibility to reference the project types from constants (already defined by the NetBeans platform), or do I really have to declare them as strings (like org-netbeans-modules-web-clientproject)?
I believe there are constants for these, but the question is whether you really want to depend on them. Oftentimes the constants are hidden in the project type's own module, which doesn't provide API packages or provides them only to friends. And typically your own primary dependency is on the interface that you implement and put into the lookup. There could be some sort of master list in a public package somewhere, but that could always just list a subset of the project types. Also note that for Maven you can actually have an unlimited number of such identifiers, since a LookupProvider can also be registered for a given Maven packaging type.
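If your goal is just to avoid repeating the string literals, one workaround is to declare your own compile-time constants, since annotation values only need to be constant expressions. The class and constant names below are made up:
// Hypothetical holder for the project type identifiers you target:
public final class ProjectTypes {
    public static final String MAVEN = "org-netbeans-modules-maven";
    public static final String J2SE = "org-netbeans-modules-java-j2seproject";

    private ProjectTypes() {}
}

@LookupProvider.Registration(projectType = {ProjectTypes.MAVEN, ProjectTypes.J2SE})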