Qbs Depends Item to affect products list

Assuming I have a project with tons of sub-projects, and in most cases I only need to compile a few of them at once:
Project {
    property stringList plugins: ["plugin3", "plugin4"] // this is set externally
    references: plugins.map(function(p) { return "plugins/" + p + ".qbs" })
}
Assuming plugin3 depends on plugin1 and plugin2:
Depends { name: "plugin1" }
Depends { name: "plugin2" }
In that case, I would have to set the plugins property as:
plugins: ["plugin1", "plugin2", "plugin3", "plugin4"]
which is what I want to avoid. So, the question is: is there a way to have sub-project dependencies automatically added as project references?
P.S. There is also an alternative: make all sub-projects present but conditionally disabled. Can I somehow have them enabled by a dependent sub-project?

references should pretty much always be a static list; there is no need to make it conditional just because you only want to compile a certain subset of your products at once.
Instead, what you'll want to do is run qbs with the -p option, which lets you specify the name of a particular product to build. Using this option will build only that product and its dependencies, but not the rest of the products in your project.
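For example, to build just plugin3 and whatever it pulls in via its Depends items (if I recall the CLI correctly, -p is short for --products and accepts a comma-separated list; see qbs help build for your version):
    qbs build -p plugin3
The other referenced products stay untouched, so the references list itself can remain static.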

What's the difference between [tool.poetry] and [project] in pyproject.toml?

Context
So, I'm trying to create a new Python package following this guide: https://packaging.python.org/en/latest/tutorials/packaging-projects/
As the guide says, my pyproject.toml should have this structure:
[project]
name = "example_package_YOUR_USERNAME_HERE"
version = "0.0.1"
authors = [
    { name = "Example Author", email = "author@example.com" },
]
description = "A small example package"
but when I create this file with poetry init, I get this structure:
[tool.poetry]
name = "example_package_YOUR_USERNAME_HERE"
version = "0.0.1"
authors = [
    { name = "Example Author", email = "author@example.com" },
]
description = "A small example package"
The main difference between the two is the section header: [project] vs. [tool.poetry].
I also see that Poetry can't do anything with [project] when there is no [tool.poetry] in pyproject.toml.
So my questions are:
1. What is the difference between these two?
2. Should I have only one, or both at the same time, in my pyproject.toml? And if I should keep both, what should contain what?
3. If there should be only [tool.poetry], do I need to follow the same naming rules as for [project] sections? E.g. will [project.urls] be renamed to [tool.poetry.urls]?
4. What is best for future publishing on PyPI? Or is there no difference?
5. Is changing [build-system] from poetry-core to setuptools a good idea? Or should I keep poetry-core?
1. What is the difference between these two?
The [project] section is standardized (also known as PEP 621). But Poetry predates this standard, so it started with its own section, [tool.poetry]. Poetry is planning to add support for the standardized [project] (see python-poetry/poetry/issues/3332 and python-poetry/roadmap/issues/3), but it takes time.
The differences between the two are quite small, they are basically different notations for the same package metadata.
2. Should I have only one, or both at the same time, in my pyproject.toml? And if I should keep both, what should contain what?
You should have only one. You have to choose a build back-end. If your build back-end is poetry-core then you need the [tool.poetry] section. If you choose a build back-end that requires [project] (which is the case of setuptools), then that is what you should have.
3. If there should be only [tool.poetry], do I need to follow the same naming rules as for [project] sections? Will [project.urls] be renamed to [tool.poetry.urls]?
The two notations are not exactly one-to-one equivalents; there are some differences. Follow Poetry's documentation if you use Poetry, or the [project] specification if you use something else (setuptools, etc.).
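For illustration, here is the same metadata in both notations (a sketch only; note, for example, that Poetry expects authors as "Name <email>" strings where [project] uses inline tables, which is one of the reasons the mapping is not mechanical):

    [project]
    authors = [
        { name = "Example Author", email = "author@example.com" },
    ]

    [project.urls]
    "Bug Tracker" = "https://example.com/issues"

versus:

    [tool.poetry]
    authors = ["Example Author <author@example.com>"]

    [tool.poetry.urls]
    "Bug Tracker" = "https://example.com/issues"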
4. What is best for future publishing on PyPI? Or is there no difference?
There is not much difference. You could argue that choosing a build back-end that follows the [project] standard is better, but that is really not what you should base your choice on; there are other, more important criteria.
For example:
https://sinoroc.gitlab.io/kb/python/packaging_tools_comparisons.html#development-workflow-tools
https://sinoroc.gitlab.io/kb/python/packaging_tools_comparisons.html#build-back-ends
5. Is changing [build-system] from poetry-core to setuptools a good idea? Or should I keep poetry-core?
Poetry, the "development workflow tool", does not allow using any build back-end other than poetry-core. So if you want to keep using Poetry for your project, you have no choice but to keep poetry-core as the build back-end.
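For reference, the back-end choice lives in the [build-system] table; these are the usual declarations for the two back-ends discussed here (version pins are illustrative):

    [build-system]
    requires = ["poetry-core"]
    build-backend = "poetry.core.masonry.api"

    # or, for setuptools:
    [build-system]
    requires = ["setuptools>=61.0"]
    build-backend = "setuptools.build_meta"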
The [project] section is effectively mandatory in pyproject.toml: if it is missing, the build back-end (defined in the [build-system] section) has to provide all of its keys dynamically. I guess that's exactly what Poetry does.
From the documentation:
The keys defined in this specification MUST be in a table named [project] in pyproject.toml. No tools may add keys to this table which are not defined by this specification. For tools wishing to store their own settings in pyproject.toml, they may use the [tool] table as defined in the build dependency declaration specification. The lack of a [project] table implicitly means the build back-end will dynamically provide all keys.
So you don't need the [project] table while you are using Poetry. If you change the build system, you must convert your pyproject.toml to be PEP 621 compliant.

Generate java source code under my project's source code package

I have my annotation processor:
public class MyAnnotationProcessor extends AbstractProcessor {
    ...
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // Here I deal with the annotated element
        ...
        // use JavaPoet to generate the Java source file
        TypeSpec generatedClazz = generate_code();
        JavaFile javaFile = JavaFile.builder("com.my.foo", generatedClazz).build();
        javaFile.writeTo(filer);
    }
}
After processing the annotated element in the above process callback, I use JavaPoet to generate Java source code and create the Java file for it. When I build my project, everything works except that the generated Java source file by default goes to build/generated/sources/myApp/com/my/foo. How can I make the generated Java file land in my project's source location, src/main/java/com/my/foo?
My gradle build:
plugins {
    id 'java'
}

group 'com.my.app'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'junit:junit:4.12'
    implementation 'com.squareup:javapoet:1.11.1'
    implementation 'com.google.guava:guava:28.1-jre'
}
The bad news: annotation processors can't do this - the way their rounds work means it doesn't make sense to generate sources into the same directory where the "actual" sources live, since those generated sources would be treated as inputs the next time the annotation processor runs.
Good news: JavaPoet is agnostic of how you actually invoke it, so you can just write a simple main() that does the code generation, and either ask your IDE to invoke it when it builds or attach it to your Gradle build. Note that if you plan on manually editing the sources after they are generated, you probably don't want even this to happen automatically, since you likely intend your manual changes to be kept rather than overwritten on each build.
The JavaFile.writeTo(...) method has several overloads, and only one of them takes the annotation processor Filer. Using the Filer has some advantages - it is very clear where you intend the class to be written - but JavaFile.writeTo(File directory) is also meant to be used in this way. You don't pass it the actual file where you want MyClass.java to end up, just the source directory you want to write to. In your case, this would be roughly javaFile.writeTo(new File("myProject/src/main/java")).
You probably still should parameterize how this main() is invoked, so that it knows what inputs to use, how to find your existing sources, etc. On the other hand, if your generate_code() doesn't need any existing sources from the same project to run, this should be quite straightforward.
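A minimal sketch of such a standalone generator (assuming your generate_code() logic is callable from ordinary code; the class name and paths are illustrative):

    import com.squareup.javapoet.JavaFile;
    import com.squareup.javapoet.TypeSpec;
    import java.io.File;
    import java.io.IOException;

    public class GenerateSources {
        public static void main(String[] args) throws IOException {
            // Build the type exactly as in the processor
            TypeSpec generatedClazz = generate_code();
            JavaFile javaFile = JavaFile.builder("com.my.foo", generatedClazz).build();
            // Pass the source root, not the target file; JavaPoet creates
            // the com/my/foo package directories underneath it.
            javaFile.writeTo(new File("src/main/java"));
        }
    }

You could then wire this into the build with a JavaExec task (and make compileJava depend on it), or run it manually whenever the generator's inputs change.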
Not sure about Gradle, but with Maven you can define the generated-sources directory using the below tag in the maven-compiler-plugin.
<generatedSourcesDirectory>
    ${project.basedir}/src/main/java
</generatedSourcesDirectory>
For a complete example, check the link below.
https://www.thetechnojournals.com/2019/12/annotation-processor-to-generate-dto.html
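For context, that tag sits inside the plugin's <configuration> block; a minimal sketch (the plugin version is illustrative):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
        <configuration>
            <generatedSourcesDirectory>${project.basedir}/src/main/java</generatedSourcesDirectory>
        </configuration>
    </plugin>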

String constants for NetBeans project types

I am developing a plugin for NetBeans 8.0 and I created a LookupProvider which is registered like that:
@LookupProvider.Registration(projectType = {
    "org-netbeans-modules-ant-freeform",
    "org-netbeans-modules-j2ee-archiveproject",
    "org-netbeans-modules-j2ee-clientproject",
    "org-netbeans-modules-j2ee-earproject",
    "org-netbeans-modules-j2ee-ejbjarproject",
    "org-netbeans-modules-java-j2seproject",
    "org-netbeans-modules-maven",
    "org-netbeans-modules-web-clientproject",
    "org-netbeans-modules-web-project"
})
I would like to know whether it is possible to reference the project types from a constant (one already defined by the NetBeans Platform), or do I really have to declare them as strings (like "org-netbeans-modules-web-clientproject")?
I believe there are constants for these, but the question is whether you really want to depend on them. Oftentimes the constants are hidden in the project type's own module, which doesn't provide API packages or provides them only to friends. And typically your own primary dependency is on the interface that you implement and put into the lookup. There could be some sort of master list in a public package somewhere, but that could always just list a subset of the project types. Also please note that for Maven there can effectively be an unlimited number of such constants, since we also support registering your LookupProvider for a given Maven packaging type.
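If the goal is just to avoid repeating the string literals, one option is to keep your own constants class, since annotation attributes accept compile-time String constants (a sketch; ProjectTypes is your own class, not platform API):

    public final class ProjectTypes {
        private ProjectTypes() {}
        public static final String MAVEN = "org-netbeans-modules-maven";
        public static final String J2SE = "org-netbeans-modules-java-j2seproject";
    }

    @LookupProvider.Registration(projectType = { ProjectTypes.MAVEN, ProjectTypes.J2SE })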

How can I declare OR-Dependency in smart.json for Meteorite?

You define the packages your package depends on in smart.json, for example like this:
{
    [...],
    "packages": {
        "package1": {},
        "package2": {}
    }
}
This means my package depends on package1 AND package2. Is it possible to declare my package dependent on package1 OR package2?
No, I would be highly surprised if there were such a way to include packages. If you really want to depend on one of two packages, you are going to have to implement that in your package code (you would depend on both packages and your logic would have to choose which package to use).
The next best thing I can think of is editing the package.js file, which allows you to create a weak dependency:
It is possible to create weak dependencies between packages. If package A has a weak dependency on package B, it means that including A in an app does not force B to be included too — but, if B is included, say by the app developer or by another package, then B will load before A. You can use this to make packages that optionally integrate with or enhance other packages if those packages are present. To create a weak dependency, pass {weak: true} as the third argument to api.use. When you weakly depend on a package you don't see its exports. You can detect if the possibly-present weakly-depended-on package is there by seeing if Package.foo exists, and get its exports from the same place.
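A sketch of what that looks like in package.js (assuming the Meteor package API of that era; the package name follows the example above):

    Package.on_use(function (api) {
        // Weak dependency: package1 is not pulled in automatically, but if
        // something else includes it, it will load before this package.
        api.use('package1', 'client', { weak: true });
    });

At runtime you can then detect whether the weakly-depended-on package is present:

    if (Package.package1) {
        // package1 is loaded; use its exports via Package.package1
    }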
Maybe there is an alternative solution; care to elaborate on why you want your package dependencies to be like this?
I had the same problem with one of the packages I have developed. What I did was simply NOT indicate any dependency in the smart.json file and let the user decide which package he/she wants to use, and I have mentioned this clearly in the package's getting-started guide.
In my case it was a bootstrap3 package.

What is correct way to define tasks in Gradle plugin?

I'm trying to create my first Gradle plugin.
1. I add an extension for properties: project.extensions.create("abc", AbcExtension)
2. I define a copy task. When I define the task the following way:
project.task("abcTask", type: Copy) {
    from project.abc.fromPath
    into project.abc.intoPath
}
project.abc.fromPath equals the AbcExtension.fromPath default value - it doesn't read the values set in build.gradle.
When I define the task the following way:
project.task("abcTask", type: Copy) << {
    from project.abc.fromPath
    into project.abc.intoPath
}
it always prints UP-TO-DATE and doesn't run the task.
Please explain this behaviour and tell me the correct way to define tasks in Gradle plugins (with type and dependsOn functionality).
Plugins have to defer every read of a mutable build model value (i.e. anything that can be set from a build script) until at least the end of the configuration phase. There are several ways to achieve this goal. Among them are:
Using a Gradle API that accepts closures as values (e.g. Copy.from)
Using callbacks like project.afterEvaluate {} or gradle.projectsEvaluated {}
Using the convention mapping mechanism (note that this is not considered a public feature)
Choosing the best option for the job at hand requires some expertise. (It may help to study some of the plugins in the Gradle codebase.) In your case, I might do the following:
project.task("abcTask", type: Copy) {
    from { project.abc.fromPath }
    into { project.abc.intoPath }
}
Your << version doesn't work because it configures the Copy task too late: from and into are only set during execution, so at the start of execution the task appears to have no inputs and is skipped as UP-TO-DATE. Generally speaking, all configuration should take place in the configuration phase, rather than the execution phase. You can learn more about Gradle build phases in the Gradle User Guide.
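For completeness, a sketch of the afterEvaluate variant mentioned above - the effect is the same, in that the extension values are read only after the build script has been evaluated:

    def abcTask = project.task("abcTask", type: Copy)
    project.afterEvaluate {
        abcTask.from project.abc.fromPath
        abcTask.into project.abc.intoPath
    }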