What is the correct way to define tasks in a Gradle plugin?

I'm trying to create my first Gradle plugin.
1. Add an extension for properties: project.extensions.create("abc", AbcExtension)
2. Define a copy task. When I define the task the following way:
project.task("abcTask", type: Copy) {
    from project.abc.fromPath
    into project.abc.intoPath
}
project.abc.fromPath equals the default AbcExtension.fromPath value; it doesn't read the values set in build.gradle.
When I define the task the following way:
project.task("abcTask", type: Copy) << {
    from project.abc.fromPath
    into project.abc.intoPath
}
it always prints UP-TO-DATE and doesn't run the task.
Please explain this behaviour and tell me the correct way to define tasks in Gradle plugins (with type and dependsOn functionality).

Plugins have to defer every read of a mutable build model value (i.e. anything that can be set from a build script) until at least the end of the configuration phase. There are several ways to achieve this goal. Among them are:
Using a Gradle API that accepts closures as values (e.g. Copy.from)
Using callbacks like project.afterEvaluate {} or gradle.projectsEvaluated {}
Using the convention mapping mechanism (note that this is not considered a public feature)
Choosing the best option for the job at hand requires some expertise. (It may help to study some of the plugins in the Gradle codebase.) In your case, I might do the following:
project.task("abcTask", type: Copy) {
    from { project.abc.fromPath }
    into { project.abc.intoPath }
}
Your << version doesn't work because it configures the Copy task too late. Generally speaking, all configuration should take place in the configuration phase, rather than the execution phase. You can learn more about Gradle build phases in the Gradle User Guide.
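When no closure-accepting API is available, the afterEvaluate callback mentioned above can serve the same purpose. A minimal sketch of a plugin using it, assuming the AbcExtension and task names from the question:

```groovy
import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.tasks.Copy

class AbcPlugin implements Plugin<Project> {
    void apply(Project project) {
        def ext = project.extensions.create("abc", AbcExtension)
        def task = project.task("abcTask", type: Copy)
        // Defer reading the extension until the build script has been evaluated,
        // so values assigned in build.gradle are visible here.
        project.afterEvaluate {
            task.from ext.fromPath
            task.into ext.intoPath
        }
    }
}
```

Note that this still configures the task in the configuration phase (just at its end), so up-to-date checking keeps working.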

Related

Build a macOS executable from an iOS app's source code

I have an iOS app with some business logic. I would like to create an executable which would expose some calculations from the CLI, so that I don't have to use the app.
Unfortunately, the code with the business logic that I want to expose depends on lots of other parts of the code, eventually pulling in almost all of it. Even worse, it depends on some dependencies that are not available on macOS (UIKit, or DJISDK via pods).
I just want to write a main.swift, parse the arguments, and use some functions from the codebase. What is the simplest way to do that?
I don't mind if it's ugly (like pulling the whole of UIKit into the executable) as long as it doesn't take too much time to implement.
I have two ideas:
remove all dependencies, optionally defining "dummy" classes compiled only for this target for some dependencies. That requires changing a lot of code, but seems cleaner.
find a way to pull the whole app into the new target. For example, I have a UnitTests target which does that and can run on a Mac (though an app window appears when the tests run). But I have no idea how to do that.
Any help or guidance to good documentation would be greatly appreciated :)
I did my best to remove all unneeded dependencies, but a lot was left that couldn't be removed easily. I ended up using "dummy" classes to be able to compile without providing any logic.
Though it's not super elegant, it's a nice way to cut the dependency chain, and it will let me remove the dependencies over time while still being able to compile.
Example:
// In common source file
class Business {
    func usefulComputation() -> String {
        ...
    }
    func unneededFunction() -> UselessObject {
        ...
    }
}

// In my CLI-target project
// file `DummyUselessObject.swift`
class UselessObject {
    // add any function called from source file here, without putting any logic
}

// `main.swift`
let business = Business()
print(business.usefulComputation())

Generate Java source code under my project's source package

I have my annotation processor:
public class MyAnnotationProcessor extends AbstractProcessor {
    ...
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // Here I deal with the annotated element
        ...
        // use JavaPoet to generate the Java source file
        TypeSpec generatedClazz = generate_code();
        JavaFile javaFile = JavaFile.builder("com.my.foo", generatedClazz).build();
        javaFile.writeTo(filer);
        return true;
    }
}
After processing the annotated elements in the process callback above, I use JavaPoet to generate the Java source code and create the file for it. When I build my project, everything works, except that the generated source file goes to build/generated/sources/myApp/com/my/foo by default. How can I make the generated Java file end up in my project's source location, src/main/java/com/my/foo?
My gradle build:
plugins {
    id 'java'
}

group 'com.my.app'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'junit:junit:4.12'
    implementation 'com.squareup:javapoet:1.11.1'
    implementation 'com.google.guava:guava:28.1-jre'
}
The bad news: Annotation processors can't do this - the nature of how their rounds work mean that it isn't going to make sense to generate sources in the same directory where "actual" sources live, since those generated sources will be treated as inputs the next time the annotation processor runs.
Good news: JavaPoet is agnostic of how you actually invoke it, so you can just write a simple main() that does the code generation, and either ask your IDE to invoke it when it builds, or attach it to your gradle build. If you plan on manually editing the sources after they are generated, you probably don't want this to happen, since you likely intend your manual changes to be kept instead of being overwritten each time you build.
The JavaFile.writeTo(...) method has several overloads, and only one of them takes the annotation processor Filer. Using the Filer has some advantages - it is very clear where you intend the class to be written - but JavaFile.writeTo(File directory) is also meant to be used this way. You don't pass it the actual file where you want MyClass.java to end up, just the source directory you want to write to. In your case, this would be roughly javaFile.writeTo(new File("myProject/src/main/java")).
You probably still should parameterize how to invoke this main, so that it knows what inputs to use, how to understand your existing sources, etc. On the other hand, if your generate_code() doesn't need any existing sources from the same project to run, this should be quite straightforward.
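To make the shape of such a standalone generator concrete, here is a sketch using plain file I/O (class and package names are illustrative; with JavaPoet you would replace the string building with javaFile.writeTo(new File(outDir))):

```java
import java.io.IOException;
import java.nio.file.*;

// A standalone generator run from the IDE or a Gradle JavaExec task,
// instead of from an annotation processor round.
public class GenerateSources {
    static Path generate(Path srcRoot) throws IOException {
        String pkg = "com.my.foo";
        String source = "package " + pkg + ";\n\n"
                + "public class GeneratedFoo {\n"
                + "    public static String greet() { return \"hello\"; }\n"
                + "}\n";
        // Mirror the package layout under the source root.
        Path dir = srcRoot.resolve(pkg.replace('.', '/'));
        Files.createDirectories(dir);
        Path file = dir.resolve("GeneratedFoo.java");
        Files.writeString(file, source); // overwrites on each run
        return file;
    }

    public static void main(String[] args) throws IOException {
        // Write into the project's source tree, as the question asks.
        Path out = generate(Paths.get(args.length > 0 ? args[0] : "src/main/java"));
        System.out.println("Wrote " + out);
    }
}
```

Remember the caveat above: anything written into src/main/java becomes an ordinary input on the next build, so only do this if you treat the generated file as checked-in source.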
Not sure about Gradle, but with Maven you can define the generated-sources directory using the tag below in the maven-compiler-plugin.
<generatedSourcesDirectory>
${project.basedir}/src/main/java
</generatedSourcesDirectory>
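For context, this tag sits inside the plugin's configuration block; a sketch of the surrounding pom.xml fragment (plugin version omitted):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <generatedSourcesDirectory>${project.basedir}/src/main/java</generatedSourcesDirectory>
  </configuration>
</plugin>
```

The caveat from the first answer still applies: sources generated into src/main/java are treated as ordinary inputs on the next compilation.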
For a complete example, check the link below.
https://www.thetechnojournals.com/2019/12/annotation-processor-to-generate-dto.html

Qbs Depends Item to affect products list

Assuming I have a project with tons of sub-projects, and in most cases I only need to compile a few of them at once:
Project {
    property stringList plugins: ["plugin3", "plugin4"] // this is set externally
    references: plugins.map(function(p) { return "plugins/" + p + ".qbs" })
}
Assuming plugin3 depends on plugin1 and plugin2:
Depends { name: "plugin1" }
Depends { name: "plugin2" }
In that case, I would have to set the plugins property as:
plugins: ["plugin1", "plugin2", "plugin3", "plugin4"]
which is what I want to avoid. So the question is: is there a way to make sub-project dependencies automatically added as project references?
P.S. There is also the alternative of making all sub-projects present but conditionally disabled. Can I somehow make them enabled by a dependent sub-project?
references should pretty much always be a static list; there is no need to make it conditional just because you only want to compile a certain subset of your products at once.
Instead, what you'll want to do is run qbs with the -p option, which lets you specify the name of a particular product to build (e.g. qbs build -p plugin4). Using this option will build only that product and its dependencies, but not the rest of the products in your project.

Right way to evaluate sbt task from a simple method

I have a task which, depending on other settings, should determine whether to deploy my project to the production server; basically, I call publish if everything is OK. But as I understand it, if I pass the publish task as a dependency or call .value on it, it gets evaluated before the deploy task, which is wrong. So I have to somehow run publish later, from my method. I have the following structure:
val deploy: Initialize[...] = (...) map { (...) =>
    def innerMethod() = { ... } // <- here I need to run publish
}
The only way i know of is:
EvaluateTask(struct, publish in Deploy, state, projRef)
It works, but I need to depend on the buildStructure, state, and thisProjectRef settings, which I don't like. There is also an .evaluate method on tasks which expects some Setting[Scope], and I don't know where to get that. Are there any other ways to achieve similar logic?
Have you considered making it a command instead of a task? http://www.scala-sbt.org/release/docs/Extending/Commands.html
Settings may only depend on other settings; tasks may only depend on settings and other tasks; commands, however, can do whatever they want, basically. They're top-level constructs. A setting or task can't depend on a command, so you can't just use commands for everything, but it sounds like what you're trying to do is a top-level kind of thing.
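To illustrate, a minimal build.sbt sketch of such a command (the command name and the deploy condition are hypothetical):

```scala
// A command can inspect State and decide at execution time whether to
// run publish, something a task dependency cannot do.
commands += Command.command("deployIfReady") { state =>
  val ready = sys.env.get("DEPLOY_ENV").contains("production")
  if (ready) "publish" :: state // queue publish to run after this command
  else { state.log.info("Not deploying"); state }
}
```

Prepending "publish" to the State's remaining commands defers its evaluation until after the check has run, which is exactly the ordering the question asks for.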

How to configure lazy or incremental build in general with Ant?

The Java compiler provides incremental builds, and so does the javac Ant task. But most other processes don't.
Considering build processes, they transform some set of files (source) into another set of files (target).
I can distinguish two cases here:
The transformer cannot take a subset of the source files, only the whole set. Here we can only make the build lazy: if no source files were modified, we skip processing.
The transformer can take a subset of the source files and produce a partial result: an incremental build.
What Ant built-ins, third-party extensions, or other tools are there to implement lazy and incremental builds?
Can you provide some widespread buildfile examples?
I am particularly interested in making this work with the GWT compiler.
The uptodate task is Ant's generic solution to this problem. It's flexible enough to work in most situations where lazy or incremental compilation is desirable.
I had the same problem as you: I have a GWT module as part of my code, and I don't want to pay the (hefty!) cost of recompiling it when I don't need to. The solution in my case looked something like this:
<uptodate property="gwtCompile.mymodule.notRequired"
          targetfile="www/com.example.MyGwtModule/com.example.MyGwtModule.nocache.js">
    <srcfiles dir="src" includes="**"/>
</uptodate>

<target name="compile-mymodule-gwt" unless="gwtCompile.mymodule.notRequired">
    <compile-gwt-module module="com.example.MyGwtModule"/>
</target>
Regarding GWT, it's not possible to do incremental builds, because the GWT compiler looks at all the source code at once, optimizing and inlining it. This means code that wasn't changed could still be evaluated differently: for example, if you start using a method from an unchanged class, that method may have been left out of the previous compilation step but now needs to be compiled in.