Generate Java source code under my project's source package - annotation-processing

I have my annotation processor:
public class MyAnnotationProcessor extends AbstractProcessor {
    ...
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // Here I deal with the annotated elements
        ...
        // use JavaPoet to generate the Java source file
        TypeSpec generatedClazz = generate_code();
        JavaFile javaFile = JavaFile.builder("com.my.foo", generatedClazz).build();
        javaFile.writeTo(filer);
        return true;
    }
}
After processing the annotated elements in the process callback above, I use JavaPoet to generate the Java source code and create the Java file for it. When I build my project, everything works, except that the generated Java source file goes to build/generated/sources/myApp/com/my/foo by default. How can I make the generated Java file end up in my project's source location, src/main/java/com/my/foo?
My gradle build:
plugins {
    id 'java'
}

group 'com.my.app'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'junit:junit:4.12'
    implementation 'com.squareup:javapoet:1.11.1'
    implementation 'com.google.guava:guava:28.1-jre'
}

The bad news: annotation processors can't do this - the nature of how their rounds work means that it doesn't make sense to generate sources into the same directory where "actual" sources live, since those generated sources would be treated as inputs the next time the annotation processor runs.
The good news: JavaPoet is agnostic of how you actually invoke it, so you can just write a simple main() that does the code generation, and either ask your IDE to invoke it when it builds or attach it to your Gradle build. Note that if you plan on manually editing the sources after they are generated, you probably don't want the generation to run on every build, since you likely intend your manual changes to be kept rather than overwritten each time you build.
The JavaFile.writeTo(...) method has several overloads, and only one of them takes the annotation processor's Filer. Using the Filer has some advantages - it is very clear where you intend the class to be written - but JavaFile.writeTo(File directory) is also meant to be used this way. You don't pass it the actual file where you want MyClass.java to end up, just the source directory you want to write to. In your case, this would be roughly javaFile.writeTo(new File("myProject/src/main/java")).
You should probably still parameterize how you invoke this main() so that it knows what inputs to use, how to find your existing sources, and so on. On the other hand, if your generate_code() doesn't need any existing sources from the same project to run, this should be quite straightforward.
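For illustration, a minimal standalone generator could look like the sketch below. The CodeGenerator and Hello names and the src/main/java path are assumptions based on the question, not part of JavaPoet; your generate_code() would go where the TypeSpec is built. You could then run it from the IDE or wire it into the build (e.g. as a JavaExec task).

// Standalone generator sketch - not an annotation processor, just a plain main().
import com.squareup.javapoet.JavaFile;
import com.squareup.javapoet.MethodSpec;
import com.squareup.javapoet.TypeSpec;

import javax.lang.model.element.Modifier;
import java.io.File;
import java.io.IOException;

public class CodeGenerator {
    public static void main(String[] args) throws IOException {
        // Build whatever TypeSpec you need; this trivial class stands in for generate_code().
        MethodSpec hello = MethodSpec.methodBuilder("hello")
                .addModifiers(Modifier.PUBLIC, Modifier.STATIC)
                .returns(String.class)
                .addStatement("return $S", "hello")
                .build();
        TypeSpec generatedClazz = TypeSpec.classBuilder("Hello")
                .addModifiers(Modifier.PUBLIC, Modifier.FINAL)
                .addMethod(hello)
                .build();
        JavaFile javaFile = JavaFile.builder("com.my.foo", generatedClazz).build();
        // Pass the source root, not the target file; JavaPoet creates the package folders itself.
        javaFile.writeTo(new File("src/main/java"));
    }
}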

Not sure about Gradle, but with Maven you can define the generated-sources directory using the tag below in the maven-compiler-plugin configuration.
<generatedSourcesDirectory>
    ${project.basedir}/src/main/java
</generatedSourcesDirectory>
For a complete example, check the link below.
https://www.thetechnojournals.com/2019/12/annotation-processor-to-generate-dto.html

Related

Using a custom class loader for a module dependency in SBT

I have a multi-module SBT build consisting of api, core and third-party. The structure is roughly this:
api
|- core
|- third-party
The code for third-party implements api and is copied verbatim from somewhere else, so I don't really want to touch it.
Because of the way third-party is implemented (heavy use of singletons), I can't just have core depend on third-party. Specifically, I only need to use it via the api, but I need to have multiple, isolated copies of third-party at runtime. (This allows me to have multiple "singletons" at the same time.)
If I'm running outside of my SBT build, I just do this:
def createInstance(): foo.bar.API = {
  val loader = new java.net.URLClassLoader(Array(new java.io.File("path/to/third-party.jar").toURI.toURL), parent)
  loader.loadClass("foo.bar.Impl").asSubclass(classOf[foo.bar.API]).newInstance()
}
But the problem is that I don't know how to figure out at runtime what I should give as an argument to URLClassLoader if I'm running via sbt core/run.
This should work, though I haven't fully tested it with your setup. The basic idea is to let sbt write the classpath into a file that you can use at runtime. sbt-buildinfo already provides a good basis for this, so I'm going to use it here, but you could also extract just the relevant part and skip the plugin.
Add this to your project definition:
lazy val core = project enablePlugins BuildInfoPlugin settings (
  buildInfoKeys := Seq(BuildInfoKey.map(exportedProducts in (`third-party`, Runtime)) {
    case (_, classFiles) ⇒ ("thirdParty", classFiles.map(_.data.toURI.toURL))
  })
  ...
At runtime, use this:
def createInstance(): foo.bar.API = {
  val loader = new java.net.URLClassLoader(buildinfo.BuildInfo.thirdParty.toArray, parent)
  loader.loadClass("foo.bar.Impl").asSubclass(classOf[foo.bar.API]).newInstance()
}
exportedProducts only contains the compiled classes of the project (e.g. .../target/scala-2.10/classes/). Depending on your setup, you might want to use fullClasspath instead (which also contains the libraryDependencies and dependent projects) or any other classpath-related key.

How can I use JDT compiler programmatically?

I use JDT to compile my Java classes. BatchCompiler returns a string, but I need an array of problems/errors with their column and row information. compiler.compile(units); prints the errors to its PrintWriter, and compiler.resolve(unit) does exactly what I want, but it can compile only one Java file.
I created a compiler object in this way:
Compiler compiler = new Compiler(env, DefaultErrorHandlingPolicies.exitAfterAllProblems(), new CompilerOptions(), requestor, new DefaultProblemFactory());
And I create CompilationUnits that contain the filenames and file contents for the compiler.
CompilationUnit[] units = project.toCompilationUnit();
AFAIK, there are two ways to compile. One of them is the compile(units) method, which returns void and prints errors and problems to its PrintWriter; because it doesn't return column information, it's not useful for me. The other way is the resolve(unit) method, but it can work with only one CompilationUnit.
compiler.resolve(units[index], true, true, true);
Does anyone know how I can use JDT compiler programmatically to compile multiple files?
org.eclipse.jdt.internal.compiler.Compiler is internal API. According to the JavaDoc of its resolve method: "Internal API used to resolve a given compilation unit. Can run a subset of the compilation process."
Instead, the official way of compiling Java files and determining the compilation problems is described here. Basically, you create a Java project and invoke Eclipse's builder on it, then query the project's Java problem markers.
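A rough sketch of that approach, assuming you are running inside an Eclipse workspace context and already have an IProject configured as a Java project (the ProblemCollector class name is a placeholder):

// Sketch only: build the project with Eclipse's builder, then read the Java problem markers.
import org.eclipse.core.resources.IMarker;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.IncrementalProjectBuilder;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.NullProgressMonitor;
import org.eclipse.jdt.core.IJavaModelMarker;

public final class ProblemCollector {
    public static void printProblems(IProject project) throws CoreException {
        // Let the JDT builder compile everything in the project.
        project.build(IncrementalProjectBuilder.FULL_BUILD, new NullProgressMonitor());

        // Query the Java problem markers the build created.
        IMarker[] markers = project.findMarkers(
                IJavaModelMarker.JAVA_MODEL_PROBLEM_MARKER, true, IResource.DEPTH_INFINITE);
        for (IMarker marker : markers) {
            Object message = marker.getAttribute(IMarker.MESSAGE);
            Object line = marker.getAttribute(IMarker.LINE_NUMBER);
            Object start = marker.getAttribute(IMarker.CHAR_START); // character offset, not column
            System.out.println(marker.getResource().getName()
                    + " line " + line + " (offset " + start + "): " + message);
        }
    }
}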

Maven plugin loading classes

I have an application which has legacy Struts actions extending org.apache.struts.StrutsActions. I would like to be sure that all my classes which extend StrutsActions have a custom annotation.
In order to provide this I have written a small Maven enforcer rule to validate my requirement. However, I don't know how to load my classes in my mojo to validate them.
Actually, I have done something not so fancy: injecting outputDirectory and, with a custom class loader, recursively loading all classes from my build folder.
Thanks
All classes? What do you mean by that? Maybe you mean target/classes/** (which is the default output location for classes), or maybe you mean a list of multiple directory tree locations?
Can you better explain what your Mojo does and which phase and goal you want it to bind to?
I think maybe you are thinking about how to apply Maven's build cycle to your project incorrectly. Could you explain better what your plugin does - maybe it does "packaging" work?
But if I understand you correctly, you want the plugin's execution to pick up the additional classpath entry for target/classes/**, so it can load code and resources from the project itself to change some dynamic behaviour inside the maven-plugin?
The default way to do this is <dependency>, however of course this requires a fixed unit.
Other plugins that allow for this behavior (like maven-antrun-plugin) provide a mechanism to change the classpath inside the Mojo, using something from the <configuration> section of their pom.xml to do it. It is not clear whether the plugin you are using is a standard Maven one or one you have written.
Validation and packaging - that is a valid use case. But I question why it has to be on the "classpath"? I would guess you are binding to the process-classes phase.
I.e. the classpath is for the purpose of providing code/resources for the runtime to execute, but in your case you have an input directory rather than a classpath requirement.
It is possible in a Mojo to set up a directory scanner on an input directory for **/*.class files and then (using some library) open each file and inspect the annotation without loading it - see the sketch after this answer.
This is also a good kind of separation between unreliable input data and the consistent behaviour of the plugin code itself. What happens if a project decides it wants to implement the same package and/or class as used in the implementation of the plugin itself?
UPDATE: If you really are loading the classes you are checking into the JVM from your Mojo, then at least implement your own ClassLoader to do it. This is not necessarily a simple problem to solve. You make this ClassLoader find things specified in the configuration in the input directory.
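For illustration, a sketch of inspecting the compiled .class files without loading them, using the ASM library - the annotation descriptor, class names and target/classes path are assumptions for the example, not part of the original answer:

// Sketch: check each compiled class for an annotation with ASM instead of loading it into the JVM.
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.tree.AnnotationNode;
import org.objectweb.asm.tree.ClassNode;

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public final class AnnotationScanner {
    static boolean hasAnnotation(Path classFile, String annotationDesc) throws IOException {
        try (InputStream in = Files.newInputStream(classFile)) {
            ClassNode node = new ClassNode();
            new ClassReader(in).accept(node, ClassReader.SKIP_CODE);
            if (node.visibleAnnotations != null) {
                for (AnnotationNode ann : node.visibleAnnotations) {
                    if (annotationDesc.equals(ann.desc)) {
                        return true;
                    }
                }
            }
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Walk the build output directory and report classes missing the (placeholder) annotation.
        try (Stream<Path> files = Files.walk(Paths.get("target/classes"))) {
            files.filter(p -> p.toString().endsWith(".class"))
                 .forEach(p -> {
                     try {
                         if (!hasAnnotation(p, "Lcom/my/app/MyAnnotation;")) {
                             System.out.println("Missing annotation: " + p);
                         }
                     } catch (IOException e) {
                         throw new RuntimeException(e);
                     }
                 });
        }
    }
}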
I have done it with the help of the Reflections library:
<dependency>
    <groupId>org.reflections</groupId>
    <artifactId>reflections</artifactId>
    <version>0.9.5</version>
</dependency>
My implementation is like this:
public void execute(EnforcerRuleHelper helper) throws EnforcerRuleException {
    URL url = getURL(helper.evaluate("${project.build.outputDirectory}"));
    Predicate<String> filter = new FilterBuilder().include(getTargetSuperTypeForSubtypes());
    Predicate<String> filter2 = new FilterBuilder().include(getMustHaveAnnotation());
    Reflections reflections = new Reflections(new ConfigurationBuilder()
            .setScanners(
                    new TypeAnnotationsScanner().filterResultsBy(filter2),
                    new SubTypesScanner().filterResultsBy(filter))
            .setUrls(url));
    validate(reflections);
}
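The validate step isn't shown in the original answer; a hypothetical shape for it, assuming the Struts Action base class and the custom annotation (here MyMandatoryAnnotation, a placeholder) are on the plugin's classpath, could be:

// Hypothetical validate(): every subtype of the Struts Action base class must carry the annotation.
// Needs java.util.Set, org.reflections.Reflections and the annotation type on the import path.
private void validate(Reflections reflections) throws EnforcerRuleException {
    Set<Class<?>> annotated = reflections.getTypesAnnotatedWith(MyMandatoryAnnotation.class);
    for (Class<? extends org.apache.struts.action.Action> action :
            reflections.getSubTypesOf(org.apache.struts.action.Action.class)) {
        if (!annotated.contains(action)) {
            throw new EnforcerRuleException(action.getName() + " is missing @MyMandatoryAnnotation");
        }
    }
}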

Error with Groovy AST transformations when cleaning project in Eclipse

I'm trying to work through Groovy's Implementing Local AST Transformations tutorial, but whenever I clean my project I get this error in each file that has the @WithLogging annotation in it:
Groovy:Could not find class for Transformation Processor AC.LoggingASTTransformation declared by AC.WithLogging
So you have a package named "AC" that contains both the "WithLogging.groovy" and "LoggingASTTransformation.groovy" classes? Does it also contain any classes that use the "WithLogging" annotation?
If so, I'd suggest you move the class(es) that use your annotation to a location outside of the annotation-defining package (the default package will suffice, for diagnostic purposes) - order of compilation matters with transformations. See this post on the Groovy users mailing list for more on that.
Also try changing the annotation from @WithLogging to @AC.WithLogging.
As far as cleaning with Eclipse is concerned, I had a similar issue and found that I had to make a trivial modification after a clean to any file that contained my annotation, i.e. add a space somewhere and save the file. This should rebuild everything properly.

How to configure lazy or incremental build in general with Ant?

The Java compiler supports incremental builds, and so does the javac Ant task. But most other processes don't.
Considering build processes: they transform some set of files (source) into another set of files (target).
I can distinguish two cases here:
The transformer cannot take a subset of the source files, only the whole set. Here we can only do a lazy build: if no source files were modified, we skip the processing.
The transformer can take a subset of the source files and produce a partial result - an incremental build.
What Ant built-in features, third-party extensions or other tools are there to implement lazy and incremental builds?
Can you provide some common buildfile examples?
I am particularly interested in making this work with the GWT compiler.
The uptodate task is Ant's generic solution to this problem. It's flexible enough to work in most situations where lazy or incremental compilation is desirable.
I had the same problem as you: I have a GWT module as part of my code, and I don't want to pay the (hefty!) cost of recompiling it when I don't need to. The solution in my case looked something like this:
<uptodate property="gwtCompile.mymodule.notRequired"
          targetfile="www/com.example.MyGwtModule/com.example.MyGwtModule.nocache.js">
    <srcfiles dir="src" includes="**"/>
</uptodate>

<target name="compile-mymodule-gwt" unless="gwtCompile.mymodule.notRequired">
    <compile-gwt-module module="com.example.MyGwtModule"/>
</target>
Related to GWT: it's not possible to do incremental builds, because the GWT compiler looks at all the source code at once and optimizes and inlines code. This means code that wasn't changed could be evaluated differently; for example, if you start using a method from a class that wasn't changed, that method was left out in the previous compilation step but now needs to be compiled in.