How to create a custom annotation processor for Eclipse

I'm trying to create a custom annotation processor that generates code at compile time (as hibernate-jpamodelgen does). I've looked on the web and found custom annotation processors that work with Maven, but they do nothing when added to the Annotation Processing > Factory Path option. How can I create a processor that is compatible in this way? I have not found a tutorial that works.
My idea is to, for example, annotate an entity to automatically generate a base DTO, a base mapper, etc. that can be extended for use in the final code.
Thank you all

OK, I already found the problem. The tutorial I had found didn't specify that, in order for the compiler to apply the annotation processor, there must be a META-INF/services/javax.annotation.processing.Processor file that contains the fully qualified class name of the processor (or processors).
I created the file pointing to my processor class, generated the jar, added it to Annotation Processing > Factory Path, and everything worked correctly.
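For reference, a minimal sketch of that setup; every name below (package, processor class, annotation) is invented purely for illustration:

package com.example.processing;

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;

// Hypothetical processor; the annotation it handles is equally made up.
@SupportedAnnotationTypes("com.example.annotations.GenerateDto")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class DtoGeneratingProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // Generate sources here, e.g. via processingEnv.getFiler().createSourceFile(...).
        // Returning false does not claim the annotations, so later processors still see them
        // (relevant to the ordering caveat below).
        return false;
    }
}

and the META-INF/services/javax.annotation.processing.Processor file inside the jar contains just the fully qualified name:

com.example.processing.DtoGeneratingProcessor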
Just be careful to keep the processors in the correct order (for example, the Hibernate model generator claims the classes, so no further generation happens after it), and change the jar file name each time you want to replace the library (Eclipse seems to keep a cache). These two things gave me quite a headache.
Thanks all

Related

Acceleo: The generation failed to generate any file

First of all, I am new to Acceleo and the modeling features of Eclipse. What I am trying to do is just create a simple test file. So for starters, I created a main module:
[comment encoding = UTF-8 /]
[module generate('file:/C:/Users/maria/Documents/workspace/org.eclipse.acceleo.module.m2tTransformation/model/PSMMetamodel.ecore')]
[template public generateElement( aServicePSM : ServicePSM)]
[comment #main/]
[file ('test.java', false, 'UTF-8')]
Test
[/file]
[/template]
When I run this I get:
The generation failed to generate any file because there are no model elements that matches at least the type of the first parameter of one of your main templates.
The problem may be caused by a problem with the registration of your metamodel, please see the method named "registerPackages" in the Java launcher of your generator. It could also come from a missing [comment #main/] in the template used as the entry point of the generation.
Also, the URI I use is the nsURI attribute value I set on the root of the metamodel. I am sure that my input model does contain ServicePSM elements.
What am I doing wrong?
Thanks in advance.
This issue will arise in two cases:
1. You do not have an element of the proper type in your model
2. The metamodel cannot be resolved
From your message I think we can safely ignore 1. since it seems you have at least one ServicePSM in your model, so we need to address 2.
If you look at your module, you've declared it to generate against the metamodel file:/C:/Users/maria/Documents/workspace/org.eclipse.acceleo.module.m2tTransformation/model/PSMMetamodel.ecore. However, EMF rarely, if ever, uses this kind of URI to refer to its metamodels. If you open your actual model with the text editor (right click > Open With > Text Editor), you can see the URI that is actually used to reference the metamodel in the "xmlns" attributes at the start of the file.
For example, if I open a model that references OCL elements, I can see xmlns:ocl.ecore="http://www.eclipse.org/ocl/1.1.0/Ecore". You have to make sure you use the same URI in your module file as the one EMF uses in the model file; in this case, it would be http://www.eclipse.org/ocl/1.1.0/Ecore.
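If the URI in the module is correct but the metamodel still cannot be resolved when the generation is launched as a Java program, the registerPackages method mentioned in the error message is the place to register it explicitly. A rough sketch, assuming the generated EMF package interface is named PSMMetamodelPackage (the real name depends on your genmodel):

// Sketch only: this goes inside the Java launcher that Acceleo generated for the module.
// PSMMetamodelPackage is a hypothetical name for the EMF package interface generated
// from PSMMetamodel.ecore.
@Override
public void registerPackages(org.eclipse.emf.ecore.resource.ResourceSet resourceSet) {
    super.registerPackages(resourceSet);
    // Register the metamodel under its nsURI so the generator can resolve it.
    resourceSet.getPackageRegistry().put(PSMMetamodelPackage.eNS_URI, PSMMetamodelPackage.eINSTANCE);
}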

ELKI: Implementing a custom ResultHandler

I need to implement a custom ResultHandler but I am confused about how to actually integrate my custom class into the software package.
I have read this: http://elki.dbs.ifi.lmu.de/wiki/HowTo/InvokingELKIFromJava but my question is how are you meant to implement a custom result handler such that it shows up in the GUI?
The only way I can think of doing it is by extracting the elki.jar package and manually inserting my custom class into the source code, and then re-jarring the package. However I am fairly sure this is not the way it is meant to be done.
Also, in my ResultHandler I need to output all the rows to a single text file with the cluster each row belongs to. Any tips on how I can achieve this?
There are two questions in here.
1. In order to make your class instantiable by the UIs (both MiniGUI and command line), the classes must implement our Parameterization API. There are essentially two choices to make your class instantiable:
Add a public constructor without parameters (the UI won't know how to set your parameters!)
Add an inner static class Parameterizer that handles parameterization
2. In order to add your class to autocompletion (dropdown menu), the classes must be discovered by the MiniGUI/CLI/other UIs. ELKI uses two methods of discovery:
for .jar files, it reads the META-INF/elki/interfacename service files. This is a classic service-loader approach, except that we also allow ordering instances.
for directories only, ELKI will also scan for all .class files, and inspect them. This is mostly meant for development time, to avoid having to update the service files all the time. For performance reasons, we do not inspect the contents of .jar files; these are expected to use service files.
You do not need your class to be in the dropdown menu; you can always type the full class name. If that does not work, adding the name to the service file will not help either: it means ELKI either cannot find the class at all or cannot instantiate it.
There is also a tutorial on implementing a custom result handler, but it does not discuss how to add it to the menu. In "development mode" - when running from a folder of .class files - it will show up automatically.
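A rough skeleton of what such a handler can look like; the class and package names are invented, the ELKI import paths and the exact processNewResult signature are written from memory of the 0.5/0.6 era and have moved between releases, so check the interfaces of the version you build against:

// Sketch only: names and signatures follow ELKI 0.5/0.6 from memory and may differ in your release.
package com.example.elki;

import de.lmu.ifi.dbs.elki.result.HierarchicalResult;
import de.lmu.ifi.dbs.elki.result.Result;
import de.lmu.ifi.dbs.elki.result.ResultHandler;
import de.lmu.ifi.dbs.elki.utilities.optionhandling.AbstractParameterizer;
import de.lmu.ifi.dbs.elki.utilities.optionhandling.parameterization.Parameterization;

public class ClusterDumpResultHandler implements ResultHandler {

  @Override
  public void processNewResult(HierarchicalResult baseResult, Result newResult) {
    // Walk the result tree here and write each point with its cluster label to a file.
  }

  // Inner static Parameterizer so the MiniGUI / CLI can instantiate and configure the handler.
  public static class Parameterizer extends AbstractParameterizer {
    @Override
    protected void makeOptions(Parameterization config) {
      super.makeOptions(config);
      // Declare an output-file parameter etc. here.
    }

    @Override
    protected ClusterDumpResultHandler makeInstance() {
      return new ClusterDumpResultHandler();
    }
  }
}

When packaged as a jar, a META-INF/elki/de.lmu.ifi.dbs.elki.result.ResultHandler service file listing com.example.elki.ClusterDumpResultHandler would then make it appear in the dropdown.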

Strange behavior of Document createCDATASection method with Saxon (Maven Saxon-HE artifact 9.4)

I tried to use Saxon in place of the JDK's default implementation (Xalan, I guess) for XML transformation and XPath. In my code I am creating a CDATA node using the document.createCDATASection(data) method. The code looks like this:
CDATASection cdata = doc.createCDATASection("data");
Node valueNode = node.appendChild(doc.createElement("value"));
valueNode.appendChild(cdata);
Where node is some random node in my XML.
It works fine with the JDK's default implementation, and the resulting XML looks like:
<node>
<value><![CDATA[data]]></value>
</node>
The same code starts behaving strangely if I include the Saxon Maven artifact (note that it is just the inclusion; factory selection/instantiation is left at the default, as before), and all the CDATA nodes are treated as simple text nodes, i.e. the XML becomes:
<node>
<value>data</value>
</node>
which causes issues on retrieval, as that code specifically checks for CDATA sections, which in the latter case have been removed. I am not sure why this is happening (it looks like I have not used it correctly). I also tried excluding the Xerces artifacts from my POM (a transitive dependency of Saxon), but no luck. I also verified that the implementation classes for DocumentBuilderFactory etc. are picked up from the JDK itself. Experts, please help me if I am doing something wrong.
Thanks in advance.
I guess your application is probably doing a JAXP identity transformation from a DOMSource to a StreamResult in order to serialize the DOM. The Saxon implementation of the JAXP identity transformation uses the serialization rules of XSLT, which have the effect of dropping CDATA sections. This is perfectly conformant with JAXP, even if it isn't what the default JDK implementation does.
If you are dependent on the behaviour of a particular implementation of the JAXP identity transformer, then you shouldn't be writing your application to pick up whatever implementation happens to be lying around on the classpath; you should instantiate the implementation you want explicitly.
This can be difficult of course if the code that invokes the identity transform is something you didn't write yourself and can't easily change. In that case the best approach is to set the system property javax.xml.transform.TransformerFactory to select Xalan, and where you want to invoke Saxon, do it explicitly rather than relying on the JAXP factory search.
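For illustration, the two approaches could look roughly like this; the JDK-internal factory class name is vendor-specific and shown only as an example, and doc is the DOM Document from the snippet in the question:

// 1) Pin the default JAXP transformer for code you cannot change; the internal XSLTC
//    factory class name below is specific to Oracle/OpenJDK and may vary by vendor/version.
System.setProperty("javax.xml.transform.TransformerFactory",
        "com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl");

// 2) Where Saxon is actually wanted, instantiate it explicitly instead of relying on
//    the JAXP factory search.
javax.xml.transform.TransformerFactory saxonFactory =
        javax.xml.transform.TransformerFactory.newInstance("net.sf.saxon.TransformerFactoryImpl", null);
javax.xml.transform.Transformer identity = saxonFactory.newTransformer();
identity.transform(new javax.xml.transform.dom.DOMSource(doc),
        new javax.xml.transform.stream.StreamResult(System.out));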

Problems compiling routes after migrating to Play 2.1

After migrating to Play 2.1 I ran into a problem: the routes compiler stopped working for my routes file. It was completely fine with Play 2.0.4, but now I'm getting a build error and can't find any workaround for it.
In my project I'm using the cake pattern, so controller actions are visible not through <package>.<controller class>.<action> but through <package>.<component registry>.<controller instance>.<action>. The new Play routes compiler uses all action path components except the last two to form the package name used in managed sources (as far as I can tell from the code in https://github.com/playframework/Play20/blob/2.1.0/framework/src/routes-compiler/src/main/scala/play/router/RoutesCompiler.scala). In my case this leads to a situation where <package>.<component registry> is chosen as the package name, which results in an error during the build:
[error] server/target/scala-2.10/src_managed/main/com/grumpycats/mmmtg/componentsRegistry/routes.java:5: componentsRegistry is already defined as object componentsRegistry
[error] package com.grumpycats.mmmtg.componentsRegistry;
I made the sample project to demonstrate this problem: https://github.com/rmihael/play-2.1-routes-problem
Is it possible to work around this problem somehow without dropping the cake pattern for controllers? It's a pity that I can't proceed with Play 2.1 due to this problem.
Because of reputation I cannot post a comment.
The convention is that classes and objects start with upper case. This convention applies to pattern matching as well. Looking at a string, there seems to be no difference between a package object and a normal object (apart from the case). I am not sure how Play 2.1 handles things, which is why this is a comment rather than an answer.
You could try the new @ syntax in the router. That allows you to create controller instances from the Global class. You would still specify <package>.<controller class>.<action>, but in the Global you get the instance from somewhere else (for example a component registry).
You can find a bit of extra information under 'Managed Controller classes instantiation' here: http://www.playframework.com/documentation/2.1.0/Highlights
This demo project shows its usage: https://github.com/guillaumebort/play20-spring-demo
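Roughly, the pieces look like this; the controller and registry names are invented, and the sketch uses a Java Global for brevity, although the Scala GlobalSettings exposes the same getControllerInstance hook:

// conf/routes: prefixing the call with @ makes Play ask the Global for the instance, e.g.
//   GET   /plays   @controllers.PlaysController.list()

// app/Global.java
import play.GlobalSettings;

public class Global extends GlobalSettings {
    @Override
    public <A> A getControllerInstance(Class<A> controllerClass) throws Exception {
        // Fetch the controller from your component registry instead of letting Play
        // instantiate it; ComponentRegistry.lookup(...) is a hypothetical method.
        return ComponentRegistry.lookup(controllerClass);
    }
}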

Error with Groovy AST transformations when cleaning project in Eclipse

I'm trying to work through Groovy's Implementing Local AST Transformations tutorial, but whenever I clean my project I get this error in each file that has the @WithLogging annotation in it:
Groovy:Could not find class for Transformation Processor AC.LoggingASTTransformation declared by AC.WithLogging
So you have a package named "AC" that contains both "WithLogging.groovy" and "LoggingASTTransformation.groovy" classes? Does it also contain any classes that implement the "WithLogging" interface?
If so, I'd suggest you move the class(es) that use your annotation to a location outside of the annotation-defining package (the default package will suffice, for diagnostic purposes) - order of compilation matters with transformations. See this post on the Groovy users mailing list for more on that.
Also try changing the annotation from @WithLogging to @AC.WithLogging.
As far as cleaning with Eclipse is concerned, I had a similar issue and found that after a clean I had to make a trivial modification to any file that contained my annotation, i.e. add a space somewhere, then save the file. This should rebuild everything properly.
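For reference, the wiring the error message refers to is the annotation declaring its transformation class by name; expressed in Java (the package AC is taken from the error message, the rest follows the tutorial's local-transform pattern) it looks roughly like this:

// WithLogging.java - the annotation that points the compiler at the transformation class.
package AC;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.codehaus.groovy.transform.GroovyASTTransformationClass;

@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.METHOD)
@GroovyASTTransformationClass("AC.LoggingASTTransformation")
public @interface WithLogging {
    // The named transformation class must already be compiled when annotated code is compiled,
    // which is why compilation order (and Eclipse's clean behaviour) matters here.
}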