Controlling which classes GWT includes with -generateJsInteropExports

Can I control which classes are included when I use -generateJsInteropExports?
I'm finding that when I use the flag, the JS output includes a bunch of classes that I'm not using in the project, but whose source appears in some of the packages I'm using. I don't want these classes to be included in the output. Normally, GWT does a good job of only bringing in classes that I'm actually using.
How can I tell the compiler "in this compilation I'd like you to generate JsInterop for these classes, but not these"?
I found these GWT compiler options:
-includeJsInteropExports/excludeJsInteropExports
Include/exclude members and classes while generating JsInterop exports. The flag can be set multiple times to expand the pattern. (It only has an effect if exporting is enabled via -generateJsInteropExports.)
But I couldn't seem to get them to work. I tried using:
-generateJsInteropExports
-includeJsInteropExports com.example.MyClass
The class wasn't included.

The filtering is done at the level of class members (i.e. fields and methods) rather than type names. To match all members of a class the syntax is:
-generateJsInteropExports
-includeJsInteropExports com.example.MyClass.*
Note: the pattern is a regular expression, so the dots match "any character" rather than literal periods. You'd have to escape them (\.) if that created ambiguity.
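For illustration, here is a minimal end-to-end sketch (the class, package, and member names are invented; only the flags themselves come from the documentation above). The class must be exposed through JsInterop annotations in the first place; the include pattern then restricts the generated exports to that class's members:

package com.example;

import jsinterop.annotations.JsType;

// Exported under its package namespace once -generateJsInteropExports is set.
@JsType
public class MyClass {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

Compiler flags (the pattern is matched against fully qualified member names such as com.example.MyClass.greet):

-generateJsInteropExports
-includeJsInteropExports com.example.MyClass.*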

Related

ELKI: Implementing a custom ResultHandler

I need to implement a custom ResultHandler but I am confused about how to actually integrate my custom class into the software package.
I have read this: http://elki.dbs.ifi.lmu.de/wiki/HowTo/InvokingELKIFromJava but my question is how are you meant to implement a custom result handler such that it shows up in the GUI?
The only way I can think of doing it is by extracting the elki.jar package and manually inserting my custom class into the source code, and then re-jarring the package. However I am fairly sure this is not the way it is meant to be done.
Also, in my ResultHandler I need to output all the rows to a single text file, with the cluster that each row belongs to displayed. Any tips on how I can achieve this?
There are two questions in here.
In order to make your class instantiable by the UIs (both MiniGUI and command line), the classes must implement our Parameterization API. There are essentially two choices to make your class instantiable:
Add a public constructor without parameters (the UI won't know how to set your parameters!)
Add an inner static class Parameterizer that handles parameterization
In order to add your class to autocompletion (the dropdown menu), the classes must be discovered by the MiniGUI/CLI/other UIs. ELKI uses two methods of discovery:
For .jar files, it reads the META-INF/elki/interfacename service files (where interfacename is the fully qualified name of the implemented interface). This is a classic service-loader approach, except that we also allow ordering instances.
For directories only, ELKI will also scan for all .class files and inspect them. This is mostly meant for development time, to avoid having to update the service files all the time. For performance reasons, we do not inspect the contents of .jar files; these are expected to use service files.
You do not need your class to be in the dropdown menu - you can always type the full class name. If that does not work, adding the name to the service file will not help either; it means ELKI either cannot find the class at all, or cannot instantiate it.
There is also a tutorial on implementing a custom result handler, but it does not discuss how to add it to the menu. In "development mode" - when you have a folder of .class files - it will show up automatically.
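To make the first point concrete, here is a rough skeleton of such a handler. This is only a sketch: the package names and the processNewResult signature below follow ELKI 0.7.x and may differ in your release, so check them against your version.

package com.example;

import de.lmu.ifi.dbs.elki.result.Result;
import de.lmu.ifi.dbs.elki.result.ResultHandler;
import de.lmu.ifi.dbs.elki.result.ResultHierarchy;
import de.lmu.ifi.dbs.elki.utilities.optionhandling.AbstractParameterizer;

public class MyResultHandler implements ResultHandler {
    // Choice 1: a public no-argument constructor makes the class instantiable,
    // but the UI cannot set any parameters this way.
    public MyResultHandler() {
    }

    @Override
    public void processNewResult(ResultHierarchy hier, Result newResult) {
        // Walk the result hierarchy here and write each row together with
        // its cluster assignment to your output file.
    }

    // Choice 2: an inner static Parameterizer is what the MiniGUI/CLI use
    // to configure and construct the handler.
    public static class Parameterizer extends AbstractParameterizer {
        @Override
        protected MyResultHandler makeInstance() {
            return new MyResultHandler();
        }
    }
}

To appear in the dropdown when running from a .jar, you would additionally add the line com.example.MyResultHandler to a service file named after the interface, e.g. META-INF/elki/de.lmu.ifi.dbs.elki.result.ResultHandler (again, adjust the interface's package to your ELKI version).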

How to create a scala class based on user input?

I have a use case where I need to create a class based on user input.
For example, the user input could be: "(Int,fieldname1) : (String,fieldname2) : .. etc"
Then a class has to be created as follows at runtime
class Some {
  var fieldname1: Int = _
  var fieldname2: String = _
  // ...and so on
}
Is this something that Scala supports? Any help is really appreciated.
Your scenario doesn't seem to make sense. It's not so much an issue of runtime instantiation (the JVM can certainly do this with reflection). Really, what you're asking is to dynamically generate a class, which is only useful if your code makes use of it later on. But how can your code make use of it later on if you don't know what it looks like? For example, how would your later code know which fields it could reference?
No, not really.
The idea of a class is to define a type that can be checked at compile time. You see, creating it at runtime would somewhat contradict that.
You might want to store the user input in a different way, e.g. a map.
What are you trying to achieve by creating a class at runtime?
I think this makes sense, as long as you are using your "data model" in a generic manner.
Will this approach work here? Depends.
If your data is coming from a file that is read at runtime but is available at compile time, then you're in luck and type safety will be maintained. In fact, you will have two options.
Split your project into two:
In the first run, read the file and write the new source programmatically (as Strings, or better, with Treehugger).
In the second run, compile your generated class with the rest of your project and use it normally.
If #1 is too "manual", then use Macro Annotations. The idea here is that the main sub-project's compile time follows the macro sub-project's runtime. Therefore, if we provide the main sub-project with an "empty" class, members can be added to it dynamically at compile time using data that the macro sees at runtime. To get started, modify the macro in this example to read from a file.
Else, if your data is truly only knowable at runtime, then @Rob Starling's suggestion may work for you as it did for me. I'll share my attempt if you want to be a guinea pig. For debugging, I've got an App.scala in there that shows how to pass strings to a runtime class generator and access it at runtime with Java reflection, and even define a Scala type alias with it. So the question is: will your new dynamic class serve as a type parameter in Slick, or fail to, as it sometimes does with other libraries?

How to internationalize Java source code?

EDIT: I completely re-wrote the question since it seems like I was not clear enough in my first two versions. Thanks for the suggestions so far.
I would like to internationalize the source code for a tutorial project (please notice, not the runtime application). Here is an example (in Java):
/** A comment */
public void doSomething() {
    System.out.println("Something was done successfully");
}
in English, and then have the French version be something like:
/** Un commentaire */
public void faitQuelqueChose() {
    System.out.println("Quelque chose a été fait avec succès.");
}
and so on. And then have something like a properties file somewhere to edit these translations with usual tools, such as:
com.foo.class.comment1=A comment
com.foo.class.method1=doSomething
com.foo.class.string1=Something was done successfully
and for other languages:
com.foo.class.comment1=Un commentaire
com.foo.class.method1=faitQuelqueChose
com.foo.class.string1=Quelque chose a été fait avec succès.
I am trying to find the easiest, most efficient and unobtrusive way to do this with the least amount of manual grunt work (other than, obviously, translating the actual text). Preferably working under Eclipse. For example, the original code would be written in English, then externalized (to properties, preferably leaving the original source untouched), translated (by a human) and then re-generated (as a separate source file / project).
Some trails I have found (other than what AlexS suggested):
ANTLR, a language parser/generator. There seems to be a supporting Eclipse plugin.
Using Eclipse's AST (Abstract Syntax Tree) and I guess building some kind of plugin.
I am just surprised there isn't a tool out there that does this already.
I'd use unique strings as method names (or for anything you want to be replaced by localized versions):
public void m37hod_1() {
    System.out.println(m355a6e_1);
}
Then I'd define a property file for each language like this:
m37hod_1=doSomething
m355a6e_1="Something was done successfully"
And then I'd write a small program that parses the source files and replaces the tokens, so everything happens outside of Eclipse.
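A rough sketch of what such a program could look like (the argument handling and file layout are assumptions; it loads one language's property file and blindly substitutes every token in every .java file under a directory, overwriting the files in place, so in practice you would run it on a copy of the sources):

import java.io.IOException;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TranslateSources {
    public static void main(String[] args) throws IOException {
        // args[0] = property file for the target language, args[1] = source directory
        Properties tokens = new Properties();
        try (Reader in = Files.newBufferedReader(Paths.get(args[0]), StandardCharsets.UTF_8)) {
            tokens.load(in);
        }
        List<Path> sources;
        try (Stream<Path> walk = Files.walk(Paths.get(args[1]))) {
            sources = walk.filter(p -> p.toString().endsWith(".java")).collect(Collectors.toList());
        }
        for (Path file : sources) {
            String code = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
            for (Map.Entry<Object, Object> entry : tokens.entrySet()) {
                // e.g. replaces the token m37hod_1 with doSomething
                code = code.replace((String) entry.getKey(), (String) entry.getValue());
            }
            Files.write(file, code.getBytes(StandardCharsets.UTF_8));
        }
    }
}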
Or I'd use the Ant task Replace with the same property files, instead of a standalone translation program.
Something like this:
<replace
    dir="${src}"
    includes="**/*.java"
    value="defaultvalue"
    propertyFile="${language}.properties">
  <replacefilter
      token="m37hod_1"
      property="m37hod_1"/>
  <replacefilter
      token="m355a6e_1"
      property="m355a6e_1"/>
</replace>
Using one of these methods you won't have to explain anything about localization in your tutorials (unless you want to), but can concentrate on your real topic.
What you want is a massive code change engine.
ANTLR won't do the trick; ASTs are necessary but not sufficient. See my essay on Life After Parsing. Eclipse's "AST" may be better, if the Eclipse package provides some support for name and type resolution; otherwise you'll never be able to figure out how to replace each "doSomething" (might be overloaded or local), unless you are willing to replace them all identically (and you likely can't do that, because some symbols refer to Java library elements).
Our DMS Software Reengineering Toolkit could be used to accomplish your task. DMS can parse Java to ASTs (including comment capture), traverse the ASTs in arbitrary ways, analyze/change the ASTs, and then export the modified ASTs as valid source code (including the comments).
Basically you want to enumerate all comments, strings, and declarations of identifiers, export them to an external "database" to be mapped (manually? by Google Translate?) to an equivalent. In each case you want to note not only the item of interest, but its precise location (source file, line, even column) because items that are spelled identically in the original text may need different spellings in the modified text.
Enumeration of strings is pretty easy if you have the AST; simply crawl the tree and look for tree nodes containing string literals. (ANTLR and Eclipse can surely do this, too).
Enumeration of comments is also straightforward if the parser you have captures comments. DMS does. I'm not quite sure if ANTLR's Java grammar does, or the Eclipse AST engine; I suspect they are both capable.
Enumeration of declarations (classes, methods, fields, locals) is relatively straightforward; there are rather more cases to worry about (e.g., anonymous classes containing extensions to base classes). You can code a procedure to walk the AST and match the tree structures, but here's the place where DMS starts to make a difference: you can write surface-syntax patterns that look like the source code you want to match. For instance:
pattern local_for_loop_index(i: IDENTIFIER, t: type, e: expression, e2: expression, e3:expression): for_loop_header
= "for (\t \i = \e,\e2,\e3)"
will match declarations of local for-loop variables, and return subtrees for the IDENTIFIER, the type, and the various expressions; you'd want to capture just the identifier (and its location, easily done by taking it from the source position information that DMS stamps on every tree node). You'd probably need 10-20 such patterns to cover the cases of all the different kinds of identifiers.
With the capture step completed, something needs to translate all the captured entities into your target language. I'll leave that to you; what's left is to put the translated entities back.
The key to this is the precise source location. A line number isn't good enough in practice; you may have several translated entities on the same line and, in the worst case, some with different scopes (imagine nested for loops, for example). The replacement process for comments, strings and the declarations is straightforward: rescan the tree for nodes that match any of the identified locations, and replace the entity found there with its translation. (You can do this with DMS and ANTLR. I think the Eclipse AST requires you to generate a "patch", but I guess that would work.)
The fun part comes in replacing the identifier uses. For this, you need to know two things:
for any use of an identifier, which declaration it uses; if you know this, you can replace it with the new name for that declaration; DMS provides full name and type resolution as well as a usage list, making this pretty easy, and
do renamed identifiers shadow one another in scopes differently than the originals? This is harder to determine in general. However, for the Java language we have a "shadowing" check, so you can at least decide after renaming that you have an issue. (There's even a renaming procedure that can be used to resolve such shadowing conflicts.)
After patching the trees, you simply rewrite the patched tree back out as a source file using DMS's built-in prettyprinter. I think Eclipse AST can write out its tree plus patches. I'm not sure ANTLR provides any facilities for regenerating source code from ASTs, although somebody may have coded one for the Java grammar. This is harder to do than it sounds, because of all the picky detail. YMMV.
Given your goal, I'm a little surprised that you don't want a source file "foo.java" containing "class foo { ... }" to get renamed to match the translated class name. This would require not only writing the transformed tree to the translated file name (pretty easy) but perhaps even reconstructing the directory tree (DMS provides facilities for doing directory construction and file copies, too).
If you want to do this for many languages, you'd need to run the process once per language. If you wanted to do this just for strings (the classic internationalization case), you'd replace each string (that needs changing, not all of them do) by a call on a resource access with a unique resource id; a runtime table would hold the various strings.
One approach would be to finish the code in one language, then translate to others.
You could use Eclipse to help you.
Copy the finished code to language-specific projects.
Then:
Identifiers: In the Outline view (Window>Show View>Outline), select each item and Refactor>Rename (Alt+Shift+R). This takes care of renaming the identifier wherever it's used.
Comments: Use Search>File to find all instances of "/*" or "//". Click on each and modify.
Strings:
Use Source>Externalize Strings to find all of the literal strings.
Search>File for "Messages.getString()".
Click on each result and modify.
On each file, use Edit>Find/Replace, replacing "//\$NON-NLS-.*\$" with the empty string.
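For illustration, the externalize step rewrites a literal into a lookup roughly like this (the accessor class and key names depend on what you choose in the wizard, so treat them as examples):

// Before "Source > Externalize Strings":
System.out.println("Something was done successfully");

// After: the wizard generates a Messages accessor class and a messages.properties
// file, and rewrites the call site to:
System.out.println(Messages.getString("MyClass.0")); //$NON-NLS-1$

// messages.properties then contains:
// MyClass.0=Something was done successfully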
For the printed/logged strings, Java has some internationalization functionality, namely ResourceBundle. There is a tutorial about this on the Oracle site.
Eclipse also has a feature for this ("Externalize Strings", as I recall).
For the method names, I don't think there is anything out there, since that would require you to maintain the source code in many versions...
regards
Use a .properties file, like:
Locale locale = new Locale(language, country);
ResourceBundle captions = ResourceBundle.getBundle("Messages", locale);
This way, Java picks the Messages.properties file according to the current locale (which is acquired from the operating system or Java locale settings).
The file should be on the classpath, called Messages.properties (the default one), or Messages_de.properties for German, etc.
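A small, self-contained example of the lookup (the bundle name matches the above; the key name is just an example):

Messages.properties:
something.done=Something was done successfully

Messages_fr.properties:
something.done=Quelque chose a été fait avec succès.

import java.util.Locale;
import java.util.ResourceBundle;

public class I18nDemo {
    public static void main(String[] args) {
        Locale locale = Locale.FRENCH;
        ResourceBundle captions = ResourceBundle.getBundle("Messages", locale);
        // Resolves Messages_fr.properties first, eventually falling back to Messages.properties.
        System.out.println(captions.getString("something.done"));
    }
}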
See this for a complete tutorial:
http://docs.oracle.com/javase/tutorial/i18n/intro/steps.html
As far as the source code goes, I'd strongly recommend staying with English. Method names like getUnternehmen() are worse for the average developer than plain English ones.
If you need to familiarize foreign developers to your code, write a proper developer documentation in their language.
If you'd like to have Javadoc in both English and other languages, see this SO thread.
You could write your code using freemarker templates (or another templating language such as velocity).
doSomething.tml
/** ${lang['doSomething.comment']} */
public void ${lang['doSomething.methodName']}() {
    System.out.println("${lang['doSomething.message']}");
}
lang_en.prop
doSomething.comment=A comment
doSomething.methodName=doSomething
doSomething.message=Something was done successfully
And then merge the template with each language prop file during your build (using Ant / Gradle / Maven etc.)
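A hedged sketch of that merge step using FreeMarker's standard API (the file names, output path, and version constant are assumptions; the data model simply exposes the property file under the name lang used in the template):

import freemarker.template.Configuration;
import freemarker.template.Template;

import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.Reader;
import java.io.Writer;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class GenerateSources {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_23);
        cfg.setDirectoryForTemplateLoading(new File("templates"));

        // Load one language, e.g. lang_en.prop or lang_fr.prop.
        Properties lang = new Properties();
        try (Reader in = new FileReader("lang_en.prop")) {
            lang.load(in);
        }

        // Expose the properties under the name used in the template: ${lang['...']}
        Map<String, Object> model = new HashMap<>();
        model.put("lang", lang);

        Template template = cfg.getTemplate("doSomething.tml");
        try (Writer out = new FileWriter("src-gen/DoSomething.java")) {
            template.process(model, out);
        }
    }
}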

Error with Groovy AST transformations when cleaning project in Eclipse

I'm trying to work through Groovy's Implementing Local AST Transformations tutorial, but whenever I clean my project I get this error in each file that has the @WithLogging annotation in it:
Groovy:Could not find class for Transformation Processor AC.LoggingASTTransformation declared by AC.WithLogging
So you have a package named "AC" that contains both "WithLogging.groovy" and "LoggingASTTransformation.groovy" classes? Does it also contain any classes that implement the "WithLogging" interface?
If so, I'd suggest you move the class(es) that use your annotation to a location outside of the annotation-defining package (the default package will suffice, for diagnostic purposes) - order of compilation matters with transformations. See this post on the Groovy users mailing list for more on that.
Also try changing the annotation from @WithLogging to @AC.WithLogging.
As far as cleaning with Eclipse is concerned, I had a similar issue and found that I had to make a trivial modification after a clean to any file that contained my annotation, i.e. add a space somewhere, then save the file. This should rebuild everything properly.

Does NetBeans support naming conventions for fields, parameters and local variables like Eclipse?

Eclipse supports naming conventions for fields, parameters and local variables. For each variable type it is possible to configure a list of prefixes, suffixes, or both. Eclipse respects this configuration when generating methods or getters/setters.
Is there a similar configuration option in NetBeans? Is there another way to achieve the same thing? I want to get parameters with prefixes when generating implementations of abstract methods, and I want the prefix to be removed when generating getters/setters (example: for _myVar it should generate getMyVar and setMyVar).
You can use Alt+Insert to generate the features you need, like getters, setters, constructors, and so on. When you change something, you can use refactoring.