Is there a way to tell the GWT compiler/serializer/linker not to walk beyond certain classes?

Let's say I have a class...
com.mycom.app.AbstractMessage
There is another class in
com.mycom.model.QueryResponse
QueryResponse extends AbstractMessage; notice they are in different packages.
com.mycom.model is a GWT module (declared in its module XML).
When I compile model there are no errors. However, when I try to use QueryResponse in another GWT module, I get runtime errors:
"No source code is available for type com.mycom.app.AbstractMessage; did you forget to inherit a required module"
This leads me to believe that AbstractMessage was not compiled (or not compiled right) to begin with, which is understandable because I DO NOT WANT the "app" package to be a GWT module.
In other words, I only want to compile the classes in "model" and not any of their superclasses. How can I tell the GWT compiler/RPC/linker/serializer etc. not to do so?
I.e., is there a way to tell GWT not to walk beyond certain classes when it is serializing/compiling?
I am doing this in a source environment where we have a lot of packages, most of which depend on MODEL only, and I DO NOT want to make a GWT module out of every package just so it compiles.
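To illustrate, the layout is roughly this (a trimmed sketch; the field names are made up):

// com/mycom/app/AbstractMessage.java -- "app" is NOT a GWT module
package com.mycom.app;

public abstract class AbstractMessage {
    private String messageId;                       // illustrative field
    public String getMessageId() { return messageId; }
}

// com/mycom/model/QueryResponse.java -- "model" IS a GWT module
package com.mycom.model;

public class QueryResponse extends com.mycom.app.AbstractMessage {
    private String payload;                         // illustrative field
    public String getPayload() { return payload; }
}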
Thoughts anyone?

I did a little bit of research on this one. You are right: GWT will look for all implementations of an abstract class, if and only if the abstract class is referenced in an RPC/GWT async interface, even when some of those implementations are in non-GWT packages.
Let's say an object of type AbstractClass comes in over the network, and the GWT deserializer is now tasked with converting the network data into a specific instance. It needs to know about all implementations of AbstractClass to find out which one is coming over the network right now. To accomplish this it generates, at compile time, a .rpc file for each GWT service interface, listing all possible concrete types that the service methods can return.
Ray Ryan (Google employee) once mentioned that it is a bad idea to use interfaces as arguments or return types in any RPC interface, because it makes it difficult for the deserializer to know the exact type.
You can hand-edit the generated .rpc file and remove the offending types, or mark the other implementations as non-serializable by not implementing Serializable in those implementations in the other packages.
A better way could be this: I suspect you wrote "implements java.io.Serializable" at the top level (for the abstract class itself); maybe it's now time to move it to each implementation.
Now the GWT RPC deserializer's task is clear and straightforward: it knows that only certain implementations (the serializable ones) of the abstract class will come over the network, and it reaches and compiles only those. It will not compile the other, non-serializable subclasses of your abstract class, as it knows they aren't serializable.
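A rough sketch of what that change looks like, assuming a hierarchy like the one in the question (names and fields are illustrative, not the actual code):

// The superclass no longer declares Serializable itself, so the RPC
// generator does not try to pull in every subclass it can find.
package com.mycom.app;

public abstract class AbstractMessage {
    private String messageId;
    public String getMessageId() { return messageId; }
}

// Only the concrete types that actually travel over the wire opt in.
package com.mycom.model;

public class QueryResponse extends com.mycom.app.AbstractMessage
        implements java.io.Serializable {
    private String payload;
    public QueryResponse() { }                      // GWT RPC needs a no-arg constructor
    public String getPayload() { return payload; }
}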

There is one more option. If, as I suspect, you are using the command pattern: I have seen all the abstract interfaces and superclasses for Command, Response, etc. always go in the client-side packages, i.e. those that are GWT compiled. They are still referable, usable and instantiable from the server end of the application, so these source files are compiled twice: once by GWT into JavaScript for browser usage, and once by javac into bytecode so they can be referenced from the server side. Thus in all GWT modules, including gwt-user.jar, if you open them with 7Zip or WinZip you will see source and class files JARed together.

I recommend moving AbstractMessage into the model package, as it is the superclass of the model class QueryResponse.
Also, inheritance in models is only a good idea if you have zero fields and only methods (behaviour) in the superclass.
Lastly, if GWT is to turn your QueryResponse into JavaScript, it needs ALL types mentioned in the source file in order to compile properly. So do not mention any server-only classes in a source file meant to become JavaScript.
Have one region that holds all the server-side Java classes that will run in a JVM on the server, and another region full of source files that will be compiled into JavaScript by the GWT compiler. The server-side region code/classes CAN refer to client-region code/classes, but definitely NOT vice versa. Make sure that no code that is going to become JavaScript refers (even via an unused import statement) to a server-side class.
The GWT compiler works with source files only; however, you still need to compile the client code into .class files so your server-side classes can refer to them.
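A hypothetical module descriptor for that split could look like the following (the module name and paths are assumptions; only packages reachable from a <source> path get translated to JavaScript):

<!-- com/mycom/Model.gwt.xml (hypothetical) -->
<module>
    <inherits name="com.google.gwt.user.User" />
    <!-- Client region: translatable source. -->
    <source path="model" />
    <!-- Server-only packages are simply not listed here, so nothing that is
         destined to become JavaScript may refer to them. -->
</module>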

Related

shared domain with scala.js, how?

Probably a basic question, but I'm confused by the various documentation and examples around Scala.js.
I have a domain model I would like to share between scala and scala.js, let's say:
class Estimator(val nickname: String)
... and of course I would like to send objects between the web-client (scala.js with angular via angulate) and the server (scala with spring-mvc on spring-boot).
Should the class extend js.Object and be annotated with @ScalaJSDefined (not yet deprecated in v0.6.15)?
If yes, it would be an unwanted dependency that also ends up in the server part. Neither @ScalaJSDefined nor js.Object is in the dummy scalajs-stubs. Or am I missing something?
If no, how do I pass them through $http.post, which expects a js.Any? I also get some TypeError at other places. Should I pickle/unpickle everywhere, or is there an automatic way?
EDIT 2017-03-30:
Actually this relates to Angulate, the AngularJS facade I chose. For two features (talking to an HTTP server and displaying model fields in HTML), the domain classes have to be JavaScript classes. In Angulate's example, the domain model is duplicated.
There is also (sadly) no plan to include js.Object in scalajs-stubs to overcome this problem. Details in https://github.com/scala-js/scala-js/issues/2564 . Perhaps js.Object wouldn't hurt so much on the JVM...
So, which web frameworks and Scala.js facades do (or don't) support a shared domain nicely? Not angulate1, probably Udash, perhaps React?
(Caveat: I don't know Angulate, which might affect some of this. Speaking generally, though...)
No, those shared objects shouldn't derive from js.Object or use @ScalaJSDefined -- those are only for objects that are designed to interface with JavaScript itself, and it doesn't sound like that's what you have in mind. Objects that are just for Scala don't need them.
But yes -- in general, you're usually going to need to pickle the communications in one way or another. Which pickling library you use is up to you (there are several), but remember that the communication is simply a stream of bytes -- you have to tell the system how to serialize and deserialize between your domain objects and those bytes.
There isn't anything automatic in Scala.js per se -- that's just a language, and doesn't dictate your library choices. You can use implicits to make the pickling semi-automatic, but I recommend being a bit careful with that. I don't see anything obvious in the Angulate docs that indicate that it does the pickling automatically.

By how much is the compiled js reduced in size by using concrete classes instead of interfaces

I have read that for GWT, specifying methods to return a concrete implementation, for example:
public ArrayList<String> getList();
instead of the normally-preferred "abstract interface", for example:
public List<String> getList();
results in GWT producing a smaller compiled JavaScript file, because the client (i.e. JS) code doesn't have to cater for all known implementations of the interface (in the example of List, the client code would have to be able to handle LinkedList, ArrayList, Vector, etc.), so it can optimize the JS by not compiling unused implementations.
My closely-related questions are:
Is this true? (the following questions assume it is true)
Is the optimization per class that uses interfaces, or per application? I.e.:
Do I see a benefit just refactoring up one class? or
Do I only see a benefit once all client classes are refactored to not use interfaces?
The following assumes that you use the interface as part of the signature of a GWT RPC service. If you do not use the interface in the signature of a GWT RPC service, the effect of using classes instead of interfaces should be minimal (e.g. the GWT compiler will only compile the implementations that are actually used).
Is this true? (the following questions assume it is true)
Yes, the output of the GWT compiler gets smaller when it 'knows' better which classes might be sent from server to client.
Is the optimization per class that uses interfaces, or per application?
In case of GWT RPC, per application.
Do I see a benefit just refactoring up one class?
Yes, one interface replaced by an implementation can reduce the generated code size by a few kB, if the interface would otherwise require code for many implementations to be included.
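For illustration, the narrowing happens in the RPC service signature itself; a hypothetical service pair might look like this:

import java.util.ArrayList;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;

// Returning the concrete type tells the RPC machinery exactly which
// implementation can come over the wire, so the other List implementations
// do not have to be compiled in.
public interface ItemService extends RemoteService {
    ArrayList<String> getList();                    // instead of List<String>
}

interface ItemServiceAsync {
    void getList(AsyncCallback<ArrayList<String>> callback);
}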
However, apart from using implementations instead of interfaces, a 'blacklist' of classes can also be defined in the module definition file to explicitly prevent implementations from being included in the generated code, something like:
<extend-configuration-property name="rpc.blacklist"
value="-java.util.ArrayList" />
I just did a test based on the sample app generated by webAppCreator, but I added 3 simple services that returned either List<String> or ArrayList<String>, depending on the build.
The results were that having all services use ArrayList<String> saved about 5Kb from the compiled javascript over having any mix of the return types.
That proves the saving is real and per-app (not per-service).
It also shows how much it saves (in this case).
This doesn't actually apply to the GWT compiler in general. Such an approach applies only to classes used with code generation, for example when using remote procedure calls. See this question for more detailed information. Thus, if you declare an interface instead of a concrete class as the return type, the compiler includes all possible implementations in your compiled code. This increases compilation time and the amount of generated code.
Actually, one might develop an application using GWT without RPC. In that case the compiled code doesn't bloat when using interfaces.

Scala: Lazy baking and runtime compilation of cake pattern

One of the great limitations of the cake pattern is that it's static. I would like to be able to mix in traits potentially written by different coders completely independently. However, the traits would not need to be mixed in frequently. The user would have an initialisation screen where they would choose the traits / assemblies before the main application was run. So the thought occurred to me: why not mix in and compile the chosen traits from within the user selection module? If the compilation failed, no problem; the user would just get back some message - incompatible assemblies or whatever. If the compilation succeeded, then the top UI module would load the newly compiled classes along with the pre-compiled parts of the assemblies and run the main application. Note there might only need to be one or two classes compiled during runtime initialisation. All the rest of the code could have been compiled normally.
I'm pretty new to Scala. Is this a recognised pattern? Is there any support for it? It seems mad to have to use Guice for a relatively simple dependency situation. Can I run the Scala compiler easily from within an application? Can I run it in memory and have its output used from memory without unnecessary file creation?
Note: Although appearing to be dynamic, this methodology would remain 100% static.
Edit: it occurs to me that one of the drivers of Microsoft's Roslyn project was to enable just this sort of thing for C# and Visual Basic. But that seems to have been a pretty big project, even for a high-powered Microsoft team.
Calling the compiler directly from within Scala is doable, but not for the timid. Luckily, the good people at Twitter have automated the process for you. (140-character celebrity micro-blogging, and some cool Scala utilities! Thanks, Twitter.) You can use the com.twitter.util.Eval class to compile and evaluate Scala strings. In your example, you would do something like:
// traitNameList is assumed to hold the fully-qualified names of the chosen traits
val eval = new Eval()
val myObj = eval[BaseClass]("new BaseClass with " + traitNameList.mkString(" with "))
This will create for you a new object with all of the traits you desire mixed in. The question then arises as to whether this is a good idea. Downsides:
Calling out to the Scala compiler is not quick
If you do this enough, you will overload the PermGen space, as the classes you create will never be garbage collected
This is really more the sort of thing you want a dynamic language for, rather than Scala. You're likely to find places where this sort of works but clashes with the rest of your architecture (yes, that's vague).

Why do web development frameworks tend to work around the static features of languages?

I was a little surprised when I started using Lift by how heavily it uses reflection (or appears to); it was a little unexpected in a statically typed functional language. My experience with JSP was similar.
I'm pretty new to web development, so I don't really know how these tools work, but I'm wondering,
What aspects of web development encourage using reflection?
Are there any tools (in statically typed languages) that handle (1) referring to code from a template page (2) object-relational mapping, in a way that does not use reflection?
Please see the Lift source. It doesn't use reflection in most of the code that I have studied; almost everything is statically typed. If you are referring to Lift views, they are processed as XML nodes, and that too is not reflection.
Specifically referring to the <lift:Foo.bar/> issue:
When <lift:Foo.bar/> is encountered in the code, Lift makes a few guesses about what the original name might have been (different naming conventions) and then calls java.lang.Class.forName to get the class. (Relevant code is in LiftSession.scala and ClassHelpers.scala.) It will only find classes registered with addToPackages during boot.
Note that it is also possible (and common) to register classes and methods manually. Convention is still that all transformations must be of the form NodeSeq => NodeSeq because that is the only thing which makes sense for an untyped HTML/XHTML output.
So, what you have is Lift's internal registry of node transformations on one side, and the implicit registry of the module on the other. Both use a simple string lookup to execute a method; I guess it is arguable whether one is more reflection-based than the other.

GWT Generators - Determine Whether a Class is Referenced Anywhere

I have a GWT project that uses generators to create lightweight dynamic reflection objects.
I was wondering if anybody knows of a way to determine whether or not a particular class is referenced in the dependency tree beginning at all EntryPoints. If I could do this, I could avoid generating reflection data for classes that will never be used anyway.
My understanding is that when GWT does its compiling, it performs a similar check so that it can reduce the total size of the compiled code, but I haven't been able to find any related methods in TypeOracle or anything like that.
This is an indirect method of accomplishing what you are getting at. I believe each GWT module is fully packaged into a regular Java package. You can use
TypeOracle.findPackage(String pkgName)
to get the JPackage instance, and on that instance you use findType(String typeName) to see if a type is present in that package. If it is present, it's likely that it is referenced in some file and GWT will compile it.
There is also the method getPackages(), which returns an array of all packages known to the type oracle and therefore reachable by the GWT compiler.
JPackage[] getPackages()
You can call findType() on each package iteratively to find out whether the type is going to be compiled or not.
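A rough sketch of that check inside a generator (the helper method is hypothetical; TypeOracle, JPackage and JClassType come from the GWT generator API):

import com.google.gwt.core.ext.typeinfo.JClassType;
import com.google.gwt.core.ext.typeinfo.JPackage;
import com.google.gwt.core.ext.typeinfo.TypeOracle;

// Returns true if any package known to the type oracle contains the type,
// i.e. the type is reachable by the GWT compiler.
static boolean isKnownToCompiler(TypeOracle oracle, String simpleTypeName) {
    for (JPackage pkg : oracle.getPackages()) {
        JClassType type = pkg.findType(simpleTypeName);  // null if not present
        if (type != null) {
            return true;
        }
    }
    return false;
}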
The BEST method is to define a custom annotation and whitelist all the classes for which you do want to generate reflection code: annotate the required classes with it, and check for the presence of that annotation before generating code for a type.
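A minimal sketch of that whitelist approach (the annotation name and the generation helper are hypothetical):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Opt-in marker: only annotated types get reflection data generated for them.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Reflectable { }

// Inside the generator:
// for (JClassType type : typeOracle.getTypes()) {
//     if (type.getAnnotation(Reflectable.class) == null) {
//         continue;                                // not whitelisted, skip it
//     }
//     generateReflectionDataFor(type);             // hypothetical helper
// }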
My favorite is to follow a naming convention on top of the annotation (I did both together), and thus maintain a whitelist, and make the convention (it's usually a regex) a "setting" that can be changed however the team wants.