shared domain with scala.js, how? - scala.js

Probably a basic question, but I'm confused with the various documentations and examples around scala.js.
I have a domain model I would like to share between scala and scala.js, let's say:
class Estimator(val nickname: String)
... and of course I would like to send objects between the web-client (scala.js with angular via angulate) and the server (scala with spring-mvc on spring-boot).
Should the class extend js.Object? And be annotated with @ScalaJSDefined (not yet deprecated in v0.6.15)?
If yes, it would be an unwanted dependency that also ends up in the server part. Neither @ScalaJSDefined nor js.Object is in the dummy scalajs-stubs. Or am I missing something?
If no, how do I pass them through $http.post, which expects a js.Any? I also get some TypeError at other places. Should I pickle/unpickle everywhere, or is there an automatic way?
EDIT 2017-03-30:
Actually this relates to Angulate, the facade for AngularJS I chose. For two features (communicating with an HTTP server and displaying model fields in HTML), the domain classes have to be JavaScript classes. In Angulate's example, the domain model is duplicated.
There is also (sadly) no plan to include js.Object in scalajs-stubs to overcome this problem. Details in https://github.com/scala-js/scala-js/issues/2564 . Perhaps js.Object doesn't hurt so much on the JVM...
So, which web frameworks and facades for Scala.js do or don't nicely support a shared domain model? Not Angulate 1; probably Udash; perhaps React?

(Caveat: I don't know Angulate, which might affect some of this. Speaking generally, though...)
No, those shared objects shouldn't derive from js.Object or use @ScalaJSDefined -- those are only for objects that are designed to interface with JavaScript itself, and it doesn't sound like that's what you have in mind. Objects that are just for Scala don't need them.
But yes -- in general, you're usually going to need to pickle the communications in one way or another. Which pickling library you use is up to you (there are several), but remember that the communication is simply a stream of bytes -- you have to tell the system how to serialize and deserialize between your domain objects and those bytes.
There isn't anything automatic in Scala.js per se -- that's just a language, and doesn't dictate your library choices. You can use implicits to make the pickling semi-automatic, but I recommend being a bit careful with that. I don't see anything obvious in the Angulate docs indicating that it does the pickling automatically.
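For illustration, here is a minimal sketch of what doing the pickling yourself can look like, using uPickle as one of the several possible libraries. The case-class form of Estimator and the shared cross-project layout are assumptions made for the example, not something Angulate requires:

import upickle.default.{ReadWriter, macroRW, read, write}

// Lives in the shared (cross-compiled) part of a Scala/Scala.js project,
// visible to both the JVM server and the JS client.
case class Estimator(nickname: String)

object Estimator {
  implicit val rw: ReadWriter[Estimator] = macroRW
}

object PicklingExample {
  // Client side: turn the domain object into JSON before sending it.
  val json: String = write(Estimator("alice"))      // {"nickname":"alice"}

  // Server side: turn the request body back into the domain object.
  val estimator: Estimator = read[Estimator](json)
}

The client would then hand the resulting JSON string (or a JS object parsed from it) to whatever HTTP call the facade exposes.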

Related

What is the mechanism by which anonymous functions are serializable?

I have read various old StackOverflow discussions on this general topic but there is still one part of the puzzle which appears, to me at least, to be missing.
It is simply this: what is the actual mechanism by which the anonymous function is serialized? And, where could we find its source code?
Or is it all just magic?
Other relevant SO articles (the third of these itself points to some useful articles outside StackOverflow):
Serialization of Scala Functions
Why Scala can serialize...
How to serialize functions in Scala
I'm going to answer my own question with what I believe is the correct answer. The reason I'm doing it this way is that it seems to me that this aspect of serialization is never explained, and it does appear to work just by magic. I essentially confirmed (to my satisfaction) the answer as part of the research I was doing to ensure that my question above was indeed appropriate.
But the main reason I'm offering my own answer is that I invite knowledgeable users either to agree with it, to correct it, to expand upon it, or to destroy it. Here goes...
It's all magic. No, I'm just kidding. But essentially the mechanism, once Scala has taken the step of representing the anonymous function as a class, is entirely provided by Java. In addition, we, the programmers, need to ensure that an anonymous function is as much pure code as possible: no references to any objects that might not be serializable. The secret sauce is to be found in the Java class ObjectStreamClass, which, in turn, is invoked by the Java serialization classes ObjectInputStream and ObjectOutputStream.
Essentially the serialized bytes contain the full pathname of the class, its serialVersionUID, and whatever other relevant information is necessary. When deserializing, the system will simply look up the class in the appropriate classpath and return a reference to it. This obviously assumes that the deserializing system has the class in its classpath. The mechanism for that is a little beyond the scope of my research but it's clear that in a system like Spark, it should be easy to arrange.
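As a small illustration of that mechanism, the sketch below round-trips a Scala anonymous function through ObjectOutputStream/ObjectInputStream within a single JVM, so the generated function class is trivially on the classpath for the lookup described above:

import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Round-trips an anonymous function through plain Java serialization.
// This works because the class the compiler generates for the function
// implements java.io.Serializable, so ObjectOutputStream records its class
// name and serialVersionUID, and ObjectInputStream looks the class up again.
object SerializeFunction {
  def main(args: Array[String]): Unit = {
    val f: Int => Int = x => x + 1            // pure code, no captured objects

    val bytes = new ByteArrayOutputStream()
    val out   = new ObjectOutputStream(bytes)
    out.writeObject(f)
    out.close()

    val in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    val g  = in.readObject().asInstanceOf[Int => Int]

    println(g(41))                            // prints 42
  }
}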
No (additional) compilation/decompilation of bytecode is necessary, as the class loader has everything it needs. I'm slightly surprised to find ObjectStreamClass in java.io rather than in the reflection package, but I suppose there's an argument for it being there, given the tight coupling with ObjectInputStream and ObjectOutputStream.
One thing to keep in mind is that while we think in terms of serializing/deserializing objects, rather than classes, what we are dealing with here is an object of type Class.
One more thing to note is that in Scala 2.12, anonymous functions are now implemented differently: as Java 8 lambdas. This has broken the mechanism described above in a rather serious way -- so serious that Spark is currently having trouble supporting Scala 2.12. The holdup appears to be this issue: SPARK-14540.

By how much is the compiled js reduced in size by using concrete classes instead of interfaces

I have read that for GWT, specifying methods to return a concrete implementation, for example:
public ArrayList<String> getList();
instead of the normally-preferred "abstract interface", for example:
public List<String> getList();
results in GWT producing a smaller compiled JavaScript file, because the client (i.e. JS) code doesn't have to cater for all known implementations of the interface (in the example of List, the client code would have to be able to handle LinkedList, ArrayList, Vector, etc.), so it can optimize the JS by not compiling unused implementations.
My closely-related questions are:
Is this true? (the following questions assume it is true)
Is the optimization per-class that uses interfaces, or per application? ie
Do I see a benefit just refactoring up one class? or
Do I only see a benefit once all client classes are refactored to not use interfaces?
The following assumes that you use the interface as part of the signature of a GWT RPC service. If you do not use the interface in the signature of a GWT RPC service, I think the effect of using classes instead of interfaces should be minimal (e.g. the GWT compiler will only compile the used implementations).
Is this true? (the following questions assume it is true)
Yes, the output of the GWT compiler gets smaller when it 'knows' better which classes might be sent from server to client.
Is the optimization per-class that uses interfaces, or per application? ie
In case of GWT RPC, per application.
Do I see a benefit just refactoring up one class?
Yes, replacing one interface with an implementation can reduce the generated code size by a few KB, if the interface would otherwise require including code for many classes.
However, apart from using implementations instead of interfaces, a 'blacklist' of classes can also be defined in the module definition file to explicitly circumvent the inclusion of implementations in the generated code; something like
<extend-configuration-property name="rpc.blacklist"
value="-java.util.ArrayList" />
I just did a test based on the sample app generated by webAppCreator, but I added 3 simple services that returned either List<String> or ArrayList<String>, depending on the build.
The results were that having all services use ArrayList<String> saved about 5 KB from the compiled JavaScript compared with having any mix of the return types.
That proves the saving is real and per-app (not per-service).
It also shows how much it saves (in this case).
This doesn't actually apply to the GWT compiler in general. This approach applies only to classes used with code generation, for example when using Remote Procedure Calls. See this question for more detailed information. Thus, if you declare an interface instead of a concrete class as the return type, the compiler includes all possible implementations in your compiled code. This increases compilation time and the amount of generated code.
Actually, one might develop an application using GWT without RPC. In that case, the compiled code doesn't bloat when using interfaces.

Why do web development frameworks tend to work around the static features of languages?

I was a little surprised, when I started using Lift, by how heavily it uses reflection (or appears to); it was a little unexpected in a statically typed functional language. My experience with JSP was similar.
I'm pretty new to web development, so I don't really know how these tools work, but I'm wondering,
What aspects of web development encourage using reflection?
Are there any tools (in statically typed languages) that handle (1) referring to code from a template page and (2) object-relational mapping in a way that does not use reflection?
Please see the Lift source. It doesn't use reflection for most of the code that I have studied. Almost everything is statically typed. If you are referring to Lift views, they are processed as XML nodes; that, too, is not reflection.
Specifically referring to the <lift:Foo.bar/> issue:
When <lift:Foo.bar/> is encountered in the code, Lift makes a few guesses about what the original name should have been (to account for different naming conventions) and then calls java.lang.Class.forName to get the class. (Relevant code in LiftSession.scala and ClassHelpers.scala.) It will only find classes registered with addToPackages during boot.
Note that it is also possible (and common) to register classes and methods manually. The convention is still that all transformations must be of the form NodeSeq => NodeSeq, because that is the only thing which makes sense for untyped HTML/XHTML output.
So, what you have is Lift's internal registry of node transformations on one side, and on the other side the implicit registry of the module. Both types use a simple string lookup to execute a method. I guess it is arguable whether one is more reflection-based than the other.
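Purely as an illustration (this is not Lift's actual code, and the names are made up), the two ideas look roughly like this in plain Scala:

import scala.xml.NodeSeq

// (1) A "snippet" is just an ordinary NodeSeq => NodeSeq transformation.
class Foo {
  def bar(in: NodeSeq): NodeSeq = <div class="snippet">{in}</div>
}

// (2) <lift:Foo.bar/> can be resolved by a Class.forName-style string lookup.
object SnippetDispatch {
  // "Foo.bar" + input nodes -> transformed nodes, via a runtime lookup
  def invoke(qualifiedName: String, in: NodeSeq): NodeSeq = {
    val Array(className, methodName) = qualifiedName.split('.')
    val cls      = Class.forName(className)   // Lift would also try registered packages / name variants
    val instance = cls.getDeclaredConstructor().newInstance()
    val method   = cls.getMethod(methodName, classOf[NodeSeq])
    method.invoke(instance, in).asInstanceOf[NodeSeq]
  }
}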

Is there a way to work around a GWT compiler/serializer/linker issue?

Let's say I have a class...
com.mycom.app.AbstractMessage
There is another class in
com.mycom.model.QueryResponse
QueryResponse extends AbstractMessage, and notice they are in different packages.
com.mycom.model is a GWT Module and in the module XML
When I compile model, there are no errors. However, when I try to use QueryResponse in another GWT module, I get runtime errors:
"No source code is available for type com.mycom.app.AbstractMessage; did you forget to inherit a required module"
This leads me to believe that AbstractMessage was not compiled, or not compiled right, to begin with -- understandably, because I DO NOT WANT the "app" package to be a GWT module.
In other words, I only want to compile all classes in "model" and not any super classes. How can I tell the GWT compiler/rpc/linker/serializer etc not to do so?
I.e. is there a way to tell GWT not to walk beyond certain classes when serializing/compiling?
I am doing this in a source environment where we have a lot of packages, most of them depending on MODEL only, and I DO NOT want to make a GWT module out of every package just so it compiles.
Thoughts anyone?
I did a little bit of research on this one. You are right: GWT will look for all implementations of an abstract class, if and only if the AbstractClass is referenced in an RPC GWTAsync interface, even though some are in non-GWT packages.
Let's say an object of type AbstractClass comes in over the network, and the GWT deserializer is now tasked with converting the network data into a specific instance. It needs to know about all implementations of AbstractClass, to find out which one is coming over the network right now! So to accomplish this, it generates, at compile time, a .rpc file for each GWT service interface, listing all possible concrete types that the service methods can return.
Ray Ryan (a Google employee) once mentioned that it is a bad idea to use interfaces as arguments or return types in any RPC interface, because it makes it difficult for the deserializer to know the exact type.
You can hand-edit the generated RPC file and remove the offending types, or mark the other implementations as non-serializable by not implementing Serializable in those implementations in other packages.
A better way could be this: I suspect you wrote "implements java.io.Serializable" at the top level (for the AbstractClass itself); maybe it's now time to move it to each implementation.
Now the GWT RPC deserializer's task is clear and straightforward: it knows that only certain implementations (those that are serializable) of the AbstractClass will come over the network, and it will reach and compile only them. So it will not compile the other, non-serializable subclasses of your AbstractClass, as it knows they aren't serializable.
There is one more option. If, as I suspect, you are using the command pattern: I have seen all the abstract interfaces and super classes for Command, Response, etc. always go in the client-side packages, i.e. those that are GWT-compiled. They are referable, usable, and instantiable from the server end of the application, so these source files are compiled twice: once by GWT into JavaScript for browser usage, and once by javac into bytecode to allow references from the server side. Thus in all GWT modules, including gwt-user.jar, if you open them with 7Zip or WinZip you will see source and class files JARed together.
I recommend moving AbstractMessage into the model package, as it is the model QueryResponse's superclass.
Also, inheritance in models is only a good idea if you have zero fields and only methods (behaviour) in the superclass.
Lastly, if GWT is to turn your QueryResponse into JavaScript, it needs ALL types mentioned in the source file in order to compile properly. So do not mention any server-only classes in a source file meant to become JavaScript.
Have one region containing all the server-side Java classes that will run in a JVM on the server, and another region full of source files that will be compiled into JavaScript by the GWT compiler. The server-side region code/classes CAN refer to client region code/classes, but definitely NOT vice versa. Make sure that no code that's going to become JavaScript refers (even via an unused import statement) to a server-side class.
The GWT compiler works with source files only; however, you need to compile client code into .class files so your server-side classes can refer to them.

How do you go from an abstract project description to actual code?

Maybe it's because I've only been coding for around two semesters now, but the major stumbling block I'm having at this point is converting the professor's project description and requirements to actual code. Since I'm currently in Algorithms 101, I basically do a bottom-up process, starting with a blank whiteboard, drawing out the object and method interactions, and then translating that into classes and code.
But now the prof has tossed interfaces and abstract classes into the mix. Intellectually, I can recognize how they work, but I'm stubbing my toes figuring out how to use these new tools with the current project (simulating a web server).
In my professor's own words, mapping the abstract description to Java code is the real trick. So what steps are best used to go from English (or whatever your language is) to computer code? How do you decide where and when to create an interface, or use an abstract class?
So what steps are best used to go from English (or whatever your language is) to computer code?
Experience is what teaches you how to do this. If it's not coming naturally yet (and don't feel bad if it doesn't, because it takes a long time!), there are some questions you can ask yourself:
What are the main concepts of the system? How are they related to each other? If I was describing this to someone else, what words and phrases would I use? These thoughts will help you decide what classes are useful to think about.
What sorts of behaviors do these things have? Are there natural dependencies between them? (For example, a LineItem isn't relevant or meaningful without the context of an Order, nor is an Engine much use without a Car.) How do the behaviors affect the state of the other objects? Do they communicate with each other, and if so, in what way? These thoughts will help you develop the public interfaces of your classes.
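As a tiny, hypothetical sketch of that "natural dependency" idea (the Order and LineItem names come from the paragraph above; everything else is invented for illustration):

// A LineItem only makes sense inside an Order, so its public interface says so.
final case class Product(name: String, price: BigDecimal)

final class Order(val id: Long) {
  private var items = Vector.empty[LineItem]

  // The only way to get a LineItem is through its owning Order.
  def addItem(product: Product, quantity: Int): LineItem = {
    val item = new LineItem(this, product, quantity)
    items :+= item
    item
  }

  def total: BigDecimal = items.map(_.subtotal).sum
}

final class LineItem(val order: Order, val product: Product, val quantity: Int) {
  def subtotal: BigDecimal = product.price * quantity
}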
That's just the tip of the iceberg, of course. For more about this thought process in general, see Eric Evans's excellent book, Domain-Driven Design.
How do you decide where and when to create an interface, or use an abstract class?
There are no hard-and-fast prescriptions; again, experience is the best guide here. That said, there are certainly some rules of thumb you can follow:
If several unrelated or significantly different object types all provide the same kind of functionality, use an interface. For example, if the Steerable interface has a Steer(Vector bearing) method, there may be lots of different things that can be steered: Boats, Airplanes, CargoShips, Cars, et cetera. These are completely unrelated things. But they all share the common interface of being able to be steered.
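A rough Scala rendering of that Steerable example (the wording above is Java-flavoured; Bearing here is a made-up stand-in for the Vector parameter):

final case class Bearing(degrees: Double)

trait Steerable {
  def steer(bearing: Bearing): Unit
}

// Completely unrelated things, all driven through the one shared interface.
class Boat     extends Steerable { def steer(b: Bearing): Unit = println(s"Boat turning to ${b.degrees}") }
class Airplane extends Steerable { def steer(b: Bearing): Unit = println(s"Airplane banking to ${b.degrees}") }
class Car      extends Steerable { def steer(b: Bearing): Unit = println(s"Car steering to ${b.degrees}") }

object Traffic {
  def steerAll(things: Seq[Steerable], bearing: Bearing): Unit =
    things.foreach(_.steer(bearing))
}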
In general, try to favor an interface instead of an abstract base class. This way you can define a single implementation which implements N interfaces. In the case of Java, you can only have one abstract base class, so you're locked into a particular inheritance hierarchy once you say that a class inherits from another one.
Whenever you don't need implementation from a base class, definitely favor an interface over an abstract base class. This would also be handy if you're operating in a language where inheritance doesn't apply. For example, in C#, you can't have a struct inherit from a base class.
In general...
Read a lot of other people's code. Open source projects are great for that. Respect their licenses though.
You'll never get it perfect. It's an iterative process. Don't be discouraged if you don't get it right.
Practice. Practice. Practice.
Research often. Keep tackling more and more challenging projects / designs. Even if there are easy ones around.
There is no magic bullet, or algorithm for good design.
Nowadays I jump in with a design I believe is decent and work from that.
When the time is right I'll implement it, understanding that the result will have to be refactored (rewritten) sooner rather than later.
Give this project your best shot, and after you get your results back, keep an eye out for your mistakes and how things should have been done.
Keep doing this, and you'll be fine.
What you should really do is code from the top-down, not from the bottom-up. Write your main function as clearly and concisely as you can using APIs that you have not yet created as if they already existed. Then, you can implement those APIs in similar fashion, until you have functions that are only a few lines long. If you code from the bottom-up, you will likely create a whole lot of stuff that you don't actually need.
In terms of when to create an interface... pretty much everything should be an interface. When you use APIs that don't yet exist, assume that every concrete class is an implementation of some interface, and use a declared type that is indicative of that interface. Your inheritance should be done solely with interfaces. Only create concrete classes at the very bottom when you are providing an implementation. I would suggest avoiding abstract classes and just using delegation, although abstract classes are also reasonable when two different implementations differ only slightly and have several functions that have a common implementation. For example, if your interface allows one to iterate over elements and also provides a sum function, the sum function is trivial to implement in terms of the iteration function, so that would be a reasonable use of an abstract class. An alternative would be to use the decorator pattern in that case.
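A small sketch of that last case (all names invented for illustration): the interface declares both iteration and sum, and an abstract base class supplies sum once, in terms of the abstract iterator, so concrete classes only implement the small part that actually differs.

trait IntCollection {
  def iterator: Iterator[Int]
  def sum: Int
}

abstract class AbstractIntCollection extends IntCollection {
  // sum is trivial to write once in terms of the abstract iterator,
  // so the shared implementation lives here.
  def sum: Int = iterator.foldLeft(0)(_ + _)
}

final class IntRange(from: Int, until: Int) extends AbstractIntCollection {
  def iterator: Iterator[Int] = Iterator.range(from, until)
}

// new IntRange(1, 5).sum == 1 + 2 + 3 + 4 == 10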
You might also find the Google Techtalk "How to Design a Good API and Why it Matters" to be helpful in this regard. You might also be interested in reading some of my own software design observations.
Also, for the future, you can keep in your pipeline reading the basics of domain-driven design to align yourself with real-world scenarios; it gives a solid foundation for mapping requirements to real classes.