Why is the Scala API documentation missing classes?

For example, the Predef object says it extends LowPriorityImplicits, but there is no documentation for LowPriorityImplicits.
https://www.scala-lang.org/api/current/scala/Predef$.html
More curious than anything else.

Part of the Scala standard library is packaged privately because, among other reasons, things could go wrong if those definitions were used carelessly, compromising the soundness of the language. The docs for those private definitions don't get generated or published. You can see the implementation of LowPriorityImplicits here
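As a rough illustration of how package-qualified privacy behaves (the package and class names below are made up, not the real Predef internals), a definition marked private[somePackage] can be used freely inside that package, but it is not part of the public API, and published Scaladoc typically omits it:

package com.example.lib {

  // Visible only inside com.example.lib; omitted from the published Scaladoc.
  private[lib] class InternalHelper {
    def bump(n: Int): Int = n + 1
  }

  class PublicApi {
    private val helper = new InternalHelper   // fine: same package
    def next(n: Int): Int = helper.bump(n)
  }
}

// Code outside com.example.lib cannot name InternalHelper at all:
// new com.example.lib.InternalHelper   // would not compile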


What is the mechanism by which anonymous functions are serializable?

I have read various old StackOverflow discussions on this general topic but there is still one part of the puzzle which appears, to me at least, to be missing.
It is simply this: what is the actual mechanism by which the anonymous function is serialized? And, where could we find its source code?
Or is it all just magic?
Other relevant SO articles (the third of these itself points to some useful articles outside StackOverflow):
Serialization of Scala Functions
Why Scala can serialize...
How to serialize functions in Scala
I'm going to answer my own question with what I believe is the correct answer. The reason I'm doing it this way is that this aspect of serialization never seems to be explained, and it does appear to work just by magic. I essentially confirmed (to my satisfaction) the answer as part of the research I did to ensure that my question above was indeed appropriate.
But the main reason I'm offering my own answer is that I invite knowledgeable users either to agree with it, to correct it, to expand upon it, or to destroy it. Here goes...
It's all magic. No, I'm just kidding. But essentially, once Scala has taken the step of representing the anonymous function as a class, the mechanism is provided entirely by Java. In addition, we, the programmers, need to ensure that an anonymous function is as close to pure code as possible: no references to any objects that might not be serializable. The secret sauce is found in the Java class ObjectStreamClass, which in turn is invoked by the Java serialization classes ObjectInputStream and ObjectOutputStream.
Essentially the serialized bytes contain the full pathname of the class, its serialVersionUID, and whatever other relevant information is necessary. When deserializing, the system will simply look up the class in the appropriate classpath and return a reference to it. This obviously assumes that the deserializing system has the class in its classpath. The mechanism for that is a little beyond the scope of my research but it's clear that in a system like Spark, it should be easy to arrange.
No (additional) compilation/decompilation of byte code is necessary as the classLoader has everything necessary. I'm slightly surprised to find the ObjectStreamClass in java.io rather than in the reflection package, but I suppose there's an argument for it being there, given the tight coupling with ObjectInputStream and ObjectOutputStream.
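As a concrete, simplified sketch of the round trip described above (assuming the function captures nothing unserializable; whether a given anonymous function is actually serializable depends on the Scala version and on what it closes over, as discussed below), the plain Java streams are all that is involved:

import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// A "pure" anonymous function: it captures nothing that isn't serializable.
val double: Int => Int = _ * 2

// Serialization: ObjectOutputStream uses ObjectStreamClass to write the class
// descriptor (name, serialVersionUID, fields) followed by the captured state.
val bos = new ByteArrayOutputStream()
val oos = new ObjectOutputStream(bos)
oos.writeObject(double)
oos.close()
val bytes = bos.toByteArray

// Deserialization: ObjectInputStream reads the descriptor back, asks the class
// loader to find that class on the local classpath, and rebuilds the instance.
val ois = new ObjectInputStream(new ByteArrayInputStream(bytes))
val restored = ois.readObject().asInstanceOf[Int => Int]
restored(21)  // 42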
One thing to keep in mind is that while we think in terms of serializing/deserializing objects, rather than classes, what we are dealing with here is an object of type Class.
One more thing to note is that in Scala 2.12, anonymous functions are implemented differently: as Java 8 lambdas. This has broken the mechanism described above in a rather serious way. So serious, in fact, that Spark is currently having trouble supporting Scala 2.12. The holdup appears to be this issue: SPARK-14540.

shared domain with scala.js, how?

Probably a basic question, but I'm confused with the various documentations and examples around scala.js.
I have a domain model I would like to share between scala and scala.js, let's say:
class Estimator(val nickname: String)
... and of course I would like to send objects between the web-client (scala.js with angular via angulate) and the server (scala with spring-mvc on spring-boot).
Should the class extend js.Object? And be annotated with @ScalaJSDefined (not yet deprecated in v0.6.15)?
If yes, that would be an unwanted dependency that also leaks into the server part. Neither @ScalaJSDefined nor js.Object is in the dummy scalajs-stubs. Or am I missing something?
If no, how do I pass them through $http.post, which expects a js.Any? I also get some TypeError at other places. Should I pickle/unpickle everywhere, or is there an automatic way?
EDIT 2017-03-30:
Actually this relates to Angulate, the AngularJS facade I chose. For two features (communicating with an HTTP server and displaying model fields in HTML), the domain classes have to be JavaScript classes. In Angulate's example, the domain model is duplicated.
There is also (sadly) no plan to include js.Object in scalajs-stubs to overcome this problem. Details in https://github.com/scala-js/scala-js/issues/2564 . Perhaps js.Object wouldn't hurt so much on the JVM...
So, which web frameworks and facades for scala.js do or don't nicely support a shared domain? Not angulate1, probably Udash, perhaps react?
(Caveat: I don't know Angulate, which might affect some of this. Speaking generally, though...)
No, those shared objects shouldn't derive from js.Object or use @ScalaJSDefined -- those are only for objects that are designed to interface with JavaScript itself, and it doesn't sound like that's what you have in mind. Objects that are just for Scala don't need them.
But yes -- in general, you're usually going to need to pickle the communications in one way or another. Which pickling library you use is up to you (there are several), but remember that the communication is simply a stream of bytes -- you have to tell the system how to serialize and deserialize between your domain objects and those bytes.
There isn't anything automatic in Scala.js per se -- that's just a language, and it doesn't dictate your library choices. You can use implicits to make the pickling semi-automatic, but I recommend being a bit careful with that. I don't see anything obvious in the Angulate docs indicating that it does the pickling automatically.
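As a rough illustration of the pickling step -- using the upickle library purely as an example (any of the available pickling libraries would do), and with the domain class shown as a case class so the macro derivation works -- the shared class gets a ReadWriter and both sides convert between the object and a JSON string:

import upickle.default.{macroRW, read, write, ReadWriter}

// Shared between the JVM server and the Scala.js client; no js.Object needed.
case class Estimator(nickname: String)
object Estimator {
  implicit val rw: ReadWriter[Estimator] = macroRW
}

val json: String = write(Estimator("alice"))   // {"nickname":"alice"}
val back: Estimator = read[Estimator](json)    // Estimator("alice")
// The client sends and receives the String (or bytes); only the wire format is shared.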

Is it possible to implement something akin to Scala's @BeanProperty with macros?

I would like to create an annotation or trait that adds methods to an object at compile time, dynamically, based on existing fields. Although I'm interested in something at the class level, I'd also be happy with field-level annotations (or something else more granular).
An older Stack Overflow question asking about the implementation details of Scala's @BeanProperty was answered with, "It's a compiler plugin, but macros may also allow you to do this". Given the official (if experimental) release of macros in Scala 2.10, is this sort of functionality now possible?
Update: This answer is not valid anymore. See Eugene's comment.
No, it is not yet possible.
In 2.10 there exist only def macros, which can't do anything comparable. For 2.11 the world is a bit better: macro annotations and an implementation for introducing members to classes already exist. But they are only a few weeks old and therefore will only work for some corner cases. Furthermore, the implementation that introduces members to classes lives in a different branch than the implementation of macro annotations, so it is not yet possible to use them together.
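For reference, here is roughly what a member-introducing macro annotation looks like once that machinery is available. This is a sketch under stated assumptions -- it needs the separate macro-paradise compiler plugin (Scala 2.11+), the names are made up, and it is not the actual @BeanProperty implementation:

import scala.annotation.{compileTimeOnly, StaticAnnotation}
import scala.language.experimental.macros
import scala.reflect.macros.whitebox

@compileTimeOnly("enable the macro-paradise plugin to expand macro annotations")
class beanGetters extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro BeanGettersMacro.impl
}

object BeanGettersMacro {
  def impl(c: whitebox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    val result = annottees.map(_.tree) match {
      // Match a plain class and synthesize a getX method per constructor parameter.
      case List(q"$mods class $name[..$tparams](..$params) extends ..$parents { ..$body }") =>
        val getters = params.map { case q"$pmods val $pname: $ptpe = $rhs" =>
          val getterName = TermName("get" + pname.toString.capitalize)
          q"def $getterName: $ptpe = $pname"
        }
        q"""$mods class $name[..$tparams](..$params) extends ..$parents {
              ..$body
              ..$getters
            }"""
      case _ =>
        c.abort(c.enclosingPosition, "@beanGetters can only annotate a class")
    }
    c.Expr[Any](result)
  }
}

// Usage (in a separate compilation unit):
//   @beanGetters class Person(name: String, age: Int)
//   new Person("Ada", 36).getName   // "Ada"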

How to determine required parameters from Scala API documentation?

I'm having a hard time deciphering Scala API documentation.
For example, I've defined a timestamp for use in a database.
def postedDate = column[Timestamp]("posted_date", O NotNull, O Default new Timestamp(Calendar.getInstance.getTimeInMillis), O DBType("timestamp"))
If I hadn't read several examples, none of which were in the API docs, how could I have constructed this statement? From the Column documentation, how could I know the parameters?
I guessed it had something to do with TimestampTypeMapperDelegate but it is still not crystal clear how to use it.
The first thing to note from the scaladoc for Column is that it is abstract, so you probably want to deal directly with one of its subclasses. For example, NamedColumn.
Other things to note are that it has a type parameter and the constructor takes an implicit argument of a TypeMapper of the same parameter type. The docs for TypeMapper provide an example of how to create a custom one, but if you look at the subclasses, there are plenty of provided ones (such as timestamp). The fact that the argument is declared as implicit suggests that there could be one in scope, and if so, it will automatically be used as the parameter without explicitly stating that. If there isn't an implicit in scope that satisfies the requirement, you'll have to provide it.
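To make the implicit-parameter mechanics concrete, here is a stripped-down stand-in (these are not the real ScalaQuery definitions, just hypothetical names with the same shape):

trait TypeMapper[T]
object TypeMapper {
  // Implicits in the companion object are always in implicit scope for TypeMapper[T].
  implicit object IntMapper extends TypeMapper[Int]
  implicit object TimestampMapper extends TypeMapper[java.sql.Timestamp]
}

// Like ScalaQuery's column[T], this only compiles when a TypeMapper[T] is in scope.
def column[T](name: String)(implicit tm: TypeMapper[T]): String =
  s"column '$name' mapped by $tm"

column[Int]("id")                      // IntMapper is supplied automatically
column[java.sql.Timestamp]("posted")   // TimestampMapper is supplied automatically
// column[java.util.UUID]("uuid")      // would not compile: no implicit TypeMapper[UUID]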
The next thing to note is that a TypeMapper is a trait that extends a function from a BasicProfile to a TypeMapperDelegate. Basically, what's going on here is that the definition of a type mapper is separated from its implementation. This is done to support multiple flavors of database. If you look at the subclasses of BasicProfile, it will become apparent that ScalaQuery supports quite a few, and as we know, their implementations are sometimes quite different.
If you chase the docs for a while, you end up at the BasicTypeMapperDelegates trait, which has a bunch of vals in it with delegates for each of the basic types (including timestamps).
BasicTable defines a method called column (which you've found), and the intent of the column method is to shield you from having to know anything about TypeMappers and Delegates as long as you are using standard types.
So, to answer your question about whether there is enough information in the API docs: personally I'd say yes, but the docs could be enhanced with better descriptions of classes, objects, traits and methods.
All that said, I've always found that leveraging examples, API docs, and even the source code of the project provides a robust way of getting up to speed on most open source projects. To be quite blunt, many of these projects (including ScalaQuery) have saved me countless hours of work, but probably cost the author(s) countless hours of personal time to create and make available. These are not necessarily commercial products, and we as consumers shouldn't hold them to the same standards that we hold for-fee products. If you find docs inadequate, contribute!

What reflection capabilities can we expect from Scala 2.10?

Scala 2.10 brings reflection other than that provided by the JVM (or, I guess, the CLR).
What in particular do we have to look forward to, and how will it improve on the platform?
For example, will there be a class that reflects the language's convertibility between fields and accessor methods, so that I can iterate over the properties of an object?
update 2012-07-04:
Daniel Sobral (also on SO) details in his blog post "JSON serialization with reflection in Scala! Part 1 - So you want to do reflection?" some of the features coming with reflection:
To recapitulate, Scala 2.10 will come with a Scala reflection library.
That library is used by the compiler itself, but divided into layers through the cake pattern, so different users see different levels of detail, keeping jar sizes adequate to each one's use, and hopefully hiding unwanted detail.
The reflection library also integrates with the upcoming macro facilities, enabling enterprising coders to manipulate code at compile time.
update 2012-06-14. (from Eugene Burmako):
In Scala 2.10.0-M4, we have released the new reflection API that will most likely make it into 2.10.0-final without significant changes.
More details about the API can be found:
SO answer Get companion object instance with new Scala reflection API
Scala Reflection SIP, June 2012 by Martin Odersky (SIP, actually "Scala Improvement Process")
summary and migration route from M3
Extracts:
Universes and mirrors are now separate entities: universes host reflection artifacts (trees, symbols, types, etc.), while mirrors abstract the loading of those artifacts (e.g. JavaMirror loads stuff using a classloader and annotation unpickler, while GlobalMirror uses the internal compiler classreader to achieve the same goal).
The public reflection API is split into scala.reflect.base and scala.reflect.api. The former represents a minimalistic snapshot that is exactly enough to build reified trees and types. To build, but not to analyze - everything smart (for example, getting a type signature) is implemented in scala.reflect.api.
Both reflection domains have their own universe: scala.reflect.basis and scala.reflect.runtime.universe. The former is super lightweight and doesn't involve any classloaders, while the latter represents a stripped-down compiler.
Initial answer, Sept. 2011:
You can see the evolution of the reflect package in the Scala GitHub repo, with these two very recent commits:
Changes to Liftcode to use new reflection semantics, where a compiler uses type checking.
Started work on compiler toolbox that can compile reflect trees at runtime.
(Liftcode, according to this thread, aims at simplifying "writing code that writes code".)
The class scala/reflect/internal/Importers.scala (created yesterday!) is a good example of using those latest reflection features.
Two links which should be of interest:
The scala-internals mailing list discussion on the reflection api.
The nightly build api doc for 2.10-SNAPSHOT.
Personally I am hoping to use this for doing runtime discovery of extensions (i.e. a type that extends a known trait), and generating UI forms and a few other things from those.
With current 2.10M4 you already can iterate over members of a class:
reflect.runtime.universe.typeOf[MyClass].members.filter(!_.isMethod)
The above code lists Symbol objects representing members of a class MyClass which are not methods. There are tons of ways you can fine-tune this.
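For example, to pull out just the accessor methods the compiler generates for case-class parameters, something along these lines works against the final 2.10 API (MyClass here is a made-up example class):

import scala.reflect.runtime.universe._

case class MyClass(name: String, count: Int)

// Non-method members (the underlying fields), as in the snippet above:
val fields = typeOf[MyClass].members.filter(!_.isMethod)

// Or just the accessors generated for the case-class parameters:
val accessors = typeOf[MyClass].members.collect {
  case m: MethodSymbol if m.isCaseAccessor => m
}

accessors.map(_.name.toString)  // e.g. List("count", "name")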