How do lambdas work in Scala? Are they functions on top of anonymous classes?

The title might be a little confusing, so let me elaborate: I've been reading some criticism of Scala. It was an email sent to Typesafe by Coda Hale (Yammer's Infrastructure Architect) regarding some deficiencies in Scala. To quote:
we stopped seeing lambdas as free and started seeing them as syntactic sugar on top of anonymous classes and thus acquired the same distaste for them as we did anonymous classes.
So, from this, I have a couple of questions regarding how lambdas work in Scala:
What is the difference between a free function and a function that is bound to an anonymous class (technically, aren't all functions bound to the main singleton object?)
What is the impact on performance of using an anonymous class bound function instead of a free function?

Yes, lambdas are still objects, instances of anonymous classes.
This is how the JVM works, all references are objects. You can have either references or values (primitives) and there's no way around it.
Later versions of Java have MethodHandles. But it's worth noting that MethodHandle is also still just an abstract class - albeit one that the JVM specifically knows how to optimise away at runtime.
Also worth noting is that the JVM can often perform escape analysis on abstract classes (such as Scala's functions), and optimise these away too.
On top of this, Scala can use any object with an apply method as though it were a Function. In this case, the explicit call to apply is emitted in the bytecode and you're not dealing with anonymous classes any more.
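To make this concrete, here is a small hand-written sketch (not actual decompiler output; LambdaDemo and Doubler are illustrative names) of the lambda/anonymous-class equivalence and of the apply convention:

object LambdaDemo {
  val inc: Int => Int = x => x + 1                 // lambda syntax

  val incExplicit: Int => Int = new Function1[Int, Int] {
    def apply(x: Int): Int = x + 1                 // roughly what the lambda desugars to (pre-2.12)
  }

  object Doubler {
    def apply(x: Int): Int = x * 2                 // any object with apply can be called like a function
  }

  def main(args: Array[String]): Unit = {
    println(inc(41))          // 42
    println(incExplicit(41))  // 42
    println(Doubler(21))      // 42 -- compiles to a direct call to Doubler.apply
  }
}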
Given all of the above, it's impossible to make a general statement about the performance of Scala's function implementation; it depends on your specific code/use case. In general, I wouldn't worry unless you hit a corner case where your profiler pinpoints a problem here (which is very unlikely).

Well, in C for example a function is just a 32- or 64-bit pointer to a place in memory to jump to, and the concept of a closure doesn't really apply since you can't declare an anonymous C function. I don't know exactly how C++ lambdas work; I'd guess the compiler makes a method and passes the fields you want in the closure along with the parameters. Maybe that's what you're looking for. On the JVM you have to wrap your logic in a class, so now you have a virtual table of methods, fields, and some methods related to synchronization and the type system.
What is the impact on performance? ...I don't know; have you noticed an impact on performance? A lot of that extra Java stuff I described really isn't needed for an anonymous class and might just get optimized out. I imagine there are butterflies that influence the weather more than the extra JVM stuff would affect your software.

Related

What is the mechanism by which anonymous functions are serializable?

I have read various old StackOverflow discussions on this general topic but there is still one part of the puzzle which appears, to me at least, to be missing.
It is simply this: what is the actual mechanism by which the anonymous function is serialized? And, where could we find its source code?
Or is it all just magic?
Other relevant SO articles (the third of these itself points to some useful articles outside StackOverflow):
Serialization of Scala Functions
Why Scala can serialize...
How to serialize functions in Scala
I'm going to answer my own question with what I believe is the correct answer. The reason I'm doing it this way is that this aspect of serialization never seems to be explained, and it does appear to work just by magic. I essentially confirmed (to my satisfaction) the answer as part of the research I was doing to ensure that my question above was indeed appropriate.
But the main reason I'm offering my own answer is that I invite knowledgeable users either to agree with it, to correct it, to expand upon it, or to destroy it. Here goes...
It's all magic. No, I'm just kidding. But essentially the mechanism, once Scala has taken the step of representing the anonymous function as a class, is entirely provided by Java. In addition, we, the programmers, need to ensure that an anonymous function is as much pure code as possible: no references to any objects that might not be serializable. The secret sauce is to be found in the Java class ObjectStreamClass, which, in turn, is invoked by the Java serialization classes ObjectInputStream and ObjectOutputStream.
Essentially the serialized bytes contain the fully-qualified name of the class, its serialVersionUID, and whatever other relevant information is necessary. When deserializing, the system will simply look up the class in the appropriate classpath and return a reference to it. This obviously assumes that the deserializing system has the class in its classpath. The mechanism for that is a little beyond the scope of my research, but it's clear that in a system like Spark it should be easy to arrange.
No (additional) compilation/decompilation of bytecode is necessary, as the class loader has everything necessary. I'm slightly surprised to find ObjectStreamClass in java.io rather than in the reflection package, but I suppose there's an argument for it being there, given the tight coupling with ObjectInputStream and ObjectOutputStream.
One thing to keep in mind is that while we think in terms of serializing/deserializing objects, rather than classes, what we are dealing with here is an object of type Class.
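As a hedged illustration of the mechanism (assuming a pre-2.12 Scala, where the compiler-generated function class implements Serializable), the ordinary Java machinery is all that's needed; SerializeFunction is just an illustrative name:

import java.io._

object SerializeFunction {
  def main(args: Array[String]): Unit = {
    // A "pure" anonymous function: it captures no non-serializable state.
    val f: Int => Int = _ + 1

    // Write it out with the ordinary Java serialization machinery.
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    out.writeObject(f)
    out.close()

    // Read it back: ObjectInputStream finds the generated class by its
    // fully-qualified name (plus serialVersionUID) on the local classpath.
    val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
    val g = in.readObject().asInstanceOf[Int => Int]
    println(g(41)) // 42
  }
}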
One more thing to note is that in Scala 2.12, anonymous functions are implemented differently: as Java 8 lambdas. This has broken the mechanism described above in a rather serious way -- so serious that Spark is currently having trouble supporting Scala 2.12. The holdup appears to be this issue: SPARK-14540.

Where are names/values kept in functional programs?

In C#, value-type fields like int and float are kept on the stack, while reference variables (pointers) live on the stack and the actual objects they point to are kept on the heap. (I hope my understanding is correct here.)
1. Since the functional programming model has no notion of value vs. reference types, where are the values bound to names kept?
2. How do the stack and heap come into play in functional programs?
Thanks
You're trying to compare C#, which is one specific language, with functional languages all as a group. This is an apples-to-oranges comparison (or maybe more accurately, apples-to-spices comparison?).
Within imperative languages alone you can already observe differences in which values are stored on the stack vs. which ones go on the heap. For example, C and C++ (as I understand it) allow the programmer to manually choose between these two options for any type.
And another subtlety is the difference between what the language guarantees to the programmer vs. the way the language is implemented. One example is that recent versions of Oracle's Java VM have an optimization that they call "escape analysis", which is able to allocate an object on the stack if the VM can prove that the object reference does not escape the method (determined after inlining is performed). So even though Java calls its object types "reference" types, this doesn't mean that it will be allocated in the heap. Quoting this article by Brian Goetz:
The Java language does not offer any way to explicitly allocate an object on the stack, but this fact doesn't prevent JVMs from still using stack allocation where appropriate. JVMs can use a technique called escape analysis, by which they can tell that certain objects remain confined to a single thread for their entire lifetime, and that lifetime is bounded by the lifetime of a given stack frame. Such objects can be safely allocated on the stack instead of the heap. Even better, for small objects, the JVM can optimize away the allocation entirely and simply hoist the object's fields into registers.
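Translating that quote into Scala terms, here is a hedged sketch: the local Point instances below never escape the method, so a JVM with escape analysis may stack-allocate them or eliminate them entirely (whether it actually does depends on the JVM and on inlining):

def distance(x1: Double, y1: Double, x2: Double, y2: Double): Double = {
  case class Point(x: Double, y: Double)   // a local class; instances never leave this method
  val a = Point(x1, y1)
  val b = Point(x2, y2)
  math.hypot(a.x - b.x, a.y - b.y)         // after this expression, a and b are dead
}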
Similar considerations apply to functional languages—it all depends on (a) what does the language promise, and (b) how the language implementation works and how sophisticated it is. But we can divide the functional language world into two important camps:
Eager functional languages like Scheme, Scala, Clojure or ML.
Lazy functional languages like Haskell.
There are several types of implementation for eager languages:
Pure stack-based implementations. These work the same way as modern imperative languages. Common Lisp works this way, and since JVM functional languages use the same VM as Java does, so do they.
Pure continuation-passing style implementations. These are completely stackless—everything, including activation frames, is allocated on the heap. These make it easy to support tail-call optimization and first-class continuations. This technique I believe was pioneered by Scheme implementations, and is also used by the Standard ML of New Jersey compiler.
Mixed implementations. These typically are trying to be mostly stack-based but also support tail-call optimization, and maybe first-class continuations. Example: a bunch of random Scheme systems.
Lazy languages are another story, because the conventional call-stack implementation does not translate directly to lazy evaluation. The GHC Haskell compiler is based on a model called the "STG Machine", which does use a stack and a heap, but the way the STG stack works is different from imperative languages; an entry in the STG stack does not correspond to a "function call" as conventional stack entries do.
Since functional languages generally use immutable values (meaning: "variables" that you can't modify), it doesn't matter for the user whether the values are stored on the stack or on the heap.
Because of this, typically the compiler will decide how to store the values. For example, it might decide that small values (integers, floats, pairs of integers, 8 byte arrays, etc) are stored on the stack and large values (strings, lists, ...) are stored on the heap. It is fully the compiler's decision.
For languages like Haskell that support lazy evaluation, values generally need to be stored on the heap (except when certain tricks are used). This is because a variable needs to be either a pointer to the function/closure (thunk) that computes the value, or a pointer to the actual, already-computed value.
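Scala's lazy val gives a rough feel for this thunk behaviour (a sketch only; the actual implementation uses a flag and a cached field rather than a pointer swap):

object LazyDemo {
  lazy val expensive: Int = { println("computing..."); 6 * 7 }  // unevaluated until first use

  def main(args: Array[String]): Unit = {
    println("before first use")
    println(expensive)  // prints "computing..." then 42; the result is now cached
    println(expensive)  // prints 42 only; the cached value is reused
  }
}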
Since Scala is mentioned in the tags, I'll add an answer about that language. Scala compiles to JVM bytecode, so, at the end of the day, it works just like any other JVM language (including Java):
references and locally defined primitives go on the stack;
objects (including their primitive fields) go on the heap.
About primitive types, it's worth noting that Scala doesn't actually have primitive types in the language; but value types (like Int or Long) do get compiled to the JVM's primitive types in the bytecode, when possible.
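A small sketch of where that boundary lies (illustrative names; actual behaviour is subject to JIT optimizations):

object Boxing {
  def addPrimitive(a: Int, b: Int): Int = a + b   // both Ints stay JVM primitives on the operand stack

  def addGeneric[T](a: T, b: T)(combine: (T, T) => T): T = combine(a, b)
  // At a generic type, the Int arguments are boxed to java.lang.Integer
  // and live on the heap -- unless the JIT removes the allocation.

  def main(args: Array[String]): Unit = {
    println(addPrimitive(1, 2))       // 3, no allocation
    println(addGeneric(1, 2)(_ + _))  // 3, with boxing (modulo JIT)
  }
}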
edit: To avoid leaving something incorrect in this answer: as mentioned in Luis Casillas' extensive answer, objects may end up stored on the stack (or even not allocated as objects at all) if the JVM can judge that it's safe and efficient to do so.

Side effects in Scala

I am learning Scala these days. I have a slight familiarity with Haskell, although I cannot claim to know it well.
Parenthetical remark for those who are not familiar with Haskell
One trait that I like in Haskell is that not only functions are first-class citizens, but side effects (let me call them actions) are. An action that, when executed, will endow you with a value of type a, belongs to a specific type IO a. You can pass these actions around pretty much like any other value, and combine them in interesting ways.
In fact, combining the side effects is the only way in Haskell to do something with them, as you cannot execute them. Rather, the program that will be executed is the combined action returned by your main function. This is a neat trick that allows functions to be pure, while letting your program actually do something other than consuming power.
The main advantage of this approach is that the compiler is aware of the parts of the code where you perform side effects, so it can help you catch errors with them.
Actual question
Is there some way in Scala to have the compiler type check side effects for you, so that - for instance - you are guaranteed not to execute side effects inside a certain function?
No, this is not possible in principle in Scala, as the language does not enforce referential transparency -- the language semantics are oblivious to side effects. Your compiler will not track and enforce freedom from side effects for you.
You can, however, use the type system to tag some actions as being of an IO type and, with programmer discipline, get some of the compiler's support -- but without the compiler's proof.
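Here is a minimal sketch of that tagging approach -- a toy IO type, not a real library (libraries such as scalaz provide more serious versions). Note that nothing stops a colleague from calling println directly; that is exactly the "programmer discipline" caveat:

final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B] = IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
}

object IO {
  def delay[A](a: => A): IO[A] = IO(() => a)
}

object Demo {
  // Building the program performs no effects; it only composes descriptions.
  val program: IO[Unit] = for {
    _    <- IO.delay(println("What is your name?"))
    name <- IO.delay(scala.io.StdIn.readLine())
    _    <- IO.delay(println(s"Hello, $name"))
  } yield ()

  def main(args: Array[String]): Unit =
    program.unsafeRun()   // effects happen only here, at the edge of the program
}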
Enforcing referential transparency is pretty much incompatible with Scala's goal of having a class/object system that is interoperable with Java.
Java code can be impure in arbitrary ways (and may not be available for analysis when the Scala compiler runs) so the Scala compiler would have to assume all foreign code is impure (assigning them an IO type). To implement pure Scala code with calls to Java, you would have to wrap the calls in something equivalent to unsafePerformIO. This adds boilerplate and makes the interoperability much less pleasant, but it gets worse.
Having to assume that all Java code is in IO unless the programmer promises otherwise would pretty much kill inheriting from Java classes. All the inherited methods would have to be assumed to be in the IO type; this would even be true of interfaces, since the Scala compiler would have to assume the existence of an impure implementation somewhere out there in Java-land. So you could never derive a Scala class with any non-IO methods from a Java class or interface.
Even worse, even for classes defined in Scala, there could theoretically be an untracked subclass defined in Java with impure methods, whose instances might be passed back in to Scala as instances of the parent class. So unless the Scala compiler can prove that a given object could not possibly be an instance of a class defined by Java code, it must assume that any method call on that object might call code that was compiled by the Java compiler without respecting the laws of what functions returning results not in IO can do. This would force almost everything to be in IO. But putting everything in IO is exactly equivalent to putting nothing in IO and just not tracking side effects!
So ultimately, Scala encourages you to write pure code, but it makes no attempt to enforce that you do so. As far as the compiler is concerned, any call to anything can have side effects.

In GWT, why shouldn't a method return an interface?

In this video from Google IO 2009, the presenter very quickly says that signatures of methods should return concrete types instead of interfaces.
From what I heard in the video, this has something to do with the GWT Java-to-Javascript compiler.
What's the reason behind this choice ?
What does the interface in the method signature do to the compiler ?
What methods can return interfaces instead of concrete types, and which are better off returning concrete instances ?
This has to do with the gwt-compiler, as you say correctly. EDIT: However, as Daniel noted in a comment below, this does not apply to the gwt-compiler in general but only when using GWT-RPC.
If you declare List instead of ArrayList as the return type, the gwt-compiler will include the complete List-hierarchy (i.e. all types implementing List) in your compiled code. If you use ArrayList, the compiler will only need to include the ArrayList hierarchy (i.e. all types implementing ArrayList -- which usually is just ArrayList itself). Using an interface instead of a concrete class you will pay a penalty in terms of compile time and in the size of your generated code (and thus the amount of code each user has to download when running your app).
You were also asking for the reason: If you use the interface (instead of a concrete class) the compiler does not know at compile time which implementations of these interfaces are going to be used. Thus, it includes all possible implementations.
Regarding your last question: all methods CAN be declared to return an interface (that is what you meant, right?). However, the above penalty applies.
And by the way: As I understand it, this problem is not restricted to methods. It applies to all type declarations: variables, parameters. Whenever you use an interface to declare something, the compiler will include the complete hierarchy of sub-interfaces and implementing classes. (So obviously if you declare your own interface with only one or two implementing classes then you are not incurring a big penalty. That is how I use interfaces in GWT.)
In short: use concrete classes whenever possible.
(Small suggestion: it would help if you gave the time stamp when you refer to a video.)
This and other performance tips were presented at Google IO 2011 - High-performance GWT.
At about the 7 min point the speaker addresses 'RPC Type Explosion':
For some reason I thought the GWT compiler would optimize it away again but it appears I was mistaken.

Debunking Scala myths [closed]

What are the most commonly held misconceptions about the Scala language, and what counter-examples exist to these?
UPDATE
I was thinking more about various claims I've seen, such as "Scala is dynamically typed" and "Scala is a scripting language".
I accept that "Scala is [Simple/Complex]" might be considered a myth, but it's also a viewpoint that's very dependent on context. My personal belief is that the very same features can make Scala appear either simple or complex depending on who's using them. Ultimately, the language just offers abstractions, and it's the way these are used that shapes perceptions.
Not only that, but it has a certain tendency to inflame arguments, and I've not yet seen anyone change a strongly-held viewpoint on the topic...
Myth: That Scala’s “Option” and Haskell’s “Maybe” types won’t save you from null. :-)
Debunked: Why Scala's "Option" and Haskell's "Maybe" types will save you from null by James Iry.
Myth: Scala supports operator overloading.
Actually, Scala just has very flexible method naming rules and infix syntax for method invocation, with special rules for determining method precedence when the infix syntax is used with 'operators'. This subtle distinction has critical implications for the utility and potential for abuse of this language feature compared to true operator overloading (a la C++), as explained more thoroughly in James Iry's answer to this question.
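For example (a hedged sketch; Vec is an illustrative class), + below is an ordinary method, and the infix form is just method invocation:

class Vec(val x: Int) {
  def +(other: Vec): Vec = new Vec(x + other.x)   // an ordinary method that happens to be named +
}

val v1 = new Vec(1) + new Vec(2)      // infix syntax: sugar for (new Vec(1)).+(new Vec(2))
val v2 = (new Vec(1)).+(new Vec(2))   // exactly the same call, written with a dot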
Myth: methods and functions are the same thing.
In fact, a function is a value (an instance of one of the FunctionN classes), while a method is not. Jim McBeath explains the differences in greater detail. The most important practical distinctions (illustrated in the sketch after this list) are:
Only methods can have type parameters
Only methods can take implicit arguments
Only methods can have named and default parameters
When referring to a method, an underscore is often necessary to distinguish method invocation from partial function application (e.g. str.length evaluates to a number, while str.length _ evaluates to a zero-argument function).
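A small runnable sketch of these distinctions (MethodsVsFunctions is an illustrative name):

object MethodsVsFunctions {
  def twice(x: Int): Int = x * 2     // a method, defined with def
  val twiceF: Int => Int = twice     // eta-expansion: the method becomes a Function1 value

  def main(args: Array[String]): Unit = {
    println(twiceF(21))                            // 42
    println(twiceF.isInstanceOf[Function1[_, _]])  // true: the function value is an object

    val str = "hello"
    val lenF = str.length _   // the underscore forces partial application: () => Int
    println(lenF())           // 5
  }
}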
I disagree with the argument that Scala is hard because you can use very advanced features to do hard stuff with it. The scalability of Scala means that you can write DSL abstractions and high-level APIs in Scala itself that would otherwise need a language extension. So, to be fair, you need to compare Scala libraries to other languages' compilers. People don't say that C# is hard because (I assume; I don't have first-hand knowledge of this) the C# compiler is pretty impenetrable. For Scala it's all out in the open. But we need to get to a point where we make clear that most people don't need to write code at this level, nor should they.
I think a common misconception amongst many scala developers, those at EPFL (and yourself, Kevin) is that "scala is a simple language". The argument usually goes something like this:
scala has few keywords
scala reuses the same few constructs (e.g. PartialFunction syntax is used as the body of a catch block)
scala has a few simple rules which allow you to create library code (which may appear as if the language has special keywords/constructs). I'm thinking here of implicits; methods containing colons; allowed identifier symbols; the equivalence of X(a, b) and a X b with extractors. And so on
scala's declaration-site variance means that the type system just gets out of your way. No more wildcards and ? super T
My personal opinion is that this argument is completely and utterly bogus. Scala's type system taken together with implicits allows one to write frankly impenetrable code for the average developer. Any suggestion otherwise is just preposterous, regardless of what the above "metrics" might lead you to think. (Note here that those who I've seen scoffing at the non-complexity of Java on Twitter and elsewhere happen to be uber-clever types who, it sometimes seems, had a grasp of monads, functors and arrows before they were out of short pants).
The obvious arguments against this are (of course):
you don't have to write code like this
you don't have to pander to the average developer
Of these, it seems to me that only #2 is valid. Whether or not you write code quite as complex as scalaz, I think it's just silly to use the language (and continue to use it) with no real understanding of the type system. How else can one get the best out of the language?
There is a myth that Scala is difficult because Scala is a complex language.
This is false--by a variety of metrics, Scala is no more complex than Java. (Size of grammar, lines of code or number of classes or number of methods in the standard API, etc..)
But it is undeniably the case that Scala code can be ferociously difficult to understand. How can this be, if Scala is not a complex language?
The answer is that Scala is a powerful language. Unlike Java, which has many special constructs (like enums) that accomplish one particular thing--and requires you to learn specialized syntax that applies just to that one thing, Scala has a variety of very general constructs. By mixing and matching these constructs, one can express very complex ideas with very little code. And, unsurprisingly, if someone comes along who has not had the same complex idea and tries to figure out what you're doing with this very compact code, they may find it daunting--more daunting, even, than if they saw a couple of pages of code to do the same thing, since then at least they'd realize how much conceptual stuff there was to understand!
There is also an issue of whether things are more complex than they really need to be. For example, some of the type gymnastics present in the collections library make the collections a joy to use but perplexing to implement or extend. The goals here are not particularly complicated (e.g. subclasses should return their own types), but the methods required (higher-kinded types, implicit builders, etc.) are complex. (So complex, in fact, that Java just gives up and doesn't try, rather than doing it "properly" as in Scala. Also, in principle, there is hope that this will improve in the future, since the method can evolve to more closely match the goal.) In other cases, the goals are complex; list.filter(_<5).sorted.grouped(10).flatMap(_.tail.headOption) is a bit of a mess, but if you really want to take all numbers less than 5, and then take every 2nd number out of 10 in the remaining list, well, that's just a somewhat complicated idea, and the code pretty much says what it does if you know the basic collections operations.
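Working that chained example step by step in the REPL (illustrative input data):

scala> val list = List(12, 1, 7, 3, 14, 2, 4)
list: List[Int] = List(12, 1, 7, 3, 14, 2, 4)

scala> list.filter(_ < 5).sorted
res0: List[Int] = List(1, 2, 3, 4)

scala> list.filter(_ < 5).sorted.grouped(10).toList
res1: List[List[Int]] = List(List(1, 2, 3, 4))

scala> list.filter(_ < 5).sorted.grouped(10).flatMap(_.tail.headOption).toList
res2: List[Int] = List(2)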
Summary: Scala is not complex, but it allows you to compactly express complex ideas. Compact expression of complex ideas can be daunting.
There is a myth that Scala is non-deployable, whereas a wide range of third-party Java libraries can be deployed without a second thought.
To the extent that this myth exists, I suspect it exists among people who are not accustomed to separating a virtual machine and API from a language and compiler. If java == javac == Java API in your mind, you might get a little nervous if someone suggests using scalac instead of javac, because you see how nicely your JVM runs.
Scala ends up as JVM bytecode, plus its own custom library. There's no reason to be any more worried about deploying Scala on a small scale, or as part of some other large project, than there is in deploying any other library that may or may not stay compatible with whichever JVM you prefer. Granted, the Scala development team is not backed by quite as much force as the Google collections or Apache Commons, but it's got at least as much weight behind it as things like the Java Advanced Imaging project.
Myth:
def foo() = "something"
and
def bar = "something"
are the same.
They are not: you can call foo(), but bar() tries to call the apply method of StringLike with no arguments (which results in an error).
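A quick REPL session makes the difference visible (transcript abridged; the exact error message depends on your Scala version):

scala> def foo() = "something"
foo: ()String

scala> foo()
res0: String = something

scala> def bar = "something"
bar: String

scala> bar()
<console>:12: error: not enough arguments for method apply: (index: Int)Char in class StringOps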
Some common misconceptions related to Actors library:
Actors handle incoming messages in parallel, in multiple threads / against a thread pool. (In fact, handling messages in multiple threads is contrary to the actors concept and may lead to race conditions. All messages are handled sequentially in one thread: thread-based actors use one thread both for mailbox processing and execution, while event-based actors may share one VM thread for execution, using a multi-threaded executor to schedule mailbox processing.)
Uncaught exceptions don't change actor's behavior/state (in fact, all uncaught exceptions terminate the actor)
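A hedged sketch with the (now deprecated) scala.actors library, written from memory of that API: even if "inc" messages are sent from many threads, the actor body below handles them one at a time, so the unsynchronized var is safe:

import scala.actors.Actor._

val counter = actor {
  var n = 0            // safe without locks: the actor processes one message at a time
  loop {
    react {
      case "inc" => n += 1
      case "get" => reply(n)
    }
  }
}

// usage: counter ! "inc" (asynchronous), or counter !? "get" (synchronous request/reply)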
Myth: You can replace a fold with a reduce when computing something like a sum from zero.
This is a common mistake/misconception among new users of Scala, particularly those without prior functional programming experience. The following expressions are not equivalent:
seq.foldLeft(0)(_+_)
seq.reduceLeft(_+_)
The two expressions differ in how they handle the empty sequence: the fold produces a valid result (0), while the reduce throws an exception.
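The difference shows up immediately in the REPL on an empty sequence:

scala> Seq.empty[Int].foldLeft(0)(_ + _)
res0: Int = 0

scala> Seq.empty[Int].reduceLeft(_ + _)
java.lang.UnsupportedOperationException: empty.reduceLeft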
Myth: Pattern matching doesn't fit well with the OO paradigm.
Debunked here by Martin Odersky himself. (Also see this paper - Matching Objects with Patterns - by Odersky et al.)
Myth: this.type refers to the same type represented by this.getClass.
As an example of this misconception, one might assume that in the following code the type of v.me is B:
trait A { val me: this.type = this }
class B extends A
val v = new B
In reality, this.type refers to the type whose only instance is this. In general, x.type is the singleton type whose only instance is x. So in the example above, the type of v.me is v.type. The following session demonstrates the principle:
scala> val s = "a string"
s: java.lang.String = a string
scala> var v: s.type = s
v: s.type = a string
scala> v = "another string"
<console>:7: error: type mismatch;
found : java.lang.String("another string")
required: s.type
v = "another string"
Scala has type inference and refinement types (structural types), whereas Java does not.
The myth is busted by James Iry.
Myth: that Scala is highly scalable, without qualifying what forms of scalability.
Scala may indeed be highly scalable in terms of the ability to express higher-level denotational semantics, and this makes it a very good language for experimentation and even for scaling production at the project-level scale of top-down coordinated compositionality.
However, every referentially opaque language (i.e. one that allows mutable data structures) is imperative (not declarative) and will not scale to WAN-wide, bottom-up, uncoordinated compositionality and security. In other words, imperative languages are compositional (and security) spaghetti w.r.t. uncoordinated development of modules. I realize such uncoordinated development is perhaps currently considered by most to be a "pipe dream" and thus perhaps not a high priority. And this is not to disparage the benefit to compositionality (i.e. eliminating corner cases) that higher-level semantic unification can provide, e.g. a category-theory model for the standard library.
There will possibly be significant cognitive dissonance for many readers, especially since there are popular misconceptions about imperative vs. declarative (i.e. mutable vs. immutable) and eager vs. lazy; e.g. monadic semantics are never inherently imperative, yet it is often falsely claimed that they are. Yes, in Haskell the IO monad is imperative, but its being imperative has nothing to do with its being a monad.
I explained this in more detail in the "Copute Tutorial" and "Purity" sections, which is either at the home page or temporarily at this link.
My point is that I am very grateful Scala exists, but I want to clarify what Scala scales and what it does not. I need Scala for what it does well; for me it is the ideal platform to prototype a new declarative language. But Scala itself is not exclusively declarative, and afaik referential transparency can't be enforced by the Scala compiler, other than by remembering to use val everywhere.
I think my point applies to the complexity debate about Scala. I have found (so far, and mostly conceptually, since I have limited actual experience with my new language) that removing mutability and loops, while retaining diamond multiple-inheritance subtyping (which Haskell doesn't have), radically simplifies the language. For example, the Unit fiction disappears, and afaics a slew of other issues and constructs become unnecessary, e.g. a non-category-theory standard library, for-comprehensions, etc.