Is there a method_missing in Scala?

similar to the one in Ruby

Yes, as of Scala 2.9 (with the -Xexperimental option) you can use the Dynamic trait (scaladoc). Classes that extend Dynamic get the magical method applyDynamic(methodName, args), which behaves like Ruby's method_missing.
Among other things, the Dynamic trait can be useful for interfacing with dynamic languages on the JVM.
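For illustration, here is a minimal sketch of the idea as it looks in later Scala versions (2.10 and up, where Dynamic sits behind an import of scala.language.dynamics instead of -Xexperimental); the Proxy class and the greet call are invented for the example:

import scala.language.dynamics

// A proxy that "catches" calls to methods it does not define.
class Proxy extends Dynamic {
  def applyDynamic(name: String)(args: Any*): String =
    s"called $name with (${args.mkString(", ")})"
}

val p = new Proxy
p.greet("world")  // the compiler rewrites this to p.applyDynamic("greet")("world")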

The following is no longer strictly true with the Dynamic trait found in [experimental] Scala 2.9. See the answer from Kipton Barros, for example.
However, Dynamic is still not quite like method_missing; rather, it employs compiler magic to rewrite calls to "missing" methods, as determined statically, into calls on a proxy method (applyDynamic). It is this static determination of the "missing" methods that differentiates it from method_missing from a polymorphism viewpoint: one would need to dynamically forward methods (e.g. with reflection) to get true method_missing behavior. (Of course, this can be avoided by avoiding sub-types :-)
No. Such a concept does not exist in Java or Scala.
As in Java, all methods in Scala are 'bound' at compile time (this is also what determines which overloaded method is used, etc.). If a program compiles, the method exists (or did, according to the compiler); otherwise it does not. This is why you can get a NoSuchMethodError if you change a class definition without rebuilding all affected classes.
If you are just worried about calling a method on an object that conforms to some signature ("typed duck typing"), then you may be able to get away with structural typing. Structural typing in Scala is implemented with reflection -- it defers the 'binding' until runtime, so a runtime error may be generated. Unlike method_missing, this does not allow the target to handle the error, but it does allow the caller to (and the caller could theoretically call a user-defined methodMissing method on the target... but this is probably not the best way to approach Scala. Scala is not Ruby :-)
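As a minimal sketch of what that looks like (the quack method and the two classes are invented for illustration; on Scala 2.10+ you would also enable scala.language.reflectiveCalls):

import scala.language.reflectiveCalls

// Accept anything with a quack(): String method -- checked against the structural
// type at compile time, but dispatched via reflection at runtime.
def callQuack(target: { def quack(): String }): String = target.quack()

class Duck  { def quack(): String = "quack" }
class Robot { def quack(): String = "beep" }

callQuack(new Duck)   // "quack"
callQuack(new Robot)  // "beep"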

Not really. It doesn't make sense. Scala is a statically-typed language in which methods are bound at compile time; Ruby is a dynamically-typed language in which messages are passed to objects, and these messages are evaluated at runtime, which allows Ruby to handle messages that it doesn't directly respond to, à la method_missing.
You can mimic method_missing in a few ways in Scala, notably by using the Actors library, but it's not quite the same (or nearly as easy) as Ruby's method_missing.

No, this is not possible in Scala 2.8 and earlier.

Explanation for - No Reflection involved

I have a very simple question. This is not specific to spray-json; I have read similar claims about Argonaut and circe. So please enlighten me.
In spray-json, I have come across the statement that there is "no reflection involved". I understand that for the type-class-based approach, if the user provides a JsonFormat, then all is well. But is this claim also true when it comes to using DefaultJsonProtocol?
Because when you look at this, you can see the usage of clazz.getMethods, clazz.getDeclaredFields, etc. Isn't that reflection? Though of course, thanks to object#apply, we do not need to worry about setting fields, unlike in the Java world with reflection. But at least for reading the field names, I do not understand how reflection can be overlooked.
I'm not very familiar with spray-json, so I won't defend its claims about reflection, which definitely seem to be at odds with the parts of ProductFormats you point to.
I do know more about circe and Argonaut and argonaut-shapeless and Play JSON, all of which do use a kind of reflection to derive codecs for case classes and other user-defined types. The important point is that these libraries don't use runtime reflection—they determine the field names and other information they need at compile time through Scala's macro system.
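As a rough sketch of what that looks like in practice (assuming circe with the circe-generic module on the classpath; the User case class is just an example):

import io.circe.Encoder
import io.circe.generic.semiauto.deriveEncoder
import io.circe.syntax._

case class User(name: String, age: Int)

// deriveEncoder expands at compile time; the field names "name" and "age" are baked
// into the generated encoder rather than looked up via getDeclaredFields at runtime.
implicit val userEncoder: Encoder[User] = deriveEncoder[User]

User("Ada", 36).asJson.noSpaces  // {"name":"Ada","age":36}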
Generally when people talk about "reflection" in the context of Java or Scala, they mean runtime reflection, but macros also support a kind of reflection, so when I personally talk about how derivation works in these libraries, I try to be careful to specify that there's no runtime reflection involved.
You can argue that compile-time reflection (or metaprogramming, or whatever you want to call it) is much less bad than runtime reflection. It may make your code more complex, and it's very easy to abuse, but it doesn't introduce the same kinds of fragility as runtime reflection, and it doesn't undermine your ability to reason about your code in the same ways that runtime reflection does. If you understand what the macro does (which is a big if), you'll never be surprised at runtime.
Types are fundamentally about rejecting bad potential programs before you run them, and introspection on types at runtime muddles this all up (as Erik Osheim says, "If you meet a Type in the Runtime, kill it"). On the other hand, introspection on types at compile-time is exactly what compilers do, and macros just give you as the programmer a clean way of getting involved in that process (or at least relatively clean, compared to writing compiler plugins, etc.).
There may also be performance benefits to avoiding runtime reflection, but for me personally that's generally a secondary concern—I hate runtime reflection because I've wasted too much of my life debugging horrible Java code that uses horrible Java libraries that depend heavily on runtime reflection—not because runtime reflection might make my programs marginally slower.
That's all a very long-winded way to say that you should read "there is no reflection involved" in this context as "there is no runtime reflection involved" (and even then you shouldn't take the author at their word, I guess, given all that getMethods stuff in spray-json).

How do lambdas work in Scala, are they functions on top of anonymous classes?

The title might be a little confusing, so let me elaborate. I've been reading some criticism of Scala: an email from Coda Hale (Yammer's Infrastructure Architect) sent to Typesafe regarding some deficiencies in Scala. To quote:
we stopped seeing lambdas as free and started seeing them as syntactic sugar on top of anonymous classes and thus acquired the same distaste for them as we did anonymous classes.
So, from this, I have a couple of questions regarding how lambdas work in Scala:
What is the difference between a free function and a function that is bound to an anonymous class (technically, aren't all functions bound to the main singleton object)?
What is the impact on performance of using an anonymous class bound function instead of a free function?
Yes, lambdas are still objects, instances of anonymous classes.
This is how the JVM works: all references are objects. You can have either references or values (primitives), and there's no way around it.
Later versions of Java have MethodHandles. But it's worth noting that MethodHandle is also still just an abstract class - albeit one that the JVM specifically knows how to optimise away at runtime.
Also worth noting is that the JVM can often perform escape analysis on abstract classes (such as Scala's functions) and optimise these away too.
On top of this, Scala can use any object with an apply method as though it were a Function. In this case, the explicit call to apply is emitted in the bytecode and you're not dealing with anonymous classes any more.
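To make that concrete, here is a rough sketch (exactly how the compiler desugars a lambda varies by version; since Scala 2.12 it uses the JVM's invokedynamic machinery rather than spinning up a named anonymous class per lambda):

// A lambda is conceptually an instance of a FunctionN type...
val inc: Int => Int = x => x + 1

// ...roughly equivalent to writing the anonymous class by hand:
val incExplicit: Int => Int = new Function1[Int, Int] {
  def apply(x: Int): Int = x + 1
}

// Any object with an apply method can also be called with function syntax,
// with no FunctionN instance involved:
object Doubler { def apply(x: Int): Int = x * 2 }
Doubler(21)  // compiled as a direct call to Doubler.apply(21)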
Given all of the above, it's impossible to make a general statement about the performance of Scala's function implementation; it depends on your specific code and use case. In general, I wouldn't worry unless you hit a corner case where your profiler pinpoints a problem here (which is very unlikely).
Well, in C for example a function is just a 32- or 64-bit pointer to a place in memory to jump to, and the concept of a closure doesn't really apply since you can't declare an anonymous C function. I don't know how C++ lambdas work; I guess the compiler generates a method and passes the fields you want in the closure along with the parameters. Maybe that's what you're looking for. On the JVM you have to wrap your logic in a class, so now you have a virtual table of methods, fields, and some methods related to synchronization and the type system.
What is the impact on performance? ... I don't know; have you noticed an impact on performance? A lot of that extra Java stuff I described really isn't needed for an anonymous class and might just get optimized out. I imagine there are butterflies that influence the weather more than the extra JVM stuff would affect your software.

Side effects in Scala

I am learning Scala these days. I have a slight familiarity with Haskell, although I cannot claim to know it well.
Parenthetical remark for those who are not familiar with Haskell
One trait that I like in Haskell is that not only are functions first-class citizens, but side effects (let me call them actions) are too. An action that, when executed, will endow you with a value of type a belongs to the specific type IO a. You can pass these actions around pretty much like any other value and combine them in interesting ways.
In fact, combining side effects is the only thing you can do with them in Haskell, as you cannot execute them. Rather, the program that gets executed is the combined action returned by your main function. This is a neat trick that allows functions to be pure while letting your program actually do something other than consume power.
The main advantage of this approach is that the compiler is aware of the parts of the code where you perform side effects, so it can help you catch errors with them.
Actual question
Is there some way in Scala to have the compiler type check side effects for you, so that - for instance - you are guaranteed not to execute side effects inside a certain function?
No, this is not possible in principle in Scala, as the language does not enforce referential transparency -- the language semantics are oblivious to side effects. Your compiler will not track and enforce freedom from side effects for you.
You can, however, use the type system to tag some actions as having an IO type, and with programmer discipline get some of the compiler's support, but without the compiler's proof.
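A minimal sketch of such a tag (a hand-rolled, hypothetical IO type, not a library API; note that nothing stops you from side-effecting outside of it, which is exactly the "discipline, not proof" point):

// Suspends a computation; effects only happen when unsafeRun() is called.
final class IO[A](thunk: () => A) {
  def unsafeRun(): A = thunk()
  def map[B](f: A => B): IO[B] = new IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = new IO(() => f(unsafeRun()).unsafeRun())
}
object IO {
  def apply[A](a: => A): IO[A] = new IO(() => a)
}

// A description of a program; nothing is read or printed yet.
val program: IO[Unit] =
  IO(scala.io.StdIn.readLine()).flatMap(name => IO(println(s"Hello, $name")))

// program.unsafeRun()  // the side effects happen here, and only here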
Enforcing referential transparency is pretty much incompatible with Scala's goal of having a class/object system that is interoperable with Java.
Java code can be impure in arbitrary ways (and may not be available for analysis when the Scala compiler runs) so the Scala compiler would have to assume all foreign code is impure (assigning them an IO type). To implement pure Scala code with calls to Java, you would have to wrap the calls in something equivalent to unsafePerformIO. This adds boilerplate and makes the interoperability much less pleasant, but it gets worse.
Having to assume that all Java code is in IO unless the programmer promises otherwise would pretty much kill inheriting from Java classes. All the inherited methods would have to be assumed to be in the IO type; this would even be true of interfaces, since the Scala compiler would have to assume the existence of an impure implementation somewhere out there in Java-land. So you could never derive a Scala class with any non-IO methods from a Java class or interface.
Even worse, even for classes defined in Scala, there could theoretically be an untracked subclass defined in Java with impure methods, whose instances might be passed back in to Scala as instances of the parent class. So unless the Scala compiler can prove that a given object could not possibly be an instance of a class defined by Java code, it must assume that any method call on that object might call code that was compiled by the Java compiler without respecting the laws of what functions returning results not in IO can do. This would force almost everything to be in IO. But putting everything in IO is exactly equivalent to putting nothing in IO and just not tracking side effects!
So ultimately, Scala encourages you to write pure code, but it makes no attempt to enforce that you do so. As far as the compiler is concerned, any call to anything can have side effects.

For Scala are there any advantages to type erasure?

I've been hearing a lot about different JVM languages, still in vaporware mode, that propose to implement reification somehow. I have this nagging half-remembered (or wholly imagined, don't know which) thought that somewhere I read that Scala somehow took advantage of the JVM's type erasure to do things that it wouldn't be able to do with reification. Which doesn't really make sense to me since Scala is implemented on the CLR as well as on the JVM, so if reification caused some kind of limitation it would show up in the CLR implementation (unless Scala on the CLR is just ignoring reification).
So, is there a good side to type erasure for Scala, or is reification an unmitigated good thing?
See Ola Bini's blog. As we all know, Java has use-site variance, implemented by adding little question marks wherever you think variance is appropriate. Scala has definition-site variance, declared by the class designer. He says:
Generics is a complicated language feature. It becomes even more complicated when added to an existing language that already has subtyping. These two features don’t play very well together in the general case, and great care has to be taken when adding them to a language. Adding them to a virtual machine is simple if that machine only has to serve one language - and that language uses the same generics. But generics isn’t done. It isn’t completely understood how to handle correctly and new breakthroughs are happening (Scala is a good example of this). At this point, generics can’t be considered “done right”. There isn’t only one type of generics - they vary in implementation strategies, feature and corner cases.
...
What this all means is that if you want to add reified generics to the JVM, you should be very certain that that implementation can encompass both all static languages that want to do innovation in their own version of generics, and all dynamic languages that want to create a good implementation and a nice interfacing facility with Java libraries. Because if you add reified generics that doesn’t fulfill these criteria, you will stifle innovation and make it that much harder to use the JVM as a multi language VM.
i.e. If we had reified generics in the JVM, most likely those reified generics wouldn't be suitable for the features we really like about Scala, and we'd be stuck with something suboptimal.
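To illustrate the use-site versus definition-site distinction mentioned above (a minimal sketch; Box is an invented example):

// Definition-site variance in Scala: the author of Box decides, once, that it is covariant.
class Box[+A](val value: A)
val b: Box[Any] = new Box[String]("hi")  // Box[String] is a subtype of Box[Any]

// Java instead uses use-site variance: each caller opts in with a wildcard, e.g.
//   List<? extends Number> xs = new ArrayList<Integer>();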

Why do dynamic languages like Ruby and Python not have the concept of interfaces like in Java or C#?

To my surprise, as I develop more interest in dynamic languages like Ruby and Python, I read claims that they are 100% object oriented, yet several basic concepts like interfaces, method overloading, and operator overloading seem to be missing. Is this somehow built in under the covers, or do these languages just not need it? If the latter is true, are they still 100% object oriented?
EDIT: Based on some answers, I see that overloading is available in both Python and Ruby. Is that the case in Ruby 1.8.6 and Python 2.5.2?
Dynamic languages use duck typing. Any code can call methods on any object that supports those methods, so the concept of interfaces is extraneous.
Python does in fact support operator overloading (see 3.3, "Special method names"), as does Ruby.
Anyway, you seem to be focusing on aspects that are not essential to object oriented programming. The main focus is on concepts like encapsulation, inheritance, and polymorphism, which are 100% supported in Python and Ruby.
Thanks to late binding, they do not need it. In Java/C#, interfaces are used to declare that a class has certain methods, and this is checked at compile time; in Python, whether a method exists is checked at runtime.
Method overriding in Python works:
>>> class A:
...     def foo(self):
...         return "A"
...
>>> class B(A):
...     def foo(self):
...         return "B"
...
>>> B().foo()
'B'
Are they object-oriented? I'd say yes. It's more of an approach thing than a question of whether any concrete language has feature X or feature Y.
I can only speak for Python, but there have been proposals for interfaces, as well as home-grown interface examples, in the past.
However, the way python works with objects dynamically tends to reduce the need for (and the benefit of) interfaces to some extent.
With a dynamic language, type binding happens at runtime. Interfaces are mostly used for compile-time constraints on objects; if binding happens at runtime, that eliminates some of the need for interfaces.
Name-based polymorphism
"For those of you unfamiliar with Python, here's a quick intro to name-based polymorphism. Python objects have an internal dictionary that contains a string for every attribute and method. When you access an attribute or method in Python code, Python simply looks up the string in the dict. Therefore, if what you want is a class that works like a file, you don't need to inherit from file, you just create a class that has the file methods that are needed.
Python also defines a bunch of special methods that get called by the appropriate syntax. For example, a+b is equivalent to a.__add__(b). There are a few places in Python's internals where it directly manipulates built-in objects, but name-based polymorphism works as you expect about 98% of the time."
Python does provide operator overloading, e.g. you can define a method __add__ if you want to overload +.
You typically don't need to provide method overloading, since you can pass arbitrary parameters into a single method. In many cases, that single method can have a single body that works for all kinds of objects in the same way. If you want to have different code for different parameter types, you can inspect the type, or double-dispatch.
Interfaces are mostly unnecessary because of duck typing, as rossfabricant points out. A few remaining cases are covered in Python by ABCs (abstract base classes) or Zope interfaces.