Why isn't it allowed to overload methods inside methods (e.g. overloaded closures)?

A minimized example is the following:
object Main extends App {
  def f = {
    def giveMeBigDecimal(x: String) = BigDecimal(x)
    def giveMeBigDecimal(x: Double) = BigDecimal(x)
    (giveMeBigDecimal("1.0"), giveMeBigDecimal(1.0))
  }
}
The Scala 2.9.2 compiler keeps telling me that method giveMeBigDecimal is defined twice.
I know how I can work around this, but I'm curious why such a limitation exists.

It's a Scala implementation detail which (unfortunately) made its way into the spec. Scala implements local methods as variables of a closure type, and it isn't allowed to have two variables with the same name in the same method.
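One possible workaround, sketched below using only the code from the question: nest the overloads inside a local object, since objects (unlike local defs) are allowed to contain overloaded members.

object Main extends App {
  def f = {
    // Overloading is allowed inside an object, so wrap the overloads in a local object.
    object giveMeBigDecimal {
      def apply(x: String) = BigDecimal(x)
      def apply(x: Double) = BigDecimal(x)
    }
    (giveMeBigDecimal("1.0"), giveMeBigDecimal(1.0))
  }
}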

Related

Understanding companion objects in Scala

While learning Scala, I came across the interesting concept of companion objects. A companion object can be used to define static methods in Scala. I need a few clarifications on the Spark Scala code below with regard to companion objects.
class BballStatCounter extends Serializable {
  val stats: StatCounter = new StatCounter()
  var missing: Long = 0

  def add(x: Double): BballStatCounter = {
    if (x.isNaN) {
      missing += 1
    } else {
      stats.merge(x)
    }
    this
  }
}

object BballStatCounter extends Serializable {
  def apply(x: Double) = new BballStatCounter().add(x)
}
The above code is invoked using val stat3 = stats1.map(b => BballStatCounter(b)).
What is the nature of the variables stats and missing declared in the class? Are they similar to class attributes in Python?
What is the significance of the apply method here?
Here stats and missing are class attributes, and each instance of BballStatCounter will have its own copy of them, just like in Python.
In Scala the method apply serves a special purpose: if an object has an apply method and that object is used with function-call notation like Obj(), the compiler replaces the call with a call to its apply method, i.e. Obj.apply().
The apply method is generally used as a constructor in a class's companion object.
All the collection classes in Scala have a companion object with an apply method, which is why you can create a list like List(1, 2, 3, 4).
Thus, in your code above, BballStatCounter(b) will be compiled to BballStatCounter.apply(b).
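As a small illustration of that rewrite, using the classes from the question, the two calls below are equivalent; the first is what you write, the second is what the compiler actually calls.

val viaSugar = BballStatCounter(1.0)        // function-call notation
val viaApply = BballStatCounter.apply(1.0)  // what the compiler rewrites it to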
stats and missing are members of the class BballStatCounter. stats is a val, so it cannot be changed once it has been defined. missing is a var, so it is more like a traditional variable and can be updated, as it is in the add method. Every instance of BballStatCounter will have these members. (Unlike Python, you can't add or remove members from a Scala object.)
The apply method is a shortcut that makes objects look like functions. If you have an object x with an apply method, you write x(...) and the compiler will automatically convert this to x.apply(...). In this case it means that you can call BballStatCounter(1.0) and this will call the apply method on the BballStatCounter object.
Neither of these questions is really about companion objects; this is just the normal Scala class framework.
Please note the remarks in the comments about asking multiple questions.

Overriding the method of a generic trait in Scala

I defined a generic Environment trait:
trait Environment[T]
For which I provide this implementation:
class MyEnvironment extends Environment[Integer] {
  val specific: Integer = 0
}
Furthermore, I defined a generic Event trait that has one method that accepts a generic Environment as parameter:
trait Event[T] {
  def exec(e: Environment[T])
}
For this Event trait, I provided the following implementation, where the exec() method accepts a parameter of the type MyEnvironment, to enable me to access the specific value of MyEnvironment.
class MyEvent extends Event[Integer] {
  override def exec(e: MyEnvironment): Unit = {
    println(e.specific)
  }
}
However, the Scala compiler outputs an error, from which it seems that MyEnvironment is not recognized as an Environment[Integer]:
Error: method exec overrides nothing.
Note: the super classes of class MyEvent contain the following, non final members named exec: def exec(t: main.vub.lidibm.test.Environment[Integer]): Unit
Is it possible to make this work, or are there patterns to circumvent this problem?
You can't narrow down the signature of a method; it's not the same method any more. In your case, you can't override
def exec(e: Environment[T]): Unit
with
override def exec(e: MyEnvironment): Unit
The second method is more specific than the first one. It's conceptually the same as, e.g., overriding def foo(a: Any) with def foo(s: String).
If you want it to work, you need to use the same type in both signatures (note that if you use an upper bound such as T <: Environment[_], that means that a method that accepts T actually accepts any subclass of Environment, so overriding using MyEnvironment will work OK in that case).
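A minimal sketch of that upper-bound idea, reworking the asker's Event trait (this is a redesign of the original signatures, not the asker's code): parameterize Event by the concrete environment type, so the override keeps exactly the declared parameter type.

trait Environment[T]

class MyEnvironment extends Environment[Integer] {
  val specific: Integer = 0
}

// Event is now parameterized by the environment subtype itself.
trait Event[E <: Environment[_]] {
  def exec(e: E): Unit
}

class MyEvent extends Event[MyEnvironment] {
  // The parameter type matches the trait's declaration exactly, so this overrides.
  override def exec(e: MyEnvironment): Unit =
    println(e.specific)
}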
This is because overriding is not polymorphic in the method's parameter types. It works the same way as in Java. In your example, what you have actually done is overload the method: the two are treated as different methods.
For overriding, the name, signature, and parameter types have to be the same.

Scala Akka - generic types in the receive handler

I am trying to get my head around the best way to code this implementation. To give you an example, here is what my DAO handler code looks like:
trait IDAOHandler[+T] {
  def create[U <: AnyRef: Manifest](content: U): Try[String]
}
class MongoDAOHAndler extends IDAOHandler[+T]...
So I am creating an actor that will handle all my persistence tasks, which include serializing the content and updating the MongoDB database.
I am using Akka, and the trick is in the receive method: how do I handle the generic type parameter? Even though my actor code is non-generic, the messages it is going to receive will have a generic type, and based on the content type in createDAO I was planning to get the appropriate DAO handler (described above) and invoke the method.
case class createDAO[T](content: T)(implicit val metaInfo: TypeTag[T])

class CDAOActor(daofactory: DAOFactory) extends BaseActor {
  def wrappedReceive = {
    case x: createDAO[_] => pmatch(x)
  }

  def pmatch[A](c: createDAO[A]) {
    // getting the DAO handler, which will not work because it needs a Manifest
  }
}
Let me know if there are any other ways to re-write this implementation.
You might already know this, but a little background just to be sure: in Scala (and Java) we have what is called type erasure. This means that the parametric types are used to verify the correctness of the code at compile time but are then removed (and do not incur a runtime cost; see http://docs.oracle.com/javase/tutorial/java/generics/erasure.html). Pattern matching happens at runtime, so the parametric types are already erased.
The good news is that you can make the Scala compiler keep the erased type by using TypeTag, like you have done in your case class, or ClassTag, which contains less information but also keeps the erased type. You can get the erased type from the method .erasure (.runtimeClass in Scala 2.11), which will return the Java Class of the T type. You still won't be able to use that as the type parameter for a method call, as that again happens at compile time and you are now looking at the type at runtime, but what you can do is compare this type at runtime with if/else or pattern matching.
So, for example, you could implement a method on your daofactory that takes a Class[_] parameter and returns a DAO instance for that class. In pmatch you would then take the erased type out of the tag and pass it on to that method.
Here is some more info about the tags, why they exist and how they work:
http://docs.scala-lang.org/overviews/reflection/typetags-manifests.html
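A rough sketch of that suggestion, swapping the TypeTag for a ClassTag and assuming a hypothetical lookup method on the factory keyed by the runtime Class (createDAO, DAOFactory and BaseActor are taken from the question; lookup is invented here for illustration):

import scala.reflect.ClassTag

case class createDAO[T](content: T)(implicit val tag: ClassTag[T])

class CDAOActor(daofactory: DAOFactory) extends BaseActor {
  def wrappedReceive = {
    case x: createDAO[_] => pmatch(x)
  }

  def pmatch[A](c: createDAO[A]): Unit = {
    val runtimeClass = c.tag.runtimeClass          // the erased Class[_] for A
    val handler = daofactory.lookup(runtimeClass)  // hypothetical Class[_]-keyed lookup
    // handler.create(c.content) ... the rest depends on the handler's API
  }
}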
I took a bit of a different approach, a kind of dispatcher pattern; here is the revised code:
trait IDAOProcess {
  def process(daofactory: IDAOFactory, sender: ActorRef)
}

case class createDAO[T <: AnyRef : Manifest](content: T)(implicit val metaInfo: TypeTag[T]) extends IDAOProcess {
  def process(daofactory: IDAOFactory, sender: ActorRef) {
    for (handler <- daofactory.getDAO[T]) {
      handler.create(content)
    }
  }
}

class DAOActor(daofactory: IDAOFactory) extends BaseActor {
  def wrappedReceive = {
    case x: IDAOProcess =>
      x.process(daofactory, sender)
  }
}

Specifying the requirements for a generic type

I want to call a constructor of a generic type T, but I also want it to have a specific constructor with only one Int argument:
class Class1[T] {
  def method1(i: Int) = {
    val instance = new T(i) // oops!
    i
  }
}
How do I specify this requirement?
UPDATE:
How acceptable (flexible, etc.) is it to use something like this? This is the template method pattern.
abstract class Class1[T] {
  def creator: Int => T

  def method1(i: Int) = {
    val instance = creator(i) // seems ok
    i
  }
}
Scala doesn't allow you to specify the constructor's signature in a type constraint (as, for example, C# does).
However Scala does allow you to achieve something equivalent by using the type class pattern. This is more flexible, but requires writing a bit more boilerplate code.
First, define a trait which will be an interface for creating a T given an Int.
trait Factory[T] {
  def fromInt(i: Int): T
}
Then, define an implicit instance for any type you want. Let's say you have some class Foo with an appropriate constructor.
implicit val FooFactory = new Factory[Foo] {
  def fromInt(i: Int) = new Foo(i)
}
Now, you can specify a context bound for the type parameter T in the signature of Class1:
class Class1[T : Factory] {
  def method1(i: Int) = {
    val instance = implicitly[Factory[T]].fromInt(i)
    // ...
  }
}
The constraint T : Factory says that there must be an implicit Factory[T] in scope. When you need to use the instance, you grab it from implicit scope using the implicitly method.
Alternatively, you could specify the factory as an implicit parameter to the method that requires it.
class Class1[T] {
  def method1(i: Int)(implicit factory: Factory[T]) = {
    val instance = factory.fromInt(i)
    // ...
  }
}
This is more flexible than putting the constraint in the class signature, because it means you could have other methods on Class1 that don't require a Factory[T]. In that case, the compiler will not enforce that there is a Factory[T] unless you call one of the methods that requires it.
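As a quick usage sketch (assuming the class Foo with a single-Int constructor and the implicit FooFactory instance above are in scope), the call site looks the same for either variant:

val c = new Class1[Foo]
val result = c.method1(42) // the compiler supplies FooFactory implicitly at the call site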
In response to your update (with the abstract creator method), this is a perfectly reasonable way to do it, as long as you don't mind creating a subtype of Class1 for every T. Also note that T will need to be a concrete type at any point that you want to create an instance of Class1, because you will need to provide a concrete implementation for the abstract method.
Consider trying to create an instance of Class1 inside another generic method. When using the type class pattern, you can extend the necessary type constraint to the type signature of that method, in order to make this compile:
def instantiateClass1[T : Factory] = new Class1[T]
If you don't need to do this, then you might not need the full power of the type class pattern.
When you create a generic class or trait, the class does not gain special access to the methods of whatever actual class you might parameterise it with. When you say
class Class1[T]
You are saying
This is a class which will work with unspecified type T.
Most of its methods will take instances of type T as a parameter or return T.
Any variance annotations or type bounds attached to the type parameter will be applied whenever it appears as a parameter of one of Class1's methods.
There is no such thing as a type "Class1", but there may be an arbitrary number of derived classes of type "Class1[something]".
That's all. You get no special access to T from within Class1, because Scala does not know what T is. If you wanted Class1 to have access to T's fields and methods, you should have extended it or mixed it in.
If you want access to the methods of T (without using reflection), you can only do that from within one of Class1's methods which accepts a parameter of type T. And then you will get whichever version of the method belongs to the specific type of the actual object which is passed.
(You can work around this with reflection, but that is a runtime solution and absolutely not typesafe).
Look at what you are trying to do in your original code snippet...
You have specified that Class1 can be parameterised with any arbitrary type.
You want to invoke T with a constructor which takes a single Int parameter.
But what have you done to promise the Scala compiler that T will have such a constructor? Nothing at all. So how can the compiler trust this? Well, it can't.
Even if you added an upper type bound, requiring that T be a subclass of some class which does have such a constructor, that doesn't help; T might be a subclass which has a more complex constructor, which calls back to the simpler constructor. So at the point where Class1 is defined, the compiler can have no confidence about the safety of constructing T with that simple method. So that call cannot be type-safe.
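A small hypothetical illustration of that point (the names below are invented, not from the question): even with an upper bound, a subclass need not provide the single-Int constructor, so the compiler cannot allow the call.

class Base(i: Int)
class Derived(i: Int, s: String) extends Base(i) // has no single-Int constructor

class Class1[T <: Base] {
  // def method1(i: Int) = new T(i) // still rejected: T might be Derived at the call site
}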
Class-based OO isn't about conjuring unknown types out of the ether; it doesn't let you plunge your hand into a top-hat-shaped class loader and pull out a surprise. It allows you to handle arbitrary already-created instances of some general type without knowing their specific type. At the point where those objects are created, there's no ambiguity at all.

How can I add new methods to a library object?

I've got a class from a library (specifically, com.twitter.finagle.mdns.MDNSResolver). I'd like to extend the class (I want it to return a Future[Set], rather than a Try[Group]).
I know, of course, that I could sub-class it and add my method there. However, I'm trying to learn Scala as I go, and this seems like an opportunity to try something new.
The reason I think this might be possible is the behavior of JavaConverters. The following code:
class Test {
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
does not compile, because there is no asScala method on Java's ArrayList. But if I import some new definitions:
class Test {
  import collection.JavaConverters._
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
then suddenly there is an asScala method. So that looks like the ArrayList class is being extended transparently.
Am I understanding the behavior of JavaConverters correctly? Can I (and should I) duplicate that methodology?
Scala supports something called implicit conversions. Look at the following:
val x: Int = 1
val y: String = x
The second assignment does not work, because a String is expected but an Int is found. However, if you add the following into scope (it just needs to be in scope; it can come from anywhere), it works:
implicit def int2String(x: Int): String = "asdf"
Note that the name of the method does not matter.
What is usually done is called the pimp-my-library pattern:
class BetterFoo(x: Foo) {
  def coolMethod() = { ... }
}

implicit def foo2Better(x: Foo) = new BetterFoo(x)
That allows you to call coolMethod on Foo. This is used so often, that since Scala 2.10, you can write:
implicit class BetterFoo(x: Foo) {
  def coolMethod() = { ... }
}
which does the same thing but is obviously shorter and nicer.
So you can do:
implicit class MyMDNSResolver(x: com.twitter.finagle.mdns.MDNSResolver) {
  def awesomeMethod = { ... }
}
And you'll be able to call awesomeMethod on any MDNSResolver, if MyMDNSResolver is in scope.
This is achieved using implicit conversions; this feature allows you to automatically convert one type to another when a method that's not recognised is called.
The pattern you're describing in particular is referred to as "enrich my library", after an article Martin Odersky wrote in 2006. It's still an okay introduction to what you want to do: http://www.artima.com/weblogs/viewpost.jsp?thread=179766
The way to do this is with an implicit conversion. These can be used to define views, and their use to enrich an existing library is called "pimp my library".
I'm not sure whether you need to write a conversion from Try[Group] to Future[Set], or whether you can write one from Try to Future and another from Group to Set and have them compose.
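As a generic sketch of the enrichment idea, deliberately not tied to Finagle's actual Group and Future types (which may differ), an implicit class can add a toFuture method to scala.util.Try:

import scala.concurrent.Future
import scala.util.{Failure, Success, Try}

object TryOps {
  implicit class RichTry[A](val t: Try[A]) {
    // Lift a Try into an already-completed Future.
    def toFuture: Future[A] = t match {
      case Success(value) => Future.successful(value)
      case Failure(error) => Future.failed(error)
    }
  }
}

// With `import TryOps._` in scope, any Try[A] gains toFuture, e.g. Try(42).toFuture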