Scala: ambiguous case-class-generated apply and companion apply method

Given following code:
case class Foo(bar: String)

object Foo {
  def apply(bar: String): Foo = Foo(bar)
}
If I were to call Foo("foo"), I would end up with an infinitely recursive call to def apply(bar: String). Of course, I could fix the problem by changing my apply implementation to def apply(bar: String): Foo = new Foo(bar). However, if I understand correctly, an apply method taking all the constructor parameters is generated for case classes. My question is therefore twofold:
1) If both my hand-written Foo.apply(bar: String): Foo and an automatically generated one exist, why do I not get a compilation error complaining about duplicate method definitions?
and
2) If the generated method has a different signature, how can I call it?

why do I not get a compilation error complaining about duplicate method definitions?
Because your apply() code replaces the case class auto-generated code. They don't exist at the same time.
This can be demonstrated by compiling your code but dumping the intermediate state after the "typer" phase (phase 4) of the compilation.
%%> cat so.sc
case class Foo(bar: String)
object Foo {
  def apply(bar: String): Foo = Foo(bar)
}
%%> scalac -Xprint:4 so.sc | less
The resulting output has only one object with only one apply() method.
object Foo extends scala.AnyRef with Serializable {
  def <init>(): Foo.type = {
    Foo.super.<init>();
    ()
  };
  def apply(bar: String): Foo = Foo.apply(bar);
  case <synthetic> def unapply(x$0: Foo): Option[String] = if (x$0.==(null))
    scala.None
  else
    Some.apply[String](x$0.bar);
  <synthetic> private def readResolve(): Object = Foo
}
As you can see, the recursive apply() method resides amidst the auto-generated <synthetic> code.
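As the question itself notes, the fix is simply to call the constructor with new so the companion apply no longer recurses into itself. A minimal sketch of that fix:
case class Foo(bar: String)

object Foo {
  // new Foo(bar) invokes the constructor directly, so this apply never calls itself.
  def apply(bar: String): Foo = new Foo(bar)
}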

Related

How to use the Play format serialization/deserialization macro for nested case objects

So I have a sealed trait like such:
sealed trait Foo {
  override def toString: String = {
    ...
  }
}
and a companion object like:
object Foo {
  case object Bar extends Foo {}
  case object Bat extends Foo {}
  case object Bah extends Foo {}

  def parse(s: String): Foo = {
    ...
  }

  implicit val format: Format[Foo] = Json.format[Foo]

  def apply(f: String): Foo = parse(f)
  def unapply(f: Foo): String = f.toString
}
I would have expected the macro to work since it meets all of the requirements stated here:
https://www.playframework.com/documentation/2.8.x/ScalaJsonAutomated#Requirements
However, I get a compile-time error stating that there are no subclasses of Foo (even though Bar, Bat and Bah exist).
If I move Bar, Bat and Bah outside of Foo (to the base package level), the complaint goes away.
Just wondering whether this is an oversight in the macro (it cannot find nested subclasses), or whether I am doing something incorrect here.
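For reference, a sketch of the workaround described above: the case objects sit next to the sealed trait instead of inside its companion, so the macro can see them. The package and imports are assumed, and the elided toString/parse bodies are left out.
sealed trait Foo

case object Bar extends Foo
case object Bat extends Foo
case object Bah extends Foo

object Foo {
  import play.api.libs.json._
  implicit val format: Format[Foo] = Json.format[Foo]
}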

How to reflectively parameterise a generic type in Scala?

How can I implement the following pseudocode in Scala using reflection?
I require this for the purposes of looking-up a generic type from Guice:
trait Foo[A]
class FooInt extends Foo[Int]
class FooString extends Foo[String]
bind(new TypeLiteral<Foo<Int>>() {}).to(FooInt.class);
def fooTypeLiteral(paramA: Class[_]): TypeLiteral[_] = ???
val foo = injector.getInstance(fooTypeLiteral(classOf[Int]))
// foo: FooInt
Note: I do not have access to the type of A at compile time, hence the _. The entire solution needs to be performed reflectively (e.g. I cannot have parameterizeFoo[A : ClassTag](...)).
You could try to create a ParameterizedType and pass it to the factory method of the TypeLiteral:
def fooTypeLiteral(paramA: Class[_]): TypeLiteral[_] = {
  TypeLiteral.get(new java.lang.reflect.ParameterizedType() {
    def getRawType: java.lang.reflect.Type = classOf[Foo[_]]
    def getOwnerType: java.lang.reflect.Type = null
    // Explicit Array[Type] so the result type matches the Java interface.
    def getActualTypeArguments: Array[java.lang.reflect.Type] = Array(paramA)
  })
}
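A hypothetical usage sketch (the injector and the corresponding bindings are assumed to be set up elsewhere): Guice looks instances up by Key, so the TypeLiteral is wrapped in one.
import com.google.inject.Key

// Assumes some module bound Foo[Int] (e.g. to FooInt); Key.get wraps the TypeLiteral.
val fooInt = injector.getInstance(Key.get(fooTypeLiteral(classOf[Int])))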
If you have only a finite number of Foo implementations, you could try this:
trait Foo[A]
class FooInt extends Foo[Int]
class FooString extends Foo[String]

val TLFI = new TypeLiteral[Foo[Int]]() {}
val TLFS = new TypeLiteral[Foo[String]]() {}

// inside your module's configure():
bind(TLFI).to(classOf[FooInt])
bind(TLFS).to(classOf[FooString])

def fooTypeLiteral(c: Class[_]): TypeLiteral[_] = {
  if (c == classOf[Int]) TLFI
  else if (c == classOf[String]) TLFS
  else throw new Error
}
Both the Scala and Java compilers implement generics with type erasure. This means that the type arguments of a generic are lost when the source code is compiled to JVM bytecode. If the generic class itself does not carry a ClassTag or similar embedded information, you cannot recover the type argument at run time.
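A tiny illustration of that point:
trait Foo[A]

val a: Class[_] = classOf[Foo[Int]]
val b: Class[_] = classOf[Foo[String]]
println(a == b) // true: the type argument did not survive compilation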

Using mixin composition with functions in Scala

I'm trying to use mixin composition with functions, but I get an error on the apply method of the obj object:
Overriding method apply in trait t of type (s: String)String; method apply needs abstract override modifiers.
How can I solve this error, and what is the correct implementation?
trait t extends Function1[String, String] {
  abstract override def apply(s: String): String = {
    super.apply(s)
    println("Advice" + s)
    s
  }
}

object MixinComp {
  def main(args: Array[String]) {
    val obj = new Function1[String, String] with t {
      override def apply(s: String) = s
    }
    println(obj.apply("Hi"))
  }
}
Your immediate problem (the reason for the compiler error) is that you end up with a call to an abstract method in your linearization: your t.apply calls super.apply, which is abstract.
Also, the apply method you define in the top-level anonymous class overrides everything and does not call super, making the mixed-in t completely irrelevant.
Something like this would solve both problems:
trait t extends Function1[String, String] {
  abstract override def apply(s: String): String = {
    println("Advice" + s)
    super.apply(s) // I rearranged this a little, because it makes more sense this way
  }
}

// Note: this extends `Function1`, not `t`; it is just a "vanilla" Function1
class foo extends Function1[String, String] {
  def apply(s: String): String = s
}

// Now I am mixing in the t. Note that the apply definition
// from foo is now at the bottom of the hierarchy, so that
// t.apply overrides it and calls it with super
val obj = new foo with t
obj("foo")
You won't need the abstract modifier in your t trait definition if you don't call super.apply. And in this particular case I don't see any need for calling super.apply, as Function1's apply is abstract; you probably want custom apply implementations. The following code should work.
trait t extends Function1[String, String] {
  override def apply(s: String): String = {
    // super.apply(s)
    println("Advice" + s)
    s
  }
}
Case 1: use the overridden apply method from the t trait:
val obj = new Function1[String, String] with t {}
obj.apply("hello") // prints: Advicehello
Case 2: override the apply method in t trait in an anonymous class:
val obj = new Function1[String, String] with t {
  override def apply(s: String): String = s
}
obj.apply("hello") // prints hello

A case class as a "wrapper" class for a collection. What about map/foldLeft/

What I am trying to do is come up with a case class that I can use in pattern matching and that has exactly one field, e.g. an immutable set. Furthermore, I would like to use functions like map, foldLeft and so on, which should be passed down to the set. I tried the following:
case class foo(s: Set[String]) extends Iterable[String] {
  override def iterator = s.iterator
}
Now if I try to use e.g. the map function, I get a type error:
var bar = foo(Set() + "test1" + "test2")
bar = bar.map(x => x)
found : Iterable[String]
required: foo
bar = bar.map(x => x)
^
The type error is perfectly understandable. However, I wonder how one would implement a wrapper case class for a collection such that one can call map, foldLeft and so on and still receive an object of the case class. Would one need to override all these functions, or is there some other way around this?
Edit
I'm inclined to accept the solution of Régis Jean-Gilles which works for me. However, after Googling for hours I found another interesting Scala trait named SetProxy. I couldn't find any trivial examples so I'm not sure if this trait does what I want:
come up with a custom type, i.e. a different type than Set
the type must be a case class (we want to do pattern matching)
we need "delegate" methods map, foldLeft and so on which should pass the call to our actual set and return the resulting set wrapped arround in our new type
My first idea was to extend Set, but my custom type Foo already extends another class. Therefore, the second idea was to mix in the traits Iterable and IterableLike. Now I have read about the trait SetProxy, which made me think about which is "the best" way to go. What are your thoughts and experiences?
Since I started learning Scala three days ago, any pointers are highly appreciated!
Hmm, this sounds promising to me, but Scala says that variable b is of type Iterable[String] and not of type Foo, i.e. I do not see how IterableLike helps in this situation.
You are right. Merely inheriting from IterableLike as shown by mpartel will make the return type of some methods more precise (such as filter, which will return Foo), but for others such as map or flatMap you will need to provide an appropriate CanBuildFrom implicit.
Here is a code snippet that does just that:
import collection.IterableLike
import collection.generic.CanBuildFrom
import collection.mutable.Builder
case class Foo(s: Set[String]) extends Iterable[String] with IterableLike[String, Foo] {
  override def iterator = s.iterator
  override protected[this] def newBuilder: scala.collection.mutable.Builder[String, Foo] = new Foo.FooBuilder
  def +(elem: String): Foo = new Foo(s + elem)
}

object Foo {
  val empty: Foo = Foo(Set.empty[String])
  def apply(elems: String*) = new Foo(elems.toSet)

  class FooBuilder extends Builder[String, Foo] {
    protected var elems: Foo = empty
    def +=(x: String): this.type = { elems = elems + x; this }
    def clear() { elems = empty }
    def result: Foo = elems
  }

  implicit def canBuildFrom[T]: CanBuildFrom[Foo, String, Foo] = new CanBuildFrom[Foo, String, Foo] {
    def apply(from: Foo) = apply()
    def apply() = new FooBuilder
  }
}
}
And some test in the repl:
scala> var bar = Foo(Set() + "test1" + "test2")
bar: Foo = (test1, test2)
scala> bar = bar.map(x => x) // compiles just fine because map now returns Foo
bar: Foo = (test1, test2)
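And since the question also wants pattern matching, a small follow-up sketch: the compiler still adds the synthetic unapply to the hand-written companion, so matching works as usual.
Foo("a", "b") match {
  case Foo(s) => println(s.size) // prints 2: s is the underlying Set[String]
}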
Inheriting IterableLike[String, Foo] gives you all those methods such that they return Foo. IterableLike requires you to implement newBuilder in addition to iterator.
import scala.collection.IterableLike
import scala.collection.mutable.{Builder, SetBuilder}
case class Foo(stuff: Set[String]) extends Iterable[String] with IterableLike[String, Foo] {
  def iterator: Iterator[String] = stuff.iterator
  protected[this] override def newBuilder: Builder[String, Foo] = {
    new SetBuilder[String, Set[String]](Set.empty).mapResult(Foo(_))
  }
}
// Test:
val a = Foo(Set("a", "b", "c"))
val b = a.map(_.toUpperCase)
println(b.toList.sorted.mkString(", ")) // Prints A, B, C

Vapourise Predef.any2stringadd in interpreter

I am having a problem with Predef.any2stringadd, which unfortunately is officially considered not a PITA. I changed my API from
trait Foo {
  def +(that: Foo): Foo
}
to a type class approach
object Foo {
  implicit def fooOps(f: Foo): Ops = new Ops(f)

  final class Ops(f: Foo) {
    def +(that: Foo): Foo = ???
  }
}

trait Foo
Now I can hide that horrible method in compiled code like this:
import Predef.{any2stringadd => _}
However, this fails in my REPL/interpreter environment.
val in = new IMain(settings, out)
in.addImports("Predef.{any2stringadd => _}") // has no effect?
How can I tell the interpreter to vapourise this annoying method?
A workaround seems to be to take the implicit conversion out of Foo's companion object and place it at the top of the hierarchy (a package object in my real case):
object Foo {
  // implicit def fooOps(f: Foo): Ops = new Ops(f)

  final class Ops(f: Foo) {
    def +(that: Foo): Foo = ???
  }
}

trait Foo

implicit def fooOps(f: Foo): Foo.Ops = new Foo.Ops(f)
While I don't know why that should make any difference, it appears to make enough of one for the interpreter to forget about any2stringadd.
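In compiled code the same idea would live in a package object; a minimal sketch, with the package name mylib purely hypothetical:
package object mylib {
  // Hypothetical package object: the conversion sits at the top of the hierarchy
  // instead of inside Foo's companion.
  implicit def fooOps(f: Foo): Foo.Ops = new Foo.Ops(f)
}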
(Still, I think a new ticket should be opened in an attempt to remove that method, especially given that string interpolation in Scala 2.10 will make it largely superfluous.)