I have been working on an issue with implicit conversion for days now, but somehow I just cannot figure out what I am doing wrong. I read through all the other questions on SO that deal with implicits but I still don't understand what the problem is.
As an example, let's consider a Java interface like this (T extends Object for brevity):
public interface JPersistable<T extends Object> {
    public T persist(T entity);
}
In scala, I do the following:
case class A()
case class B() extends A
case class C()
case class D() extends C
trait Persistable[DTOType <: A, EntityType <: C] {
  // this would be implemented somewhere else
  private def doPersist(source: EntityType): EntityType = source
  // this does not implement the method from the Java interface
  private def realPersist(source: DTOType)(implicit view: DTOType => EntityType): EntityType = doPersist(source)
  // this DOES implement the method from the Java interface, however it throws:
  // error: No implicit view available from DTOType => EntityType.
  def persist(source: DTOType): EntityType = realPersist(source)
}
case class Persister() extends Persistable[B, D] with JPersistable[B]
object Mappings {
  implicit def BToD(source: B): D = D()
}
object Test {
  def main(args: Array[String]) {
    import Mappings._
    val persisted = Persister().persist(B())
  }
}
As stated in the comment, I get an error at compile time. I guess my questions are:
1) Why do I need to specify the implicit conversion on realPersist explicitly? I expected the conversion to happen even if I do the following:
trait Persistable[DTOType <: A, EntityType <: C] {
  // this would be implemented somewhere else
  private def doPersist(source: EntityType): EntityType = source
  def persist(source: DTOType): EntityType = doPersist(source)
}
However, this does not compile either.
2) Why does compilation fail at persist and not at the actual method call (val persisted = Persister().persist(B()))? That should be the first place where the actual type of EntityType and DTOType are known, right?
3) Is there a better way to do what I am trying to achieve? Again, this is not the actual thing I am trying to do, but close enough.
Apologies in advance if this question is ignorant and thanks a lot in advance for your help.
You need to make the conversion available within the trait. You can't pass it in from the outside implicitly because the outside doesn't know that persist secretly requires realPersist which requires an implicit conversion. This all fails even without considering JPersistable.
You can for example add
implicit def view: DTOType => EntityType
as a method in the trait and it will then compile. (You can drop realPersist then also.)
Then you need a way to get that view set. You can
case class Persister()(implicit val view: B => D) extends Persistable[B,D]
and then you're all good. (The implicit val satisfies the implicit def of the trait.)
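Put together, a minimal sketch of that arrangement (reusing A, B, C, D and Mappings from the question, and leaving the Java interface aside) could look like:
trait Persistable[DTOType <: A, EntityType <: C] {
  // abstract implicit view; concrete subclasses must supply it
  implicit def view: DTOType => EntityType
  // this would be implemented somewhere else
  private def doPersist(source: EntityType): EntityType = source
  // the DTOType => EntityType conversion is now found in implicit scope
  def persist(source: DTOType): EntityType = doPersist(source)
}
// the implicit constructor parameter satisfies the trait's abstract view
case class Persister()(implicit val view: B => D) extends Persistable[B, D]
object Test {
  def main(args: Array[String]) {
    import Mappings._ // brings the implicit B => D into scope for the constructor
    val persisted: D = Persister().persist(B())
  }
}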
But now you have bigger problems: your Java interface signature doesn't match your Scala signature. The equivalent Scala is
trait JPersistable[T <: Object] { def persist(t: T): T }
See how persist takes and returns the same type? And see how it does not in your Scala class? That's not going to work, nor should it! So you have to rethink exactly what you're trying to accomplish here. Maybe you just want to make the implicit conversion available--not pass it to the method!--and have Scala apply the implicit conversion for you so that you think you've got a persist that maps from DTOType to EntityType, but you really just have the EntityType to EntityType transform that the Java interface requires.
Edit: for example, here's a working version of what you posted just using standard implicit conversion:
trait JPer[T] { def persist(t: T): T }
class A
case class B() extends A
class C
case class D() extends C
trait Per[Y <: C] extends JPer[Y] {
  private def doIt(y: Y): Y = y
  def persist(y: Y) = doIt(y)
}
case class Perer() extends Per[D] // "with JPer" wouldn't add anything!
object Maps { implicit def BtoD(b: B): D = D() }
object Test extends App {
  import Maps._
  val persisted = Perer().persist(B())
}
Pay attention to which types are used where! (Who takes B and who takes D and which direction do you need a conversion?)
I'm writing type-safe code and want to replace the apply() generated for case classes with my own implementation. Here it is:
import shapeless._
sealed trait Data
case object Remote extends Data
case object Local extends Data
case class SomeClass() {
  type T <: Data
}
object SomeClass {
  type Aux[TT] = SomeClass { type T = TT }
  def apply[TT <: Data](implicit ev: TT =:!= Data): SomeClass.Aux[TT] = new SomeClass() { type T = TT }
}
val t: SomeClass = SomeClass() // <------------------ still compiles, bad
val tt: SomeClass.Aux[Remote.type] = SomeClass.apply[Remote.type] //compiles, good
val ttt: SomeClass.Aux[Data] = SomeClass.apply[Data] //does not compile, good
I want to prohibit val t: SomeClass = SomeClass() from compiling. Is it possible to do this somehow, other than by making SomeClass not a case class?
There is a solution that is usually used when you want to provide a smart constructor and the default one would break your invariants. To make sure that only you can create instances you should:
- prevent using apply
- prevent using new
- prevent using .copy
- prevent extending the class, where a child could call the constructor
This is achieved with this interesting pattern:
sealed abstract case class MyCaseClass private (value: String)
object MyCaseClass {
  def apply(value: String) = {
    // checking invariants and stuff
    new MyCaseClass(value) {}
  }
}
Here:
- abstract prevents generation of .copy and apply
- sealed prevents extending this class (final wouldn't allow abstract)
- the private constructor prevents using new
While it doesn't look pretty, it's pretty much bulletproof.
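A quick usage sketch of what this buys you (the value names are made up; the commented-out lines are the ones that no longer compile):
val ok = MyCaseClass("valid")              // goes through the smart constructor
// val copied = ok.copy(value = "other")   // does not compile: no copy method is generated
// val fresh = new MyCaseClass("other") {} // does not compile: the constructor is private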
As #LuisMiguelMejíaSuárez pointed out, this is not necessary in your exact case, but in general it can be used to deal with the edge cases of a case class with a smart constructor.
UPDATE:
In Scala 3 you only need to do
case class MyCaseClass private (value: String)
and it will prevent usage of apply, new and copy from outside of this class and its companion.
This behavior was ported to Scala 2.13 with the option -Xsource:3 enabled. You have to use at least 2.13.2, as in 2.13.1 this flag doesn't fix the issue.
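A sketch of what that looks like (assuming Scala 3, or 2.13.2+ with -Xsource:3; fromString and its emptiness check are made-up placeholders):
case class MyCaseClass private (value: String)
object MyCaseClass {
  // the companion can still reach the private constructor and private apply
  def fromString(value: String): Option[MyCaseClass] =
    if (value.nonEmpty) Some(MyCaseClass(value)) else None
}
// Outside the class and its companion, all of these are rejected:
// MyCaseClass("x")         // apply is private
// new MyCaseClass("x")     // the constructor is private
// someInstance.copy()      // copy is private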
So you can make the constructor private and ensure that T is also something different from Nothing.
I believe the best way to ensure the constructor is private (as well as many other things, as #MateuszKubuszok shows) is to use a (sealed) trait instead of a class:
(if you cannot use a trait for whatever reason, please refer to Mateusz's answer)
import shapeless._
sealed trait Data
final case object Remote extends Data
final case object Local extends Data
sealed trait SomeClass {
  type T <: Data
}
object SomeClass {
  type Aux[TT] = SomeClass { type T = TT }
  def apply[TT <: Data](implicit ev1: TT =:!= Data, ev2: TT =:!= Nothing): Aux[TT] =
    new SomeClass { override final type T = TT }
}
Which works like this:
SomeClass() // Does not compile.
SomeClass.apply[Remote.type] // Compiles.
SomeClass.apply[Data] // Does not compile.
You can see it running here.
If you want to prohibit using some of the auto-generated methods of a case class, you can define the methods (with the proper signatures) manually (then they will not be generated) and make them private (or private[this]).
Try
object SomeClass {
  type Aux[TT] = SomeClass { type T = TT }
  def apply[TT <: Data](implicit ev: TT =:!= Data): SomeClass.Aux[TT] = new SomeClass() { type T = TT }
  private def apply(): SomeClass = ??? // added
}
val t: SomeClass = SomeClass() // doesn't compile
val tt: SomeClass.Aux[Remote.type] = SomeClass.apply[Remote.type] //compiles
val ttt: SomeClass.Aux[Data] = SomeClass.apply[Data] //doesn't compile
In principle, the methods (apply, unapply, copy, hashCode, toString) can be generated not by the compiler itself but with macro annotations. Then you can choose any subset of them and modify their generation as you want.
Generate apply methods creating a class
how to efficiently/cleanly override a copy method
Also the methods can be generated using Shapeless case classes a la carte. Then you can switch on/off the methods as desired too.
https://github.com/milessabin/shapeless/blob/master/examples/src/main/scala/shapeless/examples/alacarte.scala
https://github.com/milessabin/shapeless/blob/master/core/src/test/scala/shapeless/alacarte.scala
Let's imagine I have the following base trait and case classes
sealed trait BaseTrait
case class Foo(x: Integer = 1) extends BaseTrait
case class Bar(x: String = "abc") extends BaseTrait
I would like to create a generic interface for classes which can process BaseTrait instances, something like the following
class FooProcessor(val model: FooModel) extends BaseProcessor[Foo] {
  val v: Option[Foo] = model.baseVar
}
class BarProcessor(val model: BarModel) extends BaseProcessor[Bar] {
  val v: Option[Bar] = model.baseVar
}
For this I have the following traits
trait BaseModel[T <: BaseTrait] {
  var baseVar: Option[T] = None
}
trait BaseProcessor[T <: BaseTrait] {
  def model: BaseModel[T]
  def process(x: T): Unit = model.baseVar = Option(x)
}
The model definitions are the following
class FooModel extends BaseModel[Foo]
class BarModel extends BaseModel[Bar]
Now let's imagine I have the following processors somewhere in my app
val fooProcessor = new FooProcessor(new FooModel)
val barProcessor = new BarProcessor(new BarModel)
I would like to handle them in a somewhat generic way, like this
def func[T <: BaseTrait](p: T) {
  val c/*: BaseProcessor[_ >: Foo with Bar <: BaseTrait with Product with Serializable]*/ = p match {
    case _: Foo => fooProcessor
    case _: Bar => barProcessor
  }
  c.process(p)
}
The compiler is not really happy about the last line; it says
type mismatch;
found : T
required: _1
If I understand correctly this is basically the compiler trying to prevent barProcessor.process(Foo()) from happening. I've tried a couple of solutions to get around this and achieve the desired behavior:
- the simplest way around this is calling the proper *Processor.process with the proper BaseTrait instance inside the match case, which seems to defy the whole point of handling them in a somewhat generic way
- use an abstract type in the BaseModel and BaseProcessor, which on one hand got rid of the somewhat unneeded type parameter in BaseModel, but the compiler's complaint is still valid and I was not able to figure out whether it's possible to get that to work
- get rid of the type parameter and constraint from the BaseModel, and just do a type cast in the processor to get the proper type, but the explicit type cast also isn't really what I was hoping for
Like so:
trait BaseModel {
  var baseVar: Option[BaseTrait] = None
}
trait BaseProcessor[T <: BaseTrait] {
  def model: BaseModel
  def process(x: T): Unit = model.baseVar = Some(x)
  def getBaseValue: Option[T] = model.baseVar.map(_.asInstanceOf[T])
}
I guess one could also somehow convince the compiler that the two types (the T of the Processor and the T of the func parameter p) are equivalent, but that also seems like overkill (and I'm also not really sure how it can be done).
So my question is the following: is it possible to do what I'm trying to achieve here (managing processors in a uniform way + each processor knows their specific type of BaseTrait) in a somewhat easy fashion? Is there a better model for this which I'm missing?
Update
As per Tim's answer, making the processors implicit solves the problem; however, if you want to have a class where you define your processors and have func on its interface, the compiler no longer seems to properly resolve the implicits. So if I try to do something like this
class ProcessorContainer {
  implicit val fooProcessor = new FooProcessor(new FooModel)
  implicit val barProcessor = new BarProcessor(new BarModel)
  def func[T <: BaseTrait](p: T) = typedFunc(p)
  private def typedFunc[T <: BaseTrait](p: T)(implicit processor: BaseProcessor[T]) =
    processor.process(p)
}
class Test {
  val processorContainer = new ProcessorContainer
  processorContainer.func(Foo())
  processorContainer.func(Bar())
}
I get the following compile error (one for Foo and one for Bar):
could not find implicit value for parameter processor: BaseProcessor[Foo]
not enough arguments for method
Is there a way around this? I could of course expose the processors so they can be passed in implicitly; however, I'd prefer not to do that.
You can create a simple typeclass by making the processors implicit and passing them as an extra argument to func:
implicit val fooProcessor = new FooProcessor(new FooModel)
implicit val barProcessor = new BarProcessor(new BarModel)
def func[T <: BaseTrait](p: T)(implicit processor: BaseProcessor[T]) =
  processor.process(p)
If you pass a Foo to func it will call FooProcessor.process on it, and if you pass a Bar to func it will call BarProcessor.process on it.
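For example, assuming the definitions above (and the model and processor classes from the question) are in scope:
func(Foo())   // resolves fooProcessor and calls FooProcessor.process
func(Bar())   // resolves barProcessor and calls BarProcessor.process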
Say I have a base abstract class and two case classes extending it.
sealed abstract class Base extends Product with Serializable
case class A(d: String) extends Base
case class B(d: Int) extends Base
And I also have a type class with instances for A and B, for example
trait Show[T] {
  def show(t: T): String
}
object Show {
  def apply[T](t: T)(implicit show: Show[T]): String = show.show(t)
  implicit val showA: Show[A] = new Show[A] {
    def show(t: A): String = "A"
  }
  implicit val showB: Show[B] = new Show[B] {
    def show(t: B): String = "B"
  }
}
The problem I have is that in my code I get A and B from deserialization and they have the type Base. In this case Scala fails to resolve the type classes because there is no type class instance defined for Base.
I could solve this problem by defining an instance for Base and doing a pattern match, but IMO at that point we'd better not use type classes at all.
Is there any trick we can use to make Scala resolve type classes for a base class?
Thanks.
No, there is no such trick. The type has to be statically known because it is the compiler that resolves the typeclass instance.
As you said, an instance for Base that dispatches to the proper instance has to be created. It can be done manually, but there might be better ways to do it. Maybe someone can provide a better answer on how to do it nicely, but fundamentally the instance for Base is needed.
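For reference, a minimal sketch of such a manually written delegating instance, reusing Base, A, B and Show from the question:
implicit val showBase: Show[Base] = new Show[Base] {
  def show(t: Base): String = t match {
    case a: A => Show(a) // delegates to showA
    case b: B => Show(b) // delegates to showB
  }
}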
What I would like to do is this:
trait Addable[T] {
  def plus(x: T): T
}
trait AddableWithBounds[T] extends Addable[T] {
  abstract override def plus(x: T): T = limitToBounds(super.plus(x))
  def limitToBounds(x: T): T = ... //Some bounds checking
}
class Impl(num: Int) extends AddableWithBounds[Impl] {
  override def plus(x: Impl) = new Impl(num + x.num)
}
Reading through various posts, it would seem that the reason this is not possible is that after class linearization the stacked trait only looks in the classes to its right for implementations of super.plus(x: T).
That is a technical argument though, and I'm interested in the question: is there a fundamental reason why the implementation could not be taken from the base class? As you can see in this example, it is not possible to implement the plus method before knowing the actual data type that needs to be added, but as far as I can see, implementing bounds checking in a trait seems reasonable.
Just as an idea: if the problem is that in this case it is unclear which plus is being overridden in the base class, maybe a scope identifier like
def Addable[Impl].plus(x: Impl) = new Impl(num + x.num)
would help.
I would avoid overriding and relying on super/sub-type invocation. Why not just implement plus and ask subtypes to provide an auxiliary abstract method:
trait Addable[T] {
  def plus(x: T): T
}
trait AddableWithBounds[T] extends Addable[T] {
  final def plus(x: T): T = limitToBounds(plusImpl(x))
  protected def plusImpl(x: T): T
  def limitToBounds(x: T): T = ??? //Some bounds checking
}
class Impl(num: Int) extends AnyRef with AddableWithBounds[Impl] {
  protected def plusImpl(x: Impl): Impl = ???
}
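A hypothetical concrete subtype, just to show how the pieces fit together (the field name num and the upper bound of 100 are made up for the example):
class BoundedInt(val num: Int) extends AddableWithBounds[BoundedInt] {
  protected def plusImpl(x: BoundedInt): BoundedInt = new BoundedInt(num + x.num)
  override def limitToBounds(x: BoundedInt): BoundedInt =
    if (x.num > 100) new BoundedInt(100) else x // clamp to the made-up bound
}
// new BoundedInt(70).plus(new BoundedInt(50)).num == 100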
You're asking: if super.f is abstract (or "incomplete") after mixin, please just pick a concrete f to serve. Maybe you're saying, only if there's exactly one such f, and it's defined in the current template.
One problem is that if the abstract f is called during initialization, and calls the concrete f in the subclass, the initializer for the subclass hasn't run yet.
This is a common gotcha for template methods. I think the Josh Bloch book says don't do that.
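A small sketch of that initialization gotcha (the names are hypothetical; the abstract f is called from the trait's initializer before the subclass's fields are set):
trait Template {
  def f: String                  // abstract, implemented in the subclass
  val cached = "value: " + f     // runs during Template's initialization
}
class Concrete extends Template {
  val data = "hello"             // not yet initialized when Template's initializer runs
  def f: String = data
}
// new Concrete().cached == "value: null"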
But the person writing the stackable trait has no way of knowing that a super was transformed in this way.
In Scala v 2.7.7
I have a file with
class Something[T] extends Other
object Something extends OtherConstructor[Something]
This throws the error:
class Something takes type parameters
object Something extends OtherConstructor[Something] {
However, I can't do this
object Something[T] extends OtherConstructor[Something[T]]
It throws an error:
error: ';' expected but '[' found.
Is it possible to pass type parameters to an object? Or should I change this and simply use OtherConstructor?
You could use:
object Something extends OtherConstructor[Something[_]]
You will of course be restricted by having an existential type with no upper bound in place instead of a concrete type. This solution may not make sense and you might need one object per concrete type T, for those T's which you care about, e.g.
object StringSomething extends OtherConstructor[Something[String]]
But then this has the (possible) disadvantage that StringSomething is not the companion object of Something.
However, my advice would be: don't start messing about designing generic APIs (especially self-referential ones like the above) unless you really, really know what you are doing. It will almost certainly end in tears, and there are plenty of core Java APIs which are terrible because of the way generics were added (the RowSorter API on JTable being one example).
You can solve the general problem of needing object Foo[T] by moving the type parameter to the methods in object Foo:
class Foo[T](t1: T, t2: T)
object Foo {
  def apply[T](x: T): Foo[T] = new Foo(x, x)
  def apply[T](x: T, y: T): Foo[T] = new Foo(x, y)
}
If you really need one object per T, you can make a class, and have the type-free companion return it from apply.
class Foo[T](t1: T, t2: T)
class FooCompanion[T] {
  def apply(x: T): Foo[T] = new Foo(x, x)
  def apply(x: T, y: T): Foo[T] = new Foo(x, y)
}
object Foo {
  def apply[T] = new FooCompanion[T]
}
object demo extends App {
  val x: Foo[Double] = Foo.apply.apply(1.23) // this is what is really happening
  val y: Foo[Int] = Foo[Int](123) // with the type both apply calls are automatic
}
Note this will re-construct the Foo[T] companion on each call so you would want to keep it light and stateless.
An explicit solution to the problem above:
class Other
class OtherConstructor[O <: Other] {
  def apply(o: O): O = o // constructor 1 in base class
}
class Something[T](value: T) extends Other
class SomethingConstructor[T] extends OtherConstructor[Something[T]] {
  def apply(o: T, s: String) = new Something[T](o) // constructor 2 in subclass
}
object Something {
  def apply[T] = new SomethingConstructor[T] // the "constructor constructor" method
}
object demoX extends App {
  val si = new Something(123)
  val sd = new Something(1.23)
  val si1: Something[Int] = Something[Int](si) // OtherConstructor.apply
  val sd1: Something[Double] = Something[Double](1.23, "hello") // SomethingConstructor[Double].apply
}
An object has to have a concrete type. The Scala object construct is not an exception to this rule.
A valid definition is
object Something extends OtherConstructor[Something[T]] { }
where T is some concrete type.
Thanks for the answers
object Something extends OtherConstructor[Something[_]]
seems to be compiling (although I have yet to run/test that :-))
#oxbow_lakes, I've followed your advice - of avoiding the type system - till now but I've got to do it!!!
I've been studying existential types, type erasure and all that, but it's still not in my grasp :-(