What is the preferred way to call functions that construct instances of types implementing a certain type class, e.g. deserialize in the following example:
// Simulacrum annotation
@typeclass trait Encodable[A] {
def deserialize(bytes: Seq[Byte]): Try[A] // Constructor
def serialize(proof: A): Seq[Byte]
}
// implementation Encodable for Int
implicit val IntEncodable: Encodable[Int] = new Encodable[Int] {
def deserialize(bytes: Seq[Byte]): Try[Int] = Success(bytes.head.toInt)
def serialize(value: Int): Seq[Byte] = List(value.toByte)
}
// import Simulacrum generated ops
import Encodable.ops._
// is it best practice to define a function like this for all constructor-like functions in the typeclass?
def deserialize[A](bytes: Vector[Byte])(implicit instance: Encodable[A]): Try[A] = instance.deserialize(bytes)
// or def deserialize[A:Encodable](bytes: Vector[Byte]): Try[A] = Encodable[A].deserialize(bytes)
// call constructor
val value: Int = deserialize(Vector(1)).get
// call method
println(value.serialize)
To me, this is overkill. My preferred way, and what I have seen in other code bases, is just:
val value: Int = Encodable[Int].deserialize(Vector(1)).get
I don't think much is gained by that additional deserialize method. I have also seen:
val value: Int = implicitly[Encodable[Int]].deserialize(Vector(1)).get
This might be more familiar to those who haven't seen Simulacrum before, but I don't think it's really necessary.
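For reference, the Encodable[Int] syntax works because Simulacrum generates a summoner apply method on the companion object. A minimal hand-written equivalent (roughly what the annotation produces; this sketch is not the exact generated code) looks like this:

// Sketch of a hand-written summoner; Simulacrum generates roughly this for you.
object Encodable {
  def apply[A](implicit instance: Encodable[A]): Encodable[A] = instance
}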
Hi, I have a class that does some JSON extraction and I want to provide a safe and an unsafe version.
Currently I have a class definition like this
class Safe {
def getA: Option[String] = ...
def getB: Option[Int] = ...
... etc ...
}
And then an Unsafe version of that which just delegates to the Safe class:
class Unsafe(delegate: Safe) {
def getA: String = delegate.getA.get
def getB: Int = delegate.getB.get
... etc ...
}
This works, but the main problem is that the delegation is maintained by hand: if we ever change anything about the Safe interface, someone has to manually make sure the change is reflected in the Unsafe class as well.
Is there a more idiomatic and less manual pattern in Scala to do this?
Here is an implementation of my proposal.
Define Extractor interface parameterized by a type constructor:
trait Extractor[F[_]] { outer =>
def getA: F[String]
def getB: F[Int]
/* insert `transform` here */
}
Implement a transform method once and for all, which takes an arbitrary F ~> G:
def transform[G[_]](natTrafo: F ~> G): Extractor[G] =
new Extractor[G] {
def getA: G[String] = natTrafo[String](outer.getA)
def getB: G[Int] = natTrafo[Int](outer.getB)
}
Here, F ~> G is the type of polymorphic functions that can transform any F[A] into a G[A] for an arbitrary type A (String, Int, or a thousand other types you might want to get out of your extractor):
trait ~>[F[_], G[_]] {
def apply[A](fa: F[A]): G[A]
}
This interface is quite ubiquitous: it's available in Scalaz and in Cats (where it's called FunctionK), and it is usually referred to as a "natural transformation".
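For a concrete feel of the Cats flavour, here is a minimal sketch (assuming a cats-core dependency; everything except FunctionK itself is my own naming) of a polymorphic Option-to-List conversion:

import cats.arrow.FunctionK

// A FunctionK has exactly the same shape as the ~> trait above:
// one apply method that works uniformly for every element type A.
val optionToList: FunctionK[Option, List] = new FunctionK[Option, List] {
  def apply[A](fa: Option[A]): List[A] = fa.toList
}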
Implement SafeExtractor:
class SafeExtractor extends Extractor[Option] {
def getA: Option[String] = None /* do sth. more clever here? */
def getB: Option[Int] = None
}
Get the unsafe extractor for free by passing a simple Option ~> Id implementation to transform:
type Id[X] = X
val safe: Extractor[Option] = new SafeExtractor()
val unsafe: Extractor[Id] = safe.transform(
new ~>[Option, Id] {
def apply[A](x: Option[A]): Id[A] = x.get
}
)
You can now also easily reuse the same transform function to convert an Extractor[Future] into an Extractor[Id] by awaiting results, an Extractor[Id] into an Extractor[Try] by catching errors, and so on (see the sketch after the full code below).
Full code
import scala.language.higherKinds
trait Extractor[F[_]] { outer =>
def getA: F[String]
def getB: F[Int]
def transform[G[_]](natTrafo: F ~> G): Extractor[G] =
new Extractor[G] {
def getA: G[String] = natTrafo[String](outer.getA)
def getB: G[Int] = natTrafo[Int](outer.getB)
}
}
/** A polymorphic function that can transform any
* `F[A]` into a `G[A]` for all possible `A`.
*/
trait ~>[F[_], G[_]] {
def apply[A](fa: F[A]): G[A]
}
class SafeExtractor extends Extractor[Option] {
def getA: Option[String] = None
def getB: Option[Int] = None
}
type Id[X] = X
val safe: Extractor[Option] = new SafeExtractor()
val unsafe: Extractor[Id] = safe.transform(
new ~>[Option, Id] {
def apply[A](x: Option[A]): Id[A] = x.get
}
)
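To make the reuse claim above concrete, here is a minimal sketch of the Future case; the timeout value and the someFutureExtractor in the comment are hypothetical:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

// A Future ~> Id that blocks on each field as it is requested.
def awaiting(timeout: Duration): Future ~> Id =
  new ~>[Future, Id] {
    def apply[A](fa: Future[A]): Id[A] = Await.result(fa, timeout)
  }

// val blocking: Extractor[Id] = someFutureExtractor.transform(awaiting(5.seconds))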
The answer by Andrey Tyukin is a good way to address this specific problem, but there is perhaps a bigger design issue.
Your code assumes that the structure of the JSON (as defined in Safe) matches the structure of the data in your code (as defined by Unsafe). This causes problems when you want to change the structure of the data in your code, or when the JSON format changes, or when you want to import data from a different source (e.g. XML), or when you want to do more complex validation of the data.
So the correct way to handle this is to design the application data structures (your Unsafe class) in the way that fits your application. You then provide a JSON-reading library that converts incoming data to that format. This library uses your Safe class internally. This can perform any validation/data conditioning that may be required, and can adjust to changes in the JSON format without affecting the rest of the system. Your unit testing framework will make sure that your two classes stay in sync.
This design pattern is an example of separation of concerns.
Here is some sample code:
In the main application:
// Data structure for use by application
case class Unsafe(a: String, b: Int)
// Abstract interface for loading data
trait Loader {
def load(): Unsafe
}
In the data loading library:
// JSON implementation of loading interface
object JsonLoader extends Loader {
protected case class Safe(
a: Option[String],
b: Option[Int]
)
def load(): Unsafe = {
val json = rawLibrary.readData() // Load data in JSON format
val safe: Safe = jsonLibrary.extract[Safe](json)
// Validate/condition the raw data here
val a = safe.a.getOrElse("")
val b = safe.b.getOrElse(0)
// Return the application data
Unsafe(a, b)
}
}
The mapping from JSON to application data is hidden inside the JsonLoader object. This makes it easier to keep the two representations in sync and allows the JSON format to change without affecting the wider code.
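For illustration, here is a hedged sketch of what a second loader for XML input might look like; it uses the scala-xml library and a hard-coded document as a stand-in for real input. The application code that consumes Unsafe does not change at all:

import scala.util.Try
import scala.xml.XML

// Hypothetical XML implementation of the same Loader interface
object XmlLoader extends Loader {
  def load(): Unsafe = {
    val xml = XML.loadString("<data><a>hello</a><b>42</b></data>") // stand-in for real input
    // Validate/condition the raw data here
    val a = (xml \ "a").headOption.map(_.text).getOrElse("")
    val b = (xml \ "b").headOption.flatMap(n => Try(n.text.toInt).toOption).getOrElse(0)
    // Return the application data
    Unsafe(a, b)
  }
}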
I'm trying to remove some of the boilerplate in an API I am writing.
Roughly speaking, my API currently looks like this:
def toEither[E <: WrapperBase](priority: Int)(implicit factory: (String, Int) => E): Either[E, T] = {
val either: Either[String, T] = generateEither()
either.left.map(s => factory(s, priority))
}
This means that the user has to provide an implicit factory for every E used. I am looking to replace this with a macro that gives a nice compile error if the user-provided type doesn't have the correct constructor parameters.
I have the following:
import scala.language.experimental.macros
import scala.reflect.macros.blackbox
object GenericFactory {
def create[T](ctorParams: Any*): T = macro createMacro[T]
def createMacro[T](c: blackbox.Context)(ctorParams: c.Expr[Any]*)(implicit wtt: c.WeakTypeTag[T]): c.Expr[T] = {
import c.universe._
c.Expr[T](q"new $wtt(..$ctorParams)")
}
}
If I provide a real type to this, GenericFactory.create[String]("hey"), I have no issues, but if I provide a generic type, GenericFactory.create[E]("hey"), then I get the following compile error: class type required but E found.
Where have I gone wrong? Or if what I want is NOT possible, is there anything else I can do to reduce the effort for the user?
Sorry, but I don't think you can make it work. The problem is that Scala (like Java) uses type erasure: there is only one compiled version of the code for all instantiations of a generic (ignoring value-type specialization, which is not important here). This means the macro is expanded only once, for the abstract E, rather than once for each concrete E supplied by the user. And there is no way to express the restriction that a generic type E must have a constructor with a given signature (if there were, you wouldn't need your macro in the first place). So it cannot work, because the compiler cannot generate a constructor call for a generic type E. That is exactly what the compiler is telling you: to generate a constructor call it needs a real class, not the generic E.
To put it differently, a macro is not a magic tool. Using a macro is just a way to rewrite a piece of code early in compilation; afterwards it is processed by the compiler in the usual way. What your macro does is rewrite
GenericFactory.create[E]("hey")
with something like
new E("hey")
If you just write that in your code, you'll get the same error (and probably will not be surprised).
I don't think you can avoid using your implicit factory. You could probably modify your macro to generate those implicit factories for valid types, but I don't think you can improve the code much beyond that.
Update: implicit factory and macro
If you have just one place where you need one particular constructor signature, I think the best you can do (or rather the best I can do ☺) is the following.
Sidenote: the whole idea comes from the "Implicit macros" article.
You define a StringIntCtor[T] typeclass trait and a macro that generates instances of it:
import scala.language.experimental.macros
import scala.reflect.macros._
trait StringIntCtor[T] {
def create(s: String, i: Int): T
}
object StringIntCtor {
implicit def implicitCtor[T]: StringIntCtor[T] = macro createMacro[T]
def createMacro[T](c: blackbox.Context)(implicit wtt: c.WeakTypeTag[T]): c.Expr[StringIntCtor[T]] = {
import c.universe._
val targetTypes = List(typeOf[String], typeOf[Int])
def testCtor(ctor: MethodSymbol): Boolean = {
if (ctor.paramLists.size != 1)
false
else {
val types = ctor.paramLists(0).map(sym => sym.typeSignature)
(targetTypes.size == types.size) && targetTypes.zip(types).forall(tp => tp._1 =:= tp._2)
}
}
val ctors = wtt.tpe.decl(c.universe.TermName("<init>"))
if (!ctors.asTerm.alternatives.exists(sym => testCtor(sym.asMethod))) {
c.abort(c.enclosingPosition, s"Type ${wtt.tpe} has no constructor with signature <init>${targetTypes.mkString("(", ", ", ")")}")
}
// Note that using fully qualified names for all types (except those imported by default) is important here
val res = c.Expr[StringIntCtor[T]](
q"""
(new so.macros.StringIntCtor[$wtt] {
override def create(s:String, i: Int): $wtt = new $wtt(s, i)
})
""")
//println(res) // log the macro
res
}
}
You use that trait as follows:
class WrapperBase(val s: String, val i: Int)
case class WrapperChildGood(override val s: String, override val i: Int, val float: Float) extends WrapperBase(s, i) {
def this(s: String, i: Int) = this(s, i, 0f)
}
case class WrapperChildBad(override val s: String, override val i: Int, val float: Float) extends WrapperBase(s, i) {
}
object EitherHelper {
type T = String
import scala.util._
val rnd = new Random(1)
def generateEither(): Either[String, T] = {
if (rnd.nextBoolean()) {
Left("left")
}
else {
Right("right")
}
}
def toEither[E <: WrapperBase](priority: Int)(implicit factory: StringIntCtor[E]): Either[E, T] = {
val either: Either[String, T] = generateEither()
either.left.map(s => factory.create(s, priority))
}
}
So now you can do:
val x1 = EitherHelper.toEither[WrapperChildGood](1)
println(s"x1 = $x1")
val x2 = EitherHelper.toEither[WrapperChildGood](2)
println(s"x2 = $x2")
//val bad = EitherHelper.toEither[WrapperChildBad](3) // compilation error generated by c.abort
and it will print
x1 = Left(WrapperChildGood(left,1,0.0))
x2 = Right(right)
If you have many different places where you want to ensure different constructors exist, you'll need to make the macro considerably more complicated so that it can generate constructor calls with arbitrary signatures passed in from the outside.
I am trying to support an abstraction of an ID type for a framework. Example here:
object AmINecessary {
case class StringConverter[T](op: String => T)
implicit val toInt = new StringConverter[Int](_.toInt)
implicit val toLong = new StringConverter[Long](_.toLong)
}
class Foo[ID] {
// ID can be String, Long, or Int
import AmINecessary._
// If ID is string, return string, otherwise convert to ID
def getID(id: String)(implicit c: StringConverter[ID] = null): ID = if (c == null) id.asInstanceOf[ID] else c.op(id)
}
This is then used as:
val fooString = new Foo[String]
val fooLong = new Foo[Long]
val fooInt = new Foo[Int]
fooString.getID("asdf") // "asdf":String
fooLong.getID("1234") // 1234:Long
fooInt.getID("1234") // 1234:Int
fooInt.getID("asdf") // java.lang.NumberFormatException
This works as expected. My questions are:
Using an optional implicit by defaulting it to null and then branching on it feels bad. What is the Scala way to accomplish this?
Is it really necessary to write implicit converters from a String to Long or Int?
I think the best option would be to simply add an implicit StringConverter[String] and remove the default null value.
That way your fooString works without risking a ClassCastException for every other type.
object AmINecessary {
case class StringConverter[T](op: String => T)
implicit val toInt = new StringConverter[Int](_.toInt)
implicit val toLong = new StringConverter[Long](_.toLong)
implicit val idConverter = new StringConverter[String](identity)
}
class Foo[ID] {
import AmINecessary.StringConverter
def getID(id: String)(implicit c: StringConverter[ID]): ID = c.op(id)
}
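A usage sketch of this version (the implicit instances must be in scope at the call site, hence the import); the String case now goes through idConverter, and an unsupported ID type becomes a compile-time error instead of a runtime ClassCastException:

import AmINecessary._

val fooString = new Foo[String]
val fooInt = new Foo[Int]

fooString.getID("asdf") // "asdf": String, via idConverter
fooInt.getID("1234")    // 1234: Int, via toInt
// new Foo[Double].getID("1.0")  // does not compile: no StringConverter[Double] in scope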
Regarding your question 2, the type class approach is not really necessary (but note that there are no implicit conversions here). You can also do it like this:
abstract class Foo[ID] {
def getID(id: String): ID
}
class FooInt extends Foo[Int] {
def getID(id: String) = id.toInt
}
class FooLong extends Foo[Long] {
def getID(id: String) = id.toLong
}
class FooString extends Foo[String] {
def getID(id: String) = id
}
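A quick usage sketch of the subclass approach, mirroring the calls from the question:

new FooInt().getID("1234")    // 1234: Int
new FooLong().getID("1234")   // 1234: Long
new FooString().getID("asdf") // "asdf": String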
1) About the implicit defaulting to null, you could just do:
object Unsafe {
implicit def toT[T]: StringConverter[T] = new StringConverter[T](_.asInstanceOf[T])
}
2) It doesn't seem like a good idea, though. First, because you're hiding asInstanceOf, which is an unsafe operation (a potential runtime exception). Second, the more explicit a conversion is, the better.
If you expect more complex transformations, it's better to return an Option from your getID method:
def getID[T](id: String)(converter: Option[StringConverter[T]] = None): Option[T] = converter.map(_.op(id))
However, default parameters aren't the best approach either. In general, I'd stick with a compile-time error that requires the user to either write their own converter or do an explicit asInstanceOf.
In your concrete case, asInstanceOf doesn't make much sense anyway, since the only type it would work for is String (getID[String]), and then what's the point of calling getID at all?
Ideally I'd like to be able to do the following in Scala:
import Builders._
val myBuilder = builder[TypeToBuild] // Returns instance of TypeToBuildBuilder
val obj = myBuilder.methodOnTypeToBuildBuilder(...).build()
In principle the goal is simply to be able to 'map' TypeToBuild to TypeToBuildBuilder using external mapping definitions (i.e. assume no ability to change these classes) and leverage this in type inferencing.
I got the following working with AnyRef types:
import Builders._
val myBuilder = builder(TypeToBuild)
myBuilder.methodOnTypeToBuildBuilder(...).build()
import scala.reflect.{ClassTag, classTag}
object Builders {
implicit val typeToBuildBuilderFactory =
new BuilderFactory[TypeToBuild.type, TypeToBuildBuilder]
def builder[T, B](typ: T)(implicit ev: BuilderFactory[T, B]): B = ev.create
}
class BuilderFactory[T, B: ClassTag] {
def create: B = classTag[B].runtimeClass.newInstance().asInstanceOf[B]
}
Note that the type is passed as a function argument rather than a type argument.
I'd be supremely happy just to find out how to get the above working with Any types, rather than just AnyRef types. It seems this limitation arises because singleton types are only supported for AnyRefs (i.e. my use of TypeToBuild.type).
That being said, an answer that solves the original 'ideal' scenario (using a type argument instead of a function argument) would be fantastic!
EDIT
A possible solution that requires classOf[_] (would really love not needing to use classOf!):
import Builders._
val myBuilder = builder(classOf[TypeToBuild])
myBuilder.methodOnTypeToBuildBuilder(...).build()
object Builders {
implicit val typeToBuildBuilderFactory =
new BuilderFactory[Class[TypeToBuild], TypeToBuildBuilder]
def builder[T, B](typ: T)(implicit ev: BuilderFactory[T, B]): B = ev.create
}
class BuilderFactory[T, B: ClassTag] {
def create: B = classTag[B].runtimeClass.newInstance().asInstanceOf[B]
}
Being able to just use builder(TypeToBuild) is really just a win in elegance/brevity. Being able to use builder[TypeToBuild] would be cool as perhaps this could one day work (with type inference advancements in Scala):
val obj: TypeToBuild = builder.methodOnTypeToBuildBuilder(...).build();
Here is a complete, working example using classOf: http://ideone.com/94rat3
Yes, Scala supports return types that depend on the parameter types. An example of this is methods in the collections API like map, which use the CanBuildFrom typeclass to return the desired collection type.
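As a small illustration (Scala 2.12-era collections; 2.13 replaced CanBuildFrom with a different mechanism), breakOut lets the expected result type drive which CanBuildFrom instance is selected, and therefore what map returns:

import scala.collection.breakOut

// The same map call builds a Map instead of a List because the expected
// result type selects a different CanBuildFrom instance.
val m: Map[Int, String] = List(1, 2, 3).map(i => i -> i.toString)(breakOut)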
I'm not sure what you are trying to do with your example code, but maybe you want something like:
trait Builder[-A, +B] {
def create(x: A): B
}
object Builders {
implicit val int2StringBuilder = new Builder[Int, String] {
def create(x: Int) = "a" * x
}
def buildFrom[A, B](x: A)(implicit ev: Builder[A, B]): B = ev.create(x)
}
import Builders._
buildFrom(5)
The magic with newInstance only works for concrete classes that have a constructor that takes no parameters, so it probably isn't generic enough to be useful.
If you're not afraid of implicit conversions, you could do something like this:
import scala.language.implicitConversions
trait BuilderMapping[TypeToBuild, BuilderType] {
def create: BuilderType
}
case class BuilderSpec[TypeToBuild]()
def builder[TypeToBuild] = BuilderSpec[TypeToBuild]
implicit def builderSpecToBuilder[TypeToBuild, BuilderType]
(spec: BuilderSpec[TypeToBuild])
(implicit ev: BuilderMapping[TypeToBuild, BuilderType]) = ev.create
case class Foo(count: Int)
case class FooBuilder() {
def translate(f: Foo) = "a" * f.count
}
implicit val FooToFooBuilder = new BuilderMapping[Foo, FooBuilder] {
def create = FooBuilder()
}
val b = builder[Foo]
println(b.translate(Foo(3)))
The implicit conversions aren't too bad, since they're constrained to these builder-oriented types. The conversion is needed to make b.translate valid.
It looked like wingedsubmariner's answer was most of what you wanted, but you didn't want to specify both TypeToBuild and BuilderType (and you didn't necessarily want to pass a value). To achieve that, we needed to break up that single generic signature into two parts, which is why the BuilderSpec type exists.
It might also be possible to use something like partial generic application (see the answers to a question that I asked earlier), though I can't put the pieces together in my head at the moment.
I'll resort to answering my own question since a Redditor ended up giving me the answer I was looking for and they appear to have chosen not to respond here.
// Stub classes so the example is self-contained
class A
class B
trait Buildable[T] {
type Result
def newBuilder: Result
}
object Buildable {
implicit object ABuildable extends Buildable[A] {
type Result = ABuilder
override def newBuilder = new ABuilder
}
implicit object BBuildable extends Buildable[B] {
type Result = BBuilder
override def newBuilder = new BBuilder
}
}
def builder[T](implicit B: Buildable[T]): B.Result = B.newBuilder
class ABuilder {
def method1() = println("Call from ABuilder")
}
class BBuilder {
def method2() = println("Call from BBuilder")
}
Then you will get:
scala> builder[A].method1()
Call from ABuilder
scala> builder[B].method2()
Call from BBuilder
You can see the reddit post here: http://www.reddit.com/r/scala/comments/2542x8/is_it_possible_to_define_a_function_return_type/
And a full working version here: http://ideone.com/oPI7Az
class foo(val x: Int) {
def convertToInt(z: String): Int = z.toInt // do something to convert a string to an integer
def this(y: String) = this(convertToInt(y))
}
Calling convertToInt in the auxiliary constructor (this(y: String)) causes this error:
error: not found: value convertToInt
I know I can use a singleton object and pack all static functions like convertToInt into it, but is it a good solution?
object foo {
def convertToInt(z: String): Int = z.toInt // do something to convert a string to an integer
}
class foo(val x: Int) {
def this(y: String) = this(foo.convertToInt(y))
}
I think in this case the best solution would be to use factory methods instead of a public constructor.
You can make your constructor private and provide factory apply methods in the companion object:
class Foo private (val x:Int)
object Foo {
def apply(i: Int) = new Foo(i)
def apply(s: String) = new Foo(convertToInt(s))
def convertToInt(s: String) = s.toInt
}
println(Foo(512).x)
println(Foo("256").x)
You can find more information about constructor vs factory method here:
Constructors vs Factory Methods
It's the same for Scala.
Update
As an example of an alternative solution, I made a very generic one. The Foo class can now work with any class that ever existed or may be created in the future, assuming that the type can be converted to and from Int (you define how the conversion works):
trait Convertable[From, To] {
def convert(from: From): To
}
object Convertable {
implicit val intString: Convertable[Int, String] = new Convertable[Int, String] {
def convert(from: Int) = from.toString // your logic here
}
implicit val stringInt: Convertable[String, Int] = new Convertable[String, Int] {
def convert(from: String) = from.toInt // your logic here
}
implicit def self[T]: Convertable[T, T] = new Convertable[T, T] {
def convert(from: T) = from
}
}
case class Foo[T](original: T)(implicit toInt: Convertable[T, Int], fromInt: Convertable[Int, T]) {
val x: Int = toInt convert original
def toOriginal = fromInt convert x
}
println(Foo(512).x)
println(Foo("256").x)
(I could have defined toOriginal by just returning original, but that would be too boring :)
As you can see, this solution is more generic but also more complicated. As far as I have seen, many applications need some kind of conversion between different primitive values and/or classes, so in many cases this is a suitable (and maybe even very good) solution, and it may be for yours as well. But it's often impossible to say what "the best" solution is for all possible cases.
By using Easy Angel's suggestion of a factory method instead of an auxiliary constructor:
class Foo(val s: String) {
def bar2: String = s + s
def bar3: List[Char] = s.toList
}
object Foo {
def bar1(y: List[Char]): String = y.mkString
def apply(s: String) = new Foo(s)
def apply(y: List[Char]) = new Foo(bar1(y))
}
client code:
val foo1 = Foo(List('a','b'))
println(foo1.s)
println(foo1.bar2)
println(foo1.bar3)
Your solution isn't that bad. After all, convertToInt is similar to a static method in Java. Personally I don't like auxiliary constructors, so I'd normally prefer Easy Angel's solution as well. However, if you plan to inherit from your class later, the companion object approach won't "scale" to the derived class: you would have to reimplement that method. In that case you should stick with your solution.
Theoretically you could put that method in a separate trait and extend it, but I wouldn't recommend this. Using inheritance should be limited to cases where there is a real dependency, not just for "convenience".