Consider a trait that performs "encoding" of arbitrary objects:
trait Encoder[R] {
  def encode(r: R): Array[Byte]
}
Assuming that encoding of primitive types is known and that custom types can be encoded by defining a "serializer":
trait Serializer[T] {
  def serialize(r: T): Array[Byte]
}
we can implement a macro for case class encoding by simply looping over the fields and looking up the serializers for their types implicitly. Here's a dummy implementation that looks up the serializer for R itself (in reality we look up serializers for the case class field types):
object Encoder {
  implicit def apply[R <: Product]: Encoder[R] = macro applyImpl[R]

  def applyImpl[R: c.WeakTypeTag](c: blackbox.Context): c.Expr[Encoder[R]] = {
    import c.universe._
    c.Expr[Encoder[R]](q"""
      new ${weakTypeOf[Encoder[R]]} {
        override def encode(r: ${weakTypeOf[R]}): Array[Byte] =
          implicitly[_root_.Serializer[${weakTypeOf[R]}]].serialize(r)
      }
    """)
  }
}
Now define a base "processor":
abstract class BaseProcessor[R: Encoder] {
  def process(r: R): Unit = {
    println(implicitly[Encoder[R]].encode(r).length)
  }
}
And try to use it:
case class Record(i: Int)
object Serializers {
  implicit def recordSerializer: Serializer[Record] =
    (r: Record) => Array.emptyByteArray
}
import Serializers._
class Processor extends BaseProcessor[Record]
This fails to compile with:
// [error] Loader.scala:10:22: could not find implicit value for parameter e: Serializer[Record]
// [error] class Processor extends BaseProcessor[Record]
// [error] ^
// [error] one error found
However the following do compile:
class Processor extends BaseProcessor[Record]()(Encoder[Record]) // Compiles!
object x { class Processor extends BaseProcessor[Record] } // Compiles!
I can't really understand why this happens. It looks like it has something to do with the Processor definition being a top-level definition, since as soon as I move it inside a class/object everything works as expected. Here's one more example that fails to compile:
object x {
  import Serializers._ // moving the import here also makes it NOT compile
  class Processor extends BaseProcessor[Record]
}
Why does this happen and is there any way to make it work?
To make it work you can try one of the following:
1) Add a context bound:
implicit def apply[R <: Product : Serializer]: Encoder[R] = macro applyImpl[R]

def applyImpl[R: c.WeakTypeTag](c: blackbox.Context)(serializer: c.Expr[Serializer[R]]): c.Expr[Encoder[R]] = {
  import c.universe._
  c.Expr[Encoder[R]](q"""
    new Encoder[${weakTypeOf[R]}] {
      override def encode(r: ${weakTypeOf[R]}): Array[Byte] =
        $serializer.serialize(r)
    }
  """)
}
2) Make the macro whitebox.
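A sketch of that change, keeping the macro body exactly as in the blackbox version above (only the Context type changes):

import scala.reflect.macros.whitebox

implicit def apply[R <: Product]: Encoder[R] = macro applyImpl[R]

// same body as the blackbox version above, only the context type differs
def applyImpl[R: c.WeakTypeTag](c: whitebox.Context): c.Expr[Encoder[R]] = ???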
3) Put the implicit instance recordSerializer: Serializer[Record] into the companion object of Record rather than into some object Serializers that has to be imported.
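For example, moving the instance from the question's Serializers object into the Record companion (everything else unchanged):

case class Record(i: Int)

object Record {
  implicit def recordSerializer: Serializer[Record] =
    (r: Record) => Array.emptyByteArray
}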
4) Maybe your example is simpler than your actual use case, but as written it seems you don't need macros at all:
implicit def apply[R <: Product : Serializer]: Encoder[R] = new Encoder[R] {
  override def encode(r: R): Array[Byte] = implicitly[Serializer[R]].serialize(r)
}
Related
I have this minimal example. I want to create encoders/decoders with circe semi-automatic derivation for the generic case class A[T]:
import io.circe.{Decoder, Encoder}
import io.circe.generic.semiauto._
import io.circe.syntax._
sealed trait MyTrait
object MyTrait {
  implicit val encoder: Encoder[MyTrait] = deriveEncoder
  implicit val decoder: Decoder[MyTrait] = deriveDecoder
}
case class A[T](value: T) extends MyTrait
object A {
  implicit def encoder[T: Encoder]: Encoder[A[T]] = deriveEncoder
  implicit def decoder[T: Decoder]: Decoder[A[T]] = deriveDecoder
}
This code does not compile and instead produces this error:
could not find Lazy implicit value of type io.circe.generic.encoding.DerivedAsObjectEncoder[A]
And the same for the decoder
What am I doing wrong here, and how can I get it working?
There are a few issues. One is that the MyTrait Encoder/Decoder will try to dispatch encoding/decoding to the codecs for the subtypes - since the trait is sealed, all possible subtypes are known, so that list can be obtained by the compiler.
BUT
While the MyTrait trait does not take type parameters, its only implementation A does, which basically turns its parameter into an existential type.
val myTrait: MyTrait = A(10)
myTrait match {
  case A(x) =>
    // x is of unknown type, at best you could use runtime reflection
    // but codecs are generated with compile time reflection
}
Even if you wanted to make these codecs manually, you have no way of doing it
object Scope {
  def aEncoder[T: Encoder]: Encoder[A[T]] = ...

  val myTraitEncoder: Encoder[MyTrait] = {
    case A(value) =>
      // value is of unknown type, how to decide what Encoder[T]
      // to pass into aEncoder?
  }
}
For similar reasons you couldn't manually implement Decoder.
To be able to implement a codec manually (which is more or less a prerequisite for being able to generate one), you can only remove type parameters as you go down into subclasses, never add them:
sealed trait MyTrait[T]
case class A[T](value: T) extends MyTrait[T]
This would make it possible to know which T is inside A, since it would be passed down from MyTrait, so you could write your codecs manually and they would work.
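A minimal sketch of such a hand-written encoder for that parameterized hierarchy, using circe's Encoder.instance; the JSON shape (a wrapping "A" object) is chosen here purely for illustration:

import io.circe.{Encoder, Json}

object MyTrait {
  // T is known here because it flows from MyTrait[T] into A[T]
  implicit def encoder[T](implicit tEncoder: Encoder[T]): Encoder[MyTrait[T]] =
    Encoder.instance {
      case A(value) => Json.obj("A" -> Json.obj("value" -> tEncoder(value)))
    }
}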
Another problem is that reliable codec generation for ADTs usually requires some configuration, e.g. whether or not to use a discriminator field. That is provided by circe-generic-extras. Once you use it, (semi)automatic derivation is possible:
import io.circe.{Decoder, Encoder}
import io.circe.generic.extras.Configuration
import io.circe.generic.extras.semiauto._
import io.circe.syntax._

// can be used to tweak e.g. the discriminator field name
implicit val config: Configuration = Configuration.default

sealed trait MyTrait[T]
object MyTrait {
  implicit def encoder[T: Encoder]: Encoder[MyTrait[T]] = deriveEncoder
  implicit def decoder[T: Decoder]: Decoder[MyTrait[T]] = deriveDecoder
}

case class A[T](value: T) extends MyTrait[T]
object A {
  implicit def encoder[T: Encoder]: Encoder[A[T]] = deriveEncoder
  implicit def decoder[T: Decoder]: Decoder[A[T]] = deriveDecoder
}
Among other solutions: if you really didn't want to use a type parameter in MyTrait but still wanted polymorphism in A, you could replace the unbound, generic A with another ADT (or sum type in Scala 3), so that the list of all possible implementations is known and can be enumerated, as sketched below.
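A sketch of that shape; the AValue/IntValue/StringValue names are made up here for illustration:

sealed trait MyTrait

// instead of A[T] with an unbound T, enumerate the payloads you actually need
sealed trait AValue
case class IntValue(value: Int) extends AValue
case class StringValue(value: String) extends AValue

case class A(value: AValue) extends MyTrait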
// x is of unknown type, at best you could use runtime reflection
I'll just comment that in principle you can define such an Encoder with runtime reflection (runtime compilation), based on the class of value in case class A[T](value: T):
import io.circe.{Encoder, Json}
import io.circe.syntax._

sealed trait MyTrait

case class A[T](value: T) extends MyTrait

object A {
  // custom codecs
  implicit val intEnc: Encoder[A[Int]] = new Encoder[A[Int]] {
    override def apply(a: A[Int]): Json = Json.obj("A" -> Json.obj("value" -> Json.fromInt(a.value)))
  }
  implicit val strEnc: Encoder[A[String]] = new Encoder[A[String]] {
    override def apply(a: A[String]): Json = Json.obj("value" -> Json.fromString(a.value))
  }
  implicit val boolEnc: Encoder[A[Boolean]] = new Encoder[A[Boolean]] {
    override def apply(a: A[Boolean]): Json = Json.fromBoolean(a.value)
  }
}
// runtime-reflection machinery used by the encoder below
import scala.reflect.runtime.{currentMirror => rm}
import scala.reflect.runtime.universe._
import scala.tools.reflect.ToolBox
val tb = rm.mkToolBox()

val Integer = classOf[Integer]
val Boolean = classOf[java.lang.Boolean]

def primitiveClass[T](runtimeClass: Class[_]): Class[_] = runtimeClass match {
  case `Integer` => classOf[Int]
  case `Boolean` => classOf[Boolean]
  //...
  case _ => runtimeClass
}
object MyTrait {
  implicit val encoder: Encoder[MyTrait] = new Encoder[MyTrait] {
    override def apply(a: MyTrait): Json = a match {
      case a1 @ A(t) =>
        tb.eval(q"implicitly[io.circe.Encoder[A[${rm.classSymbol(primitiveClass(t.getClass)).toType}]]]")
          .asInstanceOf[Encoder[A[_]]]
          .apply(a1)
    }
  }
}
(A(1): MyTrait).asJson.noSpaces // {"A":{"value":1}}
(A("a"): MyTrait).asJson.noSpaces //{"value":"a"}
(A(true): MyTrait).asJson.noSpaces //true
But for something like case class A[T](value: Int) such a trick would be impossible.
I have a trait for which I want to write typeclass instances. This trait is actually a contract for doing JSON to case class conversion and vice versa. The definition of the trait is as below:
trait Converter[T] {
  def convertFromJson(msg: String): Either[ConverterError, T]
  def convertToJson(msg: T): String
}
Now, for one of the case classes that I have, I have defined the implementation like this:
object Converter extends DefaultJsonProtocol {
  implicit object DefaultMessageConversions extends Converter[DefaultMessage] {
    implicit val json = jsonFormat(DefaultMessage, "timestamp")

    def convertFromJson(msg: String): Either[ConverterError, DefaultMessage] = {
      try {
        Right(msg.parseJson.convertTo[DefaultMessage])
      }
      catch {
        case _: Exception => Left(ConverterError("Shit happens"))
      }
    }

    def convertToJson(msg: DefaultMessage) = {
      implicit val writes = Json.writes[DefaultMessage]
      Json.toJson(msg).toString
    }
  }

  def apply[T: Converter]: Converter[T] = implicitly
}
But I run into a compiler error when I try to build my project. I'm not sure what I did wrong:
[error] /Users/joesan/ingestion/src/main/scala/com/my/project/common/JsonMessageConversion.scala:28: could not find implicit value for evidence parameter of type com.my.project.common.JsonMessageConversion.Converter.JF[org.joda.time.DateTime]
[error] implicit val json = jsonFormat(DefaultMessage, "timestamp")
Here is what my case class looks like:
case class DefaultMessage(timestamp: DateTime) extends KafkaMessage {
  def this() = this(DateTime.now(DateTimeZone.UTC))
}
Your DefaultMessage uses org.joda.time.DateTime and spray-json does not know how to serialize/deserialize it out of the box.
Therefore you need to define a RootJsonFormat[DateTime] and bring it in implicit scope.
Here is an example implementation.
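For instance, a minimal sketch assuming an ISO-8601 string representation; put it wherever it is in implicit scope at the jsonFormat call site (e.g. inside your Converter object):

import org.joda.time.DateTime
import org.joda.time.format.ISODateTimeFormat
import spray.json._

implicit object DateTimeJsonFormat extends RootJsonFormat[DateTime] {
  private val printer = ISODateTimeFormat.dateTime()
  private val parser  = ISODateTimeFormat.dateTimeParser()

  // serialize as an ISO-8601 string
  def write(dt: DateTime): JsValue = JsString(printer.print(dt))

  def read(value: JsValue): DateTime = value match {
    case JsString(s) => parser.parseDateTime(s)
    case other       => deserializationError(s"Expected ISO-8601 date string, got $other")
  }
}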
The use case for implicit macros is supposed to be the so-called "materialisation" of type class instances.
Unfortunately, the example in the documentation is a bit vague on how that is achieved.
Upon being invoked, the materializer can acquire a representation of T and generate the appropriate instance of the Showable type class.
Let's say I have the following trait ...
trait PrettyPrinter[T] {
  def printed(x: T): String
}

object PrettyPrinter {
  def pretty[T](x: T)(implicit pretty: PrettyPrinter[T]) = pretty printed x

  implicit def prettyList[T](implicit pretty: PrettyPrinter[T]) = new PrettyPrinter[List[T]] {
    def printed(x: List[T]) = x.map(pretty.printed).mkString("List(", ", ", ")")
  }
}
and three test classes
class A(val x:Int)
class B(val x:Int)
class C(val x:Int)
Now I understand that instead of writing the following boilerplate
implicit def aPrinter = new PrettyPrinter[A] {def printed(a:A) = s"A(${a.x})"}
implicit def bPrinter = new PrettyPrinter[B] {def printed(b:B) = s"B(${b.x})"}
implicit def cPrinter = new PrettyPrinter[C] {def printed(c:C) = s"C(${c.x})"}
we should be able to add
implicit def materialise[T]: PrettyPrinter[T] = macro implMaterialise[T]

def implMaterialise[T](c: blackbox.Context): c.Expr[PrettyPrinter[T]] = {
  import c.universe._
  ???
}
to the object PrettyPrinter{...} which then generates the corresponding PrettyPrinters on demand ... how? How do I actually get that "representation of T"?
If I try c.typeOf[T], for example, I get "No TypeTag available for T".
UPDATE
Trying to use class tags doesn't seem to work either.
implicit def materialise[T: ClassTag]: PrettyPrinter[T] = macro implMaterialise[T]

def implMaterialise[T: ClassTag](c: blackbox.Context): c.Expr[PrettyPrinter[T]] = {
  import c.universe._
  ???
}
results in
Error:(17, 69) macro implementations cannot have implicit parameters other than WeakTypeTag evidences
implicit def materialise[T:ClassTag] : PrettyPrinter[T] = macro implMaterialise[T]
^
UPDATE 2
Interestingly, using WeakTypeTags doesn't really change anything as
implicit def materialise[T: WeakTypeTag]: PrettyPrinter[T] = macro implMaterialise[T]

def implMaterialise[T](c: blackbox.Context)(implicit evidence: WeakTypeTag[T]): c.Expr[PrettyPrinter[T]] = {
  import c.universe._
  ???
}
will result in
Error:(18, 71) macro implementations cannot have implicit parameters other than WeakTypeTag evidences
implicit def materialise[T:WeakTypeTag]: PrettyPrinter[T] = macro implMaterialise[T]
^
How do I actually get that "representation of T"?
You need to use c.WeakTypeTag, as hinted at by the compiler message you found in your "UPDATE" section.
This project has a working example that you can adapt: https://github.com/underscoreio/essential-macros/blob/master/printtype/lib/src/main/scala/PrintType.scala
object PrintTypeApp extends App {
  import PrintType._

  printSymbol[List[Int]]
}

import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context
import scala.util.{ Try => ScalaTry }

object PrintType {
  // Macro that generates a `println` statement to print
  // declaration information of type `A`.
  //
  // This only prints meaningful output if we can inspect
  // `A` to get at its definition:
  def printSymbol[A]: Unit =
    macro PrintTypeMacros.printTypeSymbolMacro[A]
}

class PrintTypeMacros(val c: Context) {
  import c.universe._

  def printTypeSymbolMacro[A: c.WeakTypeTag]: c.Tree =
    printSymbol(weakTypeOf[A].typeSymbol, "")
}
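Adapted to the PrettyPrinter from the question, here is a minimal sketch of the materialiser (shown in a separate object for brevity; in practice it would live in the PrettyPrinter companion). The generated instance just falls back on toString; the point is only that weakTypeOf[T], enabled by the c.WeakTypeTag evidence on the macro implementation, gives you the "representation of T":

import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object PrettyPrinterMaterialiser {
  implicit def materialise[T]: PrettyPrinter[T] = macro implMaterialise[T]

  // the WeakTypeTag evidence goes on the macro *implementation*, not on the macro def
  def implMaterialise[T: c.WeakTypeTag](c: blackbox.Context): c.Expr[PrettyPrinter[T]] = {
    import c.universe._
    val tpe = weakTypeOf[T] // this is the "representation of T"
    c.Expr[PrettyPrinter[T]](q"""
      new PrettyPrinter[$tpe] {
        def printed(x: $tpe): String = ${tpe.toString} + "(" + x.toString + ")"
      }
    """)
  }
}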
Just when I thought I understood the basics of Scala's type system... :/
I'm trying to implement a class that reads the contents of a file and outputs a set of records. A record might be a single line, but it could also be a block of bytes, or anything. So what I'm after is a structure that allows the type of Reader to imply the type of the Record, which in turn will imply the correct Parser to use.
This structure works as long as MainApp.records(f) only returns one type of Reader. As soon as it can return more, I get this error:
could not find implicit value for parameter parser
I think the problem lies with the typed trait definitions at the top, but I cannot figure out how to fix the issue...
// Core traits
trait Record[T]

trait Reader[T] extends Iterable[Record[T]]

trait Parser[T] {
  def parse(r: Record[T]): Option[Int]
}

// Concrete implementations
class LineRecord[T] extends Record[T]

class FileReader[T](f: File) extends Reader[T] {
  val lines = Source.fromFile(f).getLines()

  def iterator: Iterator[LineRecord[T]] =
    new Iterator[LineRecord[T]] {
      def next() = new LineRecord[T]
      def hasNext = lines.hasNext
    }
}

trait TypeA
object TypeA {
  implicit object TypeAParser extends Parser[TypeA] {
    def parse(r: Record[TypeA]): Option[Int] = ???
  }
}

trait TypeB
object TypeB {
  implicit object TypeBParser extends Parser[TypeB] {
    def parse(r: Record[TypeB]): Option[Int] = ???
  }
}

// The "app"
object MainApp {
  def process(f: File) =
    records(f) foreach { r => parse(r) }

  def records(f: File) = {
    if (true)
      new FileReader[TypeA](f)
    else
      new FileReader[TypeB](f)
  }

  def parse[T](r: Record[T])(implicit parser: Parser[T]): Option[Int] =
    parser.parse(r)
}
First off, you must import the implicit objects in order to use them:
import TypeA._
import TypeB._
That's not enough though. It seems like you're trying to apply implicits dynamically. That's not possible; they have to be found at compile time.
If you import the objects as above and change records so that the compiler can find the correct generic type, it will run fine:
def records(f: File) = new FileReader[TypeA](f)
But then it may not be what you were looking for ;)
The problem is that the return type of your records method is basically FileReader[_] (since it can return either FileReader[TypeA] or FileReader[TypeB]), and you don't provide an implicit argument of type Parser[Any]. If you remove the if-expression the return type is inferred as FileReader[TypeA], which works fine. I'm not sure what you're trying to do, but obviously the compiler can't select an implicit argument based upon a type that is only known at runtime.
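For contrast, a sketch (using the question's types; the processAs name is made up here) where the element type is fixed at compile time at the call site, so the implicit can be resolved:

object MainApp {
  // T is known statically wherever processAs is called, so Parser[T] can be found
  def processAs[T](f: File)(implicit parser: Parser[T]): Unit =
    new FileReader[T](f) foreach { r => parser.parse(r) }

  def process(f: File): Unit =
    if (true) processAs[TypeA](f) else processAs[TypeB](f)
}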
1) Using a type that carries an implicit as a type parameter does not bind that implicit to the host type. To do that, change the objects to traits and mix them in instead of generalizing (type-parametrizing):
def records(f: File) = {
  if (true)
    new FileReader(f) with TypeA
  else
    new FileReader(f) with TypeB
}
2) The parser should be in the scope of the function that calls parse. So you may try something like this:
def process(f: File) = {
  val reader = records(f)
  import reader._
  reader foreach { r => parse(r) }
}
Plan B) A simpler alternative is to define type-parameter-specific implicit classes inside MainApp (or some trait mixed into it), but it will only work if TypeA/TypeB is known at compile time, so that the records method can return a concrete type:
implicit class TypeAParser(r: Record[TypeA]) {
  def parse: Option[Int] = ???
}

implicit class TypeBParser(r: Record[TypeB]) {
  def parse: Option[Int] = ???
}

def process[T <: TypeAorB](f: File) =
  records[T](f).foreach(_.parse)

def records[T <: TypeAorB](f: File) = new FileReader[T](f)
Here is, I think, the full set of modifications you need to do to get where I think you want to go.
import scala.io.Source
import java.io.File
import reflect.runtime.universe._

// Core traits
trait Record[+T]

trait Reader[+T] extends Iterable[Record[T]]

trait Parser[-T] {
  def parse(r: Record[T]): Option[Int]
}

// Concrete implementations [unmodified]
class LineRecord[T] extends Record[T]

class FileReader[T](f: File) extends Reader[T] {
  val lines = Source.fromFile(f).getLines()

  def iterator: Iterator[LineRecord[T]] =
    new Iterator[LineRecord[T]] {
      def next() = new LineRecord[T]
      def hasNext = lines.hasNext
    }
}
sealed trait Alternatives

case class TypeA() extends Alternatives
object TypeA {
  implicit object TypeAParser extends Parser[TypeA] {
    def parse(r: Record[TypeA]): Option[Int] = ???
  }
}

case class TypeB() extends Alternatives
object TypeB {
  implicit object TypeBParser extends Parser[TypeB] {
    def parse(r: Record[TypeB]): Option[Int] = ???
  }
}
class ParseAlternator(parserA: Parser[TypeA], parserB: Parser[TypeB]) extends Parser[Alternatives] {
  def parse(r: Record[Alternatives]): Option[Int] = r match {
    case x: Record[TypeA @unchecked] if typeOf[Alternatives] =:= typeOf[TypeA] => parserA.parse(x)
    case x: Record[TypeB @unchecked] if typeOf[Alternatives] =:= typeOf[TypeB] => parserB.parse(x)
  }
}

object ParseAlternator {
  implicit def parseAlternator(implicit parserA: Parser[TypeA], parserB: Parser[TypeB]): Parser[Alternatives] =
    new ParseAlternator(parserA, parserB)
}
// The "app"
object MainApp {
import ParseAlternator._
def process(f: File) =
records(f) foreach { r => parse(r) }
def records(f: File): Reader[Alternatives] = {
if(true)
new FileReader[TypeA](f)
else
new FileReader[TypeB](f)
}
def parse[T](r: Record[T])(implicit parser: Parser[T]): Option[Int] =
parser.parse(r)
}
The gist of it is: all of this would be completely classical if only your parse instance did not have to pattern-match on a generic type but dealt directly with an Alternatives instead.
It's this limitation (inherited from the JVM's erasure) that Scala can't properly pattern-match on an object of a parametric type which forces the reflection & typeOf usage here. Without it, you would just have type alternatives for your content (TypeA, TypeB), which you would add to a sealed trait and dispatch on in an implicit that produces a Parser for their supertype.
Of course this isn't the only solution, it's just what I think is the meeting point of what's closest to what you're trying to do, with what's most idiomatic.
I'm trying to add an implicit value to (what I believe is) the companion object of a case class, but this implicit value is not found.
I'm trying to achieve something like the following:
package mypackage
object Main {
  def main(args: Array[String]): Unit = {
    val caseClassInstance = MyCaseClass("string")
    val out: DataOutput = ...
    serialize(out, caseClassInstance)
    // the above line makes the compiler complain that there is no
    // Serializer[MyCaseClass] in scope
  }

  def serialize[T : Serializer](out: DataOutput, t: T): Unit = {
    ...
  }
}

object MyCaseClass {
  // implicits aren't found here
  implicit val serializer: Serializer[MyCaseClass] = ...
}

case class MyCaseClass(s: String) {
  // some other methods
}
I've explicitly added the package here to show that both the MyCaseClass case class and object should be in scope. I know that the object is actually being constructed because I can get this to compile if I add
implicit val serializer = MyCaseClass.serializer
to main (though notably not if I add import MyCaseClass.serializer).
I'm concerned that the MyCaseClass object is not actually a companion of the case class, because if I explicitly define apply and unapply on the object and then attempt to call MyCaseClass.apply("string") in main, the compiler gives the following error:
ambiguous reference to overloaded definition,
both method apply in object MyCaseClass of type (s: String)mypackage.MyCaseClass
and method apply in object MyCaseClass of type (s: String)mypackage.MyCaseClass
match argument types (String)
val a = InputRecord.apply("string")
^
If it's not possible to take this approach, is there a way to use type classes with case classes without creating an implicit value every time it must be brought into scope?
EDIT: I'm using scala 2.10.3.
EDIT 2: Here's the example fleshed out:
package mypackage

import java.io.{DataOutput, DataOutputStream}

object Main {
  def main(args: Array[String]): Unit = {
    val caseClassInstance = MyCaseClass("string")
    val out: DataOutput = new DataOutputStream(System.out)
    serialize(out, caseClassInstance)
    // the above line makes the compiler complain that there is no
    // Serializer[MyCaseClass] in scope
  }

  def serialize[T : Serializer](out: DataOutput, t: T): Unit = {
    implicitly[Serializer[T]].write(out, t)
  }
}

object MyCaseClass {
  // implicits aren't found here
  implicit val serializer: Serializer[MyCaseClass] = new Serializer[MyCaseClass] {
    override def write(out: DataOutput, t: MyCaseClass): Unit = {
      out.writeUTF(t.s)
    }
  }
}

case class MyCaseClass(s: String) {
  // some other methods
}

trait Serializer[T] {
  def write(out: DataOutput, t: T): Unit
}
This actually compiles, though. I am getting this issue when using Scoobi's WireFormat[T] instead of Serializer, but can't provide a concise, runnable example due to complexity and the Scoobi dependency. I will try to create a more relevant example, but it seems as though the issue is not as general as I thought.
It turns out that the type class instances actually need to be implicit values, rather than objects. The MyCaseClass object above works because its serializer is assigned to an implicit value. However, this implementation
object MyCaseClass {
  implicit object MyCaseClassSerializer extends Serializer[MyCaseClass] {
    override def write(out: DataOutput, t: MyCaseClass): Unit = {
      out.writeUTF(t.s)
    }
  }
}
fails with the error
Main.scala:9: error: could not find implicit value for evidence parameter of type mypackage.Serializer[mypackage.MyCaseClass]
serialize(out, caseClassInstance)
^
In my real code, I was using an auxiliary function to generate the Serializer[T] (see https://github.com/NICTA/scoobi/blob/24f48008b193f4e87b9ec04d5c8736ce0725d006/src/main/scala/com/nicta/scoobi/core/WireFormat.scala#L137). Despite the function having its own explicit return type, the type of the assigned value was not being inferred correctly by the compiler.
Below is the full example from the question with such a Serializer-generator.
package mypackage

import java.io.{DataOutput, DataOutputStream}

object Main {
  import Serializer._

  def main(args: Array[String]): Unit = {
    val caseClassInstance = MyCaseClass("string")
    val out: DataOutput = new DataOutputStream(System.out)
    serialize(out, caseClassInstance)
  }

  def serialize[T : Serializer](out: DataOutput, t: T): Unit = {
    implicitly[Serializer[T]].write(out, t)
  }
}

object MyCaseClass {
  import Serializer._

  // does not compile without Serializer[MyCaseClass] type annotation
  implicit val serializer: Serializer[MyCaseClass] =
    mkCaseSerializer(MyCaseClass.apply _, MyCaseClass.unapply _)
}

case class MyCaseClass(s: String)

trait Serializer[T] {
  def write(out: DataOutput, t: T): Unit
}

object Serializer {
  // does not compile without Serializer[String] type annotation
  implicit val stringSerializer: Serializer[String] = new Serializer[String] {
    override def write(out: DataOutput, s: String): Unit = {
      out.writeUTF(s)
    }
  }

  class CaseClassSerializer[T, A : Serializer](
      apply: A => T, unapply: T => Option[A]) extends Serializer[T] {
    override def write(out: DataOutput, t: T): Unit = {
      implicitly[Serializer[A]].write(out, unapply(t).get)
    }
  }

  def mkCaseSerializer[T, A : Serializer]
      (apply: A => T, unapply: T => Option[A]): Serializer[T] =
    new CaseClassSerializer(apply, unapply)
}
This related, simple code below prints 1.
object A {
  implicit def A2Int(a: A) = a.i1
}

case class A(i1: Int, i2: Int)

object Run extends App {
  val a = A(1, 2)
  val i: Int = a
  println(i)
}