I want to generate a bunch of objects at compile time that follow a simple pattern, so I wrote the following macro:
object MyMacro {
  def readWrite[T](taName: String, readParse: String => T, label: String, format: T => String): Any = macro readWriteImpl[T]

  def readWriteImpl[T: c.WeakTypeTag](c: Context)(taName: c.Expr[String], readParse: c.Expr[String => T], label: c.Expr[String], format: c.Expr[T => String]): c.Expr[Any] = {
    import c.universe._

    def termName(s: c.Expr[String]): TermName = s.tree match {
      case Literal(Constant(s: String)) => TermName(s)
      case _ => c.abort(c.enclosingPosition, "Not a string literal")
    }

    c.Expr[Any](q"""
      object ${termName(taName)} extends TypeAdapter.=:=[${implicitly[c.WeakTypeTag[T]].tpe}] {
        def read[WIRE](path: Path, reader: Transceiver[WIRE], isMapKey: Boolean = false): ${implicitly[c.WeakTypeTag[T]].tpe} =
          reader.readString(path) match {
            case null => null.asInstanceOf[${implicitly[c.WeakTypeTag[T]].tpe}]
            case s => Try( $readParse(s) ) match {
              case Success(d) => d
              case Failure(u) => throw new ReadMalformedError(path, "Failed to parse "+${termName(label)}+" from input '"+s+"'", List.empty[String], u)
            }
          }
        def write[WIRE](t: ${implicitly[c.WeakTypeTag[T]].tpe}, writer: Transceiver[WIRE], out: Builder[Any, WIRE]): Unit =
          t match {
            case null => writer.writeNull(out)
            case _ => writer.writeString($format(t), out)
          }
      }
    """)
  }
}
I'm not sure I have the return value for readWrite and readWriteImpl correct--the compiler complains mightily about some assertion failure!
I'm also not sure how to actually consume this macro. First I tried (in a separate compilation unit):
object TimeFactories {
  MyMacro.readWrite[Duration](
    "DurationTypeAdapterFactory",
    (s: String) => Duration.parse(s),
    "Duration",
    (t: Duration) => t.toString)
}
Didn't work. If I tried to reference TimeFactories.DurationTypeAdapterFactory I got an error saying it wasn't found. Next I thought I'd try assigning it to a val... that didn't work either:
object Foo {
  val duration = MyMacro.readWrite[Duration](
    "DurationTypeAdapterFactory",
    (s: String) => Duration.parse(s),
    "Duration",
    (t: Duration) => t.toString).asInstanceOf[TypeAdapterFactory]
}
How can I wire this up so I get generated code compiled like this:
object TimeFactories {
  object DurationTypeAdapterFactory extends TypeAdapter.=:=[Duration] {
    def read[WIRE](path: Path, reader: Transceiver[WIRE], isMapKey: Boolean = false): Duration =
      reader.readString(path) match {
        case null => null.asInstanceOf[Duration]
        case s => Try( Duration.parse(s) ) match {
          case Success(d) => d
          case Failure(u) => throw new ReadMalformedError(path, "Failed to parse Duration from input '"+s+"'", List.empty[String], u)
        }
      }
    def write[WIRE](t: Duration, writer: Transceiver[WIRE], out: Builder[Any, WIRE]): Unit =
      t match {
        case null => writer.writeNull(out)
        case _ => writer.writeString(t.toString, out)
      }
  }
  // ... More invocations of the readWrite macro with other types for T
}
I don't think that you can generate new identifiers using macros and then use them publicly.
Instead, try replacing object ${termName(taName)} extends TypeAdapter ... with simply new TypeAdapter ... and assign the invocation of the macro to a val (as in your second example). You will then reference an anonymous (and generated) class stored in a val. The taName parameter becomes redundant.
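A minimal sketch of that change, assuming the rest of readWriteImpl stays exactly as above (the quasiquote body is elided here):
c.Expr[Any](q"""
  new TypeAdapter.=:=[${implicitly[c.WeakTypeTag[T]].tpe}] {
    // the same read/write definitions as in the original quasiquote ...
  }
""")
At the call site, the val from your second attempt then holds the generated anonymous instance, and taName disappears from the signature:
object Foo {
  val duration = MyMacro.readWrite[Duration](
    (s: String) => Duration.parse(s),
    "Duration",
    (t: Duration) => t.toString)
}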
I am trying to make some Scala functions that would help build Flink map and filter operations that redirect their errors to a dead letter queue.
However, I'm struggling with Scala's type erasure, which prevents me from making them generic. The implementation of mapWithDeadLetterQueue below does not compile.
sealed trait ProcessingResult[T]
case class ProcessingSuccess[T, U](result: U) extends ProcessingResult[T]
case class ProcessingError[T: TypeInformation](errorMessage: String, exceptionClass: String, stackTrace: String, sourceMessage: T) extends ProcessingResult[T]

object FlinkUtils {
  // https://stackoverflow.com/questions/1803036/how-to-write-asinstanceofoption-in-scala
  implicit class Castable(val obj: AnyRef) extends AnyVal {
    def asInstanceOfOpt[T <: AnyRef : ClassTag] = {
      obj match {
        case t: T => Some(t)
        case _ => None
      }
    }
  }

  def mapWithDeadLetterQueue[T: TypeInformation, U: TypeInformation](source: DataStream[T], func: (T => U)): (DataStream[U], DataStream[ProcessingError[T]]) = {
    val mapped = source.map(x => {
      val result = Try(func(x))
      result match {
        case Success(value) => ProcessingSuccess(value)
        case Failure(exception) => ProcessingError(exception.getMessage, exception.getClass.getName, exception.getStackTrace.mkString("\n"), x)
      }
    })
    val mappedSuccess = mapped.flatMap((x: ProcessingResult[T]) => x.asInstanceOfOpt[ProcessingSuccess[T, U]]).map(x => x.result)
    val mappedFailure = mapped.flatMap((x: ProcessingResult[T]) => x.asInstanceOfOpt[ProcessingError[T]])
    (mappedSuccess, mappedFailure)
  }
}
I get:
[error] FlinkUtils.scala:35:36: overloaded method value flatMap with alternatives:
[error] [R](x$1: org.apache.flink.api.common.functions.FlatMapFunction[Product with Serializable with ProcessingResult[_ <: T],R], x$2: org.apache.flink.api.common.typeinfo.TypeInformation[R])org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator[R] <and>
[error] [R](x$1: org.apache.flink.api.common.functions.FlatMapFunction[Product with Serializable with ProcessingResult[_ <: T],R])org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator[R]
[error] cannot be applied to (ProcessingResult[T] => Option[ProcessingSuccess[T,U]])
[error] val mappedSuccess = mapped.flatMap((x: ProcessingResult[T]) => x.asInstanceOfOpt[ProcessingSuccess[T,U]]).map(x => x.result)
Is there a way to make this work?
OK, I'm going to answer my own question. I made a couple of mistakes:
First of all, I accidentally included the Java DataStream class instead of the Scala DataStream class (this happens all the time). The Java variant obviously doesn't accept a Scala lambda for map/filter/flatMap.
Second, sealed traits are not supported by Flink's serialisation. There is a project that should solve it but I didn't try it yet.
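For the first mistake, the fix is a one-line import; as a reminder, the Scala API lives in its own package and the wildcard also brings the implicit TypeInformation machinery into scope:
import org.apache.flink.streaming.api.scala._ // Scala DataStream, not org.apache.flink.streaming.api.datastream.DataStream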
Solution: first, I didn't use a sealed trait but a simple case class with two Options (a bit less expressive, but it still works):
case class ProcessingError[T](errorMessage: String, exceptionClass: String, stackTrace: String, sourceMessage: T)
case class ProcessingResult[T: TypeInformation, U: TypeInformation](result: Option[U], error: Option[ProcessingError[T]])
Then I could get everything working like so:
object FlinkUtils {
  def mapWithDeadLetterQueue[T: TypeInformation: ClassTag, U: TypeInformation: ClassTag]
      (source: DataStream[T], func: (T => U)):
      (DataStream[U], DataStream[ProcessingError[T]]) = {
    implicit val typeInfo = TypeInformation.of(classOf[ProcessingResult[T, U]])
    val mapped = source.map((x: T) => {
      val result = Try(func(x))
      result match {
        case Success(value) => ProcessingResult[T, U](Some(value), None)
        case Failure(exception) => ProcessingResult[T, U](None, Some(
          ProcessingError(exception.getMessage, exception.getClass.getName,
            exception.getStackTrace.mkString("\n"), x)))
      }
    })
    val mappedSuccess = mapped.flatMap((x: ProcessingResult[T, U]) => x.result)
    val mappedFailure = mapped.flatMap((x: ProcessingResult[T, U]) => x.error)
    (mappedSuccess, mappedFailure)
  }
}
The flatMap and filter functions look very similar, but they use a ProcessingResult[T, List[T]] and a ProcessingResult[T, T] respectively; a sketch of the filter variant is shown below.
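For illustration, a hypothetical sketch of the filter variant under those assumptions (not the original code; a throwing predicate is recorded as an error, a false result simply drops the element):
def filterWithDeadLetterQueue[T: TypeInformation: ClassTag]
    (source: DataStream[T], pred: (T => Boolean)):
    (DataStream[T], DataStream[ProcessingError[T]]) = {
  implicit val typeInfo = TypeInformation.of(classOf[ProcessingResult[T, T]])
  val mapped = source.map((x: T) => {
    Try(pred(x)) match {
      case Success(true) => ProcessingResult[T, T](Some(x), None) // kept
      case Success(false) => ProcessingResult[T, T](None, None) // filtered out, no error
      case Failure(exception) => ProcessingResult[T, T](None, Some(
        ProcessingError(exception.getMessage, exception.getClass.getName,
          exception.getStackTrace.mkString("\n"), x)))
    }
  })
  (mapped.flatMap((x: ProcessingResult[T, T]) => x.result),
   mapped.flatMap((x: ProcessingResult[T, T]) => x.error))
}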
I use the functions like this:
val (result, errors) = FlinkUtils.filterWithDeadLetterQueue(input, (x: MyMessage) => {
  x.`type` match {
    case "something" => throw new Exception("how how how")
    case "something else" => false
    case _ => true
  }
})
I'm using spray-json and I need to parse the given request body (PATCH, POST). Request body attributes can have the following possibilities, represented by Either[Unit.type, Option[A]]:
value        Not given                Left[Unit.type]
value=null   Null                     Right(None)
value=XXX    Some value is provided   Right(Some(value))
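For illustration, a hypothetical helper that maps a raw attribute to that tri-state (the name readTristate is mine, not part of the original code):
def readTristate[A](fields: Map[String, JsValue], name: String)(read: JsValue => A): Either[Unit.type, Option[A]] =
  fields.get(name) match {
    case None => Left(Unit) // attribute absent from the body
    case Some(JsNull) => Right(None) // attribute explicitly set to null
    case Some(js) => Right(Some(read(js))) // attribute carries a value
  }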
Using the above possibilities I need to create an entity from the request body. While parsing I need to validate each field with some business logic (string length, integer range, ...).
I have the following function for the business logic validation.
def validateValue[T](fieldName: String,
                     maybeValue: Try[T],
                     businessValidation: T => Boolean): Option[T] = {
  maybeValue match {
    case Success(value) if businessValidation(value) => Some(value)
    case _ => None
  }
}
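For instance, hypothetical calls would behave like this:
validateValue("age", Try("42".toInt), (n: Int) => n >= 0 && n < 150) // Some(42)
validateValue("age", Try("abc".toInt), (n: Int) => n >= 0)           // None: the parse failed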
Similarly, there is another function, readFieldWithValidation, where I parse each attribute based on the input type and apply the business validation.
def readFieldWithValidation[S, T](fields: Map[String, JsValue], fieldName: String, businessValidation: T => Boolean)(
    parse: S => T
): Option[T] = {
  fields.get(fieldName) match {
    case None => None
    case Some(jsValue) =>
      jsValue match {
        case jsString: JsString =>
          validateValue(fieldName, Try(parse(jsString.value)), businessValidation)
        case JsNumber(jsNumber) =>
          validateValue(fieldName, Try(parse(jsNumber.intValue)), businessValidation)
        case _ => None
      }
  }
}
I have S (source) and T (target): given a JsValue, the function returns a value of type T. Here I only care about JsString and JsNumber.
The above code gives a type mismatch error:
<console>:112: error: type mismatch;
found : jsString.value.type (with underlying type String)
required: S
validateValue(fieldName, Try(parse(jsString.value)), businessValidation)
^
<console>:114: error: type mismatch;
found : Int
required: S
validateValue(fieldName, Try(parse(jsNumber.intValue)), businessValidation)
Can someone help me overcome this error?
This is how I would use the above function:
val attributes = Map("String" -> JsString("ThisIsString"), "Int" -> JsNumber(23))
def stringLengthConstraint(min: Int, max: Int)(value: String) = value.length > min && value.length < max
readFieldWithValidation[JsString, String](attributes, "String", stringLengthConstraint(1, 10))(_.toString)
Your example is still not quite clear, because it does not show the role of parse, and it actually looks contradictory to the other code: in readFieldWithValidation[JsString, String] you specify the generic parameter S as JsString, but given the current (broken) readFieldWithValidation implementation, your parse argument is probably expected to be of type String => String, because jsString.value is a String.
Anyway, here is a piece of code that seems to implement something hopefully close to what you want:
trait JsValueExtractor[T] {
  def getValue(jsValue: JsValue): Option[T]
}

object JsValueExtractor {
  implicit val decimalExtractor = new JsValueExtractor[BigDecimal] {
    override def getValue(jsValue: JsValue) = jsValue match {
      case JsNumber(jsNumber) => Some(jsNumber)
      case _ => None
    }
  }
  implicit val intExtractor = new JsValueExtractor[Int] {
    override def getValue(jsValue: JsValue) = jsValue match {
      case JsNumber(jsNumber) => Some(jsNumber.intValue)
      case _ => None
    }
  }
  implicit val doubleExtractor = new JsValueExtractor[Double] {
    override def getValue(jsValue: JsValue) = jsValue match {
      case JsNumber(jsNumber) => Some(jsNumber.doubleValue)
      case _ => None
    }
  }
  implicit val stringExtractor = new JsValueExtractor[String] {
    override def getValue(jsValue: JsValue) = jsValue match {
      case JsString(string) => Some(string)
      case _ => None
    }
  }
}
def readFieldWithValidation[S, T](fields: Map[String, JsValue], fieldName: String, businessValidation: T => Boolean)
                                 (parse: S => T)(implicit valueExtractor: JsValueExtractor[S]) = {
  fields.get(fieldName)
    .flatMap(jsValue => valueExtractor.getValue(jsValue))
    .flatMap(rawValue => Try(parse(rawValue)).toOption)
    .filter(businessValidation)
}
And a usage example:
def test(): Unit = {
  val attributes = Map("String" -> JsString("ThisIsString"), "Int" -> JsNumber(23))
  def stringLengthConstraint(min: Int, max: Int)(value: String) = value.length > min && value.length < max
  val value = readFieldWithValidation[String, String](attributes, "String", stringLengthConstraint(1, 10))(identity)
  println(value)
}
Your current code uses Option[T] as the return type. If I were using code like this, I'd probably add some error logging and/or handling for the case where the code contains a bug and the attributes do contain a value for key fieldName, but of a different, unexpected type (like JsNumber instead of JsString).
Update
It is not clear from your comment whether you are satisfied with my original answer or want to add some error handling. If you want to report the type mismatch errors, and since you are using cats, something like ValidatedNel is an obvious choice:
type ValidationResult[A] = ValidatedNel[String, A]

trait JsValueExtractor[T] {
  def getValue(jsValue: JsValue, fieldName: String): ValidationResult[T]
}

object JsValueExtractor {
  implicit val decimalExtractor = new JsValueExtractor[BigDecimal] {
    override def getValue(jsValue: JsValue, fieldName: String): ValidationResult[BigDecimal] = jsValue match {
      case JsNumber(jsNumber) => jsNumber.validNel
      case _ => s"Field '$fieldName' is expected to be decimal".invalidNel
    }
  }
  implicit val intExtractor = new JsValueExtractor[Int] {
    override def getValue(jsValue: JsValue, fieldName: String): ValidationResult[Int] = jsValue match {
      case JsNumber(jsNumber) => Try(jsNumber.toIntExact) match {
        case scala.util.Success(intValue) => intValue.validNel
        case scala.util.Failure(e) => s"Field '$fieldName' is expected to be int".invalidNel
      }
      case _ => s"Field '$fieldName' is expected to be int".invalidNel
    }
  }
  implicit val doubleExtractor = new JsValueExtractor[Double] {
    override def getValue(jsValue: JsValue, fieldName: String): ValidationResult[Double] = jsValue match {
      case JsNumber(jsNumber) => jsNumber.doubleValue.validNel
      case _ => s"Field '$fieldName' is expected to be double".invalidNel
    }
  }
  implicit val stringExtractor = new JsValueExtractor[String] {
    override def getValue(jsValue: JsValue, fieldName: String): ValidationResult[String] = jsValue match {
      case JsString(string) => string.validNel
      case _ => s"Field '$fieldName' is expected to be string".invalidNel
    }
  }
}
def readFieldWithValidation[S, T](fields: Map[String, JsValue], fieldName: String, businessValidation: T => Boolean)
                                 (parse: S => T)(implicit valueExtractor: JsValueExtractor[S]): ValidationResult[T] = {
  fields.get(fieldName) match {
    case None => s"Field '$fieldName' is required".invalidNel
    case Some(jsValue) => valueExtractor.getValue(jsValue, fieldName)
      .andThen(rawValue => Try(parse(rawValue).validNel).getOrElse(s"Failed to parse field '$fieldName'".invalidNel))
      .andThen(parsedValue => if (businessValidation(parsedValue)) parsedValue.validNel else s"Business validation for field '$fieldName' has failed".invalidNel)
  }
}
And the test example remains the same. Probably in your real code you want to use something more specific than just String for errors but that's up to you.
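For completeness, a hypothetical sketch of assembling an entity from several validated fields with cats' mapN, so the errors accumulate (the Person type and the field rules are made up here):
import cats.implicits._

case class Person(name: String, age: Int)

def readPerson(fields: Map[String, JsValue]): ValidationResult[Person] =
  (
    readFieldWithValidation[String, String](fields, "name", _.nonEmpty)(identity),
    readFieldWithValidation[Int, Int](fields, "age", age => age >= 0 && age < 150)(identity)
  ).mapN(Person.apply) // collects the errors from both fields into one NonEmptyList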
I'm following the tutorial here: http://typelevel.org/cats/datatypes/freemonad.html and trying to modify it to work with a cache in front of the key value store. This is what I've come up with so far but I'm getting a compiler error with valueGetOperation. I understand why I get the compile error, I just don't understand how to work around it. What's the best practice for conditional behavior when using a free monad?
import cats.data.Coproduct
import cats.free.{Free, Inject}

object KvStore {
  sealed trait KvOp[A]
  case class Get[T](key: String) extends KvOp[Option[T]]
  case class Put[T](key: String, value: T) extends KvOp[Unit]
  case class Delete[T](key: String) extends KvOp[Unit]
}

object CacheStore {
  sealed trait CacheOp[A]
  case class Get[T](key: String) extends CacheOp[Option[T]]
  case class Put[T](key: String, value: T) extends CacheOp[Unit]
  case class Delete[T](key: String) extends CacheOp[Unit]
}

type WriteThruCache[A] = Coproduct[KvStore.KvOp, CacheStore.CacheOp, A]

class KvOps[F[_]](implicit I: Inject[KvStore.KvOp, F]) {
  import KvStore._
  def get[T](key: String): Free[F, Option[T]] = Free.inject[KvOp, F](Get(key))
  def put[T](key: String, value: T): Free[F, Unit] = Free.inject[KvOp, F](Put(key, value))
  def delete[T](key: String): Free[F, Unit] = Free.inject[KvOp, F](Delete(key))
}

object KvOps {
  implicit def kvOps[F[_]](implicit I: Inject[KvStore.KvOp, F]): KvOps[F] = new KvOps[F]
}

class CacheOps[F[_]](implicit I: Inject[CacheStore.CacheOp, F]) {
  import CacheStore._
  def get[T](key: String): Free[F, Option[T]] = Free.inject[CacheOp, F](Get(key))
  def put[T](key: String, value: T): Free[F, Unit] = Free.inject[CacheOp, F](Put(key, value))
  def delete[T](key: String): Free[F, Unit] = Free.inject[CacheOp, F](Delete(key))
}

object CacheOps {
  implicit def cacheOps[F[_]](implicit I: Inject[CacheStore.CacheOp, F]): CacheOps[F] = new CacheOps[F]
}

def valueWriteOperation[T](implicit Kv: KvOps[WriteThruCache], Cache: CacheOps[WriteThruCache]): ((String, T) => Free[WriteThruCache, Unit]) = {
  (key: String, value: T) =>
    for {
      _ <- Kv.put(key, value)
      _ <- Cache.put(key, value)
    } yield ()
}

// This is where I'm stuck.
// Desired behavior: if the value isn't in the cache, load it from the kv store and put it in the cache.
def valueGetOperation[T](implicit Kv: KvOps[WriteThruCache], Cache: CacheOps[WriteThruCache]): ((String) => Free[WriteThruCache, Option[T]]) = {
  (key: String) =>
    for {
      cacheOption <- Cache.get[T](key)
      kvOption <- Kv.get[T](key) if cacheOption.isEmpty // value withFilter is not a member of cats.free.Free[WriteThruCache, Option[T]]
    } yield cacheOption.orElse(kvOption)
}
As you know, when you use if in a for comprehension, the compiler desugars it into a call to the withFilter method, falling back to filter if withFilter is not accessible. If neither is implemented, you get a compiler error.
However, you can simply use if/else!
for {
  booleanValue <- myFreeAlgebra.checkCondition(arg1, arg2)
  valueToReturn <- if (booleanValue) {
    myFreeAlgebra.someValue
  } else {
    myFreeAlgebra.someOtherValue
  }
} yield valueToReturn
Alternatively, you can do something like:
for {
  booleanValue <- myFreeAlgebra.checkCondition(arg1, arg2)
  valueToReturnOpt <- myFreeAlgebra.someValue
  fallbackValue <- myFreeAlgebra.someOtherValue
} yield valueToReturnOpt.getOrElse(fallbackValue)
The former will assign a value to valueToReturn depending on booleanValue; as such, only one branch will be interpreted. The latter will evaluate both values and return one of them depending on whether or not valueToReturnOpt is empty.
Personally I would try something like:
def valueGetOperation[T](implicit Kv: KvOps[WriteThruCache], Cache: CacheOps[WriteThruCache]): ((String) => Free[WriteThruCache, Option[T]]) = {
  (key: String) =>
    for {
      cacheOption <- Cache.get[T](key)
      returnedValue <- if (cacheOption.isEmpty) Kv.get[T](key) else Free.pure[WriteThruCache, Option[T]](cacheOption)
    } yield returnedValue
}
Following Mateusz' suggestions, this is what I came up with:
def withFallback[A[_], T](loadedValue: Option[T], fallback: => Free[A, Option[T]]): Free[A, Option[T]] = {
  if (loadedValue.isDefined) {
    Free.pure[A, Option[T]](loadedValue)
  } else {
    fallback
  }
}

def valueGetOperation[T](implicit Kv: KvOps[WriteThruCache], Cache: CacheOps[WriteThruCache]): ((String) => Free[WriteThruCache, Option[T]]) = {
  (key: String) =>
    for {
      cachedOption <- Cache.get[T](key)
      actualValue <- withFallback[WriteThruCache, T](cachedOption, fallback = Kv.get[T](key))
    } yield actualValue
}
If there's a standard construct to implement withFallback I'd be glad to know about it.
You could also use OptionT#orElse.
import cats.data.OptionT

type KV[A] = Free[WriteThruCache, A]

def valueGetOperation[T](
  implicit
  Kv: KvOps[WriteThruCache],
  Cache: CacheOps[WriteThruCache]
): String => KV[Option[T]] =
  key => OptionT[KV, T](Cache.get[T](key)).orElse(OptionT[KV, T](Kv.get[T](key))).value
Or OptionT#orElseF:
def valueGetOperation[T](
  implicit
  Kv: KvOps[WriteThruCache],
  Cache: CacheOps[WriteThruCache]
): String => KV[Option[T]] =
  key => OptionT[KV, T](Cache.get[T](key)).orElseF(Kv.get[T](key)).value
Note that with the -Ypartial-unification flag in Scala 2.12 you don't need the KV type alias and you can write OptionT(...) instead of OptionT[KV, T](...).
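Under that flag, a sketch of the shortened version (same ops as above, no KV alias, the type arguments left to inference):
def valueGetOperation[T](
  implicit
  Kv: KvOps[WriteThruCache],
  Cache: CacheOps[WriteThruCache]
): String => Free[WriteThruCache, Option[T]] =
  key => OptionT(Cache.get[T](key)).orElseF(Kv.get[T](key)).value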
I'm using Joda DateTime and UUID in my Play project, and I'm struggling to put them into and get them from PostgreSQL:
import org.joda.time.DateTime

case class MyClass(id: Pk[UUID], name: String, addedAt: DateTime)

object MyClass {
  val simple =
    SqlParser.get[Pk[UUID]]("id") ~
      SqlParser.get[String]("name") ~
      SqlParser.get[DateTime]("added_at") map {
      case id ~ name ~ addedAt => MyClass(id, name, addedAt)
    }

  implicit def rowToId = Column.nonNull[UUID] { (value, meta) =>
    maybeValueToUUID(value) match {
      case Some(uuid) => Right(uuid)
      case _ => Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[Any].getClass} to UUID"))
    }
  }

  implicit def idToStatement = new ToStatement[UUID] {
    def set(s: PreparedStatement, index: Int, aValue: UUID): Unit = s.setObject(index, toByteArray(aValue))
  }

  def getSingle(id: UUID): Option[MyClass] = {
    DB withConnection {
      implicit con =>
        SQL("SELECT my_table.id, my_table.name, my_table.added_at FROM my_table WHERE id = {id}")
          .on('id -> id)
          .as(MyClass.simple.*)
    } match {
      case List(x) => Some(x)
      case _ => None
    }
  }
}
Implicit functions for Joda DateTime are omitted because they don't cause any error at this point. What causes an error is getSingle(...), the conversion from and to UUID. The error is:
org.postgresql.util.PSQLException: operator does not exist: uuid = bytea
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
The four helper functions:
private def maybeValueToUUID(value: Any): Option[UUID] = maybeValueToByteArray(value) match {
  case Some(bytes) => Some(fromByteArray(bytes))
  case _ => None
}

private def maybeValueToByteArray(value: Any): Option[Array[Byte]] =
  try {
    value match {
      case bytes: Array[Byte] => Some(bytes)
      case clob: Clob => None // todo
      case blob: Blob => None // todo
      case _ => None
    }
  } catch {
    case e: Exception => None
  }

def toByteArray(uuid: UUID) = {
  val buffer = ByteBuffer.wrap(new Array[Byte](16))
  buffer putLong uuid.getMostSignificantBits
  buffer putLong uuid.getLeastSignificantBits
  buffer.array
}

def fromByteArray(b: Array[Byte]) = {
  val buffer = ByteBuffer.wrap(b)
  val high = buffer.getLong
  val low = buffer.getLong
  new UUID(high, low)
}
Note that a record I'm trying to retrieve exists and has a correct format.
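For what it's worth, the error message itself points at the cause: idToStatement binds a byte array, so Postgres ends up comparing a uuid column to a bytea parameter. A minimal sketch of the direction a fix could take, assuming the PostgreSQL JDBC driver (which maps java.util.UUID to the uuid type through setObject):
implicit def idToStatement = new ToStatement[UUID] {
  // Pass the UUID through untouched; the driver binds it as uuid, not bytea.
  def set(s: PreparedStatement, index: Int, aValue: UUID): Unit = s.setObject(index, aValue)
}

implicit def rowToId = Column.nonNull[UUID] { (value, meta) =>
  value match {
    case uuid: UUID => Right(uuid) // the driver already returns java.util.UUID for uuid columns
    case _ => Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[Any].getClass} to UUID"))
  }
}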
I have put all my JSON converters in one file, JsonUtil, and then have a convertToJson method which tries to convert any object passed to JSON.
Basically a structure like this:
implicit val format_A = format[A]
implicit val format_B = format[B]

def convertToJson(o: Any): JsValue =
  o match {
    case a: A => Json.toJson(a)
    case b: B => Json.toJson(b)
    case a: Any => Logger.warn("could not convert to json: " + a); JsNull
  }
but with a lot more formatters / cases. I don't want to import all these implicits where I need conversions (for various reasons). Is there a way to match on whether a valid toJson conversion exists, so I wouldn't have to write all the cases?
like:
case o: ExistJsonConversion => Json.toJson(o)
This works.
def convertToJson[A](a: A)(implicit ev: Format[A] = null): JsValue = {
  if (ev != null) {
    Json.toJson(a)(ev)
  } else {
    println("oops")
    JsNull
  }
}
A bit better version below (perhaps ;)
case class Perhaps[E](value: Option[E]) {
  def fold[F](ifAbsent: => F)(ifPresent: E => F): F =
    value.fold(ifAbsent)(ifPresent)
}

implicit def perhaps[E](implicit ev: E = null) = Perhaps(Option(ev))

def convertToJson[A](a: A)(implicit p: Perhaps[Format[A]]): JsValue = {
  p.fold[JsValue] {
    println("oops")
    JsNull
  } { implicit ev =>
    Json.toJson(a)
  }
}
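A hypothetical usage, assuming a Format exists for one type and not for the other:
case class A(x: Int)
implicit val format_A: Format[A] = Json.format[A]

convertToJson(A(1))        // Format[A] is found: {"x":1}
convertToJson(new Object)  // no Format[Object] in scope: prints "oops", returns JsNull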