Return Row with schema defined at runtime in Spark UDF - scala

I've dulled my sword on this one, some help would be greatly appreciated!
Background
I am building an ETL pipeline that takes GNMI Protobuf update messages off of a Kafka queue and eventually breaks them out into a bunch of Delta tables based on the prefix and parameters of the paths to values (e.g. on the Databricks runtime).
Without going into the gory details, each prefix corresponds roughly to a schema for a table, with the caveat that the paths can change (usually new subtrees) upstream, so the schema is not fixed. This is similar to a nested JSON structure.
I first break out the updates by prefix, so all of the updates have roughly the same schema. I defined some transformations so that when the schema does not match exactly, I can coerce them into a common schema.
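To give a flavor of that coercion, here's a rough sketch of the kind of transformation I mean (the helper and the column handling are illustrative, not my exact code): any columns missing from a batch are added as typed nulls so every batch matches the common schema.
import org.apache.spark.sql.{DataFrame, functions => F, types => T}

def coerceTo(df: DataFrame, common: T.StructType): DataFrame = {
  // Add any missing columns as nulls of the expected type, then reorder.
  val withMissing = common.fields.foldLeft(df) { (acc, field) =>
    if (acc.columns.contains(field.name)) acc
    else acc.withColumn(field.name, F.lit(null).cast(field.dataType))
  }
  withMissing.select(common.fieldNames.map(F.col): _*)
}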
I'm running into trouble when I try to create a struct column with the common schema.
Attempt 1
I first tried just returning an Array[Any] from my udf, and providing a schema in the UDF definition (I know this is deprecated):
import org.apache.spark.sql.{functions => F, Row, types => T}
def mapToRow(deserialized: Map[String, ParsedValueV2]): Array[Any] = {
  def getValue(key: String): Any = {
    deserialized.get(key) match {
      case Some(value) => value.asType(columns(key))
      case None => None
    }
  }
  columns.keys.toArray.map(getValue).toArray
}
spark.conf.set("spark.sql.legacy.allowUntypedScalaUDF", "true")
def mapToStructUdf = F.udf(mapToRow _, account.sparkSchemas(prefix))
This snippet creates an Array object with the typed values that I need. Unfortunately when I try to use the UDF, I get this error:
java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema cannot be cast to $line8760b7c10da04d2489451bb90ca42c6535.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$ParsedValueV2
I'm not sure what's not matching, but I did notice that the types of the values are Java types, not Scala types, so perhaps that is related?
Attempt 2
Maybe I can use the Typed UDF interface after all? Can I create a case class at runtime for each schema, and then use that as the return value from my udf?
I've tried to get this to work using various stuff I found like this:
import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox
val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
val test = tb.eval(tb.parse("object Test; Test"))
but I can't even get an instance of test, and can't figure out how to use it as the return value of a UDF. I presume I need to use a generic type somehow, but my scala-fu is too weak to figure this one out.
Finally, the question
Can some help me figure out which approach to take, and how to proceed with that approach?
Thanks in advance for your help!!!
Update - is this a Spark bug?
I've distilled the problem down to this code:
import org.apache.spark.sql.{functions => F, Row, SparkSession, types => T}
// thanks #Dmytro Mitin
val spark = SparkSession.builder
  .master("local")
  .appName("Spark app")
  .getOrCreate()

import spark.implicits._
spark.conf.set("spark.sql.legacy.allowUntypedScalaUDF", "true")
def simpleFn(foo: Any): Seq[Any] = List("hello world", "Another String", 42L)
// def simpleFn(foo: Any): Seq[Any] = List("hello world", "Another String")
def simpleUdf = F.udf(
  simpleFn(_),
  dataType = T.StructType(
    List(
      T.StructField("a_string", T.StringType),
      T.StructField("another_string", T.StringType),
      T.StructField("an_int", T.IntegerType)
    )
  )
)
Seq(("bar", "foo"))
  .toDF("column", "input")
  .withColumn(
    "array_data",
    simpleUdf($"input")
  )
  .show(truncate = false)
which results in this error message
IllegalArgumentException: The value (List(Another String, 42)) of the type (scala.collection.immutable.$colon$colon) cannot be converted to the string type
Hmm... that's odd. Where does that list come from, missing the first element of the row?
The two-valued version (e.g. "hello world", "Another String") has the same problem, but if I only have one value in my struct, then it's happy:
// def simpleFn(foo: Any): Seq[Any] = List("hello world", "Another String")
def simpleFn(foo: Any): Seq[Any] = List("hello world")

def simpleUdf = F.udf(
  simpleFn(_),
  dataType = T.StructType(
    List(
      T.StructField("a_string", T.StringType)
      // T.StructField("another_string", T.StringType),
      // T.StructField("an_int", T.IntegerType),
    )
  )
)
and my query gives me
+------+-----+-------------+
|column|input|array_data |
+------+-----+-------------+
|bar |foo |{hello world}|
+------+-----+-------------+
It looks like it's giving me the first element of my sequence as the first field of the struct, the rest of it as the second field of the struct, and then the third field is null (seen in other cases), which causes an exception.
This looks like a bug to me. Anyone else have any experience with UDFs with schemas built on the fly like this?
Spark 3.3.1, scala 2.12, DBR 12.0
Reflection struggles
A stupid way to accomplish what I want to do would be to take the schemas I've inferred, generate a bunch of Scala code that implements case classes that I can use as return types from my UDFs, then compile the code, package up a JAR, load it into my Databricks runtime, and then use the case classes as return results from the UDFs.
This seems like a very convoluted way to do things. It would be great if I could just generate the case classes, and then do something like
def myUdf[CaseClass](input: SomeInputType): CaseClass =
  CaseClass(input.giveMeResults: _*)
The problem is that I can't figure out how to get the type I've created using eval into the current "context" (I don't know the right word here).
This code:
import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox
val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
val test = tb.eval(tb.parse("object Test; Test"))
gives me this:
...
test: Any = __wrapper$1$bb89c0cde37c48929fa9d8cdabeeb0f8.__wrapper$1$bb89c0cde37c48929fa9d8cdabeeb0f8$Test$1$#492531c0
test is, I think, an instance of Test, but the type system in the REPL doesn't know about any type named Test, so I can't use test.asInstanceOf[Test] or something like that
I know this is a frequently asked question, but I can't seem to find an answer anywhere about how to actually accomplish what I described above.

Regarding "Reflection struggles". It's not clear to me whether: 1) you already have def myUdf[T] = ... from somewhere and you're just trying to call it for a generated case class: myUdf[GeneratedClass], or 2) you're trying to define def myUdf[T] = ... based on the generated class.
In the former case you should use:
tb.define to generate an object (or case class); it returns a class symbol (or module symbol) which you can use further (e.g. in a type position)
tb.eval to call the method (myUdf)
object Main extends App {
  def myUdf[T](): Unit = println("myUdf")

  import scala.reflect.runtime.universe
  import universe.Quasiquote
  import scala.tools.reflect.ToolBox

  val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
  val testSymbol = tb.define(q"object Test")
  val test = tb.eval(q"$testSymbol")
  tb.eval(q"Main.myUdf[$testSymbol]()") // myUdf
}
In this example I changed the signature (and body) of myUdf; you should use your actual ones.
In the latter case you can define myUdf at runtime too:
object Main extends App {
  import scala.reflect.runtime.universe
  import universe.Quasiquote
  import scala.tools.reflect.ToolBox

  val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
  val testSymbol = tb.define(q"object Test")
  val test = tb.eval(q"$testSymbol")
  val xSymbol = tb.define(
    q"""
      object X {
        def myUdf[T](): Unit = println("myUdf")
      }
    """
  )
  tb.eval(q"$xSymbol.myUdf[$testSymbol]()") // myUdf
}
You should try to write myUdf for the ordinary case and we'll translate it for the runtime-generated one.
so I can't use test.asInstanceOf[Test] or something like that
Yeah, type Test doesn't exist at compile time so you can't use it like that. It exists at runtime so you should use it inside quasiquotes q"..." (or tb.parse("..."))
object Main extends App {
  import scala.reflect.runtime.universe
  import universe.Quasiquote
  import scala.tools.reflect.ToolBox

  val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
  val testSymbol = tb.define(q"object Test")
  val test = tb.eval(q"$testSymbol")

  tb.eval(q"Main.test.asInstanceOf[${testSymbol.asModule.moduleClass.asClass.toType}]") // no exception, so test is an instance of Test
  tb.eval(q"Main.test.asInstanceOf[$testSymbol.type]") // no exception, so test is an instance of Test

  println(
    tb.eval(q"Main.test.getClass").asInstanceOf[Class[_]]
  ) // class __wrapper$1$0bbb246b633b472e8df54efc3e9ff9d9.Test$
  println(
    tb.eval(q"scala.reflect.runtime.universe.typeOf[$testSymbol.type]").asInstanceOf[universe.Type]
  ) // __wrapper$1$0bbb246b633b472e8df54efc3e9ff9d9.Test.type
}
Regarding the ClassCastException / IllegalArgumentException: I noticed that the exception disappears if you change the UDF return type
def simpleUdf = F.udf(
  simpleFn(_),
  dataType = T.StructType(
    List(
      T.StructField("a_string", T.StringType),
      T.StructField("tail1", T.StructType(
        List(
          T.StructField("another_string", T.StringType),
          T.StructField("tail2", T.StructType(
            List(
              T.StructField("an_int", T.IntegerType)
            )
          ))
        )
      ))
    )
  )
)
//+------+-----+-------------------------------------+
//|column|input|array_data |
//+------+-----+-------------------------------------+
//|bar |foo |{hello world, {Another String, {42}}}|
//+------+-----+-------------------------------------+
I guess this makes sense because a List is :: (aka $colon$colon) of its head and tail, then the tail is :: of its head and tail etc.
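A quick way to see that head/tail structure (just plain Scala, nothing Spark-specific): the head of the List would become the first struct field, and the whole tail, itself a ::, is what gets squeezed into the second field.
val xs: List[Any] = List("hello world", "Another String", 42L)
xs match {
  case head :: tail =>
    println(head) // hello world
    println(tail) // List(Another String, 42)
  case Nil => ()
}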

@Dmytro Mitin gets the majority of the credit for this answer. Thanks a ton for your help!
The solution I came to uses approach 1), the untyped API. The key is to do two things:
Return a Row (i.e. untyped) from the underlying Scala function
Create the UDF using the untyped API
Here is the toy example
spark.conf.set("spark.sql.legacy.allowUntypedScalaUDF", "true")

def simpleFn(foo: Any): Row = Row("a_string", "hello world", 42L)

def simpleUdf = F.udf(
  simpleFn(_),
  dataType = T.StructType(
    List(
      T.StructField("a_string", T.StringType),
      T.StructField("another_string", T.StringType),
      T.StructField("an_int", T.LongType)
    )
  )
)
Now I can use it like this:
Seq(("bar", "foo"))
  .toDF("column", "input")
  .withColumn(
    "struct_data",
    simpleUdf($"input")
  )
  .withColumn(
    "field_data",
    $"struct_data.a_string"
  )
  .show(truncate = false)
Output:
+------+-----+---------------------------+----------+
|column|input|struct_data |field_data|
+------+-----+---------------------------+----------+
|bar |foo |{a_string, hello world, 42}|a_string |
+------+-----+---------------------------+----------+

Related

Avoid specifying schema twice (Spark/scala)

I need to iterate over a data frame in a specific order and apply some complex logic to calculate a new column.
Also, my strong preference is to do it in a generic way so I do not have to list all the columns of a row and do df.as[my_record] or case Row(...) => as shown here. Instead, I want to access row columns by their names and just add the result column(s) to the source row.
The approach below works just fine, but I'd like to avoid specifying the schema twice: the first time so that I can access columns by name while iterating, and the second time to process the output.
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema

val q = """
select 2 part, 1 id
union all select 2 part, 4 id
union all select 2 part, 3 id
union all select 2 part, 2 id
"""
val df = spark.sql(q)

def f_row(iter: Iterator[Row]): Iterator[Row] = {
  if (iter.hasNext) {
    def complex_logic(p: Int): Integer = if (p == 3) null else p * 10

    val head = iter.next
    val schema = StructType(head.schema.fields :+ StructField("result", IntegerType))
    val r =
      new GenericRowWithSchema((head.toSeq :+ complex_logic(head.getAs("id"))).toArray, schema)
    iter.scanLeft(r)((r1, r2) =>
      new GenericRowWithSchema((r2.toSeq :+ complex_logic(r2.getAs("id"))).toArray, schema)
    )
  } else iter
}

val schema = StructType(df.schema.fields :+ StructField("result", IntegerType))
val encoder = RowEncoder(schema)
df.repartition($"part").sortWithinPartitions($"id").mapPartitions(f_row)(encoder).show
What information is lost after applying mapPartitions so that the output cannot be processed without an explicit encoder? How can I avoid specifying it?
What information is lost after applying mapPartitions so output cannot be processed without
The information is hardly lost; it wasn't there from the beginning. Subclasses of Row or InternalRow are basically untyped, variable-shape containers which don't provide any useful type information that could be used to derive an Encoder.
The schema in GenericRowWithSchema is inconsequential, as it describes content in terms of metadata, not types.
How to avoid specifying it?
Sorry, you're out of luck. If you want to use dynamically typed constructs (a bag of Any) in a statically typed language you have to pay the price, which here is providing an Encoder.
OK, I have checked some of my Spark code, and using .mapPartitions with the Dataset API does not require me to explicitly build/pass an encoder.
You need something like:
case class Before(part: Int, id: Int)
case class After(part: Int, id: Int, newCol: String)

import spark.implicits._

// Note column names/types must match case class constructor parameters.
val beforeDS = <however you obtain your input DF>.as[Before]

def f_row(it: Iterator[Before]): Iterator[After] = ???

beforeDS.repartition($"part").sortWithinPartitions($"id").mapPartitions(f_row).show
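For completeness, a minimal sketch of what f_row could look like in this typed version (the newCol logic here is just an assumption mirroring the complex_logic example above):
def f_row(it: Iterator[Before]): Iterator[After] =
  // Derive newCol per row; no encoder needs to be passed because After is a case class.
  it.map(b => After(b.part, b.id, if (b.id == 3) null else (b.id * 10).toString))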
I found the explanation below sufficient; maybe it will be useful for others.
mapPartitions requires an Encoder because otherwise it cannot construct a Dataset from an iterator of Rows. Even though each row has a schema, that schema cannot be derived (used) by the constructor of Dataset[U].
def mapPartitions[U : Encoder](func: Iterator[T] => Iterator[U]): Dataset[U] = {
  new Dataset[U](
    sparkSession,
    MapPartitions[T, U](func, logicalPlan),
    implicitly[Encoder[U]])
}
On the other hand, without calling mapPartitions, Spark can use the schema derived from the initial query because the structure (metadata) of the original columns is not changed.
I described alternatives in this answer: https://stackoverflow.com/a/53177628/7869491.

Pass case class to Spark UDF

I have a Scala 2.11 function which creates a case class from a Map based on the provided class type.
def createCaseClass[T: TypeTag, A](someMap: Map[String, A]): T = {
  val rMirror = runtimeMirror(getClass.getClassLoader)
  val myClass = typeOf[T].typeSymbol.asClass
  val cMirror = rMirror.reflectClass(myClass)

  // The primary constructor is the first one
  val ctor = typeOf[T].decl(termNames.CONSTRUCTOR).asTerm.alternatives.head.asMethod
  val argList = ctor.paramLists.flatten.map(param => someMap(param.name.toString))

  cMirror.reflectConstructor(ctor)(argList: _*).asInstanceOf[T]
}
I'm trying to use this in the context of a Spark data frame as a UDF. However, I'm not sure what the best way to pass the case class is. The approach below doesn't seem to work.
def myUDF[T: TypeTag] = udf { (inMap: Map[String, Long]) =>
  createCaseClass[T](inMap)
}
I'm looking for something like this:
case class MyType(c1: String, c2: Long)
val myUDF = udf{(MyType, inMap) => createCaseClass[MyType](inMap)}
Thoughts and suggestions to resolve this are appreciated.
However, I'm not sure what's the best way to pass the case class
It is not possible to use case classes as arguments for user defined functions. SQL StructTypes are mapped to dynamically typed (for lack of a better word) Row objects.
If you want to operate on statically typed objects please use statically typed Dataset.
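For illustration, a minimal sketch of that statically typed alternative (assuming a SparkSession named spark is in scope): operate on the case class directly via a typed Dataset instead of passing it through a UDF.
case class MyType(c1: String, c2: Long)

import spark.implicits._
val ds = Seq(MyType("a", 1L), MyType("b", 2L)).toDS()
// Work with the case class fields directly; the encoder is derived automatically.
val result = ds.map(m => s"${m.c1}:${m.c2}")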
From trial and error I learned that whatever data structure is stored in a DataFrame or Dataset is represented using org.apache.spark.sql.types.
You can see with:
df.schema.toString
Basic types like Int and Double are stored like:
StructField(fieldname,IntegerType,true),StructField(fieldname,DoubleType,true)
Complex types like case classes are transformed into a combination of nested types:
StructType(StructField(..),StructField(..),StructType(..))
Sample code:
case class range(min: Double, max: Double)
org.apache.spark.sql.Encoders.product[range].schema

// Output:
// org.apache.spark.sql.types.StructType = StructType(StructField(min,DoubleType,false), StructField(max,DoubleType,false))
The UDF parameter type in this case is Row, or Seq[Row] when you store an array of case classes.
A basic debugging technique is printing to a string:
val myUdf = udf( (r:Row) => r.schema.toString )
then, to see what happens:
df.take(1).foreach(println) //
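As a small illustration of that Row parameter (using the range case class above; the struct column name range_col is hypothetical):
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

// The struct built from the case class arrives in the UDF as a Row;
// fields can be read by name (or by position with r.getDouble(0)).
val spanUdf = udf { (r: Row) =>
  r.getAs[Double]("max") - r.getAs[Double]("min")
}
// df.withColumn("span", spanUdf($"range_col"))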

Cats Seq[Xor[A,B]] => Xor[A, Seq[B]]

I have a sequence of Errors or Views (Seq[Xor[Error,View]])
I want to map this to an Xor of the first error (if any) or a Sequence of Views
(Xor[Error, Seq[View]]) or possibly simply (Xor[Seq[Error],Seq[View])
How can I do this?
You can use sequenceU provided by the bitraverse syntax, similar to what you would do with scalaz. The proper type class instances don't seem to exist for Seq though, but you can use List.
import cats._, data._, implicits._, syntax.bitraverse._
case class Error(msg: String)
case class View(content: String)
val errors: List[Xor[Error, View]] = List(
Xor.Right(View("abc")), Xor.Left(Error("error!")),
Xor.Right(View("xyz"))
)
val successes: List[Xor[Error, View]] = List(
Xor.Right(View("abc")),
Xor.Right(View("xyz"))
)
scala> errors.sequenceU
res1: cats.data.Xor[Error,List[View]] = Left(Error(error!))
scala> successes.sequenceU
res2: cats.data.Xor[Error,List[View]] = Right(List(View(abc), View(xyz)))
In the most recent version of Cats Xor is removed and now the standard Scala Either data type is used.
Michael Zajac showed correctly that you can use sequence or sequenceU (which is actually defined on Traverse not Bitraverse) to get an Either[Error, List[View]].
import cats.implicits._
val xs: List[Either[Error, View]] = ???
val errorOrViews: Either[Error, List[View]] = xs.sequenceU
You might want to look at traverse (which is like a map and a sequence), which you can use most of the time instead of sequence.
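For example, with a recent Scala and Cats version (where plain traverse/sequence replace traverseU/sequenceU), a fail-fast parse could look like this sketch:
import cats.implicits._

// Parse each string, stopping at the first failure.
def parse(s: String): Either[String, Int] =
  Either.catchNonFatal(s.toInt).leftMap(_ => s"not a number: $s")

val ok = List("1", "2", "3").traverse(parse)  // Right(List(1, 2, 3))
val bad = List("1", "x", "3").traverse(parse) // Left(not a number: x)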
If you want all failing errors, you cannot use Either, but you can use Validated (or ValidatedNel, which is just a type alias for Validated[NonEmptyList[A], B]).
import cats.data.{NonEmptyList, ValidatedNel}
val errorsOrViews: ValidatedNel[Error, List[View]] = xs.traverseU(_.toValidatedNel)
val errorsOrViews2: Either[NonEmptyList[Error], List[View]] = errorsOrViews.toEither
You could also get the errors and the views by using MonadCombine.separate :
val errorsAndViews: (List[Error], List[View]) = xs.separate
You can find more examples and information on Either and Validated on the Cats website.

Scala Macros: Convert/parse a Tree to a Name

This is a simplified example but the problem remains the same.
I want to achieve this using macros (scala based pseudocode):
(a: Int) => {
val z = "toShort"
a.z
}
If I reify it, I would obtain something similar to this:
Function(
  List(
    ValDef(
      Modifiers(Flag.PARAM),
      newTermName("a"),
      Ident(scala.Int),
      EmptyTree
    )
  ),
  Block(
    List(
      ValDef(
        Modifiers(),
        newTermName("z"),
        TypeTree(),
        Literal(Constant("toShort"))
      )
    ),
    Apply(
      Select(
        Ident(newTermName("a")),
        newTermName("toShort")
      ),
      List()
    )
  )
)
I don't know how to access a value and then use it as a TermName.
I tried replacing newTermName("toShort") with newTermName(c.Expr[String](Select(Ident(newTermName("z")))).splice), but the compiler doesn't seem to like it:
exception during macro expansion:
java.lang.UnsupportedOperationException: the function you're calling has not been spliced by the compiler.
this means there is a cross-stage evaluation involved, and it needs to be invoked explicitly.
if you're sure this is not an oversight, add scala-compiler.jar to the classpath,
import scala.tools.reflect.Eval and call <your expr>.eval instead.
I've also tried eval as suggested by the compiler, newTermName(c.eval(c.Expr[String](...))), but neither worked.
How could I convert a tree like Select(Ident(newTermName("z"))) (which is an access to the value of a local val) to a Name, i.e. a string which can be used as a parameter for newTermName? Is it possible?
UPDATE:
Here is the real problem, brought to you as a gist!
Thanks in advance,
I have a hard time understanding what you're trying to achieve, and why you are using Trees everywhere. Trees are really low level, hard to use, and tricky, and it is very difficult to understand what the code does. Quasiquotes (http://docs.scala-lang.org/overviews/macros/quasiquotes.html) are indeed the way to go, and you can use them on the Scala 2.10.x production release thanks to the macro paradise plugin (http://docs.scala-lang.org/overviews/macros/paradise.html). Then you can simply write q"(a: Int) => {val z = "toShort"; a.z}" and you directly get the tree expression you just typed.
To answer your question, the first point is to remember that macros are evaluated at compile time. They therefore cannot generate code which depends on a runtime value. This is why the compiler is complaining about your splice. But if you pass a value which can be computed at compile time, typically a literal, then you can use eval to get its value within your macro code. Eval does suffer from a bug though, as indicated in the scaladoc: it should only be called on untyped trees. So the way to call eval on an s: c.Expr[String] expression would be val s2 = c.eval(c.Expr[String](c.resetAllAttrs(s.tree.duplicate))), which gives you a String you can then use normally in your code, for instance q"(a: Int) => a.${newTermName(s2)}".
To put it all together, let's imagine you want to create a macro that outputs a string value from an object and one of its String fields. It'll give something like:
def attr[A](a: A, field: String): String = macro attrImpl[A]

def attrImpl[A: c.WeakTypeTag](c: Context)(a: c.Expr[A], field: c.Expr[String]) = {
  import c.universe._
  val s = c.eval(c.Expr[String](c.resetAllAttrs(field.tree.duplicate)))
  c.Expr[String](q"a.${newTermName(s)}")
}
REPL session test:
scala> object a { val field1 = "field1"; val field2 = "field2" }
defined module a
scala> attr(a, "field1")
res0: String = field1
scala> attr(a, "field2")
res1: String = field2
To understand the difference between compile time and runtime, you can meditate on the following result in the REPL ;-)
scala> val s = "field1"; attr(a, s)
error: exception during macro expansion:
scala.tools.reflect.ToolBoxError: reflective compilation has failed:
$iw is not an enclosing class
at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl$ToolBoxGlobal.throwIfErrors(ToolBoxFactory.scala:311)
at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl$ToolBoxGlobal.compile(ToolBoxFactory.scala:244)
at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl.compile(ToolBoxFactory.scala:408)
at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl.eval(ToolBoxFactory.scala:411)
at scala.reflect.macros.runtime.Evals$class.eval(Evals.scala:16)
at scala.reflect.macros.runtime.Context.eval(Context.scala:6)
at .attrImpl(<console>:14)
scala> val s = "field1"
s: String = field1
scala> attr(a, s)
res3: String = field1
Hope it helps ;))

Custom Type Mapper for Slick SQL

I've found this example from slick testing:
https://github.com/slick/slick/blob/master/slick-testkit/src/main/scala/com/typesafe/slick/testkit/tests/MapperTest.scala
sealed trait Bool
case object True extends Bool
case object False extends Bool
implicit val boolTypeMapper = MappedColumnType.base[Bool, String](
  { b =>
    assertNotNull(b)
    if (b == True) "y" else "n"
  },
  { i =>
    assertNotNull(i)
    if (i == "y") True else False
  }
)
I'm trying to create a TypeMapper for org.joda.time.DateTime to/from java.sql.Timestamp, but without much success. The Bool example is very particular and I'm having trouble adapting it. Joda Time is super common, so any help would be much appreciated.
To be clear, I'm using interpolated sql"""select colA,colB from tableA where id = ${id}""" and such. When doing a select the system works well by using jodaDate types in the implicit GetResult converter.
However, for inserts there doesn't seem to be a way to do an implicit conversion, or it is ignoring the code provided below in Answer #1 (same error as before):
could not find implicit value for parameter pconv: scala.slick.jdbc.SetParameter[(Option[Int], String, String, Option[org.joda.time.DateTime])]
I'm not using the lifted-style Slick configuration with the annotated Table objects; perhaps that is why it is not finding/using the TypeMapper.
I use the following in my code, which might also work for you:
import java.sql.Timestamp
import org.joda.time.DateTime
import org.joda.time.DateTimeZone.UTC
import scala.slick.lifted.MappedTypeMapper.base
import scala.slick.lifted.TypeMapper
implicit val DateTimeMapper: TypeMapper[DateTime] =
  base[DateTime, Timestamp](
    d => new Timestamp(d.getMillis),
    t => new DateTime(t.getTime, UTC))
Edit (after your edit =^.~= ): (a bit late but I hope it still helps)
Ah, OK, since you're not using lifted embedding, you'll have to define different implicit values (as indicated by the error message from the compiler). Something like the following should work (though I haven't tried myself):
implicit val SetDateTime: SetParameter[DateTime] = new SetParameter[DateTime] {
  def apply(d: DateTime, p: PositionedParameters): Unit =
    p.setTimestamp(new Timestamp(d.getMillis))
}
For the other way round (retrieving the results of the SELECT), it looks like you'd need to define a GetResult:
implicit val GetDateTime: GetResult[DateTime] = new GetResult[DateTime] {
  def apply(r: PositionedResult) = new DateTime(r.nextTimestamp.getTime, UTC)
}
So, basically this is just the same as with the lifted embedding, just encoded with different types.
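For reference, a rough usage sketch with those implicits in scope (the events table and its columns are made-up names, and the exact imports and session handling depend on your Slick version):
// Old scala.slick packages assumed here; adjust for your Slick version.
import scala.slick.jdbc.StaticQuery.interpolation

val createdAt = DateTime.now(UTC)
// SetParameter[DateTime] binds the DateTime in the insert;
// GetResult[DateTime] reads it back from the select.
val insert = sqlu"insert into events (name, created_at) values ('ping', $createdAt)"
val select = sql"select created_at from events where name = 'ping'".as[DateTime]
// insert.execute and select.list would then run against an implicit session.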
Why not dig into something that works great?
Look at
https://gist.github.com/dragisak/4756344
and
https://github.com/tototoshi/slick-joda-mapper
The first you can copy-paste into your project, and the second is available from Maven Central.