Cake pattern for a deep function call chain (threading a dependency along as an extra parameter) - Scala

Question on a simple example:
Assume we have:
1) 3 functions:
f(d: Dependency), g(d: Dependency), h(d: Dependency)
and
2) the following function call graph: f calls g, g calls h
QUESTION: In h I would like to use the d passed to f. Is there a way to use the cake pattern to get access to d from h? If yes, how?
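For concreteness, a minimal sketch of the manual threading described above (the Dependency type is a stand-in):
trait Dependency
def h(d: Dependency): Unit = ()   // wants the d originally given to f
def g(d: Dependency): Unit = h(d) // must thread d along unchanged
def f(d: Dependency): Unit = g(d) // injection point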
Question on a real-world example:
In the code below I manually need to thread the Handler parameter from
// TOP LEVEL , INJECTION POINT
to
// USAGE OF Handler
Is it possible to use the cake pattern instead of the manual threading? If yes, how?
package Demo

import interop.wrappers.EditableText.EditableText
import interop.wrappers.react_sortable_hoc.SortableContainer.Props
import japgolly.scalajs.react.ReactComponentC.ReqProps
import japgolly.scalajs.react._
import japgolly.scalajs.react.vdom.prefix_<^._
import interop.wrappers.react_sortable_hoc.{SortableContainer, SortableElement}

object EditableSortableListDemo {
  trait Action
  type Handler = Action => Unit

  object CompWithState {
    case class UpdateElement(s: String) extends Action

    class Backend($: BackendScope[Unit, String], r: Handler) {
      def handler(s: String): Unit = {
        println("handler:" + s)
        $.setState(s)
        // USAGE OF Handler <<<<<=======================================
      }
      def render(s: String) = {
        println("state:" + s)
        <.div(<.span(s), EditableText(s, handler _)())
      }
    }

    val Component = (f: Handler) => (s: String) =>
      ReactComponentB[Unit]("EditableText with state").initialState(s).backend(new Backend(_, f))
        .renderBackend.build
  }

  // Equivalent of ({value}) => <li>{value}</li> in the original demo
  val itemView: Handler => ReqProps[String, Unit, Unit, TopNode] = (f: Handler) => ReactComponentB[String]("liView")
    .render(d => {
      <.div(
        <.span(s"uhh ${d.props}"),
        CompWithState.Component(f)("vazzeg:" + d.props)()
      )
    })
    .build

  // As in the original demo
  val sortableItem = (f: Handler) => SortableElement.wrap(itemView(f))

  val listView = (f: Handler) => ReactComponentB[List[String]]("listView")
    .render(d => {
      <.div(
        d.props.zipWithIndex.map {
          case (value, index) =>
            sortableItem(f)(SortableElement.Props(index = index))(value)
        }
      )
    })
    .build

  // As in the original demo
  val sortableList: (Handler) => (Props) => (List[String]) => ReactComponentU_ =
    (f: Handler) => SortableContainer.wrap(listView(f))

  // As in the original SortableComponent
  class Backend(scope: BackendScope[Unit, List[String]]) {
    def render(props: Unit, items: List[String]) = {
      def handler = ???
      // TOP LEVEL , INJECTION POINT <<<<<<========================================
      sortableList(handler)(
        SortableContainer.Props(
          onSortEnd = p => scope.modState(l => p.updatedList(l)),
          useDragHandle = false,
          helperClass = "react-sortable-handler"
        )
      )(items)
    }
  }

  val defaultItems = Range(0, 10).map("Item " + _).toList

  val c = ReactComponentB[Unit]("SortableContainerDemo")
    .initialState(defaultItems)
    .backend(new Backend(_))
    .render(s => s.backend.render(s.props, s.state))
    .build
}


Where does dut in Chisel Test get defined? (About Scala syntax)

I am trying to come up with a better title.
I am new to Chisel and Scala. Below is Chisel code defining and testing a module.
import chisel3._
import chiseltest._
import org.scalatest.flatspec.AnyFlatSpec

class DeviceUnderTest extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(2.W))
    val b = Input(UInt(2.W))
    val out = Output(UInt(2.W))
  })
  io.out := io.a & io.b
}

class WaveformTestWithIteration extends AnyFlatSpec with ChiselScalatestTester {
  "WaveformIteration" should "pass" in {
    test(new DeviceUnderTest)
      .withAnnotations(Seq(WriteVcdAnnotation)) ( dut => // ???
        {
          for (a <- 0 until 4; b <- 0 until 4) {
            dut.io.a.poke(a.U)
            dut.io.b.poke(b.U)
            dut.clock.step()
          }
        }
      )
  }
}
The line with the ??? comment is where I am quite puzzled. Where is the variable dut defined? It seems to be a reference to the instance created by new DeviceUnderTest.
test(new DeviceUnderTest).withAnnotations(Seq(WriteVcdAnnotation)) returns a TestBuilder[T] with an apply method:
class TestBuilder[T <: Module](...) {
  ...
  def apply(testFn: T => Unit): TestResult = {
    runTest(defaults.createDefaultTester(dutGen, finalAnnos))(testFn)
  }
  ...
}
So, dut => {...} is a function (T) => Unit? But it does not look like a standard lambda ((x: T) => {...}). Or is it something else?
What is this syntax in Scala, exactly?
Consider the following boiled-down version with a similar structure:
def test[A](testedThing: A)(testBody: A => Unit): Unit = testBody(testedThing)
What it's essentially doing is taking a value x: A and a function f: A => Unit, and applying f to x to obtain f(x).
Here is how you could use it:
test("foo"){ x =>
println(if x == "foo" then "Success" else "Failure")
} // Success
test("bar"){ x =>
println(if x == "baz" then "Success" else "Failure")
} // Failure
In both cases, the "string under test" is simply passed to the body of the "test".
Now, you could introduce a few more steps between the creation of the value under test and the specification of the body. For example, you could create a TestBuilder[A], which is essentially just a value a with some bells and whistles (in this case, a list of "annotations" - plain strings in the following example):
type Annotation = String

case class TestOutcome(annotations: List[Annotation], successful: Boolean)

trait Test:
  def run: TestOutcome

// This simply captures the value under test of type `A`
case class TestBuilder[A](
  theThingUnderTest: A,
  annotations: List[Annotation]
):
  // Bells and whistles: adding some metadata
  def withAnnotations(moreAnnotations: List[Annotation]): TestBuilder[A] =
    TestBuilder(theThingUnderTest, annotations ++ moreAnnotations)

  // Combining the value under test with the body of the test produces the
  // actual test
  def apply(testBody: A => Unit): Test = new Test:
    def run =
      try {
        testBody(theThingUnderTest)
        TestOutcome(annotations, true)
      } catch {
        case t: Throwable => TestOutcome(annotations, false)
      }

// This constructs the thing that's being tested, and creates a TestBuilder around it
def test[A](thingUnderTest: A) = TestBuilder(thingUnderTest, Nil)

println(
  test("hello")
    .withAnnotations(List("size of hello should be 5")){ h =>
      assert(h.size == 5)
    }
    .run
)
println(
  test("hello")
    .withAnnotations(List("size of hello should be 42")){ h =>
      assert(h.size == 42)
    }
    .run
)
The principle remains the same: test(a) saves the tested value a, then TestBuilder adds some configuration, and once you add a body { thingUnderTest => /* assertStuff */ } to it, you get a full Test, which you can then run to obtain some results (TestOutcomes, in this case). Thus, the above snippet produces
TestOutcome(List(size of hello should be 5),true)
TestOutcome(List(size of hello should be 42),false)
I think I just had not noticed that we can sometimes omit the type declaration in a lambda.
class tester {
  def apply(fn: Int => Int): Int = fn(5)
}
We can write (new tester)(x => {x + 1}) instead of (new tester)((x: Int) => {x + 1}).
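The inference is driven by the expected type, so it also works with a multi-line body; a tiny sketch using the tester class above:
val t = new tester
val r = t { x => // x is inferred as Int from tester's apply parameter type
  x * 2 + 1
}
// r == 11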

Akka streams change return type of 3rd party flow/stage

I have a graph that reads from SQS, writes to another system and then deletes from SQS. In order to delete from SQS I need a receipt handle on the SqsMessage object.
In the case of HTTP flows, the signature of the flow lets me say which type gets emitted downstream from the flow:
Flow[(HttpRequest, T), (Try[HttpResponse], T), HostConnectionPool]
In this case I can set T to SqsMessage and I still have all the data I need.
However, some connectors, e.g. the Google Cloud Pub/Sub one, emit a completely useless (to me) pub sub id.
Downstream of the pub sub flow I need to be able to access the SQS message id which I had prior to the pub sub flow.
What is the best way to work around this without rewriting the pub sub connector?
I conceptually want something a bit like this:
Flow[SqsMessage] // I have my data at this point
within(
.map(toPubSubMessage)
.via(pubSub))
... from here I have the same type I had before within, yet it still behaves like a linear graph with back pressure etc.
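For reference (not from the answers below), that "within" shape is often built with Akka's GraphDSL by broadcasting each element around the wrapped flow and zipping it back together; this sketch assumes the wrapped flow emits exactly one output per input, in order:
import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Zip}

// Wraps `inner` so each output is paired with the original input element.
def passThrough[A, B](inner: Flow[A, B, _]): Flow[A, (B, A), NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._
    val broadcast = b.add(Broadcast[A](2))
    val zip = b.add(Zip[B, A]())
    broadcast.out(0) ~> inner ~> zip.in0
    broadcast.out(1) ~> zip.in1
    FlowShape(broadcast.in, zip.out)
  })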
You can use the PassThrough integration pattern.
As an example of its usage, look at akka-streams-kafka: class akka.kafka.scaladsl.Producer, method def flow[K, V, PassThrough].
So just implement your own stage with a PassThrough element; an example is akka.kafka.internal.ProducerStage[K, V, PassThrough]:
package my.example // was `my.package`; `package` is a reserved word and would not compile

import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.Future
import scala.util.{Failure, Success, Try}
import akka.stream._
import akka.stream.ActorAttributes.SupervisionStrategy
import akka.stream.stage._

final case class Message[V, PassThrough](record: V, passThrough: PassThrough)

final case class Result[R, PassThrough](result: R, message: PassThrough)

class PathThroughStage[R, V, PassThrough]
  extends GraphStage[FlowShape[Message[V, PassThrough], Future[Result[R, PassThrough]]]] {

  private val in = Inlet[Message[V, PassThrough]]("messages")
  private val out = Outlet[Future[Result[R, PassThrough]]]("result") // must match the Future-emitting shape above
  override val shape = FlowShape(in, out)

  override protected def createLogic(inheritedAttributes: Attributes) = {
    val logic = new GraphStageLogic(shape) with StageLogging {
      implicit lazy val ec = materializer.executionContext // needed by Future.apply below
      lazy val decider = inheritedAttributes.get[SupervisionStrategy]
        .map(_.decider)
        .getOrElse(Supervision.stoppingDecider)
      val awaitingConfirmation = new AtomicInteger(0)
      @volatile var inIsClosed = false
      var completionState: Option[Try[Unit]] = None

      override protected def logSource: Class[_] = classOf[PathThroughStage[R, V, PassThrough]]

      def checkForCompletion() = {
        if (isClosed(in) && awaitingConfirmation.get == 0) {
          completionState match {
            case Some(Success(_)) => completeStage()
            case Some(Failure(ex)) => failStage(ex)
            case None => failStage(new IllegalStateException("Stage completed, but there is no info about status"))
          }
        }
      }

      val checkForCompletionCB = getAsyncCallback[Unit] { _ =>
        checkForCompletion()
      }

      val failStageCb = getAsyncCallback[Throwable] { ex =>
        failStage(ex)
      }

      setHandler(out, new OutHandler {
        override def onPull() = {
          tryPull(in)
        }
      })

      setHandler(in, new InHandler {
        override def onPush() = {
          val msg = grab(in)
          awaitingConfirmation.incrementAndGet() // increment before the Future can decrement
          val f = Future[Result[R, PassThrough]] {
            try {
              Result( // TODO YOUR logic: produce an R from msg.record here
                msg.record.asInstanceOf[R], // placeholder so the template compiles
                msg.passThrough)
            } catch {
              case exception: Exception =>
                decider(exception) match {
                  case Supervision.Stop =>
                    failStageCb.invoke(exception)
                    throw exception
                  case _ =>
                    throw exception // the failed Future carries the error downstream
                }
            } finally {
              if (awaitingConfirmation.decrementAndGet() == 0 && inIsClosed) checkForCompletionCB.invoke(())
            }
          }
          push(out, f)
        }

        override def onUpstreamFinish() = {
          inIsClosed = true
          completionState = Some(Success(()))
          checkForCompletion()
        }

        override def onUpstreamFailure(ex: Throwable) = {
          inIsClosed = true
          completionState = Some(Failure(ex))
          checkForCompletion()
        }
      })

      override def postStop() = {
        log.debug("Stage completed")
        super.postStop()
      }
    }
    logic
  }
}
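A hedged sketch of how such a stage might be wired up for the SQS/Pub/Sub case in the question (sqsMessages, toPubSubMessage, PubSubMessage, PubSubId and deleteFromSqs are hypothetical names, not from the thread):
Source(sqsMessages)
  .map(m => Message(toPubSubMessage(m), m)) // keep the SqsMessage as the pass-through
  .via(Flow.fromGraph(new PathThroughStage[PubSubId, PubSubMessage, SqsMessage]))
  .mapAsync(parallelism = 4)(identity)      // resolve each Future[Result[...]]
  .map { case Result(_, sqsMessage) => deleteFromSqs(sqsMessage) }
  .runWith(Sink.ignore)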

Iterate a data source asynchronously in batches and stop when the remote returns no data in Scala

Let's say we have a fake data source which will return the data it holds in batches:
class DataSource(size: Int) {
  private var s = 0
  implicit val g = scala.concurrent.ExecutionContext.global
  def getData(): Future[List[Int]] = {
    s = s + 1
    Future {
      Thread.sleep(Random.nextInt(s * 100))
      if (s <= size) {
        List.fill(100)(s)
      } else {
        List()
      }
    }
  }
} // closing brace was missing in the original snippet
object Test extends App {
  val source = new DataSource(100)
  implicit val g = scala.concurrent.ExecutionContext.global
  def process(v: List[Int]): Unit = {
    println(v)
  }
  def next(f: (List[Int]) => Unit): Unit = {
    val fut = source.getData()
    fut.onComplete {
      case Success(v) => {
        f(v)
        v match {
          // non-exhaustive: an empty batch stops the recursion with a MatchError
          case h :: t => next(f)
        }
      }
    }
  }
  next(process)
  Thread.sleep(1000000000)
}
I have my own version, but the problem is that parts of it are not pure. Ideally, I would like to wrap the Future for each batch into one big Future, where the wrapper Future succeeds when the last batch returns a 0-size list. My situation differs a little from this post: the next() there is a synchronous call, while mine is async as well.
Or is it even possible to do what I want? The next batch should only be fetched once the previous one has resolved, and whether to fetch the next batch depends on the size returned.
What's the best way to walk through this type of data source? Are there any existing Scala frameworks that provide the feature I am looking for? Are Play's Iteratee, Enumerator and Enumeratee the right tools? If so, can anyone provide an example of how to use those facilities to implement what I am looking for?
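For comparison (an editor's sketch, not from the thread): with Play's Iteratee library, one way to express the same polling loop is Enumerator.generateM, which keeps evaluating a Future[Option[E]] until it yields None:
import play.api.libs.iteratee.{Enumerator, Iteratee}
import scala.concurrent.ExecutionContext.Implicits.global

val done: Future[Unit] = Enumerator.generateM {
  source.getData().map(batch => if (batch.isEmpty) None else Some(batch))
}.run(Iteratee.foreach(println))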
Edit:
With help from chunjef, I have just tried it out, and it actually worked for me. However, I made a small change based on his answer.
Source.fromIterator(() => Iterator.continually(source.getData())).mapAsync(1)(f => f.filter(_.size > 0))
  .via(Flow[List[Int]].takeWhile(_.nonEmpty))
  .runForeach(println)
However, can someone give a comparison between Akka Streams and Play's Iteratee? Is it worth trying out Iteratee as well?
Code snippet 1:
Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
  .mapAsync(1)(identity)                                    // line 2
  .takeWhile(_.nonEmpty)                                    // line 3
  .runForeach(println)                                      // line 4
Code snippet 2: Assume getData depends on some output of another flow, and I would like to concatenate it with the flow below. However, it yields a "too many open files" error. I am not sure what causes this error; mapAsync has been limited to a throughput of 1, if I understood correctly.
Flow[Int].mapConcat[Future[List[Int]]](c => {
  Iterator.continually(ds.getData(c)).to[collection.immutable.Iterable]
}).mapAsync(1)(identity).takeWhile(_.nonEmpty).runForeach(println)
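One observation (an editor's note, not from the thread): mapConcat takes a strict immutable.Iterable, so converting the infinite Iterator.continually(...) into one forces an unbounded materialization. The asker's final solution further below avoids this by pulling lazily with flatMapConcat, roughly:
Flow[Int].flatMapConcat { c =>
  Source.fromIterator(() => Iterator.continually(ds.getData(c)))
    .mapAsync(1)(identity)
    .takeWhile(_.nonEmpty)
}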
The following is one way to achieve the same behavior with Akka Streams, using your DataSource class:
import scala.concurrent.Future
import scala.util.Random

import akka.actor.ActorSystem
import akka.stream._
import akka.stream.scaladsl._

object StreamsExample extends App {
  implicit val system = ActorSystem("Sandbox")
  implicit val materializer = ActorMaterializer()

  val ds = new DataSource(100)

  Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
    .mapAsync(1)(identity)                                    // line 2
    .takeWhile(_.nonEmpty)                                    // line 3
    .runForeach(println)                                      // line 4
}

class DataSource(size: Int) {
  ...
}
A simplified line-by-line overview:
line 1: Creates a stream source that continually calls ds.getData if there is downstream demand.
line 2: mapAsync is a way to deal with stream elements that are Futures. In this case, the stream elements are of type Future[List[Int]]. The argument 1 is the level of parallelism: we specify 1 here because DataSource internally uses a mutable variable, and a parallelism level greater than one could produce unexpected results. identity is shorthand for x => x, which basically means that for each Future, we pass its result downstream without transforming it.
line 3: Essentially, ds.getData is called as long as the result of the Future is a non-empty List[Int]. If an empty List is encountered, processing is terminated.
line 4: runForeach here takes a function List[Int] => Unit and invokes that function for each stream element.
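As a tiny illustration of what line 2 does (a sketch, not from the answer; the usual Akka Streams imports are assumed):
import scala.concurrent.Future
import akka.NotUsed
import akka.stream.scaladsl.Source

val futures: Source[Future[Int], NotUsed] = Source(List(Future.successful(1), Future.successful(2)))
val ints: Source[Int, NotUsed] = futures.mapAsync(1)(identity) // emits 1, then 2, preserving order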
Ideally, I would like to wrap the Future for each batch into one big Future, where the wrapper Future succeeds when the last batch returns a 0-size list?
I think you are looking for a Promise.
You would set up a Promise before you start the first iteration.
This gives you promise.future, a Future that you can then use to follow the completion of everything.
In your onComplete, you add a case _ => promise.success(()).
Something like:
def loopUntilDone(f: (List[Int]) => Unit): Future[Unit] = {
  val promise = Promise[Unit]()
  def next(): Unit = source.getData().onComplete {
    case Success(v) =>
      f(v)
      v match {
        case h :: t => next()
        case _ => promise.success(())
      }
    case Failure(e) => promise.failure(e)
  }
  // get going
  next()
  // return the Future for everything
  promise.future
}

// future for everything, this is a `Future[Unit]`
// its `onComplete` will be triggered when there is no more data
val everything = loopUntilDone(process)
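In a demo app you could then block on it (a sketch; the Await and Duration imports are assumed):
import scala.concurrent.Await
import scala.concurrent.duration.Duration

Await.result(everything, Duration.Inf) // returns once the source reports an empty batch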
You are probably looking for a reactive streams library. My personal favorite (and the one I'm most familiar with) is Monix. This is how it works with DataSource unchanged:
import scala.concurrent.duration.Duration
import scala.concurrent.Await
import monix.reactive.Observable
import monix.execution.Scheduler.Implicits.global

object Test extends App {
  val source = new DataSource(100)
  val completed = // <- this is Future[Unit], completes when foreach is done
    Observable.repeat(())                                    // repeat a dummy element...
      .flatMap(_ => Observable.fromFuture(source.getData())) // ...so getData() is re-invoked for each one; building fromFuture(getData()) once and repeating it would replay the first batch forever
      .takeWhile(_.nonEmpty)                                 // <- here it's Observable[List[Int]], with collection-like methods
      .foreach(println)
  Await.result(completed, Duration.Inf)
}
I just figured out that flatMapConcat achieves what I wanted. There is no point in starting another question, as I already have the answer; I am putting my sample code here in case someone is looking for a similar answer.
This type of API is very common for integration between traditional enterprise applications. The DataSource mocks the API, while the Test object (extending App) demonstrates how client code can use Akka Streams to consume it.
In my small project the API was provided over SOAP, and I used scalaxb to transform the SOAP into Scala async style. With the client calls demonstrated in the Test object, we can consume the API with Akka Streams. Thanks to all for the help.
class DataSource(size: Int) {
  private var transactionId: Long = 0
  private val transactionCursorMap: mutable.HashMap[TransactionId, Set[ReadCursorId]] = mutable.HashMap.empty
  private val cursorIteratorMap: mutable.HashMap[ReadCursorId, Iterator[List[Int]]] = mutable.HashMap.empty
  implicit val g = scala.concurrent.ExecutionContext.global

  case class TransactionId(id: Long)

  case class ReadCursorId(id: Long)

  def startTransaction(): Future[TransactionId] = {
    Future {
      synchronized {
        transactionId += 1 // was `transactionId += transactionId`, which would stay at 0 forever
      }
      val t = TransactionId(transactionId)
      transactionCursorMap.update(t, Set(ReadCursorId(0)))
      t
    }
  }

  def createCursorId(t: TransactionId): ReadCursorId = {
    synchronized {
      val c = transactionCursorMap.getOrElseUpdate(t, Set(ReadCursorId(0)))
      val currentId = c.foldLeft(0L) { (acc, a) => acc.max(a.id) }
      val cId = ReadCursorId(currentId + 1)
      transactionCursorMap.update(t, c + cId)
      cursorIteratorMap.put(cId, createIterator)
      cId
    }
  }

  def createIterator(): Iterator[List[Int]] = {
    (for {i <- 1 to 100} yield List.fill(100)(i)).toIterator
  }

  def startRead(t: TransactionId): Future[ReadCursorId] = {
    Future {
      createCursorId(t)
    }
  }

  def getData(cursorId: ReadCursorId): Future[List[Int]] = {
    synchronized {
      Future {
        Thread.sleep(Random.nextInt(100))
        cursorIteratorMap.get(cursorId) match {
          case Some(i) => i.next()
          case _ => List()
        }
      }
    }
  }
}
object Test extends App {
  val source = new DataSource(10)
  implicit val system = ActorSystem("Sandbox")
  implicit val materializer = ActorMaterializer()
  implicit val g = scala.concurrent.ExecutionContext.global

  //  def process(v: List[Int]): Unit = {
  //    println(v)
  //  }
  //
  //  def next(f: (List[Int]) => Unit): Unit = {
  //    val fut = source.getData()
  //    fut.onComplete {
  //      case Success(v) => {
  //        f(v)
  //        v match {
  //          case h :: t => next(f)
  //        }
  //      }
  //    }
  //  }
  //
  //  next(process)
  //
  //  Thread.sleep(1000000000)

  val s = Source.fromFuture(source.startTransaction())
    .map { e =>
      source.startRead(e)
    }
    .mapAsync(1)(identity)
    .flatMapConcat(
      e => {
        Source.fromIterator(() => Iterator.continually(source.getData(e)))
      })
    .mapAsync(5)(identity)
    .via(Flow[List[Int]].takeWhile(_.nonEmpty))
    .runForeach(println)

  /*
  val done = Source.fromIterator(() => Iterator.continually(source.getData())).mapAsync(1)(identity)
    .via(Flow[List[Int]].takeWhile(_.nonEmpty))
    .runFold(List[List[Int]]()) { (acc, r) =>
      // println("=======" + acc + r)
      r :: acc
    }
  done.onSuccess {
    case e => {
      e.foreach(println)
    }
  }
  done.onComplete(_ => system.terminate())
  */
}

How to write a generic function with the Scala Quill.io library

I am trying to implement a generic method in Scala operating on a database using the Quill.io library. Type T will only be case classes that work with Quill.io.
def insertOrUpdate[T](inserting: T, equality: (T, T) => Boolean)(implicit ctx: Db.Context): Unit = {
  import ctx._
  val existingQuery = quote {
    query[T].filter { dbElement: T =>
      equality(dbElement, inserting)
    }
  }
  val updateQuery = quote {
    query[T].filter { dbElement =>
      equality(dbElement, lift(inserting))
    }.update(lift(inserting))
  }
  val insertQuery = quote { query[T].insert(lift(inserting)) }

  val existing = ctx.run(existingQuery)
  existing.size match {
    case 1 => ctx.run(updateQuery)
    case _ => ctx.run(insertQuery)
  }
}
But I am getting two kinds of compile errors:
Error:(119, 12) Can't find an implicit `SchemaMeta` for type `T`
query[T].filter { dbElement: T =>
Error:(125, 33) Can't find Encoder for type 'T'
equality(dbElement, lift(inserting))
How can I modify my code to make it work?
As I said in the issue that @VojtechLetal mentioned in his answer, you have to use macros.
I added code implementing a generic insert or update in my example Quill project.
It defines a trait Queries that's mixed into the context:
trait Queries {
  this: JdbcContext[_, _] =>
  def insertOrUpdate[T](entity: T, filter: (T) => Boolean): Unit = macro InsertOrUpdateMacro.insertOrUpdate[T]
}
This trait uses a macro that implements your code with minor changes:
import scala.reflect.macros.whitebox.{Context => MacroContext}

class InsertOrUpdateMacro(val c: MacroContext) {
  import c.universe._

  def insertOrUpdate[T](entity: Tree, filter: Tree)(implicit t: WeakTypeTag[T]): Tree =
    q"""
      import ${c.prefix}._
      val updateQuery = ${c.prefix}.quote {
        ${c.prefix}.query[$t].filter($filter).update(lift($entity))
      }
      val insertQuery = quote {
        query[$t].insert(lift($entity))
      }
      run(${c.prefix}.query[$t].filter($filter)).size match {
        case 1 => run(updateQuery)
        case _ => run(insertQuery)
      }
      ()
    """
}
Usage examples:
import io.getquill.{PostgresJdbcContext, SnakeCase}

package object genericInsertOrUpdate {
  val ctx = new PostgresJdbcContext[SnakeCase]("jdbc.postgres") with Queries

  def example1(): Unit = {
    val inserting = Person(1, "")
    ctx.insertOrUpdate(inserting, (p: Person) => p.name == "")
  }

  def example2(): Unit = {
    import ctx._
    val inserting = Person(1, "")
    ctx.insertOrUpdate(inserting, (p: Person) => p.name == lift(inserting.name))
  }
}
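The examples assume a Person entity along these lines (an assumption; its definition is not shown in the thread):
case class Person(id: Int, name: String)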
P.S. Because update() returns the number of updated records, your code can be simplified to:
class InsertOrUpdateMacro(val c: MacroContext) {
  import c.universe._

  def insertOrUpdate[T](entity: Tree, filter: Tree)(implicit t: WeakTypeTag[T]): Tree =
    q"""
      import ${c.prefix}._
      if (run(${c.prefix}.quote {
        ${c.prefix}.query[$t].filter($filter).update(lift($entity))
      }) == 0) {
        run(quote {
          query[$t].insert(lift($entity))
        })
      }
      ()
    """
}
As one of the Quill contributors said in this issue:
If you want to make your solution generic then you have to use macros, because Quill generates queries at compile time and the T type has to be resolved at that time.
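A minimal illustration of that constraint (a sketch; the error text is taken from the question above):
def find[T](implicit ctx: Db.Context) = {
  import ctx._
  ctx.run(query[T]) // fails at compile time: Can't find an implicit `SchemaMeta` for type `T`
}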
TL;DR: The following did not work either; I was just playing around.
Anyway, just out of curiosity, I tried to fix the issue by following the errors mentioned in the question. I changed the definition of the function to:
def insertOrUpdate[T: ctx.Encoder : ctx.SchemaMeta](...)
which yielded the following log:
[info] PopulateAnomalyResultsTable.scala:71: Dynamic query
[info] case _ => ctx.run(insertQuery)
[info]
[error] PopulateAnomalyResultsTable.scala:68: exception during macro expansion:
[error] scala.reflect.macros.TypecheckException: Found the embedded 'T', but it is not a case class
[error] at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:34)
[error] at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:28)
It starts promisingly, since Quill apparently gave up on static compilation and made the query dynamic. I checked the source code of the failing macro, and it seems that Quill is trying to get a constructor for T, which is not known in the current context.
For more details see my answer: Generic macro with quill or implementation.
CrudMacro:
You will find the complete project at quill-generic.
package pl.jozwik.quillgeneric.quillmacro

import scala.reflect.macros.whitebox.{ Context => MacroContext }

class CrudMacro(val c: MacroContext) extends AbstractCrudMacro {

  import c.universe._

  def callFilterOnIdTree[K: c.WeakTypeTag](id: Tree)(dSchema: c.Expr[_]): Tree =
    callFilterOnId[K](c.Expr[K](q"$id"))(dSchema)

  protected def callFilterOnId[K: c.WeakTypeTag](id: c.Expr[K])(dSchema: c.Expr[_]): Tree = {
    val t = weakTypeOf[K]
    t.baseClasses.find(c => compositeSet.contains(c.asClass.fullName)) match {
      case None =>
        q"$dSchema.filter(_.id == lift($id))"
      case Some(base) =>
        val query = q"$dSchema.filter(_.id.fk1 == lift($id.fk1)).filter(_.id.fk2 == lift($id.fk2))"
        base.fullName match {
          case `compositeKey4Name` =>
            q"$query.filter(_.id.fk3 == lift($id.fk3)).filter(_.id.fk4 == lift($id.fk4))"
          case `compositeKey3Name` =>
            q"$query.filter(_.id.fk3 == lift($id.fk3))"
          case `compositeKey2Name` =>
            query
          case x =>
            c.abort(NoPosition, s"$x not supported")
        }
    }
  }

  def createAndGenerateIdOrUpdate[K: c.WeakTypeTag, T: c.WeakTypeTag](entity: Tree)(dSchema: c.Expr[_]): Tree = {
    val filter = callFilter[K, T](entity)(dSchema)
    q"""
      import ${c.prefix}._
      val id = $entity.id
      val q = $filter
      val result = run(
        q.updateValue($entity)
      )
      if (result == 0) {
        run($dSchema.insertValue($entity).returningGenerated(_.id))
      } else {
        id
      }
    """
  }

  def createWithGenerateIdOrUpdateAndRead[K: c.WeakTypeTag, T: c.WeakTypeTag](entity: Tree)(dSchema: c.Expr[_]): Tree = {
    val filter = callFilter[K, T](entity)(dSchema)
    q"""
      import ${c.prefix}._
      val id = $entity.id
      val q = $filter
      val result = run(
        q.updateValue($entity)
      )
      val newId =
        if (result == 0) {
          run($dSchema.insertValue($entity).returningGenerated(_.id))
        } else {
          id
        }
      run($dSchema.filter(_.id == lift(newId)))
        .headOption
        .getOrElse(throw new NoSuchElementException(s"$$newId"))
    """
  }
}

How do you write a json4s CustomSerializer that handles collections

I have a class that I am trying to deserialize using the json4s CustomSerializer functionality. I need to do this due to the inability of json4s to deserialize mutable collections.
This is the basic structure of the class I want to deserialize (don't worry about why the class is structured like this):
import scala.collection.mutable.{HashMap, SortedSet} // assumed imports: the class uses mutable collections

case class FeatureValue(timestamp: Double)

object FeatureValue {
  implicit def ordering[F <: FeatureValue] = new Ordering[F] {
    override def compare(a: F, b: F): Int = {
      a.timestamp.compareTo(b.timestamp)
    }
  }
}

class Point {
  val features = new HashMap[String, SortedSet[FeatureValue]]
  def add(name: String, value: FeatureValue): Unit = {
    val oldValue: SortedSet[FeatureValue] = features.getOrElseUpdate(name, SortedSet[FeatureValue]())
    oldValue += value
  }
}
Json4s serializes this just fine. A serialized instance might look like the following:
{"features":
{
"CODE0":[{"timestamp":4.8828914447482E8}],
"CODE1":[{"timestamp":4.8828914541333E8}],
"CODE2":[{"timestamp":4.8828915127325E8},{"timestamp":4.8828910097466E8}]
}
}
I've tried writing a custom deserializer, but I don't know how to deal with the list tails. In a normal matcher you could just call your own function recursively, but in this case the function is anonymous and called through the json4s API. I cannot find any examples that deal with this, and I can't figure it out.
Currently I can match only a single hash key, and a single FeatureValue instance in its value. Here is the CustomSerializer as it stands:
import org.json4s.{FieldSerializer, DefaultFormats, Extraction, CustomSerializer}
import org.json4s.JsonAST._

class PointSerializer extends CustomSerializer[Point](format => (
  {
    case JObject(JField("features", JObject(Nil)) :: Nil) => new Point
    case JObject(List(("features", JObject(List(
      (feature: String, JArray(List(JObject(List(("timestamp", JDouble(ts)))))))))
    ))) => {
      val point = new Point
      point.add(feature, FeatureValue(ts))
      point
    }
  },
  {
    // don't need to customize this, it works fine
    case x: Point => Extraction.decompose(x)(DefaultFormats + FieldSerializer[Point]())
  }
))
If I try to change to the ::-separated list format, I get compiler errors, and even without the compiler errors I am not sure what I would do next.
You can get the list of JSON features in your pattern match and then map over this list to get the FeatureValues and their codes:
class PointSerializer extends CustomSerializer[Point](format => (
  {
    case JObject(List(("features", JObject(featuresJson)))) =>
      val features = featuresJson.flatMap {
        case (code: String, JArray(timestamps)) =>
          timestamps.map { case JObject(List(("timestamp", JDouble(ts)))) =>
            code -> FeatureValue(ts)
          }
      }
      val point = new Point
      features.foreach((point.add _).tupled)
      point
  }, {
    case x: Point => Extraction.decompose(x)(DefaultFormats + FieldSerializer[Point]())
  }
))
Which deserializes your json as follows :
import org.json4s.NoTypeHints // added: needed by Serialization.formats below
import org.json4s.native.Serialization
import org.json4s.native.Serialization.{read, write}

implicit val formats = Serialization.formats(NoTypeHints) + new PointSerializer

val json = """
{"features":
  {
    "CODE0":[{"timestamp":4.8828914447482E8}],
    "CODE1":[{"timestamp":4.8828914541333E8}],
    "CODE2":[{"timestamp":4.8828915127325E8},{"timestamp":4.8828910097466E8}]
  }
}
"""

val point0 = read[Point]("""{"features": {}}""")
val point1 = read[Point](json)

point0.features // Map()
point1.features
// Map(
//   CODE0 -> TreeSet(FeatureValue(4.8828914447482E8)),
//   CODE2 -> TreeSet(FeatureValue(4.8828910097466E8), FeatureValue(4.8828915127325E8)),
//   CODE1 -> TreeSet(FeatureValue(4.8828914541333E8))
// )