sequencing (parallel) Observables in Outwatch (or zipping) - scala.js

How should I render a list of Observables in OutWatch? If I need a single Observable, how should I sequence/zip them as an applicative? When I use the applicative(?) operation 'zip' to transform a List[Observable] into an Observable[List], is it expected to render? (i.e. when I don't need the Observables to be chained)
val literals: Seq[Observable[VNode]] = handlers.map { i => i.map(li(_)) }
div(ol(children <-- Observable.zip[VNode, Seq[VNode]](literals)(identity)))
With the one answer given below,
div(ol(
  (for (item <- literals) yield { child <-- item }): _*))
each child is rendered only after every input has been entered by the user. How do I render each child as soon as the user enters the first input, without having to enter them all?
Full code follows:
import outwatch.dom._
import rxscalajs.Observable
import scala.scalajs.js.JSApp

object Outwatchstarter extends JSApp {
  def createInputMappedToStringHandler(s: Handler[String]) = input(inputString --> s)

  def main(): Unit = {
    val root = {
      val names = (0 until 2).map(_.toString) // when 0 until 1, this emits
      val handlers: Seq[Handler[String]] = names.map(name => createStringHandler())
      val inputNodes = handlers.map(createInputMappedToStringHandler)
      val notworkingformorethan1 = {
        val literals = handlers.map { i => i.map(li(_)) }
        val y: Observable[Seq[VNode]] = Observable.zip[VNode, Seq[VNode]](literals)(identity)
        div(ol(
          children <-- y
        ))
      }
      val list = List("What", "Is", "Up?").map(s => li(s))
      val lists = Observable.just(list)
      val workingList = ul(children <-- lists)
      div(
        div(inputNodes: _*),
        workingList,
        notworkingformorethan1)
    }
    OutWatch.render("#app", root)
  }
}
Nothing shows up when the list length is greater than 1, but it does with a one-element list. I'm an HTML/Scala.js and Rx noob, and I may be misunderstanding how Observables may (or may not) be applicative functors. I was looking for 'sequence' rather than 'zip'.

For-comprehensions work inside the OutWatch DOM DSL:
div(ol(
  (for (item <- literals) yield { child <-- item }): _*))
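Why this fixes the rendering: each child <-- item is an independent binding, so every li shows up as soon as its own handler emits. zip, by contrast, only produces its nth Seq once every one of its sources has emitted n times, which is why the zipped version stayed blank until something had been typed into all inputs. If a single Observable[Seq[VNode]] is still wanted, one workaround is to seed each source with an initial value so the first combined emission happens immediately. A sketch, assuming rxscalajs exposes an RxJS-style startWith (worth verifying against the library):

  // Sketch, not verified against the rxscalajs API: seed each handler with an
  // empty string so zip can emit a first Seq right away. zip still pairs
  // emissions index by index, so for "latest value of every input" semantics
  // a combineLatest-style operator would be the better fit, if available.
  val seeded: Seq[Observable[VNode]] =
    handlers.map(h => h.startWith("").map(s => li(s)))
  val firstListImmediately: Observable[Seq[VNode]] =
    Observable.zip[VNode, Seq[VNode]](seeded)(identity)
  div(ol(children <-- firstListImmediately))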

Related

Using builder-like functions conditionally

Assume this situation:
I have a Tuple of size n.
Each element is a Boolean flag that defines if a specific function should be called on an object (here: builder).
The syntax that comes to my mind first would be:
(el1, el2, el3, ...) => {
  val builder = MyBuilder()
  val builder1 = if (el1) builder.func1(...) else builder
  val builder2 = if (el2) builder1.func2(...) else builder1
  val builder3 = if (el3) builder2.func3(...) else builder2
  ...
}
The last builder builderN would be the desired object. But this code is nasty.
What would be a good, clean alternative? (Note: I am using cats.)
Another way to represent my problem would be:
val result = MyBuilder()
  .func1(...) // ONLY if el1!
  .func2(...) // ONLY if el2!
  .func3(...) // ONLY if el3!
  ...
  .funcn(...) // ONLY if eln!
EDIT: Fixed example code!
You can zip the list of flags with a list of building functions, and then conditionally apply each builder function in a fold:
class Builder() {
  def func1(in: Any): Builder = {
    println("func1")
    this
  }
  def func2(in: Any): Builder = {
    println("func2")
    this
  }
  def func3(in: Any): Builder = {
    println("func3")
    this
  }
}

val flags = List(true, false, true)
val funcs = List[Builder => Builder](b => b.func1(1), b => b.func2(2), b => b.func3(3))

val result = flags.zip(funcs).foldLeft(new Builder()) {
  case (builder, (flag, func)) => if (flag) func(builder) else builder
}
prints to console:
func1
func3
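A variant using only the standard library: keep the enabled steps with collect, then compose them with Function.chain, which folds a Seq[A => A] into a single function applied left to right (a sketch against the same Builder, flags and funcs as above):

  // Keep only the steps whose flag is true, then chain them into one
  // Builder => Builder function.
  val enabledSteps: List[Builder => Builder] =
    flags.zip(funcs).collect { case (true, f) => f }

  val result2: Builder = Function.chain(enabledSteps)(new Builder())
  // prints func1 and func3, same as the fold version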

Akka streams — filtering by the number of elements in stream

I'm writing an app in Scala and I'm using Akka streams.
At one point, I need to filter out streams that have less than N elements, with N given. So, for example, with N=5:
Source(List(1,2,3)).via(myFilter) // => List()
Source(List(1,2,3,4)).via(myFilter) // => List()
will become empty streams, and
Source(List(1,2,3,4,5)).via(myFilter) // => List(1,2,3,4,5)
Source(List(1,2,3,4,5,6)).via(myFilter) // => List(1,2,3,4,5,6)
will be unchanged.
Of course, we can't know the number of elements in the stream until it's over, and waiting till the end before pushing it through might not be the best idea.
So, instead, I've thought about the following algorithm:
for the first N-1 elements, just buffer them, without passing further;
if the input stream finishes before reaching the Nth element, output an empty stream;
if the input stream reaches Nth element, output the buffered N-1 elements, then output the Nth element, then pass all the following elements that come.
However, I have no idea how to build a Flow element implementing it. Are there some built-in Akka elements I could use?
Edit:
Okay, so I played with it yesterday and I came up with something like this:
Flow[Int]
  .prefixAndTail(N)
  .flatMapConcat {
    case (prefix, tail) if prefix.length == N =>
      Source(prefix).concat(tail)
    case _ =>
      Source.empty[Int]
  }
Will it do what I want?
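For what it's worth, prefixAndTail(N) does implement the buffer-then-decide algorithm sketched above: it buffers up to N elements, and the match on prefix.length distinguishes the short-stream case from the rest. A quick sanity check (sketch; assumes the implicit ActorSystem, Materializer and ExecutionContext from the answer below):

  val myFilter = Flow[Int].prefixAndTail(5).flatMapConcat {
    case (prefix, tail) if prefix.length == 5 => Source(prefix).concat(tail)
    case _ => Source.empty[Int]
  }

  Source(List(1, 2, 3)).via(myFilter).runWith(Sink.seq).foreach(println)
  // expected: Vector()
  Source(List(1, 2, 3, 4, 5, 6)).via(myFilter).runWith(Sink.seq).foreach(println)
  // expected: Vector(1, 2, 3, 4, 5, 6)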
Perhaps statefulMapConcat could help you (note the () => ... factory below: it creates fresh state for each materialization of the stream):
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import akka.stream.{ActorMaterializer, Materializer}
import scala.collection.mutable.ListBuffer
import scala.concurrent.ExecutionContext

object StatefulMapConcatExample extends App {
  implicit val system: ActorSystem = ActorSystem()
  implicit val materializer: Materializer = ActorMaterializer()
  implicit val ec: ExecutionContext = scala.concurrent.ExecutionContext.Implicits.global

  def filterLessThen(threshold: Int): (Int) => List[Int] = {
    var buffering = true
    val buffer: ListBuffer[Int] = ListBuffer()
    (elem: Int) =>
      if (buffering) {
        buffer += elem
        if (buffer.size < threshold) {
          Nil
        } else {
          buffering = false
          buffer.toList
        }
      } else {
        List(elem)
      }
  }

  // Vector() (empty)
  Source(List(1, 2, 3)).statefulMapConcat(() => filterLessThen(5))
    .runWith(Sink.seq).map(println)
  // Vector() (empty)
  Source(List(1, 2, 3, 4)).statefulMapConcat(() => filterLessThen(5))
    .runWith(Sink.seq).map(println)
  // Vector(1, 2, 3, 4, 5)
  Source(List(1, 2, 3, 4, 5)).statefulMapConcat(() => filterLessThen(5))
    .runWith(Sink.seq).map(println)
  // Vector(1, 2, 3, 4, 5, 6)
  Source(List(1, 2, 3, 4, 5, 6)).statefulMapConcat(() => filterLessThen(5))
    .runWith(Sink.seq).map(println)
}
This may be one of those instances where a little "state" can go a long way. Even though the solution is not "purely functional", the mutable state is isolated and unreachable by the rest of the system. I think this is one of the beauties of Scala: when an FP solution isn't obvious, you can always revert to an imperative one in an isolated manner...
The completed Flow will be a combination of multiple sub-parts. The first Flow will just group your elements into sequences of size N:
val group: Int => Flow[Int, Seq[Int], _] =
  (N) => Flow[Int] grouped N
Now for the non-functional part, a filter that will only allow the grouped Seq values through if the first sequence was the right size:
val minSizeRequirement: Int => Seq[Int] => Boolean =
  (minSize) => {
    var isFirst: Boolean = true
    var passedMinSize: Boolean = false
    (testSeq) => {
      if (isFirst) {
        isFirst = false
        passedMinSize = testSeq.size >= minSize
        passedMinSize
      }
      else
        passedMinSize
    }
  }

val minSizeFilter: Int => Flow[Seq[Int], Seq[Int], _] =
  (minSize) => Flow[Seq[Int]].filter(minSizeRequirement(minSize))
The last step is to convert the Seq[Int] values back into Int values:
val flatten = Flow[Seq[Int]].flatMapConcat(l => Source(l))
Finally, combine them all together:
val combinedFlow: Int => Flow[Int, Int, _] =
  (minSize) =>
    group(minSize)
      .via(minSizeFilter(minSize))
      .via(flatten)
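A quick usage sketch of the combined flow (assuming the same implicit ActorSystem, Materializer and ExecutionContext setup as in the statefulMapConcat example above):

  // The first group has size 5, so all six elements pass through.
  Source(List(1, 2, 3, 4, 5, 6))
    .via(combinedFlow(5))
    .runWith(Sink.seq)
    .foreach(println) // expected: Vector(1, 2, 3, 4, 5, 6)

One design note: unlike the statefulMapConcat version, grouped N batches the stream into chunks of N, so downstream sees elements at group boundaries rather than one by one as they arrive.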

Recursively process list of relations

Given the following data model, where elements record their relations to ancestors by a delimited string path:
case class Entity(id:String, ancestorPath:Option[String] = None)
val a = Entity("a")
val c = Entity("c")
val b = Entity("b", Some("a"))
val d = Entity("d", Some("a/b"))
val test = a :: c :: b :: d :: Nil
What would be a good way to process the relationship into a nested structure such as:
case class Entity2(id:String, children:List[Entity2])
The desired function would output a list of Entity2s that nest their children; the top-level elements of the output would therefore be the root nodes. We can assume the input list is sorted lexicographically by the value of ancestorPath, with None sorting earlier than Some, exactly as test is above.
Example desired output:
List(
  Entity2("a", List(
    Entity2("b", List(
      Entity2("d", Nil)
    ))
  )),
  Entity2("c", Nil)
)
I've had a few tries at it, but what's tripping me up is finding a good way to invert the relationship as you go... recursing over the Entity classes gives you the backward ("my parent/ancestors are") reference, whereas the desired output records the forward ("my children are") reference. Thanks!
One straightforward solution looks like this:
case class Entity(id:String, ancestorPath:Option[String] = None)
case class Entity2(id:String, children:List[Entity2])

object Main {
  def main(args: Array[String]) {
    val a = Entity("a")
    val c = Entity("c")
    val b = Entity("b", Some("a"))
    val d = Entity("d", Some("a/b"))
    val test = a :: c :: b :: d :: Nil
    // prints List(Entity2(a,List(Entity2(b,List(Entity2(d,List()))))), Entity2(c,List()))
    println(buildTree(test))
  }

  // "a/b" => "b": the last path segment is the immediate parent's id
  def immediateParent(path: String) = {
    val pos = path.lastIndexOf('/')
    if (pos == -1) path
    else path.substring(pos + 1)
  }

  def buildTree(all: List[Entity]): List[Entity2] = {
    val childEntitiesByParentId = all.groupBy(_.ancestorPath.map(immediateParent _))
    val roots = childEntitiesByParentId.getOrElse(None, Nil)
    roots.map({ root => buildTreeHelper(root, childEntitiesByParentId) })
  }

  def buildTreeHelper(
      parent: Entity,
      childEntitiesByParentId: Map[Option[String], List[Entity]]): Entity2 = {
    val children = childEntitiesByParentId.getOrElse(Some(parent.id), Nil).map({ child =>
      buildTreeHelper(child, childEntitiesByParentId)
    })
    Entity2(parent.id, children)
  }
}
If your trees are very deep you will blow the stack - trampolines are a good solution:
import scala.util.control.TailCalls

def buildTree(all: List[Entity]): List[Entity2] = {
  val childEntitiesByParentId = all.groupBy(_.ancestorPath.map(immediateParent _))
  val roots = childEntitiesByParentId.getOrElse(None, Nil)
  buildTreeHelper(roots, childEntitiesByParentId).result
}

def buildTreeHelper(
    parents: List[Entity],
    childEntitiesByParentId: Map[Option[String], List[Entity]]): TailCalls.TailRec[List[Entity2]] = {
  parents match {
    case Nil => TailCalls.done(Nil)
    case parent :: tail =>
      val childEntities = childEntitiesByParentId.getOrElse(Some(parent.id), Nil)
      for {
        children <- TailCalls.tailcall(buildTreeHelper(childEntities, childEntitiesByParentId))
        siblings <- buildTreeHelper(tail, childEntitiesByParentId)
      } yield Entity2(parent.id, children) :: siblings
  }
}
Start with an empty list and incrementally build it up by adding one entity at a time. Each time you add an entity, inspect its ancestor path and traverse the corresponding path in the structure you are building to insert the entity at the correct location. The destination is really a tree, since you have nested components; you just need to find the correct spot in the tree to insert into.
It will be more efficient if you use maps instead of lists, but it should be possible either way. You may also find it easier to build the result with mutable structures, but again both ways work.
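A minimal sketch of this incremental approach (assuming, as the question states, that the input is sorted so parents always precede their children; insert and buildIncrementally are illustrative names):

  // Descend the forest along the remaining path segments and append the
  // new entity as a leaf once the path is exhausted.
  def insert(roots: List[Entity2], path: List[String], id: String): List[Entity2] =
    path match {
      case Nil => roots :+ Entity2(id, Nil)
      case p :: rest =>
        roots.map {
          case Entity2(`p`, children) => Entity2(p, insert(children, rest, id))
          case other => other
        }
    }

  def buildIncrementally(all: List[Entity]): List[Entity2] =
    all.foldLeft(List.empty[Entity2]) { (roots, e) =>
      insert(roots, e.ancestorPath.map(_.split('/').toList).getOrElse(Nil), e.id)
    }

  // buildIncrementally(test) produces the same nested structure as buildTree(test)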

Recursive method call in Apache Spark

I'm building a family tree from a database on Apache Spark, using a recursive search to find the ultimate parent (i.e. the person at the top of the family tree) for each person in the DB.
It is assumed that the first person returned when searching for an id is the correct parent.
val peopleById = peopleRDD.keyBy(f => f.id)

def findUltimateParentId(personId: String): String = {
  if ((personId == null) || (personId.length() == 0))
    return "-1"
  val personSeq = peopleById.lookup(personId)
  val person = personSeq(0)
  if (person.personId == "0" || person.id == person.parentId) {
    return person.id
  } else {
    return findUltimateParentId(person.parentId)
  }
}

val ultimateParentIds = peopleRDD.foreach(f => findUltimateParentId(f.parentId))
It gives the following error:
"Caused by: org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063."
I understand from reading other similar questions that the problem is that I'm calling findUltimateParentId from within the foreach loop. If I call the method from the shell with a person's id, it returns the correct ultimate parent id.
However, none of the other suggested solutions work for me, or at least I can't see how to implement them in my program. Can anyone help?
If I understood you correctly - here's a solution that would work for any size of input (although performance might not be great) - it performs N iterations over the RDD where N is the "deepest family" (largest distance from ancestor to child) in the input:
// representation of input: each person has an ID and an optional parent ID
case class Person(id: Int, parentId: Option[Int])

// representation of result: each person is optionally attached its "ultimate" ancestor,
// or none if it had no parent id in the first place
case class WithAncestor(person: Person, ancestor: Option[Person]) {
  def hasGrandparent: Boolean = ancestor.exists(_.parentId.isDefined)
}

object RecursiveParentLookup {
  // requested method
  def findUltimateParent(rdd: RDD[Person]): RDD[WithAncestor] = {
    // all persons keyed by id (a val, so keyBy and cache happen only once)
    val byId = rdd.keyBy(_.id).cache()
    // recursive function that "climbs" one generation at each iteration
    def climbOneGeneration(persons: RDD[WithAncestor]): RDD[WithAncestor] = {
      val cached = persons.cache()
      // find which persons can climb further up the family tree
      val haveGrandparents = cached.filter(_.hasGrandparent)
      if (haveGrandparents.isEmpty()) {
        cached // we're done, return result
      } else {
        val done = cached.filter(!_.hasGrandparent) // these are done, we'll return them as-is
        // for those who can - join with persons to find the grandparent and attach it instead of the parent
        val withGrandparents = haveGrandparents
          .keyBy(_.ancestor.get.parentId.get) // grandparent id
          .join(byId)
          .values
          .map({ case (withAncestor, grandparent) => WithAncestor(withAncestor.person, Some(grandparent)) })
        // call this method recursively on the result
        done ++ climbOneGeneration(withGrandparents)
      }
    }
    // call the recursive method - start by assuming each person is its own ancestor, if it has a parent:
    climbOneGeneration(rdd.map(p => WithAncestor(p, p.parentId.map(i => p))))
  }
}
Here's a test to better understand how this works:
/**
 * Example input tree:
 *
 *     1       5
 *     |       |
 *     2       6
 *    / \
 *   3   4
 */
val person1 = Person(1, None)
val person2 = Person(2, Some(1))
val person3 = Person(3, Some(2))
val person4 = Person(4, Some(2))
val person5 = Person(5, None)
val person6 = Person(6, Some(5))

test("find ultimate parent") {
  val input = sc.parallelize(Seq(person1, person2, person3, person4, person5, person6))
  val result = RecursiveParentLookup.findUltimateParent(input).collect()
  result should contain theSameElementsAs Seq(
    WithAncestor(person1, None),
    WithAncestor(person2, Some(person1)),
    WithAncestor(person3, Some(person1)),
    WithAncestor(person4, Some(person1)),
    WithAncestor(person5, None),
    WithAncestor(person6, Some(person5))
  )
}
It should be easy to map your input into these Person objects, and to map the output WithAncestor objects into whatever you need. Note that this code assumes that if any person has parentId X, another person with that id actually exists in the input.
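For instance, a hypothetical mapping from the asker's string-keyed records into Person could look like this (field names borrowed from the question; the "no parent" conventions are assumptions to adapt):

  // Assumed conventions: ids are numeric strings, and a parentId of "0",
  // an empty string, or a self-reference means "no parent".
  val persons: RDD[Person] = peopleRDD.map { p =>
    val parent = Option(p.parentId)
      .filter(pid => pid.nonEmpty && pid != "0" && pid != p.id)
      .map(_.toInt)
    Person(p.id.toInt, parent)
  }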
Fixed this by using SparkContext.broadcast:
val peopleById = peopleRDD.keyBy(f => f.id)
// collect the whole people map once and ship it to every executor
val broadcastedPeople = sc.broadcast(peopleById.collectAsMap())

def findUltimateParentId(personId: String): String = {
  if ((personId == null) || (personId.length() == 0))
    return "-1"
  val personOption = broadcastedPeople.value.get(personId)
  if (personOption.isEmpty) {
    return "0"
  }
  val person = personOption.get
  if (person.personId == "0" || person.id == person.parentId) {
    return person.id
  } else {
    return findUltimateParentId(person.parentId)
  }
}

// map (rather than foreach) so the resolved ids are actually returned
val ultimateParentIds = peopleRDD.map(f => findUltimateParentId(f.parentId))
working great now!
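One caveat worth flagging: the recursive lookup will loop forever (or overflow the stack) if the data contains a cycle, and the broadcast map has to fit in each executor's memory. A defensive, iterative sketch with a cycle guard, reusing the field names above:

  // Walks parent links iteratively; stops on a missing id, a person with no
  // parent, or a previously visited id (i.e. a cycle).
  def findUltimateParentIdSafe(startId: String): String = {
    if (startId == null || startId.isEmpty) return "-1"
    var currentId = startId
    var seen = Set.empty[String]
    while (!seen.contains(currentId)) {
      seen += currentId
      broadcastedPeople.value.get(currentId) match {
        case None => return "0"
        case Some(p) if p.personId == "0" || p.id == p.parentId => return p.id
        case Some(p) => currentId = p.parentId
      }
    }
    currentId // cycle detected: return the id where the walk stopped
  }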

Scala: how to traverse stream/iterator collecting results into several different collections

I'm going through a log file that is too big to fit into memory, collecting two types of expressions. What is a better functional alternative to my iterative snippet below?
def streamData(file: File, errorPat: Regex, loginPat: Regex): List[(String, String)] = {
  val lines: Iterator[String] = io.Source.fromFile(file).getLines()
  val logins: mutable.Map[String, String] = new mutable.HashMap[String, String]()
  val errors: mutable.ListBuffer[(String, String)] = mutable.ListBuffer.empty
  for (line <- lines) {
    line match {
      case errorPat(date, ip) => errors.append((ip, date))
      case loginPat(date, user, ip, id) => logins.put(ip, id)
      case _ => ""
    }
  }
  errors.toList.map(line => (logins.getOrElse(line._1, "none") + " " + line._1, line._2))
}
Here is a possible solution:
def streamData(file: File, errorPat: Regex, loginPat: Regex): List[(String, String)] = {
  val lines = Source.fromFile(file).getLines
  val (err, log) = lines.collect {
    case errorPat(inf, ip) => (Some((ip, inf)), None)
    case loginPat(_, _, ip, id) => (None, Some((ip, id)))
  }.toList.unzip
  val ip2id = log.flatten.toMap
  err.collect { case Some((ip, inf)) => (ip2id.getOrElse(ip, "none") + " " + ip, inf) }
}
Corrections:
1) removed unnecessary type declarations
2) tuple deconstruction instead of ugly ._1
3) left fold instead of mutable accumulators
4) used the more convenient operator-like methods :+ and +
def streamData(file: File, errorPat: Regex, loginPat: Regex): List[(String, String)] = {
  val lines = io.Source.fromFile(file).getLines()
  val (logins, errors) =
    ((Map.empty[String, String], Seq.empty[(String, String)]) /: lines) {
      case ((loginsAcc, errorsAcc), next) =>
        next match {
          case errorPat(date, ip) => (loginsAcc, errorsAcc :+ (ip -> date))
          case loginPat(date, user, ip, id) => (loginsAcc + (ip -> id), errorsAcc)
          case _ => (loginsAcc, errorsAcc)
        }
    }
  // more concise equivalent of
  // errors.toList.map { case (ip, date) => (logins.getOrElse(ip, "none") + " " + ip) -> date }
  for ((ip, date) <- errors.toList)
    yield (logins.getOrElse(ip, "none") + " " + ip) -> date
}
I have a few suggestions:
Instead of a pair/tuple, it's often better to use your own class. It gives meaningful names to both the type and its fields, which makes the code much more readable.
Split the code into small parts. In particular, try to decouple pieces of code that don't need to be tied together. This makes your code easier to understand, more robust, less prone to errors and easier to test. In your case it'd be good to separate producing your input (lines of a log file) and consuming it to produce a result. For example, you'd be able to write automatic tests for your function without having to store sample data in a file; a sketch of this split follows below.
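A minimal sketch of that split (parseLines and the reworked streamData are illustrative names, not from the original post):

  import java.io.File
  import scala.util.matching.Regex

  // Pure parsing over any Iterator[String]; tests can feed it an in-memory Seq.
  def parseLines(lines: Iterator[String], errorPat: Regex, loginPat: Regex): List[(String, String)] = {
    val (logins, errors) =
      lines.foldLeft((Map.empty[String, String], List.empty[(String, String)])) {
        case ((l, e), errorPat(date, ip)) => (l, (ip, date) :: e)
        case ((l, e), loginPat(_, _, ip, id)) => (l + (ip -> id), e)
        case (acc, _) => acc
      }
    errors.reverse.map { case (ip, date) => (logins.getOrElse(ip, "none") + " " + ip, date) }
  }

  // Thin I/O wrapper: only this part touches the file system.
  def streamData(file: File, errorPat: Regex, loginPat: Regex): List[(String, String)] =
    parseLines(io.Source.fromFile(file).getLines(), errorPat, loginPat)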
As an example and exercise, I tried to make a solution based on Scalaz iteratees. It's a bit longer (it includes some auxiliary code for IteratorEnumerator), and it may be overkill for the task, but perhaps someone will find it helpful.
import java.io._;
import scala.util.matching.Regex
import scalaz._
import scalaz.IterV._

object MyApp extends App {
  // A type for the result. Having names keeps things
  // clearer and shorter.
  type LogResult = List[(String,String)]

  // Represents a state of our computation. Not only does it
  // give a name to the data, we can also put here
  // functions that modify the state. This nicely
  // separates what we're computing and how.
  sealed case class State(
    logins: Map[String,String],
    errors: Seq[(String,String)]
  ) {
    def this() = {
      this(Map.empty[String,String], Seq.empty[(String,String)])
    }

    def addError(date: String, ip: String): State =
      State(logins, errors :+ (ip -> date));

    def addLogin(ip: String, id: String): State =
      State(logins + (ip -> id), errors);

    // Produce the final result from accumulated data.
    def result: LogResult =
      for ((ip, date) <- errors.toList)
        yield (logins.getOrElse(ip, "none") + " " + ip) -> date
  }

  // An iteratee that consumes lines of our input. Based
  // on the given regular expressions, it produces an
  // iteratee that parses the input and uses State to
  // compute the result.
  def logIteratee(errorPat: Regex, loginPat: Regex):
      IterV[String,List[(String,String)]] = {

    // Consumes a single line.
    def consume(line: String, state: State): State =
      line match {
        case errorPat(date, ip) => state.addError(date, ip);
        case loginPat(date, user, ip, id) => state.addLogin(ip, id);
        case _ => state
      }

    // The core of the iteratee. Every time we consume a
    // line, we update our state. When done, compute the
    // final result.
    def step(state: State)(s: Input[String]): IterV[String, LogResult] =
      s(el = line => Cont(step(consume(line, state))),
        empty = Cont(step(state)),
        eof = Done(state.result, EOF[String]))

    // Return the iteratee waiting for its first input.
    Cont(step(new State()));
  }

  // Converts an iterator into an enumerator. This
  // should more likely be moved to Scalaz.
  // Adapted from scalaz.ExampleIteratee
  implicit val IteratorEnumerator = new Enumerator[Iterator] {
    @annotation.tailrec def apply[E, A](e: Iterator[E], i: IterV[E, A]): IterV[E, A] = {
      val next: Option[(Iterator[E], IterV[E, A])] =
        if (e.hasNext) {
          val x = e.next();
          i.fold(done = (_, _) => None, cont = k => Some((e, k(El(x)))))
        } else
          None;
      next match {
        case None => i
        case Some((es, is)) => apply(es, is)
      }
    }
  }

  // main ---------------------------------------------------
  {
    // Read a file as an iterator of lines:
    // val lines: Iterator[String] =
    //   io.Source.fromFile("test.log").getLines();

    // Create our testing iterator:
    val lines: Iterator[String] = Seq(
      "Error: 2012/03 1.2.3.4",
      "Login: 2012/03 user 1.2.3.4 Joe",
      "Error: 2012/03 1.2.3.5",
      "Error: 2012/04 1.2.3.4"
    ).iterator;

    // Create an iteratee.
    val iter = logIteratee("Error: (\\S+) (\\S+)".r,
                           "Login: (\\S+) (\\S+) (\\S+) (\\S+)".r);

    // Run the iteratee against the input
    // (the enumerator is implicit).
    println(iter(lines).run);
  }
}