Filling date gaps in Anorm results - scala

I'm new to Scala, Play, and Anorm, so I'm wondering how I can do this.
I have a query to my database which returns a date, in DD/MM HH:00 format, and a Long, which is a total.
I want to display a total-per-hour graph, so I create a byhour parser:
val byhour = {
  get[Option[String]]("date") ~ get[Long]("total") map {
    case date ~ total => (date, total)
  }
}
And this, of course, only returns the dates where I have data. I want to fill the date gaps with the date and a total of 0, but I'm not sure how to do it.
Thanks in advance!
edit: I know it's possible to do this in MySQL, but I'd prefer to do this in Scala itself to keep the queries clean.

I don't think this is related to Anorm directly; nothing in Anorm itself will fill gaps among parsed results, so you do it afterward.
First option: get the unordered result as List[(String, Long)] using .as(byhour.*), sort it by date, and then fill in a zero total for each missing date.
SQL"...".as(byhour.*).sortBy(_._1).
  foldLeft(List.empty[(String, Long)]) {
    case (p :: l, (d, t)) =>
      (d, t) :: prefill(p, d, l)
    case (l, (d, t)) =>
      (d, t) :: l // assert l == Nil
  }.reverse
/**
 * @param p Previous/last tuple
 * @param d Current/new date
 * @param l List except `p`
 * @return List based on `l` with `p` prepended, possibly with some zero-filler tuples prepended before it.
 */
def prefill(p: (String, Long), d: String, l: List[(String, Long)]): List[(String, Long)] = ???
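For example, prefill could fill hourly gaps like this (a minimal sketch assuming the "DD/MM HH:00" format from the question; it pins an arbitrary year for the date arithmetic since java.time has no year-less date-time type, so it also assumes the data doesn't cross a year boundary):
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

val inFmt  = DateTimeFormatter.ofPattern("dd/MM HH:mm yyyy") // year appended below
val outFmt = DateTimeFormatter.ofPattern("dd/MM HH:mm")

def parse(s: String): LocalDateTime = LocalDateTime.parse(s + " 2015", inFmt)

def prefill(p: (String, Long), d: String, l: List[(String, Long)]): List[(String, Long)] = {
  val end = parse(d)
  // Hours strictly between `p` and `d`, each paired with a zero total.
  val fillers = Iterator
    .iterate(parse(p._1).plusHours(1))(_.plusHours(1))
    .takeWhile(_.isBefore(end))
    .map(t => outFmt.format(t) -> 0L)
    .toList
  // The accumulator is built most-recent-first, so fillers go newest-first.
  fillers.reverse ::: p :: l
}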
Otherwise, if your query returns results ordered by date, you can use the Anorm streaming API and fill each gap as soon as it's discovered.
// Anorm 2.3
import anorm.Success

SQL"... ORDER BY date ASC".apply().
  foldLeft(List.empty[(String, Long)]) {
    case (l, row) =>
      byhour(row) match {
        case Success((d, t)) =>
          l match {
            case p :: ts => (d, t) :: prefill(p, d, ts)
            case _       => (d, t) :: l
          }
        case _ => ??? // parse error
      }
  }.reverse

Related

Scala - access collection members within map or flatMap

Suppose that I use a sequence of various maps and/or flatMaps to generate a sequence of collections. Is it possible to access information about the "current" collection from within any of those methods? For example, without knowing anything specific about the functions used in the previous maps or flatMaps, and without using any intermediate declarations, how can I get the maximum value (or length, or first element, etc.) of the collection upon which the last map acts?
List(1, 2, 3)
  .flatMap(x => f(x) /* some unknown function */)
  .map(x => x + ??? /* what is the max element of the collection? */)
Edit for clarification:
In the example, I'm not looking for the max (or whatever) of the initial List. I'm looking for the max of the collection after the flatMap has been applied.
By "without using any intermediate declarations" I mean that I do not want to use any temporary collections en route to the final result. So, the example by Steve Waldman below, while giving the desired result, is not what I am seeking. (I include this condition is mostly for aesthetic reasons.)
Edit for clarification, part 2:
The ideal solution would be some magic keyword or syntactic sugar that lets me reference the current collection:
List(1, 2, 3)
  .flatMap(x => f(x))
  .map(x => x + theCurrentList.max)
I'm prepared to accept the fact, however, that this simply is not possible.
Maybe just define the list as a val, so you can name it? I don't know of any facility built into map(...) or flatMap(...) that would help.
val myList = List(1, 2, 3)
myList
  .flatMap(x => f(x) /* some unknown function */)
  .map(x => x + myList.max /* what is the max element of the List? */)
Update: With this approach at least, if you have multiple transformations and want to reference the transformed version, you'd have to name that stage. You could get away with
val myList = List(1, 2, 3).flatMap(x => f(x) /* some unknown function */)
myList.map(x => x + myList.max /* what is the max element of the List? */)
Or, if there will be multiple transformations, get in the habit of naming the stages.
val rawList = List(1, 2, 3)
val smordified = rawList.flatMap(x => f(x) /* some unknown function */)
val maxified = smordified.map(x => x + smordified.max /* what is the max element of the List? */)
maxified
Update 2: Watch it work in the REPL even with heterogeneous types:
scala> def f( x : Int ) : Vector[Double] = Vector(x * math.random, x * math.random )
f: (x: Int)Vector[Double]
scala> val rawList = List(1, 2, 3)
rawList: List[Int] = List(1, 2, 3)
scala> val smordified = rawList.flatMap(x => f(x) /* some unknown function */)
smordified: List[Double] = List(0.40730853571901315, 0.15151641399798665, 1.5305929709857609, 0.35211231420067435, 0.644241939254793, 0.15530230501048903)
scala> val maxified = smordified.map(x => x + smordified.max /* what is the max element of the List? */)
maxified: List[Double] = List(1.937901506704774, 1.6821093849837476, 3.0611859419715217, 1.8827052851864352, 2.1748349102405538, 1.6858952759962498)
scala> maxified
res3: List[Double] = List(1.937901506704774, 1.6821093849837476, 3.0611859419715217, 1.8827052851864352, 2.1748349102405538, 1.6858952759962498)
It is possible, but not pretty, and not likely something you want if you are doing it for "aesthetic reasons."
import scala.math.max

def f(x: Int): Seq[Int] = ???

List(1, 2, 3).
  flatMap(x => f(x) /* some unknown function */).
  foldRight((List[Int](), List[Int]())) {
    case (x, (xs, Nil))       => ((x :: xs), List.fill(xs.size + 1)(x))
    case (x, (xs, xMax :: _)) => ((x :: xs), List.fill(xs.size + 1)(max(x, xMax)))
  }.
  zipped.
  map {
    case (x, xMax) => x + xMax
  }
// Or alternately, a slightly more efficient version using Streams.
List(1, 2, 3).
  flatMap(x => f(x) /* some unknown function */).
  foldRight((List[Int](), Stream[Int]())) {
    case (x, (xs, Stream())) =>
      ((x :: xs), Stream.continually(x))
    case (x, (xs, curXMax #:: _)) =>
      val newXMax = max(x, curXMax)
      ((x :: xs), Stream.continually(newXMax))
  }.
  zipped.
  map {
    case (x, xMax) => x + xMax
  }
Seriously though, I just took this on to see if I could do it. While the code didn't turn out as bad as I expected, I still don't think it's particularly readable. I'd discourage using this over something similar to Steve Waldman's answer. Sometimes, it's simply better to just introduce a val, rather than being dogmatic about it.
You could define a mapWithSelf (resp. flatMapWithSelf) operation along these lines and add it as an implicit enrichment to the collection. For List it might look like:
// Scala 2.13 APIs
object Enrichments {
  implicit class WithSelfOps[A](val lst: List[A]) extends AnyVal {
    def mapWithSelf[B](f: (A, List[A]) => B): List[B] =
      lst.map(f(_, lst))

    def flatMapWithSelf[B](f: (A, List[A]) => IterableOnce[B]): List[B] =
      lst.flatMap(f(_, lst))
  }
}
The enrichment basically fixes the value of the collection before the operation and threads it through. It should be possible to generify this (at least for the strict collections), though it would look a little different in 2.12 vs. 2.13+.
Usage would look like
import Enrichments._

val someF: Int => IterableOnce[Int] = ???

List(1, 2, 3)
  .flatMap(someF)
  .mapWithSelf { (x, lst) =>
    x + lst.max
  }
So at the usage site, it's aesthetically pleasant. Note that if you're computing something which traverses the list, you'll be traversing the list every time (leading to a quadratic runtime). You can get around that with some mutability or by just saving the intermediate list after the flatMap.
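If you'd rather not re-traverse, one option is a variant (hypothetical name mapWithDerived, placed alongside the ops above) that computes the derived value once and threads it through:
implicit class WithDerivedOps[A](val lst: List[A]) extends AnyVal {
  def mapWithDerived[B, C](derive: List[A] => C)(f: (A, C) => B): List[B] = {
    val derived = derive(lst) // computed once, not once per element
    lst.map(f(_, derived))
  }
}

List(1, 2, 3)
  .flatMap(someF)
  .mapWithDerived(_.max)((x, m) => x + m)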
One somewhat-simple way of referencing prior output within the current map/collect operation is to use a named reference outside the map, then reference it from within the map block:
var prevOutput = ... // starting value of whatever is referenced within the map
myValues.map { x =>
  prevOutput = ... // expression referencing `x` and the prior `prevOutput`
  prevOutput       // return the computed value for the map to collect
}
This draws attention to the fact that we're referencing prior elements while building the new sequence.
This would be more messy, though, if you wanted to reference arbitrarily previous values, not just the previous one.
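When only the immediately previous output matters, the var can also be avoided with scanLeft, which threads the running state explicitly; for example, a running sum:
// Each output is the previous output plus the current element;
// .tail drops the seed value.
List(1, 2, 3).scanLeft(0)(_ + _).tail // List(1, 3, 6)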

RDD/Scala Get one column from RDD

I have an RDD[Log] with various fields (username, content, date, bytes) and I want to find different things for each field/column.
For example, I want to get the min/max and average bytes found in the RDD. When I do:
val q1 = cleanRdd.filter(x => x.bytes != 0)
I get the full lines of the RDD with bytes != 0. But how can I actually sum them, calculate the avg, find the min/max etc? How can I take only one column from my RDD and apply transformations on it?
EDIT: Prasad told me about changing the type to a DataFrame, but he gave no instructions on how to do so, and I can't find a solid answer on the site. Any help would be great.
EDIT: LOG class:
case class Log (username: String, date: String, status: Int, content: Int)
Running cleanRdd.take(5).foreach(println) gives something like this:
Log(199.72.81.55 ,01/Jul/1995:00:00:01 -0400,200,6245)
Log(unicomp6.unicomp.net ,01/Jul/1995:00:00:06 -0400,200,3985)
Log(199.120.110.21 ,01/Jul/1995:00:00:09 -0400,200,4085)
Log(burger.letters.com ,01/Jul/1995:00:00:11 -0400,304,0)
Log(199.120.110.21 ,01/Jul/1995:00:00:11 -0400,200,4179)
Well... you have a lot of questions.
So... you have the following abstraction of a Log
case class Log (username: String, date: String, status: Int, content: Int, byte: Int)
Que - How can I take only one column from my RDD?
Ans - RDDs have a map function. For an RDD[A], map takes a function of type A => B and transforms it into an RDD[B].
val logRdd: RDD[Log] = ...
val byteRdd = logRdd
  .filter(l => l.byte != 0)
  .map(l => l.byte)
Que - How can I actually sum them?
Ans - You can do it using reduce / fold / aggregate.
val sum = byteRdd.reduce((acc, b) => acc + b)
val sum = byteRdd.fold(0)((acc, b) => acc + b)
val sum = byteRdd.aggregate(0)(
  (acc, b) => acc + b,
  (acc1, acc2) => acc1 + acc2
)
Note: an important thing to notice here is that a sum of Ints can grow bigger than an Int can hold, so in most real-life cases we should use at least a Long as the accumulator. Since reduce and fold require the accumulator to have the same type as the elements, that rules them out and leaves only aggregate.
val sum = byteRdd.aggregate(0L)(
  (acc, b) => acc + b,
  (acc1, acc2) => acc1 + acc2
)
Now if you have to calculate multiple things like min, max, and avg, I suggest calculating them in a single aggregate instead of multiple passes, like this:
// (count, sum, min, max)
val accInit = (0, 0L, Int.MaxValue, Int.MinValue)
val (count, sum, min, max) = byteRdd.aggregate(accInit)(
  { case ((count, sum, min, max), b) =>
      (count + 1, sum + b, Math.min(min, b), Math.max(max, b)) },
  { case ((count1, sum1, min1, max1), (count2, sum2, min2, max2)) =>
      (count1 + count2, sum1 + sum2, Math.min(min1, min2), Math.max(max1, max2)) }
)
val avg = sum.toDouble / count
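As an aside, for numeric RDDs Spark already bundles these statistics: mapping to Double brings DoubleRDDFunctions into scope, and stats() returns a StatCounter with count, mean, min, max, sum, and stdev computed in one pass:
// org.apache.spark.util.StatCounter, computed in a single pass over the RDD
val s = byteRdd.map(_.toDouble).stats()
val (min, max, avg) = (s.min, s.max, s.mean)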
Have a look at the DataFrame API. You need to convert your RDD to a DataFrame, and then you can use the min, max, and avg functions like below:
val rdd = cleanRdd.filter(x => x.bytes != 0)
val df = sparkSession.sqlContext.createDataFrame(rdd, classOf[Log])
Assuming you want to operate on the column bytes, then:
import org.apache.spark.sql.functions._
df.select(avg("bytes")).show
df.select(min("bytes")).show
df.select(max("bytes")).show
Update:
Tried with the following in spark-shell:
case class Log (username: String, date: String, status: Int, content: Int)
val inputRDD = sc.parallelize(Seq(
  Log("199.72.81.55", "01/Jul/1995:00:00:01 -0400", 200, 6245),
  Log("unicomp6.unicomp.net", "01/Jul/1995:00:00:06 -0400", 200, 3985),
  Log("199.120.110.21", "01/Jul/1995:00:00:09 -0400", 200, 4085),
  Log("burger.letters.com", "01/Jul/1995:00:00:11 -0400", 304, 0),
  Log("199.120.110.21", "01/Jul/1995:00:00:11 -0400", 200, 4179)
))
val rdd = inputRDD.filter(x => x.content != 0)
val df = rdd.toDF("username", "date", "status", "content")
df.printSchema
import org.apache.spark.sql.functions._
df.select(avg("content")).show
df.select(min("content")).show
df.select(max("content")).show
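Each of those three selects scans the data; a single agg computes all of them in one pass:
df.agg(min("content"), max("content"), avg("content")).show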

Scala: How to map a subset of a seq to a shorter seq

I am trying to map a subset of a sequence using another (shorter) sequence while preserving the elements that are not in the subset. A toy example below tries to give a flower to females only:
def giveFemalesFlowers(people: Seq[Person], flowers: Seq[Flower]): Seq[Person] = {
  require(people.count(_.isFemale) == flowers.length)
  magic(people, flowers)(_.isFemale)((p, f) => p.withFlower(f))
}

def magic(people: Seq[Person], flowers: Seq[Flower])(predicate: Person => Boolean)
         (mapping: (Person, Flower) => Person): Seq[Person] = ???
Is there an elegant way to implement the magic?
Use an iterator over the flowers and consume one each time the predicate holds; the code would look like this:
val it = flowers.iterator
people.map(p => if (predicate(p)) p.withFlower(it.next()) else p)
What about zip (aka zipWith)?
scala> val people = List("m","m","m","f","f","m","f")
people: List[String] = List(m, m, m, f, f, m, f)
scala> val flowers = List("f1","f2","f3")
flowers: List[String] = List(f1, f2, f3)
scala> def comb(xs:List[String],ys:List[String]):List[String] = (xs,ys) match {
| case (x :: xs, y :: ys) if x=="f" => (x+y) :: comb(xs,ys)
| case (x :: xs,ys) => x :: comb(xs,ys)
| case (Nil,Nil) => Nil
| }
scala> comb(people, flowers)
res1: List[String] = List(m, m, m, ff1, ff2, m, ff3)
If the order is not important, you can get this elegant code:
scala> val (men,women) = people.partition(_=="m")
men: List[String] = List(m, m, m, m)
women: List[String] = List(f, f, f)
scala> men ++ (women,flowers).zipped.map(_+_)
res2: List[String] = List(m, m, m, m, ff1, ff2, ff3)
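If the original order does matter, the same idea works by tagging each person with an index before partitioning and restoring the order afterwards (a sketch using the toy strings above):
// Tag with the original index, give flowers within the female partition,
// then restore the original order by index.
val (women, men) = people.zipWithIndex.partition(_._1 == "f")
val flowered = women.zip(flowers).map { case ((p, i), f) => (p + f, i) }
(flowered ++ men).sortBy(_._2).map(_._1)
// List(m, m, m, ff1, ff2, m, ff3)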
I am going to presume you want to retain all the starting people (not simply filter out the females and lose the males), and in the original order, too.
Hmm, bit ugly, but what I came up with was:
def giveFemalesFlowers(people: Seq[Person], flowers: Seq[Flower]): Seq[Person] = {
  require(people.count(_.isFemale) == flowers.length)
  people.foldLeft(List[Person]() -> flowers) { (acc, p) => p match {
    case pp: Person if pp.isFemale => (pp.withFlower(acc._2.head) :: acc._1) -> acc._2.tail
    case pp: Person                => (pp :: acc._1) -> acc._2
  } }._1.reverse
}
Basically, a fold-left, initialising the 'accumulator' with a pair made up of an empty list of people and the full list of flowers, then cycling through the people passed in.
If the current person is female, pass it the head of the current list of flowers (field 2 of the 'accumulator'), then set the updated accumulator to be the updated person prepended to the (growing) list of processed people, and the tail of the (shrinking) list of flowers.
If male, just prepend to the list of processed people, leaving the flowers unchanged.
By the end of the fold, field 2 of the 'accumulator' (the flowers) should be an empty list, while field one holds all the people (with any females having each received their own flower), in reverse order, so finish with ._1.reverse
Edit: attempt to clarify the code (and substitute a test more akin to @elm's to replace the match, too) - hope that makes it clearer what is going on, @Felix! (and no, no offence taken):
def giveFemalesFlowers(people: Seq[Person], flowers: Seq[Flower]): Seq[Person] = {
  require(people.count(_.isFemale) == flowers.length)
  val start: (List[Person], Seq[Flower]) = (List[Person](), flowers)
  val result: (List[Person], Seq[Flower]) = people.foldLeft(start) { (acc, p) =>
    val (pList, fList) = acc
    if (p.isFemale) {
      (p.withFlower(fList.head) :: pList, fList.tail)
    } else {
      (p :: pList, fList)
    }
  }
  result._1.reverse
}
I'm obviously missing something but isn't it just
people map {
  case p if p.isFemale => p.withFlower(f)
  case p => p
}
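As written, f is unbound, so this doesn't compile on its own; combined with the iterator from the first answer, the map-based version becomes:
val it = flowers.iterator
people map {
  case p if p.isFemale => p.withFlower(it.next())
  case p => p
}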

Why does this function run out of memory?

It's a function to find the third largest of a collection of integers. I'm calling it like this:
val lineStream = Source.fromFile("10m.txt").getLines.toStream
val intStream = lineStream map { s => Integer.parseInt(s) }
thirdLargest(intStream)
The file 10m.txt contains 10 million lines with a random integer on each one. The thirdLargest function below should not be keeping any of the integers after it has tested them, and yet it causes the JVM to run out of memory (after about 90 seconds in my case).
def thirdLargest(numbers: Iterable[Int]): Option[Int] = {
  def top3of4(top3: List[Int], fourth: Int) = top3 match {
    case List(a, b, c) =>
      if (fourth > c) List(b, c, fourth)
      else if (fourth > b) List(b, fourth, c)
      else if (fourth > a) List(fourth, b, c)
      else top3
  }

  @tailrec
  def find(top3: List[Int], rest: Iterable[Int]): Int = (top3, rest) match {
    case (List(a, b, c), Nil) => a
    case (top3, d #:: rest) => find(top3of4(top3, d), rest)
  }

  numbers match {
    case a #:: b #:: c #:: rest => Some(find(List[Int](a, b, c).sorted, rest))
    case _ => None
  }
}
The OOM error has nothing to do with the way you read the file. It is totally fine and even recommended to use Source.getLines here. The problem is elsewhere.
Many people are confused by the nature of Scala's Stream. In fact, this is not something you would want to use just to iterate over things. It is lazy indeed, but it doesn't discard previous results: they are memoized so there's no need to recalculate them on the next use (which never happens in your case, but that's where your memory goes).
Consider using foldLeft. Here's a working (but intentionally simplified) example for illustration purposes:
val lines = Source.fromFile("10m.txt").getLines()
print(lines.map(_.toInt).foldLeft(-1 :: -1 :: -1 :: Nil) { (best3, next) =>
  (next :: best3).sorted.reverse.take(3)
})
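Since the fold keeps the three largest values in descending order, binding the result instead of printing it gives the answer directly (note the -1 seeds assume the input integers are non-negative):
val best3 = Source.fromFile("10m.txt").getLines().map(_.toInt)
  .foldLeft(-1 :: -1 :: -1 :: Nil) { (best3, next) =>
    (next :: best3).sorted.reverse.take(3)
  }
// The list always has three elements, so lift(2) is always Some here.
val third: Option[Int] = best3.lift(2)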

Finding character in 2 dimensional scala list

So this might not be the best way to tackle it but my initial thought was a for expression.
Say I have a List like
List(List('a','b','c'),List('d','e','f'),List('h','i','j'))
I would like to find the row and column for a character, say 'e'.
def findChar(letter: Char, list: List[List[Char]]): (Int, Int) =
  for {
    r <- 0 until list.length
    c <- 0 until list(r).length
    if list(r)(c) == letter
  } yield (r, c)
If there is a more elegant way, I'm all ears, but I would also like to understand what's wrong with this. Specifically, the error the compiler gives me here is
type mismatch; found : scala.collection.immutable.IndexedSeq[(Int, Int)] required: (Int, Int)
on the line assigning to r. It seems to be complaining that my iterator doesn't match the return type but I don't quite understand why this is or what to do about it ...
In the signature of findChar you are telling the compiler that it returns (Int, Int). However, the result of your for expression (as inferred by Scala) is IndexedSeq[(Int, Int)] as the error message indicates. The reason is that (r, c) after yield is produced for every "iteration" in the for expression (i.e., you are generating a sequence of results, not just a single result).
EDIT: As for findChar, you could do:
def findChar(letter: Char, list: List[List[Char]]) = {
  val r = list.indexWhere(_ contains letter)
  val c = list(r).indexOf(letter)
  (r, c)
}
It is not the most efficient solution, but relatively short.
EDIT: Or reuse your original idea:
def findAll(letter: Char, list: List[List[Char]]) =
  for {
    r <- 0 until list.length
    c <- 0 until list(r).length
    if list(r)(c) == letter
  } yield (r, c)

def findChar(c: Char, xs: List[List[Char]]) = findAll(c, xs).head
In both cases, be aware that an exception occurs if the searched letter is not contained in the input list.
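A small sketch of a variant that avoids the exception by returning an Option instead:
def findCharOpt(letter: Char, list: List[List[Char]]): Option[(Int, Int)] =
  list.indexWhere(_ contains letter) match {
    case -1 => None // letter not present in any row
    case r  => Some((r, list(r).indexOf(letter)))
  }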
EDIT: Or you write a recursive function yourself, like:
def findPos[A](c: A, list: List[List[A]]) = {
  def aux(i: Int, xss: List[List[A]]): Option[(Int, Int)] = xss match {
    case Nil => None
    case xs :: xss =>
      val j = xs indexOf c
      if (j < 0) aux(i + 1, xss)
      else Some((i, j))
  }
  aux(0, list)
}
where aux is a (locally defined) auxiliary function that does the actual recursion (and remembers in which sublist we are, the index i). In this implementation a result of None indicates that the searched element was not there, whereas a successful result might return something like Some((1, 1)).
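For example, with the list from the question:
findPos('e', List(List('a','b','c'), List('d','e','f'), List('h','i','j'))) // Some((1, 1))
findPos('z', List(List('a','b','c'), List('d','e','f'), List('h','i','j'))) // None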
For your other ear, the question duplicates
How to capture inner matched value in indexWhere vector expression?
scala> List(List('a','b','c'),List('d','e','f'),List('h','i','j'))
res0: List[List[Char]] = List(List(a, b, c), List(d, e, f), List(h, i, j))
scala> .map(_ indexOf 'e').zipWithIndex.find(_._1 > -1)
res1: Option[(Int, Int)] = Some((1,1))