Scala, finding max value in arrays

First time I've had to ask a question here; there isn't much info on Scala out there for a newbie like me.
Basically what I have is a file filled with hundreds of thousands of lists formatted like this:
(type, date, count, object)
Rows look something like this:
(food, 30052014, 400, banana)
(food, 30052014, 2, pizza)
All I need to do is find the one row with the highest count.
I know I did this a couple of months ago but can't seem to wrap my head around it now. I'm sure I can do this without a function too. All I want to do is set a value and put that row in it but I can't figure it out.
I think basically what I want to do is a Math.max on the 3rd element in the lists, but I just can't get it.
Any help will be kindly appreciated. Sorry if my wording or formatting of this question isn't the best.
EDIT: There's some extra info I've left out that I should probably add:
All the records are stored in a tsv file. I've done this to split them:
val split_food = food.map(_.split("/t"))
so basically I think I need to use split_food... somehow

Modified version of @Szymon's answer, with your edit addressed:
val split_food = food.map(_.split("\t")) // note: a tab is "\t", not "/t"
val max_food = split_food.maxBy(tokens => tokens(2).toInt)
or, analogously:
val max_food = split_food.maxBy { case Array(_, _, count, _) => count.toInt }
In case you're using Apache Spark's RDD, which has only a limited number of the usual Scala collection methods, you have to go with reduce:
val max_food = split_food.reduce { (max: Array[String], current: Array[String]) =>
  val curCount = current(2).toInt
  val maxCount = max(2).toInt // you would probably want to preprocess all items,
                              // so .toInt is not called again and again (see the sketch below)
  if (curCount > maxCount) current else max
}
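A minimal sketch of that preprocessing idea, assuming split_food is an RDD[Array[String]] as above: parse the count once per record, then reduce on the pre-parsed value.
val withCounts = split_food.map(tokens => (tokens(2).toInt, tokens)) // parse each count once
val max_food = withCounts.reduce((a, b) => if (a._1 > b._1) a else b)._2 // keep the row with the larger count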

You should use the maxBy function:
case class Purchase(category: String, date: Long, count: Int, name: String)

object Purchase {
  def apply(s: String): Purchase = s.split("\t") match {
    case Array(cat, date, count, name) => Purchase(cat, date.toLong, count.toInt, name) // split returns an Array, so match on Array
  }
}

foodRows.map(row => Purchase(row)).maxBy(_.count)

Simply:
case class Record(food:String, date:String, count:Int)
val l = List(Record("ciccio", "x", 1), Record("buffo", "y", 4), Record("banana", "z", 3))
l.maxBy(_.count)
>>> res8: Record = Record(buffo,y,4)

Not sure if you got the answer yet, but I had the same issues with maxBy. I found that once I imported scala.io.Source I was able to use maxBy and it worked.
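For completeness, a minimal sketch of that approach, reading the file with scala.io.Source (the file name food.tsv and the tab-separated layout are assumptions):
import scala.io.Source

val rows = Source.fromFile("food.tsv").getLines().map(_.split("\t")).toList
val maxRow = rows.maxBy(row => row(2).toInt) // the row with the highest count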

Related

How to transform Dataset[(String, Seq[String])] to Dataset[(String, String)]?

This is probably a simple problem, but I'm just beginning my adventure with Spark.
Problem: I'd like to get the following structure (see "Expected result" below) in Spark. Right now I have the following structure:
title1, {word11, word12, word13 ...}
title2, {word12, word22, word23 ...}
Data are stored in Dataset[(String, Seq[String])]
Expected result
I would like to get tuples of (word, title):
word11, {title1} word12, {title1}
What I do:
1. Make (title, Seq(word1, word2, word3))
docs.mapPartitions { iter =>
  iter.map {
    case (title, contents) => {
      val textToLemmas: Seq[String] = toText(....)
      (title, textToLemmas)
    }
  }
}
I tried to use .map to transform my structure into tuples, but I can't get it to work.
I tried to iterate through all the elements, but then I cannot return the right type.
Thanks for any answer.
This should work:
val result = dataSet.flatMap { case (title, words) => words.map((_, title)) }
Another solution is to call the explode function, like this:
import org.apache.spark.sql.functions.{col, explode}
dataset.withColumn("_2", explode(col("_2"))).as[(String, String)] // explode takes a Column, hence col("_2")
Hope this helps you. Best regards.
I'm surprised no one offered a solution with Scala's for-comprehension (that gets "desugared" to flatMap and map as in Yuval Itzchakov's answer at compile time).
When you see a series of flatMap and map (possibly with filter), that's a candidate for Scala's for-comprehension.
So the following:
val result = dataSet.flatMap { case (title, words) => words.map((_, title)) }
is equivalent to the following:
val result = for {
  (title, words) <- dataSet
  w <- words
} yield (w, title)
After all, that's why we enjoy the flexibility of Scala, isn't it?

Scala Filter Only Digits

If I have this scala output code:
(1,ab)
(2,fd)
(4, df)
(a,1)
(b,3)
(c,4)
(d,6)
And I want to delete all of the tuples which have a digit in the first slot of the tuple, such as the first three here: (1,ab), (2,fd), (4, df).
For some reason the code I have here is not correctly filtering those tuples:
val temp = t.countByValue().filter(_._1 != Int).print()
The countByValue() function returns tuples of (value, count), where the value in my case is a string (like a, b, c) and count is the number of times the string occurs.
I think this might solve your problem (Try comes from scala.util.Try):
t.countByValue().filter(tupleOfCount => Try(tupleOfCount._1.toInt).toOption.isEmpty).print()
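The same idea as a self-contained sketch on a plain Scala collection (the sample pairs are made up):
import scala.util.Try

val pairs = List(("1", "ab"), ("2", "fd"), ("a", "1"), ("b", "3"))
// keep only the tuples whose first element does not parse as an Int
val noDigitKeys = pairs.filter { case (k, _) => Try(k.toInt).toOption.isEmpty }
// noDigitKeys == List(("a", "1"), ("b", "3"))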
Use of isInstanceOf should be a last resort, as @sergey said, so this code should solve the issue; otherwise pattern matching would be a good option too.
def main(args: Array[String]): Unit = {
  val newlist = List(1 -> "a", "b" -> 2, "c" -> 3)
  newlist.foreach(tuple => if (tuple._1.isInstanceOf[Int]) {
    println(tuple._1 + "-" + tuple._2)
  })
}
Hope it can help. BTW, in (1,ab), if your ab is a string you need to write it as "ab".

spark scala - replace text if exists in list

I have 2 datasets.
One is a dataframe with a bunch of data, one column has comments (a string).
The other is a list of words.
If a comment contains a word in the list, I want to replace the word in the comment with ##### and return the comment in full with the replaced words.
Here's some sample data:
CommentSample.txt
1 A badword small town
2 "Love the truck, though rattle is annoying."
3 Love the paint!
4
5 "Like that you added the ""oh badword2"" handle to passenger side."
6 "badword you. specific enough for you, badword3?"
7 This car is a piece if badword2
ProfanitySample.txt
badword
badword2
badword3
Here's my code so far:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
case class Response(UniqueID: Int, Comment: String)
val response = sc.textFile("file:/data/CommentSample.txt").map(_.split("\t")).filter(_.size == 2).map(r => Response(r(0).trim.toInt, r(1).trim.toString, r(10).trim.toInt)).toDF()
var profanity = sc.textFile("file:/data/ProfanitySample.txt").map(x => (x.toLowerCase())).toArray();
def replaceProfanity(s: String): String = {
  val l = s.toLowerCase()
  val r = "#####"
  if (profanity.contains(s))
    r
  else
    s
}

def processComment(s: String): String = {
  val commentWords = sc.parallelize(s.split(' '))
  commentWords.foreach(replaceProfanity)
  commentWords.collect().mkString(" ")
}
response.select(processComment("Comment")).show(100)
It compiles, it runs, but the words are not replaced.
I don't know how to debug in scala.
I'm totally new! This is my first project ever!
Many thanks for any pointers.
-M
First, I think the use case you describe here won't benefit much from the use of DataFrames - it's simpler to implement using RDDs only (DataFrames are mostly convenient when your transformations can easily be described using SQL, which isn't the case here).
So - here's a possible implementation using RDDs. This assumes the list of profanities isn't too large (i.e. up to ~thousands), so we can collect it into non-distributed memory. If that's not the case, a different approach (involving a join) might be needed.
case class Response(UniqueID: Int, Comment: String)
val mask = "#####"
val responses: RDD[Response] = sc.textFile("file:/data/CommentSample.txt").map(_.split("\t")).filter(_.size == 2).map(r => Response(r(0).trim.toInt, r(1).trim))
val profanities: Array[String] = sc.textFile("file:/data/ProfanitySample.txt").collect()
val result = responses.map(r => {
  // using foldLeft here means we'll replace profanities one by one,
  // with the result of each replace as the input of the next,
  // starting with the original comment
  profanities.foldLeft(r.Comment)({
    case (updatedComment, profanity) => updatedComment.replaceAll(s"(?i)\\b$profanity\\b", mask)
  })
})
result.take(10).foreach(println) // just printing some examples...
Note that the case-insensitivity and the "words only" limitations are implemented in the regex itself: "(?i)\\bSomeWord\\b".
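For example (a made-up string, just to illustrate the regex):
"Badword you, badwords stay".replaceAll("(?i)\\bbadword\\b", "#####")
// res: String = ##### you, badwords stay   (case-insensitive, whole words only)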

How to extract elements from 4 lists in scala?

case class TargetClass(key: Any, value: Number, lowerBound: Double, upperBound: Double)
val keys: List[Any] = List("key1", "key2", "key3")
val values: List[Number] = List(1,2,3);
val lowerBounds: List[Double] = List(0.1, 0.2, 0.3)
val upperBounds: List[Double] = List(0.5, 0.6, 0.7)
Now I want to construct a List[TargetClass] to hold the 4 lists. Does anyone know how to do it efficiently? Is using a for-loop to add elements one by one very inefficient?
I tried to use zipped, but it seems that this only applies for combining up to 3 lists.
Thank you very much!
One approach:
keys.zipWithIndex.map {
  case (item, i) => TargetClass(item, values(i), lowerBounds(i), upperBounds(i))
}
You may want to consider using the lift method to deal with the case of the lists being of unequal lengths (and thereby provide a default if keys is longer than any of the other lists), as sketched below.
I realise this doesn't address your question about efficiency. You could fairly easily run some tests on the different approaches.
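A sketch of the lift idea (the defaults here are hypothetical, just to show the shape):
keys.zipWithIndex.map { case (item, i) =>
  TargetClass(
    item,
    values.lift(i).getOrElse(Integer.valueOf(0)), // hypothetical default value
    lowerBounds.lift(i).getOrElse(0.0),           // hypothetical default lower bound
    upperBounds.lift(i).getOrElse(1.0)            // hypothetical default upper bound
  )
}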
You can apply zipped to the first two lists, to the last two lists, then to the results of the previous zips, then map to your class, like so:
val z12 = (keys, values).zipped
val z34 = (lowerBounds, upperBounds).zipped
val z1234 = (z12.toList, z34.toList).zipped
val targs = z1234.map { case ((k,v),(l,u)) => TargetClass(k,v,l,u) }
// targs = List(TargetClass(key1,1,0.1,0.5), TargetClass(key2,2,0.2,0.6), TargetClass(key3,3,0.3,0.7))
How about:
keys zip values zip lowerBounds zip upperBounds map {
  case (((k, v), l), u) => TargetClass(k, v, l, u)
}
Example:
scala> val zipped = keys zip values zip lowerBounds zip upperBounds
zipped: List[(((Any, Number), Double), Double)] = List((((key1,1),0.1),0.5), (((key2,2),0.2),0.6), (((key3,3),0.3),0.7))
scala> zipped map { case (((k, v), l), u) => TargetClass(k, v, l, u) }
res6: List[TargetClass] = List(TargetClass(key1,1,0.1,0.5), TargetClass(key2,2,0.2,0.6), TargetClass(key3,3,0.3,0.7))
It would be nice if .transpose worked on a Tuple of Lists.
for (List(k, v: Number, l: Double, u: Double) <-
       List(keys, values, lowerBounds, upperBounds).transpose)
  yield TargetClass(k, v, l, u)
I think that no matter what you use, from an efficiency point of view you will have to traverse the lists individually. The only question is whether you do it yourself or, for the sake of readability, use Scala idioms and let Scala do the dirty work for you :)
Other approaches are not necessarily more efficient. You can change the order of zipping and the order of assembling the return value of the map function as you like.
Here is a more functional way, but I am not sure it will be more efficient. See the comments on @wwkudu's (zip with index) answer.
val res1 = keys zip lowerBounds zip values zip upperBounds
res1.map { x =>
  (x._1._1._1, x._1._1._2, x._1._2, x._2)
  // Of course, you can return an instance of TargetClass
  // here instead of the tuple I am returning.
}
I am curious, why do you need a TargetClass? Would a tuple work?

How to create a play.api.libs.iteratee.Enumerator which inserts some data between the items of a given Enumerator?

I use the Play framework with ReactiveMongo. Most of the ReactiveMongo APIs are based on the Play Enumerator. As long as I fetch some data from MongoDB and return it "as-is" asynchronously, everything is fine. Transforming the data, like converting BSON to String using Enumerator.map, is also obvious.
But today I faced a problem which, at the bottom line, narrowed down to the following code. I wasted half of the day trying to create an Enumerator which would consume items from the given Enumerator and insert some items between them. It is important not to load all the items at once, as there could be many of them (the code example has only two items, "1" and "2"). But semantically it is similar to mkString on collections. I am sure it can be done very easily, but the best I could come up with was this code. Very similar code creating an Enumerator using Concurrent.broadcast serves me well for WebSockets, but here even that does not work: the HTTP response never comes back. When I look at Enumeratee, it looks like it is supposed to provide such functionality, but I could not find a way to do the trick.
P.S. I tried to call chan.eofAndEnd in Iteratee.mapDone, and chunked(enums >>> Enumerator.eof) instead of chunked(enums) - it did not help. Sometimes the response comes back, but it does not contain the correct data. What am I missing?
def trans(in: Enumerator[String]): Enumerator[String] = {
  val (res, chan) = Concurrent.broadcast[String]
  val iter = Iteratee.fold(true) { (isFirst, curr: String) =>
    if (!isFirst)
      chan.push("<-------->")
    chan.push(curr)
    false
  }
  in.apply(iter)
  res
}
def enums: Enumerator[String] = {
  val en12 = Enumerator[String]("1", "2")
  trans(en12)
  //en12 //if I comment the previous line and uncomment this, it prints "12" as expected
}
def enum = Action {
  Ok.chunked(enums)
}
Here is my solution which I believe to be correct for this type of problem. Comments are welcome:
def fill[From](
  prefix: From => Enumerator[From],
  infix: (From, From) => Enumerator[From],
  suffix: From => Enumerator[From]
)(implicit ec: ExecutionContext) = new Enumeratee[From, From] {
  override def applyOn[A](inner: Iteratee[From, A]): Iteratee[From, Iteratee[From, A]] = {
    // type of the state we will use for fold
    case class State(prev: Option[From], it: Iteratee[From, A])
    Iteratee.foldM(State(None, inner)) { (prevState, newItem: From) =>
      val toInsert = prevState.prev match {
        case None => prefix(newItem)
        case Some(prevItem) => infix(prevItem, newItem)
      }
      for (newIt <- toInsert >>> Enumerator(newItem) |>> prevState.it)
        yield State(Some(newItem), newIt)
    } mapM {
      case State(None, it) => // this is possible when our input was empty
        Future.successful(it)
      case State(Some(lastItem), it) =>
        suffix(lastItem) |>> it
    }
  }
}
// if there are missing integers between from and to, fill that gap with 0
def fillGap(from:Int, to:Int)(implicit ec:ExecutionContext) = Enumerator enumerate List.fill(to-from-1)(0)
def fillFrom(x:Int)(input:Int)(implicit ec:ExecutionContext) = fillGap(x, input)
def fillTo(x:Int)(input:Int)(implicit ec:ExecutionContext) = fillGap(input, x)
val ints = Enumerator(10, 12, 15)
val toStr = Enumeratee.map[Int] (_.toString)
val infill = fill(
  fillFrom(5),
  fillGap,
  fillTo(20)
)
val res = ints &> infill &> toStr // res will have 0,0,0,0,10,0,12,0,0,15,0,0,0,0
You wrote that you are working with WebSockets, so why don't you use a dedicated solution for that? What you wrote is better suited to Server-Sent Events than to WebSockets. As I understood you, you want to filter your results before sending them back to the client? If that's correct, then you want an Enumeratee instead of an Enumerator: an Enumeratee is a transformation from one stream to another. There is a very good piece of code out there showing how to use an Enumeratee; maybe it is not directly about what you need, but I found inspiration there for my project. Maybe when you analyze that code you will find the best solution.
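For reference, a minimal, self-contained sketch of an Enumeratee in isolation (the transformation itself is made up, just to show the shape of the API):
import play.api.libs.iteratee._
import scala.concurrent.ExecutionContext.Implicits.global

// an Enumeratee sits between a producer (Enumerator) and a consumer (Iteratee)
val upper = Enumeratee.map[String](_.toUpperCase)
val futureChunks = Enumerator("a", "b") &> upper |>>> Iteratee.getChunks
// futureChunks eventually holds List("A", "B")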