I am new to Scala. I was reading some code and am not able to understand it. Can someone please help me understand the code below?
def intersectByKey[K: ClassTag, V: ClassTag](rdd1: RDD[(K, V)], rdd2: RDD[(K, V)]): RDD[(K, V)] = {
  rdd1.cogroup(rdd2).flatMapValues {
    case (Nil, _) => None
    case (_, Nil) => None
    case (x, y) => x ++ y
  }
}
What does the line below mean? How will it be evaluated?
case (x, y) => x ++ y
Thanks
rdd1.cogroup(rdd2) returns a value of type RDD[(K, (Iterable[V], Iterable[V]))].
So - in case (x, y) both x and y are Iterable[V].
Iterable overloads the ++ operator as concatenation: it returns an iterable with all of x's values followed by all of y's values.
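For example, with plain Scala collections (note that the duplicate is kept, so this is concatenation rather than a set union):
val x: Iterable[Int] = Iterable(1, 2)
val y: Iterable[Int] = Iterable(2, 3)
x ++ y   // all of x's elements followed by all of y's: 1, 2, 2, 3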
The function cogroup returns an RDD[(K, (Seq[V], Seq[W]))] (an Iterable pair in newer Spark versions, as noted above).
So the value is of type Tuple2. When you use flatMapValues it will flatMap over the values, which are of type Seq.
++ for Seqs means concatenation, resulting in a combined Seq.
case (x, y) means that you're using pattern matching. In your case, if neither of the tuple's values is Nil, the function will return x ++ y.
The advantage of using flatMapValues here is that it flattens the result, therefore dropping all the None values.
You can check out the cogroup documentation for details. Also, if you're not sure exactly what pattern matching or flatMap do, it's worth reading up on both first.
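Putting it together, here is a small usage sketch. It assumes a live SparkContext named sc and the intersectByKey definition from the question; the ordering of the collected output may differ.
val left  = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))
val right = sc.parallelize(Seq(("b", 2), ("c", 9)))
// "a" only appears on the left, so its group is (Iterable(1), Iterable()) and is dropped;
// "b" and "c" appear on both sides, so their values are concatenated.
intersectByKey(left, right).collect()
// e.g. Array((b,2), (b,2), (c,3), (c,9))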
Related
I have a paired RDD that looks like
(a1, (a2, a3))
(b1, (b2, b3))
...
I want to flatten the values to obtain
(a1, a2, a3)
(b1, b2, b3)
...
Currently I'm doing
rddData.map(x => (x._1, x._2._1, x._2._2))
Is there a better way of performing the conversion? The above solution gets ugly if the value contains many elements instead of just two.
When I'm trying to avoid all the ugly underscore number stuff that comes with tuple manipulation I like to use case notation:
rddData.map { case (a, (b, c)) => (a, b, c) }
You can also give your variables meaningful names to make your code self-documenting, and the use of curly braces means you have fewer nested parentheses.
EDIT:
The map { case ... } pattern is pretty compact and can be used for surprisingly deeply nested tuples as long as the structure is known at compile time. If you absolutely, positively cannot know the structure of the tuple at compile time, then here is some hacky, slow code that can probably flatten any arbitrarily nested tuple... as long as there are no more than 23 elements in total. It works by recursively converting each element of the tuple to a list, flatMapping it to a single list, then using scary reflection to convert the list back into a tuple as seen here.
def flatten(b: Product): List[Any] = {
  b.productIterator.toList.flatMap {
    case x: Product => flatten(x)
    case y: Any => List(y)
  }
}

def toTuple(as: List[Any]): Product = {
  val tupleClass = Class.forName("scala.Tuple" + as.size)
  tupleClass.getConstructors.apply(0)
    .newInstance(as.map(_.asInstanceOf[AnyRef]): _*)
    .asInstanceOf[Product]
}
rddData.map(t => toTuple(flatten(t)))
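A quick way to sanity-check the two helpers on a plain nested tuple (no Spark required):
flatten((1, (2, (3, 4))))            // List(1, 2, 3, 4)
toTuple(flatten((1, (2, (3, 4)))))   // (1,2,3,4), built reflectively as a Tuple4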
There is no better way. The 1st answer is equivalent to:
val abc2 = xyz.map{ case (k, v) => (k, v._1, v._2) }
which is equivalent to your own example.
To find the prime factors of a number I was using this piece of code:
def primeFactors(num: Long): List[Long] = {
  val exists = (2L to math.sqrt(num).toLong).find(num % _ == 0)
  exists match {
    case Some(d) => d :: primeFactors(num / d)
    case None => List(num)
  }
}
but then I found a cool and more functional approach to solving this, using this code:
def factors(n: Long): List[Long] =
  (2 to math.sqrt(n).toInt)
    .find(n % _ == 0)
    .fold(List(n))(i => i.toLong :: factors(n / i))
Earlier I was using foldLeft or fold simply to get the sum of a list or for other simple calculations, but here I can't seem to understand how fold is working and how it breaks out of the recursion. Can somebody please explain how the fold functionality works here?
Option's fold
If you look at the signature of Option's fold function, it takes two parameters:
def fold[B](ifEmpty: => B)(f: A => B): B
What it does is apply f to the value inside the Option if it is non-empty. If the Option is empty, it simply returns ifEmpty (this is the termination condition for the recursion).
So in your case, i => i.toLong :: factors(n / i) is f, which will be evaluated when the Option is non-empty, while List(n) is the termination value.
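Tracing factors(12) makes the recursion and its termination visible:
// factors(12): (2 to 3).find(12 % _ == 0) = Some(2) -> fold applies f: 2 :: factors(6)
// factors(6) : (2 to 2).find(6 % _ == 0)  = Some(2) -> 2 :: factors(3)
// factors(3) : (2 to 1) is empty, so find = None    -> fold returns ifEmpty: List(3)
factors(12)   // List(2, 2, 3)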
fold used for collections / iterators
The other fold that you are talking about, for getting the sum of a collection, comes from TraversableOnce and has a signature like:
def foldLeft[B](z: B)(op: (B, A) => B): B
Here, z is the starting value (for a sum it would be 0) and op is an associative binary operator which is applied to z and each value of the collection from left to right.
So the two folds are different operations, despite the shared name.
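A quick side-by-side to make the difference concrete:
List(1, 2, 3).foldLeft(0)((acc, x) => acc + x)   // 6: collection fold, z = 0, op applied left to right
Option(5).fold(-1)(_ * 2)                        // 10: f applied to the contents
(None: Option[Int]).fold(-1)(_ * 2)              // -1: ifEmpty returned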
Is it possible to have a reduceByKey of the form reduceByKey((x, y, z) => ...)?
Because I have a RDD:
RDD[((String, String, Double), (Double, Double, scala.collection.immutable.Map[String,Double]))]
I want to reduce by key, and I tried this operation:
reduceByKey((x, y, z) => (x._1 + y._1 + z._1, x._2 + y._2 + z._2, (((x._3)++y._3)++z._3)))
and it shows me an error message: missing parameter type
Before this I tested with two elements and it worked, but with three I really don't know what my error is. What is the right way to do this?
Here's what you're missing: reduceByKey is telling you that you have a key-value pairing. Conceptually there can only ever be two items in a pair; that's part of what makes a pair a pair. Hence the function you pass to reduceByKey can only ever take two arguments. So, no, you can't directly have a function of arity 3, only of arity 2.
Here's how I'd handle your situation:
reduceByKey { (left, right) =>   // both arguments are values; the key is handled for you
  val (one, two, map1) = left
  val (dub1, dub2, map2) = right
  // rest of work; the result must have the same type as the values
  (one + dub1, two + dub2, map1 ++ map2)
}
However, let me make one slight suggestion: use a case class for your value. It's easier to grok and is essentially equivalent to your 3-tuple.
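A sketch of that suggestion; the field names below are made up, since the original domain isn't given:
case class Agg(total1: Double, total2: Double, weights: Map[String, Double])

// rdd: RDD[((String, String, Double), Agg)]
rdd.reduceByKey { (a, b) =>
  Agg(a.total1 + b.total1, a.total2 + b.total2, a.weights ++ b.weights)
}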
If you look at the reduceByKey function on PairRDDFunctions, its signature is
def reduceByKey(func: (V, V) => V): RDD[(K, V)]
Hence, it's not possible to have it work on a 3-tuple.
However, you can wrap your 3-tuple in a model and still keep your first string as the key, making your RDD an RDD[(String, YourModel)]; then you can aggregate the model in whatever way you want.
Hope this helps.
In Scala, how do I define addition over two Option arguments? Just to be specific, let's say they're wrappers for Int (I'm actually working with maps of doubles, but this example is simpler).
I tried the following but it just gives me an error:
def addOpt(a: Option[Int], b: Option[Int]) = {
  a match {
    case Some(x) => x.get
    case None => 0
  } + b match {
    case Some(y) => y.get
    case None => 0
  }
}
Edited to add:
In my actual problem, I'm adding two maps which are stand-ins for sparse vectors. So the None case returns Map[Int, Double] and the + is actually a ++ (with the tweak at stackoverflow.com/a/7080321/614684).
Monoids
You might find life becomes a lot easier when you realize that you can stand on the shoulders of giants and take advantage of common abstractions and the libraries built to use them. To this end, this question is basically about dealing with
monoids (see related questions below for more about this) and the library in question is called scalaz.
Using scalaz FP, this is just:
def add(a: Option[Int], b: Option[Int]) = ~(a |+| b)
What is more this works on any monoid M:
def add[M: Monoid](a: Option[M], b: Option[M]) = ~(a |+| b)
Even more usefully, it works on any number of them placed inside a Foldable container:
def add[M: Monoid, F: Foldable](as: F[Option[M]]) = ~as.asMA.sum
Note that some rather useful monoids, aside from the obvious Int, String, Boolean are:
Map[A, B: Monoid]
A => (B: Monoid)
Option[A: Monoid]
In fact, it's barely worth the bother of extracting your own method:
scala> some(some(some(1))) #:: some(some(some(2))) #:: Stream.empty
res0: scala.collection.immutable.Stream[Option[Option[Option[Int]]]] = Stream(Some(Some(Some(1))), ?)
scala> ~res0.asMA.sum
res1: Option[Option[Int]] = Some(Some(3))
Some related questions
Q. What is a monoid?
A monoid is a type M for which there exists an associative binary operation (M, M) => M and an identity I under this operation, such that mplus(m, I) == m == mplus(I, m) for all m of type M
Q. What is |+|?
This is just scalaz shorthand (or ASCII madness, ymmv) for the mplus binary operation
Q. What is ~?
It is a unary operator meaning "or identity" which is retrofitted (using scala's implicit conversions) by the scalaz library onto Option[M] if M is a monoid. Obviously a non-empty option returns its contents; an empty option is replaced by the monoid's identity.
Q. What is asMA.sum?
A Foldable is basically a data structure which can be folded over (like foldLeft, for example). Recall that foldLeft takes a seed value and an operation to compose successive computations. In the case of summing a monoid, the seed value is the identity I and the operation is mplus. You can hence call asMA.sum on a Foldable[M : Monoid]. You might need to use asMA because of the name clash with the standard library's sum method.
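If you want to see the mechanics without pulling in scalaz, here is a minimal hand-rolled sketch of the same idea (the real library is far more general than this):
trait Monoid[M] {
  def zero: M                  // the identity I
  def mplus(a: M, b: M): M     // the associative binary operation
}

implicit val intMonoid: Monoid[Int] = new Monoid[Int] {
  def zero = 0
  def mplus(a: Int, b: Int) = a + b
}

// |+| lifted to Option: an empty side behaves like the identity
def plus[M](a: Option[M], b: Option[M])(implicit m: Monoid[M]): Option[M] =
  (a, b) match {
    case (Some(x), Some(y)) => Some(m.mplus(x, y))
    case (x, y) => x orElse y
  }

// ~ ("or identity"): unwrap, substituting zero for None
def orIdentity[M](a: Option[M])(implicit m: Monoid[M]): M = a.getOrElse(m.zero)

orIdentity(plus(Some(1), Some(2)))          // 3
orIdentity(plus(Some(1), None))             // 1
orIdentity(plus(None: Option[Int], None))   // 0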
Some References
Slides and Video of a talk I gave which gives practical examples of using monoids in the wild
def addOpts(xs: Option[Int]*) = xs.flatten.sum
This will work for any number of inputs.
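For example:
addOpts(Some(1), None, Some(3))   // 4
addOpts()                         // 0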
If they both default to 0 you don't need pattern matching:
def addOpt(a: Option[Int], b: Option[Int]) = {
  a.getOrElse(0) + b.getOrElse(0)
}
(Repeating comment above in an answer as requested)
You aren't extracting the content of the option in the proper way. When you match with case Some(x), x is the value inside the option (of type Int), and you don't call get on that. Just do
case Some(x) => x
Anyway, if you want the content or a default, a.getOrElse(0) is more convenient.
def addOpt(ao: Option[Int], bo: Option[Int]) =
  for {
    a <- ao
    b <- bo
  } yield a + b
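Note that, unlike the getOrElse versions above, this returns an Option and yields None as soon as either argument is missing:
addOpt(Some(1), Some(2))   // Some(3)
addOpt(Some(1), None)      // None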
I have learned the basic difference between foldLeft and reduceLeft
foldLeft:
initial value has to be passed
reduceLeft:
takes first element of the collection as initial value
throws exception if collection is empty
Is there any other difference ?
Any specific reason to have two methods with similar functionality?
A few things to mention here, before giving the actual answer:
Your question doesn't have anything to do with left, it's rather about the difference between reducing and folding
The difference is not the implementation at all, just look at the signatures.
The question doesn't have anything to do with Scala in particular, it's rather about the two concepts of functional programming.
Back to your question:
Here is the signature of foldLeft (could also have been foldRight for the point I'm going to make):
def foldLeft [B] (z: B)(f: (B, A) => B): B
And here is the signature of reduceLeft (again the direction doesn't matter here)
def reduceLeft [B >: A] (f: (B, A) => B): B
These two look very similar and thus caused the confusion. reduceLeft is a special case of foldLeft (which by the way means that you sometimes can express the same thing by using either of them).
When you call reduceLeft say on a List[Int] it will literally reduce the whole list of integers into a single value, which is going to be of type Int (or a supertype of Int, hence [B >: A]).
When you call foldLeft, say on a List[Int], it will fold the whole list (imagine rolling up a piece of paper) into a single value, but this value doesn't even have to be related to Int (hence [B]).
Here is an example:
def listWithSum(numbers: List[Int]) =
  numbers.foldLeft((List.empty[Int], 0)) { (resultingTuple, currentInteger) =>
    (currentInteger :: resultingTuple._1, currentInteger + resultingTuple._2)
  }
This method takes a List[Int] and returns a Tuple2[List[Int], Int], i.e. (List[Int], Int). It calculates the sum and returns a tuple with a list of integers and its sum. By the way, the list comes back reversed, because we used foldLeft instead of foldRight.
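For example:
listWithSum(List(1, 2, 3))   // (List(3, 2, 1), 6)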
Watch One Fold to rule them all for a more in-depth explanation.
reduceLeft is just a convenience method. Calling list.reduceLeft(op) is equivalent to
list.tail.foldLeft(list.head)(op)
foldLeft is more generic: you can use it to produce something completely different from what you originally put in, whereas reduceLeft can only produce an end result of the same type as (or a supertype of) the collection's element type. For example:
List(1,3,5).foldLeft(0) { _ + _ }
List(1,3,5).foldLeft(List[String]()) { (a, b) => b.toString :: a }
foldLeft applies the closure to the last folded result (the initial value on the first call) and the next value.
reduceLeft, on the other hand, first applies the closure to the first two values of the list, then combines each remaining value with the cumulative result. See:
List(1,3,5).reduceLeft { (a, b) => println("a " + a + ", b " + b); a + b }
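It combines 1 and 3 first, then that result with 5, printing the following before yielding 9:
a 1, b 3
a 4, b 5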
If the list is empty, foldLeft can present the initial value as a legal result; reduceLeft, on the other hand, does not have a legal result if there isn't at least one value in the list.
For reference, reduceLeft will fail when applied to an empty container with the following error:
java.lang.UnsupportedOperationException: empty.reduceLeft
Reworking the code to use foldLeft with an explicit zero element, for example
myList.foldLeft("") { (a, b) => a + b }
is one potential option. Another is to use the reduceLeftOption variant which returns an Option wrapped result.
myList.reduceLeftOption { (a, b) => a + b } match {
  case None => // handle no result as necessary
  case Some(v) => println(v)
}
The basic reason they are both in the Scala standard library is probably that they are both in the Haskell standard library (called foldl and foldl1 there). If reduceLeft weren't, it would quite often be defined as a convenience method in different projects.
From Functional Programming Principles in Scala (Martin Odersky):
The function reduceLeft is defined in terms of a more general function, foldLeft.
foldLeft is like reduceLeft but takes an accumulator z, as an additional parameter, which is returned when foldLeft is called on an empty list:
(List(x1, ..., xn) foldLeft z)(op) = (...(z op x1) op ...) op xn
[as opposed to reduceLeft, which throws an exception when called on an empty list.]
The course (see lecture 5.5) provides abstract definitions of these functions, which illustrates their differences, although they are very similar in their use of pattern matching and recursion.
abstract class List[T] { ...
  def reduceLeft(op: (T, T) => T): T = this match {
    case Nil => throw new Error("Nil.reduceLeft")
    case x :: xs => (xs foldLeft x)(op)
  }
  def foldLeft[U](z: U)(op: (U, T) => U): U = this match {
    case Nil => z
    case x :: xs => (xs foldLeft op(z, x))(op)
  }
}
Note that foldLeft returns a value of type U, which is not necessarily the element type of List[T], whereas reduceLeft returns a value of the same type as the list's elements.
To really understand what you are doing with fold/reduce, check this: http://wiki.tcl.tk/17983. It is a very good explanation. Once you get the concept of fold, reduce will come together with the answer above:
list.tail.foldLeft(list.head)(op)
Scala 2.13.3, Demo:
val names = List("Foo", "Bar")
println("ReduceLeft: "+ names.reduceLeft(_+_))
println("ReduceRight: "+ names.reduceRight(_+_))
println("Fold: "+ names.fold("Other")(_+_))
println("FoldLeft: "+ names.foldLeft("Other")(_+_))
println("FoldRight: "+ names.foldRight("Other")(_+_))
outputs:
ReduceLeft: FooBar
ReduceRight: FooBar
Fold: OtherFooBar
FoldLeft: OtherFooBar
FoldRight: FooBarOther