I am trying to get the number of attributes a Flink DataSet has. The first attempt was simply:
dataset.input
  .map(_.vector.size)
  .reduce((_, b) => b)
  .collect
  .head
Then, looking at the implementation of Solver, I saw it does it this way:
// TODO: Faster way to do this?
dataset.map(_.vector.size)
  .reduce((a, b) => b)
But the TODO comment says it all.
So, I came up with this implementation:
dataset.first(1)
  .map(_.vector.size)
  .reduce((_, b) => b)
  .collect
Is there a more efficient implementation?
The fastest would be
dataset
  // only forward the first vector of each partition
  .mapPartition(in => if (in.hasNext) Seq(in.next) else Seq())
  // move all remaining vectors to a single partition, compute the size of the first one and forward it
  .mapPartition(in => if (in.hasNext) Seq(in.next.vector.size) else Seq()).setParallelism(1)
  .collect
Using reduce or groupReduce is less efficient because it might move the data to a single machine without reducing it first, and it results in one function call per input record. mapPartition is called once per partition and only forwards the first element of its input iterator.
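If you need this in several places, the pattern can be wrapped in a small helper. A minimal sketch, assuming the input is a DataSet[LabeledVector] as in FlinkML's Solver (adjust the element type to your own case):

import org.apache.flink.api.scala._
import org.apache.flink.ml.common.LabeledVector

// Returns the dimensionality of the first vector, touching only one element per partition.
def dimensionCount(dataset: DataSet[LabeledVector]): Int =
  dataset
    .mapPartition(in => if (in.hasNext) Seq(in.next) else Seq())
    .mapPartition(in => if (in.hasNext) Seq(in.next.vector.size) else Seq()).setParallelism(1)
    .collect()
    .head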
First I had a salesList: List[Sale] and in order to get an ID of the last Sale in the List I've used lastOption:
val lastSaleId: Option[Any] = salesList.lastOption.map(_.saleId)
But now I've modified the method that used List[Sale] to work with salesListRdd: List[RDD[Sale]]. So I've changed the way I'm getting the ID of the last Sale:
val lastSaleId: Option[Any] = SparkContext
  .union(salesListRdd)
  .collect().toList
  .lastOption.map(_.saleId)
I'm not sure that this is the best way to go, because here I'm still collecting the RDD to a List, which brings it to the driver node and may cause the driver to run out of memory.
Is there a way to get the ID of the last Sale from an RDD while preserving the initial order of the records? Not any kind of sorting, but the order in which the Sale objects were originally stored in the List?
There are at least two efficient solutions. You can either use top with zipWithIndex:
def lastValue[T](rdd: RDD[T]): Option[T] = {
  rdd.zipWithIndex.map(_.swap).top(1)(Ordering[Long].on(_._1)).headOption.map(_._2)
}
or top with custom key:
def lastValue[T](rdd: RDD[T]): Option[T] = {
  rdd.mapPartitionsWithIndex(
    (i, iter) => iter.zipWithIndex.map { case (x, j) => ((i, j), x) }
  ).top(1)(Ordering[(Int, Int)].on(_._1)).headOption.map(_._2)
}
The former requires an additional Spark job for zipWithIndex (to compute the partition sizes), while the latter doesn't.
Before using either, please be sure to understand the limitation. Quoting the docs:
Note that some RDDs, such as those returned by groupBy(), do not guarantee order of elements in a partition. The unique ID assigned to each element is therefore not guaranteed, and may even change if the RDD is reevaluated. If a fixed ordering is required to guarantee the same index assignments, you should sort the RDD with sortByKey() or save it to a file.
In particular, depending on the exact input, union might not preserve the input order at all.
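For the concrete case in the question, a hedged usage sketch (sc is assumed to be your SparkContext, and the result is only meaningful if the order of the unioned RDDs is itself stable, per the caveat above):

// union preserves partition order (the first RDD's partitions, then the second's, and so on),
// so the "last" element corresponds to the last element of the last RDD in the list.
val lastSaleId: Option[Any] = lastValue(sc.union(salesListRdd)).map(_.saleId)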
You could use zipWithIndex and sort descending by it, so that the last record will be on the top, then take(1):
salesListRdd
  .zipWithIndex()
  .map({ case (x, y) => (y, x) })
  .sortByKey(ascending = false)
  .map({ case (x, y) => y })
  .take(1)
Solution is taken from here: http://www.swi.com/spark-rdd-getting-bottom-records/
However, it is highly inefficient, since it does lots of partition shuffling.
I have a PairRDD of the form (word, wordCount). Now, I need to calculate, for each word, its percentage of the total word count, obtaining a final PairRDD of the form (word, (wordCount, percentage)).
I tried with:
val pairs = .... .cache() // (word, wordCount)
val wordsCount = pairs.map(_._2).reduce(_ + _)
pairs.map {
  case (word, count) =>
    (word, (count, BigDecimal(count / wordsCount.toDouble * 100).setScale(3, BigDecimal.RoundingMode.HALF_UP).toDouble))
}
But it does not seem very efficient (I'm a beginner). Is there a better way to do it?
When calculating wordsCount, map(_._2).reduce(_ + _) is more idiomatically expressed by the aggregate method:
val pairs = .... .cache() // (word, wordCount)
val wordsCount = pairs.aggregate(0)((sum, pair) => sum + pair._2, _ + _)
The first argument is the initial total count, which should be zero. The next two arguments are in a separate argument list. The first of them ((sum, pair) => sum + pair._2, termed the sequence operator) is executed for each member of a partition, adding each value to that partition's running sum; the second (_ + _, termed the combination operator) combines the sums from different partitions. The main benefit is that this operation is performed in parallel, partition by partition, across the data, and guarantees no expensive repartitioning (a.k.a. shuffling) of that data. (Repartitioning occurs when data needs to be transferred between nodes in the cluster, and is very slow due to network communication.)
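For instance, on a tiny pair RDD the two operators behave like this (a sketch, assuming sc is your SparkContext):

// seqOp adds the values within each partition; combOp merges the per-partition sums.
val demo = sc.parallelize(Seq(("a", 2), ("b", 3), ("c", 5)), 2)
val total = demo.aggregate(0)((sum, pair) => sum + pair._2, _ + _)  // total == 10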
While aggregate does not affect the partitioning of the data, you should note that, if your data is partitioned, a map operation will remove the partitioning scheme, because of the possibility that the keys in the pair RDD might get changed by the transformation. This might result in subsequent repartitioning if you perform any further transformations on the result of a map transformation. That said, a map operation followed by a reduce shouldn't result in repartitioning, but might be slightly less efficient because of the extra step (though not hugely so).
If you're concerned that the total word count might overflow an Int type, you could use a BigInt instead:
val wordsCount = pairs.aggregate(BigInt(0))((sum, pair) => sum + pair._2, _ + _)
As for producing a pair RDD with the word counts and percentages as the value, you should use mapValues instead of map. Again mapValues will retain any existing partitioning scheme (because it guarantees that the keys will not get changed by the transformation), while map will remove it. Furthermore, mapValues is simpler in that you do not need to process key values:
pairs.mapValues(c => (c, c * 100.0 / wordsCount))
That should provide sufficient accuracy for your purposes. I would only worry about rounding, and using BigDecimal, when retrieving and outputting values. However, if you really do need that, your code would look like this:
pairs.mapValues { c =>
  (c, BigDecimal(c * 100.0 / wordsCount).setScale(3, BigDecimal.RoundingMode.HALF_UP).toDouble)
}
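To see the difference in partitioning between map and mapValues described above, here is a quick check sketch (assuming sc is your SparkContext):

import org.apache.spark.HashPartitioner

val partitioned = sc.parallelize(Seq(("a", 1), ("b", 2))).partitionBy(new HashPartitioner(4))
partitioned.map { case (k, v) => (k, v * 2) }.partitioner  // None: map drops the partitioner
partitioned.mapValues(_ * 2).partitioner                   // Some(HashPartitioner): preserved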
I was trying to transform an RDD inside another transformation. Since RDD transformations and actions can only be invoked by the driver, I collected the second RDD and tried to apply the transformation to it inside the other transformation, like below:
val name_match = first_names.map(y => (y, first_names_collection.value.filter(z => soundex.difference(z, y) == 4 ) ))
The above code throws the following exception:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException): Application attempt appattempt_1468905506642_46091_000001 doesn't exist in ApplicationMasterService cache.
Here, the size of first_names_collection is more than 10 GB. Would that be causing this problem? Is there any other way to do this?
It looks like you want to calculate a difference function between each element of first_names and each element of first_names_collection and find the pairs with a difference of 4.
Typically, performing pair-wise calculations on two RDDs is done by enumerating all pairs with cartesian first. Your solution would look something like:
first_names.cartesian(first_names_collection) // generate all pairs
  .filter { case (lhs, rhs) => soundex.difference(lhs, rhs) == 4 }
  .groupByKey
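One hedged caveat: cartesian needs both operands to be RDDs, whereas the question's first_names_collection appears to be a broadcast variable (it is accessed via .value). A minimal sketch of the adjustment, assuming the collection can instead be loaded as its own RDD (the path below is a hypothetical placeholder, and soundex must be serializable or created inside the closure):

// Keep the 10 GB name collection as an RDD rather than shipping it as a broadcast.
val firstNamesCollectionRdd = sc.textFile("hdfs:///path/to/first_names_collection")

first_names
  .cartesian(firstNamesCollectionRdd)                              // all (lhs, rhs) pairs
  .filter { case (lhs, rhs) => soundex.difference(lhs, rhs) == 4 }
  .groupByKey()                                                    // matches grouped per lhs name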
I’m looking for a way to compare subsets of an RDD intelligently.
Let's say I have an RDD with key/value pairs of type (Int -> T). I eventually need to say "compare all values of key 1 with all values of key 2, and compare the values of key 3 to the values of key 5 and key 7". How would I go about doing this efficiently?
The way I’m currently thinking of doing it is by creating a List of filtered RDDs and then using RDD.cartesian()
def filterSubset[T] = (b: Int, r: RDD[(Int, T)]) => r.filter { case (name, _) => name == b }

val keyPairs: List[(Int, Int)] // all key pairs
val rddPairs = keyPairs.map {
  case (a, b) =>
    filterSubset(a, r).cartesian(filterSubset(b, r))
}
rddPairs.map{whatever I want to compare…}
I would then iterate the list and perform a map on each of the RDDs of pairs to gather the relational data that I need.
What I can't tell about this idea is whether it would be extremely inefficient to set up possibly hundreds of map jobs and then iterate through them. In that case, would lazy evaluation in Spark optimize the data shuffling between all of the maps? If not, can someone please recommend a more efficient way to approach this issue?
Thank you for your help
One way you can approach this problem is to replicate and partition your data to reflect the key pairs you want to compare. Let's start with creating two maps from the actual keys to the temporary keys we'll use for replication and joins:
def genMap(keys: Seq[Int]) = keys
  .zipWithIndex.groupBy(_._1)
  .map { case (k, vs) => (k -> vs.map(_._2)) }
val left = genMap(keyPairs.map(_._1))
val right = genMap(keyPairs.map(_._2))
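For instance, with a hypothetical keyPairs = List((1, 2), (3, 5), (3, 7)), genMap would produce:

val keyPairs = List((1, 2), (3, 5), (3, 7))   // hypothetical key pairs
genMap(keyPairs.map(_._1))  // Map(1 -> List(0), 3 -> List(1, 2)): key 3 feeds pairs 1 and 2
genMap(keyPairs.map(_._2))  // Map(2 -> List(0), 5 -> List(1), 7 -> List(2))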
Next we can transform data by replicating with new keys:
def mapAndReplicate[T: ClassTag](rdd: RDD[(Int, T)], map: Map[Int, Seq[Int]]) = {
  rdd.flatMap { case (k, v) => map.getOrElse(k, Seq()).map(x => (x, (k, v))) }
}
val leftRDD = mapAndReplicate(r, left)   // r is the original RDD[(Int, T)]
val rightRDD = mapAndReplicate(r, right)
Finally we can cogroup:
val cogrouped = leftRDD.cogroup(rightRDD)
And compare / filter pairs:
cogrouped.values.flatMap { case (xs, ys) =>
  for {
    (kx, vx) <- xs
    (ky, vy) <- ys
    if cosineSimilarity(vx, vy) <= threshold
  } yield ((kx, vx), (ky, vy))
}
Obviously, in its current form this approach is limited. It assumes that the values for an arbitrary pair of keys can fit into memory, and it requires a significant amount of network traffic. Still, it should give you some idea of how to proceed.
Another possible approach is to store the data in an external system (for example a database) and fetch the required key-value pairs on demand.
Since you're trying to find similarity between elements, I would also consider a completely different approach. Instead of naively comparing key-by-key, I would try to partition the data using a custom partitioner that reflects the expected similarity between documents. It is far from trivial in general, but should give much better results.
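As a rough, hedged sketch of that last idea: a custom Partitioner only needs numPartitions and getPartition. Here bucketOf is a hypothetical function that maps a key to a "similarity bucket" so that likely-similar keys land on the same partition, and compareWithinPartition is likewise a placeholder:

import org.apache.spark.Partitioner

class SimilarityPartitioner(numBuckets: Int, bucketOf: Int => Int) extends Partitioner {
  override def numPartitions: Int = numBuckets
  override def getPartition(key: Any): Int = key match {
    case k: Int => math.abs(bucketOf(k)) % numBuckets  // keys in the same bucket share a partition
    case _      => 0
  }
}

// Usage sketch: rdd.partitionBy(new SimilarityPartitioner(16, bucketOf))
//   .mapPartitions(iter => compareWithinPartition(iter))  // comparisons stay node-local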
Using DataFrames you can easily do the per-key cartesian operation using a join:
dataframe1.join(dataframe2, dataframe1("key")===dataframe2("key"))
It will probably do exactly what you want, but efficiently.
If you don't know how to create a DataFrame, please refer to http://spark.apache.org/docs/latest/sql-programming-guide.html#creating-dataframes
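A minimal, hedged sketch of the DataFrame route (spark is an assumed SparkSession and the column names are made up for illustration):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("pairwise-compare").getOrCreate()
import spark.implicits._

val dataframe1 = Seq((1, "a"), (2, "b"), (2, "c")).toDF("key", "value1")
val dataframe2 = Seq((2, "x"), (3, "y")).toDF("key", "value2")

// Rows sharing a key are paired up; Spark plans the shuffle and the join strategy for you.
dataframe1.join(dataframe2, dataframe1("key") === dataframe2("key")).show()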
I am new to Spark and Scala. I was confused about the way reduceByKey function works in Spark. Suppose we have the following code:
val lines = sc.textFile("data.txt")
val pairs = lines.map(s => (s, 1))
val counts = pairs.reduceByKey((a, b) => a + b)
The map function is clear: s is the key and it points to a line from data.txt, and 1 is the value.
However, I didn't get how reduceByKey works internally. Does "a" point to the key? Or does "a" point to "s"? Then what does a + b represent? How are they filled?
Let's break it down to discrete methods and types. That usually exposes the intricacies for new devs:
pairs.reduceByKey((a, b) => a + b)
becomes
pairs.reduceByKey((a: Int, b: Int) => a + b)
and renaming the variables makes it a little more explicit
pairs.reduceByKey((accumulatedValue: Int, currentValue: Int) => accumulatedValue + currentValue)
So, we can now see that we are simply taking an accumulated value for the given key and summing it with the next value of that key. NOW, let's break it further so we can understand the key part. So, let's visualize the method more like this:
pairs.reduce((accumulatedValue: List[(String, Int)], currentValue: (String, Int)) => {
  // Turn the accumulated value into a true key->value mapping
  val accumAsMap = accumulatedValue.toMap
  // Try to get the key's current value if we've already encountered it
  accumAsMap.get(currentValue._1) match {
    // If we have encountered it, then add the new value to the existing value and overwrite the old
    case Some(value: Int) => (accumAsMap + (currentValue._1 -> (value + currentValue._2))).toList
    // If we have NOT encountered it, then simply add it to the list
    case None => currentValue :: accumulatedValue
  }
})
So, you can see that the reduceByKey takes the boilerplate of finding the key and tracking it so that you don't have to worry about managing that part.
Deeper, truer if you want
All that being said, that is a simplified version of what happens, as there are some optimizations that are done here. This operation is associative, so the Spark engine will perform these reductions locally first (often termed map-side reduce) and then once again after the shuffle. This saves network traffic; instead of sending all the data and performing the operation, it can reduce it as small as it can and then send that reduction over the wire.
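A conceptual sketch of that map-side step (assuming sc is your SparkContext); this is only an illustration of what the engine does for you, not something you need to write yourself:

val words = sc.parallelize(Seq(("a", 1), ("a", 1), ("b", 1), ("a", 1)), 2)

// Roughly what each partition does locally before any data crosses the network:
val locallyCombined = words.mapPartitions { it =>
  it.toSeq.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }.iterator
}
// The locally combined records are then merged across partitions with the same function:
val counts = locallyCombined.reduceByKey(_ + _)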
One requirement for the reduceByKey function is that it must be associative. To build some intuition on how reduceByKey works, let's first see how an associative function helps us in a parallel computation.
We can break the original collection into pieces and, by applying the associative function, accumulate a total. The sequential case is trivial, we are used to it: 1+2+3+4+5+6+7+8+9+10.
Associativity lets us use that same function in sequence and in parallel. reduceByKey uses that property to compute a result out of an RDD, which is a distributed collection consisting of partitions.
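A tiny local illustration of that property (plain Scala, no Spark): partial sums computed independently can be combined with the same function and give the same total.

val xs = 1 to 10
val sequential = xs.reduce(_ + _)                                  // 55
val combined   = xs.grouped(3).map(_.reduce(_ + _)).reduce(_ + _)  // 6 + 15 + 24 + 10 = 55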
Consider the following example:
// collection of the form ("key",1),("key",2),...,("key",20) split among 4 partitions
val rdd = sparkContext.parallelize((1 to 20).map(x => ("key", x)), 4)
rdd.reduceByKey(_ + _).collect()
> Array[(String, Int)] = Array((key,210))
In Spark, data is distributed into partitions (4 in this example). First, we apply the function locally within each partition, sequentially inside the partition, but we run all 4 partitions in parallel. Then, the results of the local computations are aggregated by applying the same function again, finally arriving at the result.
reduceByKey is a specialization of aggregateByKey. aggregateByKey takes two functions: one that is applied to each partition (sequentially) and one that is applied among the results of the partitions (in parallel). reduceByKey uses the same associative function in both cases: to do the sequential computation on each partition and then to combine those results into a final result, as we have just described.
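As a sketch of that equivalence, the example above can be written with aggregateByKey directly, spelling out the zero value and both functions:

// seqOp runs inside each partition, combOp merges the per-partition results.
rdd.aggregateByKey(0)(_ + _, _ + _).collect()
// Array((key,210)), the same result as rdd.reduceByKey(_ + _).collect()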
In your example of
val counts = pairs.reduceByKey((a,b) => a+b)
a and b are both Int accumulators for the _2 of the tuples in pairs. reduceByKey will take two tuples with the same key s and use their _2 values as a and b, producing a new Tuple2[String, Int]. This operation is repeated until there is only one tuple for each key s.
Unlike a non-Spark (or, really, non-parallel) reduce, where the first element is always the accumulator and the second a value, reduceByKey operates in a distributed fashion, i.e. each node will reduce its set of tuples into a collection of uniquely-keyed tuples and then reduce the tuples from multiple nodes until there is a final uniquely-keyed set of tuples. This means that, as the results from the nodes are reduced, a and b represent already-reduced accumulators.
Spark RDD reduceByKey function merges the values for each key using an associative reduce function.
The reduceByKey function works only on pair RDDs, and it is a transformation, which means it is lazily evaluated. An associative function is passed as a parameter; it is applied to the source RDD and creates a new RDD as a result.
So in your example, the RDD pairs has a set of multiple paired elements like (s1,1), (s2,1), etc. reduceByKey accepts a function (accumulator, n) => (accumulator + n), which starts from the first value seen for each key and adds up the remaining values for that key, returning the result RDD counts holding the total count paired with each key.
Simply put, if your input RDD data looks like this:
(aa,1)
(bb,1)
(aa,1)
(cc,1)
(bb,1)
and you apply reduceByKey on the above RDD data, then there are a few things to remember:
reduceByKey always takes two inputs (x, y) and always works with two rows at a time.
Since it is reduceByKey, it will combine two rows with the same key and combine the result of their values.
val rdd2 = rdd.reduceByKey((x,y) => x+y)
rdd2.foreach(println)
output:
(aa,2)
(bb,2)
(cc,1)