How can I partition an RDD respecting order? - scala

I would like to divide an RDD into a number of partitions corresponding to the number of distinct keys I found (3 in this case):
RDD: [(1,a), (1,b), (1,c), (2,d), (3,e), (3,f), (3,g), (3,h), (3,i)]
What I do now is partition so that elements with the same key fall into the same partition:
[(1,a), (1,b), (1,c)]
[(2,d)]
[(3,e), (3,f), (3,g), (3,h), (3,i)]
This is how I partition
val partitionedRDD = rdd.partitionBy(new PointPartitioner(
  rdd.keys.distinct().count().asInstanceOf[Int]))
This is the PointPartitioner class:
class PointPartitioner(numParts: Int) extends org.apache.spark.Partitioner {

  override def numPartitions: Int = numParts

  override def getPartition(key: Any): Int = {
    key.hashCode % numPartitions
  }

  override def equals(other: Any): Boolean = other match {
    case dnp: PointPartitioner =>
      dnp.numPartitions == numPartitions
    case _ =>
      false
  }
}
However, elements are unbalanced across partitions. What I would like to obtain is an RDD partitioned like this, where all the partitions contain roughly the same number of elements, respecting the order of the keys:
[(1,a), (1,b), (1,c)]
[(2,d), (3,e), (3,f)]
[(3,g), (3,h), (3,i)]
What could I try?

Assigning your partitions like this
p1 [(1,a), (1,b), (1,c)]
p2 [(2,d), (3,e), (3,f)]
p3 [(3,g), (3,h), (3,i)]
would mean assigning the same partition key (3) to different partitions (p2 or p3). Just like a mathematical function, a partitioner cannot return multiple values for the same argument (what would the value depend on then?).
What you could do instead is add something to your partition key so that you end up with more buckets (effectively splitting one set into smaller sets). But you have virtually no control over how Spark places partitions onto nodes, so data you wanted on the same node can end up spread across multiple nodes.
It really boils down to the job you want to perform. I would recommend considering the outcome you want and seeing whether you can come up with a smart partition key with a reasonable tradeoff (if one is really necessary). Maybe you could key values by the letter and then use operations like reduceByKey rather than groupByKey to get your final results?
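If equal-sized, order-preserving partitions are genuinely what you need, one hedged option is to key each element by its global position and partition on that index instead of on the original key. A minimal sketch under that assumption (the partition count 3 and the name IndexPartitioner are illustrative, not anything Spark provides):

import org.apache.spark.Partitioner

// Assigns each element to a partition based on its global index, so partitions
// keep the original order and end up with roughly equal sizes.
class IndexPartitioner(numParts: Int, totalCount: Long) extends Partitioner {
  override def numPartitions: Int = numParts
  override def getPartition(key: Any): Int = {
    val idx = key.asInstanceOf[Long]
    math.min((idx * numParts / totalCount).toInt, numParts - 1)
  }
}

val count = rdd.count()
val balanced = rdd
  .zipWithIndex()                                  // ((key, value), index)
  .map { case (kv, idx) => (idx, kv) }             // (index, (key, value))
  .partitionBy(new IndexPartitioner(3, count))
  .values                                          // back to (key, value), order preserved

Note that this partitions by position, not by key, so a key such as 3 may legitimately span two partitions, which is exactly the tradeoff described above.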

Related

Spark: groupByKey with 'Iterator' instead of 'Iterable' on the right side

I have an RDD. I want to group it by some property and save each group into a separate file (and get the list of resulting file names).
The most naive way:
val rdd: RDD[Long] = ???
val byLastDigit: RDD[(Int, Long)] = rdd.map(n => ((n % 10).toInt, n))
val saved: Array[String] = byLastDigit.groupByKey().map((numbers: (Int, Iterable[Long])) => {
  // save numbers into a file
  ???
}).collect()
The downside of this approach is that it holds all values for a key in memory simultaneously, so it will work poorly on huge datasets.
Alternative approach:
byLastDigit.partitionBy(new HashPartitioner(1000)).mapPartitions((numbers: Iterator[(Int, Long)]) => {
  // assume that all numbers in a partition have the same key
  ???
}).collect()
Since the number of partitions is much higher than the number of keys, each partition will most probably hold numbers for only one key.
It works smoothly for huge datasets, but it is ugly and much more error-prone.
Could it be done better?
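One hedged way to make the second approach less fragile is an exact partitioner (one partition per digit key) instead of an oversized HashPartitioner, and then streaming each partition straight to a file. This is only a sketch; the local /tmp path and PrintWriter-based writing are placeholders for whatever storage you actually target, and it assumes the numbers are non-negative so the digit is 0..9:

import java.io.PrintWriter
import org.apache.spark.Partitioner

// Exactly one partition per possible key (last digit 0..9, assuming non-negative numbers).
class DigitPartitioner extends Partitioner {
  override def numPartitions: Int = 10
  override def getPartition(key: Any): Int = key.asInstanceOf[Int]
}

val saved: Array[String] = byLastDigit
  .partitionBy(new DigitPartitioner)
  .mapPartitionsWithIndex { (digit, numbers) =>
    // numbers is an Iterator, so values are streamed instead of held in memory
    val path = s"/tmp/group-$digit.txt"            // illustrative output location
    val out = new PrintWriter(path)
    try numbers.foreach { case (_, n) => out.println(n) } finally out.close()
    Iterator(path)
  }
  .collect()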

What is the scope of the state for Spark Streaming's mapWithState?

With Spark Streaming, I can create a DStream[(K, V)] on which I can use mapWithState to maintain some state during stream processing. The map function is set up like this:
val mapFun = (key: K, maybeValue: Option[V], state: State[S]) => {
  // Do stuff
}
and then I can use:
val mappedStreamWithState = stream.mapWithState(StateSpec.function(mapFun))
My question now is: what is the scope of the state? Is it the key or the partition?
Let's say the stream comes from a Kafka topic that has 3 partitions but there can be 300 keys. To my understanding, each RDD in the stream has 3 partitions with approximately 100 keys each. Will there then be 3 states (one for each partition) or 300 states (one for each key)?
I am answering my own question so I can mark it as accepted, should anyone else have the same question.
tl;dr: The scope of the state is the partition, and there are more caveats when you depend on the key.
I investigated mapWithState quite a bit and here is what I found out:
mapWithState only works on a DStream[(A, B)]. So far so good.
The stream produced by mapWithState is implicitly partitioned by the key. From what I have found out so far, the partition count depends on the number of worker tasks available. E.g., running it in local[*] mode on a CPU with 8 threads yields 8 partitions.
The scope of the state is the partition, so if you depend on the key, you should add some mapping to the state, e.g., a Map[TKey, TStateForTheKey] (a sketch of this is at the end of this answer)
Doing something like a stream.repartition(1).mapWithState(...) is futile
If you need to make sure that two keys end up in one partition, e.g., if you want to calculate some values from measurement data channels that depend on each other, you have to assign a single common key to them; this makes sure they go into the same partition
So, let's say you have data in a DStream[(String, Int)] like this:
foo,1
bar,8
foo,23
quux,423
bletch,42
bar,5
and you need to make sure that foo and bar are processed together and quux and bletch are, you need to do something like this:
stream.flatMap {
  case (k, v) if Seq("foo", "bar").contains(k)     => Some("foobar" -> (k, v))
  case (k, v) if Seq("quux", "bletch").contains(k) => Some("quuxbletch" -> (k, v))
  case _                                           => None
}.mapWithState(StateSpec.function(myFunc))
And your mapping function must look like this:
val myFunc = (key: String, maybeRecord: Option[(String, Int)], state: State[Something]) => {
  // Do something with the record
}
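And, as mentioned in the list above, if you additionally need per-key state inside those merged groups, a hedged sketch is to keep a Map from the original key to its state inside the single State object. The running-sum state and the name myStatefulFunc here are purely illustrative:

import org.apache.spark.streaming.State

// Per-key state held inside one State object: a map from the original key
// ("foo", "bar", ...) to a running sum for that key.
val myStatefulFunc = (groupKey: String, maybeRecord: Option[(String, Int)], state: State[Map[String, Int]]) => {
  val perKey = state.getOption().getOrElse(Map.empty[String, Int])
  val updated = maybeRecord match {
    case Some((k, v)) => perKey.updated(k, perKey.getOrElse(k, 0) + v)
    case None         => perKey                     // no new record in this batch
  }
  if (!state.isTimingOut()) state.update(updated)   // a timing-out state cannot be updated
  (groupKey, updated)
}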

Can only zip RDDs with same number of elements in each partition despite repartition

I load a dataset
val data = sc.textFile("/home/kybe/Documents/datasets/img.csv",defp)
I want to put an index on this data, so I do:
val nb = data.count.toInt
val tozip = sc.parallelize(1 to nb).repartition(data.getNumPartitions)
val res = tozip.zip(data)
Unfortunately I get the following error:
Can only zip RDDs with same number of elements in each partition
How can I modify the number of elements per partition, if that is possible?
Why doesn't it work?
The documentation for zip() states:
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the same number of partitions and the same number of elements in each partition (e.g. one was made through a map on the other).
So we need to make sure we meet 2 conditions:
both RDDs have the same number of partitions
respective partitions in those RDDs have exactly the same size
You are making sure that you will have the same number of partitions with repartition(), but Spark doesn't guarantee that you will have the same distribution within each partition for each RDD.
Why is that?
Because there are different types of RDDs and most of them have different partitioning strategies! For example:
ParallelCollectionRDD is created when you parallelise a collection with sc.parallelize(collection). It checks how many partitions there should be, checks the size of the collection and calculates the step size. E.g. if you have 15 elements in the list and want 4 partitions, the first 3 partitions will get 4 consecutive elements each and the last one will get the remaining 3.
HadoopRDD has, if I remember correctly, one partition per file block. Even though you are using a local file, Spark internally first creates this kind of RDD when you read a local file and then maps over that RDD, since that RDD is a pair RDD of <Long, Text> and you just want String :-)
etc., etc.
In your example, Spark internally does create different types of RDDs (CoalescedRDD and ShuffledRDD) while doing the repartitioning, but I think you get the general idea that different RDDs have different partitioning strategies :-)
Notice that the last part of the zip() doc mentions the map() operation. map() does not repartition the data, since it is a narrow transformation, so it would guarantee both conditions.
Solution
In this simple example, as mentioned, you can simply use data.zipWithIndex. If you need something more complicated, then the new RDD for zip() should be created with map() as mentioned above.
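For the indexing use case in the question, a minimal sketch of that simple route (variable names follow the question; zipWithIndex runs an extra Spark job when the RDD has more than one partition):

// Pair each line with a 1-based index, mirroring the original `1 to nb` intent.
val res = data.zipWithIndex().map { case (line, idx) => (idx + 1, line) }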
I solved this by creating an implicit helper like so
import scala.reflect.ClassTag
import org.apache.spark.rdd.{PairRDDFunctions, RDD}

implicit class RichContext[T](rdd: RDD[T]) {
  def zipShuffle[A](other: RDD[A])(implicit kt: ClassTag[T], vt: ClassTag[A]): RDD[(T, A)] = {
    // Key both RDDs by element index, join on the index, then drop it.
    val otherKeyed: RDD[(Long, A)] = other.zipWithIndex().map { case (n, i) => i -> n }
    val thisKeyed: RDD[(Long, T)] = rdd.zipWithIndex().map { case (n, i) => i -> n }
    val joined = new PairRDDFunctions(thisKeyed).join(otherKeyed).map(_._2)
    joined
  }
}
Which can then be used like
val rdd1 = sc.parallelize(Seq(1,2,3))
val rdd2 = sc.parallelize(Seq(2,4,6))
val zipped = rdd1.zipShuffle(rdd2) // Seq((1,2),(2,4),(3,6))
NB: Keep in mind that the join will cause a shuffle.
The following provides a Python answer to this problem by defining a custom_zip method:
Can only zip with RDD which has the same number of partitions error

How does HashPartitioner work?

I read up on the documentation of HashPartitioner. Unfortunately, nothing much is explained beyond the API calls. I am under the assumption that HashPartitioner partitions the distributed set based on the hash of the keys. For example, if my data is like
(1,1), (1,2), (1,3), (2,1), (2,2), (2,3)
So the partitioner would put these into different partitions, with the same keys falling into the same partition. However, I do not understand the significance of the constructor argument:
new HashPartitioner(numPartitions) // What does numPartitions do?
For the above dataset, how would the results differ if I did:
new HashPartitioner(1)
new HashPartitioner(2)
new HashPartitioner(10)
So how does HashPartitioner work actually?
Well, let's make your dataset marginally more interesting:
val rdd = sc.parallelize(for {
  x <- 1 to 3
  y <- 1 to 2
} yield (x, None), 8)
We have six elements:
rdd.count
Long = 6
no partitioner:
rdd.partitioner
Option[org.apache.spark.Partitioner] = None
and eight partitions:
rdd.partitions.length
Int = 8
Now let's define a small helper to count the number of elements per partition:
import org.apache.spark.rdd.RDD

def countByPartition(rdd: RDD[(Int, None.type)]) = {
  rdd.mapPartitions(iter => Iterator(iter.length))
}
Since we don't have a partitioner, our dataset is distributed uniformly between partitions (Default Partitioning Scheme in Spark):
countByPartition(rdd).collect()
Array[Int] = Array(0, 1, 1, 1, 0, 1, 1, 1)
Now lets repartition our dataset:
import org.apache.spark.HashPartitioner
val rddOneP = rdd.partitionBy(new HashPartitioner(1))
Since the parameter passed to HashPartitioner defines the number of partitions, we expect one partition:
rddOneP.partitions.length
Int = 1
Since we have only one partition it contains all elements:
countByPartition(rddOneP).collect
Array[Int] = Array(6)
Note that the order of values after the shuffle is non-deterministic.
In the same way, if we use HashPartitioner(2)
val rddTwoP = rdd.partitionBy(new HashPartitioner(2))
we'll get 2 partitions:
rddTwoP.partitions.length
Int = 2
Since the RDD is now partitioned by key, data won't be distributed uniformly anymore:
countByPartition(rddTwoP).collect()
Array[Int] = Array(2, 4)
Because we have three keys and only two different values of hashCode mod numPartitions, there is nothing unexpected here:
(1 to 3).map((k: Int) => (k, k.hashCode, k.hashCode % 2))
scala.collection.immutable.IndexedSeq[(Int, Int, Int)] = Vector((1,1,1), (2,2,0), (3,3,1))
Just to confirm the above:
rddTwoP.mapPartitions(iter => Iterator(iter.map(_._1).toSet)).collect()
Array[scala.collection.immutable.Set[Int]] = Array(Set(2), Set(1, 3))
Finally, with HashPartitioner(7) we get seven partitions, three of them non-empty with 2 elements each:
val rddSevenP = rdd.partitionBy(new HashPartitioner(7))
rddSevenP.partitions.length
Int = 7
countByPartition(rddSevenP).collect()
Array[Int] = Array(0, 2, 2, 2, 0, 0, 0)
Summary and Notes
HashPartitioner takes a single argument which defines the number of partitions.
Values are assigned to partitions using the hash of the keys. The hash function may differ depending on the API (Scala RDDs may use hashCode, Datasets use MurmurHash 3, PySpark uses portable_hash).
In a simple case like this, where the key is a small integer, you can assume that the hash is an identity (i = hash(i)).
The Scala API uses nonNegativeMod to determine the partition based on the computed hash.
If the distribution of keys is not uniform, you can end up in situations where part of your cluster is idle.
Keys have to be hashable. You can check my answer to A list as a key for PySpark's reduceByKey to read about PySpark-specific issues. Another possible problem is highlighted by the HashPartitioner documentation:
Java arrays have hashCodes that are based on the arrays' identities rather than their contents, so attempting to partition an RDD[Array[_]] or RDD[(Array[_], _)] using a HashPartitioner will produce an unexpected or incorrect result.
In Python 3 you have to make sure that hashing is consistent. See What does Exception: Randomness of hash of string should be disabled via PYTHONHASHSEED mean in pyspark?
Hash partitioner is neither injective nor surjective. Multiple keys can be assigned to a single partition and some partitions can remain empty.
Please note that currently hash-based methods don't work in Scala when combined with REPL-defined case classes (Case class equality in Apache Spark).
HashPartitioner (or any other Partitioner) shuffles the data. Unless partitioning is reused between multiple operations, it doesn't reduce the amount of data to be shuffled.
An RDD is distributed, which means it is split into some number of parts. Each of these partitions is potentially on a different machine. A hash partitioner with argument numPartitions chooses the partition on which to place a pair (key, value) in the following way:
Creates exactly numPartitions partitions.
Places (key, value) in the partition with number Hash(key) % numPartitions
The HashPartitioner.getPartition method takes a key as its argument and returns the index of the partition which the key belongs to. The partitioner has to know what the valid indices are, so it returns numbers in the right range. The number of partitions is specified through the numPartitions constructor argument.
The implementation returns roughly key.hashCode() % numPartitions. See Partitioner.scala for more details.
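As a rough sketch of that logic (simplified from what Partitioner.scala does, not the verbatim source):

// Simplified view of HashPartitioner.getPartition: hashCode modulo numPartitions,
// adjusted so the result is never negative even for negative hash codes.
def getPartition(key: Any, numPartitions: Int): Int = key match {
  case null => 0
  case _ =>
    val rawMod = key.hashCode % numPartitions
    rawMod + (if (rawMod < 0) numPartitions else 0)
}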

reduceByKey: How does it work internally?

I am new to Spark and Scala. I was confused about the way reduceByKey function works in Spark. Suppose we have the following code:
val lines = sc.textFile("data.txt")
val pairs = lines.map(s => (s, 1))
val counts = pairs.reduceByKey((a, b) => a + b)
The map function is clear: s is the key and it points to the line from data.txt and 1 is the value.
However, I don't get how reduceByKey works internally. Does "a" point to the key? Alternatively, does "a" point to "s"? Then what does a + b represent? How are they filled?
Let's break it down to discrete methods and types. That usually exposes the intricacies for new devs:
pairs.reduceByKey((a, b) => a + b)
becomes
pairs.reduceByKey((a: Int, b: Int) => a + b)
and renaming the variables makes it a little more explicit
pairs.reduceByKey((accumulatedValue: Int, currentValue: Int) => accumulatedValue + currentValue)
So, we can now see that we are simply taking an accumulated value for the given key and summing it with the next value of that key. Now, let's break it down further so we can understand the key part. So, let's visualize the method more like this:
// Conceptual sketch only -- not valid RDD code, just a visualization of the key tracking:
pairs.reduce((accumulatedValue: List[(String, Int)], currentValue: (String, Int)) => {
  // Turn the accumulated value into a true key -> value mapping
  val accumAsMap = accumulatedValue.toMap
  // Try to get the key's current value if we've already encountered it
  accumAsMap.get(currentValue._1) match {
    // If we have encountered it, then add the new value to the existing value and overwrite the old
    case Some(value: Int) => (accumAsMap + (currentValue._1 -> (value + currentValue._2))).toList
    // If we have NOT encountered it, then simply add it to the list
    case None => currentValue :: accumulatedValue
  }
})
So, you can see that the reduceByKey takes the boilerplate of finding the key and tracking it so that you don't have to worry about managing that part.
Deeper, truer if you want
All that being said, that is a simplified version of what happens, as there are some optimizations performed here. This operation is associative, so the Spark engine will perform these reductions locally first (often termed map-side reduce) and then once again after the shuffle. This saves network traffic; instead of sending all the data and performing the operation, it can reduce the data as small as it can and then send that reduction over the wire.
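To make that two-stage idea concrete, here is a small sketch with plain Scala collections (no Spark API, just to illustrate the local combine followed by the merge of partial results):

// Pretend each inner Seq is one partition of (word, 1) pairs.
val partitions = Seq(
  Seq(("a", 1), ("b", 1), ("a", 1)),
  Seq(("b", 1), ("a", 1))
)

// The same per-key (a, b) => a + b logic, applied over a plain collection.
def combine(pairs: Seq[(String, Int)]): Map[String, Int] =
  pairs.foldLeft(Map.empty[String, Int]) { case (acc, (k, v)) =>
    acc.updated(k, acc.getOrElse(k, 0) + v)
  }

// Stage 1: reduce locally within each "partition" (the map-side reduce).
val partial = partitions.map(combine)            // Seq(Map(a -> 2, b -> 1), Map(a -> 1, b -> 1))

// Stage 2: merge the partial results with the same function after the "shuffle".
val merged = combine(partial.flatMap(_.toSeq))   // Map(a -> 3, b -> 2)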
One requirement for the reduceByKey function is that it must be associative. To build some intuition about how reduceByKey works, let's first see how an associative function helps us in a parallel computation:
We can break the original collection into pieces and, by applying the associative function, accumulate a total. The sequential case is trivial, we are used to it: 1+2+3+4+5+6+7+8+9+10.
Associativity lets us use that same function in sequence and in parallel. reduceByKey uses that property to compute a result out of an RDD, which is a distributed collection consisting of partitions.
Consider the following example:
// collection of the form ("key",1), ("key",2), ..., ("key",20) split among 4 partitions
val rdd = sparkContext.parallelize((1 to 20).map(x => ("key", x)), 4)
val reduced = rdd.reduceByKey(_ + _)
reduced.collect()
> Array[(String, Int)] = Array((key,210))
In Spark, data is distributed into partitions (4 in this example). First, we apply the function locally within each partition, sequentially inside the partition, while all 4 partitions run in parallel. Then, the results of each local computation are aggregated by applying the same function again, finally arriving at a single result.
reduceByKey is a specialization of aggregateByKey. aggregateByKey takes 2 functions: one that is applied within each partition (sequentially) and one that is applied among the results of the partitions (in parallel). reduceByKey uses the same associative function in both cases: to do the sequential computation within each partition and then combine those results into a final result, as illustrated here.
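As a hedged illustration of that relationship, the reduction from the example above could also be written with aggregateByKey, passing addition as both the per-partition and the cross-partition function (the zero value 0 is the extra piece that reduceByKey does not need):

// reduceByKey(_ + _) expressed through aggregateByKey: the sequential
// (within-partition) and combining (across-partition) functions are both addition.
val reduced = rdd.aggregateByKey(0)(
  (acc, value) => acc + value,   // applied within each partition
  (acc1, acc2) => acc1 + acc2    // applied when merging partition results
)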
In your example of
val counts = pairs.reduceByKey((a,b) => a+b)
a and b are both Int accumulators for the _2 of the tuples in pairs. reduceByKey will take two tuples with the same key s and use their _2 values as a and b, producing a new (String, Int) tuple. This operation is repeated until there is only one tuple for each key s.
Unlike a non-Spark (or, really, non-parallel) reduce, where the first element is always the accumulator and the second a value, reduceByKey operates in a distributed fashion: each node reduces its set of tuples into a collection of uniquely-keyed tuples, and the tuples from multiple nodes are then reduced until there is a final uniquely-keyed set of tuples. This means that as the results from the nodes are reduced, a and b represent already-reduced accumulators.
The Spark RDD reduceByKey function merges the values for each key using an associative reduce function.
The reduceByKey function works only on pair RDDs and is a transformation, which means it is lazily evaluated. An associative function is passed as a parameter; it is applied to the source RDD and creates a new RDD as a result.
So in your example, the RDD pairs has multiple paired elements like (s1,1), (s2,1), etc. reduceByKey accepts a function (accumulator, n) => (accumulator + n); starting from the first value seen for each key, it adds up the values for that key and returns the result RDD counts holding the total count paired with each key.
Put simply, if your input RDD data looks like this:
(aa,1)
(bb,1)
(aa,1)
(cc,1)
(bb,1)
and if you apply reduceByKey on the above RDD data, there are a few things to remember:
reduceByKey always takes 2 inputs (x, y) and always works with two rows at a time.
As it is reduceByKey, it will combine two rows with the same key and combine their values.
val rdd2 = rdd.reduceByKey((x,y) => x+y)
rdd2.foreach(println)
output:
(aa,2)
(bb,2)
(cc,1)