Apache Spark lookup function - scala

Reading the definition of the lookup method from https://spark.apache.org/docs/latest/api/scala/#org.apache.spark.rdd.PairRDDFunctions:
def lookup(key: K): Seq[V]
Return the list of values in the RDD for key key. This operation is done efficiently if the RDD has a known partitioner by only searching the partition that the key maps to.
How can I ensure that the RDD has a known partitioner? I understand that an RDD is partitioned across nodes in a cluster, but what is meant by the statement "only searching the partition that the key maps to"?

A number of operations (especially on key-value pairs) automatically set up a partitioner when they are executed, as this can increase efficiency by cutting down on network traffic. For example (from PairRDDFunctions):
def aggregateByKey[U: ClassTag](zeroValue: U, numPartitions: Int)(seqOp: (U, V) => U,
    combOp: (U, U) => U): RDD[(K, U)] = self.withScope {
  aggregateByKey(zeroValue, new HashPartitioner(numPartitions))(seqOp, combOp)
}
Note the creation of a HashPartitioner. You can check the partitioner of your RDD if you want to see whether it has one, and you can also set one via partitionBy.
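As a quick illustration (a minimal sketch; the sample data and partition count are assumptions, not from the original answer), checking and assigning a partitioner might look like this:
import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
pairs.partitioner                            // None: parallelize sets no partitioner

val byKey = pairs.reduceByKey(_ + _)         // shuffle operations install a HashPartitioner
byKey.partitioner                            // Some(HashPartitioner(...))

val explicit = pairs.partitionBy(new HashPartitioner(8))
explicit.partitioner                         // Some(HashPartitioner(8))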

A Partitioner maps keys to partition indexes. If a key-value RDD is partitioned by a Partitioner, it means that each key is placed in the partition that is assigned to it by the Partitioner.
This is great for lookup! You can use the Partitioner to tell you the partition that this key belongs to, and then you only need to look at that partition of the RDD. (This can mean that the rest of the RDD does not even need to be computed!)
How can I ensure that the RDD has a known partitioner?
You can check that rdd.partitioner is not None. (Operations that need to locate keys, like groupByKey and join, partition the RDD for you.) You can use rdd.partitionBy to assign your own Partitioner and re-shuffle the RDD by it.
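As a sketch of the effect on lookup (the data and partition count here are illustrative assumptions, not from the original answer):
import org.apache.spark.HashPartitioner

// Without a partitioner, lookup has to scan every partition for the key.
val raw = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
raw.lookup("a")                              // Seq(1, 3), scans all partitions

// With a known partitioner, lookup computes the target partition from the
// key and runs a job against that single partition only.
val partitioned = raw.partitionBy(new HashPartitioner(8)).persist()
partitioned.lookup("a")                      // Seq(1, 3), touches one partition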

Each RDD can, optionally, define a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned).
Indeed, in some PairRDDFunctions you can specify the partitioner, frequently as the last parameter.
If your RDD has no partitioner, you can use the partitionBy method to set one.
The lookup method goes directly to the right partition if your RDD already has a partitioner, and scans all the partitions in parallel if it doesn't.

Related

RDD persist mechanism (what happens when I persist an RDD and then use take(10) instead of count())

What happens when I persist an RDD and then use take(10) instead of count()?
I have read some comments saying that if I use take() instead of count(), it might only persist some of the partitions, not all of them.
But if my dataset is big enough, using count is very time-consuming.
Is there any other action operator I can use to trigger persist so that all partitions are persisted?
foreachPartition is an action operator and it needs data from all partitions; can I use it after persist?
need your help ~
Ex:
val rdd1 = sc.textFile("src/main/resources/").persist()
rdd1.foreachPartition(partition => partition.take(1))
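One common pattern (an assumption on my part, not an answer from this thread) is to fully drain each partition's iterator: since a persisted RDD caches a partition whenever that partition is computed, touching every element forces every partition into the cache.
val rdd1 = sc.textFile("src/main/resources/").persist()
rdd1.foreachPartition(partition => partition.foreach(_ => ()))  // consume every element of every partition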

See information of partitions of a Spark Dataframe

One can have an array of partitions of a Spark DataFrame as follows:
> df.rdd.partitions
Is there a way to get more information about partitions? In particular, I would like to see the partition key and the partition boundaries (first and last element within a partition).
This is just for better understanding of how the data is organized.
This is what I tried:
> df.rdd.partitions.head
But this object only has the methods equals, hashCode and index.
In case the data is not too large, one can write it to disk as follows:
df.write.option("header", "true").csv("/tmp/foobar")
The given directory must not already exist.
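To see the partition boundaries directly, here is a hedged sketch (not from the original answer; it materializes each partition, so use it on small data only):
// Collect (partitionIndex, firstRow, lastRow, rowCount) for every partition.
val boundaries = df.rdd
  .mapPartitionsWithIndex { (idx, it) =>
    if (it.isEmpty) Iterator.empty
    else {
      val rows = it.toVector       // materializes the partition: small data only
      Iterator((idx, rows.head, rows.last, rows.size))
    }
  }
  .collect()

boundaries.foreach { case (idx, first, last, n) =>
  println(s"partition $idx: $n rows, first=$first, last=$last")
}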

Does Spark handle data shuffling?

I have an input A which I convert into an rdd X spread across the cluster.
I perform certain operations on it.
Then I do .repartition(1) on the output rdd.
Will my output rdd be in the same order as input A?
Does Spark handle this automatically? If yes, then how?
The documentation doesn't guarantee that order will be kept, so you can assume it won't be. If you look at the implementation, you'll see it certainly won't be (unless your original RDD already has 1 partition for some reason): repartition calls coalesce(shuffle = true), which
Distributes elements evenly across output partitions, starting from a random partition.
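If the input order matters, one workaround (an assumed approach, not from the original answer) is to tag each element with its position before the shuffle and sort by it afterwards:
// x is the RDD from the question; pair each element with its original position.
val indexed = x.zipWithIndex()          // RDD[(T, Long)]
val restored = indexed
  .repartition(1)
  .sortBy(_._2)                         // re-establish the input order
  .map(_._1)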

Spark Aggregatebykey partitioner order

If I apply a hash partitioner to Spark's aggregatebykey function, i.e. myRDD.aggregateByKey(0, new HashPartitioner(20))(combOp, mergeOp)
Does myRDD get repartitioned first before its key/value pairs are aggregated using combOp and mergeOp? Or does myRDD go through combOp and mergeOp first, with the resulting RDD repartitioned using the HashPartitioner?
aggregateByKey applies map-side aggregation before the eventual shuffle. Since every partition is processed sequentially, the only operations applied in this phase are initialization (creating zeroValue) and combOp. The goal of mergeOp is to combine aggregation buffers, so it is not used before the shuffle.
If the input RDD is a ShuffledRDD with the same partitioner as the one requested for aggregateByKey, then the data is not shuffled at all and is aggregated locally using mapPartitions.
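A concrete sketch (the data and functions are illustrative assumptions; in the Spark API the two functions are named seqOp and combOp, corresponding to the question's combOp and mergeOp respectively):
import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

val sums = pairs.aggregateByKey(0, new HashPartitioner(20))(
  (acc, v) => acc + v,    // seqOp: runs map-side, within each input partition
  (a, b) => a + b         // combOp: merges partial buffers after the shuffle
)

sums.partitioner          // Some(HashPartitioner(20)), set by the aggregation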

When create two different Spark Pair RDD with same key set, will Spark distribute partition with same key to the same machine?

I want to do a join operation between two very big key-value pair RDDs. The keys of these two RDDs come from the same set. To reduce data shuffling, I wish I could add a pre-distribution phase so that partitions with the same key will be placed on the same machine. Hopefully this could reduce some shuffle time.
I want to know: is Spark smart enough to do that for me, or do I have to implement this logic myself?
I know that when I join two RDDs and one is preprocessed with partitionBy, Spark is smart enough to use this information and only shuffle the other RDD. But I don't know what will happen if I use partitionBy on both RDDs and then do the join.
If you use the same partitioner for both RDDs you achieve co-partitioning of your data sets. That does not necessarily mean that your RDDs are co-located - that is, that the partitioned data is located on the same node.
Nevertheless, the performance should be better than if the two RDDs had different partitioners.
I have seen "Speeding Up Joins by Assigning a Known Partitioner", which is helpful for understanding the effect of using the same partitioner for both RDDs:
Speeding Up Joins by Assigning a Known Partitioner
If you have to do an operation before the join that requires a shuffle, such as aggregateByKey or reduceByKey, you can prevent the shuffle by adding a hash partitioner with the same number of partitions as an explicit argument to the first operation and persisting the RDD before the join.
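Following that advice, a minimal sketch (the data, partition count, and per-key operation are assumptions for illustration):
import org.apache.spark.HashPartitioner

val partitioner = new HashPartitioner(100)

// Pre-aggregate / partition both sides with the same partitioner and
// persist, so the join can reuse the layout instead of re-shuffling.
val left = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
  .reduceByKey(partitioner, _ + _)
  .persist()

val right = sc.parallelize(Seq(("a", "x"), ("b", "y")))
  .partitionBy(partitioner)
  .persist()

val joined = left.join(right)        // RDD[(String, (Int, String))], no extra shuffle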