How to use Broadcast variable correctly in Spark GraphX? - scala

I use GraphX to process a graph. I loaded it with GraphLoader and built a variable that contains the neighbors of each node using the code below:
val all_neighbors: VertexRDD[Array[VertexId]] = graph.collectNeighborIds(EdgeDirection.Either).cache()
Because I frequently need the nodes' neighbors, I decided to broadcast them. When I use this code I get an error:
val broadcastVar = sc.broadcast(all_neighbors)
but when I use this code there is no error:
val broadcastVar = sc.broadcast(all_neighbors.collect())
Is it right to use collect() for broadcasting?
One more question: I want to turn this broadcast variable into a key/value map. Is this code right?
val nvalues = broadcastVar.value.toMap
Does the above code (I mean nvalues) get broadcast to all workers in the cluster? Should I broadcast nvalues too? I am a little confused about broadcasting. Please help me with this problem.

There are two questions:
Is it right to use collect() for broadcasting?
all_neighbors is of type VertexRDD, which is essentially an RDD. There is nothing in an RDD itself that you could broadcast: an RDD is a data structure that describes a distributed computation over some dataset. It specifies what to compute and how, but it is an abstract entity; you can only broadcast concrete values, and the values an RDD represents are only available while executors process their partitions.
Quoting from the Broadcast Variables section of the Spark documentation:
Broadcast variables allow the programmer to keep a read-only variable
cached on each machine rather than shipping a copy of it with tasks.
They can be used, for example, to give every node a copy of a large
input dataset in an efficient manner.
This means that explicitly creating broadcast variables is only useful
when tasks across multiple stages need the same data or when caching
the data in deserialized form is important.
That's why we need to collect the dataset the RDD holds: collect converts the RDD into a locally available collection, which can then be broadcast.
Note: when you perform the collect operation, the data is accumulated on the driver node and then broadcast, so if the driver does not have enough memory it will throw errors.
Does the above code (I mean nvalues) get broadcast to all workers in the cluster? Should I broadcast nvalues too?
It totally depends on your use case. If you only need broadcastVar, broadcast just that; if you only need nvalues, broadcast just nvalues. You can also broadcast both, but then you need to be mindful of the memory this consumes on each executor.
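To make the pattern concrete, here is a minimal sketch of the approach discussed above, reusing graph and sc from the question; the neighbor-count computation at the end is only an illustrative use of the broadcast map:
import org.apache.spark.graphx._

// Collect the (vertex, neighbors) pairs to the driver and build a local map.
// This only works if the collected data fits into driver memory.
val all_neighbors: VertexRDD[Array[VertexId]] =
  graph.collectNeighborIds(EdgeDirection.Either).cache()
val nvalues: Map[VertexId, Array[VertexId]] = all_neighbors.collect().toMap

// Broadcast the local map once; every executor caches a read-only copy.
val broadcastVar = sc.broadcast(nvalues)

// Access the broadcast copy through .value inside distributed operations.
val neighborCounts = graph.vertices.map { case (id, _) =>
  (id, broadcastVar.value.getOrElse(id, Array.empty[VertexId]).length)
}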
Let me know if it helps!!

Related

Can we make the different transformation functions for Spark Streaming run on different servers?

val words = lines.flatMap(_.split(" "))
val pairs = words.map(word => (word, 1))
For the above example, we know there are two transformations. Both of them must run in the same process/server; however, I want the second transformation to run on a different server from the first one to achieve scalability. Is that possible?
To clear things up: a Spark transformation is not an actual execution. Transformations in Spark are lazy, which means nothing is executed until you call an action (e.g. save, collect). An action triggers a job in Spark.
So based on the above, you can control jobs, but you cannot control where individual transformations run. A Spark job is distributed over multiple executors by splitting the processed data (the RDD) among them. Each executor applies the job (the chain of transformations) to its own split, and the results are then collected. This significantly reduces network usage.
If you could do what you are asking about, the intermediate results (which you don't actually care about) would have to be transferred over the network, which in turn would add a large network overhead.
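As a minimal sketch of that point, reusing lines from the question: both transformations below are pipelined inside the same tasks, and nothing runs until an action (here print(), chosen only as an example) is called:
val words = lines.flatMap(_.split(" "))     // transformation: lazy
val pairs = words.map(word => (word, 1))    // transformation: lazy, pipelined with the one above
val counts = pairs.reduceByKey(_ + _)       // introduces a shuffle boundary between stages
counts.print()                              // action: triggers the actual streaming job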

How to `reduce` only within partitions in Spark Streaming, perhaps using combineByKey?

I have data already sorted by key into my Spark Streaming partitions by virtue of Kafka, i.e. keys found on one node are not found on any other nodes.
I would like to use redis and its incrby (increment by) command as a state engine, and to reduce the number of requests sent to redis I would like to partially reduce my data by doing a word count on each worker node by itself. (The key is tag+timestamp, so a word count gives me the functionality I need.)
I would like to avoid shuffling and let redis take care of adding data across worker nodes.
Even when I have checked that data is cleanly split among worker nodes, .reduce(_ + _) (Scala syntax) takes a long time (several seconds vs. sub-second for map tasks), as the HashPartitioner seems to shuffle my data to a random node to add it there.
How can I write a simple word count reduce on each partition, without triggering the shuffling step, in Scala with Spark Streaming?
Note that DStream objects lack some RDD methods, which are available only through the transform method.
It seems I might be able to use combineByKey. I would like to skip the mergeCombiners() step and instead leave accumulated tuples where they are.
The book "Learning Spark" enigmatically says:
We can disable map-side aggregation in combineByKey() if we know that our data won’t benefit from it. For example, groupByKey() disables map-side aggregation as the aggregation function (appending to a list) does not save any space. If we want to disable map-side combines, we need to specify the partitioner; for now you can just use the partitioner on the source RDD by passing rdd.partitioner.
https://www.safaribooksonline.com/library/view/learning-spark/9781449359034/ch04.html
The book then supplies no syntax for how to actually do this, nor have I had any luck with Google so far.
What is worse, as far as I know, the partitioner is not set for DStream RDDs in Spark Streaming, so I don't know how to supply a partitioner to combineByKey that doesn't end up shuffling data.
Also, what does "map-side" actually mean and what consequences does mapSideCombine = false have, exactly?
The scala implementation for combineByKey can be found at
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
Look for combineByKeyWithClassTag.
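Based on that signature, a hedged sketch of a call with map-side combining disabled would look roughly like this; pairs stands for a hypothetical RDD[(String, Int)] of (word, 1) tuples, and the explicit partitioner is needed because mapSideCombine can only be passed to the overload that also takes a Partitioner:
import org.apache.spark.HashPartitioner

val combined = pairs.combineByKey(
  (v: Int) => v,                                  // createCombiner
  (acc: Int, v: Int) => acc + v,                  // mergeValue (within a partition)
  (a: Int, b: Int) => a + b,                      // mergeCombiners (across partitions)
  new HashPartitioner(pairs.partitions.length),
  mapSideCombine = false
)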
If the solution involves a custom partitioner, please include also a code sample for how to apply that partitioner to the incoming DStream.
This can be done using mapPartitions, which takes a function that maps an iterator over one partition of the input RDD to an iterator over the corresponding partition of the output RDD.
To implement a word count, I map to _._2 to remove the Kafka key and then perform a fast iterator word count using foldLeft, initializing a mutable.hashMap, which then gets converted to an Iterator to form the output RDD.
import scala.collection.mutable

val myDstream = messages
  .mapPartitions( it =>
    it.map(_._2)                                  // drop the Kafka key, keep the message value
      .foldLeft(new mutable.HashMap[String, Int]())(
        // increment this key's count, staying entirely within the partition
        (count, key) => count += (key -> (count.getOrElse(key, 0) + 1))
      )
      .toIterator                                 // emit one (word, count) pair per key and partition
  )
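For completeness, a hedged sketch of how those per-partition counts might then be pushed to redis; incrementInRedis is a hypothetical helper standing in for whatever client call issues the incrby:
myDstream.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // one batch of (tag+timestamp, count) pairs per partition, no shuffle involved
    partition.foreach { case (key, count) =>
      incrementInRedis(key, count)   // hypothetical helper wrapping the redis incrby call
    }
  }
}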

Send object to specific partition with Spark

Suppose I have an RDD with nPartitions partitions and I'm using the mapPartitionsWithIndex method, while also keeping on the driver an array x of dimension nPartitions.
Now suppose I would like to ship x(i) to partition i so that it may work on it. A naïve way to do so would be to just reference x(i) in the closure, as in the following toy example:
val sc = new SparkContext()
val rdd = sc.parallelize(1 to 1000).repartition(10)
val nPartitions = rdd.partitions.length
val myArray = Array.fill(nPartitions)(math.random) //array to be shipped to executors
val result = rdd.mapPartitionsWithIndex((index, data) =>
  Seq(data.map(_ * myArray(index)).sum).iterator
)
(Ignore the logic within mapPartitionsWithIndex; only the myArray(index) part is what interests us.)
However, if my understanding is correct, this will ship the entire array myArray to all executors, since the whole array is captured in the closure. If the array contains large objects that take up too much memory or serialization time, this becomes a problem.
Is there a way to avoid this and ship only the components of the array corresponding to the partitions held by a given executor?
This is a case of premature optimization. Sending an array as big as the number of partitions is not going to save you much compared to sending just the value for one partition, even if that were possible.
However, instead of sending the array as a closure, you should send the array as a
broadcast variable: http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables
The main difference is that the closure is serialized and sent out for each task, while, from the doc page "Broadcast variables allow the programmer to keep a read-only variable cached on each machine rather than shipping a copy of it with tasks".
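A minimal sketch of that suggestion, reusing the names from the question: broadcast the whole array once, and let each task dereference only the element for its own partition through the broadcast handle:
val myArrayBroadcast = sc.broadcast(myArray)

val result = rdd.mapPartitionsWithIndex { (index, data) =>
  // the array is cached once per executor instead of being serialized with every task;
  // each task only reads the single element it needs
  val factor = myArrayBroadcast.value(index)
  Seq(data.map(_ * factor).sum).iterator
}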
Not exactly sending large objects to specific partitions, but an inverted approach would be to use mapPartitions in conjunction with partitioning by columns. Used this way, mapPartitions pulls in the large object once per partition instead of once per row.

Spark RDDs - how do they work

I have a small Scala program that runs fine on a single node. However, I am scaling it out so it runs on multiple nodes. This is my first such attempt. I am just trying to understand how RDDs work in Spark, so this question is based on theory and may not be 100% correct.
Let's say I create an RDD:
val rdd = sc.textFile(file)
Now once I've done that, does that mean that the file at file is now partitioned across the nodes (assuming all nodes have access to the file path)?
Secondly, I want to count the number of objects in the RDD (simple enough), however, I need to use that number in a calculation which needs to be applied to objects in the RDD - a pseudocode example:
rdd.map(x => x / rdd.size)
Let's say there are 100 objects in rdd and 10 nodes, so a count of 10 objects per node (assuming this is how the RDD concept works). When I call the method, will each node perform the calculation with rdd.size as 10 or as 100? Overall the RDD has size 100, but locally on each node it is only 10. Am I required to make a broadcast variable prior to doing the calculation? This question is linked to the question below.
Finally, if I make a transformation to the RDD, e.g. rdd.map(_.split("-")), and then I wanted the new size of the RDD, do I need to perform an action on the RDD, such as count(), so all the information is sent back to the driver node?
val rdd = sc.textFile(file)
Does that mean that the file is now partitioned across the nodes?
The file remains wherever it was. The elements of the resulting RDD[String] are the lines of the file. The RDD is partitioned to match the natural partitioning of the underlying file system. The number of partitions does not depend on the number of nodes you have.
It is important to understand that when this line is executed it does not read the file(s). The RDD is a lazy object and will only do something when it must. This is great because it avoids unnecessary memory usage.
For example, if you write val errors = rdd.filter(line => line.startsWith("error")), still nothing happens. If you then write val errorCount = errors.count now your sequence of operations will need to be executed because the result of count is an integer. What each worker core (executor thread) will do in parallel then, is read a file (or piece of file), iterate through its lines, and count the lines starting with "error". Buffering and GC aside, only a single line per core will be in memory at a time. This makes it possible to work with very large data without using a lot of memory.
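Spelled out as code (same names as above), the laziness looks like this; only the last line triggers any reading or computation:
val rdd = sc.textFile(file)
val errors = rdd.filter(line => line.startsWith("error"))   // nothing has run yet
val errorCount = errors.count                               // the action: reads the file and counts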
I want to count the number of objects in the RDD, however, I need to use that number in a calculation which needs to be applied to objects in the RDD - a pseudocode example:
rdd.map(x => x / rdd.size)
There is no rdd.size method. There is rdd.count, which counts the number of elements in the RDD. rdd.map(x => x / rdd.count) will not work: the code would try to send the rdd variable to all workers and would fail with a NotSerializableException. What you can do instead is:
val count = rdd.count
val normalized = rdd.map(x => x / count)
This works, because count is a plain number (a Long) and can be serialized.
If I make a transformation to the RDD, e.g. rdd.map(_.split("-")), and then I wanted the new size of the RDD, do I need to perform an action on the RDD, such as count(), so all the information is sent back to the driver node?
map does not change the number of elements. I don't know what you mean by "size". But yes, you need to perform an action, such as count, to get anything out of the RDD; no work at all is performed until you run an action. (When you perform count, only the per-partition counts are sent back to the driver, of course, not "all the information".)
Usually, the file (or parts of the file, if it is too big) is replicated to N nodes in the cluster (by default N=3 on HDFS). There is no intention to split every file between all available nodes.
However, for you (i.e. the client), working with a file through Spark should be transparent: you should not see any difference in rdd.size (the count), no matter how many nodes the file is split and/or replicated across. There are methods (at least in Hadoop) to find out on which nodes the (parts of the) file are currently located, but in simple cases you most probably won't need this functionality.
UPDATE: an article describing RDD internals: https://cs.stanford.edu/~matei/papers/2012/nsdi_spark.pdf

How to remove / dispose a broadcast variable from heap in Spark?

To broadcast a variable such that it occurs exactly once in memory per node of a cluster, one can do val myVarBroadcasted = sc.broadcast(myVar) and then retrieve it in RDD transformations like so:
myRdd.map { blar =>
    val myVarRetrieved = myVarBroadcasted.value
    // some code that uses it
  }
  .someAction
But suppose now I wish to perform some more actions with a new broadcast variable: what if I haven't got enough heap space because of the old broadcast variables? I want a function like
myVarBroadcasted.remove()
But I can't seem to find a way of doing this.
Also, a very related question: where do the broadcast variables go? Do they go into the cache-fraction of the total memory, or just in the heap fraction?
If you want to remove the broadcast variable from both the executors and the driver you have to use destroy; using unpersist only removes it from the executors:
myVarBroadcasted.destroy()
This method is blocking.
You are looking for unpersist, available from Spark 1.0.0:
myVarBroadcasted.unpersist(blocking = true)
Broadcast variables are stored as ArrayBuffers of deserialized Java objects or as serialized ByteBuffers. (Storage-wise they are treated similarly to RDDs; confirmation needed.)
The unpersist method removes them from both memory and disk on each executor node.
But the variable stays on the driver node, so it can be re-broadcast.
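Putting the two answers together, a minimal sketch of the lifecycle, with myVar and myRdd standing in for whatever you actually broadcast and process:
val myVarBroadcasted = sc.broadcast(myVar)

myRdd.map { x =>
  val v = myVarBroadcasted.value   // read-only copy, cached once per executor
  // ... use v together with x ...
  x
}.count()

// Free memory and disk on the executors only; the driver keeps its copy,
// so the value is simply re-shipped if the broadcast is used again.
myVarBroadcasted.unpersist(blocking = true)

// Release it everywhere, driver included; the broadcast must not be used afterwards.
myVarBroadcasted.destroy()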