Distributing a loop to different machines of a cluster in Spark - scala

Here is a for loop that I'm running in my code:
for (x <- 0 to vertexArray.length - 1) {
  for (y <- 0 to vertexArray.length - 1) {
    breakable {
      if (x.equals(y)) {
        break
      } else {
        val d1 = vertexArray(x)._2._2
        val d2 = vertexArray(y)._2._2
        val ps = new Period(d1, d2)
        if (ps.getMonths() == 0 && ps.getYears() == 0 && Math.abs(ps.toStandardHours().getHours()) <= 5) {
          edgeArray += Edge(vertexArray(x)._1, vertexArray(y)._1, Math.abs(ps.toStandardHours().getHours()))
        }
      }
    }
  }
}
I want to speed up the running time of this code by distributing it across multiple machines in a cluster. I'm using Scala in IntelliJ IDEA with Spark. How would I implement this type of code to work on multiple machines?

As Mariano Kamp has already stated, Spark is probably not a good choice here and there are much better options out there. On top of that, any approach which has to work on relatively large data and requires O(N^2) time is simply unacceptable. So the first thing you should do is focus on choosing a suitable algorithm, not a platform.
Still, it is possible to translate this to Spark. A naive approach which directly reflects your code would be to use a Cartesian product:
def check(v1: T, v2: T): Option[U] = {
  if (v1 == v2) {
    None
  } else {
    // rest of your logic, Some[U] if all tests passed
    // None otherwise
    ???
  }
}
val vertexRDD = sc.parallelize(vertexArray)

vertexRDD.cartesian(vertexRDD)
  .map { case (v1, v2) => check(v1, v2) }
  .filter(_.isDefined)
  .map(_.get)
If vertexArray is small you could use flatMap with a broadcast variable:
val vertexBd = sc.broadcast(vertexArray)

vertexRDD.flatMap(v1 =>
  vertexBd.value.map(v2 => check(v1, v2)).filter(_.isDefined).map(_.get)
)
Another improvement is to perform a proper join. The obvious condition is year and month:
def toPair(v: T): ((Int, Int), T) = ??? // Return ((year, month), vertex)

val vertexPairs = vertexRDD.map(toPair)

vertexPairs.join(vertexPairs)
  .map { case ((_, _), (v1, v2)) => check(v1, v2) } // check can be simplified here
  .filter(_.isDefined)
  .map(_.get)
Of course this can be achieved with a broadcast variable as well. You simply have to group vertexArray by the (year, month) pair and broadcast a Map[(Int, Int), Seq[T]].
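For completeness, a minimal sketch of that broadcast variant, built on the same placeholders (toPair, check and T are the assumed helpers from above, not code from the question):

// group the vertices locally by (year, month), broadcast the map,
// and compare each vertex only against the vertices sharing its key
val byMonth: Map[(Int, Int), Seq[T]] =
  vertexArray.map(toPair).groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2).toSeq }
val byMonthBd = sc.broadcast(byMonth)

vertexRDD.flatMap { v1 =>
  byMonthBd.value.getOrElse(toPair(v1)._1, Seq.empty).flatMap(v2 => check(v1, v2))
}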
From here you can improve further by avoiding the naive checks altogether: partition the data and traverse each partition sorted by timestamp:
def sortPartitionByDatetime(iter: Iterator[U]): Iterator[U] = ???

def yieldMatching(iter: Iterator[U]): Iterator[V] = {
  // flatMap keeping track of values in an open window
  ???
}

vertexPairs
  .partitionBy(new HashPartitioner(n))
  .mapPartitions(sortPartitionByDatetime)
  .mapPartitions(yieldMatching)
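To make the window idea concrete, here is one possible shape for yieldMatching. This is only an illustrative sketch built on the placeholders above; timestampHours is a hypothetical helper that extracts the event time in hours, and a real implementation would emit edges rather than raw pairs:

def yieldMatching(iter: Iterator[U]): Iterator[(U, U)] = {
  val open = scala.collection.mutable.Queue.empty[U] // values still within the 5-hour window
  iter.flatMap { current =>
    // drop everything that has fallen out of the window relative to the current record
    while (open.nonEmpty && timestampHours(current) - timestampHours(open.head) > 5)
      open.dequeue()
    val matched = open.toList.map(prev => (prev, current))
    open.enqueue(current)
    matched
  }
}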
Alternatively, you can use a DataFrame with a window function and a range clause.
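For illustration only, that DataFrame route could look roughly like this (the DataFrame verticesDF and its columns id and ts are assumptions, not from the question, and collect_list over a range window needs Spark 2.0+):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// assumed schema: verticesDF(id: Long, ts: Long), ts in unix seconds
val fiveHours = 5L * 3600L
val w = Window
  .partitionBy(year(col("ts").cast("timestamp")), month(col("ts").cast("timestamp")))
  .orderBy(col("ts"))
  .rangeBetween(-fiveHours, fiveHours)

// each row collects the ids of vertices within +/- 5 hours in the same year and month;
// the list still contains the row's own id, which has to be filtered out afterwards
val withNeighbours = verticesDF.withColumn("neighbourIds", collect_list(col("id")).over(w))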
Note:
All types are simply placeholders. In the future please try to provide type information. Right now all I can tell is that there are some tuples and dates involved.

Welcome to Stack Overflow. Unfortunately this is not the right approach ;(
Spark is not a tool to parallelize tasks, but to parallelize data.
So you need to think about how you can distribute/partition your data, then compute the individual partitions, and then consolidate the results as a last step.
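To illustrate the model with a toy sketch (process is a hypothetical per-record function, not part of the question):

val data = sc.parallelize(vertexArray)    // partition the data across the cluster
val processed = data.map(v => process(v)) // each partition is computed on an executor
val results = processed.collect()         // consolidate the results on the driver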
Also you need to read up on Spark in general. A simple answer here cannot get you started. This is just the wrong format.
Start here: http://spark.apache.org/docs/latest/programming-guide.html

Related

How to functionally handle a logging side effect

I want to log in the event that a record doesn't have an adjoining record. Is there a purely functional way to do this? One that separates the side effect from the data transformation?
Here's an example of what I need to do:
val records: Seq[Record] = Seq(record1, record2, ...)
val accountsMap: Map[Long, Account] = Map(record1.id -> account1, ...)
def withAccount(accountsMap: Map[Long, Account])(r: Record): (Record, Option[Account]) = {
  (r, accountsMap.get(r.id))
}
def handleNoAccounts(tuple: (Record, Option[Account])): (Record, Option[Account]) = {
  val (r, a) = tuple
  if (a.isEmpty) logger.error(s"no account for ${r.id}")
  tuple
}
def toRichAccount(tuple: (Record, Option[Account])): Option[RichAccount] = {
  val (r, a) = tuple
  a.map(acct => RichAccount(r, acct))
}
records
.map(withAccount(accountsMap))
.map(handleNoAccounts) // if no account is found, log
.flatMap(toRichAccount)
So there are multiple issues with this approach that I think make it less than optimal.
The tuple return type is clumsy. I have to destructure the tuple in both of the latter two functions.
The logging function has to handle the logging and then return the tuple with no changes. It feels weird that this is passed to .map even though no transformation is taking place -- maybe there is a better way to get this side effect.
Is there a functional way to clean this up?
I could be wrong (I often am) but I think this does everything that's required.
records
  .flatMap(r =>
    accountsMap.get(r.id).fold {
      logger.error(s"no account for ${r.id}")
      Option.empty[RichAccount]
    } { a => Some(RichAccount(r, a)) })
If you're using Scala 2.13 or newer you could use tapEach, which takes a function A => Unit, applies it as a side effect to every element of the collection, and then passes the collection through unchanged:
// you no longer need to return the tuple in the side-effecting function
def handleNoAccounts(tuple: (Record, Option[Account])): Unit = {
  val (r, a) = tuple
  if (a.isEmpty) logger.error(s"no account for ${r.id}")
}
records
.map(withAccount(accountsMap))
.tapEach(handleNoAccounts) // if no account is found, log
.flatMap(toRichAccount)
In case you're using an older Scala version, you could provide an extension method yourself (updated according to Levi Ramsey's suggestion):
implicit class SeqOps[A](s: Seq[A]) {
  def tapEach(f: A => Unit): Seq[A] = {
    s.foreach(f)
    s
  }
}

Scala Spark not returning value outside loop [duplicate]

I am new to Scala and Spark and would like some help in understanding why the below code isn't producing my desired outcome.
I am comparing two tables
My desired output schema is:
case class DiscrepancyData(fieldKey:String, fieldName:String, val1:String, val2:String, valExpected:String)
When I run the below code step by step manually, I actually end up with my desired outcome, which is a List[DiscrepancyData] completely populated with my desired output. However, I must be missing something in the code below, because it returns an empty list (before this code gets called, other code reads the tables from Hive and does the mapping, grouping, filtering, etc.):
val compareCols = Set(year, nominal, adjusted_for_inflation, average_private_nonsupervisory_wage)
val key = "year"
def compare(table: RDD[(String, Iterable[Row])]): List[DiscrepancyData] = {
  var discs: ListBuffer[DiscrepancyData] = ListBuffer()

  def compareFields(fieldOne: String, fieldTwo: String, colName: String, row1: Row, row2: Row): DiscrepancyData = {
    if (fieldOne != fieldTwo) {
      DiscrepancyData(
        row1.getAs(key).toString,     // fieldKey
        colName,                      // fieldName
        row1.getAs(colName).toString, // table1Value
        row2.getAs(colName).toString, // table2Value
        row2.getAs(colName).toString) // expectedValue
    }
    else null
  }

  def comparison() {
    for (row <- table) {
      var elem1 = row._2.head      // gets the first element in the iterable
      var elem2 = row._2.tail.head // gets the second element in the iterable

      for (col <- compareCols) {
        var value1 = elem1.getAs(col).toString
        var value2 = elem2.getAs(col).toString
        var disc = compareFields(value1, value2, col, elem1, elem2)
        if (disc != null) discs += disc
      }
    }
  }

  comparison()
  discs.toList
}
I'm calling the above function as such:
var outcome = compare(groupedFiltered)
Here is the data in groupedFiltered:
(1991,CompactBuffer([1991,7.14,5.72,39%], [1991,4.14,5.72,39%]))
(1997,CompactBuffer([1997,4.88,5.86,39%], [1997,3.88,5.86,39%]))
(1999,CompactBuffer([1999,5.15,5.96,39%], [1999,5.15,5.97,38%]))
(1947,CompactBuffer([1947,0.9,2.94,35%], [1947,0.4,2.94,35%]))
(1980,CompactBuffer([1980,3.1,6.88,45%], [1980,3.1,6.88,48%]))
(1981,CompactBuffer([1981,3.15,6.8,45%], [1981,3.35,6.8,45%]))
The table schema for groupedFiltered:
(year String,
nominal Double,
adjusted_for_inflation Double,
average_private_nonsupervisory_wage String)
Spark is a distributed computing engine. In addition to the "what is the code doing" of classic single-node computing, with Spark we also need to consider "where is the code running".
Let's inspect a simplified version of the expression above:
val records: RDD[List[String]] = ??? // whatever data
val list = mutable.ListBuffer[String]()

for {
  record <- records
  entry <- record
} {
  list += entry
}
The Scala for-comprehension makes this expression look like a natural local computation, but in reality the RDD operations are serialized and "shipped" to executors, where the inner operation will be executed locally. We can rewrite the above like this:
records.foreach { record =>  // RDD.foreach => serializes the closure and executes it remotely
  record.foreach { entry =>  // record.foreach => local operation on the record collection
    list += entry            // this mutable list object is updated in each executor but never
                             // sent back to the driver; all updates are lost
  }
}
Mutable objects are in general a no-go in distributed computing. Imagine that one executor adds a record and another one removes it, what's the correct result? Or that each executor comes to a different value, which is the right one?
To implement the operation above, we need to transform the data into our desired result.
I'd start by applying another best practice: do not use null as a return value. I also moved the row operations into the function. Let's rewrite the comparison operation with this in mind:
def compareFields(colName: String, row1: Row, row2: Row): Option[DiscrepancyData] = {
  val key = "year"
  val v1 = row1.getAs(colName).toString
  val v2 = row2.getAs(colName).toString
  if (v1 != v2) {
    Some(DiscrepancyData(
      row1.getAs(key).toString, // fieldKey
      colName,                  // fieldName
      v1,                       // table1Value
      v2,                       // table2Value
      v2))                      // expectedValue
  } else None
}
Now, we can rewrite the computation of discrepancies as a transformation of the initial table data:
val discrepancies = table.flatMap { case (str, rows) =>
  val it = rows.iterator
  val (row1, row2) = (it.next(), it.next()) // the first two rows of the group
  compareCols.flatMap(col => compareFields(col, row1, row2))
}
We can also use the for-comprehension notation, now that we understand where things are running:
val discrepancies = for {
  (str, rows) <- table
  it = rows.iterator
  row1 = it.next()
  row2 = it.next()
  col <- compareCols
  dis <- compareFields(col, row1, row2)
} yield dis
Note that discrepancies is of type RDD[DiscrepancyData]. If we want to get the actual values to the driver we need to:
val materializedDiscrepancies = discrepancies.collect()
Iterating through an RDD and updating a mutable structure defined outside the loop is a Spark anti-pattern.
Imagine this RDD being spread over 200 machines. How can these machines be updating the same Buffer? They cannot. Each JVM will be seeing its own discs: ListBuffer[DiscrepancyData]. At the end, your result will not be what you expect.
To conclude, this is perfectly valid (though not idiomatic) Scala code, but not valid Spark code. If you replace the RDD with an Array it will work as expected.
Try to have a more functional implementation along these lines:
val finalRDD: RDD[DiscrepancyData] = table.map(???).filter(???)

How to use accumulator to count the records that have no matching items in leftOuterJoin?

Spark accumulators are a great way to get useful information about an operation over an RDD.
My problem is the following: I want to perform a join between two datasets called, e.g., events and items (where the events are unique and involve items, and both are keyed by item_id, which is the primary key for items).
What works is this:
val joinedRDD = events.leftOuterJoin(items)
One possible way to know how many events did not have matching items is to write:
val numMissingItems = joinedRDD.map(x => if (x._2._2.isDefined) 0 else 1).sum
My question is: is there a way to obtain this count with an accumulator? I don't want to go through the RDD just to do the count.
Indeed, you could use cogroup and then implement the logic that leftOuterJoin performs yourself, incrementing the accumulator in the no-match case. However, it's important to note that since this happens in a transformation, it is possible (for example, if a task fails or is recomputed) that your accumulator may over-count the number of records, although generally not by a lot. It's up to you whether that is acceptable.
Answering based on @Holden's answer, at the request of @Francis Toth:
This is based on Spark's leftOuterJoin implementation, where the only addition is the missingRightRecordsAcc += 1 part.
Function definition:
object JoinerWithAccumulation {
  def leftOuterJoinWithAccumulator[K: ClassTag, V, W](left: PairRDDFunctions[K, V],
                                                      right: RDD[(K, W)],
                                                      missingRightRecordsAcc: Accumulator[Int])
      : RDD[(K, (V, Option[W]))] = {
    left.cogroup(right).flatMapValues { pair =>
      if (pair._2.isEmpty) {
        pair._1.iterator.map(v => { missingRightRecordsAcc += 1; (v, None) })
      } else {
        for (v <- pair._1.iterator; w <- pair._2.iterator) yield (v, Some(w))
      }
    }
  }
}
Usage:
val events = sc.textFile("...").parse...keyBy(_.getItemId)
val items = sc.textFile("...").parse...keyBy(_.getId)
val acc = sc.accumulator(0)
val joined = JoinerWithAccumulation.leftOuterJoinWithAccumulator(events, items, acc)
println(acc.value) // 0, since there were no actions performed on the rdd 'joined'
println(joined.count) // = events.count ; this triggers an action
println(acc.value) // = number of records in joined without a matching record from 'items'
(The hardest part was to get the function definition right, with the ClassTag etc..)

Summary Statistics for string types in spark

Is there something like the summary function in Spark, like the one in R?
The summary calculation which comes with Spark (MultivariateStatisticalSummary) operates only on numeric types.
I am interested in getting results for string types as well, like the four most frequently occurring strings (a groupBy kind of operation), the number of unique values, etc.
Is there any preexisting code for this?
If not, please suggest the best way to deal with string types.
I don't think there is such a thing for String in MLlib. But it would probably be a valuable contribution, if you are going to implement it.
Calculating just one of these metrics is easy. E.g. for top 4 by frequency:
def top4(rdd: org.apache.spark.rdd.RDD[String]) =
  rdd
    .map(s => (s, 1))
    .reduceByKey(_ + _)
    .map { case (s, count) => (count, s) }
    .top(4)
    .map { case (count, s) => s }
Or number of uniques:
def numUnique(rdd: org.apache.spark.rdd.RDD[String]) =
  rdd.distinct.count
But doing this for all metrics in a single pass takes more work.
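One way to avoid several full passes over the raw data is to materialize the per-string counts once and derive the metrics from that. A sketch of the idea (not code from the answer above):

def stringSummary(rdd: org.apache.spark.rdd.RDD[String]): (Seq[String], Long) = {
  // one shuffle over the raw data; both metrics are then derived from the small counts RDD
  val counts = rdd.map(s => (s, 1L)).reduceByKey(_ + _).cache()
  val top4 = counts.map { case (s, c) => (c, s) }.top(4).map(_._2).toSeq
  val numUnique = counts.count()
  counts.unpersist()
  (top4, numUnique)
}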
These examples assume that, if you have multiple "columns" of data, you have split each column into a separate RDD. This is a good way to organize the data, and it's necessary for operations that perform a shuffle.
What I mean by splitting up the columns:
def split(together: RDD[(Long, Seq[String])],
          columns: Int): Seq[RDD[(Long, String)]] = {
  together.cache // We will do N passes over this RDD.
  (0 until columns).map {
    i => together.mapValues(s => s(i))
  }
}
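A possible usage, assuming three columns and the helpers defined above (the column count is just an example):

val columns = split(together, 3)
val summaries = columns.map { col =>
  val values = col.values // drop the row id, keep the string column
  (top4(values), numUnique(values))
}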

Fixed point in Scala

Is there a shortcut for the following code snippet?
while (true) {
  val newClusters = this.iterate(instances, clusters)
  if (newClusters == clusters) {
    return clusters
  }
  clusters = newClusters
}
I would like to calculate the fixed point, i.e. execute a function such that its result is stable. Are you aware of any higher-order functions that would suit my purposes?
An adaptation of a fixpoint calculation example from Scala by Example by Martin Odersky (chapter 'First-Class Functions', section 5.3):
val instances = ... // from the question statement
def isCloseEnough(x: Clusters, y: Clusters) = some_distance_x_y < threshold

def fixedPoint(f: Clusters => Clusters)(initApprox: Clusters): Clusters = {
  def iterate(approx: Clusters): Clusters = {
    val newClusters = f(approx)
    if (isCloseEnough(approx, newClusters)) newClusters
    else iterate(newClusters)
  }
  iterate(initApprox)
}
where the function f: Clusters => Clusters delivers new candidate clusters, and initApprox corresponds to the initial guess at the fixpoint. The function isCloseEnough helps ensure termination for an a priori threshold.
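In the setting of the question, the call would then look roughly like this (assuming this.iterate(instances, _) has type Clusters => Clusters):

val stableClusters = fixedPoint(this.iterate(instances, _))(clusters)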
An alternative approach would be to combine the famous one-line Fibonacci numbers computation (https://stackoverflow.com/a/9864521/788207) and takeWhile:
val reductions = Stream.iterate(clusters)(this.iterate(instances, _))
(reductions, reductions.tail).zipped.takeWhile { case (p, n) => p != n }.last._2
Another approach, which would not require constructing a stream object in memory, is to use iterators:
Iterator.iterate(clusters)(this.iterate(instances, _))
.sliding(2)
.dropWhile { case prev +: next +: _ => prev != next }
.next()
.head
That said, the imperative solution will likely be more efficient, because it is a simple loop without stream construction or closure invocations.
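If you want the efficiency of the loop without the mutable variable, a small tail-recursive helper is another option (a sketch, not from the answers above; @tailrec makes the compiler turn it into a plain loop):

import scala.annotation.tailrec

@tailrec
def fix[A](x: A)(f: A => A): A = {
  val next = f(x)
  if (next == x) x else fix(next)(f)
}

val stableClusters = fix(clusters)(this.iterate(instances, _))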