How to write nested for loops in an efficient way in Scala?

I am working on a scenario where I need nested for loops. I am able to get the desired output, but I suspect there is a better way to achieve it.
Given the sample DataFrame below, I want the output in the following format:
List(/id=1/state=CA/, /id=2/state=MA/, /id=3/state=CT/)
The snippet below does the job, but any suggestions to improve it are welcome.
Example:
val stateDF = Seq(
  (1, "CA"),
  (2, "MA"),
  (3, "CT")
).toDF("id", "state")

var cond = ""
val columnsLst = List("id", "state")
var pathList = List.empty[String]
for (row <- stateDF.collect) {
  cond = "/"
  val dataRow = row.mkString(",").split(",")
  for (colPosition <- columnsLst.indices) {
    cond = cond + columnsLst(colPosition) + "=" + dataRow(colPosition) + "/"
  }
  pathList = pathList ::: List(cond)
}
println(pathList)

You can convert your dataframe to the format you want and do a collect later if needed. Here is sample code:
scala> stateDF.select(concat(lit("/id="), col("id"),lit("/state="), col("state"), lit("/")).as("value")).show
+---------------+
| value|
+---------------+
|/id=1/state=CA/|
|/id=2/state=MA/|
|/id=3/state=CT/|
+---------------+
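If you need the result on the driver as a List[String] (the shape asked for in the question), you can collect that column afterwards. A minimal sketch, assuming spark.implicits._ is in scope for the as[String] conversion:
import org.apache.spark.sql.functions.{col, concat, lit}
val pathList: List[String] = stateDF
  .select(concat(lit("/id="), col("id"), lit("/state="), col("state"), lit("/")).as("value"))
  .as[String]   // needs spark.implicits._
  .collect
  .toList
// List(/id=1/state=CA/, /id=2/state=MA/, /id=3/state=CT/)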

Thank you for all the suggestions. I have now come up with the following for my requirement.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, concat, lit}
// spark.implicits._ must be in scope for toDF and for the Encoder used by map

val stateDF = Seq(
  (1, "CA"),
  (2, "MA"),
  (3, "CT")
).toDF("id", "state")

// prefix every column value with "/<columnName>="
val allStates = stateDF.columns.foldLeft(stateDF) { (acc: DataFrame, colName: String) =>
  acc.withColumn(colName, concat(lit("/" + colName + "="), col(colName)))
}

// concatenate all columns into a single path column
val dfResults = allStates.select(concat(allStates.columns.map(c => col(c)): _*))

val columnList: List[String] = dfResults.map(row => row.getString(0) + "/").collect.toList
println(columnList)
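A possible further simplification (my own sketch, not from the original post) is to build the whole path in a single select by interleaving the literal "/column=" pieces with the column values; again assuming spark.implicits._ is in scope:
import org.apache.spark.sql.functions.{col, concat, lit}
// interleave lit("/<name>=") with col(<name>) for every column, then close with "/"
val pathCols = stateDF.columns.flatMap(c => Seq(lit("/" + c + "="), col(c))) :+ lit("/")
val columnList: List[String] = stateDF
  .select(concat(pathCols: _*).as("path"))
  .as[String]
  .collect
  .toList
// List(/id=1/state=CA/, /id=2/state=MA/, /id=3/state=CT/)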

Related

How to get RDD values that exist in an array?

I have an RDD[(Int, Double)] and an Array[Int], and I want to get a new RDD[(Int, Double)] containing only those pairs whose Int key also exists in the array.
E.g. if my array is [0, 1, 2] and my RDD is (1, 4.2), (5, 4.3), I want to get only (1, 4.2) as the output RDD.
I am thinking about using filter with a function that iterates over the array, does the comparison and returns true/false, but I am not sure whether that is idiomatic Spark.
Something like:
val newrdd = rdd.filter(x => f(x._1, array))
where
def f(x: Int, y: Array[Int]): Boolean = {
  var z = false
  for (a <- 0 to y.length - 1) {
    if (x == y(a)) {
      z = true
    }
  }
  z
}
//Input rdd
val rdd = sc.parallelize(Seq((1,4.2),(5,4.3)))
//array, convert to rdd
val arrRdd = sc.parallelize(Array(0,1,2))
//convert rdd and arrRdd to dataframe
val arrDF = arrRdd.toDF()
val df = rdd.toDF()
//do join and again convert it to rdd
df.join(arrDF,df.col("_1") === arrDF.col("value"),"leftsemi").rdd.collect
//output Array([1,4.2])
Try this:
rdd.filter(x => Array(0,1,2).contains(x._1)).collect.foreach(println)
Output:
(1,4.2)
Converting the array to a Set gives constant-time membership checks inside the filter:
val acceptableValues = array.toSet
rdd.filter { case (x, _) => acceptableValues(x) }
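If the array is large, broadcasting the lookup set avoids shipping it with every task closure. A minimal sketch, assuming sc is the SparkContext and array is the Array[Int] from the question:
// broadcast the set once instead of capturing the array in every closure
val acceptable = sc.broadcast(array.toSet)
val newrdd = rdd.filter { case (k, _) => acceptable.value.contains(k) }
newrdd.collect.foreach(println)   // (1,4.2)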

How to update a global variable inside RDD map operation

I have an RDD[(Int, Array[Double])], and afterwards I call a class function on it:
val rdd = spark.sparkContext.parallelize(Seq(
  (1, Array(2.0, 5.0, 6.3)),
  (5, Array(1.0, 3.3, 9.5)),
  (1, Array(5.0, 4.2, 3.1)),
  (2, Array(9.6, 6.3, 2.3)),
  (1, Array(8.5, 2.5, 1.2)),
  (5, Array(6.0, 2.4, 7.8)),
  (2, Array(7.8, 9.1, 4.2))
))
val new_class = new ABC
new_class.demo(rdd)
Inside the class I declared a global variable value = 0. Inside demo() a new variable new_value = 0 is declared. After the map operation, new_value gets updated and the updated value is printed inside the map.
class ABC extends Serializable {
  var value = 0
  def demo(data_new: RDD[(Int, Array[Double])]): Unit = {
    var new_value = 0
    data_new.coalesce(1).map(x => {
      if (x._1 == 1)
        new_value = new_value + 1
      println(new_value)
      value = new_value
    }).count()
    println("Outside-->" + value)
  }
}
OUTPUT:-
1
1
2
2
3
3
3
Outside-->0
How can I update the global variable value after the map operation?
I'm not sure exactly what you are trying to do, but you need to use accumulators for operations like this, where values are added up across tasks.
Here is an example :
scala> val rdd = spark.sparkContext.parallelize(Seq(
| (1, Array(2.0,5.0,6.3)),
| (5, Array(1.0,3.3,9.5)),
| (1, Array(5.0,4.2,3.1)),
| (2, Array(9.6,6.3,2.3)),
| (1, Array(8.5,2.5,1.2)),
| (5, Array(6.0,2.4,7.8)),
| (2, Array(7.8,9.1,4.2))
| )
| )
rdd: org.apache.spark.rdd.RDD[(Int, Array[Double])] = ParallelCollectionRDD[83] at parallelize at <console>:24
scala> val accum = sc.longAccumulator("My Accumulator")
accum: org.apache.spark.util.LongAccumulator = LongAccumulator(id: 46181, name: Some(My Accumulator), value: 0)
scala> rdd.foreach { x => if(x._1 == 1) accum.add(1) }
scala> accum.value
res38: Long = 3
And as mentioned by @philantrovert, if you wish to count the number of occurrences of each key, you can do the following:
scala> rdd.mapValues(_ => 1L).reduceByKey(_ + _).take(3)
res41: Array[(Int, Long)] = Array((1,3), (2,2), (5,2))
You can also use countByKey, but it should be avoided with big datasets because it collects the per-key counts to the driver.
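A minimal countByKey sketch on the sample RDD above (my own addition for illustration):
// returns a plain Map[Int, Long] on the driver
val counts: scala.collection.Map[Int, Long] = rdd.countByKey()
// Map(1 -> 3, 5 -> 2, 2 -> 2), entry order may vary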
No, you can't change global variables from inside map.
If you are trying to count the number of records with key 1 in the function, you can use filter:
val value = data_new.filter(x => (x._1 == 1)).count
println("Outside-->" +value)
Output:
Outside-->3
Also, it is not recommended to use mutable variables (var); you should always try to use immutable values (val).
I hope this helps!
Alternatively, you can approach the problem this way:
class ABC extends Serializable {
  def demo(data_new: RDD[(Int, Array[Double])]): Array[(Int, Int)] = {
    // count the occurrences of each key with map + reduceByKey
    data_new
      .map { case (key, _) => (key, 1) }
      .reduceByKey(_ + _)
      .collect()
  }
}
println("Outside-->" + new ABC().demo(rdd).mkString(","))

Dropping multiple columns from Spark dataframe by Iterating through the columns from a Scala List of Column names

I have a dataframe with around 400 columns, and I want to drop 100 of them as per my requirement.
So I have created a Scala List of the 100 column names.
I then want to iterate through a for loop to actually drop a column in each iteration.
Below is the code.
final val dropList: List[String] = List("Col1", "Col2", /* ... */ "Col100")

def drpColsfunc(inputDF: DataFrame): DataFrame = {
  var returnDF = inputDF
  for (i <- 0 to dropList.length - 1) {
    returnDF = returnDF.drop(dropList(i))
  }
  returnDF
}
val test_df = drpColsfunc(input_dataframe)
test_df.show(5)
If you just want to do nothing more complex than dropping several named columns, as opposed to selecting them by a particular condition, you can simply do the following:
df.drop("colA", "colB", "colC")
Answer:
// new Column(...) needs import org.apache.spark.sql.Column
val colsToRemove = Seq("colA", "colB", "colC" /* etc. */)
val filteredDF = df.select(
  df.columns
    .filter(colName => !colsToRemove.contains(colName))
    .map(colName => new Column(colName)): _*
)
This should work fine:
// given dropList: List[String] and df: DataFrame
val test_df = df.drop(dropList: _*)
You can just do:
def dropColumns(inputDF: DataFrame, dropList: List[String]): DataFrame =
  dropList.foldLeft(inputDF)((df, col) => df.drop(col))
It will return the DataFrame without the columns passed in dropList.
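For example, applied to the question's identifiers (hypothetical usage, since the real dataframe has around 400 columns):
val test_df = dropColumns(input_dataframe, dropList)
test_df.show(5)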
As an example (of what's happening behind the scenes), let me put it this way.
scala> val list = List(0, 1, 2, 3, 4, 5, 6, 7)
list: List[Int] = List(0, 1, 2, 3, 4, 5, 6, 7)
scala> val removeThese = List(0, 2, 3)
removeThese: List[Int] = List(0, 2, 3)
scala> removeThese.foldLeft(list)((l, r) => l.filterNot(_ == r))
res2: List[Int] = List(1, 4, 5, 6, 7)
The returned list (in our case, map it to your DataFrame) is the result of the last filter. After each step of the fold, the latest result is passed on to the next application of the function (_, _) => _.
You can use the drop operation to drop multiple columns. If you have the column names that you need to drop in a list, you can pass it using :_* after the column-list variable, and it will drop all the columns in that list.
Scala:
val df = Seq(("One","Two","Three"),("One","Two","Three"),("One","Two","Three")).toDF("Name","Name1","Name2")
val columnstoDrop = List("Name","Name1")
val df1 = df.drop(columnstoDrop:_*)
Python:
In python you can use the * operator to do the same stuff.
data = [("One", "Two","Three"), ("One", "Two","Three"), ("One", "Two","Three")]
columns = ["Name","Name1","Name2"]
df = spark.sparkContext.parallelize(data).toDF(columns)
columnstoDrop = ["Name","Name1"]
df1 = df.drop(*columnstoDrop)
Now in df1 you would get the dataframe with only one column, i.e. Name2.

Replacing the values of an RDD with another

I have two data sets like below. Each data set has "," separated numbers in each line.
Dataset 1
1,2,0,8,0
2,0,9,0,3
Dataset 2
7,5,4,6,3
4,9,2,1,8
I have to replace the zeroes in the first data set with the corresponding values from the second data set.
So the result would look like this:
1,2,4,8,3
2,9,9,1,3
I replaced the values with the code below.
val rdd1 = sc.textFile(dataset1).flatMap(l => l.split(","))
val rdd2 = sc.textFile(dataset2).flatMap(l => l.split(","))
val result = rdd1.zip(rdd2).map( x => if(x._1 == "0") x._2 else x._1)
The output I got is of the format RDD[String]. But I need the output in the format RDD[Array[String]] as this format would be more suitable for my further transformations.
If you want an RDD[Array[String]] where each array corresponds to a line, don't flatMap the values after splitting, just map them.
scala> val rdd1 = sc.parallelize(List("1,2,0,8,0", "2,0,9,0,3")).map(l => l.split(","))
rdd1: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[1] at map at <console>:27
scala> val rdd2 = sc.parallelize(List("7,5,4,6,3", "4,9,2,1,8")).map(l => l.split(","))
rdd2: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[3] at map at <console>:27
scala> val result = rdd1.zip(rdd2).map{case(arr1, arr2) => arr1.zip(arr2).map{case(v1, v2) => if(v1 == "0") v2 else v1}}
result: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[5] at map at <console>:31
scala> result.collect
res0: Array[Array[String]] = Array(Array(1, 2, 4, 8, 3), Array(2, 9, 9, 1, 3))
or maybe less verbose:
val result = rdd1.zip(rdd2).map(t => t._1.zip(t._2).map(x => if(x._1 == "0") x._2 else x._1))
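If you later need the original comma-separated line format back, you can join each array again; a small sketch (not part of the original answer):
// RDD[Array[String]] -> RDD[String], one comma-separated line per record
val lines = result.map(_.mkString(","))
lines.collect.foreach(println)
// 1,2,4,8,3
// 2,9,9,1,3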

SubtractByKey and keep rejected values

I was playing around with Spark and I got stuck on something that seems foolish.
Let's say we have two RDD:
rdd1 = {(1, 2), (3, 4), (3, 6)}
rdd2 = {(3, 9)}
If I do rdd1.subtractByKey(rdd2), I will get {(1, 2)}, which is perfectly fine. But I also want to save the rejected values {(3,4),(3,6)} to another RDD. Is there a prebuilt function in Spark or an elegant way to do this?
Please keep in mind that I am new to Spark; any help will be appreciated, thanks.
As Rohan suggests, there is no (to the best of my knowledge) standard API call to do this. What you want to do can be expressed as Union - Intersection.
Here is how you can do this in Spark:
val r1 = sc.parallelize(Seq((1,2), (3,4), (3,6)))
val r2 = sc.parallelize(Seq((3,9)))
val intersection = r1.map(_._1).intersection(r2.map(_._1))
val union = r1.map(_._1).union(r2.map(_._1))
val diff = union.subtract(intersection)
diff.collect()
> Array[Int] = Array(1)
To get the actual pairs:
val d = diff.collect()
r1.union(r2).filter(x => d.contains(x._1)).collect
I would claim this is slightly more elegant:
val r1 = sc.parallelize(Seq((1,2), (3,4), (3,6)))
val r2 = sc.parallelize(Seq((3,9)))
val r3 = r1.leftOuterJoin(r2)
val subtracted = r3.filter(_._2._2.isEmpty).map(x=>(x._1, x._2._1))
val discarded = r3.filter(_._2._2.nonEmpty).map(x=>(x._1, x._2._1))
//subtracted: (1,2)
//discarded: (3,4)(3,6)
The insight is noticing that leftOuterJoin produces both the discarded (== records with a matching key in r2) and remaining (no matching key) in one go.
It's a pity Spark doesn't have RDD.partition (in the Scala collection sense of splitting a collection into two depending on a predicate), or we could calculate subtracted and discarded in one pass.
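For contrast, this is what partition looks like on an ordinary Scala collection (a local sketch only; with RDDs the two filters above are the usual substitute):
// one pass, two outputs, on a plain Seq
val (remaining, discarded) = Seq((1, 2), (3, 4), (3, 6)).partition { case (k, _) => k != 3 }
// remaining: List((1,2)), discarded: List((3,4), (3,6))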
You can try
val rdd3 = rdd1.subtractByKey(rdd2)
val rdd4 = rdd1.subtractByKey(rdd3)
But you won't be keeping the values, just running another subtraction.
Unfortunately, I don't think there's an easy way to keep the rejected values using subtractByKey(). One way to get your desired result is through cogrouping and filtering. Something like:
val cogrouped = rdd1.cogroup(rdd2, numPartitions)
def flatFunc[A, B](key: A, values: Iterable[B]) : Iterable[(A, B)] = for {value <- values} yield (key, value)
val res1 = cogrouped.filter(_._2._2.isEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
val res2 = cogrouped.filter(_._2._2.nonEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
You might be able to borrow the work done here to make the last two lines look more elegant.
When I run this on your example, I see:
scala> val rdd1 = sc.parallelize(Array((1, 2), (3, 4), (3, 6)))
scala> val rdd2 = sc.parallelize(Array((3, 9)))
scala> val cogrouped = rdd1.cogroup(rdd2)
scala> def flatFunc[A, B](key: A, values: Iterable[B]) : Iterable[(A, B)] = for {value <- values} yield (key, value)
scala> val res1 = cogrouped.filter(_._2._2.isEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
scala> val res2 = cogrouped.filter(_._2._2.nonEmpty).flatMap { case (key, values) => flatFunc(key, values._1) }
scala> res1.collect()
...
res7: Array[(Int, Int)] = Array((1,2))
scala> res2.collect()
...
res8: Array[(Int, Int)] = Array((3,4), (3,6))
First use subtractByKey() and then subtract:
val rdd1 = spark.sparkContext.parallelize(Seq((1,2), (3,4), (3,5)))
val rdd2 = spark.sparkContext.parallelize(Seq((3,10)))
val result = rdd1.subtractByKey(rdd2)
result.foreach(print) // (1,2)
val rejected = rdd1.subtract(result)
rejected.foreach(print) // (3,5)(3,4)