How to parallelize this in Spark using the Spark Dataset API - Scala

I am using spark-sql-2.4.1v with Java 8.
I have data like below:
val df_data = Seq(
("Indus_1","Indus_1_Name","Country1", "State1",12789979),
("Indus_2","Indus_2_Name","Country1", "State2",21789933),
("Indus_3","Indus_3_Name","Country1", "State3",21789978),
("Indus_4","Indus_4_Name","Country2", "State1",41789978),
("Indus_5","Indus_5_Name","Country3", "State3",27789978),
("Indus_6","Indus_6_Name","Country1", "State1",27899790),
("Indus_7","Indus_7_Name","Country3", "State1",27899790),
("Indus_8","Indus_8_Name","Country1", "State2",27899790),
("Indus_9","Indus_9_Name","Country4", "State1",27899790)
).toDF("industry_id","industry_name","country","state","revenue");
Given the below input lists:
val countryList = Seq("Country1","Country2");
val stateMap = Map("Country1" -> Seq("State1","State2"), "Country2" -> Seq("State2","State3"));
In the Spark job, for each country and each of its states, I need to calculate the total revenue of a few industries.
In other languages we would do this with a for loop, i.e.
for( country <- countryList ){
  for( state <- stateMap(country) ){
    // do some calculation for each state's industries
  }
}
As far as I understand, if I do it like this in Spark, not all executors will be utilized.
So what is the correct way to handle this?

I have added a few extra rows to your sample data to differentiate the aggregation. I used a Scala parallel collection: for each country it gets the states, uses those values to filter the given dataframe, performs the aggregation, and at the end unions all the results back together.
scala> val df = Seq(
| ("Indus_1","Indus_1_Name","Country1", "State1",12789979),
| ("Indus_2","Indus_2_Name","Country1", "State2",21789933),
| ("Indus_2","Indus_2_Name","Country1", "State2",31789933),
| ("Indus_3","Indus_3_Name","Country1", "State3",21789978),
| ("Indus_4","Indus_4_Name","Country2", "State1",41789978),
| ("Indus_4","Indus_4_Name","Country2", "State2",41789978),
| ("Indus_4","Indus_4_Name","Country2", "State2",81789978),
| ("Indus_4","Indus_4_Name","Country2", "State3",41789978),
| ("Indus_4","Indus_4_Name","Country2", "State3",51789978),
| ("Indus_5","Indus_5_Name","Country3", "State3",27789978),
| ("Indus_6","Indus_6_Name","Country1", "State1",27899790),
| ("Indus_7","Indus_7_Name","Country3", "State1",27899790),
| ("Indus_8","Indus_8_Name","Country1", "State2",27899790),
| ("Indus_9","Indus_9_Name","Country4", "State1",27899790)
| ).toDF("industry_id","industry_name","country","state","revenue")
df: org.apache.spark.sql.DataFrame = [industry_id: string, industry_name: string ... 3 more fields]
scala> val countryList = Seq("Country1","Country2","Country4","Country5");
countryList: Seq[String] = List(Country1, Country2, Country4, Country5)
scala> val stateMap = Map("Country1" -> ("State1","State2"), "Country2" -> ("State2","State3"),"Country3" -> ("State31","State32"));
stateMap: scala.collection.immutable.Map[String,(String, String)] = Map(Country1 -> (State1,State2), Country2 -> (State2,State3), Country3 -> (State31,State32))
scala>
scala> :paste
// Entering paste mode (ctrl-D to finish)
countryList
.par
.filter(cn => stateMap.exists(_._1 == cn))
.map(country => (country,stateMap(country)))
.map{data =>
df.filter($"country" === data._1 && ($"state" === data._2._1 || $"state" === data._2._2)).groupBy("country","state","industry_name").agg(sum("revenue").as("total_revenue"))
}.reduce(_ union _).show(false)
// Exiting paste mode, now interpreting.
+--------+------+-------------+-------------+
|country |state |industry_name|total_revenue|
+--------+------+-------------+-------------+
|Country1|State2|Indus_8_Name |27899790 |
|Country1|State1|Indus_6_Name |27899790 |
|Country1|State2|Indus_2_Name |53579866 |
|Country1|State1|Indus_1_Name |12789979 |
|Country2|State3|Indus_4_Name |93579956 |
|Country2|State2|Indus_4_Name |123579956 |
+--------+------+-------------+-------------+
scala>
Edit 1: Separated the aggregation code into its own function.
scala> def processDF(data:(String,(String,String)),adf:DataFrame) = adf.filter($"country" === data._1 && ($"state" === data._2._1 || $"state" === data._2._2)).groupBy("country","state","industry_name").agg(sum("revenue").as("total_revenue"))
processDF: (data: (String, (String, String)), adf: org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame
scala> :paste
// Entering paste mode (ctrl-D to finish)
countryList.
par
.filter(cn => stateMap.exists(_._1 == cn))
.map(country => (country,stateMap(country)))
.map(data => processDF(data,df))
.reduce(_ union _)
.show(false)
// Exiting paste mode, now interpreting.
+--------+------+-------------+-------------+
|country |state |industry_name|total_revenue|
+--------+------+-------------+-------------+
|Country1|State2|Indus_8_Name |27899790 |
|Country1|State1|Indus_6_Name |27899790 |
|Country1|State2|Indus_2_Name |53579866 |
|Country1|State1|Indus_1_Name |12789979 |
|Country2|State3|Indus_4_Name |93579956 |
|Country2|State2|Indus_4_Name |123579956 |
+--------+------+-------------+-------------+
scala>
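One extra note (an assumption on my part, not part of the answer above, and it assumes Scala 2.12): a parallel collection uses the global fork-join pool by default, so if you want to cap how many of these per-country Spark jobs are submitted concurrently you can give it its own task support:
import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

val parCountries = countryList.par
// submit at most 2 per-country filter/aggregate jobs to Spark at a time
parCountries.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(2))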

It really depends on what you want to do. If you don't need to share state between (country, state) pairs, then you should create a DataFrame in which each row is a (country, state) pair; you can then control how many rows are processed in parallel (number of partitions and number of cores).
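Below is a minimal sketch of that idea, assuming the df_data frame from the question and an existing SparkSession named spark; the pairs DataFrame is my own illustration, not code from this answer:
import org.apache.spark.sql.functions.sum
import spark.implicits._

// one row per requested (country, state) pair
val pairs = Seq(
  ("Country1", "State1"), ("Country1", "State2"),
  ("Country2", "State2"), ("Country2", "State3")
).toDF("country", "state")

// a single join + groupBy spreads the work across all executors
val totals = df_data
  .join(pairs, Seq("country", "state"))
  .groupBy("country", "state", "industry_name")
  .agg(sum("revenue").as("total_revenue"))

totals.show(false)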

You can use flatMapValues to create key-value pairs and then do your calculations in a .map step.
scala> val data = Seq(("country1",Seq("state1","state2","state3")),("country2",Seq("state1","state2")))
scala> val rdd = sc.parallelize(data)
scala> val rdd2 = rdd.flatMapValues(s=>s)
scala> rdd2.foreach(println(_))
(country1,state1)
(country2,state1)
(country1,state2)
(country2,state2)
(country1,state3)
Here you can perform your per-state operations; for illustration I've appended a # to each state:
scala> rdd2.map(s=>(s._1,s._2+"#")).foreach(println(_))
(country1,state1#)
(country1,state2#)
(country1,state3#)
(country2,state1#)
(country2,state2#)

Related

Convert column containing values as List to Array

I have a spark dataframe as below:
+------------------------------------------------------------------------+
| domains |
+------------------------------------------------------------------------+
|["0b3642ab5be98c852890aff03b3f83d8","4d7a5a24426749f3f17dee69e13194a9", |
| "9d0f74269019ad82ae82cc7a7f2b5d1b","0b113db8e20b2985d879a7aaa43cecf6", |
| "d095db19bd909c1deb26e0a902d5ad92","f038deb6ade0f800dfcd3138d82ae9a9", |
| "ab192f73b9db26ec2aca2b776c4398d2","ff9cf0599ae553d227e3f1078957a5d3", |
| "aa717380213450746a656fe4ff4e4072","f3346928db1c6be0682eb9307e2edf38", |
| "806a006b5e0d220c2cf714789828ecf7","9f6f8502e71c325f2a6f332a76d4bebf", |
| "c0cb38016fb603e89b160e921eced896","56ad547c6292c92773963d6e6e7d5e39"] |
+------------------------------------------------------------------------+
The column contains values as a list. I want to convert it into an Array[String].
eg:
Array("0b3642ab5be98c852890aff03b3f83d8","4d7a5a24426749f3f17dee69e13194a9", "9d0f74269019ad82ae82cc7a7f2b5d1b","0b113db8e20b2985d879a7aaa43cecf6", "d095db19bd909c1deb26e0a902d5ad92","f038deb6ade0f800dfcd3138d82ae9a9",
"ab192f73b9db26ec2aca2b776c4398d2","ff9cf0599ae553d227e3f1078957a5d3",
"aa717380213450746a656fe4ff4e4072","f3346928db1c6be0682eb9307e2edf38",
"806a006b5e0d220c2cf714789828ecf7","9f6f8502e71c325f2a6f332a76d4bebf",
"c0cb38016fb603e89b160e921eced896","56ad547c6292c92773963d6e6e7d5e39")
I tried the following code but I am not getting intended results:
DF.select("domains").as[String].collect()
Instead I get this:
[Ljava.lang.String;@7535f28 ...
Any ideas how I can achieve this?
You can first explode your domains column before collecting it, as follows:
import org.apache.spark.sql.functions.{col, explode}
val result: Array[String] = DF.select(explode(col("domains"))).as[String].collect()
You can then print your result array using mkString method:
println(result.mkString("[", ", ", "]"))
Actually, you are getting an Array[String] as expected.
[Ljava.lang.String;@7535f28 --> this is the kind of type descriptor used internally in bytecode: [ represents an array and Ljava.lang.String represents the class java.lang.String.
If you want to print the array values as a string, you can use the .mkString() function.
import spark.implicits._
val data = Seq((Seq("0b3642ab5be98c852890aff03b3f83d8","4d7a5a24426749f3f17dee69e13194a9", "9d0f74269019ad82ae82cc7a7f2b5d1b","0b113db8e20b2985d879a7aaa43cecf6", "d095db19bd909c1deb26e0a902d5ad92","f038deb6ade0f800dfcd3138d82ae9a9")))
val df = spark.sparkContext.parallelize(data).toDF("domains")
// df: org.apache.spark.sql.DataFrame = [domains: array<string>]
val array_values = df.select("domains").as[String].collect()
// array_values: Array[String] = Array([0b3642ab5be98c852890aff03b3f83d8, 4d7a5a24426749f3f17dee69e13194a9, 9d0f74269019ad82ae82cc7a7f2b5d1b, 0b113db8e20b2985d879a7aaa43cecf6, d095db19bd909c1deb26e0a902d5ad92, f038deb6ade0f800dfcd3138d82ae9a9])
val string_value = array_values.mkString(",")
print(string_value)
// [0b3642ab5be98c852890aff03b3f83d8, 4d7a5a24426749f3f17dee69e13194a9, 9d0f74269019ad82ae82cc7a7f2b5d1b, 0b113db8e20b2985d879a7aaa43cecf6, d095db19bd909c1deb26e0a902d5ad92, f038deb6ade0f800dfcd3138d82ae9a9]
You can see the same behavior if you create a normal array:
scala> val array_values : Array[String] = Array("value1", "value2")
array_values: Array[String] = Array(value1, value2)
scala> print(array_values)
[Ljava.lang.String;@70bf2681
scala> array_values.foreach(println)
value1
value2
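As an aside (my own assumption, not covered in the answers above): if domains is an array<string> column and you want the whole column flattened into a single Array[String] without exploding in SQL, you could also decode each row as a Seq[String] and flatten on the driver:
import spark.implicits._

val flat: Array[String] = DF
  .select("domains")
  .as[Seq[String]]   // each row becomes a Seq[String]
  .collect()         // Array[Seq[String]] on the driver
  .flatten           // one flat Array[String]

println(flat.mkString("[", ", ", "]"))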

How to handle if delimiter appears in data in spark rdd

How do I handle the case where my delimiter is present in the data when loading a file using a Spark RDD?
My data looks like below:
NAME|AGE|DEP
Suresh|32|BSC
"Sathish|Kannan"|30|BE
How can I convert this into 3 columns like below?
NAME AGE DEP
suresh 32 Bsc
Sathish|Kannan 30 BE
Please see below how I tried to load the data.
scala> val rdd = sc.textFile("file:///test/Sample_dep_20.txt",2)
rdd: org.apache.spark.rdd.RDD[String] = hdfs://Hive/Sample_dep_20.txt MapPartitionsRDD[1] at textFile at <console>:27
rdd.collect.foreach(println)
101|"Sathish|Kannan"|BSC
102|Suresh|DEP
scala> val rdd2=rdd.map(x=>x.split("\""))
rdd2: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:29
scala> val rdd3=rdd2.map(x=>
| {
| var strarr = scala.collection.mutable.ArrayBuffer[String]()
| for(v<-x)
| {
| if(v.startsWith("\"") && v.endsWith("\""))
| strarr +=v.replace("\"","")
| else if(v.contains(","))
| strarr ++=v.split(",")
| else
| strarr +=v
| }
| strarr
| }
| )
rdd3: org.apache.spark.rdd.RDD[scala.collection.mutable.ArrayBuffer[String]] = MapPartitionsRDD[3] at map at <console>:31
scala> rdd3.collect.foreach(println)
ArrayBuffer(101|, Sathish|Kannan, |BSC)
ArrayBuffer(102|Suresh|DEP)
Maybe you need to explicitly define " as the quote character (it is the default for the CSV reader, but maybe not in your case?). Adding .option("quote","\"") to the options when reading your CSV file should work.
scala> val inputds = Seq("Suresh|32|BSC","\"Satish|Kannan\"|30|BE").toDS()
inputds: org.apache.spark.sql.Dataset[String] = [value: string]
scala> val outputdf = spark.read.option("header",false).option("delimiter","|").option("quote","\"").csv(inputds)
outputdf: org.apache.spark.sql.DataFrame = [_c0: string, _c1: string ... 1 more field]
scala> outputdf.show(false)
+-------------+---+---+
|_c0 |_c1|_c2|
+-------------+---+---+
|Suresh |32 |BSC|
|Satish|Kannan|30 |BE |
+-------------+---+---+
Defining the quote character makes DataFrameReader ignore delimiters found inside quoted strings; see the Spark API doc here.
EDIT
If you want to play hard and still use plain RDDs, then try modifying your split() function like this:
val rdd2=rdd.map(x=>x.split("\\|(?=([^\"]*\"[^\"]*\")*[^\"]*$)"))
It uses a positive look-ahead to ignore | delimiters found inside quotes, and saves you from doing string manipulations in your second .map.
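To see what that regex does outside of Spark, here is a small plain-Scala sketch (my own illustration, using the sample rows from the question):
// split on | only when it is followed by an even number of quotes,
// i.e. only on pipes that sit outside a quoted field
val pattern = "\\|(?=([^\"]*\"[^\"]*\")*[^\"]*$)"
val lines = Seq("Suresh|32|BSC", "\"Sathish|Kannan\"|30|BE")
lines.foreach { line =>
  // the quoted field survives the split intact; strip the quotes afterwards
  val fields = line.split(pattern).map(_.replace("\"", ""))
  println(fields.mkString(" / "))
}
// Suresh / 32 / BSC
// Sathish|Kannan / 30 / BE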

Iterate boolean comparison over two DataFrames?

I have created two DFs, one from a generic number list and another from a specific number list. I want to iterate over the first list and compare it to the second: if an element of genericList is equal to any number in specificList, I want True returned, and if not, False.
I have tried to use an if condition inside a for loop, something similar to for (num <- list) print (list) if ...
scala> val genericList = List(5,6,7,8,9,10)
scala> val df = genericList.toDF
scala> val specificList = List(5,-3,8)
Try the .exists and .contains functions to check the numbers:
scala> val genericList = List(5,6,7,8,9,10)
scala> val specificList = List(5,-3,8)
scala> genericList.exists(specificList.contains)
res1: Boolean = true
In Dataframe API:
scala> val genericList = List(5,6,7,8,9,10)
scala> val df = genericList.toDF
scala> val specificList = List(5,-3,8)
scala> df.withColumn("check",'value.isin(specificList:_*)).show()
+-----+-----+
|value|check|
+-----+-----+
| 5| true|
| 6|false|
| 7|false|
| 8| true|
| 9|false|
| 10|false|
+-----+-----+
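If you need to collapse the DataFrame version into a single Boolean (a follow-up of my own, not part of the original answer), one option is to aggregate the check column:
import org.apache.spark.sql.functions.{col, max}

// max over a boolean column is true if any row matched
val anyMatch: Boolean = df
  .withColumn("check", col("value").isin(specificList: _*))
  .agg(max("check"))
  .head()
  .getBoolean(0)
// anyMatch: Boolean = true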

Split text and find the common words in a Spark Dataframe

I am working with Scala and Spark and I have a dataframe with two text columns.
Those columns have the format "term1, term2, term3, ..." and I want to create a third column with the number of terms common to both of them.
For example
Col1
orange, apple, melon
party, clouds, beach
Col2
apple, apricot, watermelon
black, yellow, white
The result would be
Col3
1
0
What I have done so far is create a UDF that splits the text and takes the intersection of the two columns.
val common_terms = udf((a: String, b: String) => if (a.isEmpty || b.isEmpty) {
0
} else {
split(a, ",").intersect(split(b, ",")).length
})
And then on my dataframe
val results = termsDF.withColumn("col3", common_terms(col("col1"), col("col2")))
But I get the following error:
Error:(96, 13) type mismatch;
found : String
required: org.apache.spark.sql.Column
split(a, ",").intersect(split(b, ",")).length
I would appreciate any help since I am new to Scala and just trying to learn from online tutorials.
EDIT:
val common_authors = udf((a: String, b: String) => if (a != null || b != null) {
0
} else {
val tempA = a.split( ",")
val tempB = b.split(",")
if ( tempA.isEmpty || tempB.isEmpty ) {
0
} else {
tempA.intersect(tempB).length
}
})
After the edit, termsDF.show() runs fine, but if I do something like termsDF.orderBy(desc("col3")) I get a java.lang.NullPointerException.
Try
val common_terms = udf((a: String, b: String) => if (a.isEmpty || b.isEmpty) {
0
} else {
var tmp1 = a.split(",")
var tmp2 = b.split(",")
tmp1.intersect(tmp2).length
})
val results = termsDF.withColumn("col3", common_terms($"col1", $"col2")).show
split(a, ",") is a Spark Column function.
Inside a UDF you are working with plain strings, so you need String.split(), which is a Scala method.
Regarding your edit: the null verification should use == instead of !=.
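For reference, a null-safe version of the UDF might look like this (a sketch based on the answer above; the .trim is my own addition to cope with the spaces after the commas in the sample data):
import org.apache.spark.sql.functions.udf

val common_terms = udf((a: String, b: String) =>
  if (a == null || b == null || a.isEmpty || b.isEmpty) 0
  else {
    // split on commas, trim whitespace, then count the common terms
    val tempA = a.split(",").map(_.trim)
    val tempB = b.split(",").map(_.trim)
    tempA.intersect(tempB).length
  }
)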
In Spark 2.4 SQL you can get the same result without a UDF. Check this out:
scala> val df = Seq(("orange,apple,melon","apple,apricot,watermelon"),("party,clouds,beach","black,yellow,white"), ("orange,apple,melon","apple,orange,watermelon")).toDF("col1","col2")
df: org.apache.spark.sql.DataFrame = [col1: string, col2: string]
scala>
scala> df.createOrReplaceTempView("tasos")
scala> spark.sql(""" select col1,col2, filter(split(col1,','), x -> array_contains(split(col2,','),x) ) a1 from tasos """).show(false)
+------------------+------------------------+---------------+
|col1 |col2 |a1 |
+------------------+------------------------+---------------+
|orange,apple,melon|apple,apricot,watermelon|[apple] |
|party,clouds,beach|black,yellow,white |[] |
|orange,apple,melon|apple,orange,watermelon |[orange, apple]|
+------------------+------------------------+---------------+
If you want the size, then
scala> spark.sql(""" select col1,col2, filter(split(col1,','), x -> array_contains(split(col2,','),x) ) a1 from tasos """).withColumn("a1_size",size('a1)).show(false)
+------------------+------------------------+---------------+-------+
|col1 |col2 |a1 |a1_size|
+------------------+------------------------+---------------+-------+
|orange,apple,melon|apple,apricot,watermelon|[apple] |1 |
|party,clouds,beach|black,yellow,white |[] |0 |
|orange,apple,melon|apple,orange,watermelon |[orange, apple]|2 |
+------------------+------------------------+---------------+-------+
scala>
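A possibly simpler variant (my own suggestion, not part of the answer above): Spark 2.4 also ships array_intersect, so the same thing can be written directly in the DataFrame API:
import org.apache.spark.sql.functions.{array_intersect, size, split}
import spark.implicits._

// intersect the two split arrays, then take the size as the number of common terms
val results = df
  .withColumn("common", array_intersect(split($"col1", ","), split($"col2", ",")))
  .withColumn("col3", size($"common"))

results.show(false)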

How to use Except function with spark Dataframe

I would like to get the differences between two dataframes, but return only the fields that differ. For example, I have 2 dataframes as follows:
val DF1 = Seq(
(3,"Chennai", "rahman",9846, 45000,"SanRamon"),
(1,"Hyderabad","ram",9847, 50000,"SF")
).toDF("emp_id","emp_city","emp_name","emp_phone","emp_sal","emp_site")
val DF2 = Seq(
(3,"Chennai", "rahman",9846, 45000,"SanRamon"),
(1,"Sydney","ram",9847, 48000,"SF")
).toDF("emp_id","emp_city","emp_name","emp_phone","emp_sal","emp_site")
The only difference between these two dataframes is emp_city and emp_sal for the second row.
Now, I am using the except function, which gives me the entire row, as follows:
DF1.except(DF2)
+------+---------+--------+---------+-------+--------+
|emp_id| emp_city|emp_name|emp_phone|emp_sal|emp_site|
+------+---------+--------+---------+-------+--------+
| 1|Hyderabad| ram| 9847| 50000| SF|
+------+---------+--------+---------+-------+--------+
However, I need the output to be like this:
+------+---------+-------+
|emp_id| emp_city|emp_sal|
+------+---------+-------+
| 1|Hyderabad| 50000|
+------+---------+-------+
Which shows the different cells as well as emp_id.
Edit:
If a column's value has changed it should appear; if there is no change it should be hidden or null.
The following should give you the result you are looking for.
DF1.except(DF2).select("emp_id","emp_city","emp_sal")
You should consider the comment from @user238607, as we cannot predict which columns are going to differ.
Still, you can try this workaround.
I'm assuming emp_id is unique.
scala> val diff = udf((col: String, c1: String, c2: String) => if (c1 == c2) "" else col )
scala> DF1.join(DF2, DF1("emp_id") === DF2("emp_id"))
res15: org.apache.spark.sql.DataFrame = [emp_id: int, emp_city: string ... 10 more fields]
scala> res15.withColumn("diffcolumn", split(concat_ws(",",DF1.columns.map(x => diff(lit(x), DF1(x), DF2(x))):_*),","))
res16: org.apache.spark.sql.DataFrame = [emp_id: int, emp_city: string ... 11 more fields]
scala> res16.show(false)
+------+---------+--------+---------+-------+--------+------+--------+--------+---------+-------+--------+---------------------------+
|emp_id|emp_city |emp_name|emp_phone|emp_sal|emp_site|emp_id|emp_city|emp_name|emp_phone|emp_sal|emp_site|diffcolumn |
+------+---------+--------+---------+-------+--------+------+--------+--------+---------+-------+--------+---------------------------+
|3 |Chennai |rahman |9846 |45000 |SanRamon|3 |Chennai |rahman |9846 |45000 |SanRamon|[, , , , , ] |
|1 |Hyderabad|ram |9847 |50000 |SF |1 |Sydney |ram |9847 |48000 |SF |[, emp_city, , , emp_sal, ]|
+------+---------+--------+---------+-------+--------+------+--------+--------+---------+-------+--------+---------------------------+
scala> val diff_cols = res16.select(explode($"diffcolumn")).filter("col != ''").distinct.collect.map(a=>col(a(0).toString))
scala> val exceptOpr = DF1.except(DF2)
scala> exceptOpr.select(diff_cols:_*).show
+-------+---------+
|emp_sal| emp_city|
+-------+---------+
| 50000|Hyderabad|
+-------+---------+
I found this solution, which seems to be working fine:
val cols = DF1.columns.filter(_ != "emp_id").toList
val DF3 = DF1.except(DF2)
def mapDiffs(name: String) = when($"l.$name" === $"r.$name", null ).otherwise(array($"l.$name", $"r.$name")).as(name)
val result = DF2.as("l").join(DF3.as("r"), "emp_id").select($"emp_id" :: cols.map(mapDiffs): _*)
It generates the output as follow :
+------+-------------------+--------+---------+--------------+--------+
|emp_id| emp_city|emp_name|emp_phone| emp_sal|emp_site|
+------+-------------------+--------+---------+--------------+--------+
| 1|[Sydney, Hyderabad]| null| null|[48000, 50000]| null|
+------+-------------------+--------+---------+--------------+--------+