Invert map and reduceByKey in Spark-Scala

I have a CSV dataset that I want to process using Spark. The second column is of this format:
yyyy-MM-dd hh:mm:ss
I want to group the rows by each MM-dd:
val days : RDD = sc.textFile(<csv file>)
val partitioned = days.map(row => {
row.split(",")(1).substring(5,10)
}).invertTheMap.groupOrReduceByKey
The result of groupOrReduceByKey should be of the form:
("MM-dd" -> (row1, row2, row3, ..., row_n) )
How should I implement invertTheMap and groupOrReduceByKey?
I saw this done in Python here, but I wonder how it is done in Scala?

This should do the trick
val testData = List("a, 1987-09-30",
                    "a, 2001-09-29",
                    "b, 2002-09-30")
val input = sc.parallelize(testData)
val grouped = input.map { row =>
  val columns = row.split(",")
  (columns(1).substring(6, 11), row)
}.groupByKey()
grouped.foreach(println)
The output is
(09-29,CompactBuffer(a, 2001-09-29))
(09-30,CompactBuffer(a, 1987-09-30, b, 2002-09-30))
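Since the question title also mentions reduceByKey, here is a hedged sketch of the same grouping done with reduceByKey instead of groupByKey (an assumption that collecting rows into lists is acceptable; groupByKey above is simpler for plain grouping):
// Wrap each row in a single-element List so values can be concatenated
// pairwise by reduceByKey instead of collected into CompactBuffers.
val reduced = input.map { row =>
  val columns = row.split(",")
  (columns(1).substring(6, 11), List(row))
}.reduceByKey(_ ++ _)
reduced.foreach(println)
// e.g. (09-30,List(a, 1987-09-30, b, 2002-09-30))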

Related

Spark: how to group together one column of elements based on other two columns of elements in an RDD

I have an RDD with 3 columns (road_idx, snodeidx, enodeidx).
It looks like this:
(roadidx_995, 1138, 1145)
(roadidx_996, 1138, 1139)
(roadidx_997, 2740, 1020)
(roadidx_998, 2762, 2740)
(roadidx_999, 3251, 3240)
.........
How do I group together the road_idx values which have either snodeidx or enodeidx in common, and give each group a number starting from 1?
expected output:
(1,[roadidx_995,roadidx_996])
(2,[roadidx_997,roadidx_998])
(3,[roadidx_999])
As shown above,
roadidx_995 and roadidx_996 have the same snodeidx 1138.
roadidx_997 has the same snodeidx as the enodeidx of roadidx_998, which is 2740.
roadidx_999 is in a group on its own.
Scala code or Python code are both OK, as long as you can explain the logic of using the RDD APIs to get the expected output.
Much appreciated!
This can be implemented as follows:
1. Split the original RDD into two RDDs, grouped by the "start" node and by the "end" node respectively.
2. Join the original dataset with the values from step 1 several times, producing four columns:
|------------------|----------------|--------------|----------------|
| start join start | start join end | end join end | end join start |
|------------------|----------------|--------------|----------------|
3. Join the values from the four columns into one.
In Scala this can be implemented as:
val data = List(
("roadidx_995", 1138, 1145),
("roadidx_996", 1138, 1139),
("roadidx_997", 2740, 1020),
("roadidx_998", 2762, 2740),
("roadidx_999", 3251, 3240)
)
val original = sparkContext.parallelize(data)
val groupedByStart = original.map(v => (v._1, v._2)).groupBy(_._2).mapValues(_.map(_._1))
val groupedByEnd = original.map(v => (v._1, v._3)).groupBy(_._2).mapValues(_.map(_._1))
val indexesOnly = original.map(allRow => (allRow._2, allRow._3))
// join by start value
val startJoinsStart = indexesOnly.keyBy(_._1).join(groupedByStart)
val startJoinsEnd = startJoinsStart.leftOuterJoin(groupedByEnd)
// join by end value
val endKeys = startJoinsEnd.values.keyBy(_._1._1._2)
val endJoinsEnd = endKeys.join(groupedByEnd)
val endJoinsStart = endJoinsEnd.leftOuterJoin(groupedByStart)
// flatten to output format
val result = endJoinsStart
.values
.map(v => (v._1._1._1._2, v._1._1._2, v._1._2, v._2))
.map(v => v._1 ++ v._2.getOrElse(Seq()) ++ v._3 ++ v._4.getOrElse(Seq()))
.map(_.toSet)
.distinct()
result.foreach(println)
Output is:
Set(roadidx_995, roadidx_996)
Set(roadidx_998, roadidx_997)
Set(roadidx_999)
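The expected output also asks for a group number starting from 1; a minimal sketch of that last step (assuming the result RDD above contains the groups you want to number) is to zip the distinct sets with an index:
// Assign 1-based group numbers to the distinct sets computed above
val numbered = result.zipWithIndex.map { case (group, idx) => (idx + 1, group.toSeq) }
numbered.foreach(println)
// e.g. (1,List(roadidx_995, roadidx_996))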

Check every column in a spark dataframe has a certain value

Can we check whether every column in a Spark dataframe contains a certain string (for example "Y") using Spark-SQL or Scala?
I have tried the following but don't think it is working properly.
df.select(df.col("*")).filter("'*' =='Y'")
Thanks,
Sai
You can do something like this to keep the rows where at least one column contains 'Y' (the union below has "or" semantics and can return duplicate rows; for rows where all columns contain 'Y', combine the per-column conditions with && instead, as sketched after the code):
//Get all columns
val columns: Array[String] = df.columns
//For each column, keep the rows with 'Y'
val seqDfs: Seq[DataFrame] = columns.map(name => df.filter(s"$name == 'Y'"))
//Union all the dataframes together into one final dataframe
val output: DataFrame = seqDfs.reduceRight(_ union _)
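A minimal sketch of that "all columns equal 'Y'" variant, assuming string-typed columns (this is an addition, not part of the answer above):
import org.apache.spark.sql.functions.col
// Build one condition per column and AND them together into a single filter
val allY = df.filter(df.columns.map(name => col(name) === "Y").reduce(_ && _))
allY.show()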
You can use the dataframe method columns to get all the column names:
val columnNames: Array[String] = df.columns
and then add all the filters in a loop:
var filteredDf = df.select(df.col("*"))
for(name <- columnNames) {
filteredDf = filteredDf.filter(s"$name =='Y'")
}
or you can build a single SQL expression using the same approach, as sketched below.
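A hedged sketch of that SQL-expression variant (an addition; it assumes every column name can be used directly inside a SQL expression):
// Build a single WHERE-style expression that requires every column to equal 'Y'
val condition = df.columns.map(name => s"$name = 'Y'").mkString(" AND ")
val filteredDf = df.where(condition)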
If you want to keep every row in which any of the columns is equal to 1 (or anything else), you can dynamically create a query like this (PySpark):
from pyspark.sql.functions import col, lit

cols = [col(c) == lit(1) for c in df.columns]
query = cols[0]
for c in cols[1:]:
    query |= c
df.filter(query).show()
It's a bit verbose, but it is very clear what is happening. A more elegant version would be:
from functools import reduce
res = df.filter(reduce(lambda x, y: x | y, (col(c) == lit(1) for c in df.columns)))
res.show()
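For completeness, a minimal Scala sketch of the same reduce idea (an addition, not part of the original answer):
import org.apache.spark.sql.functions.{col, lit}
// Keep rows where any column equals 1, OR-ing one condition per column
val res = df.filter(df.columns.map(c => col(c) === lit(1)).reduce(_ || _))
res.show()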

Spark Dataframe select based on column index

How do I select all the columns of a dataframe that has certain indexes in Scala?
For example, if a dataframe has 100 columns and I want to extract only columns (10, 12, 13, 14, 15), how do I do that?
The following selects all columns from dataframe df whose names are mentioned in the Array colNames:
df = df.select(colNames.head,colNames.tail: _*)
If there is a similar colNos array which has
colNos = Array(10,20,25,45)
how do I transform the above df.select to fetch only the columns at those specific indexes?
You can map over columns:
import org.apache.spark.sql.functions.col
df.select(colNos map df.columns map col: _*)
or:
df.select(colNos map (df.columns andThen col): _*)
or:
df.select(colNos map (col _ compose df.columns): _*)
All the methods shown above are equivalent and don't impose a performance penalty. The following mapping:
colNos map df.columns
is just a local Array access (constant-time access for each index), and choosing between the String- or Column-based variant of select doesn't affect the execution plan:
val df = Seq((1, 2, 3, 4, 5, 6)).toDF
val colNos = Seq(0, 3, 5)
df.select(colNos map df.columns map col: _*).explain
== Physical Plan ==
LocalTableScan [_1#46, _4#49, _6#51]
df.select("_1", "_4", "_6").explain
== Physical Plan ==
LocalTableScan [_1#46, _4#49, _6#51]
user6910411's answer above works like a charm, and the number of tasks/logical plan is similar to my approach below. But my approach is a bit faster.
So,
I would suggest you go with column names rather than column numbers. Column names are much safer and much lighter than using numbers. You can use the following solution:
val colNames = Seq("col1", "col2" ...... "col99", "col100")
val selectColNames = Seq("col1", "col3", .... selected column names ... )
val selectCols = selectColNames.map(name => df.col(name))
df = df.select(selectCols:_*)
If you are hesitant to write out all 100 column names, there is a shortcut method too:
val colNames = df.schema.fieldNames
Example: grab the first 14 columns of a Spark Dataframe by index using Scala.
import org.apache.spark.sql.functions.col
// Gives array of names by index (first 14 cols for example)
val sliceCols = df.columns.slice(0, 14)
// Maps names & selects columns in dataframe
val subset_df = df.select(sliceCols.map(name=>col(name)):_*)
You cannot simply do this (as I tried and failed):
// Gives array of names by index (first 14 cols for example)
val sliceCols = df.columns.slice(0, 14)
// Maps names & selects columns in dataframe
val subset_df = df.select(sliceCols)
The reason is that you have to convert your datatype of Array[String] to Array[org.apache.spark.sql.Column] in order for the slicing to work.
Or wrap it in a function using currying (high five to my colleague for this):
// Subsets the Dataframe using the beg_val & end_val indexes.
def subset_frame(beg_val: Int = 0, end_val: Int)(df: DataFrame): DataFrame = {
  val sliceCols = df.columns.slice(beg_val, end_val)
  df.select(sliceCols.map(name => col(name)): _*)
}
// Get first 25 columns as subsetted dataframe
val subset_df:DataFrame = df_.transform(subset_frame(0, 25))

How to add corresponding Integer values in 2 different DataFrames

I have two DataFrames in my code with the exact same dimensions, let's say 1,000,000 x 50. I need to add the corresponding values in both dataframes. How can I achieve that?
One option would be to add another column with ids, union both DataFrames and then use reduceByKey. But is there any other, more elegant way?
Thanks.
Your approach is good. Another option is to take the underlying RDDs, zip them together, iterate over the zipped rows to sum the columns, and create a new dataframe using either of the original dataframes' schemas.
Assuming the data types of all the columns are integer, this code snippet should work. Please note that this was done in Spark 2.1.0.
import org.apache.spark.sql.{DataFrame, Row}
import spark.implicits._
val a: DataFrame = spark.sparkContext.parallelize(Seq(
(1, 2),
(3, 6)
)).toDF("column_1", "column_2")
val b: DataFrame = spark.sparkContext.parallelize(Seq(
(3, 4),
(1, 5)
)).toDF("column_1", "column_2")
// Merge rows
val rows = a.rdd.zip(b.rdd).map{
case (rowLeft, rowRight) => {
val totalColumns = rowLeft.schema.fields.size
val summedRow = for(i <- (0 until totalColumns)) yield rowLeft.getInt(i) + rowRight.getInt(i)
Row.fromSeq(summedRow)
}
}
// Create new data frame
val ab: DataFrame = spark.createDataFrame(rows, a.schema) // use any of the schemas
ab.show()
Update:
So, I experimented with the performance of my solution vs. yours. I tested with 100,000 rows, each with 50 columns. In the case of your approach there are 51 columns, the extra one being the ID column. On a single machine (no cluster), my solution seems to work a bit faster.
The union and group-by approach takes about 5598 milliseconds, whereas my solution takes about 5378 milliseconds.
My assumption is the first solution takes a bit more time because of the union operation of the two dataframes.
Here are the methods I created for testing both approaches (the getDummyData helper is not shown here; a hedged sketch follows the code).
def option_1()(implicit spark: SparkSession): Unit = {
import spark.implicits._
val a: DataFrame = getDummyData(withId = true)
val b: DataFrame = getDummyData(withId = true)
val allData = a.union(b)
val result = allData.groupBy($"id").agg(allData.columns.collect({ case col if col != "id" => (col, "sum") }).toMap)
println(result.count())
// result.show()
}
def option_2()(implicit spark: SparkSession): Unit = {
val a: DataFrame = getDummyData()
val b: DataFrame = getDummyData()
// Merge rows
val rows = a.rdd.zip(b.rdd).map {
case (rowLeft, rowRight) => {
val totalColumns = rowLeft.schema.fields.size
val summedRow = for (i <- (0 until totalColumns)) yield rowLeft.getInt(i) + rowRight.getInt(i)
Row.fromSeq(summedRow)
}
}
// Create new data frame
val result: DataFrame = spark.createDataFrame(rows, a.schema) // use any of the schemas
println(result.count())
// result.show()
}
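Purely as an assumption, a hypothetical sketch of what the getDummyData helper might look like (100,000 rows of 50 integer columns, with an optional id column) could be:
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Hypothetical helper (not from the original answer): builds 100000 rows of
// 50 integer columns, optionally prefixed with an "id" column.
def getDummyData(withId: Boolean = false)(implicit spark: SparkSession): DataFrame = {
  val numRows = 100000
  val numCols = 50
  val dataFields = (1 to numCols).map(i => StructField(s"col_$i", IntegerType, nullable = false))
  val fields = if (withId) StructField("id", IntegerType, nullable = false) +: dataFields else dataFields
  val rows = spark.sparkContext.parallelize(0 until numRows).map { r =>
    val values = (1 to numCols).map(c => (r + c) % 100)
    Row.fromSeq(if (withId) r +: values else values)
  }
  spark.createDataFrame(rows, StructType(fields))
}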

Dropping constant columns in a csv file

I would like to drop the columns which are constant in a dataframe. Here is what I did, but it takes a lot of time, especially while writing the dataframe to the CSV file. Any help to optimize the code so it takes less time would be appreciated.
val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()
val df = spark.read.option("inferSchema", "true").option("header", "false").csv("D:\\ProcessDataSet\\anis_data\\Set _1Mud Pumps_Merged.csv")
val aggregations = df.drop("DateTime").columns.map(c => stddev(c).as(c))
val df2 = df.agg(aggregations.head, aggregations.tail: _*)
val columnsToKeep: Seq[String] = (df2.first match {
case r : Row => r.toSeq.toArray.map(_.asInstanceOf[Double])
}).zip(df.columns)
.filter(_._1 != 0) // your special condition is in the filter
.map(_._2) // keep just the name of the column
// select columns with stddev != 0
val finalResult = df.select(columnsToKeep.head, columnsToKeep.tail : _*)
finalResult.write.option("header",true).csv("D:\\ProcessDataSet\\dataWithoutConstant\\Set _1Mud Pumps_MergedCleaned.csv")
}
I think there is not much room left for optimization. You are doing the right thing.
What you can try is to cache() your dataframe df.
df is used in two separate Spark actions so it is loaded twice.
Try :
...
val df = spark.read.option("inferSchema", "true").option("header", "false").csv("D:\\ProcessDataSet\\anis_data\\Set _1Mud Pumps_Merged.csv")
df.cache()
val aggregations = df.drop("DateTime").columns.map(c => stddev(c).as(c))
...
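If some of the columns are not numeric, stddev will not detect constant values; a hedged alternative (an addition, not part of the answer above) is to count distinct values per column in a single aggregation and keep only the columns with more than one distinct value:
import org.apache.spark.sql.functions.{col, countDistinct}
// One aggregation pass: count distinct values per column (aliased by the column name)
val countExprs = df.columns.map(c => countDistinct(col(c)).as(c))
val counts = df.agg(countExprs.head, countExprs.tail: _*).first()
// Keep only the columns that have more than one distinct value
val columnsToKeep = df.columns.filter(c => counts.getAs[Long](c) > 1)
val finalResult = df.select(columnsToKeep.head, columnsToKeep.tail: _*)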