Join RDDs to get the complement of the intersection - Scala

I have two RDDs:
The first (User ID, Mov ID, Rating, Timestamp)
data_wo_header: RDD[String]
scala> data_wo_header.take(5).foreach(println)
1,2,3.5,1112486027
1,29,3.5,1112484676
1,32,3.5,1112484819
1,47,3.5,1112484727
1,50,3.5,1112484580
and RDD2 (User ID, Mov ID)
data_test_wo_header: RDD[String]
scala> data_test_wo_header.take(5).foreach(println)
1,2
1,367
1,1009
1,1525
1,1750
I need to join the two RDDs such that the join removes from RDD1 the (User ID, Mov ID) entries that are common to both.
Can someone guide me through a Scala-Spark join for the two RDDs?
I will also need a join where the new RDD derived from RDD1 contains only the common items.

First, convert your RDDs to DataFrames, because the DataFrame API offers common SQL-like operations such as join and select.
To convert your RDDs to DataFrames you need an RDD of case-class instances (or Row) rather than RDD[String].
import sqlContext.implicits._
import org.apache.spark.sql.functions.lit
case class cs1(UserID: Int, MovID: Int, Rating: String, Timestamp: String)
case class cs2(UserID: Int, MovID: Int)
val df1 = data_wo_header.map(row => {
  val splits = row.split(",")
  cs1(splits(0).toInt, splits(1).toInt, splits(2), splits(3))
}).toDF("UserID", "MovID", "Rating", "Timestamp")

val df2 = data_test_wo_header.map(row => {
  val splits = row.split(",")
  cs2(splits(0).toInt, splits(1).toInt)
}).toDF("UserID", "MovID")
Now, add a new column to df2,
val df2Prime = df2.withColumn("isPresent", lit(1))
Then left join df1 with df2Prime. Rows where isPresent is null exist only in df1, which is the complement you are after; rows where isPresent is 1 are the intersection. Drop the temporary isPresent flag afterwards.
val temp = df1.join(df2Prime, Seq("UserID", "MovID"), "left")
val complement = temp.filter(temp("isPresent").isNull).drop("isPresent")
val intersection = temp.filter(temp("isPresent") === 1).drop("isPresent")
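For reference, if you stay in the DataFrame API, Spark 2.0+ also accepts the "leftanti" and "leftsemi" join types, which express the complement and the intersection directly; a minimal sketch using the df1 and df2 built above, with no extra flag column needed:
// Rows of df1 whose (UserID, MovID) do not appear in df2 (the complement)
val complementDirect = df1.join(df2, Seq("UserID", "MovID"), "leftanti")
// Rows of df1 whose (UserID, MovID) do appear in df2 (the intersection)
val intersectionDirect = df1.join(df2, Seq("UserID", "MovID"), "leftsemi")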

A much simpler approach is to use subtractByKey. The following works for me:
// Key both RDDs by (UserID, MovID)
val data_wo_header = dropheader(data).map(_.split(",")).map(x => ((x(0), x(1)), (x(2), x(3))))
val data_test_wo_header = dropheader(data_test).map(_.split(",")).map(x => ((x(0), x(1)), 1))

// Entries of RDD1 whose (UserID, MovID) key does not appear in RDD2 (the complement)
val ratings_train = data_wo_header.subtractByKey(data_test_wo_header)
// Entries of RDD1 whose key does appear in RDD2 (the intersection)
val ratings_test = data_wo_header.subtractByKey(ratings_train)
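If you then need the results back as the original comma-separated strings, a small sketch assuming the ((UserID, MovID), (Rating, Timestamp)) keying above:
val ratings_train_csv = ratings_train.map {
  case ((user, mov), (rating, ts)) => s"$user,$mov,$rating,$ts"
}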

Related

Read a path from a dataframe column and add another column from the dataframe

I have a dataframe as below
id, path
id1, path1
id2, path2
id3, path3
I want to read the parquet files at the paths above, add the corresponding id column to each result after reading, and finally union all the results.
Code:
case class cls_lyr(id: String, path: String)
val selColDf = df.select("id", "path").dropDuplicates
val newdf = selColDf.as[cls_lyr].take(selColDf.count.toInt).foreach(t => {
  var id = t.id
  var path = t.path
  val lkpDf = spark.read.parquet(path)
  val finalDf = lkpDf.withColumn("portf_id", lit(id))
})
How would I union the data from the 3 paths? Is there a better way to do this?
You can use map instead of foreach, so that you will get an array of dataframes, which you can reduce to a single dataframe using unionAll:
selColDf.as[cls_lyr]
.collect
.map(t => spark.read.parquet(t.path).withColumn("portf_id", lit(t.id)))
.reduce(_ unionAll _)
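Note that unionAll is deprecated in Spark 2.x in favour of union; assuming the same spark session, selColDf and cls_lyr as above, an equivalent sketch would be:
import org.apache.spark.sql.functions.lit

selColDf.as[cls_lyr]
  .collect
  .map(t => spark.read.parquet(t.path).withColumn("portf_id", lit(t.id)))
  .reduce(_ union _)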

Dynamically select multiple columns while joining different Dataframe in Scala Spark

I have two Spark dataframes, df1 and df2. Is there a way to select the output columns dynamically while joining these two dataframes? The definition below outputs all columns from df1 and df2 in the case of an inner join.
def joinDF(df1: DataFrame, df2: DataFrame, joinExprs: Column, joinType: String): DataFrame = {
  val dfJoinResult = df1.join(df2, joinExprs, joinType)
  dfJoinResult
  //.select()
}
Input data:
val df1 = List(("1","new","current"), ("2","closed","saving"), ("3","blocked","credit")).toDF("id","type","account")
val df2 = List(("1","7"), ("2","5"), ("5","8")).toDF("id","value")
Expected result:
val dfJoinResult = df1
.join(df2, df1("id") === df2("id"), "inner")
.select(df1("type"), df1("account"), df2("value"))
dfJoinResult.schema():
StructType(StructField(type,StringType,true),
StructField(account,StringType,true),
StructField(value,StringType,true))
I have looked at options like df.select(cols.head, cols.tail: _*), but that does not allow selecting columns from both DFs.
Is there a way to pass the select columns dynamically, together with the dataframe each should come from, to my def? I'm using Spark 2.2.0.
It is possible to pass the select expression as a Seq[Column] to the method:
def joinDF(df1: DataFrame, df2: DataFrame, joinExpr: Column, joinType: String, selectExpr: Seq[Column]): DataFrame = {
  val dfJoinResult = df1.join(df2, joinExpr, joinType)
  dfJoinResult.select(selectExpr: _*)
}
To call the method use:
val joinExpr = df1.col("id") === df2.col("id")
val selectExpr = Seq(df1.col("type"), df1.col("account"), df2.col("value"))
val testDf = joinDF(df1, df2, joinExpr, "inner", selectExpr)
This will give the desired result:
+------+-------+-----+
| type|account|value|
+------+-------+-----+
| new|current| 7|
|closed| saving| 5|
+------+-------+-----+
In the selectExpr above, it is necessary to specify which dataframe each column comes from. However, this can be simplified further if the following assumptions hold:
The columns to join on have the same name in both dataframes
The columns to be selected have unique names (the other dataframe does not have a column with the same name)
In this case, the joinExpr: Column can be changed to joinExpr: Seq[String] and selectExpr: Seq[Column] to selectExpr: Seq[String]:
def joinDF(df1: DataFrame, df2: DataFrame, joinExpr: Seq[String], joinType: String, selectExpr: Seq[String]): DataFrame = {
  val dfJoinResult = df1.join(df2, joinExpr, joinType)
  dfJoinResult.select(selectExpr.head, selectExpr.tail: _*)
}
Calling the method now looks cleaner:
val joinExpr = Seq("id")
val selectExpr = Seq("type", "account", "value")
val testDf = joinDF(df1, df2, joinExpr, "inner", selectExpr)
Note: when the join is performed with a Seq[String], the column names of the resulting dataframe differ from the expression-based join: the join column appears only once. If the result does contain columns with the same name, there is no way to select them separately afterwards.
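To illustrate, a small sketch using the df1 and df2 from the question:
// Seq[String] join: the "id" column appears only once
df1.join(df2, Seq("id"), "inner").columns
// -> Array(id, type, account, value)

// Expression join: both "id" columns are kept and can only be told
// apart through the parent dataframe references, e.g. df1("id")
df1.join(df2, df1("id") === df2("id"), "inner").columns
// -> Array(id, type, account, id, value)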
A slightly modified version of the solution above: select only the required columns from each DataFrame before the join, which has a little less overhead since fewer columns take part in the JOIN operation.
val dfJoinResult = df1.select("column1", "column2").join(df2.select("col1"), joinExpr, joinType)
But remember to also select the columns you will join on, because the selection happens first and the join then runs only on the columns that remain.
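Applied to the df1 and df2 from the question, a sketch of this pre-selection (keeping the "id" join column in both selects) would be:
val dfJoinResult = df1.select("id", "type", "account")
  .join(df2.select("id", "value"), Seq("id"), "inner")
  .select("type", "account", "value")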

In Spark MLlib, how do I convert a string to an integer in Scala?

As far as I know, MLlib supports only numeric (integer) features, so I want to convert strings to integers in Scala.
For example, I have many reviewerID, productID pairs in a text file.
reviewerID productID
03905X0912 ZXASQWZXAS
0325935ODD PDLFMBKGMS
...
StringIndexer is the solution. It fits into the ML pipeline as an estimator and transformer. Essentially, once you set the input column, it computes the frequency of each category and numbers them starting at 0 (most frequent first). You can add IndexToString at the end of the pipeline to map back to the original strings if required.
You can look at the ML documentation on "Extracting, transforming and selecting features" for further details.
In your case it will go like:
import org.apache.spark.ml.feature.StringIndexer
val indexer = new StringIndexer()
  .setInputCol("productID")
  .setOutputCol("productIndex")
val indexed = indexer.fit(df).transform(df)
indexed.show()
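If you later need to map the indices back to the original strings, a small sketch with IndexToString (the output column name here is just an example); it reads the label metadata that StringIndexer attached to productIndex:
import org.apache.spark.ml.feature.IndexToString

val converter = new IndexToString()
  .setInputCol("productIndex")
  .setOutputCol("productIDOriginal")
converter.transform(indexed).show()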
Alternatively, you can add a new column with a unique id for each (reviewerID, productID) row. You can add such a column in the following ways.
By monotonicallyIncreasingId:
import spark.implicits._
import org.apache.spark.sql.functions.monotonicallyIncreasingId

val data = spark.sparkContext.parallelize(Seq(
  ("123xyx", "ab"),
  ("123xyz", "cd")
)).toDF("reviewerID", "productID")

data.withColumn("uniqueReviID", monotonicallyIncreasingId).show()
By using zipWithUniqueId:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructField, StructType, LongType}

val rows = data.rdd.zipWithUniqueId.map {
  case (r: Row, id: Long) => Row.fromSeq(id +: r.toSeq)
}
val finalDf = spark.createDataFrame(
  rows,
  StructType(StructField("uniqueRevID", LongType, false) +: data.schema.fields)
)
finalDf.show()
You can also do this by using row_number() in SQL syntax:
import spark.implicits._

spark.sparkContext.parallelize(Seq(
  ("123xyx", "ab"),
  ("123xyz", "cd")
)).toDF("reviewerID", "productID").createOrReplaceTempView("review")

val tmpTable1 = spark.sqlContext.sql(
  "select row_number() over (order by reviewerID) as id, reviewerID, productID from review")
Hope this helps!

How to add corresponding Integer values in 2 different DataFrames

I have two DataFrames in my code with exactly the same dimensions, say 1,000,000 x 50. I need to add the corresponding values in both dataframes. How can I achieve that?
One option would be to add another column with ids, union both DataFrames, and then use reduceByKey. But is there a more elegant way?
Thanks.
Your approach is good. Another option is to take the underlying RDDs, zip them together, iterate over the pairs to sum the columns, and create a new dataframe using either of the original dataframe schemas.
Assuming all columns are of integer type, this code snippet should work. Please note that this was done in Spark 2.1.0.
import spark.implicits._
import org.apache.spark.sql.{DataFrame, Row}

val a: DataFrame = spark.sparkContext.parallelize(Seq(
  (1, 2),
  (3, 6)
)).toDF("column_1", "column_2")

val b: DataFrame = spark.sparkContext.parallelize(Seq(
  (3, 4),
  (1, 5)
)).toDF("column_1", "column_2")

// Merge rows: zip the two RDDs and sum column by column
val rows = a.rdd.zip(b.rdd).map {
  case (rowLeft, rowRight) =>
    val totalColumns = rowLeft.schema.fields.size
    val summedRow = for (i <- 0 until totalColumns) yield rowLeft.getInt(i) + rowRight.getInt(i)
    Row.fromSeq(summedRow)
}

// Create the new data frame
val ab: DataFrame = spark.createDataFrame(rows, a.schema) // use either of the schemas
ab.show()
Update:
So, I experimented with the performance of my solution versus yours. I tested with 100,000 rows, each with 50 columns (51 in your approach, since the extra one is the ID column). On a single machine (no cluster), my solution seems to be a bit faster.
The union and group-by approach takes about 5598 milliseconds,
whereas my solution takes about 5378 milliseconds.
My assumption is that the first solution takes a bit longer because of the union of the two dataframes.
Here are the methods which I created for testing the approaches.
def option_1()(implicit spark: SparkSession): Unit = {
  import spark.implicits._

  val a: DataFrame = getDummyData(withId = true)
  val b: DataFrame = getDummyData(withId = true)

  val allData = a.union(b)
  val result = allData.groupBy($"id").agg(allData.columns.collect({ case col if col != "id" => (col, "sum") }).toMap)
  println(result.count())
  // result.show()
}

def option_2()(implicit spark: SparkSession): Unit = {
  val a: DataFrame = getDummyData()
  val b: DataFrame = getDummyData()

  // Merge rows
  val rows = a.rdd.zip(b.rdd).map {
    case (rowLeft, rowRight) =>
      val totalColumns = rowLeft.schema.fields.size
      val summedRow = for (i <- 0 until totalColumns) yield rowLeft.getInt(i) + rowRight.getInt(i)
      Row.fromSeq(summedRow)
  }

  // Create new data frame
  val result: DataFrame = spark.createDataFrame(rows, a.schema) // use either of the schemas
  println(result.count())
  // result.show()
}

Merge multiple Dataframes into one Dataframe in Spark [duplicate]

I have two DataFrames, a and b.
a is like
Column 1 | Column 2
abc | 123
cde | 23
b is like
Column 1
1
2
I want to zip a and b (or even more DataFrames) into something like:
Column 1 | Column 2 | Column 3
abc | 123 | 1
cde | 23 | 2
How can I do it?
An operation like this is not supported by the DataFrame API. It is possible to zip two RDDs, but to make it work you have to match both the number of partitions and the number of elements per partition. Assuming this is the case:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructField, StructType, LongType}
val a: DataFrame = sc.parallelize(Seq(
("abc", 123), ("cde", 23))).toDF("column_1", "column_2")
val b: DataFrame = sc.parallelize(Seq(Tuple1(1), Tuple1(2))).toDF("column_3")
// Merge rows
val rows = a.rdd.zip(b.rdd).map {
  case (rowLeft, rowRight) => Row.fromSeq(rowLeft.toSeq ++ rowRight.toSeq)
}
// Merge schemas
val schema = StructType(a.schema.fields ++ b.schema.fields)
// Create new data frame
val ab: DataFrame = sqlContext.createDataFrame(rows, schema)
If the above conditions are not met, the only option that comes to mind is adding an index and joining:
def addIndex(df: DataFrame) = sqlContext.createDataFrame(
  // Add index
  df.rdd.zipWithIndex.map { case (r, i) => Row.fromSeq(r.toSeq :+ i) },
  // Create schema
  StructType(df.schema.fields :+ StructField("_index", LongType, false))
)
// Add indices
val aWithIndex = addIndex(a)
val bWithIndex = addIndex(b)
// Join and clean
val ab = aWithIndex
.join(bWithIndex, Seq("_index"))
.drop("_index")
In Scala's implementation of DataFrames, there is no simple way to concatenate two dataframes into one. We can work around this limitation by adding indices to each row of the dataframes and then doing an inner join on these indices. This is my stub code for this implementation:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.monotonicallyIncreasingId

val a: DataFrame = sc.parallelize(Seq(("abc", 123), ("cde", 23))).toDF("column_1", "column_2")
val aWithId: DataFrame = a.withColumn("id", monotonicallyIncreasingId)
val b: DataFrame = sc.parallelize(Seq((1), (2))).toDF("column_3")
val bWithId: DataFrame = b.withColumn("id", monotonicallyIncreasingId)

aWithId.join(bWithId, "id")
A little light reading - Check out how Python does this!
What about pure SQL?
SELECT
  room_name,
  sender_nickname,
  message_id,
  row_number() over (partition by room_name order by message_id) as message_index,
  row_number() over (partition by room_name, sender_nickname order by message_id) as user_message_index
FROM messages
ORDER BY room_name, message_id
I know the OP was using Scala, but if, like me, you need to know how to do this in PySpark, try the Python code below. Like #zero323's first solution, it relies on RDD.zip() and will therefore fail if the two DataFrames don't have the same number of partitions and the same number of rows in each partition.
from pyspark.sql import Row
from pyspark.sql.types import StructType
def zipDataFrames(left, right):
    CombinedRow = Row(*(left.columns + right.columns))

    def flattenRow(row):
        left = row[0]
        right = row[1]
        combinedVals = [left[col] for col in left.__fields__] + [right[col] for col in right.__fields__]
        return CombinedRow(*combinedVals)

    zippedRdd = left.rdd.zip(right.rdd).map(lambda row: flattenRow(row))
    combinedSchema = StructType(left.schema.fields + right.schema.fields)
    return zippedRdd.toDF(combinedSchema)

joined = zipDataFrames(a, b)