Appending rows to a dataframe - scala

I want to achieve the following for a Spark DataFrame: I want to keep appending new rows to a DataFrame, as shown in the example below.
for (a <- value) {
  val num = a
  val count = a + 10
  // creating a df with the above values //
  val data = Seq((num.asInstanceOf[Double], count.asInstanceOf[Double]))
  val row = spark.sparkContext.parallelize(data).toDF("Number", "count")
  val data2 = data1.union(row)
  val data1 = data2   // --> currently this assignment is not possible
}
I have also tried
for (a <- value) {
  val num = a
  val count = a + 10
  // creating a df with the above values //
  val data = Seq((num.asInstanceOf[Double], count.asInstanceOf[Double]))
  val row = spark.sparkContext.parallelize(data).toDF("Number", "count")
  val data1 = data1.union(row)   // --> Union with self is not possible
}
How can I achieve this in Spark?

DataFrames are immutable, so you will need to use a mutable structure. Here is a solution that might help you.
scala> val value = Array(1.0, 2.0, 55.0)
value: Array[Double] = Array(1.0, 2.0, 55.0)
scala> import scala.collection.mutable.ListBuffer
import scala.collection.mutable.ListBuffer
scala> var data = new ListBuffer[(Double, Double)]
data: scala.collection.mutable.ListBuffer[(Double, Double)] = ListBuffer()
scala> for(a <- value)
| {
| val num = a
| val count = a+10
| data += ((num.asInstanceOf[Double], count.asInstanceOf[Double]))
| println(data)
| }
ListBuffer((1.0,11.0))
ListBuffer((1.0,11.0), (2.0,12.0))
ListBuffer((1.0,11.0), (2.0,12.0), (55.0,65.0))
scala> val DF = spark.sparkContext.parallelize(data).toDF("Number","count")
DF: org.apache.spark.sql.DataFrame = [Number: double, count: double]
scala> DF.show()
+------+-----+
|Number|count|
+------+-----+
| 1.0| 11.0|
| 2.0| 12.0|
| 55.0| 65.0|
+------+-----+
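Outside the spark-shell (which pre-imports spark.implicits._ for you), the same conversion needs the import spelled out; a minimal sketch, assuming spark is your SparkSession and data is the ListBuffer built above:
import spark.implicits._              // brings toDF into scope for local collections

// ListBuffer is a Seq, so it converts directly, without going through parallelize
val df = data.toDF("Number", "count")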

Just create one DataFrame using the for-loop and then union with data1 like this:
val df = (for (a <- values) yield (a, a + 10)).toDF("Number", "count")
val result = data1.union(df)
This would be much more efficient than doing unions inside the for-loop.
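A self-contained sketch of the same approach, with a hypothetical values sequence; toDF on a local collection needs the session implicits, and data1 is assumed to already exist with the columns Number and count:
import spark.implicits._                          // needed for .toDF on local collections

val values = Seq(1.0, 2.0, 55.0)                  // hypothetical input
val df = (for (a <- values) yield (a, a + 10)).toDF("Number", "count")
val result = data1.union(df)
result.show()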

Your data1 must be declared as a var:
var data1: DataFrame = ???
for (a <- value) {
  val num = a
  val count = a + 10
  // creating a df with the above values //
  val data = Seq((num.toDouble, count.toDouble))
  val row = spark.sparkContext.parallelize(data).toDF("Number", "count")
  val data2 = data1.union(row)
  data1 = data2
}
But I would not suggest doing this; it is better to convert your entire value (it must be a Seq?) to a dataframe and union once. Many unions tend to be inefficient.
val newDF = value.toDF("Number")
  .withColumn("count", $"Number" + 10)
val result = data1.union(newDF)
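If you are on Spark 2.3 or later, unionByName is a slightly safer variant here, since it matches columns by name rather than by position; a sketch:
val result = data1.unionByName(newDF)   // Number and count are matched by name (Spark 2.3+)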

Related

Spark scala dataframe get value for each row and assign to variables

I have a dataframe like the one below:
val df=spark.sql("select * from table")
row1|row2|row3
A1,B1,C1
A2,B2,C2
A3,B3,C3
I want to iterate with a for loop to get values like this:
val value1="A1"
val value2="B1"
val value3="C1"
function(value1,value2,value3)
Please help me.
You have 2 options:
Solution 1 - If your data is big, you must stick with DataFrames, so to apply a function to every row you must define a UDF.
Solution 2 - If your data is small, you can collect the data to the driver machine and then iterate with a map.
Example:
val df = Seq((1,2,3), (4,5,6)).toDF("a", "b", "c")
def sum(a: Int, b: Int, c: Int) = a+b+c
// Solution 1
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{udf, struct}
val myUDF = udf((r: Row) => sum(r.getAs[Int](0), r.getAs[Int](1), r.getAs[Int](2)))
df.select(myUDF(struct($"a", $"b", $"c")).as("sum")).show
//Solution 2
df.collect.map(r=> sum(r.getAs[Int](0), r.getAs[Int](1), r.getAs[Int](2)))
Output of Solution 1 (Solution 2 returns the same sums as a local Array(6, 15) on the driver):
+---+
|sum|
+---+
| 6|
| 15|
+---+
EDIT:
val myUDF = udf((r: Row) => {
  val value1 = r.getAs[Int](0)
  val value2 = r.getAs[Int](1)
  val value3 = r.getAs[Int](2)
  myFunction(value1, value2, value3)
})
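For completeness, this edited UDF is applied exactly like the sum example above; myFunction is assumed to be your own function returning a type Spark SQL can encode:
df.select(myUDF(struct($"a", $"b", $"c")).as("result")).show()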

Spark 2.3: subtract dataframes but preserve duplicate values (Scala)

Copying example from this question:
As a conceptual example, if I have two dataframes:
words = [the, quick, fox, a, brown, fox]
stopWords = [the, a]
then I want the output to be, in any order:
words - stopWords = [quick, brown, fox, fox]
ExceptAll can do this in 2.4 but I cannot upgrade. The answer in the linked question is specific to a dataframe:
words.join(stopwords, words("id") === stopwords("id"), "left_outer")
.where(stopwords("id").isNull)
.select(words("id")).show()
that is, you need to know the primary key and the other columns.
Can anyone come up with an answer that will work on any dataframe?
Here is an implementation for you all. I have tested it in Spark 2.4.2; it should work for 2.3 too (not 100% sure).
val df1 = spark.createDataset(Seq("the","quick","fox","a","brown","fox")).toDF("c1")
val df2 = spark.createDataset(Seq("the","a")).toDF("c1")
exceptAllCustom(df1, df2, Seq("c1")).show()
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.lit

def exceptAllCustom(df1: DataFrame, df2: DataFrame, pks: Seq[String]): DataFrame = {
  val notNullCondition = pks.foldLeft(lit(true))((cond, cName) => cond && df2(cName).isNull)
  val joinCondition = pks.foldLeft(lit(true))((cond, cName) => cond && df2(cName) === df1(cName))
  val result = df1.join(df2, joinCondition, "left_outer")
    .where(notNullCondition)
  pks.foldLeft(result)((df, cName) => df.drop(df2(cName)))
}
Result -
+-----+
| c1|
+-----+
|quick|
| fox|
|brown|
| fox|
+-----+
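The same anti-join can also be written with Spark's built-in left_anti join type (available in 2.x), which keeps every df1 row that has no match in df2, duplicates included; a sketch reusing the pks parameter (hypothetical name exceptAllCustom2):
def exceptAllCustom2(df1: DataFrame, df2: DataFrame, pks: Seq[String]): DataFrame = {
  // left_anti returns only df1's columns, so no drop step is needed
  val joinCondition = pks.map(c => df1(c) === df2(c)).reduce(_ && _)
  df1.join(df2, joinCondition, "left_anti")
}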
Turns out it's easier to do df1.except(df2) and then join the results with df1 to get all the duplicates.
Full code:
import org.apache.spark.sql.{Column, DataFrame}

def exceptAllCustom(df1: DataFrame, df2: DataFrame): DataFrame = {
  val except = df1.except(df2)
  val columns = df1.columns
  val colExpr: Column = df1(columns.head) <=> except(columns.head)
  val joinExpression = columns.tail.foldLeft(colExpr) { (acc, c) =>
    acc && df1(c) <=> except(c)
  }
  val join = df1.join(except, joinExpression, "inner")
  join.select(df1("*"))
}
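A quick usage sketch against the words/stopWords data from the question (assuming spark.implicits._ is in scope for toDF):
val words = Seq("the", "quick", "fox", "a", "brown", "fox").toDF("c1")
val stopWords = Seq("the", "a").toDF("c1")
exceptAllCustom(words, stopWords).show()   // quick, fox, brown, fox (order may vary)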

Removing the Option type from a joined RDD

There are two RDDs:
val pairRDD1 = sc.parallelize(List( ("cat",2), ("girl", 5), ("book", 4),("Tom", 12)))
val pairRDD2 = sc.parallelize(List( ("cat",2), ("cup", 5), ("mouse", 4),("girl", 12)))
And then I will do this join operation.
val kk = pairRDD1.fullOuterJoin(pairRDD2).collect
It shows this:
kk: Array[(String, (Option[Int], Option[Int]))] = Array((book,(Some(4),None)), (Tom,(Some(12),None)), (girl,(Some(5),Some(12))), (mouse,(None,Some(4))), (cup,(None,Some(5))), (cat,(Some(2),Some(2))))
If I would like to fill the None values with 0 and transform Option[Int] to Int, what should I code? Thanks!
You can use mapValues on the joined RDD as follows (note this is before the collect):
pairRDD1.fullOuterJoin(pairRDD2).mapValues(pair => (pair._1.getOrElse(0), pair._2.getOrElse(0)))
This has to be done on the RDD before the collect; if you have already collected, you can instead do:
kk.map { case (k, pair) => (k, (pair._1.getOrElse(0), pair._2.getOrElse(0))) }
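Putting it together on the RDDs from the question (output order may vary):
val filled = pairRDD1.fullOuterJoin(pairRDD2)
  .mapValues { case (l, r) => (l.getOrElse(0), r.getOrElse(0)) }

filled.collect()
// e.g. Array((book,(4,0)), (Tom,(12,0)), (girl,(5,12)), (mouse,(0,4)), (cup,(0,5)), (cat,(2,2)))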
Based on the comments on the first answer, if you are fine using DataFrames, you can do this with any number of columns.
val ss = SparkSession.builder().master("local[*]").getOrCreate()
val sc = ss.sparkContext
import ss.implicits._
val pairRDD1 = sc.parallelize(List(("cat", 2,9999), ("girl", 5,8888), ("book", 4,9999), ("Tom", 12,6666)))
val pairRDD2 = sc.parallelize(List(("cat", 2,9999), ("cup", 5,7777), ("mouse", 4,3333), ("girl", 12,1111)))
val df1 = pairRDD1.toDF
val df2 = pairRDD2.toDF
val joined = df1.join(df2, df1.col("_1") === df2.col("_1"),"fullouter")
joined.show()
Here _1, _2, etc. are the default column names provided by Spark, but if you wish to have proper names you can change them as you wish.
Result:
+----+----+----+-----+----+----+
| _1| _2| _3| _1| _2| _3|
+----+----+----+-----+----+----+
|girl| 5|8888| girl| 12|1111|
| Tom| 12|6666| null|null|null|
| cat| 2|9999| cat| 2|9999|
|null|null|null| cup| 5|7777|
|null|null|null|mouse| 4|3333|
|book| 4|9999| null|null|null|
+----+----+----+-----+----+----+
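To mirror the original ask (None/null replaced by 0), you can give the columns proper names and fill the null numeric cells with na.fill; a sketch built on the df1/df2 above (the new column names are just illustrative):
val joined = df1.toDF("name1", "n1", "code1")
  .join(df2.toDF("name2", "n2", "code2"), $"name1" === $"name2", "fullouter")
  .na.fill(0)   // replaces nulls in the numeric columns with 0
joined.show()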

How to split in Spark?

I have data in one RDD and the data is as follows:
scala> c_data
res31: org.apache.spark.rdd.RDD[String] = /home/t_csv MapPartitionsRDD[26] at textFile at <console>:25
scala> c_data.count()
res29: Long = 45212
scala> c_data.take(2).foreach(println)
age;job;marital;education;default;balance;housing;loan;contact;day;month;duration;campaign;pdays;previous;poutcome;y
58;management;married;tertiary;no;2143;yes;no;unknown;5;may;261;1;-1;0;unknown;no
I want to split the data into another RDD, and I am using:
scala> val csv_data = c_data.map{x=>
| val w = x.split(";")
| val age = w(0)
| val job = w(1)
| val marital_stat = w(2)
| val education = w(3)
| val default = w(4)
| val balance = w(5)
| val housing = w(6)
| val loan = w(7)
| val contact = w(8)
| val day = w(9)
| val month = w(10)
| val duration = w(11)
| val campaign = w(12)
| val pdays = w(13)
| val previous = w(14)
| val poutcome = w(15)
| val Y = w(16)
| }
That returns:
csv_data: org.apache.spark.rdd.RDD[Unit] = MapPartitionsRDD[28] at map at <console>:27
When I query csv_data, it returns Array((),...).
How can I get the data with the first row as a header and the rest as data?
Where am I going wrong?
Thanks in advance.
Your mapping function returns Unit, so you map to an RDD[Unit]. You can get a tuple of your values by changing your code to:
val csv_data = c_data.map { x =>
  val w = x.split(";")
  ...
  val Y = w(16)
  (w, age, job, marital_stat, education, default, balance, housing, loan, contact, day, month, duration, campaign, pdays, previous, poutcome, Y)
}
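The question also asks how to treat the first row as a header; two common patterns, sketched under the assumption of Spark 2.x and the /home/t_csv path shown in the question:
// RDD route: drop the header line before splitting
val header = c_data.first()
val rows = c_data.filter(_ != header).map(_.split(";"))

// DataFrame route: let the CSV reader parse the header and the ';' delimiter
val df = spark.read
  .option("header", "true")
  .option("delimiter", ";")
  .csv("/home/t_csv")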

How many ways are there to add a new column to a data frame RDD in the Spark API?

I can think of only one, using withColumn():
val df = sc.dataFrame.withColumn('newcolname',{ lambda row: row + 1 } )
But how would I generalize this to text data? For example, if my DataFrame had string values, say "This is an example of a string", and I wanted to extract the first and last word as in val arraystring: Array[String] = Array(first, last)?
Is this the thing you're looking for?
val sc: SparkContext = ...
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
import org.apache.spark.sql.functions.{col, udf}
val extractFirstWord = udf((sentence: String) => sentence.split(" ").head)
val extractLastWord = udf((sentence: String) => sentence.split(" ").reverse.head)
val sentences = sc.parallelize(Seq("This is an example", "And this is another one", "One_word", "")).toDF("sentence")
val splits = sentences
.withColumn("first_word", extractFirstWord(col("sentence")))
.withColumn("last_word", extractLastWord(col("sentence")))
splits.show()
Then the output is:
+--------------------+----------+---------+
| sentence|first_word|last_word|
+--------------------+----------+---------+
| This is an example| This| example|
|And this is anoth...| And| one|
| One_word| One_word| One_word|
| | | |
+--------------------+----------+---------+
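If you would rather avoid UDFs, the built-in split and element_at functions can do the same job (element_at requires Spark 2.4+); a sketch:
import org.apache.spark.sql.functions.{col, element_at, split}

val words = split(col("sentence"), " ")
val splits2 = sentences
  .withColumn("first_word", words.getItem(0))
  .withColumn("last_word", element_at(words, -1))   // -1 picks the last array element
splits2.show()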
# Create a simple DataFrame, stored into a partition directory
df1 = sqlContext.createDataFrame(sc.parallelize(range(1, 6))\
.map(lambda i: Row(single=i, double=i * 2)))
df1.save("data/test_table/key=1", "parquet")
# Create another DataFrame in a new partition directory,
# adding a new column and dropping an existing column
df2 = sqlContext.createDataFrame(sc.parallelize(range(6, 11))
.map(lambda i: Row(single=i, triple=i * 3)))
df2.save("data/test_table/key=2", "parquet")
# Read the partitioned table
df3 = sqlContext.parquetFile("data/test_table")
df3.printSchema()
https://spark.apache.org/docs/1.3.1/sql-programming-guide.html
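For reference, a rough Scala (Spark 2.x) equivalent of reading that partitioned table back with schema merging, using the same hypothetical paths as the snippet above:
// mergeSchema reconciles the differing columns (single, double, triple)
// across the key=1 and key=2 partition directories
val df3 = spark.read.option("mergeSchema", "true").parquet("data/test_table")
df3.printSchema()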