Group and aggregate dataset in Spark Scala without using spark.sql()

I have a dataset with account information of customers as below:
customerID  accountID  balance
ID001       ACC001     20
ID002       ACC002     400
ID003       ACC003     500
ID002       ACC004     30
I want to group by and aggregate the above data to get the output below, without using spark.sql(); only the Dataset/DataFrame API is allowed:
accounts                                 number of accounts  totalBalance  averageBalance
[ID001,ACC001,20]                        1                   20            20
[[ID002,ACC002,400], [ID002,ACC004,30]]  2                   430           215
[ID003,ACC003,500]                       1                   500           500
I tried ds.groupBy("accountID").agg(Map("balance" -> "avg")), but with the Map form I can only compute the average. I need help doing multiple aggregations without using spark.sql.
Any help to achieve the above is appreciated. Thanks.

Here is your solution:
import org.apache.spark.sql.functions.{count, sum, avg}
import spark.implicits._

val cust_data = Seq[(String, String, Int)](
  ("ID001", "ACC001", 20),
  ("ID002", "ACC002", 400),
  ("ID003", "ACC003", 500),
  ("ID002", "ACC004", 30)).toDF("customerID", "accountID", "balance")

// multiple aggregations in a single agg() call
val out_df = cust_data.groupBy("customerID").agg(
  count($"accountID").alias("number_of_accounts"),
  sum($"balance").alias("totalBalance"),
  avg($"balance").alias("averageBalance"))

out_df.show()
+----------+------------------+------------+--------------+
|customerID|number_of_accounts|totalBalance|averageBalance|
+----------+------------------+------------+--------------+
| ID001| 1| 20| 20.0|
| ID002| 2| 430| 215.0|
| ID003| 1| 500| 500.0|
+----------+------------------+------------+--------------+
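If you also need the accounts column from the expected output (the list of account rows per customer), here is a minimal sketch using collect_list over a struct, assuming the same cust_data DataFrame as above:
import org.apache.spark.sql.functions.{collect_list, struct, count, sum, avg}

// collect each customer's account rows into an array of structs,
// alongside the numeric aggregations
val withAccounts = cust_data.groupBy("customerID").agg(
  collect_list(struct($"customerID", $"accountID", $"balance")).alias("accounts"),
  count($"accountID").alias("number_of_accounts"),
  sum($"balance").alias("totalBalance"),
  avg($"balance").alias("averageBalance"))

withAccounts.show(false)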

Related

how to join two dataframes and subtract two columns from the dataframe

I have two dataframes which look like below. I am trying to find the difference between the two amounts based on ID.
Dataframe 1:
ID    I     Amt
1     null  200
null  2     200
3     null  600
Dataframe 2:
ID    I     Amt
2     null  300
3     null  400
Output:
ID    Amt(df2-df1)
2     100
3     -200
Query that doesn't work (the subtraction doesn't work):
df = df1.join(df2, df1["coalesce(ID, I)"] == df2["coalesce(ID, I)"], 'inner')
       .select((df1["Amt"]) - (df2["Amt"]), df1["coalesce(ID, I)"]).show()
I would do a couple of things differently. To make it easier to know what column is in what dataframe, I would rename them. I would also do the coalesce outside of the join itself.
import org.apache.spark.sql.functions.coalesce  // also needs import spark.implicits._ for $

val joined = df1.withColumn("joinKey", coalesce($"ID", $"I")).select($"joinKey", $"Amt".alias("DF1_AMT")).join(
  df2.withColumn("joinKey", coalesce($"ID", $"I")).select($"joinKey", $"Amt".alias("DF2_AMT")), "joinKey")
Then you can easily perform your calculation:
joined.withColumn("DIFF",$"DF2_AMT" - $"DF1_AMT").show
+-------+-------+-------+------+
|joinKey|DF1_AMT|DF2_AMT| DIFF|
+-------+-------+-------+------+
| 2| 200| 300| 100.0|
| 3| 600| 400|-200.0|
+-------+-------+-------+------+
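For completeness, a minimal self-contained sketch of the same approach, assuming the two input dataframes are built from the sample data in the question:
import org.apache.spark.sql.functions.coalesce
import spark.implicits._

// recreate the question's sample data (nullable ID / I columns)
val df1 = Seq[(Option[Int], Option[Int], Int)]((Some(1), None, 200), (None, Some(2), 200), (Some(3), None, 600))
  .toDF("ID", "I", "Amt")
val df2 = Seq[(Option[Int], Option[Int], Int)]((Some(2), None, 300), (Some(3), None, 400))
  .toDF("ID", "I", "Amt")

// coalesce the two key columns before the join, then subtract
val joined = df1.withColumn("joinKey", coalesce($"ID", $"I")).select($"joinKey", $"Amt".alias("DF1_AMT"))
  .join(df2.withColumn("joinKey", coalesce($"ID", $"I")).select($"joinKey", $"Amt".alias("DF2_AMT")), "joinKey")

joined.withColumn("DIFF", $"DF2_AMT" - $"DF1_AMT").show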

How to add multiple columns in a spark dataframe using SCALA

I have a condition where I have to add 5 columns (to an existing DF) for 5 months of a year.
The existing DF is like:
EId  EName  Esal
1    abhi   1100
2    raj    300
3    nanu   400
4    ram    500
The Output should be as follows:
EId  EName  Esal  Jan   Feb   March  April  May
1    abhi   1100  1100  1100  1100   1100   1100
2    raj    300   300   300   300    300    300
3    nanu   400   400   400   400    400    400
4    ram    500   500   500   500    500    500
I can do this one by one with withColumn, but that takes a lot of time.
Is there a way I can run a loop and keep adding columns until my conditions are exhausted?
Many thanks in advance.
You can use foldLeft. You'll need to create a List of the columns that you want.
df.show
+---+----+----+
| id|name| sal|
+---+----+----+
| 1| A|1100|
+---+----+----+
val list = List("Jan", "Feb" , "Mar", "Apr") // ... you get the idea
list.foldLeft(df)((df, month) => df.withColumn(month , $"sal" ) ).show
+---+----+----+----+----+----+----+
| id|name| sal| Jan| Feb| Mar| Apr|
+---+----+----+----+----+----+----+
| 1| A|1100|1100|1100|1100|1100|
+---+----+----+----+----+----+----+
So, basically, you fold over the list you created, starting with the original dataframe and applying the transformation for each element as you traverse the list.
Yes, you can do the same using foldLeft. foldLeft traverses the elements in the collection from left to right, threading an accumulated value along.
So you can store the desired columns in a List.
For Example:
import org.apache.spark.sql.functions.lit
import spark.implicits._

val BazarDF = Seq(
  ("Veg", "tomato", 1.99),
  ("Veg", "potato", 0.45),
  ("Fruit", "apple", 0.99),
  ("Fruit", "pineapple", 2.59)
).toDF("Type", "Item", "Price")
Create a List with the column names and the values to fill them with (here the string "null" is used as an example value):
val ColNameWithDatatype = List(
  ("Jan", lit("null").as("StringType")),
  ("Feb", lit("null").as("StringType")))
// fold over the list, adding one column per element
val BazarWithColumnDF1 = ColNameWithDatatype.foldLeft(BazarDF) { (tempDF, colName) =>
  tempDF.withColumn(colName._1, colName._2)
}
Keep in mind that the withColumn method of DataFrame can cause performance issues when called in a loop:
Spark DAG differs with 'withColumn' vs 'select'
This is even mentioned in the Dataset API docs:
https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/Dataset.html#withColumn(colName:String,col:org.apache.spark.sql.Column):org.apache.spark.sql.DataFrame
this method introduces a projection internally. Therefore, calling it multiple times, for instance, via loops in order to add multiple columns can generate big plans which can cause performance issues and even StackOverflowException. To avoid this, use select with the multiple columns at once.
The safer way is to do it with select:
import org.apache.spark.sql.functions.col

val months = List("Jan", "Feb", "Mar", "Apr", "May")
val monthsColumns = months.map { month: String =>
  col("sal").as(month)
}
val updatedDf = df.select(df.columns.map(col) ++ monthsColumns: _*)
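For reference, a minimal end-to-end sketch of the select approach on the question's data (the EId/EName/Esal column names are taken from the question):
import org.apache.spark.sql.functions.col
import spark.implicits._

val df = Seq((1, "abhi", 1100), (2, "raj", 300), (3, "nanu", 400), (4, "ram", 500))
  .toDF("EId", "EName", "Esal")

val months = List("Jan", "Feb", "March", "April", "May")
// one projection that keeps the existing columns and adds all month columns at once
val updatedDf = df.select(df.columns.map(col) ++ months.map(m => col("Esal").as(m)): _*)
updatedDf.show()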

Spark2 Dataframe/RDD process in groups

I have the following table stored in Hive called ExampleData:
+--------+-----+---+
|Site_ID |Time |Age|
+--------+-----+---+
|1       |10:00| 20|
|1       |11:00| 21|
|2       |10:00| 24|
|2       |11:00| 24|
|2       |12:00| 20|
|3       |11:00| 24|
+--------+-----+---+
I need to be able to process the data by Site. Unfortunately, partitioning it by Site doesn't work (there are over 100k sites, all with fairly small amounts of data).
For each site, I need to select the Time column and the Age column separately, and use them to feed into a function (which ideally I want to run on the executors, not the driver).
I've got a stub of how I think I want it to work, but this solution would only run on the driver, so it's very slow. I need to find a way of writing it so it will run at executor level:
// fetch a list of distinct sites and return them to the driver
// (if you don't, you won't be able to loop around them as they're not on the executors)
val distinctSites = spark.sql("SELECT site_id FROM ExampleData GROUP BY site_id LIMIT 10")
  .collect

val allSiteData = spark.sql("SELECT site_id, time, age FROM ExampleData")

distinctSites.foreach(row => {
  val siteData = allSiteData.filter("site_id = " + row.get(0))
  val times = siteData.select("time").collect()
  val ages = siteData.select("age").collect()
  processTimesAndAges(times, ages)
})

def processTimesAndAges(times: Array[Row], ages: Array[Row]) {
  // do some processing
}
I've tried broadcasting the distinctSites across all nodes, but this did not prove fruitful.
This seems such a simple concept and yet I have spent a couple of days looking into this. I'm very new to Scala/Spark, so apologies if this is a ridiculous question!
Any suggestions or tips are greatly appreciated.
The RDD API provides a number of functions which can be used to perform operations in groups, starting with the low-level repartition / repartitionAndSortWithinPartitions and ending with a number of *byKey methods (combineByKey, groupByKey, reduceByKey, etc.).
Example:
// key by site_id, group all records for a site together,
// then process each partition of grouped data on the executors
rdd.map( tup => (tup._1, tup) ).
  groupByKey().
  foreachPartition( iter => doSomeJob(iter) )
With DataFrames you can use aggregate functions; the GroupedData class provides a number of methods for the most common aggregations, including count, max, min, mean and sum.
Example:
val df = sc.parallelize(Seq(
  (1, 10.3, 10), (1, 11.5, 10),
  (2, 12.6, 20), (3, 2.6, 30))
).toDF("Site_ID", "Time", "Age")
df.show()
+-------+----+---+
|Site_ID|Time|Age|
+-------+----+---+
|      1|10.3| 10|
|      1|11.5| 10|
|      2|12.6| 20|
|      3| 2.6| 30|
+-------+----+---+
df.groupBy($"Site_ID ").count.show
+-------+-----+
|Site_ID|count|
+-------+-----+
|      1|    2|
|      3|    1|
|      2|    1|
+-------+-----+
Note: as you have mentioned that the solution is very slow, you need to partition the data; in your case range partitioning is a good option.
http://dev.sortable.com/spark-repartition/
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-rdd-partitions.html
http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-1/
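Since the goal is to run processTimesAndAges per site on the executors, a minimal sketch using the typed Dataset API (groupByKey plus mapGroups) may also help; the case class and the placeholder result type here are assumptions for illustration:
import spark.implicits._

// assumed record type matching the ExampleData table
case class SiteRecord(site_id: Int, time: String, age: Int)

val siteData = spark.sql("SELECT site_id, time, age FROM ExampleData").as[SiteRecord]

// each group (one site) is processed inside mapGroups, i.e. on the executors
val results = siteData
  .groupByKey(_.site_id)
  .mapGroups { (siteId, records) =>
    val recs = records.toSeq
    val times = recs.map(_.time)
    val ages = recs.map(_.age)
    // placeholder: replace with the real per-site processing logic
    (siteId, ages.sum.toDouble / ages.size)
  }

results.show()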

Get Unique records in Spark [duplicate]

This question already has answers here:
How to select the first row of each group?
(9 answers)
Closed 5 years ago.
I have a dataframe df as mentioned below:
customers  product  val_id  rule_name  rule_id  priority
1          A        1       ABC        123      1
3          Z        r       ERF        789      2
2          B        X       ABC        123      2
2          B        X       DEF        456      3
1          A        1       DEF        456      2
I want to create a new dataframe df2 that has only unique customer ids. Since the rule_name and rule_id columns differ for the same customer, I want to pick the record with the highest priority for each customer, so my final outcome should be:
customers  product  val_id  rule_name  rule_id  priority
1          A        1       ABC        123      1
3          Z        r       ERF        789      2
2          B        X       ABC        123      2
Can anyone please help me achieve this using Spark Scala? Any help will be appreciated.
You basically want to select rows with extreme values in a column. This is a really common issue, so there's even a whole tag greatest-n-per-group. Also see this question SQL Select only rows with Max Value on a Column which has a nice answer.
Here's an example for your specific case.
Note that this could select multiple rows for a customer, if there are multiple rows for that customer with the same (minimum) priority value.
This example is in pyspark, but it should be straightforward to translate to Scala
from pyspark.sql import functions as F

# find the best (minimum) priority for each customer; this DF has only two columns
cusPriDF = df.groupBy("customers").agg( F.min(df["priority"]).alias("priority") )
# now join back to choose only those rows and get all columns back
bestRowsDF = df.join(cusPriDF, on=["customers","priority"], how="inner")
To create df2 you first order df by priority and then take the first row per customer. Like this:
import org.apache.spark.sql.functions.first

val columns = df.schema.map(_.name).filterNot(_ == "customers").map(c => first(c).as(c))
val df2 = df.orderBy("priority").groupBy("customers").agg(columns.head, columns.tail: _*)
df2.show
It would give you expected output:
+----------+--------+-------+----------+--------+---------+
| customers| product| val_id| rule_name| rule_id| priority|
+----------+--------+-------+----------+--------+---------+
| 1| A| 1| ABC| 123| 1|
| 3| Z| r| ERF| 789| 2|
| 2| B| X| ABC| 123| 2|
+----------+--------+-------+----------+--------+---------+
Corey beat me to it, but here's the Scala version:
import org.apache.spark.sql.functions.min
import spark.implicits._

val df = Seq(
  (1, "A", "1", "ABC", 123, 1),
  (3, "Z", "r", "ERF", 789, 2),
  (2, "B", "X", "ABC", 123, 2),
  (2, "B", "X", "DEF", 456, 3),
  (1, "A", "1", "DEF", 456, 2)).toDF("customers", "product", "val_id", "rule_name", "rule_id", "priority")

// best (minimum) priority per customer, then join back to keep the full rows
val priorities = df.groupBy("customers").agg(min(df.col("priority")).alias("priority"))
val top_rows = df.join(priorities, Seq("customers", "priority"), "inner")

top_rows.show
+---------+--------+-------+------+---------+-------+
|customers|priority|product|val_id|rule_name|rule_id|
+---------+--------+-------+------+---------+-------+
| 1| 1| A| 1| ABC| 123|
| 3| 2| Z| r| ERF| 789|
| 2| 2| B| X| ABC| 123|
+---------+--------+-------+------+---------+-------+
You will have to use min aggregation on the priority column, grouping the dataframe by customers, then inner join the original dataframe with the aggregated dataframe and select the required columns.
import org.apache.spark.sql.functions.min

val aggregatedDF = dataframe.groupBy("customers").agg(min("priority").as("priority_1"))
  .withColumnRenamed("customers", "customers_1")
val finalDF = dataframe.join(aggregatedDF, dataframe("customers") === aggregatedDF("customers_1") && dataframe("priority") === aggregatedDF("priority_1"))
finalDF.select("customers", "product", "val_id", "rule_name", "rule_id", "priority").show
You should have the desired result.
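Since the question is essentially "first row per group" (see the linked duplicate), a minimal sketch of the window-function approach is another option, reusing the df defined in the answer above; it picks exactly one row per customer even when priorities tie:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number
import spark.implicits._

// rank rows within each customer by priority and keep only the best one
val w = Window.partitionBy("customers").orderBy("priority")
val df2 = df.withColumn("rn", row_number().over(w))
  .filter($"rn" === 1)
  .drop("rn")

df2.show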

How to randomly select rows from one dataframe using information from another dataframe

I am attempting the following in Scala Spark.
I'm hoping someone can give me some guidance on how to tackle this problem or provide me with some resources to figure out what I can do.
I have a dateCountDF with a count corresponding to a date. I would like to randomly select a certain number of entries for each dateCountDF.month from another Dataframe entitiesDF where dateCountDF.FirstDate<entitiesDF.Date && entitiesDF.Date <= dateCountDF.LastDate and then place all the results into a new Dataframe. See below for a data example.
I'm not at all sure how to approach this problem from a Spark SQL or Spark MapReduce perspective. The furthest I got was the naive approach, where I use a foreach on a dataFrame and then refer to the other dataframe within the function. But this doesn't work because of the distributed nature of Spark.
val randomEntites = dateCountDF.foreach(x => {
  val count: Int = x(1).toString().toInt
  val result = entitiesDF.take(count)
  return result
})
DataFrames
**dateCountDF**
+----------+-----+
|      Date|Count|
+----------+-----+
|2016-08-31|    4|
|2015-12-31|    1|
|2016-09-30|    5|
|2016-04-30|    5|
|2015-11-30|    3|
|2016-05-31|    7|
|2016-11-30|    2|
|2016-07-31|    5|
|2016-12-31|    9|
|2014-06-30|    4|
+----------+-----+
only showing top 10 rows
**entitiesDF**
+---+----------+----------+
| ID| FirstDate|  LastDate|
+---+----------+----------+
|296|2014-09-01|2015-07-31|
|125|2015-10-01|2016-12-31|
|124|2014-08-01|2015-03-31|
|447|2017-02-01|2017-01-01|
|307|2015-01-01|2015-04-30|
|574|2016-01-01|2017-01-31|
|613|2016-04-01|2017-02-01|
|169|2009-08-23|2016-11-30|
|205|2017-02-01|2017-02-01|
|433|2015-03-01|2015-10-31|
+---+----------+----------+
only showing top 10 rows
Edit:
For clarification.
My inputs are entitiesDF and dateCountDF. I want to loop through dateCountDF and for each row I want to select a random number of entities in entitiesDF where dateCountDF.FirstDate<entitiesDF.Date && entitiesDF.Date <= dateCountDF.LastDate
To select random rows you can do something like this (the snippet below is PySpark; the same idea carries over to Scala):
import random

def sampler(df, col, records):
    # Calculate number of rows
    colmax = df.count()
    # Create a random sample of ids from the range (assumes 'col' holds sequential ids)
    vals = random.sample(range(1, colmax), records)
    # Use 'vals' to filter the DataFrame using 'isin'
    return df.filter(df[col].isin(vals))
Select the random number of rows you want, store them in a dataframe, and then append that data to the other dataframe; for this you can use unionAll.
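A minimal Scala sketch of one possible approach: join entitiesDF to each dateCountDF row whose Date falls inside the entity's [FirstDate, LastDate] range, then randomly keep at most Count entities per date. The interpretation of the range condition and the use of rand() ordering as the randomization are assumptions, not the question's exact spec:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{rand, row_number}
import spark.implicits._

// join each date to the entities whose date range covers it
// (interpreting the question's condition as FirstDate < Date <= LastDate)
val candidates = dateCountDF.join(
  entitiesDF,
  entitiesDF("FirstDate") < dateCountDF("Date") && dateCountDF("Date") <= entitiesDF("LastDate"))

// shuffle candidates within each date and keep at most Count of them
val w = Window.partitionBy("Date").orderBy(rand())
val randomEntities = candidates
  .withColumn("rn", row_number().over(w))
  .filter($"rn" <= $"Count")
  .drop("rn")

randomEntities.show()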