In pyspark, is it possible to do 2 aggregations using 1 groupBy?

What I'd like to know is if the following is permissible using pyspark:
Assume the following df:
+------+----+-----+-------+
| model|year|price|mileage|
+------+----+-----+-------+
|Galaxy|2017|27841|  17529|
|Galaxy|2017|29395|  11892|
|Novato|2018|35644|  22876|
|Novato|2018| 8765|  54817|
+------+----+-----+-------+
df.groupBy('model', 'year')\
    .agg({'price': 'sum'})\
    .agg({'mileage': 'sum'})\
    .withColumnRenamed('sum(price)', 'total_prices')\
    .withColumnRenamed('sum(mileage)', 'total_miles')
Hopefully resulting in
+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841|  17529|       57236|      29421|
|Galaxy|2017|29395|  11892|       57236|      29421|
|Novato|2018|35644|  22876|       44409|      77693|
|Novato|2018| 8765|  54817|       44409|      77693|
+------+----+-----+-------+------------+-----------+

You are not actually looking for a groupBy; you are looking for a window function or a join, because you want to extend your rows with aggregated values.
Window:
from pyspark.sql import functions as F
from pyspark.sql import Window
df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765, 54817)],
    ['model', 'year', 'price', 'mileage']
)
w = Window.partitionBy('model', 'year')
df = df.withColumn('total_prices', F.sum('price').over(w))
df = df.withColumn('total_miles', F.sum('mileage').over(w))
df.show()
Join:
from pyspark.sql import functions as F
df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765, 54817)],
    ['model', 'year', 'price', 'mileage']
)
df = df.join(
    df.groupby('model', 'year').agg(
        F.sum('price').alias('total_prices'),
        F.sum('mileage').alias('total_miles')
    ),
    ['model', 'year']
)
df.show()
Output:
+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841| 17529| 57236| 29421|
|Galaxy|2017|29395| 11892| 57236| 29421|
|Novato|2018|35644| 22876| 44409| 77693|
|Novato|2018| 8765| 54817| 44409| 77693|
+------+----+-----+-------+------------+-----------+
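A side note on the title question: the chained .agg calls in the question would not work as written, because after the first aggregation the mileage column no longer exists. If one row per (model, year) group is enough (i.e. you do not need to keep the original rows), both aggregations can be passed to a single agg call. A minimal sketch, reusing the df created above:
from pyspark.sql import functions as F
df.groupBy('model', 'year').agg(
    F.sum('price').alias('total_prices'),
    F.sum('mileage').alias('total_miles')
).show()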

Using a pandas UDF, you can compute any number of aggregations:
import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType, StructType, StructField, StringType
import pandas as pd

agg_schema = StructType([
    StructField("model", StringType(), True),
    StructField("year", IntegerType(), True),
    StructField("price", IntegerType(), True),
    StructField("mileage", IntegerType(), True),
    StructField("total_prices", IntegerType(), True),
    StructField("total_miles", IntegerType(), True)
])
@F.pandas_udf(agg_schema, F.PandasUDFType.GROUPED_MAP)
def agg(pdf):
    total_prices = pdf['price'].sum()
    total_miles = pdf['mileage'].sum()
    pdf['total_prices'] = total_prices
    pdf['total_miles'] = total_miles
    return pdf
df = spark.createDataFrame(
    [('Galaxy', 2017, 27841, 17529),
     ('Galaxy', 2017, 29395, 11892),
     ('Novato', 2018, 35644, 22876),
     ('Novato', 2018, 8765, 54817)],
    ['model', 'year', 'price', 'mileage']
)
df.groupBy('model','year').apply(agg).show()
which results in
+------+----+-----+-------+------------+-----------+
| model|year|price|mileage|total_prices|total_miles|
+------+----+-----+-------+------------+-----------+
|Galaxy|2017|27841| 17529| 57236| 29421|
|Galaxy|2017|29395| 11892| 57236| 29421|
|Novato|2018|35644| 22876| 44409| 77693|
|Novato|2018| 8765| 54817| 44409| 77693|
+------+----+-----+-------+------------+-----------+
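Note that PandasUDFType.GROUPED_MAP together with groupBy(...).apply(...) is deprecated in Spark 3.x. Assuming Spark 3.x, the equivalent call with applyInPandas would be (with agg defined as a plain Python function, i.e. without the pandas_udf decorator):
df.groupBy('model', 'year').applyInPandas(agg, schema=agg_schema).show()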

Related

Merge multiple spark rows inside dataframe by ID into one row based on update_time

We need to merge multiple rows based on ID into a single record using PySpark. If there are multiple updates to a column, then we have to select the one with the last update made to it.
Please note, NULL would mean there was no update made to the column in that instance.
So, basically, we have to create a single row with the consolidated updates made to the records.
So, for example, if this is the dataframe ...
Looking for a similar answer, but in PySpark: Merge rows in a spark scala Dataframe
------------------------------------------------------------
| id | column1 | column2 | updated_at |
------------------------------------------------------------
| 123 | update1 | <*no-update*> | 1634228709 |
| 123 | <*no-update*> | 80 | 1634228724 |
| 123 | update2 | <*no-update*> | 1634229000 |
expected output is -
------------------------------------------------------------
| id | column1 | column2 | updated_at |
------------------------------------------------------------
| 123 | update2 | 80 | 1634229000 |
Let's say that our input dataframe is:
+---+-------+----+----------+
|id |col1 |col2|updated_at|
+---+-------+----+----------+
|123|null |null|1634228709|
|123|null |80 |1634228724|
|123|update2|90 |1634229000|
|12 |update1|null|1634221233|
|12 |null |80 |1634228333|
|12 |update2|null|1634221220|
+---+-------+----+----------+
What we want is to convert updated_at to TimestampType, then order by id and updated_at in descending order:
df = df.withColumn("updated_at", F.col("updated_at").cast(TimestampType())).orderBy(
F.col("id"), F.col("updated_at").desc()
)
that gives us:
+---+-------+----+-------------------+
|id |col1 |col2|updated_at |
+---+-------+----+-------------------+
|12 |null |80 |2021-10-14 18:18:53|
|12 |update1|null|2021-10-14 16:20:33|
|12 |update2|null|2021-10-14 16:20:20|
|123|update2|90 |2021-10-14 18:30:00|
|123|null |80 |2021-10-14 18:25:24|
|123|null |null|2021-10-14 18:25:09|
+---+-------+----+-------------------+
Now group by id and take the first non-null value in each column (or null if there is none):
exp = [F.first(x, ignorenulls=True).alias(x) for x in df.columns[1:]]
df = df.groupBy(F.col("id")).agg(*exp)
And the result is:
+---+-------+----+-------------------+
|id |col1 |col2|updated_at |
+---+-------+----+-------------------+
|123|update2|90 |2021-10-14 18:30:00|
|12 |update1|80 |2021-10-14 18:18:53|
+---+-------+----+-------------------+
Here's the full example code:
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from pyspark.sql.types import TimestampType

if __name__ == "__main__":
    spark = SparkSession.builder.master("local").appName("Test").getOrCreate()
    data = [
        (123, None, None, 1634228709),
        (123, None, 80, 1634228724),
        (123, "update2", 90, 1634229000),
        (12, "update1", None, 1634221233),
        (12, None, 80, 1634228333),
        (12, "update2", None, 1634221220),
    ]
    columns = ["id", "col1", "col2", "updated_at"]
    df = spark.createDataFrame(data, columns)
    df = df.withColumn("updated_at", F.col("updated_at").cast(TimestampType())).orderBy(
        F.col("id"), F.col("updated_at").desc()
    )
    exp = [F.first(x, ignorenulls=True).alias(x) for x in df.columns[1:]]
    df = df.groupBy(F.col("id")).agg(*exp)
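Note that first() after a groupBy is documented as non-deterministic with respect to row order, so relying on the orderBy issued before the groupBy can be fragile. A sketch of a window-based alternative (applied to the original df with columns id, col1, col2, updated_at, after the cast to TimestampType but before any aggregation) that makes the ordering explicit:
from pyspark.sql import Window
import pyspark.sql.functions as F

w = Window.partitionBy("id").orderBy("updated_at").rowsBetween(
    Window.unboundedPreceding, Window.unboundedFollowing
)
merged = df.select(
    "id", *[F.last(c, ignorenulls=True).over(w).alias(c) for c in df.columns[1:]]
).dropDuplicates(["id"])
merged.show(truncate=False)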

Fetch the partial value from a column having key value pairs and assign it to new column in Spark Dataframe

I have a data frame as below
+----+-----------------------------+
|id | att |
+----+-----------------------------+
| 25 | {"State":"abc","City":"xyz"}|
| 26 | null |
| 27 | {"State":"pqr"} |
+----+-----------------------------+
I want a dataframe with columns id and City, where City is taken from the att column if it has a City attribute, else null:
+----+------+
|id | City |
+----+------+
| 25 | xyz |
| 26 | null |
| 27 | null |
+----+------+
Language: Scala
You can use from_json to parse and convert your JSON data to a Map. Then access the map item using one of:
the getItem method of the Column class
the default accessor, i.e. map("map_key")
the element_at function
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{MapType, StringType}
import sparkSession.implicits._

val df = Seq(
  (25, """{"State":"abc","City":"xyz"}"""),
  (26, null),
  (27, """{"State":"pqr"}""")
).toDF("id", "att")

val schema = MapType(StringType, StringType)

df.select($"id", from_json($"att", schema).getItem("City").as("City"))
// or df.select($"id", from_json($"att", schema)("City").as("City"))
// or df.select($"id", element_at(from_json($"att", schema), "City").as("City"))
// +---+----+
// | id|City|
// +---+----+
// | 25| xyz|
// | 26|null|
// | 27|null|
// +---+----+
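Since most of this page is PySpark, here is a minimal equivalent sketch in PySpark (assuming a SparkSession named spark) using the same from_json + map-access approach:
from pyspark.sql import functions as F
from pyspark.sql.types import MapType, StringType

df = spark.createDataFrame(
    [(25, '{"State":"abc","City":"xyz"}'), (26, None), (27, '{"State":"pqr"}')],
    ["id", "att"]
)
schema = MapType(StringType(), StringType())
df.select("id", F.from_json("att", schema).getItem("City").alias("City")).show()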

How to add month to date where number of month to be added will be coming from column name

I want to explode the columns in Spark Scala.
reference_month    M    M+1    M+2
2020-01-01         10   12     10
2020-02-01         10   12     10
The output should be like:
reference_month    Month    reference_date_id
2020-01-01         10       2020-01
2020-01-01         12       2020-02
2020-01-01         10       2020-03
2020-02-01         10       2020-02
2020-02-01         12       2020-03
2020-02-01         10       2020-04
where reference_date_id = reference_month + x (x is derived from M, M+1, M+2).
Is there any way to get the output in this format in Spark Scala?
You can use the unpivot (stack) technique of Apache Spark:
import org.apache.spark.sql.functions.expr
data.select($"reference_month", expr("stack(3, `M`, `M+1`, `M+2`) as (Month)")).show()
You can use the stack function:
import sys
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
from pyspark.sql.functions import when, concat_ws, lpad, row_number, sum, col, expr, substring, length
from pyspark.sql.window import Window

schema = StructType([
    StructField("reference_month", StringType(), True),
    StructField("M", IntegerType(), True),
    StructField("M+1", IntegerType(), True),
    StructField("M+2", IntegerType(), True)
])
mnt = [("2020-01-01", 10, 12, 10), ("2020-02-01", 10, 12, 10)]
df = spark.createDataFrame(mnt, schema)

newdf = df.withColumn("t", col("reference_month").cast("date")).drop("reference_month").withColumnRenamed("t", "reference_month")
exp = expr("""stack(3, `M`, `M+1`, `M+2`) as (Values)""")
t = newdf.select("reference_month", exp).withColumn("mnth", substring("reference_month", 6, 2)).withColumn("newmnth", col("mnth").cast("Integer")).drop("mnth")
windowval = Window.partitionBy("reference_month").orderBy("reference_month").rowsBetween(-sys.maxsize, 0)
ref_cal = t.withColumn("reference_date_id", row_number().over(windowval) - 1)
ref_cal.withColumn(
    "new_dt",
    concat_ws(
        "-",
        substring("reference_month", 1, 4),
        when(
            length(col("reference_date_id") + col("newmnth")) < 2,
            lpad(col("reference_date_id") + col("newmnth"), 2, "0")
        ).otherwise(col("reference_date_id") + col("newmnth"))
    )
).drop("newmnth", "reference_date_id").withColumnRenamed("new_dt", "reference_date_id").orderBy("reference_month").show()
+---------------+------+-----------------+
|reference_month|Values|reference_date_id|
+---------------+------+-----------------+
| 2020-01-01| 10| 2020-01|
| 2020-01-01| 12| 2020-02|
| 2020-01-01| 10| 2020-03|
| 2020-02-01| 10| 2020-02|
| 2020-02-01| 12| 2020-03|
| 2020-02-01| 10| 2020-04|
+---------------+------+-----------------+
We can create an array with M, M+1, M+2 and then explode the array to get the required dataframe.
Example:
df.selectExpr("reference_month", "array(M, `M+1`, `M+2`) as arr")
  .selectExpr("reference_month", "explode(arr) as Month")
  .show()
+---------------+-----+
|reference_month|Month|
+---------------+-----+
| 202001| 10|
| 202001| 12|
| 202001| 10|
| 202002| 10|
| 202002| 12|
| 202002| 10|
+---------------+-----+
// or
import org.apache.spark.sql.functions.array
val cols = Seq("M", "M+1", "M+2")
df.withColumn("arr", array(cols.head, cols.tail: _*))
  .drop(cols: _*)
  .selectExpr("reference_month", "explode(arr) as Month")
  .show()

Date format in pyspark

My data frame looks like -
id  date
1   2018-08-23 11:48:22
2   2019-05-03 06:22:01
3   2019-05-13 10:12:15
4   2019-01-22 16:13:29
5   2018-11-27 11:17:19
My expected output is -
id  date                 date1
1   2018-08-23 11:48:22  2018-08
2   2019-05-03 06:22:01  2019-05
3   2019-05-13 10:12:15  2019-05
4   2019-01-22 16:13:29  2019-01
5   2018-11-27 11:17:19  2018-11
How to do it in pyspark?
I think you are trying to drop the day and time details; you can use the date_format function for that:
>>> df.show()
+---+-------------------+
| id| date|
+---+-------------------+
| 1|2018-08-23 11:48:22|
| 2|2019-05-03 06:22:01|
| 3|2019-05-13 10:12:15|
| 4|2019-01-22 16:13:29|
| 5|2018-11-27 11:17:19|
+---+-------------------+
>>> import pyspark.sql.functions as F
>>>
>>> df.withColumn('date1',F.date_format(F.to_date('date','yyyy-MM-dd HH:mm:ss'),'yyyy-MM')).show()
+---+-------------------+-------+
| id| date| date1|
+---+-------------------+-------+
| 1|2018-08-23 11:48:22|2018-08|
| 2|2019-05-03 06:22:01|2019-05|
| 3|2019-05-13 10:12:15|2019-05|
| 4|2019-01-22 16:13:29|2019-01|
| 5|2018-11-27 11:17:19|2018-11|
+---+-------------------+-------+
Via the to_date and then substring functions, for example:
import pyspark.sql.functions as F
import pyspark.sql.types as T

rawData = [(1, "2018-08-23 11:48:22"),
           (2, "2019-05-03 06:22:01"),
           (3, "2019-05-13 10:12:15")]
df = spark.createDataFrame(rawData).toDF("id", "my_date")
df.withColumn("new_my_date",
              F.substring(F.to_date(F.col("my_date")), 1, 7)) \
  .show()
+---+-------------------+-----------+
| id| my_date|new_my_date|
+---+-------------------+-----------+
| 1|2018-08-23 11:48:22| 2018-08|
| 2|2019-05-03 06:22:01| 2019-05|
| 3|2019-05-13 10:12:15| 2019-05|
+---+-------------------+-----------+
import pyspark.sql.functions as F
split_col = F.split(df['date'], '-')
df = df.withColumn('year', split_col.getItem(0)).withColumn('month', split_col.getItem(1))
df = df.select(F.concat(df['year'], F.lit('-'),df['month']).alias('year_month'))
df.show()
+----------+
|year_month|
+----------+
| 2018-08|
| 2019-05|
| 2019-05|
| 2019-01|
| 2018-11|
+----------+
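Since the date column here is a string in yyyy-MM-dd HH:mm:ss form, a plain substring of the first seven characters already gives yyyy-MM without any date parsing; a minimal sketch on the same df:
import pyspark.sql.functions as F
df.withColumn('date1', F.substring('date', 1, 7)).show()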

Spark: create a sessionId based on timestamp

I would like to do the following transformation, given a data frame that records whether a user is logged in. My aim is to create a sessionId for each record based on the timestamp and a pre-defined value TIMEOUT = 20.
A session period is defined as: [first record --> first record + TIMEOUT]
For instance, the original DataFrame would look like the following:
scala> val df = sc.parallelize(List(
         ("user1", 0),
         ("user1", 3),
         ("user1", 15),
         ("user1", 22),
         ("user1", 28),
         ("user1", 41),
         ("user1", 45),
         ("user1", 85),
         ("user1", 90)
       )).toDF("user_id", "timestamp")
df: org.apache.spark.sql.DataFrame = [user_id: string, timestamp: int]
+-------+---------+
|user_id|timestamp|
+-------+---------+
|user1 |0 |
|user1 |3 |
|user1 |15 |
|user1 |22 |
|user1 |28 |
|user1 |41 |
|user1 |45 |
|user1 |85 |
|user1 |90 |
+-------+---------+
The goal is:
+-------+---------+----------+
|user_id|timestamp|session_id|
+-------+---------+----------+
|user1 |0 | 0 |-> first record (session 0: period [0->20])
|user1 |3 | 0 |
|user1 |15 | 0 |
|user1 |22 | 1 |-> 22 not in [0->20]->new session(period 22->42)
|user1 |28 | 1 |
|user1 |41 | 1 |
|user1 |45 | 2 |-> 45 not in [22->42]->newsession(period 45->65)
|user1 |85 | 3 |
|user1 |90 | 3 |
+-------+---------+----------+
Is there any elegant solution to this problem, preferably in Scala?
Thanks in advance!
This may not be an elegant solution, but it worked for the given data format.
sc.parallelize(List(
  ("user1", 0),
  ("user1", 3),
  ("user1", 15),
  ("user1", 22),
  ("user1", 28),
  ("user1", 41),
  ("user1", 45),
  ("user1", 85),
  ("user1", 90))).toDF("user_id", "timestamp").map { x =>
  val userId = x.getAs[String]("user_id")
  val timestamp = x.getAs[Int]("timestamp")
  val session = timestamp / 20
  (userId, timestamp, session)
}.toDF("user_id", "timestamp", "session").show()
Result
You can change timestamp / 20 according to your need. Note that fixed 20-unit buckets do not exactly match the goal above: timestamps 85 and 90 get session 4 (85 / 20 = 4), not 3, because the goal anchors each session to its first record rather than to fixed intervals.
Please see my code.
Two issues here:
1. I think the performance is not great.
2. I use user_id to join; if this doesn't meet your requirement, you can add a new column with the same value to both timeSetFrame and newSessionSec.
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf
import scala.util.control.Breaks
import ss.implicits._ // ss is the SparkSession

var newSession = ss.sparkContext.parallelize(List(
  ("user1", 0), ("user1", 3), ("user1", 15), ("user1", 22),
  ("user1", 28), ("user1", 41), ("user1", 45), ("user1", 85),
  ("user1", 90))).zipWithIndex().toDF("tmp", "index")

val getUser_id = udf((s: Row) => {
  s.getString(0)
})
val gettimestamp = udf((s: Row) => {
  s.getInt(1)
})
val newSessionSec = newSession.withColumn("user_id", getUser_id($"tmp"))
  .withColumn("timestamp", gettimestamp($"tmp")).drop("tmp") //.show()
val timeSet: Array[Int] = newSessionSec.select("timestamp").collect().map(s => s.getInt(0))
val timeSetFrame = ss.sparkContext.parallelize(Seq(("user1", timeSet))).toDF("user_id", "tset")
val newSessionThird = newSessionSec.join(timeSetFrame, Seq("user_id"), "outer") // .show

val getSessionID = udf((ts: Int, aa: Seq[Int]) => {
  var result = 0
  var begin = 0
  val loop = new Breaks
  loop.breakable {
    for (time <- aa) {
      if (time > (begin + 20)) {
        begin = time
        result += 1
      }
      if (time == ts) {
        loop.break
      }
    }
  }
  result
})
newSessionThird.withColumn("sessionID", getSessionID($"timestamp", $"tset")).drop("tset", "index").show()
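For completeness, here is the same sessionization logic as a PySpark sketch (the question asks for Scala, but the rest of this page is PySpark; it assumes Spark 3.x for applyInPandas). Sessions are anchored to the first record of each session, per the definition in the question:
def sessionize(pdf):
    # assign a new session id whenever a timestamp falls outside
    # [session_start, session_start + TIMEOUT]
    TIMEOUT = 20
    pdf = pdf.sort_values("timestamp")
    session_ids = []
    session_id = -1
    session_start = None
    for ts in pdf["timestamp"]:
        if session_start is None or ts > session_start + TIMEOUT:
            session_id += 1
            session_start = ts
        session_ids.append(session_id)
    pdf["session_id"] = session_ids
    return pdf

df = spark.createDataFrame(
    [("user1", 0), ("user1", 3), ("user1", 15), ("user1", 22),
     ("user1", 28), ("user1", 41), ("user1", 45), ("user1", 85), ("user1", 90)],
    ["user_id", "timestamp"]
)
df.groupBy("user_id").applyInPandas(
    sessionize, schema="user_id string, timestamp long, session_id long"
).show()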