GroupBy Horizontal Stacking in PySpark Dataframe [duplicate] - group-by

This question already has answers here:
combine text from multiple rows in pyspark
(3 answers)
Closed 3 years ago.
I have a PySpark dataframe like this:
cust_id  prod
1        A
1        B
1        C
2        D
2        E
2        F
Desired Output:
cust_id  prod
1        A/B/C
2        D/E/F
With Pandas I am able to do it like this:
import numpy as np

T = df.groupby(['cust_id'])['prod'].apply(lambda x: np.hstack(x)).reset_index()

def func_x(ls):
    n = len(ls)
    s = ''
    for i in range(n):
        if n - i == 1:
            s = s + ls[i]
        else:
            s = s + ls[i] + '/'
    return s

T['prod1'] = T['prod'].apply(func_x)
What will be this code's equivalent in PySpark?

import pyspark.sql.functions as F

separator = '/'
T = df.groupBy('cust_id').agg(
    F.concat_ws(separator, F.collect_list(df.prod)).alias('prod')
)
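For intuition, the collect_list-then-concat_ws combination can be sketched in plain Python, with no Spark required (the `rows` list is a hypothetical stand-in for the dataframe):

```python
from collections import defaultdict

# Sample rows mirroring the input dataframe: (cust_id, prod)
rows = [(1, 'A'), (1, 'B'), (1, 'C'), (2, 'D'), (2, 'E'), (2, 'F')]

# Group prods per cust_id (the collect_list step)
groups = defaultdict(list)
for cust_id, prod in rows:
    groups[cust_id].append(prod)

# Join each group with the separator (the concat_ws step)
result = {cust_id: '/'.join(prods) for cust_id, prods in groups.items()}
print(result)  # {1: 'A/B/C', 2: 'D/E/F'}
```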

Related

How to combine multiple rows in spark dataframe into single row having the same id [duplicate]

This question already has answers here:
Merge Rows in Apache spark by eliminating null values
(2 answers)
Closed 6 months ago.
I have multiple rows (with the same id) of data in a Spark Scala dataframe. How do I combine the data from all those rows into a single row?
The screenshot below shows the input data and the expected output.
You could do it with a window function, aggregating with pyspark.sql.functions.first but ignoring the nulls (ignorenulls=True instead of the default ignorenulls=False). Finally, apply .distinct() to get rid of the duplicate rows (three each in this case), since the aggregation happens for every row.
from pyspark.sql import functions as F, Window
window_spec = Window.partitionBy("eid")
cols = df.columns
df = (df.select(*[(F.first(col, ignorenulls=True).over(window_spec)).alias(col) for col in cols])
.distinct()
)
df.show()
Output:
+---+-----+----+-------+-----+-----------+--------+
|eid|ename|esal| eaddr|edept|designation|isactive|
+---+-----+----+-------+-----+-----------+--------+
|123| abc|1000|newyork| IT| manager| y|
|456| def|2000|chicago| mech| lead| n|
+---+-----+----+-------+-----+-----------+--------+
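The first-non-null-per-group idea can be sketched in plain Python for illustration (the `rows` list is hypothetical sample data with `None` standing in for null):

```python
# Each dict is a row; None marks a missing value
rows = [
    {"eid": 123, "ename": "abc", "esal": None},
    {"eid": 123, "ename": None, "esal": 1000},
    {"eid": 456, "ename": "def", "esal": 2000},
]

# For each eid, keep the first non-null value seen per column
merged = {}
for row in rows:
    acc = merged.setdefault(row["eid"], {})
    for col, val in row.items():
        if acc.get(col) is None:
            acc[col] = val

print(merged[123])  # {'eid': 123, 'ename': 'abc', 'esal': 1000}
```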

spark Group By data-frame columns without aggregation [duplicate]

This question already has answers here:
How to aggregate values into collection after groupBy?
(3 answers)
Closed 4 years ago.
I have a csv file in HDFS (/hdfs/test.csv). I'd like to group the data below using Spark & Scala: group the A1...AN columns based on the A1 column, so that all the rows are grouped as below.
Output:
JACK, ABCD, ARRAY("0,1,0,1", "2,9,2,9")
JACK, LMN, ARRAY("0,1,0,3", "0,4,3,T")
JACK, HBC, ARRAY("1,T,5,21", "E7,4W,5,8")
Input:
++++++++++++++++++++++++++++++
name A1 A1 A2 A3..AN
--------------------------------
JACK ABCD 0 1 0 1
JACK LMN 0 1 0 3
JACK ABCD 2 9 2 9
JAC HBC 1 T 5 21
JACK LMN 0 4 3 T
JACK HBC E7 4W 5 8
You can achieve this by treating the value columns as an array.
import org.apache.spark.sql.functions.{collect_set, concat_ws, array, col}

val aCols = 1.to(250).map(x => col(s"A$x"))
val concatCol = concat_ws(",", array(aCols : _*))
val groupedDf = df.withColumn("aConcat", concatCol).
  groupBy("name", "A").
  agg(collect_set("aConcat"))
If you're okay with duplicates you can also use collect_list instead of collect_set.
Your input has two different columns called A1. I will assume the groupBy category is called A, while the element to put in that final array is A1.
If you load the data into a DataFrame, you can do this to achieve the output specified:
import org.apache.spark.sql.functions.{collect_set, concat_ws}
val grouped = someDF
.groupBy($"name", $"A")
.agg(collect_set(concat_ws(",", $"A1", $"A2", $"A3", $"A4")).alias("grouped"))
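As a plain-Python illustration of the concat-then-collect_set idea (the sample rows are a hypothetical subset of the question's data):

```python
from collections import defaultdict

# (name, A, A1, A2, A3, A4) sample rows
rows = [
    ("JACK", "ABCD", "0", "1", "0", "1"),
    ("JACK", "LMN",  "0", "1", "0", "3"),
    ("JACK", "ABCD", "2", "9", "2", "9"),
]

grouped = defaultdict(set)
for name, a, *vals in rows:
    # the concat_ws(",", ...) step: join the value columns into one string,
    # then collect the distinct strings per (name, A) group
    grouped[(name, a)].add(",".join(vals))

print(grouped[("JACK", "ABCD")])  # {'0,1,0,1', '2,9,2,9'} (set order may vary)
```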

filter rows based on combination of 2 columns in Spark DF

input DF:
A B
1 1
2 1
2 2
3 3
3 1
3 2
3 3
3 4
I am trying to filter the rows based on the combination of
(A, Max(B))
Output Df:
A B
1 1
2 3
3 4
I am able to do this with
df.groupBy()
but there are also other columns in the DF which I want selected without including them in the groupBy, so the filtering condition should apply only to these two columns and not to the other columns in the DF. Any suggestions please?
As suggested in How to get other columns when using Spark DataFrame groupby? you can use window functions
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._
df.withColumn("maxB", max(col("B")).over(Window.partitionBy("A"))).where(...)
where ... is replaced by a predicate based on A and maxB.
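The window-max-then-filter idea can be sketched in plain Python (the `rows` list is hypothetical sample data; the third element stands in for the extra columns that survive because there is no groupBy):

```python
# (A, B, other) rows; 'other' stands for the extra columns we want to keep
rows = [(1, 1, "x"), (2, 1, "y"), (2, 2, "z"), (3, 3, "p"), (3, 4, "q")]

# max(B) over a window partitioned by A
max_b = {}
for a, b, _ in rows:
    max_b[a] = max(max_b.get(a, b), b)

# keep only rows where B equals the per-A maximum; the other columns survive
kept = [row for row in rows if row[1] == max_b[row[0]]]
print(kept)  # [(1, 1, 'x'), (2, 2, 'z'), (3, 4, 'q')]
```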

Combining rows in a spark dataframe

If I have an input as below:
sno name time
1 hello 1
1 hello 2
1 hai 3
1 hai 4
1 hai 5
1 how 6
1 how 7
1 are 8
1 are 9
1 how 10
1 how 11
1 are 12
1 are 13
1 are 14
I want to combine the fields having similar values in name as the below output format:
sno name timestart timeend
1 hello 1 2
1 hai 3 5
1 how 6 7
1 are 8 9
1 how 10 11
1 are 12 14
The input will be sorted according to time and only the records which are having the same name for repeated time intervals must be merged.
I am trying to do this using Spark, but since I am new to it I cannot figure out a way to do it with Spark functions. Any suggestions on the approach will be appreciated.
I tried thinking of writing a user-defined function and applying maps to the data frame but I could not come up with the right logic for the function.
PS: I am trying to do this using scala spark.
One way to do so would be a plain SQL query.
Let's say df is your input dataframe.
val viewName = s"dataframe"
df.createOrReplaceTempView(viewName)

def query(viewName: String): String =
  s"SELECT sno, name, MIN(time) AS timestart, MAX(time) AS timeend FROM $viewName GROUP BY sno, name"

spark.sql(query(viewName))
Note that a plain GROUP BY merges all rows sharing a name, so the repeated how/are runs would collapse into one row each; to merge only consecutive runs you first need a run identifier (e.g., derived from row_number differences).
You can of course use the DataFrame API instead. That would be something like:
df.groupBy($"sno", $"name")
  .agg(min($"time").as("timestart"), max($"time").as("timeend"))
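Since the question asks to merge only consecutive runs of the same name, here is a plain-Python sketch of that run-merging logic (the rows are copied from the question's input, already sorted by time):

```python
# (sno, name, time) rows, already sorted by time
rows = [(1, "hello", 1), (1, "hello", 2), (1, "hai", 3), (1, "hai", 4),
        (1, "hai", 5), (1, "how", 6), (1, "how", 7), (1, "are", 8),
        (1, "are", 9), (1, "how", 10), (1, "how", 11), (1, "are", 12),
        (1, "are", 13), (1, "are", 14)]

# Walk the sorted rows; extend the current run while (sno, name) repeats
runs = []  # each run: [sno, name, timestart, timeend]
for sno, name, t in rows:
    if runs and runs[-1][0] == sno and runs[-1][1] == name:
        runs[-1][3] = t                  # extend timeend of the current run
    else:
        runs.append([sno, name, t, t])   # start a new run

print(runs[0])   # [1, 'hello', 1, 2]
print(runs[-1])  # [1, 'are', 12, 14]
```

This reproduces the expected output exactly, including the separate how(6-7)/how(10-11) and are(8-9)/are(12-14) runs.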

Spark, Scala - How to get Top 3 value from each group of two column in dataframe [duplicate]

This question already has answers here:
Retrieve top n in each group of a DataFrame in pyspark
(6 answers)
get TopN of all groups after group by using Spark DataFrame
(1 answer)
Closed 5 years ago.
I have one DataFrame which contains these values :
Dept_id | name | salary
1       | A    | 10
2       | B    | 100
1       | D    | 100
2       | C    | 105
1       | N    | 103
2       | F    | 102
1       | K    | 90
2       | E    | 110
I want the result in this form :
Dept_id | name | salary
1       | N    | 103
1       | D    | 100
1       | K    | 90
2       | E    | 110
2       | C    | 105
2       | F    | 102
Thanks In Advance :).
The solution is similar to Retrieve top n in each group of a DataFrame in pyspark, which is in PySpark. If you do the same in Scala, it should be as below:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.rank

df.withColumn("rank", rank().over(Window.partitionBy("Dept_id").orderBy($"salary".desc)))
  .filter($"rank" <= 3)
  .drop("rank")
I hope the answer is helpful
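The rank-within-partition idea can be sketched in plain Python (rows copied from the question's data; ties are ignored here, whereas Spark's rank() would keep tied rows):

```python
from collections import defaultdict

# (Dept_id, name, salary) rows
rows = [(1, "A", 10), (2, "B", 100), (1, "D", 100), (2, "C", 105),
        (1, "N", 103), (2, "F", 102), (1, "K", 90), (2, "E", 110)]

# Partition by Dept_id
groups = defaultdict(list)
for dept, name, salary in rows:
    groups[dept].append((name, salary))

# Order each partition by salary descending and keep the top 3
top3 = {dept: sorted(vals, key=lambda t: t[1], reverse=True)[:3]
        for dept, vals in groups.items()}

print(top3[1])  # [('N', 103), ('D', 100), ('K', 90)]
print(top3[2])  # [('E', 110), ('C', 105), ('F', 102)]
```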