PySpark: Pivot only one row to column

I have a dataframe like so:
df = sc.parallelize([("num1", "1"), ("num2", "5"), ("total", "10")]).toDF(("key", "val"))
key val
num1 1
num2 5
total 10
I want to pivot only the total row to a new column and keep its value for each row:
key val total
num1 1 10
num2 5 10
I've tried pivoting and aggregating, but I cannot get a single column that holds the same value for every row.

You could join a dataframe with only the total to a dataframe without the total.
Another option would be to collect the total and add it as a literal.
from pyspark.sql import functions as f
# option 1
df1 = df.filter("key <> 'total'")
df2 = df.filter("key = 'total'").select(f.col('val').alias('total'))
df1.join(df2).show()
+----+---+-----+
| key|val|total|
+----+---+-----+
|num1|  1|   10|
|num2|  5|   10|
+----+---+-----+
# option 2
total = df.filter("key = 'total'").select('val').collect()[0][0]
df.filter("key <> 'total'").withColumn('total', f.lit(total)).show()
+----+---+-----+
| key|val|total|
+----+---+-----+
|num1|  1|   10|
|num2|  5|   10|
+----+---+-----+
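A third option (a sketch added here, not part of the original answer): broadcast the total with a window that spans the whole dataframe, then drop the total row. This assumes there is exactly one total row; note that an unpartitioned window pulls all rows into a single partition, which is fine for small data.
from pyspark.sql import Window
from pyspark.sql import functions as f

# option 3 (sketch): compute the total over a window covering every row
w = Window.partitionBy()
df.withColumn('total', f.max(f.when(f.col('key') == 'total', f.col('val'))).over(w)) \
    .filter("key <> 'total'") \
    .show()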

Related

pyspark: filtering rows by length of inside values

I have a PySpark dataframe with a column that contains a Python list:
id value
1 [1,2,3]
2 [1,2]
I want to remove all rows where the length of the list in the value column is less than 3.
So I tried:
df.filter(len(df.value) >= 3)
but it does not work, since Python's len() cannot be applied to a Column.
How can I filter the dataframe by the length of the inside data?
Use size() from pyspark.sql.functions: it returns the length of the array or map stored in the column.
from pyspark.sql.functions import size
myValues = [(1,[1,2,3]),(2,[1,2])]
df = sqlContext.createDataFrame(myValues,['id','value'])
df.show()
+---+-------+
| id|  value|
+---+-------+
|  1|[1,2,3]|
|  2|  [1,2]|
+---+-------+
df.filter(size(df.value) >= 3).show()
+---+-------+
| id|  value|
+---+-------+
|  1|[1,2,3]|
+---+-------+
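A small follow-up sketch (added here): size can also be used inside a SQL expression string passed to filter, which gives the same result without importing it at the call site.
# equivalent filter using a SQL expression string
df.filter("size(value) >= 3").show()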

Adding dictionary keys as column name and dictionary value as the constant value of that column in Pyspark df

I have a dictionary x = {'colA': 20, 'colB': 30} and a pyspark df.
ID Value
1 ABC
1 BCD
1 AKB
2 CAB
2 AIK
3 KIB
I want to create df1 using x as follows:
ID Value colA colB
1 ABC 20.0 30.0
1 BCD 20.0 30.0
1 AKB 20.0 30.0
2 CAB 20.0 30.0
...
Any idea how to do this in PySpark?
I know I can create a constant column like this,
df1 = df.withColumn('colA', lit(20.0))
df1 = df1.withColumn('colB', lit(30.0))
But I am not sure how to do it dynamically from the dictionary.
There are ways to hide the loop, but the execution will be the same. For instance, you can use select:
from pyspark.sql.functions import lit
df2 = df.select("*", *[lit(val).alias(key) for key, val in x.items()])
df2.show()
#+---+-----+----+----+
#| ID|Value|colB|colA|
#+---+-----+----+----+
#| 1| ABC| 30| 20|
#| 1| BCD| 30| 20|
#| 1| AKB| 30| 20|
#| 2| CAB| 30| 20|
#| 2| AIK| 30| 20|
#| 3| KIB| 30| 20|
#+---+-----+----+----+
Or functools.reduce and withColumn:
from functools import reduce
df3 = reduce(lambda df, key: df.withColumn(key, lit(x[key])), x, df)
df3.show()
# Same as above
Or pyspark.sql.functions.struct with select() and the "*" syntax:
from pyspark.sql.functions import struct
df4 = df.withColumn('x', struct([lit(val).alias(key) for key, val in x.items()]))\
.select("ID", "Value", "x.*")
df4.show()
#Same as above
But if you look at the execution plan of these methods, you'll see that they're exactly the same:
df2.explain()
#== Physical Plan ==
#*Project [ID#44L, Value#45, 30 AS colB#151, 20 AS colA#152]
#+- Scan ExistingRDD[ID#44L,Value#45]
df3.explain()
#== Physical Plan ==
#*Project [ID#44L, Value#45, 30 AS colB#102, 20 AS colA#107]
#+- Scan ExistingRDD[ID#44L,Value#45]
df4.explain()
#== Physical Plan ==
#*Project [ID#44L, Value#45, 30 AS colB#120, 20 AS colA#121]
#+- Scan ExistingRDD[ID#44L,Value#45]
Further, if you compare the loop method in @anil's answer:
df1 = df
for key in x:
    df1 = df1.withColumn(key, lit(x[key]))
df1.explain()
#== Physical Plan ==
#*Project [ID#44L, Value#45, 30 AS colB#127, 20 AS colA#132]
#+- Scan ExistingRDD[ID#44L,Value#45]
You'll see that this is the same as well.
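Side note (an addition, assuming Spark 3.3 or later, where DataFrame.withColumns exists): you can pass the whole dict at once and skip the loop entirely. A minimal sketch:
from pyspark.sql.functions import lit

df5 = df.withColumns({key: lit(val) for key, val in x.items()})
df5.explain()
# should resolve to the same single Project over the scanned RDD as the variants above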
Loop through the dictionary as below
df1 = df
for key in x:
    df1 = df1.withColumn(key, lit(x[key]))

Pyspark: Delete rows on column condition after groupBy

This is my input dataframe:
id val
1 Y
1 N
2 a
2 b
3 N
Result should be:
id val
1 Y
2 a
2 b
3 N
I want to group by the id column and, for any id that has both "Y" and "N" in val, remove the row where val is "N".
Please help me resolve this issue, as I am a beginner in PySpark.
You can first identify the rows with val == "Y" using a filter and then join this dataframe back to the original one. Finally, you can filter for null values and for the rows you want to keep, e.g. val == "Y". PySpark should be able to handle the self-join even if there are a lot of rows.
The example is shown below:
from pyspark.sql.functions import col

df_new = spark.createDataFrame([
(1, "Y"), (1, "N"), (1,"X"), (1,"Z"),
(2,"a"), (2,"b"), (3,"N")
], ("id", "val"))
df_Y = df_new.filter(col("val")=="Y").withColumnRenamed("val","val_Y").withColumnRenamed("id","id_Y")
df_new = df_new.join(df_Y, df_new["id"]==df_Y["id_Y"],how="left")
df_new.filter((col("val_Y").isNull()) | ((col("val_Y")=="Y") & ~(col("val")=="N"))).select("id","val").show()
The result is the expected output (for the extended example above):
+---+---+
| id|val|
+---+---+
|  1|  X|
|  1|  Y|
|  1|  Z|
|  3|  N|
|  2|  a|
|  2|  b|
+---+---+
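An alternative sketch (added here, not part of the original answer) that avoids the self-join: flag every id that contains a "Y" with a window over id, then drop the "N" rows only for those ids. df_new is the dataframe from the example above.
from pyspark.sql import Window
from pyspark.sql import functions as F

# 1 if the id has at least one "Y" row, else 0
has_y = F.max(F.when(F.col("val") == "Y", 1).otherwise(0)).over(Window.partitionBy("id"))

df_new.withColumn("has_y", has_y) \
    .filter(~((F.col("has_y") == 1) & (F.col("val") == "N"))) \
    .drop("has_y") \
    .show()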

Add a New column in pyspark Dataframe (alternative of .apply in pandas DF)

I have a pyspark.sql.DataFrame df:
id col1
1 abc
2 bcd
3 lal
4 bac
I want to add one more column flag to the df such that if id is an odd number, flag should be 'odd', and if even, 'even'.
The final output should be:
id col1 flag
1 abc odd
2 bcd even
3 lal odd
4 bac even
I tried:
def myfunc(num):
    if num % 2 == 0:
        flag = 'EVEN'
    else:
        flag = 'ODD'
    return flag
df['new_col'] = df['id'].map(lambda x: myfunc(x))
df['new_col'] = df['id'].apply(lambda x: myfunc(x))
It gave me the error: TypeError: 'Column' object is not callable
How do I use .apply (as I do with a pandas dataframe) in PySpark?
PySpark doesn't provide apply; the alternative is to use the withColumn function with a conditional expression.
from pyspark.sql import functions as F
df = sqlContext.createDataFrame([
[1,"abc"],
[2,"bcd"],
[3,"lal"],
[4,"bac"]
],
["id","col1"]
)
df.show()
+---+----+
| id|col1|
+---+----+
| 1| abc|
| 2| bcd|
| 3| lal|
| 4| bac|
+---+----+
df.withColumn(
    "flag",
    F.when(F.col("id") % 2 == 0, F.lit("Even")).otherwise(F.lit("odd"))
).show()
+---+----+----+
| id|col1|flag|
+---+----+----+
| 1| abc| odd|
| 2| bcd|Even|
| 3| lal| odd|
| 4| bac|Even|
+---+----+----+
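If you specifically want apply-style logic with your own Python function, a sketch using a UDF (added here; generally slower than the built-in when/otherwise above) could look like this:
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# wrap the plain Python function from the question in a UDF
flag_udf = F.udf(lambda num: 'EVEN' if num % 2 == 0 else 'ODD', StringType())
df.withColumn("flag", flag_udf(F.col("id"))).show()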

Get Unique records in Spark [duplicate]

This question already has answers here:
How to select the first row of each group?
(9 answers)
Closed 5 years ago.
I have a dataframe df as mentioned below:
customers  product  val_id  rule_name  rule_id  priority
1          A        1       ABC        123      1
3          Z        r       ERF        789      2
2          B        X       ABC        123      2
2          B        X       DEF        456      3
1          A        1       DEF        456      2
I want to create a new dataframe df2 which has only unique customer ids. Since rule_name and rule_id differ for the same customer, I want to pick the record with the highest priority (lowest priority value) for each customer, so my final outcome should be:
customers  product  val_id  rule_name  rule_id  priority
1          A        1       ABC        123      1
3          Z        r       ERF        789      2
2          B        X       ABC        123      2
Can anyone please help me achieve this using Spark Scala? Any help will be appreciated.
You basically want to select rows with extreme values in a column. This is a really common issue, so there's even a whole tag greatest-n-per-group. Also see this question SQL Select only rows with Max Value on a Column which has a nice answer.
Here's an example for your specific case.
Note that this could select multiple rows for a customer, if there are multiple rows for that customer with the same (minimum) priority value.
This example is in pyspark, but it should be straightforward to translate to Scala
from pyspark.sql import functions as F

# find the best (minimum) priority for each customer; this DF has only two columns
cusPriDF = df.groupBy("customers").agg(F.min(df["priority"]).alias("priority"))
# now join back to choose only those rows and get all columns back
bestRowsDF = df.join(cusPriDF, on=["customers","priority"], how="inner")
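A follow-up sketch (added here, not in the original answer): if you need exactly one row per customer even when priorities tie, a row_number window does that.
from pyspark.sql import Window
from pyspark.sql import functions as F

# keep only the single best-priority row per customer
w = Window.partitionBy("customers").orderBy("priority")
bestRowPerCustomerDF = (df.withColumn("rn", F.row_number().over(w))
                          .filter("rn = 1")
                          .drop("rn"))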
To create df2 you have to first order df by priority and then find unique customers by id. Like this:
val columns = df.schema.map(_.name).filterNot(_ == "customers").map(col => first(col).as(col))
val df2 = df.orderBy("priority").groupBy("customers").agg(columns.head, columns.tail:_*)
df2.show
It would give you expected output:
+----------+--------+-------+----------+--------+---------+
| customers| product| val_id| rule_name| rule_id| priority|
+----------+--------+-------+----------+--------+---------+
|         1|       A|      1|       ABC|     123|        1|
|         3|       Z|      r|       ERF|     789|        2|
|         2|       B|      X|       ABC|     123|        2|
+----------+--------+-------+----------+--------+---------+
Corey beat me to it, but here's the Scala version:
val df = Seq(
(1,"A","1","ABC",123,1),
(3,"Z","r","ERF",789,2),
(2,"B","X","ABC",123,2),
(2,"B","X","DEF",456,3),
(1,"A","1","DEF",456,2)).toDF("customers","product","val_id","rule_name","rule_id","priority")
val priorities = df.groupBy("customers").agg( min(df.col("priority")).alias("priority"))
val top_rows = df.join(priorities, Seq("customers","priority"), "inner")
top_rows.show
+---------+--------+-------+------+---------+-------+
|customers|priority|product|val_id|rule_name|rule_id|
+---------+--------+-------+------+---------+-------+
|        1|       1|      A|     1|      ABC|    123|
|        3|       2|      Z|     r|      ERF|    789|
|        2|       2|      B|     X|      ABC|    123|
+---------+--------+-------+------+---------+-------+
You will have to use a min aggregation on the priority column, grouping the dataframe by customers, and then inner join the original dataframe with the aggregated dataframe and select the required columns.
val aggregatedDF = dataframe.groupBy("customers").agg(min("priority").as("priority_1"))
.withColumnRenamed("customers", "customers_1")
val finalDF = dataframe.join(aggregatedDF, dataframe("customers") === aggregatedDF("customers_1") && dataframe("priority") === aggregatedDF("priority_1"))
finalDF.select("customers", "product", "val_id", "rule_name", "rule_id", "priority").show
You should have the desired result.