PySpark cross join - pyspark

Let's say my dataframe has the values
col1
|1|
|2|
|3|
So I want to create a new PySpark dataframe with the values
|1x1|
|1x2|
|1x3|
|2x1|
|2x2|
|2x3|
|3x1|
|3x2|
|3x3|
Can anyone explain how to create this dataframe?
I am using crossJoin right now, but it is throwing an error.

Try this:
from pyspark.sql import functions as F
df = spark.range(10)
df.show()
# +---+
# | id|
# +---+
# | 0|
# | 1|
# | 2|
# | 3|
# | 4|
# | 5|
# | 6|
# | 7|
# | 8|
# | 9|
# +---+
df_1 = df.alias("df1")
df_2 = df.alias("df2")
df_cross = df_1.crossJoin(df_2)
df_cross.show()
# +---+---+
# | id| id|
# +---+---+
# | 0| 0|
# | 0| 1|
# | 0| 2|
# | 0| 3|
# | 0| 4|
# | 1| 0|
# | 1| 1|
# | 1| 2|
# | 1| 3|
# | 1| 4|
# | 2| 0|
# | 2| 1|
# | 2| 2|
# | 2| 3|
# | 2| 4|
# | 3| 0|
# | 3| 1|
# | 3| 2|
# | 3| 3|
# | 3| 4|
# +---+---+
# only showing top 20 rows
df_cross = df_cross.withColumn(
    "concat", F.concat_ws("x", F.col("df1.id"), F.col("df2.id"))
)
df_cross.show()
# +---+---+------+
# | id| id|concat|
# +---+---+------+
# | 0| 0| 0x0|
# | 0| 1| 0x1|
# | 0| 2| 0x2|
# | 0| 3| 0x3|
# | 0| 4| 0x4|
# | 1| 0| 1x0|
# | 1| 1| 1x1|
# | 1| 2| 1x2|
# | 1| 3| 1x3|
# | 1| 4| 1x4|
# | 2| 0| 2x0|
# | 2| 1| 2x1|
# | 2| 2| 2x2|
# | 2| 3| 2x3|
# | 2| 4| 2x4|
# | 3| 0| 3x0|
# | 3| 1| 3x1|
# | 3| 2| 3x2|
# | 3| 3| 3x3|
# | 3| 4| 3x4|
# +---+---+------+
# only showing top 20 rows
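If the error you are seeing complains that cross joins or cartesian products are disabled, some Spark 2.x setups need spark.sql.crossJoin.enabled=true for this to run; Spark 3.0+ enables it by default.
As a rough sketch, the same idea applied to your original single-column dataframe could look like this (assuming the column is literally named col1; the aliases a and b and the name result are only illustrative, and F is the same functions import as above):
df_a = df.select(F.col("col1").alias("a"))
df_b = df.select(F.col("col1").alias("b"))
result = df_a.crossJoin(df_b).select(F.concat_ws("x", "a", "b").alias("col1"))
result.show()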

Related

create a new column to increment value when value resets to 1 in another column in pyspark

In a PySpark DataFrame, consider a column like [1,2,3,4,1,2,1,1,2,3,1,2,1,1,2].
I want to create a new column whose value increments whenever the value in that column resets to 1.
The expected output is [1,1,1,1,2,2,3,4,4,4,5,5,6,7,7].
I am a bit new to PySpark, so any help would be great.
I have written the logic like below:
def sequence(row_num):
    results = [1, ]
    flag = 1
    for col in range(0, len(row_num) - 1):
        if row_num[col][0] >= row_num[col + 1][0]:
            flag += 1
        results.append(flag)
    return results
but I am not able to pass a column through a UDF. Please help me with this.
Your Dataframe:
df = spark.createDataFrame(
    [
        ('1','a'),
        ('2','b'),
        ('3','c'),
        ('4','d'),
        ('1','e'),
        ('2','f'),
        ('1','g'),
        ('1','h'),
        ('2','i'),
        ('3','j'),
        ('1','k'),
        ('2','l'),
        ('1','m'),
        ('1','n'),
        ('2','o')
    ], ['group','label']
)
+-----+-----+
|group|label|
+-----+-----+
| 1| a|
| 2| b|
| 3| c|
| 4| d|
| 1| e|
| 2| f|
| 1| g|
| 1| h|
| 2| i|
| 3| j|
| 1| k|
| 2| l|
| 1| m|
| 1| n|
| 2| o|
+-----+-----+
You can create a flag and use a window function to calculate the cumulative sum. No need to use a UDF:
from pyspark.sql import Window as W
from pyspark.sql import functions as F

w = W.partitionBy().orderBy('label').rowsBetween(W.unboundedPreceding, 0)

df\
    .withColumn('Flag', F.when(F.col('group') == 1, 1).otherwise(0))\
    .withColumn('Output', F.sum('Flag').over(w))\
    .show()
+-----+-----+----+------+
|group|label|Flag|Output|
+-----+-----+----+------+
| 1| a| 1| 1|
| 2| b| 0| 1|
| 3| c| 0| 1|
| 4| d| 0| 1|
| 1| e| 1| 2|
| 2| f| 0| 2|
| 1| g| 1| 3|
| 1| h| 1| 4|
| 2| i| 0| 4|
| 3| j| 0| 4|
| 1| k| 1| 5|
| 2| l| 0| 5|
| 1| m| 1| 6|
| 1| n| 1| 7|
| 2| o| 0| 7|
+-----+-----+----+------+
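One hedged caveat: the window above orders by label, which only matches the intended row order because the labels a..o happen to be alphabetical. If the real data has no natural ordering column, a rough sketch is to tag rows first (the row_id name is just illustrative, and monotonically_increasing_id() only approximates insertion order, so treat this as a sketch rather than a guarantee):
from pyspark.sql import Window as W
from pyspark.sql import functions as F
df_ord = df.withColumn('row_id', F.monotonically_increasing_id())
w2 = W.partitionBy().orderBy('row_id').rowsBetween(W.unboundedPreceding, 0)
df_ord\
    .withColumn('Flag', F.when(F.col('group') == 1, 1).otherwise(0))\
    .withColumn('Output', F.sum('Flag').over(w2))\
    .show()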

Pyspark keep state within tasks

This is related to this question: Pyspark dataframe column value dependent on value from another row, but this one gets even more complicated.
I have a dataframe:
columns = ['id','seq','manufacturer']
data = [("1",1,"Factory"), ("1",2,"Sub-Factory-1"), ("1",3,"Order"),("1",4,"Sub-Factory-1"),("2",1,"Factory"), ("2",2,"Sub-Factory-1"), ("2",5,"Sub-Factory-1"),("3",1, "Sub-Factory-1"),("3",2,"Order"), ("3",4, "Sub-Factory-1"), ("4", 1,"Factory"), ("4",3, "Sub-Factory-1"),("4",4, "Sub-Factory-1"),("5",1,"Sub-Factory-1"), ("5",2, "Sub-Factory-1"), ("5", 6,"Order"), ("6",2,"Factory"), ("6",3, "Order"), ("6",4,"Sub-Factory-1"), ("6", 6,"Sub-Factory-1"), ("6",7,"Order"), ("7",1,"Sub-Factory-1"), ("7",2,"Factory" ), ("7", 3,"Order"), ("7", 4,"Sub-Factory-1"),("7",5,"Factory"), ("7",8, "Sub-Factory-1"),("7",10,"Sub-Factory-1")]
rdd = spark.sparkContext.parallelize(data)
df = rdd.toDF(columns)
df.orderBy('id','seq').show(40)
+---+---+-------------+
| id|seq| manufacturer|
+---+---+-------------+
| 1| 1| Factory|
| 1| 2|Sub-Factory-1|
| 1| 3| Order|
| 1| 4|Sub-Factory-1|
| 2| 1| Factory|
| 2| 2|Sub-Factory-1|
| 2| 5|Sub-Factory-1|
| 3| 1|Sub-Factory-1|
| 3| 2| Order|
| 3| 4|Sub-Factory-1|
| 4| 1| Factory|
| 4| 3|Sub-Factory-1|
| 4| 4|Sub-Factory-1|
| 5| 1|Sub-Factory-1|
| 5| 2|Sub-Factory-1|
| 5| 6| Order|
| 6| 2| Factory|
| 6| 3| Order|
| 6| 4|Sub-Factory-1|
| 6| 6|Sub-Factory-1|
| 6| 7| Order|
| 7| 1|Sub-Factory-1|
| 7| 2| Factory|
| 7| 3| Order|
| 7| 4|Sub-Factory-1|
| 7| 5| Factory|
| 7| 8|Sub-Factory-1|
| 7| 10|Sub-Factory-1|
+---+---+-------------+
What I want to do is assign hierarchical values to another column (not saying it's the best idea) that I can use with the logic from Pyspark dataframe column value dependent on value from another row. So, within each id group and in seq order, I want only the first Sub-Factory to be attributed to a Factory, and only if there is a Factory above that Sub-Factory within the same id and seq order.
So end result should look like:
+---+---+-------------+-------+
| id|seq| manufacturer|checker|
+---+---+-------------+-------+
| 1| 1| Factory| 1|
| 1| 2|Sub-Factory-1| 1|
| 1| 3| Order| 0|
| 1| 4|Sub-Factory-1| 0|
| 2| 1| Factory| 1|
| 2| 2|Sub-Factory-1| 1|
| 2| 5|Sub-Factory-1| 0|
| 3| 1|Sub-Factory-1| 0|
| 3| 2| Order| 0|
| 3| 4|Sub-Factory-1| 0|
| 4| 1| Factory| 1|
| 4| 3|Sub-Factory-1| 1|
| 4| 4|Sub-Factory-1| 0|
| 5| 1|Sub-Factory-1| 0|
| 5| 2|Sub-Factory-1| 0|
| 5| 6| Order| 0|
| 6| 2| Factory| 1|
| 6| 3| Order| 0|
| 6| 4|Sub-Factory-1| 1|
| 6| 6|Sub-Factory-1| 0|
| 6| 7| Order| 0|
| 7| 1|Sub-Factory-1| 0|
| 7| 2| Factory| 1|
| 7| 3| Order| 0|
| 7| 4|Sub-Factory-1| 1|
| 7| 5| Factory| 1|
| 7| 8|Sub-Factory-1| 1|
| 7| 10|Sub-Factory-1| 0|
+---+---+-------------+-------+
The dataset is large, so I can't use something like df.collect() and then loop over the data, because that runs out of memory. My first idea was to use an accumulator:
from pyspark.sql import functions as F
from pyspark.sql.functions import col
from pyspark.sql.types import IntegerType

acc = sc.accumulator(0)

def myFunc(manufacturer):
    if manufacturer == 'Factory':
        acc.value = 1
        return 1
    elif manufacturer == 'Sub-Factory-1' and acc.value == 1:
        acc.value = 0
        return 1
    else:
        return 0

myFuncUDF = F.udf(myFunc, IntegerType())
df = df.withColumn('test', myFuncUDF(col('manufacturer')))
But it's a bad idea, since an accumulator's value cannot be read from within tasks.
A window function would also solve it if I wanted to attribute all Sub-Factories following a Factory within the same id, but here only the first Sub-Factory should be attributed. Any ideas?
from pyspark.sql.window import Window
from pyspark.sql.functions import *

df_mod = df.filter(df.manufacturer == 'Sub-Factory-1')
W = Window.partitionBy("id").orderBy("seq")
df_mod = df_mod.withColumn("rank", rank().over(W))
df_mod = df_mod.filter(col('rank') == 1)

df_mod2 = df.filter(col('manufacturer') == 'Factory')\
    .select('id', 'seq', col('manufacturer').alias('Factory_chk_2'))

df_f = df\
    .join(df_mod, ['id', 'seq'], 'left')\
    .select('id', 'seq', df.manufacturer, 'rank')\
    .join(df_mod2, 'id', 'left')\
    .select('id', df.seq, df.manufacturer, 'rank', 'Factory_chk_2')\
    .withColumn('Factory_chk', when(df.manufacturer == 'Factory', 1))\
    .withColumn('Factory_chk_2', when(col('Factory_chk_2') == 'Factory', 1))\
    .withColumn('checker', when(col('Factory_chk_2') == '1', coalesce(col('rank'), col('Factory_chk'))).otherwise(lit(0)))\
    .select('id', 'seq', 'manufacturer', 'checker')\
    .na.fill(value=0)\
    .orderBy('id', 'seq')

df_f.show()
+---+---+-------------+-------+
| id|seq| manufacturer|checker|
+---+---+-------------+-------+
| 1| 1| Factory| 1|
| 1| 2|Sub-Factory-1| 1|
| 1| 3| Order| 0|
| 1| 4|Sub-Factory-1| 0|
| 2| 1| Factory| 1|
| 2| 2|Sub-Factory-1| 1|
| 2| 5|Sub-Factory-1| 0|
| 3| 1|Sub-Factory-1| 0|
| 3| 2| Order| 0|
| 3| 4|Sub-Factory-1| 0|
| 4| 1| Factory| 1|
| 4| 3|Sub-Factory-1| 1|
| 4| 4|Sub-Factory-1| 0|
| 5| 1|Sub-Factory-1| 0|
| 5| 2|Sub-Factory-1| 0|
| 5| 6| Order| 0|
| 6| 2| Factory| 1|
| 6| 3| Order| 0|
| 6| 4|Sub-Factory-1| 1|
| 6| 6|Sub-Factory-1| 0|
+---+---+-------------+-------+
only showing top 20 rows
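A different sketch, for comparison, that avoids the multiple joins: within each id, a running count of Factory rows splits the sequence into blocks, and only the first Sub-Factory-1 of a block that actually starts with a Factory gets flagged. The names factory_grp, w_run, w_blk and df_chk are only illustrative, and this was only checked against the sample data above:
from pyspark.sql import Window as W
from pyspark.sql import functions as F

w_run = W.partitionBy('id').orderBy('seq')
# running count of Factory rows seen so far within each id
df_grp = df.withColumn(
    'factory_grp',
    F.sum(F.when(F.col('manufacturer') == 'Factory', 1).otherwise(0)).over(w_run)
)

# number the rows of each manufacturer within each (id, factory_grp) block
w_blk = W.partitionBy('id', 'factory_grp', 'manufacturer').orderBy('seq')
df_chk = df_grp.withColumn(
    'checker',
    F.when(F.col('manufacturer') == 'Factory', 1)
     .when(
         (F.col('manufacturer') == 'Sub-Factory-1')
         & (F.col('factory_grp') > 0)
         & (F.row_number().over(w_blk) == 1),
         1
     )
     .otherwise(0)
).select('id', 'seq', 'manufacturer', 'checker')

df_chk.orderBy('id', 'seq').show(40)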

Pyspark combine different rows based on a column

I have a dataframe
+----------------+------------+-----+
| Sport|Total_medals|count|
+----------------+------------+-----+
| Alpine Skiing| 3| 4|
| Alpine Skiing| 2| 18|
| Alpine Skiing| 4| 1|
| Alpine Skiing| 1| 38|
| Archery| 2| 12|
| Archery| 1| 72|
| Athletics| 2| 50|
| Athletics| 1| 629|
| Athletics| 3| 8|
| Badminton| 2| 5|
| Badminton| 1| 86|
| Baseball| 1| 216|
| Basketball| 1| 287|
|Beach Volleyball| 1| 48|
| Biathlon| 4| 1|
| Biathlon| 3| 9|
| Biathlon| 1| 61|
| Biathlon| 2| 23|
| Bobsleigh| 2| 6|
| Bobsleigh| 1| 60|
+----------------+------------+-----+
Is there a way for me to combine the count values from multiple rows if they are from the same sport?
For example, if Sport = Alpine Skiing I would have something like this:
+----------------+-----+
| Sport|count|
+----------------+-----+
| Alpine Skiing| 61|
+----------------+-----+
where count is equal to 4+18+1+38 = 61. I would like to do this for all sports.
Any help would be appreciated.
You need to group by the Sport column and then aggregate the count column with the sum() function.
Example:
import pyspark.sql.functions as F
grouped_df = df.groupby('Sport').agg(F.sum('count'))
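By default the aggregated column comes back named sum(count); if you want to keep the original name, a small sketch using an alias:
import pyspark.sql.functions as F
grouped_df = df.groupby('Sport').agg(F.sum('count').alias('count'))
grouped_df.show()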

How to sum in pyspark?

I have the table below and I just want to sum columns _10 and _12, but I am getting an error.
+------+---+---+
| _2|_10|_12|
+------+---+---+
|yearID| H| 3B|
| 2004| 0| 0|
| 2006| 0| 0|
| 2007| 0| 0|
| 2008| 0| 0|
| 2009| 0| 0|
| 2010| 0| 0|
| 1954|131| 6|
| 1955|189| 9|
| 1956|200| 14|
| 1957|198| 6|
| 1958|196| 4|
| 1959|223| 7|
| 1960|172| 11|
| 1961|197| 10|
| 1962|191| 6|
| 1963|201| 4|
| 1964|187| 2|
| 1965|181| 1|
| 1966|168| 1|
| 1967|184| 3|
| 1968|174| 4|
| 1969|164| 3|
| 1970|154| 1|
| 1971|162| 3|
| 1972|119| 0|
| 1973|118| 1|
| 1974| 91| 0|
| 1975|109| 2|
| 1976| 62| 0|
I am not sure what you mean by sum. If you mean summing a whole column's values, you can use the agg function. If you want to sum row-wise, like _10 + _12, and create a new column, then use the withColumn function:
>>> data = sc.parallelize([
... ('yearID','H','3B'),
... ('2004','0','0'),
... ('2006','0','0'),
... ('2007','0','0'),
... ('2008','0','0'),
... ('2009','0','0'),
... ('2010','0','0'),
... ('1954','131','6'),
... ('1955','189','9'),
... ('1956','200','14'),
... ('1957','198','6')
... ])
>>>
>>> cols = ['_2','_10','_12']
>>>
>>> df = spark.createDataFrame(data,cols)
>>>
>>> df.show()
+------+---+---+
| _2|_10|_12|
+------+---+---+
|yearID| H| 3B|
| 2004| 0| 0|
| 2006| 0| 0|
| 2007| 0| 0|
| 2008| 0| 0|
| 2009| 0| 0|
| 2010| 0| 0|
| 1954|131| 6|
| 1955|189| 9|
| 1956|200| 14|
| 1957|198| 6|
+------+---+---+
>>> df.agg({'_10':'sum','_12':'sum'}).show()
+--------+--------+
|sum(_12)|sum(_10)|
+--------+--------+
| 35.0| 718.0|
+--------+--------+
>>> df.withColumn('new_col', df['_10']+df['_12']).show()
+------+---+---+-------+
| _2|_10|_12|new_col|
+------+---+---+-------+
|yearID| H| 3B| null|
| 2004| 0| 0| 0.0|
| 2006| 0| 0| 0.0|
| 2007| 0| 0| 0.0|
| 2008| 0| 0| 0.0|
| 2009| 0| 0| 0.0|
| 2010| 0| 0| 0.0|
| 1954|131| 6| 137.0|
| 1955|189| 9| 198.0|
| 1956|200| 14| 214.0|
| 1957|198| 6| 204.0|
+------+---+---+-------+
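A side note, as a hedged sketch: because the columns were created as strings and the header row ('yearID', 'H', '3B') slipped in as data, the sums come back as doubles and new_col is null on that first row. Assuming that row really is just a stray header, you could drop it and cast to integers first (df_clean is only an illustrative name):
from pyspark.sql import functions as F
df_clean = (
    df.filter(F.col('_2') != 'yearID')
      .withColumn('_10', F.col('_10').cast('int'))
      .withColumn('_12', F.col('_12').cast('int'))
)
df_clean.agg(F.sum('_10'), F.sum('_12')).show()
df_clean.withColumn('new_col', df_clean['_10'] + df_clean['_12']).show()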

Row count broken up by a focal value

I have the following DataFrame in Spark using Scala:
val df = List(
  ("random", 0),
  ("words", 1),
  ("in", 1),
  ("a", 1),
  ("column", 1),
  ("are", 0),
  ("what", 0),
  ("have", 1),
  ("been", 1),
  ("placed", 0),
  ("here", 1),
  ("now", 1)
).toDF(Seq("words", "numbers"): _*)
df.show()
+------+-------+
| words|numbers|
+------+-------+
|random| 0|
| words| 1|
| in| 1|
| a| 1|
|column| 1|
| are| 0|
| what| 0|
| have| 1|
| been| 1|
|placed| 0|
| here| 1|
| now| 1|
+------+-------+
I'd like to add a column that contains the count of rows in each group, where a new group starts at every 0 in the numbers column. It would look like this:
+------+-------+-----+
| words|numbers|count|
+------+-------+-----+
|random| 0| 5|
| words| 1| 5|
| in| 1| 5|
| a| 1| 5|
|column| 1| 5|
| are| 0| 1|
| what| 0| 3|
| have| 1| 3|
| been| 1| 3|
|placed| 0| 3|
| here| 1| 3|
| now| 1| 3|
+------+-------+-----+
Here is a method using selectExpr with the SQL window functions sum and count: the running sum of 1-numbers generates a group id that increases by 1 whenever a zero is encountered, and count then counts the rows for each group id.
This might be inefficient, since there is no partition column:
df.selectExpr(
  "words", "numbers",
  "count(*) over(partition by sum(1-numbers) over (order by monotonically_increasing_id())) as count"
).show
+------+-------+-----+
| words|numbers|count|
+------+-------+-----+
|random| 0| 5|
| words| 1| 5|
| in| 1| 5|
| a| 1| 5|
|column| 1| 5|
| are| 0| 1|
| what| 0| 3|
| have| 1| 3|
| been| 1| 3|
|placed| 0| 3|
| here| 1| 3|
| now| 1| 3|
+------+-------+-----+
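Since the rest of this page is PySpark, a quick sketch for reference: the same SQL window expression runs unchanged through PySpark's selectExpr (assuming a dataframe with the same words and numbers columns):
df.selectExpr(
    "words", "numbers",
    "count(*) over(partition by sum(1-numbers) over (order by monotonically_increasing_id())) as count"
).show()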