Dropping rows from a spark dataframe based on a condition - pyspark

I want to drop rows from a Spark dataframe of lists based on a condition: the length of the list in each row.
I have tried converting it into a list of lists and then using a for loop (demonstrated below), but I'm hoping to do it in one statement within Spark, creating a new immutable df from the original df based on this condition.
newList = df2.values.tolist()
finalList = []
for subList in newList:
    if len(subList) < 4:
        finalList.append(subList)
So for instance, if the dataframe is a one column dataframe and the column is named sequences, it looks like:
sequences
____________
[1, 2, 4]
[1, 6, 3]
[9, 1, 4, 6]
I want to drop all rows where the length of the list is more than 3, resulting in:
sequences
____________
[1, 2, 4]
[1, 6, 3]

Here is one approach for Spark >= 1.5, using the built-in size function:
from pyspark.sql import Row
from pyspark.sql.functions import size
df = spark.createDataFrame([
    Row(a=[9, 3, 4], b=[8, 9, 10]),
    Row(a=[7, 2, 6, 4], b=[2, 1, 5]),
    Row(a=[7, 2, 4], b=[8, 2, 1, 5]),
    Row(a=[2, 4], b=[8, 2, 10, 12, 20])])
df.where(size(df['a']) <= 3).show()
Output:
+---------+------------------+
| a| b|
+---------+------------------+
|[9, 3, 4]| [8, 9, 10]|
|[7, 2, 4]| [8, 2, 1, 5]|
| [2, 4]|[8, 2, 10, 12, 20]|
+---------+------------------+
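Applied to the single-column dataframe from the question (assuming it is called df and the column is named sequences), the same filter is, as a sketch:
from pyspark.sql.functions import col, size

# keep only rows whose 'sequences' array has at most 3 elements
filtered_df = df.where(size(col('sequences')) <= 3)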

Looking to get counts of items within ArrayType column without using Explode

NOTE: I'm working with Spark 2.4
Here is my dataset:
df
col
[1,3,1,4]
[1,1,1,2]
I'd like to essentially get a value_counts of the values in the array. The resulting df would look like:
df_upd
col
[{1:2},{3:1},{4:1}]
[{1:3},{2:1}]
I know I can do this by exploding df and then taking a group by but I'm wondering if I can do this without exploding.
Here's a solution using a udf that outputs the result as a MapType. It expects integer values in your arrays (easily changed) and returns integer counts.
from pyspark.sql import functions as F
from pyspark.sql import types as T
df = sc.parallelize([([1, 2, 3, 3, 1],),([4, 5, 6, 4, 5],),([2, 2, 2],),([3, 3],)]).toDF(['arrays'])
df.show()
+---------------+
| arrays|
+---------------+
|[1, 2, 3, 3, 1]|
|[4, 5, 6, 4, 5]|
| [2, 2, 2]|
| [3, 3]|
+---------------+
from collections import Counter

@F.udf(returnType=T.MapType(T.IntegerType(), T.IntegerType(), valueContainsNull=False))
def count_elements(array):
    return dict(Counter(array))

df.withColumn('counts', count_elements(F.col('arrays'))).show(truncate=False)
+---------------+------------------------+
|arrays |counts |
+---------------+------------------------+
|[1, 2, 3, 3, 1]|[1 -> 2, 2 -> 1, 3 -> 2]|
|[4, 5, 6, 4, 5]|[4 -> 2, 5 -> 2, 6 -> 1]|
|[2, 2, 2] |[2 -> 3] |
|[3, 3] |[3 -> 2] |
+---------------+------------------------+
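If you'd rather avoid a Python UDF on Spark 2.4, a sketch of the same counts using the SQL higher-order functions transform and filter (untested, and assuming the column is named arrays as above):
from pyspark.sql import functions as F

# For each distinct element, count how often it occurs in the original array,
# then zip the distinct elements with their counts into a map column.
df.withColumn(
    'counts',
    F.expr("""
        map_from_arrays(
            array_distinct(arrays),
            transform(array_distinct(arrays), x -> size(filter(arrays, y -> y = x)))
        )
    """)
).show(truncate=False)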

Pyspark - Padding zeros of array int datatype without pandas udf

I need to left-pad an array column of a PySpark dataframe without using a pandas UDF.
Input Dataframe:
|lags|
|----|
|[0]|
|[0,1,2]|
|[0,1]|
Output Data frame:
|lags|
|----|
|[0,0,0]|
|[0,1,2]|
|[0,0,1]|
You can use array_repeat to create a zero-padding array and concat it with the original column.
Use #ARCrow's function to identify the max array size.
import pyspark.sql.functions as F

max_arr_size = 3
df = (df.withColumn('pad', F.array_repeat(F.lit(0), max_arr_size - F.size('lags')))
        .withColumn('padded', F.concat('pad', 'lags')))
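If you prefer not to hard-code max_arr_size, one way to derive it is with a simple aggregation (a sketch; essentially what #ARCrow's answer below computes):
# derive the maximum array length across the dataframe instead of hard-coding it
max_arr_size = df.agg(F.max(F.size('lags')).alias('max_size')).first()['max_size']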
This is how I did it:
import pyspark.sql.functions as f

df = spark.createDataFrame([
    ([0],),
    ([0, 1, 2],),
    ([0, 1],),
    (None,)
], ['lags'])

max_size = (df
    .withColumn('array_size', f.size(f.col('lags')))
    .groupBy()
    .agg(f.max(f.col('array_size')).alias('max_size'))
    .collect()[0].max_size
)

df = (df
    .withColumn('lags', f.when(f.col('lags').isNull(), f.array(*[])).otherwise(f.col('lags')))  # to deal with null values
    .withColumn('pre_zeros', f.sequence(f.lit(0), f.lit(max_size) - f.size(f.col('lags'))))
    .withColumn('zeros', f.expr('transform(slice(pre_zeros, 1, size(pre_zeros) - 1), element -> 0)'))
    .withColumn('final_lags', f.concat(f.col('zeros'), f.col('lags')))
)
df.show()
And the output is:
+---------+------------+---------+----------+
| lags| pre_zeros| zeros|final_lags|
+---------+------------+---------+----------+
| [0]| [0, 1, 2]| [0, 0]| [0, 0, 0]|
|[0, 1, 2]| [0]| []| [0, 1, 2]|
| [0, 1]| [0, 1]| [0]| [0, 0, 1]|
| []|[0, 1, 2, 3]|[0, 0, 0]| [0, 0, 0]|
+---------+------------+---------+----------+

scala dataframe join columns and split arrays explode spark

I have coordinates in multiple array columns in a dataframe and want to split them so that the x, y, z values end up in separate columns, in order: column 1 data first, then column 2.
for example...
COL 1 | COL2
[[x,y,z],[x,y,z],[x,y,z]...] | [[x,y,z],[x,y,z],[x,y,z]...]
e.g
[[1,1,1],[2,2,2],[3,3,3]...] | [[8,8,8],[9,9,9],[10,10,10]...]
required OUTPUT
COL X | COL Y | COL Z
x,x,x,x,x.... | y,y,y,y,y.... | z,z,z,z,z....
e.g.
1,2,3,..,8,9,10.. | 1,2,3,..,8,9,10.. | 1,2,3,..,8,9,10..
Any help appreciated.
You can use the array_union function as follows:
df.select(
  array_union($"col1._1", $"col2._1").as("x"),
  array_union($"col1._2", $"col2._2").as("y"),
  array_union($"col1._3", $"col2._3").as("z"))
INPUT
+--------------------------------------------+--------------------------------------------------+
|col1 |col2 |
+--------------------------------------------+--------------------------------------------------+
|[[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]|[[8, 8, 8], [9, 9, 9], [10, 10, 10], [11, 11, 11]]|
+--------------------------------------------+--------------------------------------------------+
OUTPUT
+--------------------------+--------------------------+--------------------------+
|x |y |z |
+--------------------------+--------------------------+--------------------------+
|[1, 2, 3, 4, 8, 9, 10, 11]|[1, 2, 3, 4, 8, 9, 10, 11]|[1, 2, 3, 4, 8, 9, 10, 11]|
+--------------------------+--------------------------+--------------------------+

How to find sum of arrays in a column which is grouped by another column values in a spark dataframe using scala

I have a dataframe like below
c1 Value
A Array[47,97,33,94,6]
A Array[59,98,24,83,3]
A Array[77,63,93,86,62]
B Array[86,71,72,23,27]
B Array[74,69,72,93,7]
B Array[58,99,90,93,41]
C Array[40,13,85,75,90]
C Array[39,13,33,29,14]
C Array[99,88,57,69,49]
I need an output as below.
c1 Value
A Array[183,258,150,263,71]
B Array[218,239,234,209,75]
C Array[178,114,175,173,153]
That is, group by column c1 and sum the arrays in column Value element-wise.
Please help, I couldn't find any way of doing this on Google.
It is not very complicated. As you mention, you can simply group by "c1" and aggregate the values of the arrays index by index.
Let's first generate some data:
val df = spark.range(6)
  .select('id % 3 as "c1",
          array((1 to 5).map(_ => floor(rand * 10)) : _*) as "Value")
df.show()
+---+---------------+
| c1| Value|
+---+---------------+
| 0|[7, 4, 7, 4, 0]|
| 1|[3, 3, 2, 8, 5]|
| 2|[2, 1, 0, 4, 4]|
| 0|[0, 4, 2, 1, 8]|
| 1|[1, 5, 7, 4, 3]|
| 2|[2, 5, 0, 2, 2]|
+---+---------------+
Then we need to iterate over the values of the array so as to aggregate them. It is very similar to the way we created them:
val n = 5 // if you know the size of the array
val n = df.select(size('Value)).first.getAs[Int](0) // If you do not
df
  .groupBy("c1")
  .agg(array((0 until n).map(i => sum(col("Value").getItem(i))) :_* ) as "Value")
  .show()
+---+------------------+
| c1| Value|
+---+------------------+
| 0|[11, 18, 15, 8, 9]|
| 1| [2, 10, 5, 7, 4]|
| 2|[7, 14, 15, 10, 4]|
+---+------------------+

Spark Dataframe Arraytype columns

I would like to create a new column on a dataframe, which is the result of applying a function to an arraytype column.
Something like this:
df = df.withColumn("max_$colname", max(col(colname)))
where each row of the column holds an array of values?
The functions in spark.sql.functions appear to work on a per-column basis only.
You can apply a user-defined function to the array column.
1. DataFrame
+------------------+
| arr|
+------------------+
| [1, 2, 3, 4, 5]|
|[4, 5, 6, 7, 8, 9]|
+------------------+
2. Creating the UDF
import org.apache.spark.sql.functions._

def max(arr: TraversableOnce[Int]) = arr.toList.max
val maxUDF = udf(max(_: Traversable[Int]))
3. Applying the UDF in a query
df.withColumn("arrMax",maxUDF(df("arr"))).show
4. Result
+------------------+------+
| arr|arrMax|
+------------------+------+
| [1, 2, 3, 4, 5]| 5|
|[4, 5, 6, 7, 8, 9]| 9|
+------------------+------+
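As a side note, on Spark 2.4+ the built-in array_max (and array_min) can replace such a UDF entirely; a minimal PySpark sketch, assuming a dataframe df with an array column named arr:
from pyspark.sql import functions as F

# built-in alternative to the UDF above: take the max of each array directly
df.withColumn("arrMax", F.array_max(F.col("arr"))).show()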