How to delete columns in a DataFrame - PySpark

df2000.drop('jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec').show()
This shows the DataFrame without the dropped columns. But when I run show on its own to check the table,
df2000.show()
the dropped columns are still there.

drop is not a side-effecting function: it returns a new DataFrame with the specified columns removed. So you have to assign the new DataFrame to a variable in order to reference it later, as shown below.
>>> df2000 = spark.createDataFrame([('a',10,20,30),('a',10,20,30),('a',10,20,30),('a',10,20,30)],['key', 'jan', 'feb', 'mar'])
>>> cols = ['jan', 'feb', 'mar']
>>> df2000.show()
+---+---+---+---+
|key|jan|feb|mar|
+---+---+---+---+
| a| 10| 20| 30|
| a| 10| 20| 30|
| a| 10| 20| 30|
| a| 10| 20| 30|
+---+---+---+---+
>>> from functools import reduce
>>> df2000_dropped_col = reduce(lambda x, y: x.drop(y), cols, df2000)
>>> df2000_dropped_col.show()
+---+
|key|
+---+
| a|
| a|
| a|
| a|
+---+
Now doing a show on the new DataFrame yields the desired result, with all the month columns dropped.
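Note that DataFrame.drop also accepts multiple column names directly (in Spark 2.0 and later, where drop takes varargs), so the reduce above can be replaced with a single call; a minimal sketch reusing the cols list from above:
df2000_dropped_col = df2000.drop(*cols)
df2000_dropped_col.show()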

Related

Calculate number of columns with missing values per each row in PySpark

Let's say we have the following data set:
columns = ['id', 'dogs', 'cats']
values = [(1, 2, 0),(2, None, None),(3, None,9)]
df = spark.createDataFrame(values,columns)
df.show()
+----+----+----+
| id|dogs|cats|
+----+----+----+
| 1| 2| 0|
| 2|null|null|
| 3|null| 9|
+----+----+----+
I would like to calculate the number ("miss_nb") and percentage ("miss_pt") of columns with missing values per row, and get the following table:
+----+-------+-------+
| id|miss_nb|miss_pt|
+----+-------+-------+
| 1| 0| 0.00|
| 2| 2| 0.67|
| 3| 1| 0.33|
+----+-------+-------+
The solution should work for any number of columns (not a fixed list).
How can I do it?
Thanks!
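One possible approach (a minimal sketch, not an answer from the original thread, assuming the df built above): flag each non-id column with 1 when it is null, sum the flags per row, and divide by the number of data columns.
import pyspark.sql.functions as F

data_cols = [c for c in df.columns if c != 'id']   # works for any column list
n_cols = len(data_cols)
df.withColumn(
    'miss_nb',
    sum((F.when(F.col(c).isNull(), 1).otherwise(0) for c in data_cols), F.lit(0))
).withColumn(
    'miss_pt', F.round(F.col('miss_nb') / n_cols, 2)
).select('id', 'miss_nb', 'miss_pt').show()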

Drop rows in Pyspark

How can I drop rows in PySpark based on a row number/row index value?
I am new to PySpark (and coding) -- I have tried writing something, but it is not working.
You can't drop specific rows by index directly, but you can keep just the ones you want, using filter or its alias, where.
Imagine you want "to drop" the rows where the age of a person is lower than 3. You can just keep the opposite rows, like this:
df.filter(df.age >= 3)
If you have an explicit row-number column, you can filter on it directly:
import pyspark.sql.functions as F
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
schema1 = StructType([StructField('rownumber', IntegerType(), True), StructField('name', StringType(), True)])
data1 = [(1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e')]
df1 = spark.createDataFrame(data1, schema1)
df1.show()
+---------+----+
|rownumber|name|
+---------+----+
| 1| a|
| 2| b|
| 3| c|
| 4| d|
| 5| e|
+---------+----+
df1.filter(F.col("rownumber").between(2,4)).show()
+---------+----+
|rownumber|name|
+---------+----+
| 2| b|
| 3| c|
| 4| d|
+---------+----+
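To actually drop those rows instead of keeping them, negate the condition with ~ (a small sketch using the same df1 and imports as above):
df1.filter(~F.col("rownumber").between(2, 4)).show()
This keeps only the rows whose rownumber falls outside 2..4.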

How to get the percentage of totals for each count after a groupBy in PySpark?

Given the following DataFrame:
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local").appName("test").getOrCreate()
df = spark.createDataFrame([['a',1],['b', 2],['a', 3]], ['category', 'value'])
df.show()
+--------+-----+
|category|value|
+--------+-----+
| a| 1|
| b| 2|
| a| 3|
+--------+-----+
I want to count the number of items in each category and provide a percentage of total for each count, like so
+--------+-----+----------+
|category|count|percentage|
+--------+-----+----------+
| b| 1| 0.333|
| a| 2| 0.667|
+--------+-----+----------+
You can obtain the count and the percentage/ratio of totals with the following:
import pyspark.sql.functions as f
from pyspark.sql.window import Window
df.groupBy('category').count()\
    .withColumn('percentage',
                f.round(f.col('count') / f.sum('count').over(Window.partitionBy()), 3))\
    .show()
+--------+-----+----------+
|category|count|percentage|
+--------+-----+----------+
| b| 1| 0.333|
| a| 2| 0.667|
+--------+-----+----------+
The previous statement can be divided into steps. df.groupBy('category').count() produces the count:
+--------+-----+
|category|count|
+--------+-----+
| b| 1|
| a| 2|
+--------+-----+
then by applying window functions we can obtain the total count on each row:
df.groupBy('category').count().withColumn('total', f.sum('count').over(Window.partitionBy())).show()
+--------+-----+-----+
|category|count|total|
+--------+-----+-----+
| b| 1| 3|
| a| 2| 3|
+--------+-----+-----+
where the total column is calculated by adding together all the counts in the partition (a single partition that includes all rows).
Once we have count and total for each row we can calculate the ratio:
df.groupBy('category')\
    .count()\
    .withColumn('total', f.sum('count').over(Window.partitionBy()))\
    .withColumn('percentage', f.col('count') / f.col('total'))\
    .show()
+--------+-----+-----+------------------+
|category|count|total| percentage|
+--------+-----+-----+------------------+
| b| 1| 3|0.3333333333333333|
| a| 2| 3|0.6666666666666666|
+--------+-----+-----+------------------+
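If you want the rounded percentage without the helper column, you can drop total at the end (a sketch building on the same imports and df, not part of the original answer):
(df.groupBy('category').count()
   .withColumn('total', f.sum('count').over(Window.partitionBy()))
   .withColumn('percentage', f.round(f.col('count') / f.col('total'), 3))
   .drop('total')
   .show())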
Alternatively, you can group by and aggregate with agg:
import pyspark.sql.functions as F
df.groupby('category').agg(F.count('value') / df.count()).show()
Output:
+--------+------------------+
|category|(count(value) / 3)|
+--------+------------------+
| b|0.3333333333333333|
| a|0.6666666666666666|
+--------+------------------+
To make it nicer you can use:
df.groupby('category').agg(
    F.round(F.count('value') / df.count(), 2).alias('ratio')
).show()
Output:
+--------+-----+
|category|ratio|
+--------+-----+
| b| 0.33|
| a| 0.67|
+--------+-----+
You can also use SQL:
df.createOrReplaceTempView('df')
spark.sql(
    """
    SELECT category, COUNT(*) / (SELECT COUNT(*) FROM df) AS ratio
    FROM df
    GROUP BY category
    """
).show()
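Since spark.sql also returns a DataFrame, you can keep chaining the usual DataFrame methods on the result; for example, rounding and ordering (a sketch, not part of the original answer):
spark.sql(
    """
    SELECT category, ROUND(COUNT(*) / (SELECT COUNT(*) FROM df), 2) AS ratio
    FROM df
    GROUP BY category
    """
).orderBy('ratio').show()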

Split large array columns into multiple columns - Pyspark

I have:
+---+-------+-------+
| id| var1| var2|
+---+-------+-------+
| a|[1,2,3]|[1,2,3]|
| b|[2,3,4]|[2,3,4]|
+---+-------+-------+
I want:
+---+-------+-------+-------+-------+-------+-------+
| id|var1[0]|var1[1]|var1[2]|var2[0]|var2[1]|var2[2]|
+---+-------+-------+-------+-------+-------+-------+
| a| 1| 2| 3| 1| 2| 3|
| b| 2| 3| 4| 2| 3| 4|
+---+-------+-------+-------+-------+-------+-------+
The solution provided by "How to split a list to multiple columns in Pyspark?",
df1.select('id', df1.var1[0], df1.var1[1], ...).show()
works, but some of my arrays are very long (up to 332 elements).
How can I write this so that it accounts for arrays of any length?
This solution works regardless of the number of initial columns and the size of the arrays. Moreover, if a column contains arrays of different sizes (e.g. [1,2], [3,4,5]), it produces the maximum number of columns, with null values filling the gaps.
from pyspark.sql import functions as F
df = spark.createDataFrame(sc.parallelize([['a', [1,2,3], [1,2,3]], ['b', [2,3,4], [2,3,4]]]), ["id", "var1", "var2"])
# All array columns (everything except 'id')
columns = df.drop('id').columns
# Per-row array sizes, then the maximum size of each array column
df_sizes = df.select(*[F.size(col).alias(col) for col in columns])
df_max = df_sizes.agg(*[F.max(col).alias(col) for col in columns])
max_dict = df_max.collect()[0].asDict()
# Expand each array column into one column per element, up to its maximum size
df_result = df.select('id', *[df[col][i] for col in columns for i in range(max_dict[col])])
df_result.show()
Output:
+---+-------+-------+-------+-------+-------+-------+
| id|var1[0]|var1[1]|var1[2]|var2[0]|var2[1]|var2[2]|
+---+-------+-------+-------+-------+-------+-------+
| a| 1| 2| 3| 1| 2| 3|
| b| 2| 3| 4| 2| 3| 4|
+---+-------+-------+-------+-------+-------+-------+
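If you prefer friendlier column names than var1[0], var1[1], ..., you can alias each element column (a small variation on the answer above, not from the original):
df_named = df.select(
    'id',
    *[df[col][i].alias('{}_{}'.format(col, i))
      for col in columns for i in range(max_dict[col])]
)
df_named.show()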

PySpark MLlib: exclude a column value in a row

I am trying to create an RDD of LabeledPoint from a DataFrame, so I can later use it with MLlib.
The code below works fine if my_target column is the first column in sparkDF. However, if my_target column is not the first column, how do I modify the code below to exclude my_target to create a correct LabeledPoint?
import pyspark.mllib.classification as clf
labeledData = sparkDF.rdd.map(lambda row: clf.LabeledPoint(row['my_target'],row[1:]))
logRegr = clf.LogisticRegressionWithSGD.train(labeledData)
That is, row[1:] excludes the value in the first column; if I want to exclude the value in column N of a row, how do I do this? Thanks!
>>> from pyspark.mllib.regression import LabeledPoint
>>> a = [(1,21,31,41),(2,22,32,42),(3,23,33,43),(4,24,34,44),(5,25,35,45)]
>>> df = spark.createDataFrame(a,["foo","bar","baz","bat"])
>>> df.show()
+---+---+---+---+
|foo|bar|baz|bat|
+---+---+---+---+
| 1| 21| 31| 41|
| 2| 22| 32| 42|
| 3| 23| 33| 43|
| 4| 24| 34| 44|
| 5| 25| 35| 45|
+---+---+---+---+
>>> N = 2   # index of the column to exclude (here the third column; indexing starts at 0)
>>> labeledData = df.rdd.map(lambda row: LabeledPoint(row['foo'], row[:N] + row[N+1:]))
# row[:N] + row[N+1:] concatenates the values before and after column N, so column N is excluded
>>> labeledData.collect()
[LabeledPoint(1.0, [1.0,21.0,41.0]), LabeledPoint(2.0, [2.0,22.0,42.0]), LabeledPoint(3.0, [3.0,23.0,43.0]), LabeledPoint(4.0, [4.0,24.0,44.0]), LabeledPoint(5.0, [5.0,25.0,45.0])]
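To avoid hard-coding N, you can look the index up from the column name (a sketch using the df above; the target column is assumed to be 'foo' here, and it is excluded from the features):
>>> from pyspark.mllib.regression import LabeledPoint
>>> target = 'foo'                 # hypothetical target column for this example
>>> N = df.columns.index(target)   # position of the column to exclude
>>> labeledData = df.rdd.map(lambda row: LabeledPoint(row[target], row[:N] + row[N+1:]))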