How to remove outliers from multiple columns in pyspark using mean and standard deviation - pyspark

I have the data frame below and I want to remove outliers from defined columns, in this example price and income. Outliers should be removed per group of data, here defined by the 'cd' and 'segment' columns, and based on 5 standard deviations from the mean.
data = [
    ('a', '1', 20, 10),
    ('a', '1', 30, 16),
    ('a', '1', 50, 91),
    ('a', '1', 60, 34),
    ('a', '1', 200, 23),
    ('a', '2', 33, 87),
    ('a', '2', 86, 90),
    ('a', '2', 89, 35),
    ('a', '2', 90, 24),
    ('a', '2', 40, 97),
    ('a', '2', 1, 21),
    ('b', '1', 45, 96),
    ('b', '1', 56, 99),
    ('b', '1', 89, 23),
    ('b', '1', 98, 64),
    ('b', '2', 86, 42),
    ('b', '2', 45, 54),
    ('b', '2', 67, 95),
    ('b', '2', 86, 70),
    ('b', '2', 91, 64),
    ('b', '2', 2, 53),
    ('b', '2', 4, 87)
]
data = spark.createDataFrame(data, ['cd', 'segment', 'price', 'income'])
I have used the code below to remove outliers but this would work only for one column.
import pyspark.sql.functions as f

mean_std = (
    data
    .groupBy('cd', 'segment')
    .agg(
        *[f.mean(colName).alias('mean_{}'.format(colName)) for colName in ['price']],
        *[f.stddev(colName).alias('stddev_{}'.format(colName)) for colName in ['price']])
)
mean_columns = ['mean_price']
std_columns = ['stddev_price']
upper = mean_std
for col_1 in mean_columns:
    for col_2 in std_columns:
        if col_1 != col_2:
            name = col_1 + '_upper_limit'
            upper = upper.withColumn(name, f.col(col_1) + f.col(col_2) * 5)
lower = upper
for col_1 in mean_columns:
    for col_2 in std_columns:
        if col_1 != col_2:
            name = col_1 + '_lower_limit'
            lower = lower.withColumn(name, f.col(col_1) - f.col(col_2) * 5)
outliers = (data.join(lower,
                      how='left',
                      on=['cd', 'segment'])
            .withColumn('is_outlier_price',
                        f.when((f.col('price') > f.col('mean_price_upper_limit')) |
                               (f.col('price') < f.col('mean_price_lower_limit')), 1)
                        .otherwise(None))
            )
My final output should have a flag column for each variable stating whether the row is an outlier: 1 = remove, 0 = keep.
Really appreciate any help on this.

Your code works almost 100% fine. All you have to do is to replace the single fixed column name with an array of column names and then loop over this array:
import pyspark.sql.functions as F

numeric_cols = ['price', 'income']
mean_std = (
    data
    .groupBy('cd', 'segment')
    .agg(
        *[F.mean(colName).alias('mean_{}'.format(colName)) for colName in numeric_cols],
        *[F.stddev(colName).alias('stddev_{}'.format(colName)) for colName in numeric_cols])
)
mean_std is now a dataframe with two columns (mean_... and stddev_...) per element of numeric_cols.
In the next step we calculate the lower and upper limit per element of numeric_cols:
mean_std_min_max = mean_std
for colName in numeric_cols:
    meanCol = 'mean_{}'.format(colName)
    stddevCol = 'stddev_{}'.format(colName)
    minCol = 'min_{}'.format(colName)
    maxCol = 'max_{}'.format(colName)
    mean_std_min_max = mean_std_min_max.withColumn(minCol, F.col(meanCol) - 5 * F.col(stddevCol))
    mean_std_min_max = mean_std_min_max.withColumn(maxCol, F.col(meanCol) + 5 * F.col(stddevCol))
mean_std_min_max now contains the two additional columns min_... and max_... per element of numeric_cols.
Finally the join, followed by the calculation of the is_outlier_... columns as before:
outliers = data.join(mean_std_min_max, how='left', on=['cd', 'segment'])
for colName in numeric_cols:
    isOutlierCol = 'is_outlier_{}'.format(colName)
    minCol = 'min_{}'.format(colName)
    maxCol = 'max_{}'.format(colName)
    meanCol = 'mean_{}'.format(colName)
    stddevCol = 'stddev_{}'.format(colName)
    outliers = outliers.withColumn(isOutlierCol,
        F.when((F.col(colName) > F.col(maxCol)) | (F.col(colName) < F.col(minCol)), 1).otherwise(0))
    outliers = outliers.drop(minCol, maxCol, meanCol, stddevCol)
The last line of the loop is only to clean up and drop the intermediate columns. It might be helpful to comment it out.
The final result is:
+---+-------+-----+------+----------------+-----------------+
| cd|segment|price|income|is_outlier_price|is_outlier_income|
+---+-------+-----+------+----------------+-----------------+
| b| 2| 86| 42| 0| 0|
| b| 2| 45| 54| 0| 0|
| b| 2| 67| 95| 0| 0|
| b| 2| 86| 70| 0| 0|
| b| 2| 91| 64| 0| 0|
+---+-------+-----+------+----------------+-----------------+
only showing top 5 rows
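If you then want to actually drop the flagged rows rather than just mark them (as the question title suggests), a minimal follow-up sketch using the outliers DataFrame and numeric_cols from above:
# Keep only rows that are not flagged as an outlier in any numeric column.
clean = outliers
for colName in numeric_cols:
    clean = clean.filter(F.col('is_outlier_{}'.format(colName)) == 0)
# Optionally drop the flag columns afterwards.
clean = clean.drop(*['is_outlier_{}'.format(c) for c in numeric_cols])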

You can use a list comprehension with F.when.
A very simplified example of your problem:
import pyspark.sql.functions as F

tst1 = sqlContext.createDataFrame(
    [(1, 2, 3, 4, 1, 10), (1, 3, 5, 7, 2, 11), (9, 9, 10, 6, 2, 9), (2, 4, 90, 9, 1, 2),
     (2, 10, 3, 4, 1, 7), (3, 5, 11, 5, 7, 8), (10, 9, 12, 6, 7, 9), (3, 6, 99, 8, 1, 9)],
    schema=['val1', 'val1_low_lim', 'val1_upper_lim', 'val2', 'val2_low_lim', 'val2_upper_lim'])
tst_res = tst1.select(
    tst1.columns +
    [(F.when((F.col(coln) < F.col(coln + '_upper_lim')) & (F.col(coln) > F.col(coln + '_low_lim')), 1)
       .otherwise(0)).alias(coln + '_valid')
     for coln in tst1.columns if '_lim' not in coln])
The results:
tst_res.show()
+----+------------+--------------+----+------------+--------------+----------+----------+
|val1|val1_low_lim|val1_upper_lim|val2|val2_low_lim|val2_upper_lim|val1_valid|val2_valid|
+----+------------+--------------+----+------------+--------------+----------+----------+
| 1| 2| 3| 4| 1| 10| 0| 1|
| 1| 3| 5| 7| 2| 11| 0| 1|
| 9| 9| 10| 6| 2| 9| 0| 1|
| 2| 4| 90| 9| 1| 2| 0| 0|
| 2| 10| 3| 4| 1| 7| 0| 1|
| 3| 5| 11| 5| 7| 8| 0| 0|
| 10| 9| 12| 6| 7| 9| 1| 0|
| 3| 6| 99| 8| 1| 9| 0| 1|
+----+------------+--------------+----+------------+--------------+----------+----------+
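Applied to the original question, the same pattern can produce all the flag columns in a single select after the join from the first answer. A sketch, assuming the joined DataFrame (data joined with mean_std_min_max, still containing the min_/max_ columns) is called joined:
import pyspark.sql.functions as F

flag_cols = [
    F.when((F.col(c) > F.col('max_' + c)) | (F.col(c) < F.col('min_' + c)), 1)
     .otherwise(0)
     .alias('is_outlier_' + c)
    for c in ['price', 'income']
]
outliers = joined.select(joined.columns + flag_cols)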

Related

How to compare value of one row with all the other rows in PySpark on grouped values

Problem statement
Consider the following data (see code generation at the bottom)
+-----+-----+-------+--------+
|index|group|low_num|high_num|
+-----+-----+-------+--------+
| 0| 1| 1| 1|
| 1| 1| 2| 2|
| 2| 1| 3| 3|
| 3| 2| 1| 3|
+-----+-----+-------+--------+
Then, for a given index, I want to count how many times that row's high_num is greater than low_num across all rows in its group.
For instance, consider the second row, with index 1. Index 1 is in group 1 and its high_num is 2. That high_num is greater than the low_num on index 0, equal to the low_num on index 1, and smaller than the low_num on index 2. So the high_num of index 1 is greater than a low_num in the group exactly once, and I want the value in the answer column to be 1.
Dataset with desired output
+-----+-----+-------+--------+-------+
|index|group|low_num|high_num|desired|
+-----+-----+-------+--------+-------+
| 0| 1| 1| 1| 0|
| 1| 1| 2| 2| 1|
| 2| 1| 3| 3| 2|
| 3| 2| 1| 3| 1|
+-----+-----+-------+--------+-------+
Dataset generation code
from pyspark.sql import SparkSession

spark = (
    SparkSession
    .builder
    .getOrCreate()
)

## Example df
## Note the inclusion of "desired" which is the desired output.
df = spark.createDataFrame(
    [
        (0, 1, 1, 1, 0),
        (1, 1, 2, 2, 1),
        (2, 1, 3, 3, 2),
        (3, 2, 1, 3, 1)
    ],
    schema=["index", "group", "low_num", "high_num", "desired"]
)
Pseudocode that might have solved the problem
The pseudocode might look like this:
import pyspark.sql.functions as F
from pyspark.sql.window import Window

w_spec = Window.partitionBy("group").rowsBetween(
    Window.unboundedPreceding, Window.unboundedFollowing)

## F.collect_list_when does not exist
## F.current_col does not exist
## Probably wouldn't work like this anyways
ddf = df.withColumn("Counts",
    F.size(F.collect_list_when(
        F.current_col("high_num") > F.col("low_num"), 1
    ).otherwise(None).over(w_spec))
)
You can do a filter on the collect_list, and check its size:
import pyspark.sql.functions as F

df2 = df.withColumn(
    'desired',
    F.expr('size(filter(collect_list(low_num) over (partition by group), x -> x < high_num))')
)
df2.show()
+-----+-----+-------+--------+-------+
|index|group|low_num|high_num|desired|
+-----+-----+-------+--------+-------+
| 0| 1| 1| 1| 0|
| 1| 1| 2| 2| 1|
| 2| 1| 3| 3| 2|
| 3| 2| 1| 3| 1|
+-----+-----+-------+--------+-------+
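For reference, a sketch of the same idea written with the DataFrame API instead of expr, assuming Spark 3.1+ where pyspark.sql.functions.filter on arrays is available:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = Window.partitionBy('group')

# Collect all low_num values of the group, keep those below the row's high_num, and count them.
df2 = df.withColumn(
    'desired',
    F.size(F.filter(F.collect_list('low_num').over(w), lambda x: x < F.col('high_num')))
)
df2.show()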

How to create rows and increment it in given df in pyspark

What I want is to create a new row based on the dataframe I have, which looks like the following:
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import to_date

TEST_schema = StructType([StructField("date", StringType(), True),
                          StructField("col1", IntegerType(), True),
                          StructField("col2", IntegerType(), True)])
TEST_data = [('2020-08-17', 0, 0), ('2020-08-18', 2, 1), ('2020-08-19', 0, 2),
             ('2020-08-20', 3, 0), ('2020-08-21', 4, 2), ('2020-08-22', 1, 3),
             ('2020-08-23', 2, 2), ('2020-08-24', 1, 2), ('2020-08-25', 3, 1)]
rdd3 = sc.parallelize(TEST_data)
TEST_df = sqlContext.createDataFrame(TEST_data, TEST_schema)
TEST_df = TEST_df.withColumn("date", to_date("date", 'yyyy-MM-dd'))
TEST_df.show()
+----------+----+----+
| date|col1|col2|
+----------+----+----+
|2020-08-17| 0| 0|
|2020-08-18| 2| 1|
|2020-08-19| 0| 2|
|2020-08-20| 3| 0|
|2020-08-21| 4| 2|
|2020-08-22| 1| 3|
|2020-08-23| 2| 2|
|2020-08-24| 1| 2|
|2020-08-25| 3| 1|
+----------+----+----+
Let's say I want to add a row for today's date, which is current_date(), and calculate col1 as follows: if col1 > 0 return col1 + col2, otherwise 0, where date == yesterday's date, which is current_date() - 1.
Calculate col2 as follows: coalesce(lag(col2), 0).
so my result dataframe would be something like this:
+----------+----+----+
| date|col1|want|
+----------+----+----+
|2020-08-17| 0| 0|
|2020-08-18| 2| 0|
|2020-08-19| 0| 1|
|2020-08-20| 3| 2|
|2020-08-21| 4| 0|
|2020-08-22| 1| 2|
|2020-08-23| 2| 3|
|2020-08-24| 1| 2|
|2020-08-25| 3| 2|
|2020-08-26| 4| 1|
+----------+----+----+
This would be easy with the withColumn (column-based) method, but I want to know how to do it with rows. My initial idea was to calculate column by column first, then transpose it to make it row-based.
IIUC, you can try the following:
Step-1: create a new dataframe with a single row having current_date() as date, nulls for col1 and col2 and then union it back to the TEST_df (Note: change all 2020-08-26 to current_date() in your final code):
df_new = TEST_df.union(spark.sql("select '2020-08-26', null, null"))
Edit: in practice, the data are partitioned and each partition should add one row, so you can do something like the following:
from pyspark.sql.functions import current_date, col, lit

# columns used for Window partitionBy
cols_part = ['pcol1', 'pcol2']

df_today = TEST_df.select([
    (current_date() if c == 'date' else col(c) if c in cols_part else lit(None)).alias(c)
    for c in TEST_df.columns
]).distinct()
df_new = TEST_df.union(df_today)
Step-2: do calculations to fill the above null values:
df_new.selectExpr(
    "date",
    "IF(date < '2020-08-26', col1, lag(IF(col1>0, col1+col2, 0)) over(order by date)) as col1",
    "lag(col2, 1, 0) over(order by date) as col2"
).show()
+----------+----+----+
| date|col1|col2|
+----------+----+----+
|2020-08-17| 0| 0|
|2020-08-18| 2| 0|
|2020-08-19| 0| 1|
|2020-08-20| 3| 2|
|2020-08-21| 4| 0|
|2020-08-22| 1| 2|
|2020-08-23| 2| 3|
|2020-08-24| 1| 2|
|2020-08-25| 3| 2|
|2020-08-26| 4| 1|
+----------+----+----+
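As noted in Step-1, the hard-coded '2020-08-26' is only there to make the example reproducible; in the final code the same expression can be written against current_date(), roughly like this:
df_new.selectExpr(
    "date",
    "IF(date < current_date(), col1, lag(IF(col1 > 0, col1 + col2, 0)) over (order by date)) as col1",
    "lag(col2, 1, 0) over (order by date) as col2"
).show()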

Spark Dataframe: Group and rank rows on a certain column value

I am trying to rank a column when the "ID" column numbering starts from 1 to max and then resets from 1.
So, the first three rows have a continuous numbering on "ID"; hence these should be grouped with group rank =1. Rows four and five are in another group, group rank = 2.
The rows are sorted by the "rownum" column. I am aware of the row_number window function, but I don't think I can apply it to this use case as there is no constant window. I can only think of looping through each row in the dataframe, but I am not sure how to update a column when the numbering resets to 1.
val df = Seq(
  (1, 1),
  (2, 2),
  (3, 3),
  (4, 1),
  (5, 2),
  (6, 1),
  (7, 1),
  (8, 2)
).toDF("rownum", "ID")
df.show()
The expected result is the group_rank_of_ID column shown in the answers below.
You can do it with 2 window-functions, the first one to flag the state, the second one to calculate a running sum:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

df
  .withColumn("increase", $"ID" > lag($"ID", 1).over(Window.orderBy($"rownum")))
  .withColumn("group_rank_of_ID", sum(when($"increase", lit(0)).otherwise(lit(1))).over(Window.orderBy($"rownum")))
  .drop($"increase")
  .show()
gives:
+------+---+----------------+
|rownum| ID|group_rank_of_ID|
+------+---+----------------+
| 1| 1| 1|
| 2| 2| 1|
| 3| 3| 1|
| 4| 1| 2|
| 5| 2| 2|
| 6| 1| 3|
| 7| 1| 4|
| 8| 2| 4|
+------+---+----------------+
As #Prithvi noted, we can use lead here (the snippet below uses the closely related lag, comparing each ID with the previous one).
The tricky part is that, in order to use a window function such as lag, we need to at least provide the ordering.
Consider
val prevID = lag('ID, 1, -1) over Window.orderBy('rownum)
val isNewGroup = 'ID <= prevID cast "integer"
val group_rank_of_ID = sum(isNewGroup) over Window.orderBy('rownum)

/* you can try
df.withColumn("intermediate", prevID).show
// ^^^^^^^-- can be `isNewGroup`, or other vals
*/

df.withColumn("group_rank_of_ID", group_rank_of_ID).show
/* returns
+------+---+----------------+
|rownum| ID|group_rank_of_ID|
+------+---+----------------+
| 1| 1| 0|
| 2| 2| 0|
| 3| 3| 0|
| 4| 1| 1|
| 5| 2| 1|
| 6| 1| 2|
| 7| 1| 3|
| 8| 2| 3|
+------+---+----------------+
*/
df.withColumn("group_rank_of_ID", group_rank_of_ID + 1).show
/* returns
+------+---+----------------+
|rownum| ID|group_rank_of_ID|
+------+---+----------------+
| 1| 1| 1|
| 2| 2| 1|
| 3| 3| 1|
| 4| 1| 2|
| 5| 2| 2|
| 6| 1| 3|
| 7| 1| 4|
| 8| 2| 4|
+------+---+----------------+
*/
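Since the rest of this page is PySpark, here is a rough PySpark translation of the same two-step window approach (flag the decrease with lag, then take a running sum); a sketch assuming a DataFrame df with rownum and ID columns:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = Window.orderBy('rownum')

df2 = (df
       .withColumn('increase', F.col('ID') > F.lag('ID', 1).over(w))
       # lag is null on the first row, so when() falls through to otherwise(1), starting the count at 1
       .withColumn('group_rank_of_ID',
                   F.sum(F.when(F.col('increase'), F.lit(0)).otherwise(F.lit(1))).over(w))
       .drop('increase'))
df2.show()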

Split large array columns into multiple columns - Pyspark

I have:
+---+-------+-------+
| id| var1| var2|
+---+-------+-------+
| a|[1,2,3]|[1,2,3]|
| b|[2,3,4]|[2,3,4]|
+---+-------+-------+
I want:
+---+-------+-------+-------+-------+-------+-------+
| id|var1[0]|var1[1]|var1[2]|var2[0]|var2[1]|var2[2]|
+---+-------+-------+-------+-------+-------+-------+
| a| 1| 2| 3| 1| 2| 3|
| b| 2| 3| 4| 2| 3| 4|
+---+-------+-------+-------+-------+-------+-------+
The solution provided by How to split a list to multiple columns in Pyspark?
df1.select('id', df1.var1[0], df1.var1[1], ...).show()
works, but some of my arrays are very long (max 332).
How can I write this so that it accounts for arrays of any length?
This solution will work for your problem, no matter the number of initial columns and the size of your arrays. Moreover, if a column has different array sizes (eg [1,2], [3,4,5]), it will result in the maximum number of columns with null values filling the gap.
from pyspark.sql import functions as F
df = spark.createDataFrame(sc.parallelize([['a', [1,2,3], [1,2,3]], ['b', [2,3,4], [2,3,4]]]), ["id", "var1", "var2"])
columns = df.drop('id').columns
df_sizes = df.select(*[F.size(col).alias(col) for col in columns])
df_max = df_sizes.agg(*[F.max(col).alias(col) for col in columns])
max_dict = df_max.collect()[0].asDict()
df_result = df.select('id', *[df[col][i] for col in columns for i in range(max_dict[col])])
df_result.show()
>>>
+---+-------+-------+-------+-------+-------+-------+
| id|var1[0]|var1[1]|var1[2]|var2[0]|var2[1]|var2[2]|
+---+-------+-------+-------+-------+-------+-------+
| a| 1| 2| 3| 1| 2| 3|
| b| 2| 3| 4| 2| 3| 4|
+---+-------+-------+-------+-------+-------+-------+
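If you prefer flat column names like var1_0 instead of var1[0], a small variation of the final select aliases each element:
df_result = df.select(
    'id',
    *[df[col][i].alias('{}_{}'.format(col, i)) for col in columns for i in range(max_dict[col])]
)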

Calculate links between nodes using Spark

I have the following two DataFrames in Spark 2.2 and Scala 2.11. The DataFrame edges defines the edges of a directed graph, while the DataFrame types defines the type of each node.
edges =
+-----+-----+----+
|from |to |attr|
+-----+-----+----+
| 1| 0| 1|
| 1| 4| 1|
| 2| 2| 1|
| 4| 3| 1|
| 4| 5| 1|
+-----+-----+----+
types =
+------+---------+
|nodeId|type |
+------+---------+
| 0| 0|
| 1| 0|
| 2| 2|
| 3| 4|
| 4| 4|
| 5| 4|
+------+---------+
For each node, I want to know the number of edges to nodes of the same type. Please note that I only want to count the edges outgoing from a node, since I am dealing with a directed graph.
To reach this objective, I joined both DataFrames:
val graphDF = edges
  .join(types, types("nodeId") === edges("from"), "left")
  .drop("nodeId")
  .withColumnRenamed("type", "type_from")
  .join(types, types("nodeId") === edges("to"), "left")
  .drop("nodeId")
  .withColumnRenamed("type", "type_to")
I obtained the following new DataFrame graphDF:
+-----+-----+----+---------------+---------------+
|from |to |attr|type_from |type_to |
+-----+-----+----+---------------+---------------+
| 1| 0| 1| 0| 0|
| 1| 4| 1| 0| 4|
| 2| 2| 1| 2| 2|
| 4| 3| 1| 4| 4|
| 4| 5| 1| 4| 4|
+-----+-----+----+---------------+---------------+
Now I need to get the following final result:
+------+---------+---------+
|nodeId|numLinks |type |
+------+---------+---------+
| 0| 0| 0|
| 1| 1| 0|
| 2| 0| 2|
| 3| 0| 4|
| 4| 2| 4|
| 5| 0| 4|
+------+---------+---------+
I was thinking about using groupBy and agg(count(...)), but I do not know how to deal with the directed edges.
Update:
numLinks is calculated as the number of edges outgoing from a given node. For example, node 5 does not have any outgoing edges (only the incoming edge 4->5, see the DataFrame edges). The same applies to node 0. But node 4 has two outgoing edges (4->3 and 4->5).
My solution:
This is my solution, but it lacks those nodes that have 0 links.
graphDF.filter("from != to").filter("type_from == type_to").groupBy("from").agg(count("from") as "numLinks").show()
You can filter, aggregate by id and type and add missing nodes using types:
val graphDF = Seq(
  (1, 0, 1, 0, 0), (1, 4, 1, 0, 4), (2, 2, 1, 2, 2),
  (4, 3, 1, 4, 4), (4, 5, 1, 4, 4)
).toDF("from", "to", "attr", "type_from", "type_to")

val types = Seq(
  (0, 0), (1, 0), (2, 2), (3, 4), (4, 4), (5, 4)
).toDF("nodeId", "type")
graphDF
  // I want to know the number of edges to the nodes of the same type
  .where($"type_from" === $"type_to")
  // I only want to count the edges outgoing from a node,
  .groupBy($"from" as "nodeId", $"type_from" as "type")
  .agg(count("*") as "numLinks")
  // but it lacks those nodes that have 0 links.
  .join(types, Seq("nodeId", "type"), "rightouter")
  .na.fill(0)
// +------+----+--------+
// |nodeId|type|numLinks|
// +------+----+--------+
// | 0| 0| 0|
// | 1| 0| 1|
// | 2| 2| 1|
// | 3| 4| 0|
// | 4| 4| 2|
// | 5| 4| 0|
// +------+----+--------+
To skip self-links add $"from" =!= $"to" to the selection:
graphDF
  .where($"type_from" === $"type_to" && $"from" =!= $"to")
  .groupBy($"from" as "nodeId", $"type_from" as "type")
  .agg(count("*") as "numLinks")
  .join(types, Seq("nodeId", "type"), "rightouter")
  .na.fill(0)
// +------+----+--------+
// |nodeId|type|numLinks|
// +------+----+--------+
// | 0| 0| 0|
// | 1| 0| 1|
// | 2| 2| 0|
// | 3| 4| 0|
// | 4| 4| 2|
// | 5| 4| 0|
// +------+----+--------+
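For readers working in PySpark rather than Scala, a rough translation of the same filter / aggregate / right-outer-join idea, assuming graphDF and types exist as PySpark DataFrames with the same columns:
from pyspark.sql import functions as F

links = (graphDF
         .where((F.col('type_from') == F.col('type_to')) & (F.col('from') != F.col('to')))
         .groupBy(F.col('from').alias('nodeId'), F.col('type_from').alias('type'))
         .agg(F.count(F.lit(1)).alias('numLinks'))
         # the right outer join brings back the nodes with no outgoing same-type edges
         .join(types, ['nodeId', 'type'], 'right_outer')
         .na.fill(0, ['numLinks']))
links.show()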