How to check specific partition data from Spark partitions in Pyspark - pyspark

I have created two DataFrames in PySpark from my Hive table as follows:
data1 = spark.sql("""
    SELECT ID, MODEL_NUMBER, MODEL_YEAR, COUNTRY_CODE
    FROM MODEL_TABLE1
    WHERE COUNTRY_CODE IN ('IND', 'CHN', 'USA', 'RUS', 'AUS')
""")
Each country has millions of unique IDs in alphanumeric format.
data2 = spark.sql("""
    SELECT ID, MODEL_NUMBER, MODEL_YEAR, COUNTRY_CODE
    FROM MODEL_TABLE2
    WHERE COUNTRY_CODE IN ('IND', 'CHN')
""")
I want to join these two DataFrames on the ID column using PySpark.
How can we repartition the data so that it gets distributed uniformly across the partitions?
Can I use the code below to repartition my data?
newdf1 = data1.repartition(100, "ID")
newdf2 = data2.repartition(100, "ID")
What would be the best way to partition the data so that the join works faster?

As far as I know, your approach of repartitioning by the ID column is correct. Consider the following as a proof of concept, using spark_partition_id() to get the corresponding partition id:
Create some dummy data
import pandas as pd
import numpy as np
from pyspark.sql.functions import spark_partition_id

def create_dummy_data():
    data = np.vstack([np.random.randint(0, 5, size=10),
                      np.random.random(10)])
    df = pd.DataFrame(data.T, columns=["id", "values"])
    return spark.createDataFrame(df)

def show_partition_id(df):
    """Helper function to show partition."""
    return df.select(*df.columns, spark_partition_id().alias("pid")).show()

df1 = create_dummy_data()
df2 = create_dummy_data()
Show partition id before repartitioning
show_partition_id(df1)
+---+-------------------+---+
| id| values|pid|
+---+-------------------+---+
|1.0| 0.6051170383675885| 0|
|3.0| 0.4613520717857513| 0|
|0.0| 0.797734780966592| 1|
|2.0|0.35594664760134587| 1|
|2.0|0.08223203758144915| 2|
|0.0| 0.3112880092048709| 2|
|4.0| 0.2689639324292137| 3|
|1.0| 0.6466782159542134| 3|
|0.0| 0.8340472796153436| 3|
|4.0| 0.8054752411745659| 3|
+---+-------------------+---+
show_partition_id(df2)
+---+-------------------+---+
| id| values|pid|
+---+-------------------+---+
|4.0| 0.8950517294190533| 0|
|3.0| 0.4084717827425539| 0|
|3.0| 0.798146627431009| 1|
|4.0| 0.8039931522181247| 1|
|3.0| 0.732125135531736| 2|
|0.0| 0.536328329270619| 2|
|1.0|0.25952064363007576| 3|
|2.0| 0.1958334111199559| 3|
|0.0| 0.728098753644471| 3|
|0.0| 0.9825387111807906| 3|
+---+-------------------+---+
Show partition id after repartitioning
show_partition_id(df1.repartition(2, "id"))
+---+-------------------+---+
| id| values|pid|
+---+-------------------+---+
|1.0| 0.6051170383675885| 0|
|3.0| 0.4613520717857513| 0|
|4.0| 0.2689639324292137| 0|
|1.0| 0.6466782159542134| 0|
|4.0| 0.8054752411745659| 0|
|0.0| 0.797734780966592| 1|
|2.0|0.35594664760134587| 1|
|2.0|0.08223203758144915| 1|
|0.0| 0.3112880092048709| 1|
|0.0| 0.8340472796153436| 1|
+---+-------------------+---+
show_partition_id(df2.repartition(2, "id"))
+---+-------------------+---+
| id| values|pid|
+---+-------------------+---+
|4.0| 0.8950517294190533| 0|
|3.0| 0.4084717827425539| 0|
|3.0| 0.798146627431009| 0|
|4.0| 0.8039931522181247| 0|
|3.0| 0.732125135531736| 0|
|1.0|0.25952064363007576| 0|
|0.0| 0.536328329270619| 1|
|2.0| 0.1958334111199559| 1|
|0.0| 0.728098753644471| 1|
|0.0| 0.9825387111807906| 1|
+---+-------------------+---+
After repartitioning, ids 0 and 2 are located on the same partition and the rest are on the other partition.
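For completeness, here is a minimal sketch of what the join from the original question could look like once both sides are repartitioned on the join key (it reuses the data1 and data2 DataFrames from the question; 100 is just an illustrative partition count):

newdf1 = data1.repartition(100, "ID")
newdf2 = data2.repartition(100, "ID")

# Both sides are now hash-partitioned on ID; explain() lets you check
# whether Spark reuses that partitioning or inserts extra exchanges.
joined = newdf1.join(newdf2, on="ID", how="inner")
joined.explain()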

Related

Get % of rows that have a unique value by id

I have a PySpark dataframe that looks like this:
import pandas as pd

spark.createDataFrame(
    pd.DataFrame({'ch_id': [1, 1, 1, 1, 1,
                            2, 2, 2, 2],
                  'e_id': [0, 0, 1, 2, 2,
                           0, 0, 1, 1],
                  'seg': ['h', 's', 's', 'a', 's',
                          'h', 's', 's', 'h']})
).show()
+-----+----+---+
|ch_id|e_id|seg|
+-----+----+---+
| 1| 0| h|
| 1| 0| s|
| 1| 1| s|
| 1| 2| a|
| 1| 2| s|
| 2| 0| h|
| 2| 0| s|
| 2| 1| s|
| 2| 1| h|
+-----+----+---+
I would like, for every ch_id, to get:
the % of e_id values for which there is one unique value of seg
The output would look like this:
+-----+-------+
|ch_id|%_major|
+-----+-------+
|    1|   66.6|
|    2|    0.0|
+-----+-------+
How could I achieve that in PySpark?
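A minimal sketch of one possible interpretation (per ch_id, the percentage of e_id groups that have exactly one distinct seg value; this is only my reading of "%_major" and may differ from the intended definition; it assumes the example DataFrame above has been assigned to df):

from pyspark.sql import functions as F

result = (
    df
    .groupBy('ch_id', 'e_id')
    .agg(F.countDistinct('seg').alias('n_seg'))  # distinct seg values per (ch_id, e_id)
    .groupBy('ch_id')
    .agg(F.round(100 * F.avg((F.col('n_seg') == 1).cast('double')), 1).alias('%_major'))
)
result.show()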

How to assign non unique incrementing index (index markup) in Spark SQL, set back to 0 on joining the specific value from another dataframe

There is a DataFrame of data like
|timestamp |value|
|2021-01-01 12:00:00| 10.0|
|2021-01-01 12:00:01| 10.0|
|2021-01-01 12:00:02| 10.0|
|2021-01-01 12:00:03| 10.0|
|2021-01-01 12:00:04| 10.0|
|2021-01-01 12:00:05| 10.0|
|2021-01-01 12:00:06| 10.0|
|2021-01-01 12:00:07| 10.0|
and DataFrame of events like
|timestamp |event|
|2021-01-01 12:00:01| true|
|2021-01-01 12:00:05| true|
Based on that, I'd like to add one more column to the initial DataFrame that is an index of the data since the beginning of the event:
|timestamp |value|index|
|2021-01-01 12:00:00| 10.0| 1|
|2021-01-01 12:00:01| 10.0| 2|
|2021-01-01 12:00:02| 10.0| 3|
|2021-01-01 12:00:03| 10.0| 4|
|2021-01-01 12:00:04| 10.0| 5|
|2021-01-01 12:00:05| 10.0| 1|
|2021-01-01 12:00:06| 10.0| 2|
|2021-01-01 12:00:07| 10.0| 3|
I have tried
.withColumn("index", monotonically_increasing_id())
but there is no way to reset it to 0 when joining with the other DataFrame. So, any ideas are welcome.
You can join the data df with the events df on timestamp, then use a conditional cumulative sum on the event column to define groups. Finally, partition by the group column and use row_number to set the index.
Something like this:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val result = data.join(
    events,
    Seq("timestamp"),
    "left"
  ).withColumn(
    "group",
    sum(when(col("event"), 1).otherwise(0)).over(Window.orderBy("timestamp"))
  ).withColumn(
    "index",
    row_number().over(Window.partitionBy("group").orderBy("timestamp"))
  ).drop("group", "event")
result.show
//+-------------------+-----+-----+
//| timestamp|value|index|
//+-------------------+-----+-----+
//|2021-01-01 12:00:00| 10.0| 1|
//|2021-01-01 12:00:01| 10.0| 1|
//|2021-01-01 12:00:02| 10.0| 2|
//|2021-01-01 12:00:03| 10.0| 3|
//|2021-01-01 12:00:04| 10.0| 4|
//|2021-01-01 12:00:05| 10.0| 1|
//|2021-01-01 12:00:06| 10.0| 2|
//|2021-01-01 12:00:07| 10.0| 3|
//+-------------------+-----+-----+
You could use a Window function to achieve it:
from pyspark.sql import SparkSession, Row, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
Example data after joining the original DFs (I changed the timestamp column to integer type for simplicity):
df = spark.createDataFrame([
    Row(timestamp=0, value='foo', event=True),
    Row(timestamp=1, value='foo', event=None),
    Row(timestamp=2, value='foo', event=None),
    Row(timestamp=3, value='foo', event=None),
    Row(timestamp=4, value='foo', event=None),
    Row(timestamp=5, value='foo', event=True),
    Row(timestamp=6, value='foo', event=None),
    Row(timestamp=7, value='foo', event=None),
])
Then I create a group_id column by forward-filling the timestamp of each event row over the following rows, so every row in the same "group" shares the same group_id.
This group_id can then be used to create the index using F.row_number():
(
    df
    .withColumn('group_id', F.when(F.col('event'), F.col('timestamp')))
    .withColumn('group_id', F.last('group_id', ignorenulls=True).over(Window.orderBy('timestamp')))
    .withColumn('index', F.row_number().over(Window.partitionBy('group_id').orderBy('timestamp')))
    .show()
)
# Output:
+---------+-----+-----+--------+-----+
|timestamp|value|event|group_id|index|
+---------+-----+-----+--------+-----+
| 0| foo| true| 0| 1|
| 1| foo| null| 0| 2|
| 2| foo| null| 0| 3|
| 3| foo| null| 0| 4|
| 4| foo| null| 0| 5|
| 5| foo| true| 5| 1|
| 6| foo| null| 5| 2|
| 7| foo| null| 5| 3|
+---------+-----+-----+--------+-----+
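To end up with only the timestamp, value and index columns from the expected output, the helper columns can simply be dropped at the end; for example (reusing df, F and Window from above):

final_df = (
    df
    .withColumn('group_id', F.when(F.col('event'), F.col('timestamp')))
    .withColumn('group_id', F.last('group_id', ignorenulls=True).over(Window.orderBy('timestamp')))
    .withColumn('index', F.row_number().over(Window.partitionBy('group_id').orderBy('timestamp')))
    .drop('group_id', 'event')  # keep only timestamp, value, index
)
final_df.show()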

Apache Spark visualization

I'm new to Apache Spark and trying to learn visualization in Apache Spark/Databricks at the moment. I have the following CSV datasets:
Patient.csv
+---+---------+------+---+-----------------+-----------+------------+-------------+
| Id|Post_Code|Height|Age|Health_Cover_Type|Temperature|Disease_Type|Infected_Date|
+---+---------+------+---+-----------------+-----------+------------+-------------+
| 1| 2096| 131| 22| 5| 37| 4| 891717742|
| 2| 2090| 136| 18| 5| 36| 1| 881250949|
| 3| 2004| 120| 9| 2| 36| 2| 878887136|
| 4| 2185| 155| 41| 1| 36| 1| 896029926|
| 5| 2195| 145| 25| 5| 37| 1| 887100886|
| 6| 2079| 172| 52| 2| 37| 5| 871205766|
| 7| 2006| 176| 27| 1| 37| 3| 879487476|
| 8| 2605| 129| 15| 5| 36| 1| 876343336|
| 9| 2017| 145| 19| 5| 37| 4| 897281846|
| 10| 2112| 171| 47| 5| 38| 6| 882539696|
| 11| 2112| 102| 8| 5| 36| 5| 873648586|
| 12| 2086| 151| 11| 1| 35| 1| 894724066|
| 13| 2142| 148| 22| 2| 37| 1| 889446276|
| 14| 2009| 158| 57| 5| 38| 2| 887072826|
| 15| 2103| 167| 34| 1| 37| 3| 892094506|
| 16| 2095| 168| 37| 5| 36| 1| 893400966|
| 17| 2010| 156| 20| 3| 38| 5| 897313586|
| 18| 2117| 143| 17| 5| 36| 2| 875238076|
| 19| 2204| 155| 24| 4| 38| 6| 884159506|
| 20| 2103| 138| 15| 5| 37| 4| 886765356|
+---+---------+------+---+-----------------+-----------+------------+-------------+
And coverType.csv
+--------------+-----------------+
|cover_type_key| cover_type_label|
+--------------+-----------------+
| 1| Single|
| 2| Couple|
| 3| Family|
| 4| Concession|
| 5| Disable|
+--------------+-----------------+
I've managed to load both as DataFrames (PatientDF and coverTypeDF):
val PatientDF = spark.read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .option("nullValue", "NA")
  .option("timestampFormat", "yyyy-MM-dd'T'HH:mm:ss")
  .option("mode", "failfast")
  .option("path", "/spark-data/Patient.csv")
  .load()

val coverTypeDF = spark.read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .option("nullValue", "NA")
  .option("timestampFormat", "yyyy-MM-dd'T'HH:mm:ss")
  .option("mode", "failfast")
  .option("path", "/spark-data/covertype.csv")
  .load()
How do I generate a bar chart visualization to show the distribution of the different Disease_Type values in my dataset?
How do I generate a bar chart visualization to show the average Post_Code for each cover type, with string labels for the cover types?
How do I extract the year (YYYY) from the Infected_Date (represented as Unix seconds since 1/1/1970 UTC), ordering the result in descending order of year and average age?
To display charts natively in Databricks you need to use the display function on a DataFrame. For number one, we can accomplish what you'd like by aggregating the DataFrame on disease type:
display(PatientDF.groupBy("Disease_Type").count())
Then you can use the charting options to build a bar chart. You can do the same for your 2nd question, but instead of .count() use .avg("Post_Code").
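Since the string labels for the second question live in coverTypeDF, you would presumably join it in before aggregating. A rough PySpark-style sketch (assuming you have PySpark handles to the same PatientDF and coverTypeDF data; column names are taken from the CSVs above):

display(
    PatientDF
    # match each patient's cover type key to its label
    .join(coverTypeDF, PatientDF["Health_Cover_Type"] == coverTypeDF["cover_type_key"])
    .groupBy("cover_type_label")
    .avg("Post_Code")
)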
For the third question you need to use the year function after converting the Unix seconds to a timestamp, together with an orderBy.
from pyspark.sql.functions import *
display(PatientDF.select(year(to_timestamp("Infected_Date")).alias("year")).orderBy("year"))
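If you also want the average age and a descending sort, as the third question seems to ask, a fuller sketch might look like this (again a PySpark-style sketch; grouping by year and the choice of sort keys are my interpretation of the question):

from pyspark.sql.functions import year, to_timestamp, avg, desc

display(
    PatientDF
    .select(year(to_timestamp("Infected_Date")).alias("year"), "Age")
    .groupBy("year")
    .agg(avg("Age").alias("avg_age"))
    .orderBy(desc("year"), desc("avg_age"))
)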

Spark Scala Window extend result until the end

I will explain my problem using the initial DataFrame and the one I want to achieve:
val df_997 = Seq[(Int, Int, Int, Int)](
    (1,1,7,10), (1,10,4,300), (1,3,14,50), (1,20,24,70), (1,30,12,90),
    (2,10,4,900), (2,25,30,40), (2,15,21,60), (2,5,10,80)
  ).toDF("policyId", "FECMVTO", "aux", "IND_DEF")
  .orderBy(asc("policyId"), asc("FECMVTO"))
df_997.show
+--------+-------+---+-------+
|policyId|FECMVTO|aux|IND_DEF|
+--------+-------+---+-------+
| 1| 1| 7| 10|
| 1| 3| 14| 50|
| 1| 10| 4| 300|
| 1| 20| 24| 70|
| 1| 30| 12| 90|
| 2| 5| 10| 80|
| 2| 10| 4| 900|
| 2| 15| 21| 60|
| 2| 25| 30| 40|
+--------+-------+---+-------+
Imagine I have partitioned this DF by the column policyId and created the column row_num based on it to better see the Windows:
val win = Window.partitionBy("policyId").orderBy("FECMVTO")
val df_998 = df_997.withColumn("row_num",row_number().over(win))
df_998.show
+--------+-------+---+-------+-------+
|policyId|FECMVTO|aux|IND_DEF|row_num|
+--------+-------+---+-------+-------+
| 1| 1| 7| 10| 1|
| 1| 3| 14| 50| 2|
| 1| 10| 4| 300| 3|
| 1| 20| 24| 70| 4|
| 1| 30| 12| 90| 5|
| 2| 5| 10| 80| 1|
| 2| 10| 4| 900| 2|
| 2| 15| 21| 60| 3|
| 2| 25| 30| 40| 4|
+--------+-------+---+-------+-------+
Now, for each window, when the value of aux is 4, I want to set the value of the FECMVTO column to the value of IND_DEF for that row and for every following row until the end of the window.
The resulting DF would be:
+--------+-------+---+-------+-------+
|policyId|FECMVTO|aux|IND_DEF|row_num|
+--------+-------+---+-------+-------+
| 1| 1| 7| 10| 1|
| 1| 3| 14| 50| 2|
| 1| 300| 4| 300| 3|
| 1| 300| 24| 70| 4|
| 1| 300| 12| 90| 5|
| 2| 5| 10| 80| 1|
| 2| 900| 4| 900| 2|
| 2| 900| 21| 60| 3|
| 2| 900| 30| 40| 4|
+--------+-------+---+-------+-------+
Thanks for your suggestions, as I am very stuck here...
Here's one approach: first left-join the DataFrame with its aux == 4 filtered version, then apply a Window function to fill the nulls with the wanted IND_DEF values per partition, and finally conditionally recreate column FECMVTO:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  (1,1,7,10), (1,10,4,300), (1,3,14,50), (1,20,24,70), (1,30,12,90),
  (2,10,4,900), (2,25,30,40), (2,15,21,60), (2,5,10,80)
).toDF("policyId", "FECMVTO", "aux", "IND_DEF")

val win = Window.partitionBy("policyId").orderBy("FECMVTO").
  rowsBetween(Window.unboundedPreceding, 0)

val df2 = df.
  select($"policyId", $"aux", $"IND_DEF".as("IND_DEF2")).
  where($"aux" === 4)

df.join(df2, Seq("policyId", "aux"), "left_outer").
  withColumn("IND_DEF3", first($"IND_DEF2", ignoreNulls=true).over(win)).
  withColumn("FECMVTO", coalesce($"IND_DEF3", $"FECMVTO")).
  show
// +--------+---+-------+-------+--------+--------+
// |policyId|aux|FECMVTO|IND_DEF|IND_DEF2|IND_DEF3|
// +--------+---+-------+-------+--------+--------+
// | 1| 7| 1| 10| null| null|
// | 1| 14| 3| 50| null| null|
// | 1| 4| 300| 300| 300| 300|
// | 1| 24| 300| 70| null| 300|
// | 1| 12| 300| 90| null| 300|
// | 2| 10| 5| 80| null| null|
// | 2| 4| 900| 900| 900| 900|
// | 2| 21| 900| 60| null| 900|
// | 2| 30| 900| 40| null| 900|
// +--------+---+-------+-------+--------+--------+
Columns IND_DEF2, IND_DEF3 are kept only for illustration (and can certainly be dropped).
I believe the below can be a solution for your issue.
Considering input_df is your input dataframe:
// Step#1 - Filter rows with aux = 4 from input_df
val only_FECMVTO_4_df1 = input_df.filter($"aux" === 4)
// Step#2 - Overwrite FECMVTO with the IND_DEF value for those rows
val only_FECMVTO_4_df2 = only_FECMVTO_4_df1.withColumn("FECMVTO", $"IND_DEF")
// Step#3 - Remove all the records from Step#1 from input_df
val input_df_without_FECMVTO_4 = input_df.except(only_FECMVTO_4_df1)
// Combine the Step#2 output with the output of Step#3
val final_df = input_df_without_FECMVTO_4.union(only_FECMVTO_4_df2)

How do I replace null values of multiple columns with values from multiple different columns

I have a data frame like the one below:
data = [
    (1, None, 7, 10, 11, 19),
    (1, 4, None, 10, 43, 58),
    (None, 4, 7, 67, 88, 91),
    (1, None, 7, 78, 96, 32)
]
df = spark.createDataFrame(data, ["A_min", "B_min", "C_min", "A_max", "B_max", "C_max"])
df.show()
and I would want the null values in the 'min' columns to be replaced by the value from their equivalent 'max' column.
For example, null values in the A_min column should be replaced by values from the A_max column.
The result should look like the data frame below.
+-----+-----+-----+-----+-----+-----+
|A_min|B_min|C_min|A_max|B_max|C_max|
+-----+-----+-----+-----+-----+-----+
| 1| 11| 7| 10| 11| 19|
| 1| 4| 58| 10| 43| 58|
| 67| 4| 7| 67| 88| 91|
| 1| 96| 7| 78| 96| 32|
+-----+-----+-----+-----+-----+-----+
I have tried the code below by defining the columns, but clearly this does not work. I'd really appreciate any help.
min_cols = ["A_min", "B_min", "C_min"]
max_cols = ["A_max", "B_max", "C_max"]

for i in min_cols:
    df = df.withColumn(i, when(f.col(i) == '', max_cols.otherwise(col(i))))

display(df)
Assuming you have the same number of max and min columns, you can use coalesce along with Python's list comprehension to obtain your solution:
from pyspark.sql.functions import coalesce

min_cols = ["A_min", "B_min", "C_min"]
max_cols = ["A_max", "B_max", "C_max"]

df.select(*[coalesce(df[val], df[max_cols[pos]]).alias(val)
            for pos, val in enumerate(min_cols)],
          *max_cols).show()
Output:
+-----+-----+-----+-----+-----+-----+
|A_min|B_min|C_min|A_max|B_max|C_max|
+-----+-----+-----+-----+-----+-----+
| 1| 11| 7| 10| 11| 19|
| 1| 4| 58| 10| 43| 58|
| 67| 4| 7| 67| 88| 91|
| 1| 96| 7| 78| 96| 32|
+-----+-----+-----+-----+-----+-----+
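An alternative sketch closer to the loop in the original attempt (my own variant, not part of the answer above): fix each min column in place with withColumn and coalesce, assuming the min and max column lists line up:

from pyspark.sql.functions import coalesce, col

min_cols = ["A_min", "B_min", "C_min"]
max_cols = ["A_max", "B_max", "C_max"]

# Replace nulls in each min column with the value from its max counterpart
for min_c, max_c in zip(min_cols, max_cols):
    df = df.withColumn(min_c, coalesce(col(min_c), col(max_c)))

df.show()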