I'm currently trying to migrate some code to polars but noticed some performance differences in the process.
import os, platform, timeit, numpy as np, pandas as pd, polars as pl
data = np.random.rand(100000, 1)
df_pandas = pd.DataFrame(data)
df_polars = pl.DataFrame(data)
def timer(expr):
    return round(min(timeit.repeat(expr, repeat=5, number=5)), 8)
print("---- info ----")
print(f"platform={platform.platform()}; processor={platform.processor()}; CPUs={os.cpu_count()}")
print(f"python={platform.python_version()}; numpy={np.__version__}; pandas={pd.__version__}; polars={pl.__version__}")
print("---- pow(2) ----")
print("pandas:", timer(lambda: df_pandas.pow(2)))
print("polars:", timer(lambda: df_polars.select(pl.all().pow(2))))
print("---- sum ----")
print("pandas:", timer(lambda: df_pandas.sum()))
print("polars:", timer(lambda: df_polars.sum()))
The output of this snippet is
---- info ----
platform=macOS-11.6.5-x86_64-i386-64bit; processor=i386; CPUs=4
python=3.8.13; numpy=1.22.4; pandas=1.4.2; polars=0.13.47
---- pow(2) ----
pandas: 0.00147684
polars: 0.01482804
---- sum ----
pandas: 0.00300668
polars: 0.00027682
These results suggest that polars is much slower than pandas for operations that go through a select expression, but faster for ones performed directly on the DataFrame.
In reality, my DataFrame is much bigger (rows > 1,000,000, cols > 100,000), and there the performance difference is much more significant.
Any suggestions for what might be going on, and whether there is a way to achieve the same (or better) performance in polars?
In polars >= 0.13.49 the power operation is optimized into a squaring operation for certain exponents. If I run the same snippet with that version, both operations are faster than pandas.
---- info ----
platform=Linux-5.13.0-51-generic-x86_64-with-glibc2.31; processor=x86_64; CPUs=12
python=3.9.12; numpy=1.22.4; pandas=1.4.2; polars=0.13.49
---- pow(2) ----
pandas: 0.00041451
polars: 0.0003346
---- sum ----
pandas: 0.00157432
polars: 0.00011628
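If upgrading polars is not an option right away, a possible workaround (an untested assumption on my part, not something I benchmarked above) is to express the square as a plain multiplication, which avoids the generic pow kernel on older releases:

import numpy as np
import polars as pl

df_polars = pl.DataFrame(np.random.rand(100_000, 1))

# Square every column by multiplying it with itself instead of calling pow(2).
# Assumption: on polars < 0.13.49 this should be closer to the pandas timing,
# since it goes through the elementwise multiplication kernel.
squared = df_polars.select([(pl.col(c) * pl.col(c)).alias(c) for c in df_polars.columns])

On 0.13.49 and later this should make little difference, since pow(2) already takes the optimized squaring path.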
Related
I am having serious difficulty understanding why I cannot run a transform which, after waiting many minutes (sometimes hours), returns the error "Serialized Results too large".
In the transform I have a list of dates that I iterate over in a for loop to perform delta calculations within specific time intervals.
The expected dataset is the union of the per-iteration datasets and should contain 450k rows, which is not that many, but I end up with a lot of compute stages, tasks, and attempts!
The profile is already set to a Medium profile; I can't scale to another profile, and I can't set maxResultSize = 0.
Example of code:
Date_list = [All weeks from: '2021-01-01', to: '2022-01-01'] --> ~50 elements
df_total = spark.createDataframe([], schema)
df_date = []
for date in Date_list:
    tmp = df.filter(between [date, date-7days]).withColumn('example', F.lit(date))
    ........
    df2 = df.join(tmp, 'column', 'inner').......
    df_date += [df2]
df_total = df_total.unionByName(union_many(*df_date))
return df_total
Don't pay attention to the syntax; this is just an example to show that there is a series of operations inside the loop. My desired output is a DataFrame which contains the DataFrame of each iteration!
Thank you!!
Initial Theory
You are hitting a known limitation of Spark, similar to the findings discussed over here.
However, there are ways to work around this by re-thinking your implementation to instead be a series of dispatched instructions describing the batches of data you wish to operate on, similar to how you create your tmp DataFrame.
This may unfortunately require quite a bit more work to re-think your logic in this way since you'll want to imagine your manipulations purely as a series of column manipulation commands given to PySpark instead of row-by-row manipulations. There are some operations you cannot do purely using PySpark calls, so this isn't always possible. In general it's worth thinking through very carefully.
Concretely
As an example, your date range calculations can be performed purely in PySpark and will be substantially faster if you do these operations over many years or at otherwise increased scale. Instead of using a Python list comprehension or other driver-side logic, we use column manipulations on a small set of initial data to build up our ranges.
I've written up some example code below on how you can create your date batches; this should let you perform a join to create your tmp DataFrame, after which you can describe the types of operations you wish to do to it.
Code to create date ranges (start and end dates of each week of the year):
from pyspark.sql import types as T, functions as F, SparkSession, Window
from datetime import date
spark = SparkSession.builder.getOrCreate()
year_marker_schema = T.StructType([
    T.StructField("max_year", T.IntegerType(), False),
])
year_marker_data = [
    {"max_year": 2022}
]
year_marker_df = spark.createDataFrame(year_marker_data, year_marker_schema)
year_marker_df.show()
"""
+--------+
|max_year|
+--------+
| 2022|
+--------+
"""
previous_week_window = Window.partitionBy(F.col("start_year")).orderBy("start_week_index")
year_marker_df = year_marker_df.select(
    (F.col("max_year") - 1).alias("start_year"),
    "*"
).select(
    F.to_date(F.col("max_year").cast(T.StringType()), "yyyy").alias("max_year_date"),
    F.to_date(F.col("start_year").cast(T.StringType()), "yyyy").alias("start_year_date"),
    "*"
).select(
    F.datediff(F.col("max_year_date"), F.col("start_year_date")).alias("days_between"),
    "*"
).select(
    F.floor(F.col("days_between") / 7).alias("weeks_between"),
    "*"
).select(
    F.sequence(F.lit(0), F.col("weeks_between")).alias("week_indices"),
    "*"
).select(
    F.explode(F.col("week_indices")).alias("start_week_index"),
    "*"
).select(
    F.lead(F.col("start_week_index"), 1).over(previous_week_window).alias("end_week_index"),
    "*"
).select(
    ((F.col("start_week_index") * 7) + 1).alias("start_day"),
    ((F.col("end_week_index") * 7) + 1).alias("end_day"),
    "*"
).select(
    F.concat_ws(
        "-",
        F.col("start_year"),
        F.col("start_day").cast(T.StringType())
    ).alias("start_day_string"),
    F.concat_ws(
        "-",
        F.col("start_year"),
        F.col("end_day").cast(T.StringType())
    ).alias("end_day_string"),
    "*"
).select(
    F.to_date(
        F.col("start_day_string"),
        "yyyy-D"
    ).alias("start_date"),
    F.to_date(
        F.col("end_day_string"),
        "yyyy-D"
    ).alias("end_date"),
    "*"
)
year_marker_df.drop(
    "max_year",
    "start_year",
    "weeks_between",
    "days_between",
    "week_indices",
    "max_year_date",
    "start_day_string",
    "end_day_string",
    "start_day",
    "end_day",
    "start_week_index",
    "end_week_index",
    "start_year_date"
).show()
"""
+----------+----------+
|start_date| end_date|
+----------+----------+
|2021-01-01|2021-01-08|
|2021-01-08|2021-01-15|
|2021-01-15|2021-01-22|
|2021-01-22|2021-01-29|
|2021-01-29|2021-02-05|
|2021-02-05|2021-02-12|
|2021-02-12|2021-02-19|
|2021-02-19|2021-02-26|
|2021-02-26|2021-03-05|
|2021-03-05|2021-03-12|
|2021-03-12|2021-03-19|
|2021-03-19|2021-03-26|
|2021-03-26|2021-04-02|
|2021-04-02|2021-04-09|
|2021-04-09|2021-04-16|
|2021-04-16|2021-04-23|
|2021-04-23|2021-04-30|
|2021-04-30|2021-05-07|
|2021-05-07|2021-05-14|
|2021-05-14|2021-05-21|
+----------+----------+
only showing top 20 rows
"""
Potential Optimizations
Once you have this code, if you are unable to express your work through joins / column derivations alone and are forced to perform the operation with union_many, you may consider using Spark's localCheckpoint feature on your df2 result. This lets Spark compute the resulting DataFrame without carrying its query plan onto the result you push into df_total. It could be paired with cache to also keep the resulting DataFrame in memory, though whether that helps will depend on your data scale.
localCheckpoint and cache are useful for avoiding re-computing the same DataFrames many times over and for truncating the amount of query planning done on top of your intermediate DataFrames.
You'll likely find localCheckpoint and cache can be useful on your df DataFrame as well since it will be used many times over in your loop (assuming you are unable to re-work your logic to use SQL-based operations and instead are still forced to use the loop).
As a quick and dirty summary of when to use each (a minimal runnable sketch follows this list):
Use localCheckpoint on a DataFrame that was complex to compute and is going to be used in operations later. Oftentimes these are the nodes feeding into unions.
Use cache on a DataFrame that is going to be used many times later. This is often a DataFrame sitting outside of a for/while loop that will be called in the loop.
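To make the two calls concrete, here is a minimal, self-contained PySpark sketch (the toy data, column names, and loop bounds are my own illustration, not taken from your transform):

from functools import reduce
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Toy stand-in for your real `df`; the columns here are illustrative only.
df = spark.range(1000).withColumn("value", F.col("id") * 2)

# cache: `df` is re-read on every iteration of the loop below.
df = df.cache()

batches = []
for threshold in [100, 200, 300]:  # stands in for iterating over Date_list
    batch = df.filter(F.col("id") < threshold).withColumn("example", F.lit(threshold))
    # localCheckpoint: materialize the batch and truncate its query plan so the
    # final union does not accumulate one plan per iteration.
    batch = batch.localCheckpoint()
    batches.append(batch)

df_total = reduce(lambda left, right: left.unionByName(right), batches)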
All Together
Your initial code
Date_list = [All weeks from: '2021-01-01', to: '2022-01-01'] --> ~50 elements
df_total = spark.createDataframe([], schema)
df_date = []
for date in Date_list:
    tmp = df.filter(between [date, date-7days]).withColumn('example', F.lit(date))
    ........
    df2 = df.join(tmp, 'column', 'inner').......
    df_date += [df2]
df_total = df_total.unionByName(union_many(*df_date))
return df_total
Should now look like:
# year_marker_df as derived in my code above
year_marker_df = year_marker_df.cache()
df = df.join(
    year_marker_df,
    on=df.my_date_column.between(year_marker_df.start_date, year_marker_df.end_date)
)
# Other work previously in your for_loop, resulting in df_total
return df_total
Or, if you are unable to re-work your inner loop operations, you can do some optimizations like:
Date_list = [All weeks from: '2021-01-01', to: '2022-01-01'] --> ~50 elements
df_total = spark.createDataframe([], schema)
df_date = []
df = df.cache()
for date in Date_list:
    tmp = df.filter(between [date, date-7days]).withColumn('example', F.lit(date))
    ........
    df2 = df.join(tmp, 'column', 'inner').......
    df2 = df2.localCheckpoint()
    df_date += [df2]
df_total = df_total.unionByName(union_many(*df_date))
return df_total
I have extracted the coalesce value from a table using Spark SQL. Then I'm converting the result to String so that I can INSERT that value into another table.
However, the column name of the COALESCE is getting inserted into the table instead of the coalesce value.
These are my COALESCE and INSERT queries,
COALESCE:
---------
val lastPartition = spark.sql("SELECT COALESCE(MAX(partition_name), 'XXXXX') FROM db1.table1").toString.mkString
Result:
-------
COALESCE(MAX(partition_name),XXXXX
20210309
INSERT:
-------
val result = spark.sql(s"""INSERT INTO db2.table2 VALUES ('col1','col2','${lastPartition}','col4')""")
Result:
--------
col1 col2 col3 col4
1 John [COALESCE(MAX(partition_name),XXXXX):string] 15313.21
Here, I want the value of column (col3) to be 20210309 and not the coalesce column name.
You need to use .head().getString(0) to pull the value out as a String. If you use .toString instead, you get the DataFrame's schema representation (the expression name and its type) rather than the data, because .toString never executes the query.
val lastPartition = spark.sql("SELECT COALESCE(MAX(partition_name), 'XXXXX') FROM db1.table1").head().getString(0)
Data
I want to apply a groupBy on Column1 and calculate the percentage of passed and failed values for each group, as well as their counts.
Example output I am looking for
Using PySpark I am running the code below, but I am only getting the percentages:
levels = ["passed", "failed","blocked"]
exprs = [avg((col("Column2") == level).cast("double")*100).alias(level)
for level in levels]
df = sparkSession.read.json(hdfsPath)
result1 = df1.select('Column1','Column2').groupBy("Column1").agg(*exprs)
You would need to explicitly calculate the counts, and then do some string formatting to combine the percentages and the counts into a single column.
from pyspark.sql.functions import avg, col, concat, lit, sum as sum_

levels = ["passed", "failed", "blocked"]

# percentage aggregations
pct_exprs = [avg((col("Column2") == level).cast("double") * 100).alias('{}_pct'.format(level))
             for level in levels]
# count aggregations
count_exprs = [sum_((col("Column2") == level).cast("int")).alias('{}_count'.format(level))
               for level in levels]
# combine all aggregations
exprs = pct_exprs + count_exprs

# string formatting select expressions
select_exprs = [
    concat(
        col('{}_pct'.format(level)).cast('string'),
        lit('('),
        col('{}_count'.format(level)).cast('string'),
        lit(')')
    ).alias('{}_viz'.format(level))
    for level in levels
]

df = sparkSession.read.json(hdfsPath)
result1 = (
    df
    .select('Column1', 'Column2')
    .groupBy("Column1")
    .agg(*exprs)
    .select('Column1', *select_exprs)
)
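As a small optional refinement (my own suggestion, not part of the original answer), you can round the percentage before concatenating so the combined column reads like 66.67(2) instead of carrying a long float:

from pyspark.sql import functions as F

levels = ["passed", "failed", "blocked"]

# Round the percentage to two decimals before building the "pct(count)" string.
select_exprs = [
    F.concat(
        F.round(F.col('{}_pct'.format(level)), 2).cast('string'),
        F.lit('('),
        F.col('{}_count'.format(level)).cast('string'),
        F.lit(')')
    ).alias('{}_viz'.format(level))
    for level in levels
]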
NB: it seems like you are trying to use Spark to make a nice visualization of the results of your calculations, but I don't think Spark is well-suited for this task. If you have few enough records that you can see all of them at once, you might as well work locally in Pandas or something similar. And if you have enough records that using Spark makes sense, then you can't see all of them at once anyway so it doesn't matter too much whether they look nice.
I have three DataFrames: dictionary, SourceDictionary, and MappedDictionary. The dictionary and SourceDictionary have only one column, say words, as String. The dictionary, which has a million records, is a subset of MappedDictionary (around 10M records), and each record in MappedDictionary is a substring of the dictionary. So, I need to map the dictionary, with SourceDictionary, to MappedDictionary.
Example:
Records in dictionary: BananaFruit, AppleGreen
Records in SourceDictionary: Banana, grape, orange, lemon, Apple, ...
Records to be mapped in MappedDictionary (contains two columns):
BananaFruit Banana
AppleGreen Apple
I planned to do something like two for loops in Java and perform the substring operation, but the problem is that 1 million * 10 million = 10 trillion iterations.
Also, I can't find the correct way to iterate over a DataFrame like a for loop.
Can someone suggest a way to iterate over a DataFrame and perform substring operations?
Sorry for my poor English, I am a non-native speaker.
Thanks to the Stack Overflow community members in advance :-)
Though you have millions of records in sourceDictionary, because it has only one column, broadcasting it to every node won't take up much memory, and it will speed up the overall performance.
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Assuming the schema names
val sourceDictionarySchema = StructType(Seq(StructField("value", StringType, false)))
val dictionarySchema = StructType(Seq(StructField("value", StringType, false)))
val mappedDictionarySchema = StructType(Seq(
  StructField("value", StringType, false),
  StructField("key", StringType, false)
))

// Broadcast the source dictionary to every executor as a plain list of strings
val sourceDictionaryBC = sc.broadcast(
  sourceDictionary.collect.map(_.getAs[String]("value")).toList
)

// For each dictionary entry, find the first source word it contains
val MappedDictionaryN = dictionary.map { row =>
  val value = row.getAs[String]("value")
  val matchedKey = sourceDictionaryBC.value.find(value.contains)
  Row(value, matchedKey.orNull)
}(RowEncoder(mappedDictionarySchema))
After this you have all the new mapped records. If you want to combine them with the existing MappedDictionary, just do a simple union:
MappedDictionaryN.union(MappedDictionary)
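For comparison, a possible alternative (my own sketch, not part of the original answer) is to let Spark broadcast the smaller table itself and express the substring match as a join condition. In PySpark, with toy stand-in data, that could look like this:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Toy stand-ins for the real tables (names and contents are illustrative only).
dictionary = spark.createDataFrame([("BananaFruit",), ("AppleGreen",)], ["value"])
source_dictionary = spark.createDataFrame([("Banana",), ("Apple",), ("grape",)], ["word"])

# Broadcast the single-column side and join on a substring condition; Spark will
# typically plan this as a broadcast nested-loop join.
mapped = dictionary.join(
    F.broadcast(source_dictionary),
    on=F.col("value").contains(F.col("word")),
    how="left",
)
mapped.show()

The amount of comparison work is similar, but the logic stays declarative and you avoid collecting and mapping rows by hand.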
I have a requirement to validate an ingest operation. Basically, I have two big files within HDFS: one is Avro formatted (the ingested files), the other is Parquet formatted (the consolidated file).
Avro file has this schema:
filename, date, count, afield1,afield2,afield3,afield4,afield5,afield6,...afieldN
Parquet file has this schema:
fileName,anotherField1,anotherField1,anotherField2,anotherFiel3,anotherField14,...,anotherFieldN
If I try to load both files into DataFrames and then use a naive join-where, the job on my local machine takes more than 24 hours, which is unacceptable.
ingestedDF.join(consolidatedDF).where($"filename" === $"fileName").count()
Which is the best way to achieve this? Dropping columns from the DataFrames before doing the join-where-count? Calculating the counts per DataFrame and then joining and summing?
PS
I was reading about the map-side join technique, but it looks like it would only work if one of the files were small enough to fit in RAM, and I can't guarantee that, so I would like to know the community's preferred way to achieve this.
http://dmtolpeko.com/2015/02/20/map-side-join-in-spark/
I would approach this problem by stripping the data down to only the field I'm interested in (filename) and making a unique set of filenames tagged with the source they come from (the origin dataset).
At this point, both intermediate datasets have the same schema, so we can union them and just count. This should be orders of magnitude faster than using a join on the complete data.
// prepare some random dataset
val data1 = (1 to 100000).filter(_ => scala.util.Random.nextDouble<0.8).map(i => (s"file$i", i, "rubbish"))
val data2 = (1 to 100000).filter(_ => scala.util.Random.nextDouble<0.7).map(i => (s"file$i", i, "crap"))
val df1 = sparkSession.createDataFrame(data1).toDF("filename", "index", "data")
val df2 = sparkSession.createDataFrame(data2).toDF("filename", "index", "data")
// select only the column we are interested in and tag it with the source.
// Lets make it distinct as we are only interested in the unique file count
val df1Filenames = df1.select("filename").withColumn("df", lit("df1")).distinct
val df2Filenames = df2.select("filename").withColumn("df", lit("df2")).distinct
// union both dataframes
val union = df1Filenames.union(df2Filenames).toDF("filename","source")
// let's count the occurrences of filename, by using a groupby operation
val occurrenceCount = union.groupBy("filename").count
// we're interested in the count of those files that appear in both datasets (with a count of 2)
occurrenceCount.filter($"count"===2).count