I have what I assume should be a very parallelisable problem, but I cannot seem to make it work. I am using Azure Databricks with the 10.4 LTS ML runtime.
I have a dataset which contains sets of test results. It is around 4 GB in size and contains around 20,000 tests. Each test result contains around 5,000-10,000 data points, which trace out several peaks (the number of peaks and their locations differ for each test). For a single test, I want to remove the space between the peaks and separate them out into different dataframes. I have some pseudocode attached here, which is applied to a single test's results:
def peak_finder(pandas_dataframe):
    # code to find peaks
    return list_of_pandas_dataframes
In this function I use pyspark.pandas.DataFrame.truncate, which I am unsure how to replicate in pure PySpark. I return a list of PySpark Pandas dataframes, each containing one peak from the test. Running this on a single test's dataframe takes around 0.02 seconds.
My problem is then applying this to the other 20,000 tests in the entire set. This is my current method:
# list_test_ids = list of all test ids
# all_tests = the full dataset
all_peaks = []
for single_test_id in list_test_ids:
    single_test = all_tests.where(col("TestId") == single_test_id)
    single_test = single_test.toPandas()
    peaks = peak_finder(single_test)
    all_peaks.extend(peaks)
This is incredibly slow, or causes an out-of-memory error (I've already increased the size of the driver). I think using .toPandas() is partly to blame, as it is slow. Generally, though, this seems like an incredibly parallelisable problem that is currently not parallel at all.
My questions:
On large sets like this, should I ever use PySpark Pandas (pandas-on-Spark)? Is it good practice to always use the regular PySpark API?
I feel like using a loop here is a mistake, but I do not know what to replace it with. Using a map or a foreach seems more appropriate, but I can't see how to make either work with my peak_finder function or with PySpark Pandas.
For problems like this, where I am trying to manipulate a 4 GB dataset, what worker/driver configuration do you recommend? Perhaps my current choice is not suitable?
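For reference, the usual way to parallelise this kind of per-test work in PySpark is groupBy().applyInPandas(), which hands each TestId group to a worker as a plain pandas DataFrame, so the driver-side loop and toPandas() disappear. A minimal sketch; the x/y column names, the schema, and the threshold-based peak logic below are placeholders, not the real peak_finder:

```python
import pandas as pd

def split_peaks(pdf: pd.DataFrame) -> pd.DataFrame:
    """Stand-in peak logic: keep rows above the mean signal and tag each
    contiguous run with a PeakId. Real peak detection would replace this."""
    pdf = pdf.sort_values("x").reset_index(drop=True)
    in_peak = pdf["y"] > pdf["y"].mean()                  # placeholder condition
    run_start = in_peak & ~in_peak.shift(fill_value=False)
    out = pdf[in_peak].copy()
    out["PeakId"] = run_start.cumsum()[in_peak]
    return out

# On Spark, the same function runs once per test, in parallel, instead of
# looping over test ids on the driver:
# all_peaks = (all_tests
#              .groupBy("TestId")
#              .applyInPandas(split_peaks,
#                             schema="TestId long, x double, y double, PeakId long"))
```

Each peak then stays identified by (TestId, PeakId) inside one distributed dataframe, rather than a Python list of 20,000+ small dataframes on the driver.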
Related
I've got a process which is really bogged down by the version computation for the target Delta table.
A little bit of context: there are other things that run, all contributing uniformly structured dataframes that I want to persist in a Delta table. Ultimately these are all compiled into lots_of_dataframes to be logged.
for i in lots_of_dataframes:
    i.write.insertInto("target_delta_table")
    # ... takes a while to compute version
I've dug into the documentation but couldn't find any setting to skip the version compute. I did see vacuuming, but I'm not sure that'll do the trick, since there will still be a lot of activity in a small window of time.
I know that I can union all of the dataframes together and just do the insert once, but I'm wondering if there is a more Databricks-ian way to do it, like a configuration to only maintain one version at a time and not worry about computing for a restore.
Most probably (though it's hard to say exactly without details), the problem arises from the following facts:
Spark is lazy - the actual data processing doesn't happen until you perform an action, like writing data into a destination table. So if you have a lot of transformations, they will all happen when you write the data.
You're writing data in a loop - you can potentially speed it up a bit by doing a union of all dataframes into a single one that is written in one go:
import functools
unioned = functools.reduce(lambda x,y: x.union(y), lots_of_dataframes)
unioned.write.insertInto("target_delta_table")
We have a PySpark dataframe with around 25k records. We are trying to perform a count/empty check on it, and it is taking too long. We tried:
df.count()
df.rdd.isEmpty()
len(df.head(1))==0
Converted to Pandas and tried pandas_df.empty()
Tried the arrow option
df.cache() and df.persist() before the counts
df.repartition(n)
Tried writing the df to DBFS, but writing is also taking quite a long time (cancelled after 20 mins)
Could you please help us understand what we are doing wrong?
Note: there are no duplicate values in df, and we have done multiple joins to form the df.
Without looking at the df.explain() output it's challenging to pinpoint the issue, but it certainly seems like you could have a skewed data set.
(Skew usually shows up in the Spark UI as one executor taking a lot longer than the other partitions to finish.) If you're on a recent version of Spark, there are tools to help with this out of the box:
spark.sql.adaptive.enabled = true
spark.sql.adaptive.skewJoin.enabled = true
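If they aren't already on, these can be set on the live session, e.g. in a notebook (assuming an active SparkSession bound to the usual spark variable):

```python
# enable adaptive query execution and its skew-join handling
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
```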
Count is not taking too long; it's taking the time it needs to complete what you asked Spark to do. To refine what it's doing, do the things you are likely already doing: filter the data before joining so only critical data is transferred to the joins, and review your data for skew and program around it if you can't use adaptive query execution.
Convince yourself this is a data issue. Limit your source data/tables to 1,000 or 10,000 records and see if it runs fast. Then, one at a time, remove the limit from only one table/data source (applying the limit to all others) and find the table that is the source of your problem. Then study that table/data source and figure out how you can work around the issue (if you can't use adaptive query execution to fix it).
(Finally, if you are using Hive tables, you should make sure the table statistics are up to date.)
ANALYZE TABLE mytable COMPUTE STATISTICS;
To load a large dataset into Polars efficiently, one can use the lazy API and the scan_* functions. This works well when performing an aggregation (a big input dataset but a small result). However, if I want to process a big dataset in its entirety (for example, change a value in each row of a column), it seems there is no way around using collect and loading the whole (result) dataset into memory.
Is it instead possible to write a LazyFrame to disk directly, and have the processing operate on chunks of the dataset sequentially, in order to limit memory usage?
Edit (2023-01-08)
Polars has growing support for streaming/out-of-core processing.
To run a query with streaming, collect your LazyFrame with collect(streaming=True).
If the result does not fit into memory, try to sink it to disk with sink_parquet.
Old answer (no longer true).
Polars' algorithms are not streaming, so they need all data in memory for operations like join, groupby, aggregations, etc. So writing to disk directly would still keep those intermediate DataFrames in memory.
There are of course things you can do. Depending on the type of query, it may lend itself to embarrassing parallelism. A sum, for instance, can easily be computed in chunks.
You could also process columns in smaller chunks. This still allows you to compute harder aggregations/computations.
Use lazy
If you have many filters in your query and Polars is able to apply them at the scan, your memory pressure is reduced to the selectivity ratio.
I just encountered a case where Polars manages memory much better using the lazy API. When using the join function, I highly recommend using scan_csv/scan_parquet/scan_ipc if memory is an issue.
import polars as pl

# combine datasets
PATH_1 = "/.../big_dataset.feather"
PATH_2 = "/.../other_big_dataset.feather"

big_dataset_1 = pl.scan_ipc(PATH_1)
big_dataset_2 = pl.scan_ipc(PATH_2)

big_dataset_expanded = big_dataset_1.join(
    big_dataset_2, right_on="id_1", left_on="id_2", how="left"
)
big_dataset_expanded = big_dataset_expanded.collect()
I'm using Spark 1.6 in Scala.
I know it's one of the ideas behind the Spark framework, but I couldn't answer it for myself by reading different tutorials (maybe the wrong ones).
I joined two DataFrames into a new one (nDF). Now I know it isn't actually computed until I call show, first, or count.
But since I want to do exactly this, I want to inspect nDF in different ways:
nDF.show
nDF.count
nDF.filter()
...and so on; each would take a long time, since the original DataFrames are big. Couldn't I bring/copy the data into this new one, so that these new actions run as quickly as on the original sets? (First I thought of collect, but it only returns an Array, not a DataFrame.)
This is a classic scenario. When you join two DataFrames, Spark doesn't do any work, because it evaluates lazily until an action is called on the resulting dataframe. Actions are show, count, print, etc.
Now, when show or count is called on nDF, Spark evaluates the resulting dataframe every time: once when you call show, again when count is called, and so on. This means it internally performs the map/reduce every time an action is called on the resulting dataframe.
Spark doesn't cache the resulting dataframe in memory unless it is hinted to do so via df.cache / df.persist.
So when you do
val nDF = a.join(b).persist
And then call the count/show it will evaluate the nDF once and store the resulting dataframe in memory. Hence subsequent actions will be faster.
However, the first evaluation might be a little slower, and you will need a little more executor memory.
If the memory available to you is sufficient for the size of your dataset, what you're probably looking for is df.cache(). If your dataset is too large, consider df.persist(), as it allows different levels of persistence.
Hope this is what you're looking for. Cheers
When referring to the Spark ML/MLlib docs, they all start from an example stored in LIBSVM format. This is really frustrating, since there doesn't seem to be a straightforward way to go from a standard RDD[Row] or DataFrame (taken from a "table" select) to this notation without first storing it.
This is just an inconvenience when dealing with 3 features or so, but when you scale up to lots and lots of features, it implies a lot of typing and searching.
I ended up with something like this: (where "train" is a random split of a dataset w/ features stored in a table)
val trainLp = train.map(row => LabeledPoint(
  row.getInt(0).toDouble,
  Vectors.dense(
    row(8).asInstanceOf[Int].toDouble, row(9).asInstanceOf[Int].toDouble,
    row(10).asInstanceOf[Int].toDouble, row(11).asInstanceOf[Int].toDouble,
    row(12).asInstanceOf[Int].toDouble, row(13).asInstanceOf[Int].toDouble,
    row(14).asInstanceOf[Int].toDouble, row(15).asInstanceOf[Int].toDouble,
    row(18).asInstanceOf[Int].toDouble, row(21).asInstanceOf[Int].toDouble,
    row(27).asInstanceOf[Int].toDouble, row(28).asInstanceOf[Int].toDouble,
    row(29).asInstanceOf[Int].toDouble, row(30).asInstanceOf[Int].toDouble,
    row(31).asInstanceOf[Double], row(32).asInstanceOf[Double],
    row(33).asInstanceOf[Double], row(34).asInstanceOf[Double],
    row(35).asInstanceOf[Double], row(36).asInstanceOf[Double],
    row(37).asInstanceOf[Double], row(38).asInstanceOf[Double],
    row(39).asInstanceOf[Double], row(40).asInstanceOf[Double],
    row(41).asInstanceOf[Double], row(42).asInstanceOf[Double],
    row(43).asInstanceOf[Double]
  )
))
This is a nightmare to maintain, since these rows tend to change pretty often.
And here I'm only at the stage of getting labeled points; I'm not even at a LIBSVM-stored version of this data.
What am I missing here that could potentially save me days of misery?
EDIT:
I got one step closer to the solution using something called a VectorAssembler to build up my vector.
Usually, CSV files are raw, unfiltered sources of information; often they are the original source of the data.
In order to build a model, you usually want to go through a data cleansing, data preparation, data wrangling (and maybe more "data x" wording) phase before you build your model. This phase usually takes a big piece of the model-building effort and usually requires exploration of the data. Typically, a process of transformation and feature selection (and creation) occurs between the original data and the data that builds the model.
If your CSV files don't need any of these preliminary phases - good for you!
You can always make configuration files that can keep track of certain columns or column indexes that build your model.
If your DataFrame comes from a "select", I guess what you can do to improve legibility and maintainability is to use column names instead of index numbers:
df.select($"my_col_1", $"my_col_2", .. )
and then operate through
row.getAs[String]("my_col_1")