Caching an intermediate dataframe when there is only one action - scala

In Spark, let's say that I have a dataframe that undergoes some 100 transformations and then there is a single action applied. Will caching an intermediate dataframe help under any circumstances? I can see that caching will help when there is more than one action applied on a dataframe but how about a single action?
To clarify:
I have a dataframe A from which I obtain two different dataframes B and C. Then I do a union of B and C to form D, on which I apply an action. Imagine this happening in a very complicated scenario with lots of branches. Will caching A speed up the process?

Caching a DataFrame has no benefit on the first time it needs to be evaluated (in fact it actually has a performance cost and obviously increases memory use). It's only on reuse that caching helps.
If you split A into B and C, then use both B and C, you've just used A twice, so caching it will help.
The number of actions is not an important measure; what matters is the execution path.
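For illustration, a minimal sketch of the branching scenario described above (the column names "category" and "value" are made up for the example):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// "a" is the dataframe A from the question. Caching A pays off because both
// branches below read it, even though only one action is executed at the end.
def branchAndUnion(a: DataFrame): Long = {
  val cached = a.cache()

  val b = cached.filter(col("category") === "b")   // dataframe B
  val c = cached.filter(col("value") > 100)        // dataframe C

  // D is the union of the two branches; the single action (count) triggers one job,
  // but without cache() the lineage of A would be evaluated once per branch.
  val d = b.union(c)
  d.count()
}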

Does Spark do one pass through the data for multiple withColumn?

Does Spark do one or multiple passes through data when multiple withColumn functions are chained?
For example:
val dfnew = df.withColumn("newCol1", f1(col("a")))
.withColumn("newCol2", f2(col("b")))
.withColumn("newCol3", f3(col("c")))
where
df is my input DataFrame containing at least columns a, b, c
dfnew is output DataFrame with three new columns newCol1, newCol2, newCol3
f1, f2, f3 are some user-defined functions or some Spark operations on columns, like cast, etc. In my project I can have even 30 independent withColumn functions chained with foldLeft.
Important
I am assuming here that f2 does not depend on the result of f1, while f3 does not depend on the results of f1 and f2. The functions could be performed in any order. There is no shuffle in any function.
My observations
all functions are in the same stage
adding a new withColumn does not increase the execution time in a way that would suggest additional passes through the data.
For example, I have tested a single SQLTransformer with a select statement containing all the functions against multiple separate SQLTransformers, one per function, and the execution time was similar.
Questions
Will Spark make one or three passes through the data, one for each withColumn?
Does it depend on the type of functions f1, f2, f3? UDF vs generic Spark operations?
If the functions f1, f2, f3 are inside the same stage, does it mean they are in the same data pass?
Does the number of passes depend on shuffles within the functions? What if there is no shuffle?
If I chain the withColumn functions with foldLeft, will it change the number of passes?
I could do something similar with three SQLTransformers, or just one SQLTransformer with all three transformations in the same select_statement. How many passes through the data would that do?
Basically it doesn't matter, the time of execution will be similar for 1 and 3 passes?
Will Spark make one or three passes through the data, one for each withColumn?
Spark will "make one pass" through the data. Why? Because Spark doesn't actually do anything when this code is reached; it just builds an execution plan that tells it what to do when some action (e.g. count, collect, write, etc.) is executed on dfnew. At that point it can compute all of the functions at once for each record.
Does it depend on the type of functions f1, f2, f3? UDF vs generic Spark operations?
No.
If the functions f1, f2, f3 are inside the same stage, does it mean they are in the same data pass?
Yes.
Does the number of passes depend on shuffles within the functions? What if there is no shuffle?
Almost. First, as long as no caching / checkpointing is used, the number of passes over the data will be the number of actions executed on the resulting dfnew DataFrame. Then, each shuffle means each record is read, potentially sent between worker nodes, potentially written to disk, and then read again.
If I chain the withColumn functions with foldLeft, will it change the number of passes?
No. It only changes the way the above-mentioned plan is constructed, but it has no effect on what that plan looks like (it would be the exact same plan), so the computation remains the same.
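As an illustration of the foldLeft variant (the column names and expressions below are placeholders, and df is assumed to exist), the following builds the same chain and therefore the same plan:

import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.col

// Placeholder (name, expression) pairs standing in for the ~30 independent columns.
val newCols: Seq[(String, Column)] = Seq(
  "newCol1" -> (col("a") + 1),
  "newCol2" -> (col("b") * 2),
  "newCol3" -> col("c").cast("string")
)

// foldLeft only changes how the chain of withColumn calls is written in the code;
// the execution plan Spark ends up with is the same as for the explicit chain.
val dfnewFold: DataFrame = newCols.foldLeft(df) { case (acc, (name, expr)) =>
  acc.withColumn(name, expr)
}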
I could do something similar with three SQLTransformers, or just one SQLTransformer with all three transformations in the same select_statement. How many passes through the data would that do?
Again, this won't make any difference, as the execution plan will remain the same.
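For example, a sketch of the two SQLTransformer variants (the SQL expressions below stand in for f1, f2, f3, and df is assumed to exist):

import org.apache.spark.ml.feature.SQLTransformer

// One transformer with all three expressions in a single statement...
val single = new SQLTransformer().setStatement(
  "SELECT *, a + 1 AS newCol1, b * 2 AS newCol2, CAST(c AS STRING) AS newCol3 FROM __THIS__")

// ...versus three separate transformers applied one after another.
val t1 = new SQLTransformer().setStatement("SELECT *, a + 1 AS newCol1 FROM __THIS__")
val t2 = new SQLTransformer().setStatement("SELECT *, b * 2 AS newCol2 FROM __THIS__")
val t3 = new SQLTransformer().setStatement("SELECT *, CAST(c AS STRING) AS newCol3 FROM __THIS__")

// Either way the optimizer collapses the projections, so the physical plan
// (and the number of passes over the data) ends up the same.
val outSingle = single.transform(df)
val outChained = t3.transform(t2.transform(t1.transform(df)))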
Basically it doesn't matter, the time of execution will be similar for 1 and 3 passes?
Not sure what this means, but it sounds like this is not correct: the time of execution is mostly a function of the number of shuffles and the number of actions (assuming the same data and the same cluster setup).

How to split a large data frame and use the smaller parts to do multiple broadcast joins in Spark?

Let's say we have two very large data frames - A and B. Now, I understand that if I use the same hash partitioner for both RDDs and then do the join, the keys will be co-located and the join might be faster with reduced shuffling (the only shuffling that will happen will be when the partitioner changes on A and B).
I wanted to try something different though - I want to try a broadcast join like so: let's say B is smaller than A, so we pick B to broadcast, but B is still a very big dataframe. So what we want to do is make multiple data frames out of B and then broadcast each one to be joined with A.
Has anyone tried this?
To split one data frame into many I only see the randomSplit method, but that doesn't look like a great option.
Any other better way to accomplish this task?
Thanks!
Has anyone tried this?
Yes, someone has already tried that, in particular GoDataDriven. You can find the details below:
presentation - https://databricks.com/session/working-skewed-data-iterative-broadcast
code - https://github.com/godatadriven/iterative-broadcast-join
They claim pretty good results for skewed data; however, there are three problems you have to consider when doing this yourself:
There is no split in Spark. You have to filter the data multiple times or eagerly cache complete partitions (How do I split an RDD into two or more RDDs?) to imitate "splitting".
The huge advantage of broadcast is the reduction in the amount of transferred data. If the data is large, the amount of data to be transferred can actually increase significantly (Why my BroadcastHashJoin is slower than ShuffledHashJoin in Spark).
Each "join" increases the complexity of the execution plan, and with a long series of transformations things can get really slow on the driver side.
randomSplit method but that doesn't look like a great option.
It is actually not a bad one.
Any other better way to accomplish this task?
You may try to filter by partition id.
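A rough sketch of that idea, assuming the DataFrame API, a hypothetical join column "key" and a chosen number of slices: repartition B, tag each row with its partition id, then broadcast one slice at a time and union the partial joins:

import org.apache.spark.sql.{DataFrame, functions => F}

// Split B into numSlices pieces by partition id and broadcast-join each piece
// against A, unioning the partial results. "key" is an assumed join column.
def iterativeBroadcastJoin(a: DataFrame, b: DataFrame, numSlices: Int): DataFrame = {
  val bWithSlice = b
    .repartition(numSlices)
    .withColumn("slice", F.spark_partition_id())
    .cache() // avoid recomputing B for every slice

  val partials = (0 until numSlices).map { i =>
    val slice = bWithSlice.filter(F.col("slice") === i).drop("slice")
    a.join(F.broadcast(slice), Seq("key"))
  }

  partials.reduce(_ union _)
}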

Is manually managing memory with .unpersist() a good idea?

I've read a lot of questions and answers here about unpersist() on dataframes. I so far haven't found an answer to this question:
In Spark, once I am done with a dataframe, is it a good idea to call .unpersist() to manually force that dataframe to be unpersisted from memory, as opposed to waiting for GC (which is an expensive task)? In my case I am loading many dataframes so that I can perform joins and other transformations.
So, for example, if I wish to load and join 3 dataframes A, B and C:
I load dataframes A and B, join these two to create X, and then call .unpersist() on B because I don't need it any more (but I will need A), and could use the memory to load C (which is big). So then I load C, join C to X, and call .unpersist() on C so I have more memory for the operations I will now perform on X and A.
I understand that GC will unpersist for me eventually, but I also understand than GC is an expensive task that should be avoided if possible. To re-phrase my question: Is this an appropriate method of manually managing memory, to optimise my spark jobs?
My understanding (please correct if wrong):
I understand that .unpersist() is a very cheap operation.
I understand that GC calls .unpersist() on my dataframes eventually anyway.
I understand that Spark monitors the cache and drops entries based on a Least Recently Used policy. But in some cases I do not want the 'least recently used' DF to be dropped, so instead I can call .unpersist() on the dataframes I know I will not need in future, so that I don't drop the DFs I will need and have to reload them later.
To re-phrase my question again if unclear: is this an appropriate use of .unpersist(), or should I just let Spark and GC do their job?
Thanks in advance :)
There seems to be some misconception here. While using unpersist is a valid approach to get better control over the storage, it doesn't avoid garbage collection. In fact, all the on-heap objects associated with the cached data will be left to the garbage collector.
So while the operation itself is relatively cheap, the chain of events it triggers might not be. Luckily, an explicit unpersist is no worse than waiting for the automatic cleaner or the GC-triggered cleaner, so if you want to clean up specific objects, go ahead and do it.
To limit GC caused by unpersisting cached data, it might be worth taking a look at the OFF_HEAP StorageLevel.
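As an illustration, a minimal sketch of the load/join/unpersist pattern from the question using off-heap storage (the paths and the join column "id" are made up; OFF_HEAP also requires off-heap memory to be enabled in the Spark configuration):

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

// OFF_HEAP needs spark.memory.offHeap.enabled=true and spark.memory.offHeap.size set.
val spark = SparkSession.builder().getOrCreate()

val a = spark.read.parquet("/data/a").persist(StorageLevel.OFF_HEAP)
val b = spark.read.parquet("/data/b").persist(StorageLevel.OFF_HEAP)

val x = a.join(b, Seq("id")).persist(StorageLevel.OFF_HEAP)
x.count()         // materialize X while A and B are still cached

b.unpersist()     // B is no longer needed; release its storage explicitly

val c = spark.read.parquet("/data/c").persist(StorageLevel.OFF_HEAP)
val y = x.join(c, Seq("id")).persist(StorageLevel.OFF_HEAP)
y.count()

c.unpersist()     // same idea: release C before continuing to work on X and A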

Bring data of DataFrame back to local node for further actions (count / show) in spark/scala

I'm using Spark 1.6 in Scala.
I know this touches on some of the ideas behind the Spark framework, but I couldn't answer it for myself by reading different tutorials (maybe the wrong ones).
I joined two DataFrames into a new one (nDF). Now I know it is not actually processed until I call show, first or count.
But since I want to do exactly this, I want to inspect nDF in different ways:
nDF.show
nDF.count
nDF.filter()
..and so on; each of these would take a long time, since the original DataFrames are big. Couldn't I bring/copy the data into this new one, so that I could run these new actions as quickly as on the original sets? (First I thought of 'collect', but it only returns an Array, not a DataFrame.)
This is a classic scenario. When you join two DataFrames, Spark doesn't do any work, because it evaluates lazily, only when an action is called on the resulting dataframe. Actions are things like show, count, print, etc.
Now when show or count is called on nDF, Spark evaluates the resultant dataframe every time, i.e. once when you call show, again when count is called, and so on. This means that internally it performs the map/reduce every time an action is called on the resultant dataframe.
Spark doesn't cache the resulting dataframe in memory unless it is hinted to do so by doing df.cache / df.persist.
So when you do
val nDF = a.join(b).persist
And then call the count/show it will evaluate the nDF once and store the resulting dataframe in memory. Hence subsequent actions will be faster.
However, the first evaluation might be a little slower, and you also need to use a little more executor memory.
If the memory available to you is good with respect to the size of your dataset, what you're probably looking for is df.cache(). If the size of your dataset is too much, consider using df.persist() as it allows different levels of persistence.
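For example, a small sketch of the pattern with an explicit storage level (the join column "key" and the column "someCol" are placeholders):

import org.apache.spark.sql.functions.col
import org.apache.spark.storage.StorageLevel

val nDF = a.join(b, Seq("key")).persist(StorageLevel.MEMORY_AND_DISK)

nDF.count()                               // first action: runs the join and fills the cache
nDF.show()                                // later actions reuse the cached result
nDF.filter(col("someCol") > 0).count()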
Hope this is what you're looking for. Cheers

Spark: understanding the DAG and forcing transformations

Hello stackoverflow community.
I'm asking for your help in understanding whether my thoughts are correct or whether I'm missing some points in my Spark job.
I currently have two rdds that I want to subtract.
Both the rdds are built as different transformations on the same father RDD.
First of all, the father RDD is cached after it is obtained:
val fatherRdd = grandFather.repartition(n).mapPartitions(mapping).cache
Then the two rdds are transformed.
One is (pseudocode):
son1 = fatherRdd.filter(filtering_logic).map(take_only_key).distinct
The other one is:
son2 = fatherRdd.filter(filtering_logic2).map(take_only_key).distinct
The two sons are then subtracted to obtain only the keys that are in son1 and not in son2:
son1.subtract(son2)
I would expect the sequence of the transformations to be the following:
repartition
mapPartitions
caching
Then, starting from the cached data, filter, map and distinct on both rdds, and then the subtraction.
This is not happening: what I see is the two distinct operations running in parallel, apparently not exploiting the benefits of caching (there are no skipped tasks), and taking almost the same computation time.
Below is the image of the DAG taken from the Spark UI.
Do you have any suggestions for me?
You are correct in your observations. Transformations on RDDs are lazy, so caching will happen after the first time the RDD is actually computed.
If you call an action on your parent RDD, it should be computed and cached. Then your subsequent operations will operate on the cached data.
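In code, a minimal sketch of that suggestion, reusing the placeholders from the question (grandFather, mapping, filtering_logic, etc.):

val fatherRdd = grandFather.repartition(n).mapPartitions(mapping).cache()
fatherRdd.count()   // an action forces the RDD to be computed and stored in the cache

// Both branches now read the cached partitions instead of recomputing
// the repartition + mapPartitions lineage.
val son1 = fatherRdd.filter(filtering_logic).map(take_only_key).distinct()
val son2 = fatherRdd.filter(filtering_logic2).map(take_only_key).distinct()

val onlyInSon1 = son1.subtract(son2)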