Is manually managing memory with .unpersist() a good idea? - scala

I've read a lot of questions and answers here about unpersist() on dataframes. So far I haven't found an answer to this question:
In Spark, once I am done with a dataframe, is it a good idea to call .unpersist() to manually force that dataframe to be unpersisted from memory, as opposed to waiting for GC (which is an expensive task)? In my case I am loading many dataframes so that I can perform joins and other transformations.
So, for example, if I wish to load and join 3 dataframes A, B and C:
I load dataframes A and B, join these two to create X, and then .unpersist() B because I don't need it any more (but I will need A), and could use the memory to load C (which is big). So then I load C, join C to X, and call .unpersist() on C so I have more memory for the operations I will then perform on X and A.
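In code, the pattern I have in mind is roughly this (spark is the SparkSession; the paths, the join key and the explicit cache() calls are just illustrative):
val a = spark.read.parquet("/data/a").cache()
val b = spark.read.parquet("/data/b").cache()
val x = a.join(b, Seq("id")).cache()
x.count()                      // materialise X so the join is not recomputed later
b.unpersist()                  // B is no longer needed; free its cached blocks before loading C
val c = spark.read.parquet("/data/c").cache()
val result = x.join(c, Seq("id"))
result.count()
c.unpersist()                  // done with C as well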
I understand that GC will unpersist for me eventually, but I also understand that GC is an expensive task that should be avoided if possible. To re-phrase my question: is this an appropriate way of manually managing memory to optimise my Spark jobs?
My understanding (please correct if wrong):
I understand that .unpersist() is a very cheap operation.
I understand that GC calls .unpersist() on my dataframes eventually anyway.
I understand that Spark monitors the cache and drops cached data based on a Least Recently Used (LRU) policy. But in some cases I do not want the least-recently-used DF to be the one dropped, so instead I can call .unpersist() on the dataframes I know I will not need in future, so that I don't drop the DFs I will need and have to reload them later.
To re-phrase my question again if unclear: is this an appropriate use of .unpersist(), or should I just let Spark and GC do their job?
Thanks in advance :)

There seems to be some misconception here. While using unpersist is a valid approach to get better control over storage, it doesn't avoid garbage collection. In fact, all the on-heap objects associated with the cached data are still left to the garbage collector.
So while the operation itself is relatively cheap, the chain of events it triggers might not be. Luckily, an explicit unpersist is no worse than waiting for the automatic cleaner or a GC-triggered cleanup, so if you want to clean up specific objects, go ahead and do it.
To limit GC pressure on unpersist, it might be worth taking a look at the OFF_HEAP StorageLevel.
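A minimal sketch, assuming df is a DataFrame you are caching and that off-heap memory has been enabled (spark.memory.offHeap.enabled and spark.memory.offHeap.size):
import org.apache.spark.storage.StorageLevel
df.persist(StorageLevel.OFF_HEAP)   // cached blocks live outside the JVM heap
// ... use df ...
df.unpersist(blocking = true)       // blocking = true waits until the blocks are actually removed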

Related

How to iteratively bring a Dataframe back to the driver in Spark

Main question: How do you safely (without risking crashing due to OOM) iterate over every row (guaranteed every row) in a dataframe from the driver node in Spark? I need to control how big the data is as it comes back, operate on it, and discard it to retrieve the next batch (say 1000 rows at a time or something)
I am trying to safely and iteratively bring the data in a potentially large Dataframe back to the driver program so that I may use the data to perform HTTP calls. I have been attempting to use someDf.foreachPartition{makeApiCall(_)} and allowing the Executors to handle the calls. It works, but debugging and handling errors has proven to be pretty difficult when launching in prod envs, especially on failed calls.
I know there is someDf.collect() action, which brings ALL the data back to the driver all at once. However, this solution is not suggested, because if you have a very large DF, you risk crashing the driver.
Any suggestions?
If the data does not fit into memory, you could use something like:
df.toLocalIterator().forEachRemaining(row => makeAPICall(row))
but toLocalIterator has considerable overhead compared to collect
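If you also want the fixed batch size mentioned in the question (say 1000 rows), you can wrap that iterator on the driver, roughly like this (makeAPICall stands in for your own function):
import scala.collection.JavaConverters._      // scala.jdk.CollectionConverters._ on Scala 2.13
df.toLocalIterator().asScala                  // only one partition needs to fit in driver memory at a time
  .grouped(1000)                              // pull the rows back in fixed-size batches
  .foreach(batch => batch.foreach(row => makeAPICall(row)))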
Or you can collect your dataframe batch-wise (which does essentially the same thing as toLocalIterator):
import org.apache.spark.sql.functions.{lit, spark_partition_id}
val partitions = df.rdd.partitions.map(_.index)
partitions.toStream.foreach(i => df.where(spark_partition_id() === lit(i)).collect().foreach(row => makeAPICall(row)))
It is a bad idea to bring all that data back to the driver, because the driver is just one node and it will become the bottleneck; you lose scalability. If you find yourself having to do this, think twice about whether you really need a big data application at all. Probably not.
dataframe.collect() is the standard way to bring data to the driver, and it brings all of it at once. The alternative is toLocalIterator, which only needs the driver to hold the data of the largest partition at a time, and even that can be big. So both should be used rarely and only for small amounts of data.
If you insist, you can write the output to a file or a queue and read it back in a controlled manner. That is a partially scalable solution, and not one I would prefer.
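If you do go the file route, a rough sketch could look like this, assuming the output lands on a path the driver can read directly (a local or mounted filesystem) and that makeAPICall here accepts the raw JSON line:
df.write.mode("overwrite").json("/tmp/api_payload")   // write once, as many small files
import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._
Files.list(Paths.get("/tmp/api_payload")).iterator().asScala
  .filter(_.toString.endsWith(".json"))
  .foreach { file =>
    // only one file's worth of rows is held in memory at any moment
    Files.lines(file).iterator().asScala.foreach(line => makeAPICall(line))
  }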

How to purge spark driver memory after collect()

I have to do a lot of little collect() operations in my application to send data through HTTP calls.
val payload = sparkSession.sql(s"select * from table where ID = $id").toJSON.collect().mkString("\n")
Is there a way to purge used objects to free some memory space in my driver between operations?
First off, I agree with @Luis Miguel Mejia Suarez here in that collects are generally bad practice and a code smell. I'd take a look at why you are doing collects, and determine if you can do this in a different way.
As for your actual question, the garbage collector will free any unreferenced memory once memory starts getting tight. The code snippet you showed above should be fine since the output of collect is immediately operated on and then discarded so that output should be removed during the next GC pause, while the mkString output would be kept. So make sure that this applies to the other collect statements you are using.
Additionally, if you are seeing long GC pauses, consider lowering your driver memory size, so that there's less memory to collect. You might also look into tuning your GC parameters. There's lots of documentation on that on the internet, and it is too intricate to describe in detail here.
Finally, you can force the JVM to run garbage collection. You should be able to use System.gc() (https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#gc()). This is a Java method, but Scala can call it just as well.
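Putting that together with the snippet from the question, a sketch might look like this (postToApi is a made-up stand-in for whatever HTTP call you make):
def sendBatch(id: String): Unit = {
  val payload = sparkSession.sql(s"select * from table where ID = $id")
    .toJSON.collect().mkString("\n")
  postToApi(payload)   // hypothetical HTTP call; payload is not needed afterwards
  System.gc()          // only a request; the JVM is free to ignore it
}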

How to split a large data frame and use the smaller parts to do multiple broadcast joins in Spark?

Let's say we have two very large data frames, A and B. Now, I understand that if I use the same hash partitioner for both RDDs and then do the join, the keys will be co-located and the join might be faster with reduced shuffling (the only shuffling will happen when the partitioner is applied to A and B).
I wanted to try something different though: I want to try a broadcast join. Let's say B is smaller than A, so we pick B to broadcast, but B is still a very big dataframe. So what we want to do is make multiple data frames out of B and then broadcast each one to be joined with A.
Has anyone tried this?
To split one data frame into many, the only method I see is randomSplit, but that doesn't look like a great option.
Any other better way to accomplish this task?
Thanks!
Has anyone tried this?
Yes, someone has already tried that; in particular, GoDataDriven. You can find the details below:
presentation - https://databricks.com/session/working-skewed-data-iterative-broadcast
code - https://github.com/godatadriven/iterative-broadcast-join
They claim pretty good results for skewed data, but there are three problems you have to consider before doing this yourself:
There is no split in Spark. You have to filter the data multiple times or eagerly cache complete partitions (How do I split an RDD into two or more RDDs?) to imitate "splitting".
The huge advantage of a broadcast is the reduction in the amount of transferred data, but if the broadcast side is large, the amount of data to be transferred can actually increase significantly (Why my BroadcastHashJoin is slower than ShuffledHashJoin in Spark).
Each "join" increases the complexity of the execution plan, and with a long series of transformations things can get really slow on the driver side.
randomSplit method but that doesn't look so great an option.
It is actually not a bad one.
Any other better way to accomplish this task?
You may try to filter by partition id.
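A sketch of that idea, assuming dataframes a (large) and b (the one to broadcast) joined on a column named "key" (all of these names are illustrative):
import org.apache.spark.sql.functions.{broadcast, lit, spark_partition_id}
val numChunks = 8                                   // pick so each chunk is comfortably broadcastable
val bChunked = b.repartition(numChunks).cache()     // cache to avoid recomputing b for every filter
val joined = (0 until numChunks)
  .map { i =>
    val chunk = bChunked.where(spark_partition_id() === lit(i))
    a.join(broadcast(chunk), Seq("key"))            // broadcast one chunk at a time
  }
  .reduce(_ union _)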

Caching an intermediate dataframe when there is only one action

In Spark, let's say that I have a dataframe that undergoes some 100 transformations and then there is a single action applied. Will caching an intermediate dataframe help under any circumstances? I can see that caching will help when there is more than one action applied on a dataframe but how about a single action?
To clarify:
I have a dataframe A using which I obtain 2 different dataframes B and C. Then I do a union of B and C to form D on which I apply an action. Imagine this happening in a very complicated scenario with lots of branches. Will caching A speed up the process?
Caching a DataFrame has no benefit the first time it needs to be evaluated (in fact it has a performance cost and obviously increases memory use). It's only on reuse that caching helps.
If you split A into B and C, then use both B and C, you've just used A twice, so caching it will help.
The number of actions is not an important measure; what matters is the execution path.
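Concretely, for the A/B/C/D example in the question (spark, the paths, and the column names are made up):
import org.apache.spark.sql.functions.{col, lit}
val a = spark.read.parquet("/data/source").cache()  // A is reused by both branches, so cache it
val b = a.filter(col("category") === "b").withColumn("flag", lit(true))
val c = a.filter(col("category") === "c").withColumn("flag", lit(false))
// Without the cache, the single action on D would evaluate A twice, once per branch of the union
val d = b.union(c)
d.write.parquet("/data/out")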

Suggestions on how to go about a simple example displaying scala's multiprocessor capabilities

So I have completed the Coursera course on Scala and have taken it upon myself to do a small POC showing off the multiprocessor capabilities of Scala.
I am looking at creating a very small example where an application can launch multiple tasks (each task will do some network-related queries, etc.) and I can show the usage of multiple cores as well.
Also there will be a thread that will listen on a specific port of a machine and spawn tasks based on what information it receives there.
Any suggestions on how to proceed with this kind of problem?
I don't want to use Akka for now.
Parallel collections are perhaps the least-effort way to make use of multiple processors in Scala. They lead naturally into thinking about how best to organise your code and data to take advantage of the parallel operations, and, more importantly, about what doesn't get faster.
As a more concrete problem, suppose you have read a CSV file (or XML document, or whatever) and want to parse the data. If the records have already been split into a collection such as a List[String], you can then do .par to create a parallel List, and then a subsequent .map will use all cores where possible. The resulting List[whatever] will retain the same ordering even though the operations were not executed sequentially. Consider summing the values on each line:
val in: List[String] = ...
val out = in.par.map { line =>
  val cols = line.split(',')
  cols.map(_.toInt).sum
}
So an in of List("1,2,3", "4,5,6") would result in an out of List(6, 15), just as it would without the .par, but it will run across multiple cores. Whether it's faster is another matter, since there is overhead in using parallel collections that likely makes a trivial example such as this slower. You will want to experiment to see where parallel collections benefit your use cases.
There is a more extensive discussion and documentation of parallel collections at http://docs.scala-lang.org/overviews/parallel-collections/overview.html
What about the sleeping barber problem? You could implement it in a distributed manner over the network, with the barber(s)' spawning service listening on one port and the customers spawning and requesting the barber(s) services over the network.
I think that would be vast and interesting enough while not being impossible.
Then you can build on it and expand it as much as you want, such as adding specialised barbers for different things (haircuts or shaving) and so on. The sky (or rather, the thread-count cap) is the limit!