Spark: MEMORY_ONLY_SER_2 - strange memory behavior - scala

I am storing an RDD with storageLevel = MEMORY_ONLY_SER_2 so that if one executor is lost, there is another copy of the data.
Then I found something strange:
The Size in Memory of the 2x-replicated RDD seems to be the same as when I used storageLevel = MEMORY_ONLY_SER (1x replicated).
Fraction Cached couldn't reach 100% even though I still have a lot of storage memory left.
Am I understanding storageLevel = MEMORY_ONLY_SER_2 correctly? Why doesn't the 2x-replicated RDD have twice the Size in Memory compared with the 1x-replicated one? Thanks!
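For reference, the caching setup is along these lines (a simplified sketch; the input path and RDD contents are placeholders, and an existing SparkContext sc is assumed):
import org.apache.spark.storage.StorageLevel

// Placeholder RDD standing in for the real data.
val rdd = sc.textFile("hdfs:///some/input/path")
  .map(_.split("\t"))

rdd.persist(StorageLevel.MEMORY_ONLY_SER_2)   // serialized in memory, replicated to 2 executors
rdd.count()                                   // force the RDD to be cached

// "Size in Memory" and "Fraction Cached" are then read from the Storage tab of the Spark UI.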

I guess all of your memory available for caching is already used up, so it doesn't matter how many replicas you use.
I don't know how much memory is allocated to each executor; if you have allocated a lot, you can increase the value of spark.storage.memoryFraction, whose default value is 0.6.
If you just want to verify whether MEMORY_ONLY_SER_2 costs twice as much as MEMORY_ONLY_SER, you can use a small dataset.
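For example, something along these lines (a sketch only; sc.getRDDStorageInfo is a developer API, and the reported numbers depend on your Spark version and cluster layout):
import org.apache.spark.storage.StorageLevel

// Two small, identical RDDs cached with different storage levels.
val a = sc.parallelize(1 to 100000).map(i => (i, i.toString))
val b = sc.parallelize(1 to 100000).map(i => (i, i.toString))

a.persist(StorageLevel.MEMORY_ONLY_SER)
b.persist(StorageLevel.MEMORY_ONLY_SER_2)
a.count(); b.count()                           // materialize both caches

// Compare the reported in-memory sizes (also visible in the Spark UI Storage tab).
// Note: 2x replication needs at least two executors to actually place the second copy.
val info = sc.getRDDStorageInfo
val size1x = info.find(_.id == a.id).map(_.memSize).getOrElse(0L)
val size2x = info.find(_.id == b.id).map(_.memSize).getOrElse(0L)
println(s"1x replicated: $size1x bytes, 2x replicated: $size2x bytes")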

Related

BsonChunkPool and memory leak

I use MongoDB 3.6 + the .NET driver (MongoDb.Driver 2.10) to manage our data. Recently, we've noticed that our background services consume a lot of memory. After analyzing a dump, it turned out that there's a Mongo object called BsonChunkPool that always consumes around 0.5 GB of memory. Is this normal? I can't really find any valuable documentation about this type and what it actually does. Can anyone help?
The BsonChunkPool exists so that large memory buffers (chunks) can be reused, thus easing the amount of work the garbage collector has to do.
Initially the pool is empty, but as buffers are returned to the pool the pool is no longer empty. Whatever memory is held in the pool will not be garbage collected. This is by design. That memory is intended to be reused. This is not a memory leak.
The default configuration of the BsonChunkPool is such that it can hold a maximum of 8192 chunks of 64 KB each, so if the pool were to grow to its maximum size it would use 512 MB of memory, which is consistent with the roughly 0.5 GB you are observing.
If for some reason you don't want the BsonChunkPool to use that much memory, you can configure it differently by putting the following statement at the beginning of your application:
BsonChunkPool.Default = new BsonChunkPool(16, 64 * 1024); // e.g. max 16 chunks of 64KB each, for a total of 1MB
We haven't experimented with different values for chunk counts and sizes so if you do decide to change the default BsonChunkPool configuration you should do some benchmarking and verify that it doesn't have an adverse impact on your performance.
From jira: BsonChunkPool and memory leak

Spark 2.2 write partitionBy out of memory exception

I think anyone who has used Spark has run into OOM errors, and usually the source of the problem can be found easily. However, I am a bit perplexed by this one. Currently, I am trying to save partitioned by two different columns, using the partitionBy function. It looks something like below (made-up names):
df.write.partitionBy("account", "markers")
.mode(SaveMode.Overwrite)
.parquet(s"$location$org/$corrId/")
This particular dataframe has around 30 GB of data, 2,000 accounts, and 30 markers. The accounts and markers are close to evenly distributed. I have tried using 5 core nodes and 1 master/driver node of Amazon's r4.8xlarge (220+ GB of memory) with the default maximize-resource-allocation setting (which gives 2x cores for executors and around 165 GB of memory). I have also explicitly set the number of cores and executors to different values, but had the same issues. When looking at Ganglia, I don't see any excessive memory consumption.
So, it seems very likely that the root cause is the 2 GB ByteArrayBuffer issue that can happen on shuffles. I then tried repartitioning the dataframe with various numbers, such as 100, 500, 1000, 3000, 5000, and 10000, with no luck. The job occasionally logs a heap space error, but most of the time gives a node-lost error. When looking at the individual node logs, it just seems to fail suddenly with no indication of the problem (which isn't surprising with some OOM exceptions).
For dataframe writes, is there a trick with partitionBy to get past the heap space error?
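For reference, those repartition attempts looked roughly like this (a sketch; 1000 is just one of the counts tried, and df, location, org and corrId are the made-up names from above):
import org.apache.spark.sql.SaveMode

// One of the repartition counts tried (100 up to 10000), applied before the partitioned write.
df.repartition(1000)
  .write
  .partitionBy("account", "markers")
  .mode(SaveMode.Overwrite)
  .parquet(s"$location$org/$corrId/")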

Observable takes infinite amount of memory?

Since each Observable has a cache which can be traced back to the very first emitted value, it seems the amount of memory used to store this cache is not bounded.
I've tested this assumption with the following code:
Observable.interval(1.microsecond).map(_ => System.currentTimeMillis)
.subscribe(x => ())
And indeed, memory usage rose steadily during the whole 10-minute period while the app was running.
My question is whether it's possible to instantiate a special Observable without a cache, or maybe instruct it to cap its cache at some level.
Only a specific set of Observables (ReplaySubject, replay(), and GroupedObservable, for example) cache items; Observable.interval() does not.
What you are likely seeing here is the hundreds of thousands of boxed Long values. If you have a lot of RAM, GC might not kick in but simply grow the heap up to its maximum. Assuming you can really get a 1-microsecond timer, you have roughly a 24 MB/s allocation rate, or about 1.4 GB per minute. Left alone for 10 minutes, you'd likely see a sawtooth-like shape in memory usage.
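For context, the back-of-the-envelope arithmetic, assuming roughly 24 bytes per boxed Long (a typical 64-bit JVM estimate with compressed oops):
// Rough allocation-rate estimate for the interval in the question.
val eventsPerSecond = 1000000L                 // one event per microsecond
val bytesPerBoxedLong = 24L                    // assumption: 12-byte header + 8-byte value + padding
val bytesPerSecond = eventsPerSecond * bytesPerBoxedLong   // 24,000,000 B/s, i.e. ~24 MB/s
val bytesPerMinute = bytesPerSecond * 60L                  // ~1.44 GB allocated per minute
println(f"${bytesPerSecond / 1e6}%.0f MB/s, ${bytesPerMinute / 1e9}%.2f GB/min")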

Spark: shuffle spill to disk always happens even when there is enough memory

I ran a Spark job and noticed that there is always some amount of shuffle spill to disk, even though there should be enough memory for the shuffle. Does anyone know why this is happening? Does such spilling to disk affect performance? Thanks!

Spark out of memory

I have a folder with 150 GB of txt files (around 700 files, each around 200 MB on average).
I'm using Scala to process the files and calculate some aggregate statistics at the end. I see two possible approaches:
- manually loop through all the files, do the calculations per file, and merge the results at the end
- read the whole folder into one RDD, do all the operations on this single RDD, and let Spark handle the parallelization
I'm leaning towards the second approach as it seems cleaner (no need for parallelization-specific code), but I'm wondering if my scenario will fit the constraints imposed by my hardware and data. I have one workstation with 16 threads and 64 GB of RAM available (so the parallelization will be strictly local, between different processor cores). I might scale the infrastructure with more machines later on, but for now I would just like to focus on tuning the settings for this one-workstation scenario.
The code I'm using (a rough sketch follows the list):
- reads TSV files, and extracts meaningful data to (String, String, String) triplets
- afterwards some filtering, mapping and grouping is performed
- finally, the data is reduced and some aggregates are calculated
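Roughly, a sketch of that pipeline (the paths, field positions, and the aggregate are simplified placeholders, not the real job):
import org.apache.spark.{SparkConf, SparkContext}

// Simplified stand-in for the real job, running locally on 16 threads.
val conf = new SparkConf().setAppName("tsv-aggregates").setMaster("local[16]")
val sc = new SparkContext(conf)

val triplets = sc.textFile("/data/tsv-folder/*.txt")     // whole folder as one RDD
  .map(_.split("\t"))
  .filter(_.length >= 3)                                  // drop malformed rows
  .map(f => (f(0), f(1), f(2)))                           // (String, String, String) triplets

val aggregates = triplets
  .map { case (key, _, _) => (key, 1L) }                  // example grouping key
  .reduceByKey(_ + _)                                     // example aggregate: counts per key

aggregates.saveAsTextFile("/data/output")                 // written to disk, nothing cached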
I've been able to run this code with a single file (~200 MB of data); however, I get a java.lang.OutOfMemoryError: GC overhead limit exceeded
and/or a Java heap space error when adding more data (the application breaks with 6 GB of data, but I would like to use it with 150 GB of data).
I guess I would have to tune some parameters to make this work. I would appreciate any tips on how to approach this problem (how to debug the memory demands). I've tried increasing 'spark.executor.memory' and using a smaller number of cores (the rationale being that each core needs some heap space), but this didn't solve my problems.
I don't need the solution to be very fast (it can easily run for a few hours, even days, if needed). I'm also not caching any data, just saving it to the file system at the end. If you think it would be more feasible to just go with the manual parallelization approach, I could do that as well.
My team and I have successfully processed over 1 TB of CSV data on 5 machines with 32 GB of RAM each. It depends heavily on what kind of processing you're doing and how:
- Repartitioning an RDD requires additional computation that has overhead above your heap size. Instead, try loading the files with more parallelism by decreasing the split size via TextInputFormat.SPLIT_MINSIZE and TextInputFormat.SPLIT_MAXSIZE (if you're using TextInputFormat) to raise the level of parallelism.
- Try using mapPartitions instead of map so you can handle the computation inside a partition. If the computation uses a temporary variable or instance and you're still running out of memory, try lowering the amount of data per partition (by increasing the partition count).
- Increase the driver and executor memory limits using "spark.executor.memory" and "spark.driver.memory" in the Spark configuration before creating the SparkContext (see the sketch after this answer).
Note that Spark is a general-purpose cluster computing system, so it's inefficient (IMHO) to use Spark on a single machine.
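A rough sketch of those settings (keys and values are examples; the right numbers depend on your Hadoop/Spark versions and hardware):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("tsv-aggregates")
  .set("spark.executor.memory", "8g")      // example value; set before the SparkContext is created
  .set("spark.driver.memory", "8g")        // in client mode this is usually passed to spark-submit instead
val sc = new SparkContext(conf)

// Smaller input splits -> more partitions -> less data per task
// (the Hadoop key behind TextInputFormat.SPLIT_MAXSIZE).
sc.hadoopConfiguration.set("mapreduce.input.fileinputformat.split.maxsize", (32 * 1024 * 1024).toString)

// mapPartitions instead of map: reuse work per partition rather than per record.
val triplets = sc.textFile("/data/tsv-folder/*.txt")
  .mapPartitions { lines =>
    lines.map(_.split("\t")).collect { case Array(a, b, c, _*) => (a, b, c) }
  }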
To add another perspective based on code (as opposed to configuration): sometimes it's best to figure out at what stage your Spark application is exceeding memory, and to see if you can make changes to fix the problem. When I was learning Spark, I had a Python Spark application that crashed with OOM errors. The reason was that I was collecting all the results back to the master rather than letting the tasks save the output.
E.g.
for item in processed_data.collect():
    print(item)
failed with OOM errors. On the other hand,
processed_data.saveAsTextFile(output_dir)
worked fine.
Yes, the PySpark RDD/DataFrame collect() function is used to retrieve all the elements of the dataset (from all nodes) to the driver node. collect() should only be used on smaller datasets, usually after filter(), group(), count(), etc.; retrieving a larger dataset this way results in out-of-memory errors.