Spark: shuffle spill to disk always happens even when there is enough memory - Scala

I ran a Spark job and noticed that there is always some amount of shuffle spill to disk, even though there should be enough memory for the shuffle. Does anyone know why this happens? Would such spilling to disk affect performance? Thanks!

Related

what are the disadvantages of single contiguous memory allocation

I am not able to understand why the OS needs to partition memory. Can't we have a single memory block and allocate everything in it? There would be no fragmentation that way.

changing persistence level of an RDD

So I have a question about RDD persistence. Let's say I have an RDD that's persisted MEMORY_AND_DISK, and I know that I now have enough memory freed up that I could force the data on disk into memory. Is it possible to tell Spark to re-evaluate the available RDD memory and move that data?
Essentially I'm running into an issue where I persist an RDD and the entire RDD doesn't end up in memory until I've queried it several times, which makes the first few runs extremely slow. One thing I'm hoping to try is to initially set the RDD to MEMORY_AND_DISK and then force the disk-resident data back into memory, as in the sketch below.
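There is no built-in call that promotes blocks already written to disk back into memory in place, so this is only a sketch of one plausible workaround (assuming the RDD is reachable as `rdd`): unpersist it, re-persist it at the new storage level, and run a cheap action to force re-caching.

```scala
import org.apache.spark.storage.StorageLevel

// Assumes `rdd` is the already-persisted RDD from the question.
// Drop the existing MEMORY_AND_DISK blocks (blocking until they are released).
rdd.unpersist(blocking = true)

// Re-persist at the new level, then materialize once so the cache is rebuilt in memory.
rdd.persist(StorageLevel.MEMORY_ONLY)
rdd.count()  // any action works; count() just forces evaluation
```

Whether the rebuilt cache actually lands fully in memory still depends on the executors having enough free storage memory at that moment.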

Scala concurrency performance issues

I have a data mining app.
There is 1 Mining Actor which receives and processes a JSON payload containing 1000 objects. I put these into a list and, for each element, log the data by sending it to 1 Logger Actor, which writes the data into many files.
Processing the list sequentially, my app uses 700 MB and takes ~15 seconds at 20% CPU (4-core CPU). When I parallelize the list, my app uses 2 GB and roughly the same amount of time and CPU.
My questions are:
Since I parallelized the list and thus the computation, shouldn't the compute time decrease?
I think having only one Logger Actor is a bottleneck in this case: the computation may be faster, but the bottleneck hides the speedup. So if I add more Loggers to the pool, should the app time decrease?
Why does the memory usage jump to 2 GB? Does the JVM have to store the entire collection in memory in order to parallelize it? And once the computation is done, shouldn't the JVM garbage collector deal with it?
Without more details, any answer is a guess. However, even a guess might point you in the right direction.
Parallelized execution should decrease the running time, but your problem might lie elsewhere: for some reason your CPU is idling a lot even in single-threaded mode. You don't specify whether you read the input from disk or from the network, or where you write your output, but you do say that you write logs to a lot of files. Disk and network I/O might in your case take much longer than the data processing itself, so your process is most probably idle because it is waiting on that I/O. You should not expect any speedup from parallelizing a job that spends 80% of its time waiting on I/O, which is also why I suspect the loggers are not the bottleneck here.
The memory usage might jump if each of your threads allocates a lot of memory; in that case, the more threads you have, the more memory is required. I don't know what kind of collection you are parallelizing over, but most are stored completely in memory. And yes, the garbage collector will reclaim anything that doesn't require explicit freeing; only resources such as open files need to be released by you.
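For reference, here is a minimal, self-contained sketch of what "parallelizing the list" with Scala parallel collections can look like; the `process` and `log` methods are stand-ins invented for illustration, not taken from the question.

```scala
// On Scala 2.13+ parallel collections live in the separate scala-parallel-collections
// module; on 2.12 and earlier, .par is built in and this import is not needed.
import scala.collection.parallel.CollectionConverters._

object ParDemo {
  // Stand-in for the mining step: CPU-bound work on one element.
  def process(n: Int): String = (1 to 1000).map(_ * n).sum.toString

  // Stand-in for the single Logger actor: one shared, serialized sink.
  def log(line: String): Unit = synchronized { /* write to file(s) here */ }

  def main(args: Array[String]): Unit = {
    val items = (1 to 1000).toList

    // Sequential version: one core busy, modest memory.
    items.foreach(i => log(process(i)))

    // Parallel version: .par runs process() on the default ForkJoinPool.
    // Each worker thread allocates its own intermediate objects (one reason peak
    // memory can grow), and every result still funnels through the single log() sink.
    items.par.foreach(i => log(process(i)))
  }
}
```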
How many threads for reading and writing to the hard disk?
The memory increases because I send messages faster than the Logger can write them, so the mailbox balloons in size until the Logger has processed the messages and the GC kicks in.
I solved this by writing state to a protocol buffer file. Before doing any writes, I compare with the protobuf file, because reads are significantly cheaper than writes. My resource usage is now 10% CPU for 2 seconds and less than 400 MB of RAM.
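Roughly, the pattern is "read and compare before writing", so that unchanged state never costs a write. A minimal sketch using only the standard library; the `state.pb` path and the `serializeState` function (which would be the protobuf encoding step) are assumptions for illustration.

```scala
import java.nio.file.{Files, Paths}

object StateWriter {
  private val statePath = Paths.get("state.pb") // hypothetical path to the protobuf file

  // serializeState stands in for the protobuf encoding of the current state.
  def writeIfChanged(serializeState: () => Array[Byte]): Boolean = {
    val fresh = serializeState()
    val unchanged =
      Files.exists(statePath) &&
        java.util.Arrays.equals(Files.readAllBytes(statePath), fresh)

    if (unchanged) false                          // cheap read, no write needed
    else { Files.write(statePath, fresh); true }  // pay the write cost only when state changed
  }
}
```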

Spark: MEMORY_ONLY_SER_2 - strange memory behavior

I am storing an RDD with storageLevel = MEMORY_ONLY_SER_2 so that if one executor is lost, there is another copy of the data.
Then I found something strange:
The "Size in Memory" of the 2x-replicated RDD seems to be the same as when I used storageLevel = MEMORY_ONLY_SER (1x-replicated).
"Fraction Cached" couldn't reach 100%, even though I still have a lot of storage memory left.
Am I understanding storageLevel = MEMORY_ONLY_SER_2 correctly? Why doesn't the 2x-replicated RDD have twice the "Size in Memory" compared with the 1x-replicated one? Thanks!
My guess is that all of the memory available for caching is already used, so it doesn't matter how many replicas you requested.
I don't know how much memory is allocated to each executor; if you allocated a lot, you can increase the value of spark.storage.memoryFraction (the default value is 0.6).
If you just want to verify whether MEMORY_ONLY_SER_2 costs twice as much as MEMORY_ONLY_SER, you can use a small dataset, as sketched below.
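A minimal sketch of that verification (the dataset, names, and memory settings below are illustrative assumptions): persist the same small RDD at both levels, then compare the "Size in Memory" shown on the Storage tab of the Spark UI.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

val conf = new SparkConf()
  .setAppName("replication-size-check")
  .set("spark.storage.memoryFraction", "0.6") // legacy setting mentioned above; 0.6 is the default

val sc = new SparkContext(conf)

// A dataset small enough that both copies comfortably fit in storage memory.
val small = sc.parallelize(1 to 100000).map(i => (i, i.toString * 10))

small.persist(StorageLevel.MEMORY_ONLY_SER).setName("ser_1x").count()

val small2 = small.map(identity)
small2.persist(StorageLevel.MEMORY_ONLY_SER_2).setName("ser_2x").count()

// Compare "Size in Memory" for ser_1x vs ser_2x on the Storage tab: with enough free
// storage memory (and at least two executors to hold the replicas), ser_2x should be
// roughly twice ser_1x.
```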

How to distribute data to worker nodes

I have a general question regarding Apache Spark and how to distribute data from the driver to the executors.
I load a file into a collection with scala.io.Source and then parallelize the collection with SparkContext.parallelize. Here the issue begins: when I don't specify the number of partitions, the number of workers is used as the partition count, the tasks are sent to the nodes, and I get a warning that the recommended task size is 100 kB while my task size is e.g. 15 MB (a 60 MB file / 4 nodes). The computation then ends with an OutOfMemory exception on the nodes. When I parallelize into more partitions (e.g. 600 partitions, to get roughly 100 kB per task), the computations are performed successfully on the workers, but the OutOfMemory exception is raised in the driver after some time. In this case I can open the Spark UI and watch the driver's memory being slowly consumed during the computation. It looks like the driver holds everything in memory and doesn't store the intermediate results on disk.
My questions are:
Into how many partitions should I divide the RDD?
How to distribute data 'the right way'?
How to prevent memory exceptions?
Is there a way to tell the driver/worker to swap? Is it a configuration option, or does it have to be done 'manually' in the program code?
Thanks
How to distribute data 'the right way'?
You will need a distributed file system, such as HDFS, to host your file. That way, each worker can read a piece of the file in parallel, which will deliver better performance than serializing the data on the driver and shipping it out to the workers.
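A minimal sketch of that approach; the HDFS path and the `parseLine` function are placeholder assumptions.

```scala
// Let the cluster read the file instead of parallelizing a driver-side collection.
// Each HDFS block becomes (at least) one partition, read directly by the executors.
val lines = sc.textFile("hdfs:///data/input.txt") // placeholder path
val records = lines.map(parseLine)                // parseLine is a hypothetical parser

// For contrast, the original approach ships the whole collection from the driver:
// val records = sc.parallelize(Source.fromFile("input.txt").getLines().toList, 600)
```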
How to prevent memory exceptions?
Hard to say without looking at the code. Most operations will spill to disk. If I had to guess, I'd say you are using groupByKey?
Into how many partitions should I divide the RDD?
I think the rule of thumb (for optimal parallelism) is 2-4x the number of cores available for your job. As you have done, you can trade some time for lower memory usage; see the sketch below.
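For example, a sketch of sizing partitions from the core count; the executor and core numbers are illustrative assumptions, not values from the question.

```scala
// Rule of thumb: roughly 2-4x the total number of cores; 3x is used here arbitrarily.
val executorCount = 4              // assumption: 4 worker nodes
val coresPerExecutor = 4           // assumption: 4 cores each
val targetPartitions = 3 * executorCount * coresPerExecutor

val data = (1 to 1000000).toList   // placeholder driver-side collection
val rdd = sc.parallelize(data, numSlices = targetPartitions)

// For an RDD that already exists:
// val repartitioned = existingRdd.repartition(targetPartitions)
```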
Is there a way to tell the driver/worker to swap? Is it a configuration option, or does it have to be done 'manually' in the program code?
Shuffle spill behavior is controlled by the property spark.shuffle.spill, which is true (= spill to disk) by default.
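For completeness, a sketch of how that property would be set; note that in newer Spark releases the flag is deprecated and effectively ignored (spilling is always enabled when needed), so treat this as applying to the older versions where it is still honored.

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("spill-config-example")
  .set("spark.shuffle.spill", "false") // keep shuffle data in memory only (risky: can OOM)
```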